Journal articles on the topic 'Natural user interfaces'

Consult the top 50 journal articles for your research on the topic 'Natural user interfaces.'

1

Marsh, William E., Jonathan W. Kelly, Julie Dickerson, and James H. Oliver. "Fuzzy Navigation Engine: Mitigating the Cognitive Demands of Semi-Natural Locomotion." Presence: Teleoperators and Virtual Environments 23, no. 3 (October 1, 2014): 300–319. http://dx.doi.org/10.1162/pres_a_00195.

Abstract:
Many interfaces exist for locomotion in virtual reality, although they are rarely considered fully natural. Past research has found that using such interfaces places cognitive demands on the user, with unnatural actions and concurrent tasks competing for finite cognitive resources. Notably, using semi-natural interfaces leads to poor performance on concurrent tasks requiring spatial working memory. This paper presents an adaptive system designed to track a user's concurrent cognitive task load and adjust interface parameters accordingly, varying the extent to which movement is fully natural. A fuzzy inference system is described and the results of an initial validation study are presented. Users of this adaptive interface demonstrated better performance than users of a baseline interface on several movement metrics, indicating that the adaptive interface helped users manage the demands of concurrent spatial tasks in a virtual environment. However, participants experienced some unexpected difficulties when faced with a concurrent verbal task.
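
As a rough sketch of the idea behind such an adaptive system (not the authors' actual engine), the Python fragment below maps a normalized estimate of concurrent cognitive load to a gain controlling how far locomotion may deviate from fully natural movement; the membership functions and rule outputs are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def naturalness_gain(task_load):
    """Map a cognitive-load estimate in [0, 1] to a locomotion gain.
    High load -> keep movement natural (gain near 1.0) so the interface
    does not compete with a concurrent spatial task for resources."""
    low = tri(task_load, -0.5, 0.0, 0.5)
    med = tri(task_load, 0.0, 0.5, 1.0)
    high = tri(task_load, 0.5, 1.0, 1.5)
    # Rule base (illustrative): low load -> aggressive semi-natural
    # scaling (1.5), medium -> mild scaling (1.2), high -> natural (1.0).
    return (low * 1.5 + med * 1.2 + high * 1.0) / (low + med + high)

print(naturalness_gain(0.2))  # ~1.38: semi-natural shortcuts allowed
print(naturalness_gain(0.9))  # ~1.04: close to fully natural movement
```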
2

Majhadi, Khadija, and Mustapha Machkour. "The history and recent advances of Natural Language Interfaces for Databases Querying." E3S Web of Conferences 229 (2021): 01039. http://dx.doi.org/10.1051/e3sconf/202122901039.

Abstract:
Databases have always been the most important topic in the study of information systems, and an indispensable tool in all information management systems. However, the extraction of information stored in these databases is generally carried out using queries expressed in a computer language, such as SQL (Structured Query Language). This generally has the effect of limiting the number of potential users, in particular non-expert database users who must know the database structure to write such requests. One solution to this problem is to use a Natural Language Interface (NLI) to communicate with the database, which is the easiest way to get information. Thus, Natural Language Interfaces for Databases (NLIDB), which translate a user's query given in Natural Language (NL) into the corresponding query in Database Query Language (DBQL), have become a real need and an ambitious goal. This article provides an overview of the state of the art of Natural Language Interfaces as well as their architecture, and summarizes the main recent advances in the task of Natural Language Interfaces for databases.
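
For readers new to the NLIDB task, a deliberately naive Python sketch of the core translation step follows; the question template and table are hypothetical, and real systems rely on parsing, schema linking, and semantic analysis rather than a single pattern.

```python
import re

# One toy question template; real NLIDBs handle far richer language.
PATTERN = re.compile(
    r"(?:show|list|get)\s+(?:all\s+)?(\w+)\s+(?:whose|with)\s+(\w+)\s+(?:is|=)\s+(\w+)",
    re.IGNORECASE,
)

def nl_to_sql(question: str) -> str:
    """Translate a natural-language request into SQL (illustration only;
    no escaping or schema validation is performed)."""
    m = PATTERN.match(question.strip())
    if not m:
        raise ValueError("question not understood")
    table, column, value = m.groups()
    return f"SELECT * FROM {table} WHERE {column} = '{value}';"

print(nl_to_sql("List employees whose department is sales"))
# -> SELECT * FROM employees WHERE department = 'sales';
```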
3

Pan, Shimei, Michelle Zhou, Keith Houck, and Peter Kissa. "Natural Language Aided Visual Query Building for Complex Data Access." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 2 (July 11, 2010): 1821–26. http://dx.doi.org/10.1609/aaai.v24i2.18819.

Abstract:
Over the past decades, there have been significant efforts on developing robust and easy-to-use query interfaces to databases. So far, the typical query interfaces are GUI-based visual query interfaces. Visual query interfaces, however, have limitations, especially when they are used for accessing large and complex datasets. Therefore, we are developing a novel query interface where users can use natural language expressions to help author visual queries. Our work enhances the usability of a visual query interface by directly addressing the "knowledge gap" issue in visual query interfaces. We have applied our work in several real-world applications. Our preliminary evaluation demonstrates the effectiveness of our approach.
4

Macaranas, A., A. N. Antle, and B. E. Riecke. "What is Intuitive Interaction? Balancing Users' Performance and Satisfaction with Natural User Interfaces." Interacting with Computers 27, no. 3 (February 12, 2015): 357–70. http://dx.doi.org/10.1093/iwc/iwv003.

5

Pollard, Kimberly A., Stephanie M. Lukin, Matthew Marge, Ashley Foots, and Susan G. Hill. "How We Talk with Robots: Eliciting Minimally-Constrained Speech to Build Natural Language Interfaces and Capabilities." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 160–64. http://dx.doi.org/10.1177/1541931218621037.

Abstract:
Industry, military, and academia are showing increasing interest in collaborative human-robot teaming in a variety of task contexts. Designing effective user interfaces for human-robot interaction is an ongoing challenge, and a variety of single- and multiple-modality interfaces have been explored. Our goal is to develop a bi-directional natural language interface for remote human-robot collaboration in physically situated tasks. When combined with a visual interface and audio cueing, we intend for the natural language interface to provide a naturalistic user experience that requires little training. Building the language portion of this interface requires first understanding how potential users would speak to the robot. In this paper, we describe our elicitation of minimally-constrained robot-directed language, observations about the users' language behavior, and future directions for constructing an automated robotic system that can accommodate these language needs.
6

Wojciechowski, A. "Hand’s poses recognition as a mean of communication within natural user interfaces." Bulletin of the Polish Academy of Sciences: Technical Sciences 60, no. 2 (October 1, 2012): 331–36. http://dx.doi.org/10.2478/v10175-012-0044-3.

Abstract:
Natural user interface (NUI) is a successor of the command line interfaces (CLI) and graphical user interfaces (GUI) so well known to computer users. The new natural approach is based on extensive tracking of human behavior, where hand tracking and gesture recognition seem to play the main roles in communication. This paper reviews common approaches to hand-feature tracking and proposes a very effective contour-based hand pose recognition method that can be used directly in a hand-based natural user interface. Possible applications range from interaction with medical systems, through games, to communication support for impaired people.
7

Marsh, William E., Jonathan W. Kelly, Veronica J. Dark, and James H. Oliver. "Cognitive Demands of Semi-Natural Virtual Locomotion." Presence: Teleoperators and Virtual Environments 22, no. 3 (August 1, 2013): 216–34. http://dx.doi.org/10.1162/pres_a_00152.

Abstract:
There is currently no fully natural, general-purpose locomotion interface. Instead, interfaces such as gamepads or treadmills are required to explore large virtual environments (VEs). Furthermore, sensory feedback that would normally be used in real-world movement is often restricted in VR due to constraints such as reduced field of view (FOV). Accommodating these limitations with locomotion interfaces afforded by most virtual reality (VR) systems may induce cognitive demands on the user that are unrelated to the primary task to be performed in the VE. Users of VR systems often have many competing task demands, and additional cognitive demands during locomotion must compete for finite resources. Two studies were previously reported investigating the working memory demands imposed by semi-natural locomotion interfaces (Study 1) and reduced sensory feedback (Study 2). This paper expands on the previously reported results and adds discussion linking the two studies. The results indicated that locomotion with a less natural interface increases spatial working memory demands, and that locomotion with a lower FOV increases general attentional demands. These findings are discussed in terms of their practical implications for selection of locomotion interfaces when designing VEs.
8

García, Alberto, J. Ernesto Solanes, Adolfo Muñoz, Luis Gracia, and Josep Tornero. "Augmented Reality-Based Interface for Bimanual Robot Teleoperation." Applied Sciences 12, no. 9 (April 26, 2022): 4379. http://dx.doi.org/10.3390/app12094379.

Abstract:
Teleoperation of bimanual robots is being used to carry out complex tasks such as surgeries in medicine. Despite the technological advances, current interfaces are not natural to users, who spend long periods of time learning how to use them. In order to mitigate this issue, this work proposes a novel augmented reality-based interface for teleoperating bimanual robots. The proposed interface is more natural to the user and shortens the interface learning process. A full description of the proposed interface is given in the paper, and its effectiveness is shown experimentally using two industrial robot manipulators. Moreover, the drawbacks and limitations of the classic joystick-based teleoperation interface are analyzed in order to highlight the benefits of the proposed augmented reality-based approach.
9

Coury, Bruce G., John Sadowsky, Paul R. Schuster, Michael Kurnow, Marcus J. Huber, and Edmund H. Durfee. "Reducing the Interaction Burden of Complex Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 1 (October 1997): 335–39. http://dx.doi.org/10.1177/107118139704100175.

Abstract:
Reducing the burden of interacting with complex systems has been a long-standing goal of user interface design. In our approach to this problem, we have been developing user interfaces that allow users to interact with complex systems in a natural way and in high-level, task-related terms. These capabilities help users concentrate on making important decisions without the distractions of manipulating systems and user interfaces. To attain such a goal, our approach uses a unique combination of multi-modal interaction and interaction planning. In this paper, we motivate the basis for our approach, describe the user interface technologies we have developed, and briefly discuss the relevant research and development issues.
10

Park, Sankyu, Key-Sun Choi, and K. H. (Kane) Kim. "A Framework for Multi-Agent Systems with Multi-Modal User Interfaces in Distributed Computing Environments." International Journal of Software Engineering and Knowledge Engineering 07, no. 03 (September 1997): 351–69. http://dx.doi.org/10.1142/s0218194097000217.

Abstract:
In current multi-agent systems, the user typically interacts with a single agent at a time through relatively inflexible and modestly intelligent interfaces. As a consequence, these systems force users to submit only simplistic requests and suffer from problems such as the low-level nature of the system services offered to users, the weak reusability of agents, and the weak extensibility of the systems. In this paper, a framework for multi-agent systems called the open agent architecture (OAA), which reduces such problems, is discussed. The OAA is designed to handle complex requests that involve multiple agents. In some cases of complex requests from users, the components of the requests do not directly correspond to the capabilities of various application agents, and therefore the system is required to translate the user's model of the task into the system's model before apportioning subtasks to the agents. To maximize users' efficiency in generating this type of complex request, the OAA offers an intelligent multi-modal user interface agent which supports a natural language interface with a mix of spoken language, handwriting, and gesture. The effectiveness of the OAA environment, including the intelligent distributed multi-modal interface, has been observed in our development of several practical multi-agent systems.
11

Ricci, Katie, and John Hodak. "Current Issues and Research Related to the Development and Delivery of Interactive Electronic Technical Manuals." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 5 (September 2002): 632–35. http://dx.doi.org/10.1177/154193120204600507.

Abstract:
For very practical reasons, technical publications and tactical information for military systems are now delivered with, or are being converted to, electronic format. While Interactive Electronic Technical Manuals (IETMs) have been available for quite some time, many research issues remain. Among these is the need to provide alternative search technologies that quickly and easily allow system users to navigate through large amounts of technical documentation to find required information. Further, natural language interfaces can provide users with a hands-free, voice-activated interface that may significantly improve a user's ability to perform their job. Finally, the use of intelligent diagnostics and tutoring can allow users access to expertise and learning opportunities. The objective of this panel is to provide an overview and discussion of issues related to the use of IETMs and current research related to IETMs for performance support and training.
12

Shavitt, Carmel, Anastasia Kuzminykh, Itay Ridel, and Jessica R. Cauchard. "Naturally Together: A Systematic Approach for Multi-User Interaction With Natural Interfaces." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–31. http://dx.doi.org/10.1145/3476090.

Abstract:
New technology is moving towards intuitive and natural interaction techniques that are increasingly embedded in human spaces (e.g., home and office environments) and aim to support multiple users, yet current interfaces do not fully support shared use. Imagine that you have a multi-user device: should it act differently across situations, people, and group settings? Current multi-user interfaces address each user as an individual who works independently from others, and there is a lack of understanding of the mechanisms that impact shared usage of these products. We therefore linked environmental (external) and user-centered (internal) factors to the way users interact with multi-user devices. We analyzed 124 papers that involve multi-user interfaces and created a classification model of 8 factors. Both the model and the factors were validated by a large-scale online study. Our model defines the factors affecting multi-user usage of a single device and leads to a decision on the most important ones in different situations. This paper is the first to identify these factors and to create a set of practical guidelines for designing multi-user interfaces.
13

Bailey, Shannon K. T., Daphne E. Whitmer, Bradford L. Schroeder, and Valerie K. Sims. "Development of Gesture-based Commands for Natural User Interfaces." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1466–67. http://dx.doi.org/10.1177/1541931213601851.

Abstract:
Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user's intent and the computer action, imposing both limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more "natural" because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine what interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces. The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants (n = 17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants (n = 188) to rate the naturalness of the gestures in the current study. Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures were also included that were created arbitrarily to act as a comparison to the natural gestures. By analyzing the ratings on a continuum from "Completely Arbitrary" to "Completely Natural," we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either "Mostly Natural" or "Completely Natural" by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as "natural" by participants for the action of "selecting an object." All of the gestures that were created arbitrarily were interpreted as "arbitrary" when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem "natural."
14

Shih, Heloisa Martins, and Ravindra S. Goonetilleke. "Effectiveness of Menu Orientation in Chinese." Human Factors: The Journal of the Human Factors and Ergonomics Society 40, no. 4 (December 1998): 569–76. http://dx.doi.org/10.1518/001872098779649373.

Abstract:
Graphical user interface guidelines have been developed predominantly in English-speaking countries, but aspects related to culture (e.g., local metaphors, symbols, color, and flow) are not universal and have received little or no attention. Even though the reading and writing flow of languages such as English, Japanese, and Chinese differ widely, most software interfaces do not take account of this. In this paper we investigate the effectiveness of menu flow or menu orientation in both the Chinese and English languages for Chinese users. The experimental results indicate that for the Chinese population, a horizontal menu in either language is more effective than the vertical orientation. Thus item differentiation in menus is best performed when the natural flow of the user's native language is broken through a transformation process similar to a matrix transpose. Even though we did not investigate search strategies explicitly, we hypothesize that the primary reason for the difference lies in the scanning patterns adopted by the Chinese population in search tasks so that there is no mismatch in the reading metaphors. Applications of this research include the design of culturally and linguistically adapted human-computer interfaces for Chinese users.
15

Steinicke, Frank. "Natural Locomotion Interfaces – With a Little Bit of Magic!" Journal on Interactive Systems 2, no. 2 (November 16, 2011): 1. http://dx.doi.org/10.5753/jis.2011.588.

Abstract:
The mission of the Immersive Media Group (IMG) is to develop virtual locomotion user interfaces which allow humans to experience arbitrary 3D environments by means of the natural walking metaphor. Traveling through immersive virtual environments (IVEs) by means of real walking is an important activity to increase the naturalness of virtual reality (VR)-based interaction. However, the size of the virtual world often differs from the size of the tracked lab space, so that a straightforward implementation of omni-directional and unlimited walking is not possible. Redirected walking is one concept to address this issue by inconspicuously guiding the user on a physical path that may differ from the path the user perceives in the virtual world. For example, intentionally rotating the virtual camera to one side causes the user to unknowingly compensate by walking on a circular arc in the opposite direction. In the scope of the LOCUI project, which is funded by the German Research Foundation, we analyze how gains of locomotor speed, turns, and curvatures can gradually alter the physical trajectory with respect to the path perceived in the virtual world without the user observing any discrepancy. Thus, users can be guided in order to avoid collisions with physical obstacles (e.g., lab walls), or they can be guided to arbitrary locations in the physical space. For example, if the user approaches a virtual object, she can be guided to a real proxy prop that is registered to and aligned with its virtual counterpart. Hence, the user can interact with a virtual object by touching the corresponding real-world proxy prop, which provides haptic feedback. Based on the results of psychophysical experiments, we plan to develop such a user interface, with which it becomes possible to intuitively interact with any virtual object by touching registered real-world props.
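
The core trick of redirected walking fits in a few lines. The sketch below applies a rotation gain to head-yaw increments and converts an injected per-meter rotation into the radius of the physical arc that a visually straight virtual walk produces; the numeric values are illustrative, not thresholds established by the LOCUI project.

```python
import math

def redirected_yaw(physical_yaw_delta: float, gain: float = 1.1) -> float:
    """Map a physical head-yaw increment (radians) to a virtual one.
    With gain > 1, users unknowingly turn less in the real world
    than they perceive in the virtual world."""
    return physical_yaw_delta * gain

def physical_arc_radius(injected_rotation_per_meter: float) -> float:
    """Radius (m) of the real-world arc walked when a small camera
    rotation (radians) is injected per meter of straight virtual walking."""
    return 1.0 / injected_rotation_per_meter

print(math.degrees(redirected_yaw(math.radians(90.0))))  # 99 virtual degrees
print(physical_arc_radius(math.radians(2.6)))            # ~22 m arc radius
```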
16

Berdasco, López, Diaz, Quesada, and Guerrero. "User Experience Comparison of Intelligent Personal Assistants: Alexa, Google Assistant, Siri and Cortana." Proceedings 31, no. 1 (November 20, 2019): 51. http://dx.doi.org/10.3390/proceedings2019031051.

Abstract:
Natural user interfaces are becoming popular. One of the most common natural user interfaces nowadays are voice activated interfaces, particularly smart personal assistants such as Google Assistant, Alexa, Cortana, and Siri. This paper presents the results of an evaluation of these four smart personal assistants in two dimensions: the correctness of their answers and how natural the responses feel to users. Ninety-two participants conducted the evaluation. Results show that Alexa and Google Assistant are significantly better than Siri and Cortana. However, there is no statistically significant difference between Alexa and Google Assistant.
17

Norouzi, Nahal, Luke Bölling, Gerd Bruder, and Greg Welch. "Augmented rotations in virtual reality for users with a reduced range of head movement." Journal of Rehabilitation and Assistive Technologies Engineering 6 (January 2019): 205566831984130. http://dx.doi.org/10.1177/2055668319841309.

Abstract:
Introduction: A large body of research in the field of virtual reality is focused on making user interfaces more natural and intuitive by leveraging natural body movements to explore a virtual environment. For example, head-tracked user interfaces allow users to naturally look around a virtual space by moving their head. However, such approaches may not be appropriate for users with temporary or permanent limitations of their head movement. Methods: In this paper, we present techniques that allow these users to get virtual benefits from a reduced range of physical movements. Specifically, we describe two techniques that augment virtual rotations relative to physical movement thresholds. Results: We describe how each of the two techniques can be implemented with either a head tracker or an eye tracker, e.g. in cases when no physical head rotations are possible. Conclusions: We discuss their differences and limitations and we provide guidelines for the practical use of such augmented user interfaces.
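
A minimal sketch of the threshold-based amplification the paper describes, assuming yaw angles in degrees; in practice the threshold and amplification factor would be tuned to each user's comfortable range of head movement.

```python
def augmented_yaw(head_yaw: float, threshold: float, amplification: float) -> float:
    """Amplify virtual rotation once physical rotation exceeds a
    user-specific threshold, so a reduced range of head (or eye)
    movement can still cover the full virtual surround."""
    if abs(head_yaw) <= threshold:
        return head_yaw  # one-to-one mapping inside the comfortable range
    sign = 1.0 if head_yaw >= 0 else -1.0
    extra = abs(head_yaw) - threshold
    return sign * (threshold + extra * amplification)

# A user who can only turn their head 30 degrees still reaches
# 120 virtual degrees with these (illustrative) settings:
print(augmented_yaw(30.0, threshold=10.0, amplification=5.5))
```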
18

Trappey, Amy J. C., Charles V. Trappey, Min-Hua Chao, Nan-Jun Hong, and Chun-Ting Wu. "A VR-Enabled Chatbot Supporting Design and Manufacturing of Large and Complex Power Transformers." Electronics 11, no. 1 (December 28, 2021): 87. http://dx.doi.org/10.3390/electronics11010087.

Abstract:
Virtual reality (VR) immersive technology allows users to experience enhanced reality using human–computer interfaces (HCI). Many systems have implemented VR with improved HCI to provide strategic market advantages for industry and engineering applications. An intelligent chatbot is a conversational system capable of natural language communication, allowing users to ask questions and receive answers online to enhance customer services. This research develops and implements a system framework for a VR-enabled mass-customization chatbot for large industrial power transformers. The research collected 1272 frequently asked questions (FAQs) from a power transformer manufacturer's knowledge base, which is used for question matching and answer retrieval. More than 1.2 million Wikipedia engineering pages were used to train a word-embedding model for natural language understanding of question intent. The complex engineering questions and answers are integrated with an immersive VR human-computer interface. The system enables users to ask questions and receive explicit and detailed answers combined with 3D immersive images of industrial-sized power transformer assemblies. The user interfaces can be projected into the VR headset or onto a computer screen and manipulated with a controller. The unique immersive VR consultation chatbot system is designed to support real-time consultation for the design and manufacturing of complex power transformers.
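
As a toy illustration of the FAQ-retrieval step such a chatbot performs, the sketch below ranks FAQ entries by the cosine similarity of averaged word vectors; random vectors stand in for an embedding model trained on engineering text, and the FAQ strings are invented.

```python
import re
import numpy as np

rng = np.random.default_rng(0)
VOCAB: dict = {}  # word -> fixed random 50-d vector (embedding stand-in)

def embed(sentence: str) -> np.ndarray:
    """Average the word vectors of a sentence."""
    words = re.findall(r"[a-z]+", sentence.lower())
    vecs = [VOCAB.setdefault(w, rng.normal(size=50)) for w in words]
    return np.mean(vecs, axis=0)

def best_faq(question: str, faqs: list) -> str:
    """Return the FAQ whose embedding is closest to the question's."""
    q = embed(question)
    cos = lambda a, b: float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(faqs, key=lambda f: cos(q, embed(f)))

faqs = ["What is the rated voltage of this unit?",
        "Which cooling class does this transformer use?"]
# Shared content words ("rated", "voltage") pull the first FAQ closest.
print(best_faq("tell me the rated voltage", faqs))
```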
19

Valério, Francisco Albernaz Machado, Tatiane Gomes Guimarães, Raquel Oliveira Prates, and Heloisa Candello. "Chatbots Explain Themselves: Designers' Strategies for Conveying Chatbot Features to Users." Journal on Interactive Systems 9, no. 3 (December 5, 2018): 1. http://dx.doi.org/10.5753/jis.2018.710.

Abstract:
Recently, text-based chatbots have risen in popularity, possibly due to new APIs for online social networks and messenger services, and development platforms that help with the necessary Natural Language Processing. But as chatbots use natural language as their interface, their users may struggle to discover which sentences the chatbots will understand and what they can do. It is therefore important to support their designers in deciding how to convey the chatbots' features, as this might determine whether the user will continue chatting or not. In this work, our goal is to analyze the communicative strategies used by popular chatbots when conveying their features to users. We used the Semiotic Inspection Method (SIM) to that end, and as a result we were able to identify a series of strategies used by the analyzed chatbots for conveying their features to users. We then consolidated these findings by analyzing other chatbots. Finally, we discuss the use of these strategies, as well as challenges for designing such interfaces and limitations of using SIM on them.
20

Bukhari, Syed Ahmad Chan, Hafsa Shareef Dar, M. Ikramullah Lali, Fazel Keshtkar, Khalid Mahmood Malik, and Seifedine Kadry. "Frameworks for Querying Databases Using Natural Language." International Journal of Data Warehousing and Mining 17, no. 2 (April 2021): 21–38. http://dx.doi.org/10.4018/ijdwm.2021040102.

Abstract:
A natural language interface is useful for a wide range of users to retrieve their desired information from databases without requiring prior knowledge of a database query language such as SQL. The advent of user-friendly technologies, such as speech-enabled interfaces, has revived the use of natural language technology for querying databases; however, the most recent comprehensive survey of the state of the art was published back in 2013 and does not encompass several advancements. In this paper, the authors have reviewed 47 frameworks developed during the last decade and categorized them into SQL- and NoSQL-based frameworks. Furthermore, the analysis of these frameworks is presented on the basis of criteria such as supported language, scheme of heuristic rules, interoperability support, scope of the dataset, and overall performance score. The study concludes that the majority of frameworks focus on translating English-language queries into SQL.
21

Nanjappan, Vijayakumar, Rongkai Shi, Hai-Ning Liang, Haoru Xiao, Kim King-Tong Lau, and Khalad Hasan. "Design of Interactions for Handheld Augmented Reality Devices Using Wearable Smart Textiles: Findings from a User Elicitation Study." Applied Sciences 9, no. 15 (August 5, 2019): 3177. http://dx.doi.org/10.3390/app9153177.

Abstract:
Advanced developments in handheld devices' interactive 3D graphics capabilities, processing power, and cloud computing have provided great potential for handheld augmented reality (HAR) applications, which allow users to access digital information anytime, anywhere. Nevertheless, existing interaction methods are still confined to the touch display, device camera, and built-in sensors of these handheld devices, which suffer from obtrusive interactions with AR content. Wearable fabric-based interfaces promote subtle, natural, and eyes-free interactions, which are needed when performing interactions in dynamic environments. Prior studies explored the possibilities of using fabric-based wearable interfaces for head-mounted AR display (HMD) devices. The interface metaphors of HMD AR devices are inadequate for handheld AR devices, as a typical HAR application requires users to use only one hand to perform interactions. In this paper, we aim to investigate the use of a fabric-based wearable device as an alternative interface option for performing interactions with HAR applications. We elicited user-preferred gestures which are socially acceptable and comfortable to use for HAR devices. We also derived an interaction vocabulary of the wrist and thumb-to-index touch gestures, and present broader design guidelines for fabric-based wearable interfaces for handheld augmented reality applications.
22

Procházka, David, Jaromír Landa, Tomáš Koubek, and Vít Ondroušek. "Mainstreaming gesture based interfaces." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 61, no. 7 (2013): 2655–60. http://dx.doi.org/10.11118/actaun201361072655.

Abstract:
Gestures are a common way of interaction with mobile devices. They emerged especially with the iPhone production. Gestures in currently used devices are usually based on the original gestures presented by Apple in its iOS (iPhone Operating System). Therefore, there is wide agreement on mobile gesture design. In recent years, experiments with gesture usage have also appeared in other areas of consumer electronics and computers, for example televisions and large projections. These gestures can be described as spatial or 3D gestures: they are connected with a natural 3D environment rather than with a flat 2D screen. Nevertheless, it is hard to find a comparable design agreement for spatial gestures. Various projects are based on completely different gesture sets. This situation is confusing for users and slows down spatial gesture adoption. This paper is focused on the standardization of spatial gestures. A review of projects focused on spatial gesture usage is provided in the first part, with the main emphasis placed on usability. On the basis of our analysis, we argue that usability is the key issue enabling wide adoption. Mobile gestures emerged easily because the iPhone gestures were natural; therefore, it was not necessary to learn them. The design and implementation of our presentation software, which is controlled by gestures, is outlined in the second part of the paper, together with usability testing results. We tested our application on a group of users not instructed in the implemented gesture design, and compared these results with those obtained with our original implementation. The evaluation can be used as a basis for the implementation of similar projects.
23

Chesebrough, Sam, Babak Hejrati, and John Hollerbach. "The Treadport: Natural Gait on a Treadmill." Human Factors: The Journal of the Human Factors and Ergonomics Society 61, no. 5 (January 17, 2019): 736–48. http://dx.doi.org/10.1177/0018720818819951.

Abstract:
Objective: To evaluate the differences between walking on an advanced robotic locomotion interface called the Treadport and walking overground with healthy subjects. Background: Previous studies have compared treadmill-based and overground walking in terms of gait parameters. The Treadport's unique features, including self-selected speed capability, large belt, kinesthetic force feedback, and virtual reality environment, distinguish it from other locomotion interfaces and could provide a natural walking experience for users. Method: Young, healthy subjects (N = 17) walked 10 meters 10 times each in both the overground and Treadport environments. Walking conditions were compared using spatiotemporal and kinematic parameters. In addition, electromyographic data were collected for five of the 17 subjects to compare muscle activity between the two conditions. Results: Gait on the Treadport was found to have no significant differences (p > .05) from overground walking in terms of hip and knee joint angles, cadence, stride length, stride speed, and muscle activation of the four muscle groups measured. Differences (p < .05) were observed in ankle dorsiflexion, which was reduced by 2.47 ± 0.01 degrees on the Treadport. Conclusion: Walking overground and on the Treadport is highly correlated and not significantly different in 13 of 14 parameters. Application: This study suggests that the Treadport creates an environment for a natural walking experience, where the natural gait of users is almost preserved, with great potential to be useful for other applications, such as gait rehabilitation of individuals with walking impairments.
24

Proença, Ricardo, Arminda Guerra, and Pedro Campos. "A Gestural Recognition Interface for Intelligent Wheelchair Users." International Journal of Sociotechnology and Knowledge Development 5, no. 2 (April 2013): 63–81. http://dx.doi.org/10.4018/jskd.2013040105.

Abstract:
The authors present a new system that exploits novel human-machine interfaces based on the recognition of static gestures of human hands. The aim is to help the occupant of a wheelchair access certain objects in order to facilitate his or her daily life. The authors' approach is based on simple computational processes and low-cost hardware. Its development involves a comprehensive approach to computer vision problems based on video image capture, image segmentation, feature extraction, pattern recognition, and classification. The importance of this work lies in the way that differently-abled users, through new models of interaction, will have their lives significantly facilitated in a natural and intuitive manner.
25

Sorel, Anthony, Richard Kulpa, Emmanuel Badier, and Franck Multon. "Dealing with Variability when Recognizing User's Performance in Natural 3D Gesture Interfaces." International Journal of Pattern Recognition and Artificial Intelligence 27, no. 08 (December 2013): 1350023. http://dx.doi.org/10.1142/s0218001413500237.

Abstract:
Recognition of natural gestures is a key issue in many applications, including videogames and other immersive applications. Whatever the motion capture device, the key problem is to recognize, at an interactive frame rate, a motion that could be performed by a range of different users. Hidden Markov Models (HMMs), which are commonly used to recognize a user's performance, rely on a motion representation that strongly affects the overall recognition rate of the system. In this paper, we propose to use a compact motion representation based on morphology-independent features, and we evaluate its performance compared to classical representations. When dealing with 15 very similar upper-limb motions, HMMs based on morphology-independent features yield a significantly higher recognition rate (84.9%) than classical Cartesian or angular data (70.4% and 55.0%, respectively). Moreover, when the unknown motions are performed by a large number of users who have never contributed to the learning process, the recognition rate of the morphology-independent input features decreases only slightly (down to 68.2% for an HMM trained with the motions of only one subject) compared to other features (25.3% for Cartesian features and 17.8% for angular features in the same conditions). The method is illustrated through an interactive demo in which three virtual humans have to interactively recognize and replay the performance of the user. Each virtual human is associated with an HMM recognizer based on one of the three input features.
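
A sketch of the per-class HMM recognition scheme the abstract evaluates, assuming the hmmlearn package is available; the normalization shown is a crude stand-in for the paper's morphology-independent representation, which expresses motion relative to the performer's body dimensions.

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def normalize_by_morphology(traj: np.ndarray, arm_length: float) -> np.ndarray:
    """Very rough morphology normalization: express wrist positions
    relative to the starting pose and scale by the user's arm length."""
    return (traj - traj[0]) / arm_length

def train_gesture_model(examples: list) -> hmm.GaussianHMM:
    """Fit one HMM per gesture class from example trajectories,
    each of shape (n_frames, n_features)."""
    X = np.vstack(examples)
    lengths = [len(e) for e in examples]
    model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=25)
    model.fit(X, lengths)
    return model

def recognize(models: dict, traj: np.ndarray) -> str:
    """Pick the gesture class whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(traj))
```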
26

Nanjappan, Vijayakumar, Rongkai Shi, Hai-Ning Liang, Kim King-Tong Lau, Yong Yue, and Katie Atkinson. "Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study." Multimodal Technologies and Interaction 3, no. 2 (May 9, 2019): 33. http://dx.doi.org/10.3390/mti3020033.

Abstract:
Textiles are a vital and indispensable part of our clothing that we use daily. They are very flexible, often lightweight, and have a variety of application uses. Today, with the rapid developments in small and flexible sensing materials, textiles can be enhanced and used as input devices for interactive systems. Clothing-based wearable interfaces are suitable for in-vehicle controls. They can combine various modalities to enable users to perform simple, natural, and efficient interactions while minimizing any negative effect on their driving. Research on clothing-based wearable in-vehicle interfaces is still underexplored. As such, there is a lack of understanding of how to use textile-based input for in-vehicle controls. As a first step towards filling this gap, we have conducted a user-elicitation study to involve users in the process of designing in-vehicle interactions via a fabric-based wearable device. We have been able to distill a taxonomy of wrist and touch gestures for in-vehicle interactions using a fabric-based wrist interface in a simulated driving setup. Our results help drive forward the investigation of the design space of clothing-based wearable interfaces for in-vehicle secondary interactions.
27

Gehani, Ashish, Raza Ahmad, Hassan Irshad, Jianqiao Zhu, and Jignesh Patel. "Digging into Big Provenance (with SPADE)." Queue 19, no. 3 (June 30, 2021): 77–106. http://dx.doi.org/10.1145/3475965.3476885.

Abstract:
Several interfaces exist for querying provenance. Many are not flexible in allowing users to select a database type of their choice. Some provide query functionality in a data model that is different from the graph-oriented one that is natural for provenance. Others have intuitive constructs for finding results but have limited support for efficiently chaining responses, as needed for faceted search. This article presents a user interface for querying provenance that addresses these concerns and is agnostic to the underlying database being used.
28

Gómez-Portes, Cristian, David Vallejo, Ana-Isabel Corregidor-Sánchez, Marta Rodríguez-Hernández, José L. Martín-Conty, Santiago Schez-Sobrino, and Begoña Polonio-López. "A Platform Based on Personalized Exergames and Natural User Interfaces to Promote Remote Physical Activity and Improve Healthy Aging in Elderly People." Sustainability 13, no. 14 (July 7, 2021): 7578. http://dx.doi.org/10.3390/su13147578.

Abstract:
In recent years, there has been a significant growth in the number of research works focused on improving the lifestyle and health of elderly people by means of technology. Telerehabilitation and the promotion of physical activity at home have been two of the fields that have attracted more attention, especially currently due to the COVID-19 pandemic. However, elderly people are sometimes reluctant to use technology at home, mainly due to fear of technology and lack of familiarity. In this context, this article presents a low-cost platform that relies on exergames and natural user interfaces to promote physical activity at home and improve the quality of life in elderly people. The underlying system is easy to use and accessible, offering a number of interaction mechanisms that guide users through the execution of routines and exercises. A relevant feature of the proposal is the ability to customize the exergames, making it possible for the therapist to adapt them according to the user’s needs. Motivation is also addressed within the developed platform to maintain the user’s engagement level as time passes by. An empirical experiment is conducted to measure the usability and motivational aspects of the proposal, which was evaluated by 17 users between 62 and 89 years of age. The obtained results showed that the proposal was well received, considering that most of the users were not experienced at all with exergame-based systems.
29

Colli Alfaro, Jose Guillermo, and Ana Luisa Trejos. "User-Independent Hand Gesture Recognition Classification Models Using Sensor Fusion." Sensors 22, no. 4 (February 9, 2022): 1321. http://dx.doi.org/10.3390/s22041321.

Abstract:
Recently, it has been proven that targeting motor impairments as early as possible while using wearable mechatronic devices for assisted therapy can improve rehabilitation outcomes. However, despite the advanced progress on control methods for wearable mechatronic devices, the need for a more natural interface that allows for better control remains. To address this issue, electromyography (EMG)-based gesture recognition systems have been studied as a potential solution for human–machine interface applications. Recent studies have focused on developing user-independent gesture recognition interfaces to reduce calibration times for new users. Unfortunately, given the stochastic nature of EMG signals, the performance of these interfaces is negatively impacted. To address this issue, this work presents a user-independent gesture classification method based on a sensor fusion technique that combines EMG data and inertial measurement unit (IMU) data. The Myo Armband was used to measure muscle activity and motion data from healthy subjects. Participants were asked to perform seven types of gestures in four different arm positions while using the Myo on their dominant limb. Data obtained from 22 participants were used to classify the gestures using three different classification methods. Overall, average classification accuracies in the range of 67.5–84.6% were obtained, with the Adaptive Least-Squares Support Vector Machine model obtaining accuracies as high as 92.9%. These results suggest that by using the proposed sensor fusion approach, it is possible to achieve a more natural interface that allows better control of wearable mechatronic devices during robot assisted therapies.
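
The fusion step can be pictured as feature concatenation ahead of a standard classifier. The scikit-learn sketch below uses fabricated random data in place of Myo recordings and an RBF-kernel SVM rather than the paper's adaptive least-squares variant, so it shows only the overall shape of such a pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_features(emg: np.ndarray, imu: np.ndarray) -> np.ndarray:
    """Concatenate simple time-domain EMG features (mean absolute value,
    waveform length) with mean IMU readings over one analysis window."""
    mav = np.mean(np.abs(emg), axis=0)
    wl = np.sum(np.abs(np.diff(emg, axis=0)), axis=0)
    return np.concatenate([mav, wl, np.mean(imu, axis=0)])

rng = np.random.default_rng(1)  # stand-in for 8 EMG + 10 IMU channels
X = np.array([fuse_features(rng.normal(size=(200, 8)),
                            rng.normal(size=(200, 10))) for _ in range(60)])
y = rng.integers(0, 7, size=60)  # seven gesture classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:3]))
```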
30

Ogden, William, and Craig Kaplan. "The use of and and or in a natural language computer interface." Proceedings of the Human Factors Society Annual Meeting 30, no. 8 (September 1986): 829–33. http://dx.doi.org/10.1177/154193128603000822.

Abstract:
A study of the use of "and" and "or" for specifying intersection and union relationships between conjoined qualifiers of varying characteristics was conducted using a simulated natural language query system. Subjects always used "or" correctly to indicate union, but "and" was used to indicate both union and intersection. Interpretation rules were defined that could be used to clarify the intended meaning of "and" for some but not all of the cases. The results indicated subjects could implicitly learn to be more precise. These results suggest that natural language interfaces can be built to recognize ambiguous input and should prompt users for clarification. Simple syntactic elements that would distinguish the meaning of "and" can be defined and taught to users.
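
One of the study's conclusions, that systems should prompt users to clarify an ambiguous "and", can be expressed as a tiny interpretation rule. The sketch below is a hypothetical illustration in the spirit of the paper, not the rules it defines.

```python
def interpret_and(attributes: list) -> str:
    """Interpret "and" joining conjoined qualifiers. If all qualifiers
    constrain the SAME attribute (e.g., "ships from France and Italy"),
    "and" must mean union, since one record cannot hold both values;
    across different attributes the intent is unclear, so ask the user."""
    if len(set(attributes)) == 1:
        return "UNION"
    return "PROMPT_FOR_CLARIFICATION"

print(interpret_and(["country", "country"]))  # UNION
print(interpret_and(["country", "tonnage"]))  # PROMPT_FOR_CLARIFICATION
```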
31

Suh, Kil Soo, and A. Milton Jenkins. "A Comparison of Linear Keyword and Restricted Natural Language Data Base Interfaces for Novice Users." Information Systems Research 3, no. 3 (September 1992): 252–72. http://dx.doi.org/10.1287/isre.3.3.252.

32

Hudec, Miroslav, Erika Bednárová, and Andreas Holzinger. "Augmenting Statistical Data Dissemination by Short Quantified Sentences of Natural Language." Journal of Official Statistics 34, no. 4 (December 1, 2018): 981–1010. http://dx.doi.org/10.2478/jos-2018-0048.

Abstract:
Data from National Statistical Institutes is generally considered an important source of credible evidence for a variety of users. Summarization and dissemination via traditional methods is a convenient approach for providing this evidence. However, this is usually comprehensible only for users with a considerable level of statistical literacy. A promising alternative lies in augmenting the summarization linguistically. Less statistically literate users (e.g., domain experts and the general public), as well as disabled people, can benefit from such a summarization. This article studies the potential of summaries expressed in short quantified sentences. Summaries including, for example, "most visits from remote countries are of a short duration" can be immediately understood by diverse users. Linguistic summaries are not intended to replace existing dissemination approaches, but can augment them by providing alternatives for the benefit of diverse users of official statistics. Linguistic summarization can be achieved via mathematical formalization of linguistic terms and relative quantifiers by fuzzy sets. To avoid summaries based on outliers or data with low coverage, a quality criterion is applied. The concept based on linguistic summaries is demonstrated on test interfaces, interpreting summaries from real municipal statistical data. The article identifies a number of further research opportunities, and demonstrates ways to explore those.
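
The fuzzy-set machinery behind such summaries is compact enough to sketch. Below, the validity of "most visits are of a short duration" is computed from a relative quantifier and a fuzzy predicate; the breakpoints of both functions are illustrative, not the ones used by the authors.

```python
def most(proportion: float) -> float:
    """Relative quantifier "most": 0 below 0.3, 1 above 0.8,
    linear in between (Zadeh-style, with illustrative breakpoints)."""
    return min(1.0, max(0.0, (proportion - 0.3) / 0.5))

def short_duration(nights: int) -> float:
    """Fuzzy predicate "short duration" over nights stayed."""
    return min(1.0, max(0.0, (6 - nights) / 4))

def summary_validity(durations: list) -> float:
    """Truth degree of "most visits are of a short duration"."""
    mean_membership = sum(short_duration(n) for n in durations) / len(durations)
    return most(mean_membership)

print(summary_validity([1, 2, 2, 3, 7, 1, 2]))  # 1.0: summary fully valid
```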
33

Huang, Ting-Hao K., Amos Azaria, Oscar J. Romero, and Jeffrey P. Bigham. "InstructableCrowd: Creating IF-THEN Rules for Smartphones via Conversations with the Crowd." Human Computation 6 (September 10, 2019): 113–46. http://dx.doi.org/10.15346/hc.v6i1.104.

Abstract:
Natural language interfaces have become a common part of modern digital life. Chatbots utilize text-based conversations to communicate with users; personal assistants on smartphones such as Google Assistant take direct speech commands from their users; and speech-controlled devices such as Amazon Echo use voice as their only input mode. In this paper, we introduce InstructableCrowd, a crowd-powered system that allows users to program their devices via conversation. The user verbally expresses a problem to the system, and a group of crowd workers collectively respond and program relevant multi-part IF-THEN rules to help the user. The IF-THEN rules generated by InstructableCrowd connect relevant sensor combinations (e.g., location, weather, device acceleration, etc.) to useful effectors (e.g., text messages, device alarms, etc.). Our study showed that non-programmers can use the conversational interface of InstructableCrowd to create IF-THEN rules of similar quality to rules created manually. InstructableCrowd generally illustrates how users may converse with their devices, not only to trigger simple voice commands, but also to personalize their increasingly powerful and complicated devices.
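
To make the IF-THEN structure concrete, here is a minimal rule-engine sketch; the sensor names and the sample rule are invented in the spirit of the abstract, not InstructableCrowd's actual schema.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """Multi-part IF-THEN rule: fire the effector when all conditions hold."""
    conditions: List[Callable[[Dict], bool]]
    effector: Callable[[Dict], None]

def run_rules(rules: List[Rule], sensors: Dict) -> None:
    for rule in rules:
        if all(cond(sensors) for cond in rule.conditions):
            rule.effector(sensors)

# Hypothetical rule: IF it is raining AND I am near home,
# THEN send a reminder text message.
rules = [Rule(
    conditions=[lambda s: s["weather"] == "rain",
                lambda s: s["distance_home_km"] < 1.0],
    effector=lambda s: print("SMS: take an umbrella before you leave"),
)]

run_rules(rules, {"weather": "rain", "distance_home_km": 0.4})
```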
34

Holder, Sherrie, and Leia Stirling. "Effect of Gesture Interface Mapping on Controlling a Multi-degree-of-freedom Robotic Arm in a Complex Environment." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 183–87. http://dx.doi.org/10.1177/1071181320641045.

Abstract:
There are many robotic scenarios that require real-time function in large or unconstrained environments, for example, the robotic arm on the International Space Station (ISS). Fully-wearable gesture control systems are well-suited to human-robot interaction scenarios where users are mobile and must have their hands free. A human study examined operation of a simulated ISS robotic arm using three different gesture input mappings compared to the traditional joystick interface. Two gesture mappings permitted multiple simultaneous inputs (multi-input), while the third was a single-input method. Experimental results support performance advantages of multi-input gesture methods over single-input ones. Differences between the two multi-input methods in task completion and workload show an effect of user-directed attention on interface success. Mappings based on natural human arm movement are promising for gesture interfaces in mobile robotic applications. This study also highlights challenges in gesture mapping, including how users align gestures with their body and environment.
35

Lieberman, Henry. "User Interface Goals, AI Opportunities." AI Magazine 30, no. 4 (September 18, 2009): 16. http://dx.doi.org/10.1609/aimag.v30i4.2266.

Abstract:
This is an opinion piece about the relationship between the fields of human-computer interaction (HCI) and artificial intelligence (AI). The ultimate goal of both fields is to make user interfaces more effective and easier to use for people. But historically, they have disagreed about whether "intelligence" or "direct manipulation" is the better route to achieving this. There is an unjustified perception in HCI that AI is unreliable. There is an unjustified perception in AI that interfaces are merely cosmetic. This disagreement is counterproductive. This article argues that AI's goals of intelligent interfaces would benefit enormously from the user-centered design and testing principles of HCI. It argues that HCI's stated goals of meeting the needs of users and interacting in natural ways would be best served by application of AI. Peace.
36

Benítez-Guijarro, Antonio, Zoraida Callejas, Manuel Noguera, and Kawtar Benghazi. "Introducing Computational Semantics for Natural Language Understanding in Conversational Nutrition Coaches for Healthy Eating." Proceedings 2, no. 19 (October 18, 2018): 506. http://dx.doi.org/10.3390/proceedings2190506.

Abstract:
Nutrition e-coaches have been demonstrated to be a successful tool for fostering healthy eating habits. Most of these systems are based on graphical user interfaces where users select the meals they have ingested from predefined lists and receive feedback on their diet. On the one hand, conversational interfaces based on natural language processing allow users to interact with the coach more easily and with fewer restrictions. On the other hand, natural language introduces more ambiguity: instead of selecting the input from a predefined finite list of meals, the user can describe their ingests in many different ways, which must be translated by the system into a tractable semantic representation from which to derive the nutritional aspects of interest. In this paper, we present a method that improves on state-of-the-art approaches by including nutritional semantic aspects at different stages of the natural language understanding processing of the user's written or spoken input. The outcome is a rich nutritional interpretation of each user ingest that is independent of the modality used to interact with the coach.
37

Li, Hao, Yu-Ping Wang, Jie Yin, and Gang Tan. "SmartShell: Automated Shell Scripts Synthesis from Natural Language." International Journal of Software Engineering and Knowledge Engineering 29, no. 02 (February 2019): 197–220. http://dx.doi.org/10.1142/s0218194019500098.

Abstract:
Modern shell scripts provide interfaces with rich functionality for system administration. However, it is not easy for end-users to write correct shell scripts; misusing commands may cause unpredictable results. In this paper, we present SmartShell, an automated function-based tool for shell script synthesis, which uses natural language descriptions as input. It can help the computer system to “understand” users’ intentions. SmartShell is based on two insights: (1) natural language descriptions for system objects (such as files and processes) and operations can be recognized by natural language processing tools; (2) system-administration tasks are often completed by short shell scripts that can be automatically synthesized from natural language descriptions. SmartShell synthesizes shell scripts in three steps: (1) using natural language processing tools to convert the description of a system-administration task into a syntax tree; (2) using program-synthesis techniques to construct a SmartShell intermediate-language script from the syntax tree; (3) translating the intermediate-language script into a shell script. Experimental results show that SmartShell can successfully synthesize 53.7% of tasks collected from shell-script helping forums.
38

Matiushchenko, Oleh, and Ganna Zavolodko. "Experience Using Voice Assistants." ITSynergy, no. 1 (November 30, 2021): 5–9. http://dx.doi.org/10.53920/its-2021-1-1.

Abstract:
Natural user interfaces are becoming popular. Among the most common today are voice-activated interfaces, including smart personal assistants such as Google Assistant, Alexa, Cortana, Siri, Alice, Bixby, and Mycroft. This article presents the results of their evaluation in three dimensions: capabilities, language support, and how natural users find their responses. Evaluations were performed by analyzing existing reviews. The results show that Alexa and Google Assistant perform much better than Siri and Cortana. However, there is no statistically significant difference between Alexa and Google Assistant, and neither of them integrates into modern messengers with a note-taking function, which is a significant disadvantage of such devices.
APA, Harvard, Vancouver, ISO, and other styles
39

Harlan, Jakob, Benjamin Schleich, and Sandro Wartzack. "A SYSTEMATIC COLLECTION OF NATURAL INTERACTIONS FOR IMMERSIVE MODELING FROM BUILDING BLOCKS." Proceedings of the Design Society 1 (July 27, 2021): 283–92. http://dx.doi.org/10.1017/pds.2021.29.

Full text
Abstract:
The increased availability of affordable virtual reality hardware in recent years has boosted research and development of such systems for many fields of application. While extended reality systems are well established for the visualization of product data, immersive authoring tools that can create and modify that data have yet to see widespread productive use. By making use of building blocks, we see the possibility that such tools allow quick expression of spatial concepts, even for non-expert users. Optical hand-tracking technology allows the implementation of this immersive modeling through natural user interfaces, in which users manipulate the virtual objects with their bare hands. In this work, we present a systematic collection of natural interactions suited for immersive building-block-based modeling systems. The interactions are conceptually described and categorized by the task they fulfil.
APA, Harvard, Vancouver, ISO, and other styles
40

He, Zecheng, Srinivas Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Wichers, Gabriel Schubiner, Ruby Lee, and Jindong Chen. "ActionBert: Leveraging User Actions for Semantic Understanding of User Interfaces." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 5931–38. http://dx.doi.org/10.1609/aaai.v35i7.16741.

Full text
Abstract:
As mobile devices are becoming ubiquitous, regularly interacting with a variety of user interfaces (UIs) is a common aspect of daily life for many people. To improve the accessibility of these devices and to enable their usage in a variety of settings, building models that can assist users and accomplish tasks through the UI is vitally important. However, there are several challenges to achieving this. First, UI components of similar appearance can have different functionalities, making understanding their function more important than just analyzing their appearance. Second, domain-specific features like the Document Object Model (DOM) in web pages and the View Hierarchy (VH) in mobile applications provide important signals about the semantics of UI elements, but these features are not in a natural language format. Third, owing to the large diversity in UIs and the absence of standard DOM or VH representations, building a UI understanding model with high coverage requires large amounts of training data. Inspired by the success of pre-training-based approaches in NLP for tackling a variety of problems in a data-efficient way, we introduce a new pre-trained UI representation model called ActionBert. Our methodology is designed to leverage visual, linguistic and domain-specific features in user interaction traces to pre-train generic feature representations of UIs and their components. Our key intuition is that user actions, e.g., a sequence of clicks on different UI components, reveal important information about their functionality. We evaluate the proposed model on a wide variety of downstream tasks, ranging from icon classification to UI component retrieval based on its natural language description. Experiments show that the proposed ActionBert model outperforms multi-modal baselines across all downstream tasks by up to 15.5%.
APA, Harvard, Vancouver, ISO, and other styles
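The key intuition in this abstract, that interaction traces are a self-supervision signal, can be illustrated with a small sketch of how BERT-style training pairs might be derived from a click trace. The trace format and masking scheme here are hypothetical simplifications, not ActionBert's actual preprocessing.

    # Illustrative sketch only: deriving self-supervised training pairs
    # from UI interaction traces. For each position, the acted-on UI
    # element is masked and the rest of the trace becomes the context.
    def masked_action_pairs(trace):
        """Produce (context, target) pairs from a sequence of UI actions."""
        pairs = []
        for i, target in enumerate(trace):
            context = trace[:i] + ["[MASK]"] + trace[i + 1:]
            pairs.append((context, target))
        return pairs

    trace = ["search_box", "submit_button", "result_link", "back_button"]
    for context, target in masked_action_pairs(trace):
        print(context, "->", target)

A model trained to fill in the mask must learn what role each component plays in the interaction flow, which is exactly the functional (rather than visual) understanding the paper argues for.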
41

Griol, David, and Zoraida Callejas. "A Neural Network Approach to Intention Modeling for User-Adapted Conversational Agents." Computational Intelligence and Neuroscience 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/8402127.

Full text
Abstract:
Spoken dialogue systems have been proposed to enable a more natural and intuitive interaction with the environment and with human-computer interfaces. In this contribution, we present a framework based on neural networks that models the user's intention during the dialogue and uses this prediction to dynamically adapt the dialogue model of the system, taking into consideration the user's needs and preferences. We have evaluated our proposal by developing a user-adapted spoken dialogue system that provides tourist information and services, and we offer a detailed discussion of the positive influence of our proposal on the success of the interaction, the information and services provided, and the quality perceived by the users.
APA, Harvard, Vancouver, ISO, and other styles
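As a rough illustration of intention prediction driving dialogue adaptation, consider the toy scorer below. The paper uses a trained neural network; this hand-set linear model, and the feature and intention inventories, are invented purely to show the shape of the idea.

    # Illustrative sketch only: predict the user's intention from dialogue
    # features, then let the dialogue manager adapt to that prediction.
    import math

    INTENTIONS = ["ask_hotel", "ask_restaurant", "ask_transport"]

    # Hand-set weights standing in for a trained network: feature -> scores.
    WEIGHTS = {
        "mentioned_food": [0.1, 2.0, 0.0],
        "mentioned_room": [2.0, 0.1, 0.0],
        "mentioned_bus":  [0.0, 0.0, 2.0],
    }

    def predict_intention(features):
        """Score each intention, then softmax into a probability estimate."""
        scores = [0.0, 0.0, 0.0]
        for f in features:
            for i, w in enumerate(WEIGHTS.get(f, [0, 0, 0])):
                scores[i] += w
        exp = [math.exp(s) for s in scores]
        probs = [e / sum(exp) for e in exp]
        return INTENTIONS[probs.index(max(probs))], probs

    intention, probs = predict_intention(["mentioned_food"])
    print(intention)  # ask_restaurant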
42

Denby, Bruce, Tamás Gábor Csapó, and Michael Wand. "Future Speech Interfaces with Sensors and Machine Intelligence." Sensors 23, no. 4 (February 10, 2023): 1971. http://dx.doi.org/10.3390/s23041971.

Full text
Abstract:
Speech is the most spontaneous and natural means of communication. Speech is also becoming the preferred modality for interacting with mobile or fixed electronic devices. However, speech interfaces have drawbacks, including a lack of user privacy; non-inclusivity for certain users; poor robustness in noisy conditions; and the difficulty of creating complex man–machine interfaces. To help address these problems, the Special Issue “Future Speech Interfaces with Sensors and Machine Intelligence” assembles eleven contributions covering multimodal and silent speech interfaces; lip reading applications; novel sensors for speech interfaces; and enhanced speech inclusivity tools for future speech interfaces. Short summaries of the articles are presented, followed by an overall evaluation. The success of this Special Issue has led to its being re-issued as “Future Speech Interfaces with Sensors and Machine Intelligence-II” with a deadline in March of 2023.
APA, Harvard, Vancouver, ISO, and other styles
43

Stock, Oliviero, Carlo Strapparava, and Massimo Zancanaro. "Multimodal Information Exploration." Journal of Educational Computing Research 17, no. 3 (October 1997): 277–95. http://dx.doi.org/10.2190/9g6k-ca95-uheb-3m4d.

Full text
Abstract:
Exploration is a fundamental aspect of learning. In computer-based systems, advanced interfaces to a rich information space can become a key element in this connection. Tools adaptable to different modes of exploring information and to the characteristics of different users are needed. In this article, the integration of hypermedia and natural language dialogue are discussed and reference is made to ALFRESCO, a natural language-centered multimodal system developed at IRST. The discussion is mainly focused on the role and the structure of the communicative context.
APA, Harvard, Vancouver, ISO, and other styles
44

Cruz-Cunha, Maria Manuela, Goran D. Putnik, Patrícia Gonçalves, and Joaquim Gonçalves. "Evaluation of User Acceptance of Virtual Environments and Interfaces for Communication in Virtual Teams." International Journal of Web Portals 6, no. 4 (October 2014): 18–40. http://dx.doi.org/10.4018/ijwp.2014100102.

Full text
Abstract:
Several studies have highlighted the relevance of face-to-face communication, suggesting that computer-mediated communication can reduce group effectiveness and lower its users' satisfaction in terms of trust and comfort. Supported by an experiment in which the emotional or affective aspects of communication were tested, this paper validates the thesis that, from the users' perspective, there is no opposition to accepting virtual environments and interfaces for communication, and that these environments can cope with the reconfiguration dynamics requirements of virtual teams or client-server relations in a virtual enterprise operation. For the validation, the authors experimented with two architectures, the Direct Communication Architecture (DCA) and the Virtual Communication Architecture (VCA), and found that the VCA could represent a “natural” environment able to cope with the new generation of organizational environments and teams, characterised by intense reconfiguration dynamics.
APA, Harvard, Vancouver, ISO, and other styles
45

Costa, Abílio, and João P. Pereira. "Analysis and Evaluation of Sketch Recognizers in the Creation of Physics Simulations." International Journal of Creative Interfaces and Computer Graphics 4, no. 2 (July 2013): 1–21. http://dx.doi.org/10.4018/ijcicg.2013070101.

Full text
Abstract:
Sketch-based interfaces can provide a natural way for users to interact with applications. Since the core of a sketch-based interface is the gesture recognizer, there is a need to correctly evaluate various recognizers before choosing one. In this paper the authors present an evaluation of three gesture recognizers: Rubine's recognizer, CALI and the $1 Recognizer. The evaluation relied on a set of real gesture samples drawn by 32 subjects, with a gesture repertoire arranged for use in SketchyDynamics, a programming library that intends to facilitate the creation of applications by rapidly providing them with a sketch-based interface and physics simulation capabilities. The authors also discuss some improvements to the recognizers' implementation that helped achieve higher recognition rates. In the end, CALI had the best recognition rate with 94% accuracy, followed by the $1 Recognizer with 87% and finally by Rubine's recognizer with 79%.
APA, Harvard, Vancouver, ISO, and other styles
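For readers unfamiliar with how recognizers like the $1 Recognizer work, here is a sketch of its template-matching core. A full $1 implementation also resamples, rotates, scales, and translates strokes before comparison; those normalization steps are omitted here, and the templates are invented for the example.

    # Illustrative sketch only: the nearest-template core of a $1-style
    # gesture recognizer, applied to already-normalized strokes.
    import math

    def path_distance(a, b):
        """Mean point-to-point distance between two equal-length strokes."""
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    def recognize(stroke, templates):
        """Return the name of the template closest to the input stroke."""
        return min(templates, key=lambda name: path_distance(stroke, templates[name]))

    templates = {
        "line": [(0, 0), (1, 0), (2, 0), (3, 0)],
        "diag": [(0, 0), (1, 1), (2, 2), (3, 3)],
    }
    print(recognize([(0, 0), (1, 0.1), (2, 0.0), (3, 0.1)], templates))  # line

The simplicity of this matching step is precisely why the $1 Recognizer is popular despite template-free alternatives like Rubine's feature-based classifier.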
46

Liu, Qian, Bei Chen, Jian-Guang Lou, Ge Jin, and Dongmei Zhang. "FANDA: A Novel Approach to Perform Follow-Up Query Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6770–77. http://dx.doi.org/10.1609/aaai.v33i01.33016770.

Full text
Abstract:
Recent work on Natural Language Interfaces to Databases (NLIDB) has attracted considerable attention. NLIDB allow users to search databases using natural language instead of SQL-like query languages. While saving the users from having to learn query languages, multi-turn interaction with NLIDB usually involves multiple queries where contextual information is vital to understand the users’ query intents. In this paper, we address a typical contextual understanding problem, termed as follow-up query analysis. In spite of its ubiquity, follow-up query analysis has not been well studied due to two primary obstacles: the multifarious nature of follow-up query scenarios and the lack of high-quality datasets. Our work summarizes typical follow-up query scenarios and provides a new FollowUp dataset with 1000 query triples on 120 tables. Moreover, we propose a novel approach FANDA, which takes into account the structures of queries and employs a ranking model with weakly supervised max-margin learning. The experimental results on FollowUp demonstrate the superiority of FANDA over multiple baselines across multiple metrics.
APA, Harvard, Vancouver, ISO, and other styles
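The "ranking model with weakly supervised max-margin learning" mentioned above can be reduced to a familiar hinge objective: the correct fusion of a follow-up query with its precedent should outscore incorrect candidates by a margin. The scorer inputs below are toy stand-ins, not FANDA's actual features.

    # Illustrative sketch only: the max-margin ranking objective at the
    # heart of learning-to-rank approaches like the one described above.
    def hinge_ranking_loss(score_positive, score_negative, margin=1.0):
        """Zero loss once the positive candidate outscores the negative
        one by at least the margin; linear penalty otherwise."""
        return max(0.0, margin - (score_positive - score_negative))

    # A good model state: the correct candidate clearly wins -> zero loss.
    print(hinge_ranking_loss(2.5, 0.5))  # 0.0
    # A bad model state: the negative candidate is too close -> positive loss.
    print(hinge_ranking_loss(1.0, 0.8))  # 0.8

Weak supervision enters because the correct candidate is not labeled directly; it is inferred from coarser signals, and the margin objective is applied to the inferred ranking.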
47

Pfeiffer, Thies, and Nadine Pfeiffer-Leßmann. "Virtual Prototyping of Mixed Reality Interfaces with Internet of Things (IoT) Connectivity." i-com 17, no. 2 (August 28, 2018): 179–86. http://dx.doi.org/10.1515/icom-2018-0025.

Full text
Abstract:
One key aspect of the Internet of Things (IoT) is that human-machine interfaces are disentangled from the physicality of the devices. This gives designers more freedom, but may also lead to more abstract interfaces, as they lack the natural context created by the presence of the machine. Mixed Reality (MR), on the other hand, is a key technology that enables designers to create user interfaces anywhere, either linked to a physical context (augmented reality, AR) or embedded in a virtual context (virtual reality, VR). Designing MR interfaces remains a challenge today, as there is not yet a common design language or a set of standard functionalities and patterns. In addition, neither customers nor future users have substantial experience in using MR interfaces. Prototypes can help overcome this gap, as they continuously provide user experiences of increasing realism along the design process. We present ExProtoVAR, a tool that supports quick and lightweight prototyping of MR interfaces for IoT using VR technology.
APA, Harvard, Vancouver, ISO, and other styles
48

Liang, Xiubo, Zhen Wang, Weidong Geng, and Franck Multon. "A Motion-based User Interface for the Control of Virtual Humans Performing Sports." International Journal of Virtual Reality 10, no. 3 (January 1, 2011): 1–8. http://dx.doi.org/10.20870/ijvr.2011.10.3.2815.

Full text
Abstract:
Traditional human-computer interfaces are not intuitive or natural for the choreography of human motions in VR and video games. In this paper we present a novel approach to controlling virtual humans performing sports with a motion-based user interface. The process begins by asking the user to draw gestures in the air with a Wii Remote. The system then recognizes the gestures with pre-trained hidden Markov models. Finally, the recognized gestures are used to choreograph the simulated sport motions of a virtual human. The average recognition rate of the algorithm is more than 90% on our test set of 20 gestures. Results on the interactive simulation of several kinds of sport motions are given to show the efficiency and appeal of our system, which is easy to use, especially for novice users.
APA, Harvard, Vancouver, ISO, and other styles
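Classification with pre-trained hidden Markov models, as in the entry above, amounts to picking the model under which the observed motion sequence is most likely. A real system scores sequences with the forward algorithm; in this sketch each "model" is a toy one-dimensional stand-in, and the gesture names are invented.

    # Illustrative sketch only: classify a gesture by choosing the model
    # that assigns the observation sequence the highest (log-)likelihood.
    def make_model(mean):
        """Toy stand-in for a trained HMM over Wii Remote motion readings."""
        def log_likelihood(observations):
            # Higher (less negative) when observations sit near the mean.
            return -sum((o - mean) ** 2 for o in observations)
        return log_likelihood

    models = {"swing": make_model(2.0), "thrust": make_model(5.0)}

    def recognize(observations):
        # Pick the gesture whose model best explains the observations.
        return max(models, key=lambda g: models[g](observations))

    print(recognize([1.8, 2.2, 2.1]))  # swing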
49

VAILLANT, PASCAL. "Interpretation of iconic utterances based on contents representation: Semantic analysis in the PVI system." Natural Language Engineering 4, no. 1 (March 1998): 17–40. http://dx.doi.org/10.1017/s1351324997001836.

Full text
Abstract:
This article focuses on the need for technological aid for agrammatics, and presents a system designed to meet this need. The field of Augmentative and Alternative Communication (AAC) explores ways to allow people with speech or language disabilities to communicate. The use of computers and natural language processing techniques offers a range of new possibilities in this direction. Yet AAC addresses mainly speech deficits, not linguistic disabilities. A model of aided AAC interfaces with a place for natural language processing is presented. The PVI system, described in this contribution, makes use of such advanced techniques. It has been developed at Thomson-CSF for use by children with cerebral palsy. It presents a customizable interface that helps disabled users compose sequences of icons displayed on a computer screen. A semantic parser, using lexical semantics information, is used to determine the best case assignments for predicative icons in the sequence. It maximizes a global value, the ‘semantic harmony’ of the sequence. The resulting conceptual graph is fed to a natural language generation module which uses Tree Adjoining Grammars (TAG) to generate French sentences. Evaluation by users demonstrates the system's strengths and limitations, and shows directions for future development.
APA, Harvard, Vancouver, ISO, and other styles
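The idea of choosing case assignments that maximize a global "semantic harmony" can be illustrated with an exhaustive search over role assignments. The compatibility table, roles, and icon names below are invented for the example; the actual PVI parser works over conceptual graphs and a real lexicon.

    # Illustrative sketch only: assign case roles to icons so that the
    # summed role compatibility (the "harmony") is maximal.
    from itertools import permutations

    ROLES = ["agent", "object"]

    # Hypothetical compatibility of each icon with each case role.
    COMPAT = {
        ("boy", "agent"): 0.9, ("boy", "object"): 0.3,
        ("apple", "agent"): 0.1, ("apple", "object"): 0.8,
    }

    def best_assignment(icons):
        """Try every one-to-one mapping of icons to roles; keep the one
        whose summed compatibility score is highest."""
        best, best_score = None, float("-inf")
        for perm in permutations(ROLES, len(icons)):
            score = sum(COMPAT[(icon, role)] for icon, role in zip(icons, perm))
            if score > best_score:
                best, best_score = dict(zip(icons, perm)), score
        return best, best_score

    print(best_assignment(["boy", "apple"]))
    # ({'boy': 'agent', 'apple': 'object'}, 1.7)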
50

Manjula, M., and R. Rengalakshmi. "Making Research Collaborations: Learning from Processes of Transdisciplinary Engagement in Agricultural Research." Review of Development and Change 26, no. 1 (May 4, 2021): 25–39. http://dx.doi.org/10.1177/09722661211007589.

Full text
Abstract:
This article is an attempt to capture the process and outcomes of disciplinary collaborations in two multi-partner transdisciplinary research projects on agriculture. The focus of the projects was building smallholder resilience in semi-arid tropics. The collaborating disciplines fall broadly into natural sciences and social sciences. The farming community and other actors across the agricultural value chain, being the end users of research, were active stakeholders. This paper details the drivers and barriers in transdisciplinary collaboration and articulates the extent of disciplinary integration achieved between the natural sciences, social sciences and the end users of research. The key elements contributing to effectiveness of transdisciplinary research are the conceptual clarity of disciplinary contributions and interfaces, shared knowledge of the expected research outcomes, positioning of the different disciplines within the research framework, openness of the researchers to disciplinary cross fertilisation, the transdisciplinary research experience of the partnering institutions and accommodation of the cultural differences between the collaborating partners.
APA, Harvard, Vancouver, ISO, and other styles