Academic literature on the topic 'Natural user interfaces'


Journal articles on the topic "Natural user interfaces"

1

Marsh, William E., Jonathan W. Kelly, Julie Dickerson, and James H. Oliver. "Fuzzy Navigation Engine: Mitigating the Cognitive Demands of Semi-Natural Locomotion." Presence: Teleoperators and Virtual Environments 23, no. 3 (October 1, 2014): 300–319. http://dx.doi.org/10.1162/pres_a_00195.

Abstract:
Many interfaces exist for locomotion in virtual reality, although they are rarely considered fully natural. Past research has found that using such interfaces places cognitive demands on the user, with unnatural actions and concurrent tasks competing for finite cognitive resources. Notably, using semi-natural interfaces leads to poor performance on concurrent tasks requiring spatial working memory. This paper presents an adaptive system designed to track a user's concurrent cognitive task load and adjust interface parameters accordingly, varying the extent to which movement is fully natural. A fuzzy inference system is described and the results of an initial validation study are presented. Users of this adaptive interface demonstrated better performance than users of a baseline interface on several movement metrics, indicating that the adaptive interface helped users manage the demands of concurrent spatial tasks in a virtual environment. However, participants experienced some unexpected difficulties when faced with a concurrent verbal task.
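The abstract does not give the engine's actual rule base or membership functions; as a loose sketch of the underlying technique (Mamdani-style fuzzy inference with weighted-centroid defuzzification), with all membership functions, rules, and output centroids invented for illustration:

```python
# Toy Mamdani-style fuzzy inference: maps an estimated concurrent
# cognitive-load score in [0, 1] to a "movement naturalness" gain.
# All membership functions, rules, and centroids are invented; they are
# NOT the ones used by the cited Fuzzy Navigation Engine.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def naturalness_gain(load):
    # Fuzzify: degrees to which the load is LOW, MEDIUM, or HIGH.
    low = tri(load, -0.5, 0.0, 0.5)
    med = tri(load, 0.0, 0.5, 1.0)
    high = tri(load, 0.5, 1.0, 1.5)
    # One rule per input set; consequents are centroids of the output
    # sets (LOW load -> semi-natural 0.2, HIGH load -> near-natural 0.9).
    num = low * 0.2 + med * 0.5 + high * 0.9
    den = low + med + high
    return num / den if den else 0.5
```

Higher load yields a higher gain, i.e. more natural movement when concurrent tasks tax spatial working memory, which matches the adaptive behavior the abstract describes.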
2

Majhadi, Khadija, and Mustapha Machkour. "The history and recent advances of Natural Language Interfaces for Databases Querying." E3S Web of Conferences 229 (2021): 01039. http://dx.doi.org/10.1051/e3sconf/202122901039.

Abstract:
Databases have always been the most important topic in the study of information systems, and an indispensable tool in all information management systems. However, the information stored in these databases is generally extracted using queries expressed in a computer language, such as SQL (Structured Query Language). This tends to limit the number of potential users, in particular non-expert database users, who must know the database structure to write such queries. One solution to this problem is to use a Natural Language Interface (NLI) to communicate with the database, which is the easiest way to get information. The appearance of Natural Language Interfaces for Databases (NLIDB) thus answers a real need and an ambitious goal: to translate a user's query given in Natural Language (NL) into the corresponding query in a Database Query Language (DBQL). This article provides an overview of the state of the art of Natural Language Interfaces as well as their architecture, and summarizes the main recent advances on the task of Natural Language Interfaces for databases.
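The core NLIDB task the abstract describes, turning an NL question into a database query, can be caricatured with a keyword-matching translator. Real systems use parsers, semantic grammars, or neural models; the schema and phrasing below are invented for illustration and are not any published NLIDB:

```python
# Toy keyword-based NL-to-SQL translation over one known table.
# Hypothetical schema; real NLIDB systems are far more sophisticated.
SCHEMA = {"employees": ["name", "salary", "department"]}

def nl_to_sql(question):
    q = question.lower()
    table = next(t for t in SCHEMA if t in q)             # table mentioned?
    cols = [c for c in SCHEMA[table] if c in q] or ["*"]  # columns mentioned?
    sql = f"SELECT {', '.join(cols)} FROM {table}"
    # Naive filter detection: the word after "in" becomes a WHERE value.
    words = q.rstrip("?").split()
    if "in" in words:
        sql += f" WHERE department = '{words[words.index('in') + 1]}'"
    return sql

# nl_to_sql("show the name and salary of employees in sales")
#   -> "SELECT name, salary FROM employees WHERE department = 'sales'"
```

Even this toy shows the central difficulty: the user's wording must be mapped onto the system's model of the schema before a query can be produced.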
3

Pan, Shimei, Michelle Zhou, Keith Houck, and Peter Kissa. "Natural Language Aided Visual Query Building for Complex Data Access." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 2 (July 11, 2010): 1821–26. http://dx.doi.org/10.1609/aaai.v24i2.18819.

Abstract:
Over the past decades, there have been significant efforts to develop robust and easy-to-use query interfaces to databases. So far, the typical query interfaces are GUI-based visual query interfaces. Visual query interfaces, however, have limitations, especially when they are used for accessing large and complex datasets. Therefore, we are developing a novel query interface where users can use natural language expressions to help author visual queries. Our work enhances the usability of a visual query interface by directly addressing the "knowledge gap" issue in visual query interfaces. We have applied our work in several real-world applications. Our preliminary evaluation demonstrates the effectiveness of our approach.
4

Macaranas, A., A. N. Antle, and B. E. Riecke. "What is Intuitive Interaction? Balancing Users' Performance and Satisfaction with Natural User Interfaces." Interacting with Computers 27, no. 3 (February 12, 2015): 357–70. http://dx.doi.org/10.1093/iwc/iwv003.

5

Pollard, Kimberly A., Stephanie M. Lukin, Matthew Marge, Ashley Foots, and Susan G. Hill. "How We Talk with Robots: Eliciting Minimally-Constrained Speech to Build Natural Language Interfaces and Capabilities." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 160–64. http://dx.doi.org/10.1177/1541931218621037.

Abstract:
Industry, military, and academia are showing increasing interest in collaborative human-robot teaming in a variety of task contexts. Designing effective user interfaces for human-robot interaction is an ongoing challenge, and a variety of single and multiple-modality interfaces have been explored. Our work is to develop a bi-directional natural language interface for remote human-robot collaboration in physically situated tasks. When combined with a visual interface and audio cueing, we intend for the natural language interface to provide a naturalistic user experience that requires little training. Building the language portion of this interface requires first understanding how potential users would speak to the robot. In this paper, we describe our elicitation of minimally-constrained robot-directed language, observations about the users’ language behavior, and future directions for constructing an automated robotic system that can accommodate these language needs.
6

Wojciechowski, A. "Hand’s poses recognition as a mean of communication within natural user interfaces." Bulletin of the Polish Academy of Sciences: Technical Sciences 60, no. 2 (October 1, 2012): 331–36. http://dx.doi.org/10.2478/v10175-012-0044-3.

Abstract:
Natural user interfaces (NUI) are the successor to the command line interfaces (CLI) and graphical user interfaces (GUI) so well known to computer users. The new natural approach is based on extensive tracking of human behavior, where hand tracking and gesture recognition seem to play the main roles in communication. The paper reviews common approaches to hand feature tracking and proposes a very effective contour-based hand pose recognition method that can be used straightforwardly in a hand-based natural user interface. Its possible uses range from interaction with medical systems, through games, to communication support for impaired people.
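The paper's contour-based recognizer is only summarized above. One drastically simplified stand-in for the general idea is a radial-distance signature: measure the distance from the contour's centroid to each boundary point and count fingertip-like peaks. The synthetic "hand" contours below are invented for illustration; a real pipeline would first segment the hand and extract its contour (e.g. with OpenCV):

```python
import math

# Radial-distance signature of a closed contour: distance of each
# boundary point from the centroid. Extended fingers show up as peaks.
def radial_profile(contour):
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    return [math.hypot(x - cx, y - cy) for x, y in contour]

def count_peaks(profile, threshold):
    """Count local maxima above threshold (candidate fingertips)."""
    peaks = 0
    for i in range(len(profile)):
        prev = profile[i - 1]                    # wraps around at i == 0
        nxt = profile[(i + 1) % len(profile)]
        if profile[i] > threshold and profile[i] > prev and profile[i] >= nxt:
            peaks += 1
    return peaks

def synthetic_hand(fingers, n=360):
    """Star-shaped stand-in for a hand contour with `fingers` bumps."""
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        r = 1.0 + 0.5 * math.cos(fingers * t)    # bumps where cos peaks
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts
```

`count_peaks(radial_profile(synthetic_hand(3)), 1.2)` reports 3 bumps; classifying such signatures is one simple way a contour can be mapped to a hand pose.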
7

Marsh, William E., Jonathan W. Kelly, Veronica J. Dark, and James H. Oliver. "Cognitive Demands of Semi-Natural Virtual Locomotion." Presence: Teleoperators and Virtual Environments 22, no. 3 (August 1, 2013): 216–34. http://dx.doi.org/10.1162/pres_a_00152.

Abstract:
There is currently no fully natural, general-purpose locomotion interface. Instead, interfaces such as gamepads or treadmills are required to explore large virtual environments (VEs). Furthermore, sensory feedback that would normally be used in real-world movement is often restricted in VR due to constraints such as reduced field of view (FOV). Accommodating these limitations with locomotion interfaces afforded by most virtual reality (VR) systems may induce cognitive demands on the user that are unrelated to the primary task to be performed in the VE. Users of VR systems often have many competing task demands, and additional cognitive demands during locomotion must compete for finite resources. Two studies were previously reported investigating the working memory demands imposed by semi-natural locomotion interfaces (Study 1) and reduced sensory feedback (Study 2). This paper expands on the previously reported results and adds discussion linking the two studies. The results indicated that locomotion with a less natural interface increases spatial working memory demands, and that locomotion with a lower FOV increases general attentional demands. These findings are discussed in terms of their practical implications for selection of locomotion interfaces when designing VEs.
8

García, Alberto, J. Ernesto Solanes, Adolfo Muñoz, Luis Gracia, and Josep Tornero. "Augmented Reality-Based Interface for Bimanual Robot Teleoperation." Applied Sciences 12, no. 9 (April 26, 2022): 4379. http://dx.doi.org/10.3390/app12094379.

Abstract:
Teleoperation of bimanual robots is being used to carry out complex tasks such as surgeries in medicine. Despite the technological advances, current interfaces are not natural to the users, who spend long periods of time in learning how to use these interfaces. In order to mitigate this issue, this work proposes a novel augmented reality-based interface for teleoperating bimanual robots. The proposed interface is more natural to the user and reduces the interface learning process. A full description of the proposed interface is detailed in the paper, whereas its effectiveness is shown experimentally using two industrial robot manipulators. Moreover, the drawbacks and limitations of the classic teleoperation interface using joysticks are analyzed in order to highlight the benefits of the proposed augmented reality-based interface approach.
9

Coury, Bruce G., John Sadowsky, Paul R. Schuster, Michael Kurnow, Marcus J. Huber, and Edmund H. Durfee. "Reducing the Interaction Burden of Complex Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 1 (October 1997): 335–39. http://dx.doi.org/10.1177/107118139704100175.

Abstract:
Reducing the burden of interacting with complex systems has been a long-standing goal of user interface design. In our approach to this problem, we have been developing user interfaces that allow users to interact with complex systems in a natural way and in high-level, task-related terms. These capabilities help users concentrate on making important decisions without the distractions of manipulating systems and user interfaces. To attain such a goal, our approach uses a unique combination of multi-modal interaction and interaction planning. In this paper, we motivate the basis for our approach, describe the user interface technologies we have developed, and briefly discuss the relevant research and development issues.
10

Park, Sankyu, Key-Sun Choi, and K. H. (Kane) Kim. "A Framework for Multi-Agent Systems with Multi-Modal User Interfaces in Distributed Computing Environments." International Journal of Software Engineering and Knowledge Engineering 07, no. 03 (September 1997): 351–69. http://dx.doi.org/10.1142/s0218194097000217.

Abstract:
In current multi-agent systems, the user is typically interacting with a single agent at a time through relatively inflexible and modestly intelligent interfaces. As a consequence, these systems force the users to submit simplistic requests only and suffer from problems such as the low-level nature of the system services offered to users, the weak reusability of agents, and the weak extensibility of the systems. In this paper, a framework for multi-agent systems called the open agent architecture (OAA), which reduces such problems, is discussed. The OAA is designed to handle complex requests that involve multiple agents. In some cases of complex requests from users, the components of the requests do not directly correspond to the capabilities of various application agents, and therefore, the system is required to translate the user's model of the task into the system's model before apportioning subtasks to the agents. To maximize users' efficiency in generating complex requests of this type, the OAA offers an intelligent multi-modal user interface agent which supports a natural language interface with a mix of spoken language, handwriting, and gesture. The effectiveness of the OAA environment including the intelligent distributed multi-modal interface has been observed in our development of several practical multi-agent systems.

Dissertations / Theses on the topic "Natural user interfaces"

1

Jacobs, Gershwin. "User experience guidelines for mobile natural user interfaces: a case study of physically disabled users." Thesis, Nelson Mandela Metropolitan University, 2017. http://hdl.handle.net/10948/17547.

Abstract:
Motor impaired people are faced with many challenges, one being the lack of integration into certain spheres of society. Access to information is seen as a major issue for the motor impaired, since most forms of interaction or interactive devices are not suited to their needs. People with motor impairments, like the rest of the population, are increasingly using mobile phones. As a result of the current devices and methods used for interaction with content on mobile phones, various factors prohibit a pleasant experience for users with motor impairments. To counter these factors, this study recognizes the need to implement better suited methods of interaction and navigation to improve accessibility, usability and user experience for motor impaired users. The objective of the study was to gain an understanding of the nature of motor impairments and the challenges that this group of people face when using mobile phones. Once this was determined, a solution to address this problem was found in the form of natural user interfaces. In order to gain a better understanding of this technology, various forms of NUIs and their benefits were studied by the researcher in order to determine how this technology can be implemented to meet the needs of motor impaired people. To test this theory, the Samsung Galaxy S5 was selected as the NUI device for the study. It must be noted that this study started in 2013, and the Galaxy S5 was the latest device claiming to improve interaction for disabled people at the time. This device was used in a case study that made use of various data collection methods, including participant interviews. Various motor impaired participants were requested to perform predefined tasks on the device, along with completing a set of user experience questionnaires.
Based on the results of the study, it was found that interaction with mobile phones is an issue for people with motor impairments and that alternative methods of interaction need to be implemented. These results contributed to the final output of this study, namely a set of user experience guidelines for the design of mobile human computer interaction for motor impaired users.
2

Manresa, Yee Cristina Suemay. "Advanced and natural interaction system for motion-impaired users." Doctoral thesis, Universitat de les Illes Balears, 2009. http://hdl.handle.net/10803/9412.

Abstract:
Human-computer interaction is an important area that searches for better and more comfortable systems to promote communication between humans and machines. Vision-based interfaces can offer a more natural and appealing way of communication. Moreover, it can help in the e-accessibility component of the e-inclusion. The aim is to develop a usable system, that is, the end-user must consider the use of this device effective, efficient and satisfactory.
The research's main contribution is SINA, a hands-free interface based on computer vision techniques for motion impaired users. This interface does not require the user to use his upper body limbs, as only nose motion is considered. Besides the technical aspect, user's satisfaction when using an interface is a critical issue. The approach that we have adopted is to integrate usability evaluation at relevant points of the software development.
3

De, La Cruz Laureano Eliel Josue. "Technology integration in the architectural schematic design phase: understanding the factors that affect users' acceptance." Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17841.

Abstract:
Tangible user interfaces and augmented reality are just some of the emerging technologies which are blending into our everyday lives. The number of systems and devices which use these technologies is quickly increasing, yet little has been done towards investigating the connection between architectural designers' preferences, the factors that determine their use of technology, and ways to integrate technology in the design process. This research identified and studied factors which affect users' acceptance of technology, as well as its integration in the schematic phase of the architectural design process. The schematic, or conceptual, phase is where designers use rough sketches to create a design scheme which seeks to define the general scope of the project. I used a practice-based design research method to gather data from studies to improve technology adoption, which in turn increased productivity in the design process. We found that technology in the design process supports the improvement of design intent communication and designers' productivity. We found a relationship between the age group of architects and how they perceive a technology's usefulness, which in turn affects how a new technology is accepted. Our findings show that for future sketching interfaces to increase users' acceptance, developers should focus on the pragmatic and hedonic qualities of the proposed interface. We found that participants prefer digitizing devices which provide the feel of paper and pen while simultaneously offering digital versatility. Our findings provide recommendations and insight for better integration of digitizing technologies in the design process, using the field of architecture as the study context. We suggest that the data from our research can be used to improve the development of the next generation of digitizing devices.
4

Di, Tore Stefano. "Il corpo aumentato: le interfacce naturali come strategia semplesse di interazione uomo-macchina. Implicazioni didattiche e linee di ricerca." Doctoral thesis, Universita degli studi di Salerno, 2013. http://hdl.handle.net/10556/1329.

Abstract:
2011 - 2012
Over recent decades, the concepts of body and corporeality have received attention that has made them a theoretical meeting (and battle) ground for different fields of knowledge and different lines of scientific research. From philosophy to medicine, from neuroscience to anthropology, from law to pedagogy, the disciplines that take the human being as their object of inquiry have all claimed, each from its own epistemic standpoint, the centrality of the body. In studies of cognition and learning, pressures from different directions and traditions have undermined a view that postulated an idea of abstract knowledge, based on formal rules, independent of both biological aspects and the sociocultural context; consequently, adjectives such as "situated," "embodied," "social," and "distributed" have begun to accompany the concept of cognition in the scientific literature. This new perspective no longer regards the body as a mere "laborer of thought" (M. Sibilio, 2002), but recognizes that "most knowledge, especially vital knowledge, is expressed in the very structure of the body" (Longo, 1995), which is no longer considered a simple mediator between our brain and external reality, but the "main device through which, by realizing experiences, we develop learning and produce knowledge" (Rivoltella, 2012). In this view, "abstraction and generalizations can usefully produce learning only if they have been built from the bodily experience of the world" (Rivoltella, 2012), and the body thus becomes a "knowledge machine" (F. Varela, 1990). The role of technology, in its broadest sense, is not alien to the processes that brought about this reversal of perspectives. The idea that technologies are not neutral with respect to the forms of knowledge production is certainly not new.
Still, an acceleration must be noted, a dizzying quantitative increase: the technological pressures on the idea of the body and the idea of knowledge are numerous and incessant, and the object of reflection from many quarters. Several lines overlap here almost inextricably: the idea of media as extensions of human faculties (McLuhan, 2001); the "site of innovation and extension of the technologies of power" (Chignola, 2007) identified by the concept of biopolitics in Foucault's sense; the explicit (if obsolete) analogy between mind and computer postulated by cognitivism; the design and evolution of intelligent prostheses; the design of brain-computer interfaces. These lines help shift and blur the boundaries of the human body: the machine extends the faculties of the subject well beyond what communication theorists hypothesized, breaches its integrity, and redefines its identity. In the field of education, up to the 1980s the technology/learning relationship belonged to a mostly behaviorist and then cognitivist tradition, in the wake of Human Information Processing (H.I.P.), focusing on the computer-as-instructor and conveying the idea of knowledge as a mirroring of reality. "It is during the 1980s that the signs of dissatisfaction with this theoretical framework grow ever stronger. That particular solidarity between the model of knowledge (knowledge as acquisition and processing of information), the didactic and learning model (sequential-curricular), and the technological model (computer as instructor)" (Calvani, 1998) begins to crumble, and the concepts of body and technology find common ground. This common ground, initially confined to the field of disability, has grown geometrically in recent years; the idea of technologies of the mind, still cognitivist in character, is replaced by the idea of technologies of the mind/body.
"Technologies of the pure mind are born ready-made from the programmer/analyst; those of the mind/body 'come into the world': the technologist merely creates the initial conditions of a process of development, learning, and evolution" (Calvani, 1998). From a strictly technological point of view, the spread of natural interfaces, based on devices that restore to Human-Computer Interaction the natural paradigms of human interaction (sound, voice, touch, movement), breaks the bottleneck of graphical interfaces: interaction no longer happens "through the mirror" (Carroll, 2012) of the screen, but within the subject's "perceptual bubble," in the digitized space surrounding the user. Digital learning environments gradually abandon the flattening onto the Cartesian plane, which forces reality into an unnatural dimension and limits interaction to the eye-hand pair alone, and expand into three-dimensional space, grounding interaction in the whole body, with cognitive implications whose impact has yet to be explored. Natural User Interfaces and gesture-recognition technologies prefigure the imminent scenario, in educational practice, of the convergence between the centrality of the body and the technological dimension: the Augmented Body understood as body/interface in Augmented Reality and Augmented Learning. With these premises, the present work intends to present, in its first part, a conceptual framework for situating the "augmented body" within the body-learning-technology perimeter and, in its second part, the experiments conducted on the basis of this framework.
Accordingly, the first chapter offers a reflection on the global scenario of the augmented body, sketching (in broad strokes) the picture of the body as a knowledge machine in its relationship with technologies and educational activity, through an examination of the non-neutral role of technologies in the forms of knowledge production and of the concepts of cognitive ergonomics and didactic ergonomics. The second chapter consequently addresses the epistemological dimension, framing this scenario within the evolution of the concept of complexity, which has constituted the (meta)theoretical perspective of the most recent reflection in education, and introducing the concept of simplexity (A. Berthoz, 2011). Berthoz's elaboration offers not so much a finished, paradigmatic theoretical framework as a set of conceptual tools providing an epistemological key for solving, in educational research, the body-learning-technology equation. Through the lens of the Simplifying Principles for a Complex World, the concepts of space, environment (natural and digital), knowledge, and learning seem to find a harmonious and functional place within the "Human Brain Projects upon the World" (A. Berthoz, 2009). "The hypothesis here is twofold. The first is that the mental tools elaborated over the course of evolution to solve the many problems posed by moving through space have also been used for the highest cognitive functions: memory and reasoning, the relationship with others, and even creativity. The second hypothesis is that the mental mechanisms devoted to spatial processing make it possible to simplify numerous other problems posed to living organisms." (A. Berthoz, 2011).
The third chapter introduces natural interfaces as simplifying principles in human-machine interaction, which allow the digitization of the user space and, consequently, restore the spatial dimension to interfaces, in the sense defined in the second chapter, effectively determining a digital umwelt that does not replace but augments and extends the natural umwelt. The fourth chapter presents the series of experiments on the design of NUI-based educational software, describing in detail the three lines of work undertaken, autonomous yet interconnected, and presenting their theoretical framework, their design and development methodology, their strictly technological choices, and the results of the tests carried out in schools. Finally, the conclusions comment on the experimental path, from the definition of the scenario to the elaboration of the conceptual framework, and from the design and development of the applications to the collection of data. Overall, the work, while complete (from the initial idea to the discussion of the experimental data), is anything but concluded: it constitutes the first act of a broader research effort and represents, in this sense, the experimental basis demonstrating the soundness of the theoretical assumptions, offered as a starting point for subsequent experimentation. [by the author]
XI n.s.
5

Janis, Sean Patrick. "Interactive natural user interfaces /." Online version of thesis, 2010. http://hdl.handle.net/1850/12267.

6

Mavridis, Avraam. "Navigation in 2.5D spaces using Natural User Interfaces." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-116482.

Abstract:
Natural User Interfaces (NUI) are system interfaces that aim to make human-computer interaction more "natural". Both the academic and industry sectors have for years developed new ways to interact with machines. With the development of these interaction techniques to support more and more complex communication between a human and a machine, new challenges arise. The theory part of this work describes the challenges of developing NUIs from the perspective of the developers and designers of those systems, and also discusses methods of evaluating the system under development. In addition, for the aim of the thesis, a prototype video game has been developed in which three different interaction methods are encapsulated. The goal of the prototype was to explore possible NUIs that can be used in 2.5D spaces. The interaction techniques that have been developed were evaluated by a small group of users using a questionnaire, and the results are presented at the end of this document.
7

Martín, San José Juan Fernando. "USING NATURAL USER INTERFACES TO SUPPORT LEARNING ENVIRONMENTS." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/55845.

Abstract:
[EN] Considering the importance of games and new technologies for learning, in this thesis, two different systems that use Natural User Interfaces (NUI) for learning about a period of history were designed and developed. One of these systems uses autostereoscopic visualization, which lets the children see themselves as a background in the game and renders the elements with a 3D sensation without the need to wear special glasses or other devices. The other system uses frontal projection over a large-size tabletop display for visualization. The two systems have been developed from scratch. A total of five studies were carried out to determine the efficacy of games with NUI interaction with regard to acquiring knowledge, ease of use, satisfaction, fun and engagement, and their influence on children. In the first study, a comparison of the autostereoscopic system with the frontal projection system was carried out. 162 children from 8 to 11 years old participated in the study. We observed that the different characteristics of the systems did not influence the children's acquired knowledge, engagement, or satisfaction; we also observed that the systems are especially suitable for boys and older children (9-11 years old). The children perceived depth with the autostereoscopic system. The children considered the two systems easy to use. However, they found the frontal projection to be easier to use. A second study was performed to determine the mode in which the children learn more about the topic of the game. The two modes were the collaborative mode, where the children played with the game in pairs; and the individual mode, where the children played with the game solo. 46 children from 7 to 10 years old participated in this study. We observed that there were statistically significant differences between playing with the game in the two modes.
The children who played with the game in pairs in the collaborative mode got better knowledge scores than children who played with the game individually. A third study comparing traditional learning with a collaborative learning method (in pairs and in large groups) using the game was carried out. 100 children from 8 to 10 years old participated in this study. The results are in line with the second study. The children obtained higher scores when collaborating in large groups or in pairs than when attending a traditional class. There were no statistically significant differences between playing in large groups and playing in pairs. For personalized learning, a Free Learning Itinerary has been included, where the children can decide how to direct the flow of their own learning process. For comparison, a Linear Learning Itinerary has also been included, where the children follow a determined learning flow. A fourth study to compare the two different learning itineraries was carried out. 29 children from 8 to 9 years old participated in this fourth study. The results showed that there were no statistically significant differences between the two learning itineraries. Regarding online formative assessment and multiple-choice questions, questionnaires of this kind usually present a question and several possible answers, of which the student must select only one. It is very common for the answers to be just text. However, images could also be used. We carried out a fifth study to determine whether an added image that represents/defines an object helps the children to choose the correct answer. 94 children from 7 to 8 years old participated in the study. The children who filled out the questionnaires with images obtained higher scores than the children who filled out the text-only questionnaire. No statistically significant differences were found between the two questionnaire types with images.
The results from the studies suggest that games of this kind could be appropriate educational games, and that autostereoscopy is a technology to exploit in their development.
Martín San José, JF. (2015). USING NATURAL USER INTERFACES TO SUPPORT LEARNING ENVIRONMENTS [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/55845
THESIS
APA, Harvard, Vancouver, ISO, and other styles
8

Saber, Tehrani Daniel, and Lemon Samuel Johansson. "Natural and Assistive Driving Simulator User Interfaces for CARLA." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293836.

Full text
Abstract:
As autonomous vehicles get closer to commercial rollout, the challenges for the developers of their software are becoming more complex. One challenge the developers are facing is the interaction between humans and autonomous vehicles in traffic. Such situations require a huge amount of data in order to design and proof-test autonomous systems that can handle complex interactions with humans. Such data cannot be collected in real traffic situations without compromising the safety of the human counterparts; therefore, simulations will be necessary. Since human driving behavior is hard to predict, these simulations need human interaction in order to yield valid data on human behavior. The purpose of this project is to develop a driving interface and then evaluate the user experience in an experiment. To do this, we have designed and implemented steering, braking, and acceleration on a user interface for a simulator used in autonomous driving research called Car Learning to Act (CARLA) at the Smart Mobility Lab (SML) at KTH. We have implemented two driving simulator user interfaces with different levels of information feedback to the user. To evaluate the developed user interfaces, a survey was designed to measure how intuitive the driving experience was while also comparing it to the original setup at SML. The survey showed that the driving experience was more intuitive with the two developed user interfaces and that 60% would feel comfortable using the new systems on a real vehicle in traffic.
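As a rough illustration of one piece of such a driving interface (not the authors' implementation), raw steering-wheel and pedal readings can be normalized into the [-1, 1] steer and [0, 1] throttle/brake ranges that CARLA's vehicle control input expects; the hardware ranges below are assumptions for the sketch:

```python
# Illustrative sketch only: normalize raw steering-wheel and pedal readings
# into the control ranges a simulator such as CARLA expects.
# The raw hardware ranges (steer_max, pedal_max) are invented assumptions.

def normalize_controls(raw_steer, raw_throttle, raw_brake,
                       steer_max=450.0, pedal_max=255.0):
    """Scale raw hardware readings to simulator control ranges.

    raw_steer: wheel angle in degrees, assumed range [-steer_max, steer_max]
    raw_throttle, raw_brake: pedal readings, assumed range [0, pedal_max]
    Returns (steer, throttle, brake) clamped to [-1, 1] and [0, 1].
    """
    steer = max(-1.0, min(1.0, raw_steer / steer_max))
    throttle = max(0.0, min(1.0, raw_throttle / pedal_max))
    brake = max(0.0, min(1.0, raw_brake / pedal_max))
    return steer, throttle, brake

print(normalize_controls(225.0, 127.5, 0.0))
# -> (0.5, 0.5, 0.0)
```

The clamping matters in practice: cheap wheels and pedals can report values slightly outside their nominal range, and a simulator's control input typically rejects or misinterprets out-of-range commands.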
Bachelor's degree project in electrical engineering, 2020, KTH, Stockholm
APA, Harvard, Vancouver, ISO, and other styles
9

Chandra, Yohan. "Natural Language Interfaces to Databases." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5474/.

Full text
Abstract:
Natural language interfaces to databases (NLIDB) are systems that aim to bridge the gap between the languages used by humans and computers, and automatically translate natural language sentences to database queries. This thesis proposes a novel approach to NLIDB, using graph-based models. The system starts by collecting as much information as possible from existing databases and sentences, and transforms this information into a knowledge base for the system. Given a new question, the system will use this knowledge to analyze and translate the sentence into its corresponding database query statement. The graph-based NLIDB system uses English as the natural language, a relational database model, and SQL as the formal query language. In experiments performed with natural language questions run against a large database containing information about U.S. geography, the system showed good performance compared to the state of the art in the field.
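As a toy illustration of the kind of translation an NLIDB performs (deliberately much simpler than the graph-based method of this thesis), a keyword lexicon can map question words to schema elements and emit a SQL query; the schema, lexicon, and questions below are invented for the sketch:

```python
# Toy NLIDB sketch, NOT the thesis's graph-based system: translate a simple
# English question into SQL by looking tokens up in a keyword lexicon.
# Schema and lexicon are invented examples (loosely geography-flavored).

SCHEMA = {
    "state": ["name", "population", "area"],
    "river": ["name", "length", "traverses"],
}

# Lexicon linking question keywords to (table, column); None = whole table.
LEXICON = {
    "states": ("state", None),
    "population": ("state", "population"),
    "rivers": ("river", None),
    "length": ("river", "length"),
}

def to_sql(question: str) -> str:
    """Translate a simple question into a SELECT statement by keyword lookup."""
    tokens = question.lower().rstrip("?").split()
    table, column = None, "*"
    for tok in tokens:
        if tok in LEXICON:
            t, c = LEXICON[tok]
            table = table or t      # first table keyword wins
            if c:
                column = c          # narrow to a column if one is named
    if table is None:
        raise ValueError("no schema element recognized in question")
    return f"SELECT {column} FROM {table}"

print(to_sql("What is the population of the states?"))
# -> SELECT population FROM state
```

A real NLIDB must additionally resolve joins, conditions, and ambiguity, which is exactly where graph models over the schema and the sentence come in.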
APA, Harvard, Vancouver, ISO, and other styles
10

Potgieter, Timothy Kyle. "Using natural user interfaces to support synchronous distributed collaborative work." Thesis, Nelson Mandela Metropolitan University, 2014. http://hdl.handle.net/10948/10880.

Full text
Abstract:
Synchronous Distributed Collaborative Work (SDCW) occurs when group members work together at the same time from different places to achieve a common goal. Effective SDCW requires good communication, continuous coordination and shared information among group members. SDCW is possible because of groupware, a class of computer software systems that supports group work. Shared-workspace groupware systems are systems that provide a common workspace that aims to replicate aspects of a physical workspace shared among group members in a co-located environment. Shared-workspace groupware systems have failed to provide the same degree of coordination and awareness among distributed group members that exists in co-located groups owing to the unintuitive interaction techniques that these systems have incorporated. Natural User Interfaces (NUIs) focus on reusing natural human abilities such as touch, speech, gestures and proximity awareness to allow intuitive human-computer interaction. These interaction techniques could provide solutions to the existing issues of groupware systems by breaking down the barrier between people and technology created by the interaction techniques currently utilised. The aim of this research was to investigate how NUI interaction techniques could be used to effectively support SDCW. An architecture for such a shared-workspace groupware system was proposed and a prototype, called GroupAware, was designed and developed based on this architecture. GroupAware allows multiple users from distributed locations to simultaneously view and annotate text documents, and create graphic designs in a shared workspace. Documents are represented as visual objects that can be manipulated through touch gestures. Group coordination and awareness is maintained through document updates via immediate workspace synchronization, user action tracking via user labels and user availability identification via basic proxemic interaction.
Members can effectively communicate via audio and video conferencing. A user study was conducted to evaluate GroupAware and determine whether NUI interaction techniques effectively supported SDCW. Ten groups of three members each participated in the study. High levels of performance, user satisfaction and collaboration demonstrated that GroupAware was an effective groupware system that was easy to learn and use, and effectively supported group work in terms of communication, coordination and information sharing. Participants gave highly positive comments about the system that further supported the results. The successful implementation of GroupAware and the positive results obtained from the user evaluation provides evidence that NUI interaction techniques can effectively support SDCW.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Natural users interfaces"

1

B, Anderson Alan, and Construction Engineering Research Laboratories (U.S.), eds. LCTA users interface program users manual: Version 1.0. Champaign, Ill: US Army Corps of Engineers, Construction Engineering Research Laboratories, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sabourin, Conrad. Natural language interfaces: Bibliography. Montréal, Qué: Infolingua, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sabourin, Conrad. Natural language interfaces: Bibliography. Montréal: Infolingua Inc., 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ogiela, Marek R., and Tomasz Hachaj. Natural User Interfaces in Medical Image Analysis. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-07800-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

United States. National Aeronautics and Space Administration., ed. A natural language interface to databases. Huntsville, AL: University of Alabama in Huntsville, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Developing natural language interfaces: Processing human conversations. New York: McGraw-Hill, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dennis, Wixon, ed. Brave NUI world: Designing natural user interfaces for touch and gesture. Burlington, Mass: Morgan Kaufmann, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Practical Speech User Interface Design. Boca Raton, FL: CRC Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Blake, Joshua. Natural user interfaces in .NET: WPF 4, Surface 2, and Kinect. Greenwich, Conn: Manning, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mihalcea, Rada. Graph-based natural language processing and information retrieval. Cambridge: Cambridge University Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Natural users interfaces"

1

Guerino, Guilherme Corredato, Breno Augusto Guerra Zancan, Tatiany Xavier de Godoi, Daniela de Freitas Guilhermino Trindade, José Reinaldo Merlin, Ederson Marcos Sgarbi, Carlos Eduardo Ribeiro, and Tércio Weslley Sant’Anna de Paula Lima. "Conceptual Framework for Supporting the Creation of Virtual Museums with Focus on Natural User Interfaces." In Design, User Experience, and Usability: Users, Contexts and Case Studies, 490–502. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91806-8_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Barbieri, Luciana, Edmundo Roberto Mauro Madeira, Kleber Stroeh, and Wil M. P. van der Aalst. "Towards a Natural Language Conversational Interface for Process Mining." In Lecture Notes in Business Information Processing, 268–80. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_20.

Full text
Abstract:
Despite all the recent advances in process mining, making it accessible to non-technical users remains a challenge. In order to democratize this technology and make process mining ubiquitous, we propose a conversational interface that allows non-technical professionals to retrieve relevant information about their processes and operations by simply asking questions in their own language. In this work, we propose a reference architecture to support a conversational, process mining oriented interface to existing process mining tools. We combine classic natural language processing techniques (such as entity recognition and semantic parsing) with an abstract logical representation for process mining queries. We also provide a compilation of real natural language questions (aiming to form a dataset of that sort) and an implementation of the architecture that interfaces to an existing commercial tool: Everflow. Last but not least, we analyze the performance of this implementation and point out directions for future work.
APA, Harvard, Vancouver, ISO, and other styles
3

Pazos Rangel, Rodolfo A., Marco A. Aguirre, Juan J. González, and Juan Martín Carpio. "Features and Pitfalls that Users Should Seek in Natural Language Interfaces to Databases." In Studies in Computational Intelligence, 617–30. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-05170-3_44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gu, Sijia, Yue Lu, Yuwei Kong, Jiale Huang, and Weishun Xu. "Diversifying Emotional Experience by Layered Interfaces in Affective Interactive Installations." In Proceedings of the 2021 DigitalFUTURES, 221–30. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5983-6_21.

Full text
Abstract:
This paper aims to improve users' experience in affective interactive installations through the diversification of interfaces. With a logically organized hierarchical experience, diverse interfaces that take emotion data as inputs make users' emotional interaction more natural and immersive. Using facial affect detection technology, an installation with diverse input interfaces was tested in an organic formal setting. Mechanical flowers and a support structure based on the organic form were deployed as its physical output for a multitude of sensorial dimensions. With actions of the mechanical flowers, such as blooming, closing, rotating, glowing and blinking, a layered experiential sequence was created and the atmosphere of the installation was evaluated to be more engaging. In this way, the layered complexity of information was transferred to users' immersive emotional experience. We believe that the practices in this work can contribute to deeper emotional engagement with users and add new layers of emotional interactivity.
APA, Harvard, Vancouver, ISO, and other styles
5

Kaufmann, Esther, and Abraham Bernstein. "How Useful Are Natural Language Interfaces to the Semantic Web for Casual End-Users?" In The Semantic Web, 281–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-76298-0_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Câmara, António. "Natural User Interfaces." In Human-Computer Interaction – INTERACT 2011, 1. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23774-4_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kientz, Julie A., Matthew S. Goodwin, Gillian R. Hayes, and Gregory D. Abowd. "Natural User Interfaces." In Interactive Technologies for Autism, 93–103. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-031-01595-3_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kientz, Julie A., Gillian R. Hayes, Matthew S. Goodwin, Mirko Gelsomini, and Gregory D. Abowd. "Natural User Interfaces." In Interactive Technologies and Autism, Second Edition, 119–31. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01604-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Calle, Javier, Paloma Martínez, David del Valle, and Dolores Cuadra. "Towards the Achievement of Natural Interaction." In Engineering the User Interface, 1–19. London: Springer London, 2008. http://dx.doi.org/10.1007/978-1-84800-136-7_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

D’Amico, Gianpaolo, Alberto Del Bimbo, Fabrizio Dini, Lea Landucci, and Nicola Torpei. "Natural Human–Computer Interaction." In Multimedia Interaction and Intelligent User Interfaces, 85–106. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84996-507-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Natural users interfaces"

1

De Bellis, Mauro, Paul Phamduy, and Maurizio Porfiri. "A Natural User Interface to Drive a Robotic Fish." In ASME 2015 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/dscc2015-9749.

Full text
Abstract:
Interactive control modes for robotic-fish-based informal science learning activities have been shown to increase user interest in STEM careers. This study explores the use of natural user interfaces to engage users in an interactive activity and excite them about the possibility of a robotics career. In this work, we propose a novel natural user interface platform for enhancing participant interaction by controlling a robotic fish in a set of tasks. Specifically, we develop and characterize a new platform, which utilizes a Microsoft Kinect and an ad-hoc communication protocol. Preliminary studies are conducted to assess the usability of the platform.
APA, Harvard, Vancouver, ISO, and other styles
2

Sanchez, J. Alfredo, Soraia Prietch, and Josue I. Cruz-Cortez. "Natural Sign Language Interfaces for Deaf Users: Rationale and Design Guidelines." In 2022 IEEE Mexican International Conference on Computer Science (ENC). IEEE, 2022. http://dx.doi.org/10.1109/enc56672.2022.9882905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fernandes, Vinicius B. P., Jared A. Frank, and Vikram Kapila. "A Wearable Interface for Intuitive Control of Robotic Manipulators Without User Training." In ASME 2014 12th Biennial Conference on Engineering Systems Design and Analysis. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/esda2014-20128.

Full text
Abstract:
This paper describes the development of a wearable interface that exploits the user’s natural arm movements to intuitively control a robotic manipulator. The design is intended to alleviate the time and effort spent in operating the robotic manipulator, regardless of the age and technological experience of the user. The interface is made to be low-cost, comfortably worn, and easy to put on and remove. Kinematic models of human and robot arms are used to produce a natural mapping from the user’s arm movements to the commanded movements of the robotic manipulator. An experiment is conducted with 30 participants of varied ages and experience to assess the usability of the wearable interface. Each of the participants is assigned to perform a pick and place task using two of three different interfaces (the wearable interface, a game controller, and a mobile interface running on a tablet computer) for a total of 60 trials. The results of the study show that the wearable interface is easier to learn compared to the alternative interfaces and is chosen as the preferred interface by the participants. Performance data shows that the users complete the pick and place task faster with the wearable interface than with the alternative interfaces.
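A minimal sketch of the "natural mapping" idea (with hypothetical joint names and limits, not the kinematic models used in the paper): each tracked human joint angle is clamped into the corresponding robot joint's range before being issued as a command:

```python
import math

# Hypothetical sketch: map tracked human arm joint angles to robot
# manipulator joint commands. Joint names and limits are invented for
# illustration and do not come from the paper's kinematic models.

ROBOT_LIMITS = {  # assumed per-joint range of the manipulator, in radians
    "shoulder": (-math.pi / 2, math.pi / 2),
    "elbow": (0.0, 2.4),
    "wrist": (-1.5, 1.5),
}

def map_arm_to_robot(human_angles):
    """Clamp each tracked human joint angle into the robot's joint range."""
    commands = {}
    for joint, angle in human_angles.items():
        lo, hi = ROBOT_LIMITS[joint]
        commands[joint] = max(lo, min(hi, angle))
    return commands

pose = {"shoulder": 0.3, "elbow": 3.0, "wrist": -2.0}
print(map_arm_to_robot(pose))
# elbow is clamped to 2.4 and wrist to -1.5; shoulder passes through
```

Because the human arm's range of motion generally exceeds the robot's, such a direct angle-for-angle mapping has to saturate at the robot's limits; a fuller mapping would also rescale link lengths and workspace.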
APA, Harvard, Vancouver, ISO, and other styles
4

Zhiqiang, Zhao, Aby Varghese, Chua Wei Quan, and Prabhu Vinayak Ashok. "Natural-Language Chat and Control HMI for Manufacturing Shopfloor." In ASME 2019 14th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/msec2019-2702.

Full text
Abstract:
The ability to interact instantly and successfully among technical engineers, management teams and machines is vital to the productivity and efficiency of the manufacturing shopfloor. A chat messenger on a smart mobile device has obvious advantages for shopfloor management in group communication, instant notification and remote control. This paper presents a natural-language chat and control HMI for the manufacturing shopfloor. The system realizes a smooth two-way communication between users and shopfloor machines. Users can access just-in-time information, receive instant notifications, and remotely control shopfloor machines. All relevant parties can communicate over shopfloor matters in a chat group. The system comprises four core modules: Chat Messenger, Chat Service Engine, Control & Communication Engine and Local Command Service. Chat Messenger provides the chat user interface and group management, dealing with end-users' enquiries and notifications via natural language. Chat Service Engine and Control & Communication Engine are two cloud-based service modules, which process questionnaire logic and transmit relevant commands and data bi-directionally between Chat Messenger and Local Command Service. Local Command Service is a local service terminal that implements the interfaces and protocols directly interacting with shopfloor machines. It processes the requests and commands from Chat Messenger to shopfloor machines. It also checks real-time machine anomalies and automatically generates corresponding notifications. The system has been implemented in the advanced manufacturing shopfloor at Nanyang Polytechnic. The results and validation show an improvement in manufacturing shopfloor efficiency.
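A hypothetical sketch of the routing step such a system needs (the command patterns and machine command vocabulary are invented, not taken from the paper): match an incoming chat message against simple patterns and emit a machine command string:

```python
import re

# Hypothetical sketch of routing a natural-language chat message to a
# shopfloor machine command, loosely inspired by the chat-and-control HMI
# described above. Patterns and the command vocabulary are invented.

RULES = [
    (r"\bstatus of (\w+)\b", lambda m: f"QUERY_STATUS {m.group(1)}"),
    (r"\bstop (\w+)\b", lambda m: f"STOP {m.group(1)}"),
    (r"\bstart (\w+)\b", lambda m: f"START {m.group(1)}"),
]

def route_command(message):
    """Return the first machine command whose pattern matches the message."""
    text = message.lower()
    for pattern, action in RULES:
        m = re.search(pattern, text)
        if m:
            return action(m)
    return "UNKNOWN"  # fall back; a real system would ask a follow-up question

print(route_command("Please show the status of cnc3"))
# -> QUERY_STATUS cnc3
```

In the architecture described above, this kind of matching would live in the cloud-side service engine, with the resulting command forwarded to the local terminal that actually talks to the machine.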
APA, Harvard, Vancouver, ISO, and other styles
5

MacAllister, Anastacia, Eliot Winer, Tsung-Pin Yeh, Daniel Seal, and Grant Degenhardt. "A Natural User Interface for Immersive Design Review." In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34633.

Full text
Abstract:
As markets demand engineered products faster, waiting on the cyclical design processes of the past is not an option. Instead, industry is turning to concurrent design and interdisciplinary teams. When these teams collaborate, engineering CAD tools play a vital role in conceptualizing and validating designs. These tools require significant user investment to master, due to challenging interfaces and an overabundance of features. These challenges often prohibit team members from using these tools for exploring alternatives. This paper presents a method allowing users to interact with a design using intuitive gestures and head tracking, all while keeping the model in a CAD format. Specifically, Siemens’ Teamcenter® Lifecycle Visualization Mockup (Mockup) was used to display the design geometry while modifications were made through a set of gestures captured by a Microsoft Kinect™ in real time. This proof of concept program allowed a user to rotate the scene, activate Mockup’s immersive menu, move the immersive wand, and manipulate the view based on head position. The result is an immersive user-friendly low cost platform for interdisciplinary design review.
APA, Harvard, Vancouver, ISO, and other styles
6

Awada, Imadalex, Irina Mocanu, and Adina magda Florea. "EXPLOITING MULTIMODAL INTERFACES IN ELEARNING SYSTEMS." In eLSE 2018. Carol I National Defence University Publishing House, 2018. http://dx.doi.org/10.12753/2066-026x-18-094.

Full text
Abstract:
Nowadays, eLearning has become an important way to transfer knowledge to learners. Thus, the amount of eLearning content (eLearning websites, digital books, tutorials) has increased significantly during the last decade. However, the majority of the available eLearning solutions interact with the user through the traditional human-computer interface, using mainly text and graphics to communicate information, which overloads the user's visual channel. Therefore, the communicated information may be missed, and the focus and interest of the user can decrease over time. Multimodal interactions make the communication between the user and the machine more natural by using the different channels that humans usually use to communicate among themselves. Integrating a multimodal interface into any eLearning module will make the delivery of information easier and more efficient. It will also ensure the delivery of the communicated information and improve the users' focus and interest during the sessions. This paper investigates the use and effects of a multimodal interface in an eLearning module, part of a complex system, that helps users optimize the benefits of the system and assists them in case of any problem. The investigation involves two different types of interfaces. The first type is a textual interface based on text and graphics, with the traditional input modalities. The second type is based on multimodal metaphors and integrates three input modalities (speech, gesture and touch) besides the traditional ones, and two output modalities: visual and phonetic.
We propose a testing platform for comparing the two types of interfaces, that will track different aspects in the user-machine interactions: the testing platform will track the number of the interactions in each session, the emotional status of the user, the time needed by the user to accomplish a task, the task difficulty and the obtained results of each session… All these results will be enhanced with information collected through a questionnaire that will be filled by the user after each session. In order to ensure a thorough understanding in the eLearning module, all the collected data will be used to create a user profile in order to customize the interface using user’s preferences and needs, which will optimize the experience of the user and increase the quality of life.
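The per-session metrics listed above (interaction counts, task time, difficulty, results) lend themselves to a simple record structure. The sketch below is an illustrative data model for such a comparison, not the authors' platform; all field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SessionRecord:
    interface: str       # "textual" or "multimodal"
    interactions: int    # number of user-machine interactions in the session
    task_seconds: float  # time needed to accomplish the task
    difficulty: int      # task difficulty rating, e.g. 1 (easy) to 5 (hard)
    score: float         # result obtained in the session

def mean_seconds_per_interaction(sessions, interface):
    """Average task time per interaction for one interface type --
    a simple efficiency measure for comparing the two interfaces."""
    rows = [s for s in sessions if s.interface == interface and s.interactions]
    if not rows:
        return None
    return sum(s.task_seconds / s.interactions for s in rows) / len(rows)
```

Records like these could then be joined with the post-session questionnaire answers to build the per-user profile the abstract describes.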
7

Awada, Imadalex, Mihaela Apostu, Irina Mocanu, Adina Magda Florea, and Andra Codreanu. "AN ADAPTIVE MULTIMODAL INTERFACE TO IMPROVE ELDERLY PEOPLE'S REHABILITATION EXERCISES." In eLSE 2017. Carol I National Defence University Publishing House, 2017. http://dx.doi.org/10.12753/2066-026x-17-092.

Abstract:
Over the last two decades, technology has evolved rapidly, transforming every aspect of life, including health care and human-machine interaction. However, the benefits of this progress have been limited for elderly users, since traditional human-machine interfaces have always been a barrier between them and any new device. Compared to those interfaces, multimodal interfaces offer a more natural way of interacting with machines, coping with some inherent difficulties of elderly people. The paper presents a system designed to assist people with special needs, such as elderly people or people recovering from specific diseases, in performing rehabilitation exercises through a multimodal interface. The system presents to the user a set of exercises prescribed by the doctor or caregiver, tracks the user's progress, informs the user about his or her performance and mistakes, and changes exercise levels depending on the user's physical condition and results. Furthermore, the system gives advice to optimize the results of the exercise. To be easily accessible to elderly people, the interface is a key factor of the system. The supervision of exercises is achieved by means of a Kinect camera, with successive images of the user's exercises analyzed automatically. Feedback is given to the user either through messages displayed on the screen or through voice output. The user is represented on the screen by an avatar that mirrors the user's movements during the exercise. The user can interact with the system by giving simple voice commands or by using the touch facilities of the screen, which allows interaction without interrupting the exercise and in the most natural way possible. The interface can be adapted to user preferences by configuring the type and color of fonts, the arrangement of icons on the screen, and the aspect of the display.
After repeated interactions with the system, the icons corresponding to the most frequently used commands, whether given by touch or by voice, become bigger on the screen or better placed. We claim that our proposed approach will help the elderly perform their rehabilitation exercises in their preferred environment with a degree of independence, and will improve the efficiency of the exercises and their well-being.
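The frequency-based icon adaptation described above could be realized with a rule as simple as the following sketch. The scaling formula and bounds are illustrative assumptions, not the authors' algorithm.

```python
def icon_scale(use_count, max_count, base=1.0, boost=0.5):
    """Scale an icon between `base` and `base + boost` in proportion to
    how often its command was used relative to the most-used command."""
    if max_count <= 0:
        return base  # no usage data yet: keep every icon at the base size
    return base + boost * min(use_count, max_count) / max_count
```

Under these assumed parameters, the most-used command's icon renders at 1.5x its base size while a never-used icon stays at 1.0x, giving the gradual enlargement the abstract describes.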
8

Murugappan, Sundar, and Karthik Ramani. "FEAsy: A Sketch-Based Interface Integrating Structural Analysis in Early Design." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87727.

Abstract:
The potential advantages of freehand sketches have been widely recognized and exploited in many fields, especially in engineering design and analysis, mainly because freehand sketches are an efficient and natural way for users to visually communicate ideas. However, due to a lack of fundamental techniques for understanding them, sketch-based interfaces have not yet evolved as the preferred computing platform over traditional menu-based tools. In this paper, we address the specific challenge of transforming informal and ambiguous freehand inputs into more formalized and structured representations. We present a domain-independent, multi-stroke, multi-primitive beautification method which detects and uses the spatial relationships implied in the sketches. Spatial relationships are represented as geometric constraints and satisfied by a geometric constraint solver. To demonstrate the utility of this technique, and also to build a natural working environment for structural analysis in early design, we have developed FEAsy (an acronym for Finite Element Analysis made easy), as shown in Fig. 1. This tool allows users to transform, simulate, and analyze their finite element models quickly and easily through freehand sketching, just as they would draw on paper. Further, we have also developed simple, domain-specific rule-based algorithms for recognizing commonly used symbols and for understanding the different contexts in finite element modeling. Finally, we illustrate the proposed approach with a few examples.
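Beautification of the kind described often begins with simple geometric snapping before a full constraint solver is invoked. The rule below is a minimal, hedged illustration of one such constraint (axis alignment of a stroke's endpoints); it is not FEAsy's actual method, and the tolerance value is an assumption.

```python
import math

def snap_segment(p0, p1, tol_deg=10.0):
    """Snap a sketched segment to horizontal or vertical when its angle
    is within tol_deg of either axis; otherwise return it unchanged."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    ang = math.degrees(math.atan2(dy, dx)) % 180.0
    if ang < tol_deg or ang > 180.0 - tol_deg:   # nearly horizontal
        return p0, (p1[0], p0[1])
    if abs(ang - 90.0) < tol_deg:                # nearly vertical
        return p0, (p0[0], p1[1])
    return p0, p1
```

A multi-stroke method like the paper's would instead collect such implied relationships (parallelism, perpendicularity, coincidence) as constraints over all strokes and hand them to a solver, so that satisfying one constraint cannot break another.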
9

Seitz, Roger, Mark Freshley, Mark Williamson, Paul Dixon, Kurt Gerdes, Yvette T. Collazo, and Susan Hubbard. "Identification and Implementation of End-User Needs During Development of a State-of-the-Art Modeling Toolset." In ASME 2011 14th International Conference on Environmental Remediation and Radioactive Waste Management. ASMEDC, 2011. http://dx.doi.org/10.1115/icem2011-59069.

Abstract:
The U.S. Department of Energy (US DOE) Office of Environmental Management, Technology Innovation and Development is supporting a multi-National Laboratory effort to develop the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is an emerging state-of-the-art scientific approach and software infrastructure for understanding and predicting contaminant fate and transport in natural and engineered systems. These modular and open-source high performance computing tools and user interfaces will facilitate integrated approaches that enable standardized assessments of performance and risk for EM cleanup and closure decisions. The ASCEM team recognized that engaging end-users in the ASCEM development process would lead to enhanced development and implementation of the ASCEM toolsets in the user community. End-user involvement in ASCEM covers a broad spectrum of perspectives, including: performance assessment (PA) and risk assessment practitioners, research scientists, decision-makers, oversight personnel, and regulators engaged in the US DOE cleanup mission. End-users are primarily engaged in ASCEM via the ASCEM User Steering Committee (USC) and the ‘user needs interface’ task. Future plans also include user involvement in demonstrations of the ASCEM tools. This paper will describe the details of how end users have been engaged in the ASCEM program and will demonstrate how this involvement has strengthened both the tool development and community confidence. ASCEM tools requested by end-users specifically target modeling challenges associated with US DOE cleanup activities. The demonstration activities involve application of ASCEM tools and capabilities to representative problems at DOE sites. Selected results from the ASCEM Phase 1 demonstrations are discussed to illustrate how capabilities requested by end-users were implemented in prototype versions of the ASCEM tool.
10

Cramariuc, Gabriel, and Stefan Gheorghe Pentiuc. "OBSERVING MOTOR DEVELOPMENT OF PRESCHOOL CHILDREN USING DEPTH CAMERAS." In eLSE 2015. Carol I National Defence University Publishing House, 2015. http://dx.doi.org/10.12753/2066-026x-15-076.

Abstract:
In this article we analyze the 3D interaction of children (using whole-body gestures) with depth cameras (i.e., the Microsoft Kinect). It should be noted that few research studies in the academic literature investigate the whole-body interactions that children perform with gesture acquisition systems based on the Kinect sensor, which translates into a lack of design recommendations for prototypes involving this type of interaction for children. Natural interfaces, such as those enabled by the Kinect sensor, tend to be intuitive and to require little knowledge from users. Gestures commonly preferred by users are likely to be easier to perform than gestures proposed by designers, which often require sustained practice from the user. There are few recommendations on children's gesture interaction with a depth camera such as the Microsoft Kinect. As the market for Kinect games for children continues to grow, there is still little information about the ways in which young users interact with these interfaces. Children's performance in executing gestural commands may differ from adults', and children are a unique group of users that require special consideration. We conducted a study of children aged 3 to 6 years to identify how children perform gestures when working with interfaces that require whole-body interaction, faced with various tasks common in preschool activities. Gestures and body movements were videotaped, arranged by category, and coded for comparison by age, sex, and degree of prior experience with Kinect interfaces. In the same study, parents answered a questionnaire to provide data on the degree of motor development of their children. We correlated the data obtained and analyzed opportunities to use Kinect-type depth cameras for the motor development of preschool children.
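Coding observations by category and comparing them across groups, as described above, amounts to a grouped frequency count. The sketch below is an illustrative tabulation helper; the group labels and gesture codes in the example are hypothetical, not data from the study.

```python
from collections import Counter

def tally_by_group(observations):
    """Count coded gesture categories per group (e.g. age band).
    `observations` is an iterable of (group, gesture_code) pairs."""
    counts = {}
    for group, code in observations:
        counts.setdefault(group, Counter())[code] += 1
    return counts
```

From such tables, per-group category distributions can be compared directly, or fed into a chi-square test if a formal comparison by age, sex, or experience is wanted.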

Reports on the topic "Natural users interfaces"

1

Clement, Michael. Engineering With Nature website user guide. Engineer Research and Development Center (U.S.), March 2022. http://dx.doi.org/10.21079/11681/43440.

Abstract:
The Engineering With Nature (EWN) program is a high-profile effort that aims to deliver cost-effective, broadly beneficial solutions to natural resource and sustainability challenges across the nation. A portion of this is accomplished through the EWN website, which features news, podcasts, articles, and more. The content on the EWN website serves to educate and inform hundreds of visitors monthly. This content has been generated and managed by EWN team members with web development experience, as it required manually editing the website HTML and staging changes on a development server. With EWN website 2.0, a new website framework (WordPress) has been implemented that will save content managers time and effort by providing a front-end user interface (UI) for uploading, staging, and approving new content, along with a visual refresh to herald the impending release of season 2 of the EWN Podcast. This document's purpose is to demonstrate the functionality of the new EWN website and provide instructional material for those managing content via the new site.
2

Hendrickson, J. J., and R. D. Williams. The Effect of Input Device on User Performance With a Menu-Based Natural Language Interface. Fort Belvoir, VA: Defense Technical Information Center, January 1988. http://dx.doi.org/10.21236/ada193095.

3

Lasko, Kristofer, and Sean Griffin. Monitoring Ecological Restoration with Imagery Tools (MERIT) : Python-based decision support tools integrated into ArcGIS for satellite and UAS image processing, analysis, and classification. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40262.

Abstract:
Monitoring the impacts of ecosystem restoration strategies requires both short-term and long-term land surface monitoring. The combined use of unmanned aerial systems (UAS) and satellite imagery enables effective landscape and natural resource management. However, processing, analyzing, and creating derivative imagery products can be time consuming, manually intensive, and cost prohibitive. In order to provide fast, accurate, and standardized UAS and satellite imagery processing, we have developed a suite of easy-to-use tools integrated into the graphical user interface (GUI) of ArcMap and ArcGIS Pro, as well as open-source solutions using NodeOpenDroneMap. We built the Monitoring Ecological Restoration with Imagery Tools (MERIT) using Python, leveraging third-party libraries and open-source software capabilities typically unavailable within ArcGIS. MERIT will save US Army Corps of Engineers (USACE) districts significant time in data acquisition, processing, and analysis by allowing a user to move from image acquisition and preprocessing to a final output for decision-making with one application. Although we designed MERIT for use in wetlands research, many tools have regional or global relevance for a variety of environmental monitoring initiatives.
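Derivative imagery products of the kind MERIT automates often start from simple band arithmetic; the Normalized Difference Vegetation Index (NDVI) is a standard example in vegetation and wetland monitoring. The snippet below is a generic NumPy illustration, not code from the MERIT toolset, and it assumes the band arrays are co-registered and equally shaped.

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalized Difference Vegetation Index from red and near-infrared
    bands; values approaching 1 indicate dense, healthy vegetation."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero
```

In a GIS workflow, the resulting array would be written back to a georeferenced raster so analysts can move from raw acquisition to a decision-ready layer, which is the kind of step MERIT wraps in a GUI tool.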