Scientific literature on the topic "Multimodal user interface"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles

Select a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Multimodal user interface."

Next to each source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the selected work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and view its abstract online when this information is included in the metadata.

Journal articles on the topic "Multimodal user interface"

1

Reeves, Leah M., Jean-Claude Martin, Michael McTear, TV Raman, Kay M. Stanney, Hui Su, Qian Ying Wang, et al. "Guidelines for multimodal user interface design." Communications of the ACM 47, no. 1 (January 1, 2004): 57. http://dx.doi.org/10.1145/962081.962106.
2

Karpov, A. A., and A. L. Ronzhin. "Information enquiry kiosk with multimodal user interface." Pattern Recognition and Image Analysis 19, no. 3 (September 2009): 546–58. http://dx.doi.org/10.1134/s1054661809030225.
3

Baker, Kirk, Ashley Mckenzie, Alan Biermann, and Gert Webelhuth. "Constraining User Response via Multimodal Dialog Interface." International Journal of Speech Technology 7, no. 4 (October 2004): 251–58. http://dx.doi.org/10.1023/b:ijst.0000037069.82313.57.
4

Ryumin, Dmitry, Ildar Kagirov, Alexandr Axyonov, Nikita Pavlyuk, Anton Saveliev, Irina Kipyatkova, Milos Zelezny, Iosif Mporas, and Alexey Karpov. "A Multimodal User Interface for an Assistive Robotic Shopping Cart." Electronics 9, no. 12 (December 8, 2020): 2093. http://dx.doi.org/10.3390/electronics9122093.

Abstract:
This paper presents the research and development of the prototype of the assistive mobile information robot (AMIR). The main features of the presented prototype are voice and gesture-based interfaces with Russian speech and sign language recognition and synthesis techniques and a high degree of robot autonomy. The AMIR prototype's aim is to be used as a robotic cart for shopping in grocery stores and/or supermarkets. Among the main topics covered in this paper are the presentation of the interface (three modalities), the single-handed gesture recognition system (based on a collected database of Russian sign language elements), as well as the technical description of the robotic platform (architecture, navigation algorithm). The use of multimodal interfaces, namely the speech and gesture modalities, makes human-robot interaction natural and intuitive, while sign language recognition allows hearing-impaired people to use this robotic cart. The AMIR prototype has promising perspectives for real usage in supermarkets, both due to its assistive capabilities and its multimodal user interface.
5

Goyzueta, Denilson V., Joseph Guevara M., Andrés Montoya A., Erasmo Sulla E., Yuri Lester S., Pari L., and Elvis Supo C. "Analysis of a User Interface Based on Multimodal Interaction to Control a Robotic Arm for EOD Applications." Electronics 11, no. 11 (May 25, 2022): 1690. http://dx.doi.org/10.3390/electronics11111690.

Abstract:
A global human–robot interface that meets the needs of Technical Explosive Ordnance Disposal Specialists (TEDAX) for the manipulation of a robotic arm is of utmost importance to make the task of handling explosives safer and more intuitive, and also to provide high usability and efficiency. This paper aims to evaluate the performance of a multimodal system for a robotic arm that is based on a Natural User Interface (NUI) and a Graphical User Interface (GUI). The mentioned interfaces are compared to determine the best configuration for the control of the robotic arm in Explosive Ordnance Disposal (EOD) applications and to improve the user experience of TEDAX agents. Tests were conducted with the support of police agents of the Explosive Ordnance Disposal Unit-Arequipa (UDEX-AQP), who evaluated the developed interfaces to find a more intuitive system that generates the least stress load for the operator, with the result that our proposed multimodal interface presents better results compared to traditional interfaces. The evaluation of the laboratory experiments was based on measuring the workload and usability of each interface evaluated.
6

Li Deng, Kuansan Wang, A. Acero, Hsiao-Wuen Hon, J. Droppo, C. Boulis, Ye-Yi Wang, et al. "Distributed speech processing in miPad's multimodal user interface." IEEE Transactions on Speech and Audio Processing 10, no. 8 (November 2002): 605–19. http://dx.doi.org/10.1109/tsa.2002.804538.
7

Shi, Yu, Ronnie Taib, Natalie Ruiz, Eric Choi, and Fang Chen. "MULTIMODAL HUMAN-MACHINE INTERFACE AND USER COGNITIVE LOAD MEASUREMENT." IFAC Proceedings Volumes 40, no. 16 (2007): 200–205. http://dx.doi.org/10.3182/20070904-3-kr-2922.00035.
8

La Tona, Giuseppe, Antonio Petitti, Adele Lorusso, Roberto Colella, Annalisa Milella, and Giovanni Attolico. "Modular multimodal user interface for distributed ambient intelligence architectures." Internet Technology Letters 1, no. 2 (February 9, 2018): e23. http://dx.doi.org/10.1002/itl2.23.
9

Argyropoulos, Savvas, Konstantinos Moustakas, Alexey A. Karpov, Oya Aran, Dimitrios Tzovaras, Thanos Tsakiris, Giovanna Varni, and Byungjun Kwon. "Multimodal user interface for the communication of the disabled." Journal on Multimodal User Interfaces 2, no. 2 (July 15, 2008): 105–16. http://dx.doi.org/10.1007/s12193-008-0012-2.
10

Gaouar, Lamia, Abdelkrim Benamar, Olivier Le Goaer, and Frédérique Biennier. "HCIDL: Human-computer interface description language for multi-target, multimodal, plastic user interfaces." Future Computing and Informatics Journal 3, no. 1 (June 2018): 110–30. http://dx.doi.org/10.1016/j.fcij.2018.02.001.

Theses on the topic "Multimodal user interface"

1

Schneider, Thomas W. "A Voice-based Multimodal User Interface for VTQuest." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33267.

Abstract:
The original VTQuest web-based software system requires users to interact using a mouse or a keyboard, forcing the users' hands and eyes to be constantly in use while communicating with the system. This prevents the user from being able to perform other tasks which require the user's hands or eyes at the same time. This restriction on the user's ability to multitask while using VTQuest is unnecessary and has been eliminated with the creation of the VTQuest Voice web-based software system. VTQuest Voice extends the original VTQuest functionality by providing the user with a voice interface to interact with the system using the Speech Application Language Tags (SALT) technology. The voice interface provides the user with the ability to navigate through the site, submit queries, browse query results, and receive helpful hints to better utilize the voice system. Individuals with a handicap that prevents them from using their arms or hands, users who are not familiar with the mouse and keyboard style of communication, and those who have their hands preoccupied need alternative communication interfaces which do not require the use of their hands. All of these users require and benefit from a voice interface being added onto VTQuest. Through the use of the voice interface, all of the system's features can be accessed exclusively with voice and without the use of a user's hands. Using a voice interface also frees the user's eyes from being used during the process of selecting an option or link on a page, which allows the user to look at the system less frequently. VTQuest Voice is implemented and tested for operation on computers running Microsoft Windows using Microsoft Internet Explorer with the correct SALT and Adobe Scalable Vector Graphics (SVG) Viewer plug-ins installed. VTQuest Voice offers a variety of features including an extensive grammar and out-of-turn interaction, which are flexible for future growth. The grammar offers ways in which users may begin or end a query to better accommodate the variety of ways users may phrase their queries. To accommodate abbreviations of building names and alternate pronunciations of building names, the grammar also includes nicknames for the buildings. The out-of-turn interaction combines multiple steps into one spoken sentence, thereby shortening the interaction and also making the process more natural for the user. The addition of a voice interface is recommended for web applications in which a user may need to use his or her eyes and hands to multitask. Additional functionality which can be added later to VTQuest Voice is touch screen support and accessibility from cell phones, Personal Digital Assistants (PDAs), and other mobile devices.
Master of Science
2

Reeves, Leah. "OPTIMIZING THE DESIGN OF MULTIMODAL USER INTERFACES." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4130.

Abstract:
Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated with a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources. These results provide initial empirical support for validation of the overall AMMO model and a sub-set of the principle-driven multimodal design guidelines derived from it. The empirically-validated multimodal design guidelines may be applicable to a wide range of information-intensive computer-based multitasking environments.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering PhD
3

McGee, Marilyn Rose. "Investigating a multimodal solution for improving force feedback generated textures." Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274183.
4

BOLLINI, LETIZIA. "Multimodal Directing in New-Media. The design of the human-computer interface as a directorship of communication modes integrated in a global medium." Doctoral thesis, Politecnico di Milano, 2002. http://hdl.handle.net/10281/10854.

Abstract:
This research work is meant to offer a theoretical contribution to the study of Visual Design: in particular, the concept of Multimodal Directorship in New Media is approached, defined and developed. Multimodal Directorship is presented as a knowledge framework for research and experimentation on the communication languages supporting human-computer interface design. The research on Multimodal Directorship gives a theoretical ground for developing better hermeneutic and design methods. The starting point of this research is to define New Media as digital communication tools structured around hypertextual links. Many active communication channels (i.e. visual, textual, acoustic and so on) simultaneously convey information to the user's perception. No single channel acts independently: each must work as a co-operating element with the other modes of communication within a complex system. Interface design in New Media is part of an interdisciplinary context where a system of direction is identified, articulating the design of the modes and languages specific to different perceptions into a global, single, efficient medium. As stated in the cognitive psychology approach of the San Diego School, and following the metaphor of Canter's multimedia authoring software, human-computer interaction is very similar to a theatre drama. In this framework the action of the designer is analysed and the metaphor is resolved accordingly. Following this approach, the communications designer acts very similarly to a movie or theatre director coordinating all the psycho-perceptual effects. The knowledge and the know-how therefore extend outside the traditional field of composition and graphical communication, and the designer is involved in a global approach to the project. The designer is the privileged author of the intertextual script that writes the different expression modes, and the user, as co-author, activates the different communication modes during the interaction by experiencing the interface. Designers have tried to adapt, in a mimetic and naïve way, the previous experiences usual in professional practice to digital media. The expressive potential of new media has been missed and misused: only the mimesis of traditional media (i.e. printed paper) was exploited. The challenge for Multimodal Directorship is to find a new grammar and syntax to synthesize the different communication modalities. The methodology does not depend on an episodic technological development aiming at the creation of new languages to cope with technological innovation. The directorship approach should develop within an abstract, conceptual framework; it is constantly growing through interaction with other research fields such as semiotics, cognitive psychology, information technology and so on. The knowledge that comes out of design research modifies the professional process. Multimodal Directorship as a discipline of new-media design utilizes and reshapes knowledge previously produced within its own field, and constantly fosters a critical knowledge of professional practice.
5

Cross, E. Vincent. "Human coordination of robot teams: an empirical study of multimodal interface design." Auburn, Ala., 2009. http://hdl.handle.net/10415/1701.
6

Ronkainen, S. (Sami). "Designing for ultra-mobile interaction: experiences and a method." Doctoral thesis, University of Oulu, 2010. http://urn.fi/urn:isbn:9789514261794.

Abstract:
Usability methodology has matured into a well-defined, industrially relevant field with its own findings, theories, and tools, with roots in applying information technology to user interfaces ranging from control rooms to computers, and more recently to mobile communications devices. The purpose is regularly to find out the users' goals and to test whether a design fulfils the usability criteria. Properly applied, usability methods provide reliable and repeatable results, and are excellent tools in fine-tuning existing solutions. The challenges of usability methodologies are in finding new concepts and predicting their characteristics before testing, especially when it comes to the relatively young field of mobile user interfaces. Current usability methods concentrate on utilising available user-interface technologies. They do not provide means to clearly identify, e.g., the potential of auditory or haptic output, or gestural input. Consequently, these new interaction techniques are rarely used, and the long-envisioned useful multimodal user interfaces are yet to appear, despite their assumed and existing potential in mobile devices. Even the most advocated and well-known multimodal interaction concepts, such as combined manual pointing and natural language processing, have not materialised in applications. An apparent problem is the lack of a way to utilise a usage environment analysis in finding out user requirements that could be met with multimodal user interfaces. To harness the full potential of multimodality, tools to identify both little-used or unused and overloaded modalities in current usage contexts are needed. Such tools would also help in putting possibly existing novel interaction paradigms in context and pointing out possible deficiencies in them. In this thesis, a novel framework for analysing the usage environment from a user-centric perspective is presented. Based on the findings, a designer can decide between offering a set of multiple devices utilising various user-interface modalities, or a single device that offers relevant modalities, perhaps by adapting to the usage environment. Furthermore, new techniques for creating mobile user interfaces utilising various modalities are proposed. The framework has evolved from the experiences gathered from the designs of experimental and actual product-level uni- and multimodal user interface solutions for mobile devices. It has generated novel multimodal interaction and interface techniques that can be used as building blocks in system designs.
7

Almutairi, Badr. "Multimedia communication in e-government interface: a usability and user trust investigation." Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10503.

Abstract:
In the past few years, e-government has been a topic of much interest among those excited about the advent of Web technologies. Due to the growing demand for effective communication to facilitate real-time interaction between users and e-government applications, many governments are considering installing new tools in e-government portals to mitigate the problems associated with user–interface communication. This study therefore examines the use of multimodal metaphors such as audio-visual avatars in e-government interfaces, to increase users' communication performance and to reduce the information overload and lack of trust that are common with many e-government interfaces. However, only a minority of empirical studies have focused on assessing the role of audio-visual metaphors in e-government. The subject of this thesis' investigation was therefore the use of novel combinations of multimodal metaphors in the presentation of messaging content, in order to evaluate these combinations' effects on users' communication performance as well as the usability of e-government interfaces and the perception of trust. The thesis outlines research comprising three experimental phases. An initial experiment explored and compared the usability of text in the presentation of the messaging content versus recorded speech and text with graphic metaphors. The second experiment investigated two different styles of incorporating initial avatars versus the auditory channel. The third experiment examined a novel approach around the use of speaking avatars with human-like facial expressions versus speaking avatars with full-body gestures during the presentation of the messaging content, to compare usability and communication performance as well as the perception of trust. The achieved results demonstrated the usefulness of the tested metaphors to enhance e-government usability, improve the performance of communication and increase users' trust. The overall outcome was a set of empirically derived, ground-breaking guidelines for the design and use of these metaphors to generate more usable e-government interfaces.
8

Zanello, Marie-Laure. "Contribution à la connaissance sur l'utilisation et la gestion de l'interface multimodale." Université Joseph Fourier (Grenoble), 1997. http://www.theses.fr/1997GRE10139.

Abstract:
In the context of multimodal human-computer interaction, the basic assumption is that human communication is multimodal and effective, and that the same must therefore hold for this particular case of interaction. One may nevertheless ask whether the modalities that designers seek to integrate into interfaces correspond to an actual human capability, whether they can be put to use by the user, to what extent, and above all whether or not they increase the effectiveness of the interaction. The aim of this work is therefore to study, through two experiments and their comparative analysis with a third, the behaviour of potential users of multimodal interfaces. Our results show, among other things, a spontaneous and simple use of multimodality; training is therefore necessary for a high-performing and complex use of these new interaction possibilities. We were also able to observe an interdependence between the type of task to be performed and the use of one modality or another, as well as the value of multimodality as a means of overcoming blocking situations.
9

García Sánchez, Juan Carlos. "Towards a predictive interface for the specification of intervention tasks in underwater robotics." Doctoral thesis, Universitat Jaume I, 2021. http://dx.doi.org/10.6035/14101.2021.93456.

Abstract:
Robots play a critical role in our everyday lives, performing tasks as diverse as maintenance, surveillance, exploration in harsh environments, or search and rescue operations. Concerning the different environments where they operate, the submarine is one of those that has increased its activity the most. Nowadays, there are three types of robots: ROVs, AUVs and HROVs. Despite the differences in structure and capabilities, there is a problem common to all three: the human-robot interaction has various deficiencies, and the user continues to play a central role from the point of view of decision-making. This thesis is focused on research related to human-robot interaction: the use of algorithms to assist the user during the mission specification (making the user interface easy to use), the exploration of a multimodal interface and the proposal for a robot control architecture (allowing change from autonomous to teleoperated, or vice versa).
Programa de Doctorat en Informàtica
10

Husseini Orabi, Ahmed. "Multi-Modal Technology for User Interface Analysis including Mental State Detection and Eye Tracking Analysis." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36451.

Abstract:
We present a set of easy-to-use methods and tools to analyze human attention, behaviour, and physiological responses. A potential application of our work is evaluating user interfaces being used in a natural manner. Our approach is designed to be scalable and to work remotely on regular personal computers using inexpensive and noninvasive equipment. The data sources our tool processes are nonintrusive and captured from video, i.e., eye tracking and facial expressions. For video data retrieval, we use a basic webcam. We investigate combinations of observation modalities to detect and extract affective and mental states. Our tool provides a pipeline-based approach that 1) collects observational data, 2) incorporates and synchronizes the signal modalities mentioned above, 3) detects users' affective and mental state, 4) records user interaction with applications and pinpoints the parts of the screen users are looking at, and 5) analyzes and visualizes results. We describe the design, implementation, and validation of a novel multimodal signal fusion engine, Deep Temporal Credence Network (DTCN). The engine uses Deep Neural Networks to provide 1) a generative and probabilistic inference model, and 2) the ability to handle multimodal data such that its performance does not degrade due to the absence of some modalities. We report on the recognition accuracy of basic emotions for each modality. Then, we evaluate our engine in terms of effectiveness of recognizing the six basic emotions and six mental states, which are agreeing, concentrating, disagreeing, interested, thinking, and unsure. Our principal contributions include the implementation of 1) a multimodal signal fusion engine, 2) real-time recognition of affective and primary mental states from nonintrusive and inexpensive modalities, and 3) novel mental state-based visualization techniques: 3D heatmaps, 3D scanpaths, and widget heatmaps that find parts of the user interface where users are perhaps unsure, annoyed, frustrated, or satisfied.

Books on the topic "Multimodal user interface"

1

Yuen, P. C., Yuan Yan Tang, and Patrick S.-P. Wang, eds. Multimodal interface for human-machine communication. River Edge, N.J.: World Scientific, 2002.
2

Tzovaras, Dimitrios, ed. Multimodal User Interfaces. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-78345-9.
3

Tzovaras, Dimitrios, ed. Multimodal user interfaces: From signals to interaction. Berlin: Springer, 2008.
4

Littlehales, Jane Margaret. Multimodal user interfaces: An investigation and evaluation. Birmingham: University of Birmingham, 1994.
5

United Kingdom Literacy Association, ed. Beyond words: Developing children's response to multimodal text. Leicester: UKLA, 2010.
6

Maruchimōdaru intarakushon: Multimodal Interaction. Tōkyō-to Bunkyō-ku: Koronasha, 2013.
7

Bezold, Matthias. Adaptive multimodal interactive systems. New York: Springer, 2011.
8

Dybkjær, Laila, ed. Multimodal usability. Berlin: Springer, 2009.
9

Bourgeois, Paul Alan. The instrumentation of the multimodel and multilingual user interface. Monterey, Calif.: Naval Postgraduate School, 1993.
10

Immersive multimodal interactive presence. London: Springer, 2012.

Book chapters on the topic "Multimodal user interface"

1

Tsui, Kwok Ching, and Behnam Azvine. "Intelligent Multimodal User Interface." In Intelligent Systems and Soft Computing, 259–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/10720181_11.
2

López, Juan Miguel, Idoia Cearreta, Nestor Garay-Vitoria, Karmele López de Ipiña, and Andoni Beristain. "A Methodological Approach for Building Multimodal Acted Affective Databases." In Engineering the User Interface, 1–17. London: Springer London, 2008. http://dx.doi.org/10.1007/978-1-84800-136-7_12.
3

Sevillano, Xavier, Javier Melenchón, Germán Cobo, Joan Claudi Socoró, and Francesc Alías. "Audiovisual Analysis and Synthesis for Multimodal Human-Computer Interfaces." In Engineering the User Interface, 1–16. London: Springer London, 2008. http://dx.doi.org/10.1007/978-1-84800-136-7_13.
4

Ratzka, Andreas. "User Interface Patterns for Multimodal Interaction." In Lecture Notes in Computer Science, 111–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38676-3_4.
5

Kim, Laehyun, Hyunchul Cho, Sehyung Park, and Manchul Han. "A Tangible User Interface with Multimodal Feedback." In Human-Computer Interaction. HCI Intelligent Multimodal Interaction Environments, 94–103. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73110-8_11.
6

Harding, Chris, Ioannis Kakadiaris, and R. Bowen Loftin. "A Multimodal User Interface for Geoscientific Data Investigation." In Advances in Multimodal Interfaces — ICMI 2000, 615–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-40063-x_80.
7

Hietala, P., and J. Nummenmaa. "MEDUSA — A Multimodal Database User Interface and Framework Supporting User Learning and User Interface Evaluation." In Workshops in Computing, 392–405. London: Springer London, 1993. http://dx.doi.org/10.1007/978-1-4471-3423-7_22.
8

Velhinho, Luis, and Arminda Lopes. "A Framework in Support of Multimodal User Interface." In IFIP Advances in Information and Communication Technology, 175–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41145-8_15.
9

Iannizzotto, Giancarlo, Francesco La Rosa, Carlo Costanzo, and Pietro Lanzafame. "A Multimodal Perceptual User Interface for Collaborative Environments." In Image Analysis and Processing – ICIAP 2005, 115–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11553595_14.
10

Li, Yang, Zhiwei Guan, Youdi Chen, and Guozhong Dai. "Penbuilder: Platform for the Development of Pen-Based User Interface." In Advances in Multimodal Interfaces — ICMI 2000, 534–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-40063-x_70.

Conference proceedings on the topic "Multimodal user interface"

1

Sobota, B., S. Korecko, D. Bella, and M. Mattova. "Experimental multimodal user interface." In 2022 20th International Conference on Emerging eLearning Technologies and Applications (ICETA). IEEE, 2022. http://dx.doi.org/10.1109/iceta57911.2022.9974796.
2

"Data Fusion in Multimodal Interface." In Special Session on Multimodal User Interfaces, Human Computer Interfaces and Gesture Interfaces. SciTePress - Science and Technology Publications, 2013. http://dx.doi.org/10.5220/0004365903060310.
3

Ertl, Dominik. "Semi-automatic multimodal user interface generation." In the 1st ACM SIGCHI symposium. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570433.1570494.
4

Siltanen, Sanni, Mika Hakkarainen, Otto Korkalo, Tapio Salonen, Juha Saaski, Charles Woodward, Theofanis Kannetis, Manolis Perakakis, and Alexandros Potamianos. "Multimodal User Interface for Augmented Assembly." In 2007 IEEE 9th Workshop on Multimedia Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/mmsp.2007.4412822.
5

Blumendorf, Marco, Dirk Roscher, and Sahin Albayrak. "Dynamic user interface distribution for flexible multimodal interaction." In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1891903.1891930.
6

Syskov, A. M., V. I. Borisov, and V. S. Kublanov. "Intelligent multimodal user interface for telemedicine application." In 2017 25th Telecommunication Forum (TELFOR). IEEE, 2017. http://dx.doi.org/10.1109/telfor.2017.8249439.
7

Bellik, Yacine, and Daniel Teil. "A multimodal dialogue controller for multimodal user interface management system application." In INTERACT '93 and CHI '93 Conference Companion. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/259964.260124.
8

Çığ, Çağla. "Gaze-Based Proactive User Interface for Pen-Based Systems." In ICMI '14: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2663204.2666287.
9

Plano, S., and E. P. Blasch. "User performance improvement via multimodal interface fusion augmentation." In Proceedings of the Sixth International Conference of Information Fusion. IEEE, 2003. http://dx.doi.org/10.1109/icif.2003.177490.
10

Salem, Ben. "Implementing a multimodal user interface for telepresence systems." In Photonics East '99, edited by Matthew R. Stein. SPIE, 1999. http://dx.doi.org/10.1117/12.369289.

Reports of organizations on the topic "Multimodal user interface"

1

Zhang, Yongping, Wen Cheng, and Xudong Jia. Enhancement of Multimodal Traffic Safety in High-Quality Transit Areas. Mineta Transportation Institute, February 2021. http://dx.doi.org/10.31979/mti.2021.1920.

Abstract:
Numerous extant studies are dedicated to enhancing the safety of active transportation modes, but very few studies are devoted to safety analysis surrounding transit stations, which serve as an important modal interface for pedestrians and bicyclists. This study bridges the gap by developing joint models based on the multivariate conditionally autoregressive (MCAR) priors with a distance-oriented neighboring weight matrix. For this purpose, transit-station-centered data in Los Angeles County were used for model development. Feature selection relying on both random forest and correlation analyses was employed, which leads to different covariate inputs to each of the two jointed models, resulting in increased model flexibility. Utilizing an Integrated Nested Laplace Approximation (INLA) algorithm and various evaluation criteria, the results demonstrate that models with a correlation effect between pedestrians and bicyclists perform much better than the models without such an effect. The joint models also aid in identifying significant covariates contributing to the safety of each of the two active transportation modes. The research results can furnish transportation professionals with additional insights to create safer access to transit and thus promote active transportation.