Journal articles on the topic 'Multimodal user interface'
A curated list of the top 50 journal articles for research on the topic 'Multimodal user interface', with full bibliographic details for APA, MLA, Chicago, Harvard, and other citation styles.
Reeves, Leah M., Jean-Claude Martin, Michael McTear, TV Raman, Kay M. Stanney, Hui Su, Qian Ying Wang, et al. "Guidelines for multimodal user interface design." Communications of the ACM 47, no. 1 (January 1, 2004): 57. http://dx.doi.org/10.1145/962081.962106.
Karpov, A. A., and A. L. Ronzhin. "Information enquiry kiosk with multimodal user interface." Pattern Recognition and Image Analysis 19, no. 3 (September 2009): 546–58. http://dx.doi.org/10.1134/s1054661809030225.
Baker, Kirk, Ashley Mckenzie, Alan Biermann, and Gert Webelhuth. "Constraining User Response via Multimodal Dialog Interface." International Journal of Speech Technology 7, no. 4 (October 2004): 251–58. http://dx.doi.org/10.1023/b:ijst.0000037069.82313.57.
Ryumin, Dmitry, Ildar Kagirov, Alexandr Axyonov, Nikita Pavlyuk, Anton Saveliev, Irina Kipyatkova, Milos Zelezny, Iosif Mporas, and Alexey Karpov. "A Multimodal User Interface for an Assistive Robotic Shopping Cart." Electronics 9, no. 12 (December 8, 2020): 2093. http://dx.doi.org/10.3390/electronics9122093.
Goyzueta, Denilson V., Joseph Guevara M., Andrés Montoya A., Erasmo Sulla E., Yuri Lester S., Pari L., and Elvis Supo C. "Analysis of a User Interface Based on Multimodal Interaction to Control a Robotic Arm for EOD Applications." Electronics 11, no. 11 (May 25, 2022): 1690. http://dx.doi.org/10.3390/electronics11111690.
Deng, Li, Kuansan Wang, A. Acero, Hsiao-Wuen Hon, J. Droppo, C. Boulis, Ye-Yi Wang, et al. "Distributed speech processing in miPad's multimodal user interface." IEEE Transactions on Speech and Audio Processing 10, no. 8 (November 2002): 605–19. http://dx.doi.org/10.1109/tsa.2002.804538.
Shi, Yu, Ronnie Taib, Natalie Ruiz, Eric Choi, and Fang Chen. "Multimodal Human-Machine Interface and User Cognitive Load Measurement." IFAC Proceedings Volumes 40, no. 16 (2007): 200–205. http://dx.doi.org/10.3182/20070904-3-kr-2922.00035.
La Tona, Giuseppe, Antonio Petitti, Adele Lorusso, Roberto Colella, Annalisa Milella, and Giovanni Attolico. "Modular multimodal user interface for distributed ambient intelligence architectures." Internet Technology Letters 1, no. 2 (February 9, 2018): e23. http://dx.doi.org/10.1002/itl2.23.
Argyropoulos, Savvas, Konstantinos Moustakas, Alexey A. Karpov, Oya Aran, Dimitrios Tzovaras, Thanos Tsakiris, Giovanna Varni, and Byungjun Kwon. "Multimodal user interface for the communication of the disabled." Journal on Multimodal User Interfaces 2, no. 2 (July 15, 2008): 105–16. http://dx.doi.org/10.1007/s12193-008-0012-2.
Gaouar, Lamia, Abdelkrim Benamar, Olivier Le Goaer, and Frédérique Biennier. "HCIDL: Human-computer interface description language for multi-target, multimodal, plastic user interfaces." Future Computing and Informatics Journal 3, no. 1 (June 2018): 110–30. http://dx.doi.org/10.1016/j.fcij.2018.02.001.
Kim, Myeongseop, Eunjin Seong, Younkyung Jwa, Jieun Lee, and Seungjun Kim. "A Cascaded Multimodal Natural User Interface to Reduce Driver Distraction." IEEE Access 8 (2020): 112969–84. http://dx.doi.org/10.1109/access.2020.3002775.
Choi, E. H. C., R. Taib, Y. Shi, and F. Chen. "Multimodal user interface for traffic incident management in control room." IET Intelligent Transport Systems 1, no. 1 (2007): 27. http://dx.doi.org/10.1049/iet-its:20060038.
Hoffman, Donald D. "Sensory Experiences as Cryptic Symbols of a Multimodal User Interface." Activitas Nervosa Superior 52, no. 3-4 (September 2010): 95–104. http://dx.doi.org/10.1007/bf03379572.
Diaz, Carlos, and Shahram Payandeh. "Multimodal Sensing Interface for Haptic Interaction." Journal of Sensors 2017 (2017): 1–24. http://dx.doi.org/10.1155/2017/2072951.
Chandarana, Meghan, Erica L. Meszaros, Anna Trujillo, and B. Danette Allen. "Natural Language Based Multimodal Interface for UAV Mission Planning." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 68–72. http://dx.doi.org/10.1177/1541931213601483.
Lee, KueBum, SangHyeon Jin, and KwangSeok Hong. "An Implementation of Multimodal User Interface using Speech, Image and EOG." International Journal of Engineering and Industries 2, no. 2 (June 30, 2011): 76–87. http://dx.doi.org/10.4156/ijei.vol2.issue2.8.
Takebayashi, Yoichi. "Spontaneous speech dialogue system TOSBURG II – the user-centered multimodal interface." Systems and Computers in Japan 26, no. 14 (1995): 77–91. http://dx.doi.org/10.1002/scj.4690261407.
Rautiainen, Samu, Matteo Pantano, Konstantinos Traganos, Seyedamir Ahmadi, José Saenz, Wael M. Mohammed, and Jose L. Martinez Lastra. "Multimodal Interface for Human–Robot Collaboration." Machines 10, no. 10 (October 20, 2022): 957. http://dx.doi.org/10.3390/machines10100957.
Obrenovic, Zeljko, and Dusan Starcevic. "Adapting the unified software development process for user interface development." Computer Science and Information Systems 3, no. 1 (2006): 33–52. http://dx.doi.org/10.2298/csis0601033o.
Kazi, Zunaid, and Richard Foulds. "Knowledge Driven Planning and Multimodal Control of a Telerobot." Robotica 16, no. 5 (September 1998): 509–16. http://dx.doi.org/10.1017/s0263574798000666.
Rigas, Dimitrios, and Badr Almutairi. "An Empirical Investigation into the Role of Avatars in Multimodal E-government Interfaces." International Journal of Sociotechnology and Knowledge Development 5, no. 1 (January 2013): 14–22. http://dx.doi.org/10.4018/jskd.2013010102.
Bos, Edwin, Carla Huls, and Wim Claassen. "EDWARD: full integration of language and action in a multimodal user interface." International Journal of Human-Computer Studies 40, no. 3 (March 1994): 473–95. http://dx.doi.org/10.1006/ijhc.1994.1022.
Crangle, Colleen. "Conversational interfaces to robots." Robotica 15, no. 1 (January 1997): 117–27. http://dx.doi.org/10.1017/s0263574797000143.
Kasprzak, Włodzimierz, Wojciech Szynkiewicz, Maciej Stefańczyk, Wojciech Dudek, Maksym Figat, Maciej Węgierek, Dawid Seredyński, and Cezary Zieliński. "Agent Structure of Multimodal User Interface to the National Cybersecurity Platform – Part 1." Pomiary Automatyka Robotyka 23, no. 3 (September 30, 2019): 41–54. http://dx.doi.org/10.14313/par_233/41.
Kasprzak, Włodzimierz, Wojciech Szynkiewicz, Maciej Stefańczyk, Wojciech Dudek, Maksym Figat, Maciej Węgierek, Dawid Seredyński, and Cezary Zieliński. "Agent Structure of Multimodal User Interface to the National Cybersecurity Platform – Part 2." Pomiary Automatyka Robotyka 23, no. 4 (December 30, 2019): 5–18. http://dx.doi.org/10.14313/par_234/5.
Wang, Jian. "Integration model of eye-gaze, voice and manual response in multimodal user interface." Journal of Computer Science and Technology 11, no. 5 (September 1996): 512–18. http://dx.doi.org/10.1007/bf02947219.
Faria, Brígida Mónica, Luís Paulo Reis, and Nuno Lau. "Knowledge Discovery and Multimodal Inputs for Driving an Intelligent Wheelchair." International Journal of Knowledge Discovery in Bioinformatics 2, no. 4 (October 2011): 18–34. http://dx.doi.org/10.4018/jkdb.2011100102.
de Ryckel, Xavier, Arthur Sluÿters, and Jean Vanderdonckt. "SnappView, a Software Development Kit for Supporting End-user Mobile Interface Review." Proceedings of the ACM on Human-Computer Interaction 6, EICS (June 14, 2022): 1–38. http://dx.doi.org/10.1145/3534527.
Normand, Véronique, Didier Pernel, and Béatrice Bacconnet. "Speech-based Multimodal Interaction in Virtual Environments: Research at the Thomson-CSF Corporate Research Laboratories." Presence: Teleoperators and Virtual Environments 6, no. 6 (December 1997): 687–700. http://dx.doi.org/10.1162/pres.1997.6.6.687.
Ying, Fang Tian, Peng Cheng Zhu, Mi Lan Ye, Jing Chang Chen, Zhao He, and Yue Pan. "Bubble Journey: Multimodal Input Tools Design to Augment Sense Experience in Computer Game." Advanced Materials Research 102-104 (March 2010): 326–30. http://dx.doi.org/10.4028/www.scientific.net/amr.102-104.326.
Kettebekov, Sanshzar, and Rajeev Sharma. "Understanding Gestures in Multimodal Human Computer Interaction." International Journal on Artificial Intelligence Tools 09, no. 02 (June 2000): 205–23. http://dx.doi.org/10.1142/s021821300000015x.
Seong, Ki Eun, Yu Jin Park, and Soon Ju Kang. "Design of Multimodal User Interface using Speech and Gesture Recognition for Wearable Watch Platform." KIISE Transactions on Computing Practices 21, no. 6 (June 15, 2015): 418–23. http://dx.doi.org/10.5626/ktcp.2015.21.6.418.
Serefoglou, S., W. Lauer, A. Perneczky, T. Lutze, and K. Radermacher. "Multimodal user interface for a semi-robotic visual assistance system for image guided neurosurgery." International Congress Series 1281 (May 2005): 624–29. http://dx.doi.org/10.1016/j.ics.2005.03.292.
Doumanis, Ioannis, and Serengul Smith. "An Empirical Investigation of the Impact of an Embodied Conversational Agent on the User's Perception and Performance with a Route-Finding Application." International Journal of Virtual and Augmented Reality 3, no. 2 (July 2019): 68–87. http://dx.doi.org/10.4018/ijvar.2019070106.
Yanagihara, Yoshimasa, Sinyo Muto, and Takao Kakizaki. "The Experimental Evaluation of User Interface of Multimodal Teaching Advisor using a Wearable Personal Computer." Journal of the Japan Society for Precision Engineering 67, no. 5 (2001): 739–43. http://dx.doi.org/10.2493/jjspe.67.739.
Yu, Tianyou, Yuanqing Li, Jinyi Long, and Feng Li. "A Hybrid Brain-Computer Interface-Based Mail Client." Computational and Mathematical Methods in Medicine 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/750934.
Noto, Christopher T., Suleman Mazhar, James Gnadt, and Jagmeet S. Kanwal. "A flexible user-interface for audiovisual presentation and interactive control in neurobehavioral experiments." F1000Research 2 (June 6, 2013): 20. http://dx.doi.org/10.12688/f1000research.2-20.v2.
Noto, Christopher T., Suleman Mazhar, James Gnadt, and Jagmeet S. Kanwal. "A flexible user-interface for audiovisual presentation and interactive control in neurobehavioral experiments." F1000Research 2 (June 10, 2014): 20. http://dx.doi.org/10.12688/f1000research.2-20.v3.
Jones, Matt. "Classic and Alternative Mobile Search." International Journal of Mobile Human Computer Interaction 3, no. 1 (January 2011): 22–36. http://dx.doi.org/10.4018/jmhci.2011010102.
Niu, Hongwei, Cees Van Leeuwen, Jia Hao, Guoxin Wang, and Thomas Lachmann. "Multimodal Natural Human–Computer Interfaces for Computer-Aided Design: A Review Paper." Applied Sciences 12, no. 13 (June 27, 2022): 6510. http://dx.doi.org/10.3390/app12136510.
Kaushik, Abhishek, Billy Jacob, and Pankaj Velavan. "An Exploratory Study on a Reinforcement Learning Prototype for Multimodal Image Retrieval Using a Conversational Search Interface." Knowledge 2, no. 1 (February 28, 2022): 116–38. http://dx.doi.org/10.3390/knowledge2010007.
Habib, Muhammad, and Noor ul Qamar. "Multimodal Interaction Recognition Mechanism by Using Midas Featured By Data-Level and Decision-Level Fusion." Lahore Garrison University Research Journal of Computer Science and Information Technology 1, no. 2 (June 30, 2017): 41–51. http://dx.doi.org/10.54692/lgurjcsit.2017.010227.
Atzenbeck, Claus. "Interview with Beat Signer." ACM SIGWEB Newsletter, Winter (January 2021): 1–5. http://dx.doi.org/10.1145/3447879.3447881.
Fakhrurroja, Hanif, Carmadi Machbub, Ary Setijadi Prihatmanto, and Ayu Purwarianti. "Multimodal Fusion Algorithm and Reinforcement Learning-Based Dialog System in Human-Machine Interaction." International Journal on Electrical Engineering and Informatics 12, no. 4 (December 31, 2020): 1016–46. http://dx.doi.org/10.15676/ijeei.2020.12.4.19.
Kobayashi, Toru. "An Application Framework for Trend Surfing System based on Multi-aspect, Multi-screen and Multimodal User Interface." Journal of Information Processing 23, no. 6 (2015): 795–803. http://dx.doi.org/10.2197/ipsjjip.23.795.
Iliev, Yuliy, and Galina Ilieva. "A Framework for Smart Home System with Voice Control Using NLP Methods." Electronics 12, no. 1 (December 27, 2022): 116. http://dx.doi.org/10.3390/electronics12010116.
Taghezout, Noria. "An Agent-Based Dialog System for Adaptive and Multimodal Interface: A Case Study." Advanced Materials Research 217-218 (March 2011): 578–83. http://dx.doi.org/10.4028/www.scientific.net/amr.217-218.578.
Kellmeyer, David, and Glenn A. Osga. "Usability Testing & Analysis of Advanced Multimodal Watchstation Functions." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 37 (July 2000): 654–57. http://dx.doi.org/10.1177/154193120004403729.
Zhao, Chen, Chuanqi Zheng, Leah Roldan, Thomas Shkurti, Ammar Nahari, Wyatt Newman, Dustin Tyler, Kiju Lee, and Michael Fu. "Adaptable Mixed-Reality Sensorimotor Interface for Human-Swarm Teaming: Person with Limb Loss Case Study and Field Experiments." Field Robotics 3, no. 1 (January 10, 2023): 243–65. http://dx.doi.org/10.55417/fr.2023007.
Khelifi, Adel, Gabriele Ciccone, Mark Altaweel, Tasnim Basmaji, and Mohammed Ghazal. "Autonomous Service Drones for Multimodal Detection and Monitoring of Archaeological Sites." Applied Sciences 11, no. 21 (November 5, 2021): 10424. http://dx.doi.org/10.3390/app112110424.