Theses on the topic « Multimodal user interface »
Consult the top 50 theses for your research on the topic « Multimodal user interface ».
Schneider, Thomas W. « A Voice-based Multimodal User Interface for VTQuest ». Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33267.
Master of Science
Reeves, Leah. « OPTIMIZING THE DESIGN OF MULTIMODAL USER INTERFACES ». Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4130.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering PhD
McGee, Marilyn Rose. « Investigating a multimodal solution for improving force feedback generated textures ». Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274183.
BOLLINI, LETIZIA. « Multimodal Directing in New-Media. The design of the human-computer interface as a directorship of communication modes integrated in a global medium ». Doctoral thesis, Politecnico di Milano, 2002. http://hdl.handle.net/10281/10854.
Cross, E. Vincent Gilbert Juan E. « Human coordination of robot teams an empirical study of multimodal interface design / ». Auburn, Ala, 2009. http://hdl.handle.net/10415/1701.
Ronkainen, S. (Sami). « Designing for ultra-mobile interaction : experiences and a method ». Doctoral thesis, University of Oulu, 2010. http://urn.fi/urn:isbn:9789514261794.
Almutairi, Badr. « Multimedia communication in e-government interface : a usability and user trust investigation ». Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10503.
Zanello, Marie-Laure. « Contribution à la connaissance sur l'utilisation et la gestion de l'interface multimodale ». Université Joseph Fourier (Grenoble), 1997. http://www.theses.fr/1997GRE10139.
García, Sánchez Juan Carlos. « Towards a predictive interface for the specification of intervention tasks in underwater robotics ». Doctoral thesis, Universitat Jaume I, 2021. http://dx.doi.org/10.6035/14101.2021.93456.
Texte intégralLos robots desempeñan un papel fundamental en nuestra vida cotidiana, realizando tareas tan diversas como mantenimiento, vigilancia, exploración en entornos hostiles u operaciones de búsqueda y rescate. De entre todos los entornos donde actúan, el submarino es uno de los que más ha aumentado su actividad. Los tipos de robots utilizados son: ROVs, AUVs y HROVs. Existe un problema común a los tres: la interacción hombre-robot presenta diversas deficiencias y el usuario sigue jugando un papel central desde el punto de vista de la toma de decisiones. La presente tesis está centrada en la investigación relacionada con la interacción hombre-robot: el uso de algoritmos para asistir al usuario durante la especificación de la misión (haciendo que la interfaz de usuario sea fácil de usar), la exploración de una interfaz multimodal y la propuesta de una arquitectura de control del robot (permitiendo cambiar desde autónomo a teleoperado, o viceversa).
Doctoral Programme in Computer Science
Husseini, Orabi Ahmed. « Multi-Modal Technology for User Interface Analysis including Mental State Detection and Eye Tracking Analysis ». Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36451.
Nordmark, Anton. « Designing Multimodal Warning Signals for Cyclists of the Future ». Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74884.
Texte intégralTrafiken är en komplex miljö med många deltagare; diverse ny teknik gör anspråk på att underlätta denna komplexitet. Men, cyklister—en särskilt utsatt grupp av trafikanter—har hittills hamnat i skymundan för sådana utvecklingar. Vidare, aspekten av användbara gränssnitt för cyklister inom sådana uppkopplade och samverkande trafiksystem (C-ITS) har utforskats desto mindre. Det här examensarbetet inom Teknisk design presenterar fem multimodala kollisionsvarningar avsedda för cyklister—framtida sådana i dessa C-ITS—genom en ny och bimodal användning av benledande hörlurar via både ljud och vibrationer. Examensarbetet genomfördes i koppling till forskningsprojektet V2Cyclist, orkestrerat av RISE Interactive, vars projektmål var att anpassa det trådlösa kommunikationsprotokollet V2X för cyklister via en fysisk prototyp i form av en cykelhjälm och parallellt utveckla ett tillhörande användargränssnitt. En viktig del av det teoretiska ramverket för det här examensarbetet grundar sig på multiple resource theory: uppgifter kan utföras mer effektivt i en annan modalitet än i en som redan är belastad med uppmärksamhet. Mänskliga faktorer och teori om vår uppfattning användes; bevis pekar på att människor har evolutionärt utvecklat en bias för hotande ljud som upplevs inkräkta på vårt närmsta personliga revir; etologiska rön visar på en koppling mellan lågfrekventa ljud och ‘storhet.’ Tekniker inom ljuddesign vanligtvis använda till mer artistiska ändamål, såsom syntes och mixning, användes här till godo för att utforska den nya och bimodala designrymden. Processen för arbetet grundade sig i design thinking och bestod av fyra faser: kontextfördjupning, idégenerering, konceptutveckling, och utvärdering. En ny och tidigare outforskad designrymd beståendes av en bimodal, ljudtaktil användning av benledande hörlurar divergerades och konvergerades. Ett initialt utforskande angreppssätt gav upphov till en bred mängd av idéer. 
A later refining approach did not, however, converge all the way to a single optimal solution, since further evaluation is required and unknown technological constraints remain. Moreover, given the great diversity among cyclists, there may be no single optimal collision-warning design. A range of five different solutions is therefore presented: five multimodal collision-warning concepts, each expressing danger and criticality in a different way. Some are static in character, while others operate more continuously and dynamically. Collision warnings were assumed to occur rarely. Various design techniques and rationales were used, sometimes in combination, to create collision warnings whose intent is understood immediately: design and cultural norms regarding sound; explicit communication in the form of speech; appeals to human biological intuition via threatening, animal-like timbres; dynamic, procedurally generated feedback; multimodal salience; cross-modal perception of rough textures; size-sound symbolism to suggest 'largeness'; and the naturally alerting properties of looming sounds.
Leme, Ricardo Roberto. « Uma proposta de design da interação multimodal para e com a terceira idade para dispositivos móveis ». Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/633.
The use of small mobile devices such as smartphones and tablets has been growing among various audiences in Brazil. Even though many devices offer features such as touch, gestures, and voice, many applications do not adopt these resources to support different forms of interaction. Among the various audiences using mobile devices in Brazil, elderly users stand out. To understand the needs of the elderly audience, it is important not only to observe their actions but also to bring them into the development process. User-Centered Design (UCD) is a methodology that uses models, methods, and processes for software design aimed at fulfilling user needs, comprising the steps of research, development, and evaluation. Based on UCD principles, this work presents an approach to support interaction design that, besides being for the user, is also developed with the elderly user, adding participatory design and persona techniques to the UCD cycle. The application developed with this approach allows elderly users to use multimodal interaction according to their preferences and the context of use. The methodology of this work comprised three steps. First, a systematic review supported the fundamentals and the delimitation of the scope of the work. After that, the approach was designed based on the UCD method, including the participatory aspects. The verification of the proposal was then conducted by following the phases of the approach with the participation of a total of 279 elderly users. The results show that the elderly user, as an active member of the development process, can aid in identifying the real interaction aspects of the application.
Morin, Philippe. « Partner, un système de dialogue homme-machine multimodal pour applications finalisées à forte composante orale ». Nancy 1, 1994. http://www.theses.fr/1994NAN10423.
Kernchen, Jochen Ralf. « Mobile multimodal user interfaces ». Thesis, University of Surrey, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531385.
Miners, William Ben. « Toward Understanding Human Expression in Human-Robot Interaction ». Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.
An intuitive way to minimize the human effort of communicating with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide viable modes of communication that are more natural than conventional tactile interfaces. Using natural human communication eliminates the need to adapt to, and invest time and effort in, the less intuitive techniques required by traditional keyboard- and mouse-based interfaces.
Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be overcome before communication with devices in our daily lives feels natural and effortless. Research has shown that combining information across multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.
This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.
The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
Lingam, Sumanth (Sumanth Kumar) 1978. « User interfaces for multimodal systems ». Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8614.
Includes bibliographical references (leaves 68-69).
As computer systems become more powerful and complex, efforts to make computer interfaces simpler and more natural become increasingly important. Natural interfaces should be designed to facilitate communication in ways people are already accustomed to using. Such interfaces allow users to concentrate on the tasks they are trying to accomplish rather than on what they must do to control the interface. Multimodal systems process combined natural input modes, such as speech, pen, touch, manual gestures, gaze, and head and body movements, in a coordinated manner with multimedia system output. The W3C initiative aims to make interface development simple and to ease the distribution of applications across the Internet in an XML development environment. The languages designed so far at W3C, such as HTML, target a particular platform and are not portable to other platforms. The User Interface Markup Language (UIML) has been designed to develop cross-platform interfaces. This thesis shows that UIML can be used not only to develop multi-platform interfaces but also to create multimodal interfaces. A survey of existing multimodal applications is performed and an efficient, easy-to-develop methodology is proposed. It is also shown that the proposed methodology satisfies a major set of the requirements laid down by W3C for multimodal dialogs.
by Sumanth Lingam.
M.Eng.
Bourgeois, Paul Alan, and David K. Hsiao. « The instrumentation of the multimodel and multilingual user interface ». Thesis, Monterey, California : Naval Postgraduate School, 1993. http://hdl.handle.net/10945/24184.
Raisamo, Roope. « Multimodal human-computer interaction : a constructive and empirical study ». Tampere, [Finland] : University of Tampere, 1999. http://acta.uta.fi/pdf/951-44-4702-6.pdf.
McGee, David R. « Augmenting environments with multimodal interaction / ». Full text open access at:, 2003. http://content.ohsu.edu/u?/etd,222.
Texte intégralRovelo, Ruiz Gustavo Alberto. « Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video ». Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/53916.
The field of Human-Computer Interaction (HCI) is a multidisciplinary area combining, among other disciplines, Computer Science and Psychology. It studies the interaction between computer systems and people, considering both technological development and user experience. The devices needed to create 3D user interfaces (e.g., display, tracking, and mobile devices) are now more affordable than ever, opening an area of opportunity for researchers in this discipline. Augmented Reality and omnidirectional video are two examples of such interfaces, in which the user can interact in three-dimensional space beyond the computer screen. The work presented in this thesis focuses on evaluating user interaction with these two types of applications. Its main goal is to contribute to the knowledge base about this kind of interface and thereby improve its design. We investigate how multimodal interfaces can be used efficiently to provide relevant information in Augmented Reality applications. We also evaluate how the user can operate 3D interfaces through more than one type of interaction, assessing gesture-based interaction for omnidirectional video. This document describes the experiments conducted and the results obtained in each case, together with a general discussion of the results.
Rovelo Ruiz, GA. (2015). Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53916
Wang, Yifei. « Designing chatbot interfaces for language learning : ethnographic research into affect and users' experiences ». Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2742.
Wang, Chong, and 王翀. « Joint color-depth restoration with Kinect depth camera and its applications to image-based rendering and hand gesture recognition ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206343.
Ciuffreda, Antonio. « An empirical investigation in using multi-modal metaphors to browse internet search results : an investigation based upon experimental browsing platforms to examine usability issues of multi-modal metaphors to communicate internet-based search engine results ». Thesis, University of Bradford, 2008. http://hdl.handle.net/10454/4301.
Texte intégralLe, gouis Benoît. « Contribution to the study of haptic perception and renderinf of deformable virtual objects ». Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0019/document.
Haptics is a key component of interaction with physically based environments, with many applications in virtual training, prototyping, and teleoperation assistance. Deformable objects are particularly challenging because of the complexity of their behavior. Given the performance requirements of haptic interaction, a trade-off between accuracy and efficiency is usually necessary, and making the best of this trade-off is a major challenge. The objectives of this PhD are to improve haptic rendering of physically based deformable objects that exhibit complex behavior, and to study how perception can be used to achieve this goal. We first propose a model for the physically based simulation of complex heterogeneous deformable objects. More specifically, we address geometric multiresolution for heterogeneous deformable objects, focusing on the representation of heterogeneity at the coarse resolution of the simulated objects. The contribution consists of a method for elasticity attribution at the coarser resolution of the object, and an evaluation of the effect of geometric coarsening on haptic perception. We then focus on another class of complex object behavior, topology changes, by proposing a simulation pipeline for bimanual haptic tearing of thin deformable surfaces. This contribution addresses two main aspects of efficient tearing simulation, namely collision detection for thin objects and efficient physically based simulation of tearing phenomena, with the simulation especially optimized for tear propagation. The last aspect covered by this PhD is the influence of the environment, and more specifically of Augmented Reality (AR) environments, on the haptic perception of stiffness. How are objects perceived in AR compared to Virtual Reality (VR)? Do we interact the same way in these two environments?
To address these questions, we conducted an experiment comparing the haptic perception of the stiffness of a piston surrounded by everyday objects in AR with that of the same piston surrounded by a virtual replica of the real environment in VR. These contributions open new perspectives for haptic interaction with virtual environments, from the efficient yet faithful simulation of complex deformable-object behavior to a better understanding of haptic perception and interaction strategies.
Singh, Anjeli Gilbert Juan E. « The potential benefits of multi-modal social interaction on the web for senior users ». Auburn, Ala., 2009. http://hdl.handle.net/10415/2007.
Texte intégralBueno, Danilo Camargo. « HyMobWeb : uma abordagem para a adaptação híbrida de interfaces Web móveis sensíveis ao contexto e com suporte à multimodalidade ». Universidade Federal de São Carlos, 2017. https://repositorio.ufscar.br/handle/ufscar/9157.
Texte intégralApproved for entry into archive by Milena Rubi ( ri.bso@ufscar.br) on 2017-10-17T13:29:18Z (GMT) No. of bitstreams: 1 BUENO_Danilo_2017.pdf: 12404601 bytes, checksum: b788f4b11bbb3fb391f608449597ae16 (MD5)
Approved for entry into archive by Milena Rubi ( ri.bso@ufscar.br) on 2017-10-17T13:29:26Z (GMT) No. of bitstreams: 1 BUENO_Danilo_2017.pdf: 12404601 bytes, checksum: b788f4b11bbb3fb391f608449597ae16 (MD5)
Made available in DSpace on 2017-10-17T13:29:35Z (GMT). No. of bitstreams: 1 BUENO_Danilo_2017.pdf: 12404601 bytes, checksum: b788f4b11bbb3fb391f608449597ae16 (MD5) Previous issue date: 2017-06-30
No funding was received.
The use of mobile devices to browse the Web has become increasingly popular as a consequence of easy access to the Internet. However, moving from desktop development to the mobile platform requires developers to focus on interaction elements that fit the platform's interaction demands. As a result, several approaches to adapting Web applications have emerged. One of the solutions most widely adopted by Web developers is front-end frameworks. Nevertheless, this technique has shortcomings that directly impact interaction elements and user satisfaction. In this scenario, the objective of this work is to propose a hybrid adaptation approach for context-sensitive mobile Web interfaces with multimodality support, called HyMobWeb, which aims to help developers create solutions closer to the device's characteristics, the contexts of use, and the needs of end users. The approach comprises static and dynamic adaptation stages. Static adaptation supports developers in marking elements to be adapted through a grammar that can reduce the coding effort of solutions addressing multimodality and context sensitivity. Dynamic adaptation is responsible for analyzing changes in the user's context and performing the adaptations marked during static adaptation. The approach was outlined from a literature review on mobile Web interface adaptation and three exploratory studies. The first and second studies dealt with end users' difficulties in using Web applications not adapted to mobile devices; the third addressed gaps in traditional adaptations, made through front-end frameworks, in relation to user needs. To evaluate the approach, two evaluations were carried out: one from the developer's perspective and another from the end user's. The first focused on verifying the acceptance of the proposal by software developers using the grammar and the resources proposed from it.
The second sought to identify whether the adaptation, previously implemented by the developers, satisfied end users during its use. The findings suggest that HyMobWeb brought significant contributions to the developers' work and that the resources explored by the approach produced positive reactions regarding end-user satisfaction.
Rubio, Carlos R. (Carlos Roberto). « An API for smart objects and multimodal user interfaces for the smart home and office ». Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100642.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 94).
As more people move to cities, space is becoming more limited and expensive. Robotic furniture can increase functionality and optimize space, allowing spaces to feel as if they were three times their size. These mechatronic systems need capable electronics and connected microcontrollers to bring furniture to the Internet of Things (IoT). We present these electronics and firmware for three smart robotic spaces. These smart spaces need powerful software and computing systems to enable the transformations and give magic to the space. We present software written for three smart robotic spaces. The right user interfaces are vital for a rich user experience. User studies with different smart home user interfaces show that although tactile interfaces are the most reliable and easiest to work with, people are hopeful for sufficiently robust gestural and speech interfaces in future smart homes. The urban homes and offices of the future are smart, customizable, and robotic.
by Carlos R. Rubio.
M. Eng.
Olofsson, Stina. « Designing interfaces for the visually impaired : Contextual information and analysis of user needs ». Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-144370.
Texte intégralSchäfer, Robbie [Verfasser]. « Model-Based Development of Multimodal and Multi-Device User Interfaces in Context-Aware Environments / Robbie Schäfer ». Aachen : Shaker, 2007. http://d-nb.info/1163610100/34.
Texte intégralWåhlén, Herje. « Voice Assisted Visual Search ». Thesis, Umeå universitet, Institutionen för informatik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-38204.
Texte intégralVoice-Assisted Visual Search
Tan, Ning. « Posture and Space in Virtual Characters : application to Ambient Interaction and Affective Interaction ». Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00675937.
Texte intégral
Zapata, Rojas Julian. « Translators in the Loop : Observing and Analyzing the Translator Experience with Multimodal Interfaces for Interactive Translation Dictation Environment Design ». Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34978.
Texte intégral
VOLTA, ERICA. « Multisensory learning in adaptive interactive systems ». Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1001795.
Texte intégral
Jeon, Myounghoon. « "Spindex" (speech index) enhances menu navigation user experience of touch screen devices in various input gestures : tapping, wheeling, and flicking ». Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37101.
Texte intégral
Kern, Dagmar [Verfasser], Albrecht [Akademischer Betreuer] Schmidt et Antonio [Akademischer Betreuer] Krüger. « Supporting the Development Process of Multimodal and Natural Automotive User Interfaces / Dagmar Kern. Gutachter : Antonio Krüger. Betreuer : Albrecht Schmidt ». Duisburg, 2012. http://d-nb.info/1021899739/34.
Texte intégral
Mealla, Cincuegrani Sebastián. « Designing sonic interactions for implicit physiological computing ». Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/393715.
Texte intégral
The field of Human-Computer Interaction (HCI) is devoted to understanding the interplay between people and computers. However, it is still largely based on explicit control through devices such as the keyboard and mouse. As systems become increasingly complex, this traditional approach constitutes a bottleneck for seamless HCI. HCI can instead incorporate implicit user information through Physiological Computing, which monitors psychophysiological states (affective, perceptive or cognitive) to adapt computer systems without explicit user control. At the output level, sound is an excellent medium for representing implicit states: it can be processed faster than visual presentation, is easily localized, has good temporal resolution, and can convey multiple data streams at once. In this dissertation we conceptualize, prototype and evaluate sonic interactions for implicit Physiological Computing in the context of HCI, focusing on their perceptualization quality, the role of mapping complexity, and their meaningfulness in the musical domain.
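The core idea in the abstract above, representing a continuously monitored physiological state through sound, is commonly implemented as parameter-mapping sonification. A minimal sketch (function and parameter names are illustrative, not taken from the thesis) maps a calibrated signal range onto a pitch range:

```python
def map_signal_to_pitch(value, sig_min, sig_max, pitch_min=220.0, pitch_max=880.0):
    """Linearly map a physiological reading (e.g. heart rate) onto a pitch in Hz."""
    # Clamp to the calibrated range so outliers stay inside the audible band.
    value = max(sig_min, min(sig_max, value))
    t = (value - sig_min) / (sig_max - sig_min)
    return pitch_min + t * (pitch_max - pitch_min)
```

In a real system the returned frequency would drive a synthesizer voice; the mapping complexity studied in the dissertation concerns how many such signal-to-sound dimensions are layered and how they interact.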
Möller, Andreas [Verfasser], Matthias [Akademischer Betreuer] Kranz, Gerhard [Akademischer Betreuer] Rigoll et Uwe [Akademischer Betreuer] Baumgarten. « Leveraging Mobile Interaction with Multimodal and Sensor-Driven User Interfaces / Andreas Möller. Gutachter : Gerhard Rigoll ; Matthias Kranz ; Uwe Baumgarten. Betreuer : Matthias Kranz ». München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1073970132/34.
Texte intégral
Ratzka, Andreas. « Patternbasiertes User Interface Design für multimodale Interaktion : Identifikation und Validierung von Patterns auf Basis einer Analyse der Forschungsliteratur und explorativer Benutzertests an Systemprototypen / ». Boizenburg : Vwh, 2010. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=018938406&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Texte intégral
Ratzka, Andreas. « Patternbasiertes User Interface Design für multimodale Interaktion Identifikation und Validierung von Patterns auf Basis einer Analyse der Forschungsliteratur und explorativer Benutzertests an Systemprototypen ». Boizenburg Hülsbusch, 2009. http://d-nb.info/999633082/04.
Texte intégral
Alseid, Marwan N. K. « Multimodal interactive e-learning : an empirical study : an experimental study that investigates the effect of multimodal metaphors on the usability of e-learning interfaces and the production of empirically derived guidelines for the use of these metaphors in the software engineering process ». Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4246.
Texte intégralPruvost, Gaëtan. « Modélisation et conception d’une plateforme pour l’interaction multimodale distribuée en intelligence ambiante ». Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112017/document.
Texte intégral
This thesis deals with ambient intelligence and the design of Human-Computer Interaction (HCI). It studies the automatic generation of user interfaces that are adapted to the interaction context in ambient environments. This problem raises design issues specific to ambient HCI, particularly the reuse of multimodal and multi-device interaction techniques. The present work falls into three parts. The first is an analysis of state-of-the-art software architectures designed to solve these issues; it outlines the limits of current approaches and enables us to propose, in the second part, a new approach for the design of ambient HCI called DAME. This approach relies on the automatic and dynamic association of software components that build a user interface. We propose and define two complementary models that describe the ergonomic and architectural properties of the software components. The design of such components is organized in a layered architecture that identifies reusable levels of abstraction of an interaction language. A third model, called the behavioural model, specifies recommendations about the runtime instantiation of components. We propose an algorithm that generates context-adapted user interfaces and evaluates their quality according to the recommendations issued from the behavioural model. In the third part, we detail our implementation of a platform that realizes the DAME approach. This implementation is used in a qualitative experiment involving end users. Encouraging preliminary results have been obtained and open new perspectives on multi-device and multimodal HCI in ambient computing.
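The evaluation step described above, scoring candidate component assemblies against behavioural-model recommendations, can be pictured with a small hypothetical sketch. All names and the weighted-sum scheme here are invented for illustration and are not DAME's actual API:

```python
# Each candidate assembly is a list of component property dicts; each
# recommendation is a (property, desired_value, weight) tuple from the
# behavioural model.

def score_assembly(assembly, recommendations):
    """Sum the weights of recommendations satisfied by every component."""
    score = 0.0
    for prop, desired, weight in recommendations:
        if all(comp.get(prop) == desired for comp in assembly):
            score += weight
    return score

def best_assembly(candidates, recommendations):
    """Pick the candidate user-interface assembly with the highest score."""
    return max(candidates, key=lambda a: score_assembly(a, recommendations))
```

A generator in this spirit would enumerate assemblies permitted by the layered architecture, then keep the highest-scoring one for the current interaction context.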
YI, WANG JU, et 王如薏. « Experience Analysis on Multimodal User Interface of Activity Tracker ». Thesis, 2016. http://ndltd.ncl.edu.tw/handle/765g94.
Texte intégral
National Taipei University of Technology
Master's Program in Interaction Design (in-service program)
104
With increasing health awareness, consumers use various devices and applications to record and manage personal exercise patterns. To address different needs in exercise data and information, this research explores user experience of device operation and of uploading data to the cloud, through a pilot study and a case study. In the pilot study, three types of road-running record devices were chosen: a traditional pedometer, the wearable Fitbit One with its cloud-based web interface, and the Nike+ Running mobile application. Personas were built to model the groups that use multimodal exercise devices, and the corresponding groups were recruited for in-depth interviews, task performance and questionnaires in order to understand their experience; the patterns and behaviours of each user group are summarised. In the case study, two web interfaces for uploading exercise data, a classified version and an integrated version, were designed based on user requirements, and their utility and effectiveness were tested and analyzed with eye trackers and the usability software Morae. The pilot study results include: (1) Users of multimodal exercise record devices fall into an exercise-focused group, a health-management group and a social group. (2) Users care most about calories burned, distance, exercise frequency and sleep quality, and less about water and calorie intake. (3) Exercise record devices should define their main user group first, as too many features reduce satisfaction. (4) Improvement suggestions are offered for exercise record devices and cloud web interfaces on the market. The case study results include: (1) Comparisons between the classified and integrated interfaces. (2) The classified version performs better in eye-movement tracking. (3) The classified version is also significantly better in satisfaction and usability than the integrated version. (4) The integrated version's layout is more suitable for heavy users.
Six cloud interface design rules for multimodal exercise record devices are proposed for future reference.
ZAPPIA, IVAN. « Model and framework for multimodal and adaptive user interfaces generation in the context of business processes development ». Doctoral thesis, 2015. http://hdl.handle.net/2158/984637.
Texte intégral
Amorim, Vítor Manuel da Costa. « Design e avaliação de um fantoma digital e multimodal de um ratinho, usando simulação computacional ». Master's thesis, 2019. http://hdl.handle.net/10316/87812.
Texte intégral
Cancer is one of the biggest public health problems, being the second leading cause of death worldwide. Given the prevalence of this disease, enormous efforts have been made to improve prevention, detection and treatment. These efforts have had a notable impact on medical imaging, largely through the use of objects known as phantoms. The great advantage of using phantoms in scientific studies is that the exact anatomy of the object is known in advance, providing a gold standard against which imaging equipment and techniques can be developed, evaluated and improved. Physical phantoms appeared first; however, they are expensive and require various radiation-safety protocols, which limits their large-scale use. Computational phantoms have no such constraints and can be used in applications such as computer simulation. Because they allow studies to be carried out entirely on a computer, they are more versatile, efficient, precise and safe. Given the importance of computational phantoms in a scientific context, the main objective of this study is to develop a static computational phantom of a mouse. The project was divided into two phases. In the first phase, corresponding to the creation of the computational phantom, two types of anatomical images were segmented: Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This process extracted the anatomical structures present in the medical images by associating labels (numerical values) with each pixel. The segmentation produced three-dimensional models of different anatomical structures, most of them well defined. However, when the two segmentations (CT and MRI) are overlaid, some imperfections appear in the three-dimensional model, the result of a poor co-registration (fusion) of the CT and MRI images. Given these imperfections, some corrections had to be applied to the three-dimensional model, which may affect its realism. In the second phase, a graphical user interface was created to replace the identifying labels resulting from the segmentation process with physical properties useful in the context of computational simulation, using the Graphical User Interface Development Environment (GUIDE) tool of the MATLAB software. The interface asks the user for the input parameters (physical properties) and replaces the labels with the given values. Tests with different input parameters validated the interface for its intended purpose.
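The label-substitution step that the interface performs, swapping each segmentation label for a user-supplied physical property value, can be sketched as follows. This is an illustrative NumPy version, not the thesis's MATLAB/GUIDE code, and the example property values are hypothetical:

```python
import numpy as np

def labels_to_properties(label_volume, property_map, default=0.0):
    """Replace each integer segmentation label with a physical property value.

    label_volume: integer array of labels produced by segmentation.
    property_map: {label: property_value} supplied by the user
                  (e.g. tissue densities for a simulation).
    """
    out = np.full(label_volume.shape, default, dtype=float)
    for label, value in property_map.items():
        out[label_volume == label] = value  # boolean-mask assignment per label
    return out
```

The same pattern works for any per-voxel property (density, attenuation coefficient, etc.) needed by the downstream simulation.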
Gouveia, Duarte Paulo Ferreira. « A multimodal tablet-based interface for designing and reviewing 3D engineering models ». Master's thesis, 2012. http://hdl.handle.net/10400.13/1192.
Texte intégral
Mann, Sandra [Verfasser]. « User concepts for in-car speech dialogue systems and their integration into a multimodal human-machine interface / vorgelegt von Sandra Mann ». 2010. http://d-nb.info/1007327162/34.
Texte intégral
« A methodology for developing multimodal user interfaces of information systems ». Université catholique de Louvain, 2008. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-06202008-120602/.
Texte intégral
Mantravadi, Chandra Sekhar. « Adaptive multimodal integration of speech and gaze ». 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000051872.
Texte intégral
« Cross-modality semantic integration and robust interpretation of multimodal user interactions ». Thesis, 2010. http://library.cuhk.edu.hk/record=b6075023.
Texte intégral
We have also performed latent semantic modeling (LSM) for interpreting multimodal user input consisting of speech and pen gestures. Each modality of a multimodal input carries semantics related to a domain-specific task goal (TG), and each input is manually annotated with a TG based on those semantics. Multimodal input usually has a simpler syntactic structure, and a different order of semantic constituents, than unimodal input. We therefore proposed to use LSM to derive the latent semantics from the multimodal inputs. To achieve this, we characterized the cross-modal integration pattern as 3-tuple multimodal terms taking into account the SLR, the pen gesture type and their temporal relation. The term correlation matrix is then decomposed using singular value decomposition (SVD) to derive the latent semantics automatically. TG inference based on the latent semantics is accurate for 99% of the multimodal inquiries in a disjoint test set.
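The SVD step described above can be sketched minimally, assuming a numeric term-by-input matrix has already been built from the 3-tuple multimodal terms (function names are illustrative, not from the thesis; the fold-in projection is the standard LSA formula):

```python
import numpy as np

def latent_space(term_input_matrix, k):
    """Truncated SVD of a (multimodal term x input) matrix, keeping the
    k largest singular values, as in latent semantic modeling."""
    U, s, Vt = np.linalg.svd(term_input_matrix, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def fold_in(term_vector, U_k, s_k):
    """Project a new input's term vector into the k-dimensional latent space
    (d_hat = d^T U_k S_k^{-1}); TG inference can then compare latent vectors,
    e.g. by cosine similarity to annotated training inputs."""
    return term_vector @ U_k / s_k
```
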
Hui, Pui Yu.
Adviser: Helen Meng.
Source: Dissertation Abstracts International, Volume: 73-02, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 294-306).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Blumendorf, Marco [Verfasser]. « Multimodal interaction in smart environments : a model-based runtime system for ubiquitous user interfaces / vorgelegt von Marco Blumendorf ». 2009. http://d-nb.info/997036575/34.
Texte intégral