Dissertations on the topic "Tangible user Interfaces, Interaction design"

To see other types of publications on this topic, follow the link: Tangible user Interfaces, Interaction design.

Browse the 50 best dissertations for your research on the topic "Tangible user Interfaces, Interaction design".

You can also download the full text of each scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Browse dissertations from a wide range of disciplines and organize your bibliography correctly.

1

Bijman, Nicolaas Peter. « Exploring affordances of tangible user interfaces for interactive lighting ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-248002.

Full text
Abstract:
This paper explores interaction with lighting through a tangible user interface (TUI). In a TUI the physical object and space around it are part of the interface. A subset of tangible interaction called spatial interaction is the main focus of this paper. Spatial interaction refers to translation, rotation or location of objects or people within a space. The aim of this paper is to explore the relation between spatial inputs and lighting outputs based on different design properties. A user test is set up to explore the effect that design properties of a TUI have on the lighting output that participants map to spatial inputs. The results of the conducted user test indicate that communicating affordances to the user is an important factor when designing couplings between spatial inputs and lighting outputs. The results further show that the shape of the interface plays a central role in communicating those affordances and that the overlap of input and output space of the interface improves the clarity of the coupling.
2

Stenbacka, Erik. « Cubieo : Observations of Explorative User Behavior with an Abstract Tangible Interface ». Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-19639.

Full text
Abstract:
Recent years have seen a broad spectrum of tangible interfaces, or TUIs, based on interaction with music, as well as other interfaces built around ubiquitous computing. This is an interesting field because of how engaging music can be and how it can work as a connector between people. But the field of human-computer interaction also has some exploratory properties. This paper presents an idea of abstraction in a tangible interface for creating music. The idea behind abstracting the interface is to engage the user(s) in exploring the artifact, rather than explaining to the user what can and cannot be done with the artifact.
3

Merrad, Walid. « Interfaces tangibles et réalité duale pour la résolution collaborative de problèmes autour de tables interactives distribuées ». Thesis, Valenciennes, Université Polytechnique Hauts-de-France, 2020. http://www.theses.fr/2020UPHF0010.

Full text
Abstract:
In everyday life, new interactions are gradually replacing the standard computer keyboard and mouse, by using the human body gestures (hands, fingers, head, etc.) as alternatives of interactions on surfaces and in-air. Another type of interaction resides within the manipulation of everyday objects to interact with digital systems. Interactive tabletops have emerged as new platforms in several domains, offering better usability and facilitating multi-user collaboration, thanks to their large display surface and different interaction techniques on their surfaces, such as multi-touch and tangible. Therefore, improving interaction(s) on these devices and combining it (respectively them) with other concepts can prove more useful and helpful in the everyday life of users and designers. The topic of this thesis focuses on studying user interactions on tangible interactive tabletops, in a context of use set in a dual reality environment. Tangible User Interfaces offer users the possibility to apprehend and grasp the meaning of digital information by manipulating insightful tangible representations in our physical world. These interaction metaphors are bridging both environments that constitute the dual reality: the physical world and the virtual world. In this perspective, this work presents a theoretical contribution along with its applications. We propose to combine tangible interaction on tabletops and dual reality in a conceptual framework, basically intended for application designers, that models and explains interactions and representations, which operate in dual reality setups. First of all, we expose various works carried out in the field of tangible interaction in general, then we focus on existing work conducted on tabletops. We also propose to list 112 interactive tabletops, classified and characterized by several criteria. Next, we present the dual reality concept and its possible application domains. Second, we design our proposal of the framework, illustrate and explain its composing elements, and how it can adapt to various situations of dual reality, particularly with interactive tabletops equipped with RFID technology. Finally, and as application contributions, we show case studies that we designed based on our proposal, which illustrate implementations of elements from our proposed framework. Research perspectives are finally highlighted at the end of the manuscript.
4

De Oliveira, Clarissa C. « Experience Programming : an exploration of hybrid tangible-virtual block based programming interaction ». Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22279.

Full text
Abstract:
In less than a century, programming languages have assumed many forms in adapting to systems' needs and capacities, of which our cognitive systems are a part. One variation, tailored specifically for the cognitive processes in children's education in computational concepts, and nowadays successful among novice adult learners too, is visual block-based programming. From the pool of available block-based programming environments, Scratch is the most popular among users, and therefore becomes a good topic for researchers interested in contemporary educational discussions, including that of coding as a curricular activity in schools. Although inspired by the educational philosophy of using abstract physical blocks in foundational learning, the mainly visual interface of Scratch is made for keyboard- and mouse-mediated interaction with the digital content on-screen, producing audio-visual feedback. This research is a case study of Scratch, where the shortcomings found in interactions with its environment motivate the investigation of a potential hybrid technology – tangible and visual – for enhanced learning of foundational concepts in block-based programming. The investigation is characterized by progressive cycles of conceptual design, supported by prototyping and testing. The results from its design process present the benefits and challenges of this hybrid concept to inform and inspire the development of new technologies for learning, and should also inspire designers of Tangible User Interfaces (TUIs) for learning and the educational computing community to challenge current ways of learning. The work presented here is concerned with acknowledging and building on the strengths of existing technologies, rather than substituting them with disruptive ideas.
5

Sirera, I. Pulido Judith. « Designing A Tangible Device for Re-Framing Unproductivity ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-285518.

Full text
Abstract:
We report on the design of a tangible device for encouraging the acceptance of unproductive time. We first conducted interviews for a better understanding of the subjective experience of productivity. We found that while the idea of being productive can evoke positive feelings of satisfaction, dealing with unproductive time can be a struggle, negatively affecting people’s moods and self-esteem. These findings guided the design and implementation of RU, a tangible device for reflecting on self-care time. Our prototype offers a physical representation of the mainstream productivity mindset and plays with the idea of connecting and charging energy to encourage the user to experience the time considered unproductive as self-care. In a second study, participants used the device for 5 days and our results suggest that the device motivates reflection on activities beyond work and increases awareness of the importance of taking time for self-care.
6

Myra, Jess. « Memorality : The Future of Our Digital Selves ». Thesis, Umeå universitet, Institutionen Designhögskolan, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-74466.

Full text
Abstract:
Digital Immortality or Not? The aim of this thesis was to explore how we might be stewards for our post-life digital self after physical death, and to provide a new interaction experience in the form of a tangible, digital, or service design solution. Prior to the project kick-off, secondary research, including academic research papers, analogous services, and existing projects, was distilled to form topical questions. These questions were then presented in many casual, topical conversations and revealed that although awareness of post-life digital asset management is increasing, little consideration exists on how to reflect legacies into the future long after death. A second stage of primary research included multiple on-site investigations, paired with in-person interviews and a quantitative online survey. Insights and understandings then led to initial concepts that were tested to address distinctive qualities between tangible and digital design solutions. The main findings included that although people want to be remembered long after they die, current methods of tangible and digital content management cannot sufficiently support the reflection of legacies long into the future. In conclusion, this thesis argues that to become part of an everlasting legacy, the interaction experience can leverage commonalities and shared moments from life events captured in digital media. These points of connection rely on associated metadata (i.e. keyword tags, date stamps, geolocation) to align relevant moments that transcend time and generations. The solution proposed here harnesses the benefits that both digital and tangible media afford and is presented as a tablet interface with an associated tangible token used as a connection key.
7

Taylor, Jennyfer Lawrence. « Ngana Wubulku Junkurr-Jiku Balkaway-Ka : The intergenerational co-design of a tangible technology to keep active use of the Kuku Yalanji Aboriginal language strong ». Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/206447/1/Jennyfer_Taylor_Thesis.pdf.

Full text
Abstract:
This project involved the co-design of a tangible technology to enrich everyday Kuku Yalanji language use by children and their families, in partnership with the Wujal Wujal Aboriginal Shire Council and community. This thesis contributes the design of a relational language technology, the 'Crocodile Language Friend' talking soft toy with a paired web application, along with novel co-design methods and whole-of-community engagement approaches. The thesis argues that participatory design practices involving tangible technologies can support community alignment of resources and initiatives towards Indigenous language revitalization efforts.
8

Iezzi, Valeria. « Connectedness : Designing interactive systems that foster togetherness as a form of resilience for people in social distancing during Covid-19 pandemic. Exploring novel user experiences in the intersection between light perception, tangible interactions and social interaction design (SxD) ». Thesis, Malmö universitet, Malmö högskola, Institutionen för konst, kultur och kommunikation (K3), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-37697.

Full text
Abstract:
This thesis project explores how interactive technologies can facilitate a sense of social connectedness with others whilst remotely located. While studying the way humans use rituals for emotional management, I focused my interest on the act of commensality, because it is one of the oldest and most important rituals used to foster togetherness among families and groups of friends. Dining with people who do not belong to the same household is of course hard during a global pandemic, just like many of the other forms of social interaction that were forcibly replaced by technological means such as video-chat apps, instant messaging and perhaps an excessive use of social networking websites. These ways of staying connected, however, lack the subtleties of real physical interaction, which I tried to replicate with my prototype system: two sets of a lamp and a coaster that enable two people to communicate through light and tactile cues. The use of these devices creates a new kind of ritual based on their simultaneous use by two people, thus enabling a new and original form of commensality that happens through a shared, synchronized experience.
9

Pederson, Thomas. « From Conceptual Links to Causal Relations — Physical-Virtual Artefacts in Mixed-Reality Space ». Doctoral thesis, Umeå : Univ, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-137.

Full text
10

Knibbe, Cédric. « Concevoir avec des technologies émergentes pour la construction conjointe des pratiques et des artefacts : apports d’une méthodologie participative à l’innovation technologique et pédagogique ». Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1123/document.

Full text
Abstract:
Information and Communication Technologies have the potential for deeply transforming teachers' practices. However, this requires design solutions to be adapted to these practices and, at the same time, to foster innovations, in terms of improvements for teaching and learning activities. This thesis aims at highlighting design factors that allow the articulation between these goals, in the context of a design project with emerging technologies for education. The research focuses on the design process: joint definition of a technical system (an application on an interactive tabletop) and of teaching practices (via pedagogical scenarios); involvement of future users; design hypothesis assessment modalities; framing the scope of design possibilities. Our hypotheses concern the potential effects of these factors on the reaching of a compromise between integration- and innovation-related goals. Analyses cover the entire design process, in order to longitudinally examine the various design techniques used and the design process advancement. In particular, design choices related to some of the features of the artifact are analyzed to investigate the links between design factors and integration/innovation-related goals. Results show that: (i) using and redefining pedagogical scenarios, involving users as co-designers, confronting the design solutions with prototypes and simulations and identifying users' needs facilitate the technical definition of the application and its integration in future teaching activities; (ii) defining the technical properties of an artifact, involving teachers as experimenters, identifying their needs and simulating on the design solution foster the adaptation of teachers' practices to the specificities of the technologies and optimize its integration; (iii) allowing participants to interact with the emerging technology in different ways and the mutual learning processes between designers, regarding tabletops' technical and interactional potential, help them capitalize on this potential; (iv) identifying the innovative features of tabletops, anticipating their potential uses, testing prototypes in real class situations and involving teachers, to let them learn how to use an emerging technology and to express the existing limits in their teaching practices, foster innovation in their pedagogical scenarios and, thus, can improve teaching and learning activities.
11

Aljundi, Liam. « Moving Mathematics : Exploring constructivist tools to enhance mathematics learning ». Thesis, Malmö universitet, Institutionen för konst, kultur och kommunikation (K3), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-42981.

Full text
Abstract:
The challenges faced by mathematics education reflect the more immense difficulties of the schooling system as a whole. This thesis investigates such challenges in the light of an ethical learning foundation and aims for a transformation through the use of technologies as learning tools.  Interaction design methods are used to craft constructivist learning kits that aim to move mathematics students from passive receivers of knowledge to active learners. The proposed tools modify new technologies by adapting them to teachers’ and learners’ needs to be best suited for mathematics classroom adoption. Additionally, social, political, and economic issues that may hinder the adoption of constructivist learning are presented and critically discussed.  Finally, this thesis paves the way for future designers who aim to design mathematics educational kits by providing a design framework based on the learning theory and the design process presented in this thesis.
12

Farr, William John. « Tangible user interfaces and social interaction in children with autism ». Thesis, University of Sussex, 2011. http://sro.sussex.ac.uk/id/eprint/6962/.

Full text
Abstract:
Tangible User Interfaces (TUIs) offer the potential for new modes of social interaction for children with Autism Spectrum Conditions (ASC). Familiar objects that are embedded with digital technology may help children with autism understand the actions of others by providing feedback that is logical and predictable. Objects that move, play back sound or create sound – thus repeating programmed effects – offer an exciting way for children to investigate objects and their effects. This thesis presents three studies of children with autism interacting with objects augmented with digital technology. Study one looked at Topobo, a construction toy augmented with kinetic memory. Children played with Topobo in groups of three of either Typically Developing (TD) or ASC children. The children were given a construction task, and were also allowed to play with the construction sets with no task. Topobo in the task condition showed an overall significant effect for more onlooker, cooperative, parallel, and less solitary behaviour. For ASC children significantly less solitary and more parallel behaviour was recorded than other play states. In study two, an Augmented Knights Castle (AKC) playset was presented to children with ASC. The task condition was extended to allow children to configure the playset with sound. A significant effect in a small sample was found for configuration of the AKC, leading to less solitary behaviour, and more cooperative behaviour. Compared to non-digital play, the AKC showed reduction of solitary behaviour because of augmentation. Qualitative analysis showed further differences in learning phase, user content, behaviour oriented to other children, and system responsiveness. Tangible musical blocks ('d-touch') in study three focused on the task. TD and ASC children were presented with a guided/non-guided task in pairs, to isolate effects of augmentation. Significant effects were found for an increase in cooperative symbolic play in the guided condition, and more solitary functional play was found in the unguided condition. Qualitative analysis highlighted differences in understanding blocks and block representation, exploratory and expressive play, understanding of shared space and understanding of the system. These studies suggest that the structure of the task conducted with TUIs may be an important factor for children's use. When the task is undefined, play tends to lose structure and the benefits of TUIs decline. Tangible technology needs to be used in an appropriately structured manner with close coupling (the distance between digital housing and digital effect), and works best when objects are presented in familiar form.
13

Le Goc, Mathieu. « Supporting Versatility in Tangible User Interfaces Using Collections of Small Actuated Objects ». Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS563/document.

Full text
Abstract:
In this dissertation, I present my work aiming at making tangible user interfaces more versatile with a higher degree of physicality, in order to bridge the gap between digital and physical worlds. To this end, I study and design systems which support interaction with digital information while better leveraging human hand capabilities. I start with an examination of the current related work, and highlight the need for further research towards more versatility with a higher degree of physicality. I argue that the specificity of existing systems tends to impair their usability and diffusion and induce a dependence on screens and other projections as media to represent the digital world. Building on lessons learned from previous work, I choose to focus my work on physical systems made of collections of generic and interactive objects. I articulate my research in four steps. Firstly, I present a study that compares tangible and multitouch interfaces to help assess potential benefits of physical objects. At the same time, I investigate the influence of object thickness on how users manipulate objects. Results suggest that conclusions from numerous previous studies need to be tempered, in particular regarding the advantages of physicality in terms of performance. These results however confirm that physicality improves user experience, due to the higher diversity of possible manipulations. As a second step, I present SmartTokens, a system based on small objects capable of detecting and recognizing user manipulations. I illustrate SmartTokens in a notification and personal task management scenario. In a third step, I introduce Swarm User Interfaces as a subclass of tangible user interfaces that are composed of collections of many interactive autonomous robots. To illustrate them, I present Zooids, an open-source open-hardware platform for developing tabletop Swarm User Interfaces. I demonstrate their potential and versatility through a set of application scenarios. I then describe their implementation, and clarify design considerations for Swarm User Interfaces. As a fourth step, I define composite data physicalizations and implement them using Zooids. I finally draw conclusions from the presented work, and open perspectives and directions for future work
14

Nowacka, Diana. « Autonomous behaviour in tangible user interfaces as a design factor ». Thesis, University of Newcastle upon Tyne, 2017. http://hdl.handle.net/10443/3708.

Full text
Abstract:
This thesis critically explores the design space of autonomous and actuated artefacts, considering how autonomous behaviours in interactive technologies might shape and influence users' interactions and behaviours. Since the invention of gearing and clockwork, mechanical devices were built that both fascinate and intrigue people through their mechanical actuation. There seems to be something magical about moving devices, which draws our attention and piques our interest. Progress in the development of computational hardware is allowing increasingly complex commercial products to be available to broad consumer-markets. New technologies emerge very fast, ranging from personal devices with strong computational power to diverse user interfaces, like multi-touch surfaces or gestural input devices. Electronic systems are becoming smaller and smarter, as they comprise sensing, controlling and actuation. From this, new opportunities arise in integrating more sensors and technology in physical objects. These trends raise some specific questions around the impacts smarter systems might have on people and interaction: how do people perceive smart systems that are tangible and what implications does this perception have for user interface design? Which design opportunities are opened up through smart systems? There is a tendency in humans to attribute life-like qualities onto non-animate objects, which evokes social behaviour towards technology. Maybe it would be possible to build user interfaces that utilise such behaviours to motivate people towards frequent use, or even motivate them to build relationships in which the users care for their devices. Their aim is not to increase the efficiency of user interfaces, but to create interfaces that are more engaging to interact with and excite people to bond with these tangible objects. This thesis sets out to explore autonomous behaviours in physical interfaces. More specifically, I am interested in the factors that make a user interpret an interface as autonomous. Through a review of literature concerned with animated objects, autonomous technology and robots, I have mapped out a design space exploring the factors that are important in developing autonomous interfaces. Building on this and utilising workshops conducted with other researchers, I have developed a framework that identifies key elements for the design of Tangible Autonomous Interfaces (TAIs). To validate the dimensions of this framework and to further unpack the impacts on users of interacting with autonomous interfaces I have adopted a 'research through design' approach. I have iteratively designed and realised a series of autonomous, interactive prototypes, which demonstrate the potential of such interfaces to establish themselves as social entities. Through two deeper case studies, consisting of an actuated helium balloon and desktop lamp, I provide insights into how autonomy could be implemented into Tangible User Interfaces. My studies revealed that through their autonomous behaviour (guided by the framework) these devices established themselves, in interaction, as social entities. They furthermore turned out to be acceptable, especially if people were able to find a purpose for them in their lives. This thesis closes with a discussion of findings and provides specific implications for design of autonomous behaviour in interfaces.
15

Eibl, Maximilian, et Marc Ritter. « Workshopband der Mensch & Computer 2011 ». Technische Universität Chemnitz, 2011. https://monarch.qucosa.de/id/qucosa%3A19535.

Full text
Abstract:
First initiated in 2001, the conference series Mensch & Computer has evolved into the leading event in the area of human-computer interaction in German-speaking countries, hosting extremely vivid and exciting contributions with an audience that is keen to debate. Taking place for the 11th time under the topical theme überMEDIEN|ÜBERmorgen, key topics of the conference are media themselves and their opportunities, risks, uses, influence on our lives and our influence on them, today and tomorrow. From the beginning, the workshops organized by the community have constituted a major part of the conference. These proceedings contain the contributions to eight workshops of Mensch & Computer, one workshop of the Entertainment Interfaces track, and brief descriptions of two further Mensch & Computer workshops: Begreifbare Interaktion in gemischten Wirklichkeiten; Interaktive Displays in der Kooperation – Herausforderung an Gestaltung und Praxis; Motivation und kulturelle Barrieren bei der Wissensteilung im Enterprise 2.0 (MKBE 2011); Mousetracking – Analyse und Interpretation von Interaktionsdaten; Menschen, Medien, Auto-Mobilität; mi.begreifbar – Medieninformatik begreifbar machen; Partizipative Modelle des mediengestützten Lernens – Erfahrungen und Visionen; Innovative Computerbasierte Musikinterfaces (ICMI); Senioren. Medien. Übermorgen.; Designdenken in Deutschland; Game Development in der Hochschulinformatik.
16

Riedenklau, Eckard [Verfasser]. « Development of actuated Tangible User Interfaces : new interaction concepts and evaluation methods / Eckard Riedenklau ». Bielefeld : Universitätsbibliothek Bielefeld, 2016. http://d-nb.info/1082845000/34.

Full text
17

Rose, Cody M. (Cody McCullough). « Towards interactive sustainable neighborhood design : combining a tangible user interface with real time building simulations ». Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99253.

Full text
Abstract:
Thesis: S.M. in Building Technology, Massachusetts Institute of Technology, Department of Architecture, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 73-74).
An increasingly urbanizing human population presents new challenges for urban planners and designers. While the field of urban design tools is expanding, urban development scenarios require the input of multiple stakeholders, each with different outlooks, expertise, requirements, and preconceptions, and good urban design requires communication and compromise as much as it requires effective use of tools. The best tools will facilitate this communication while remaining evidence-based, allowing diverse planning teams to develop high quality, healthy, sustainable urban plans. Presented in this work is a new such urban design tool, implemented as a design "game," created to facilitate collaboration between urban planners, designers, policymakers, citizens, and any other stakeholders in urban development scenarios. Users build a neighborhood or city out of Lego pieces on a plexiglass tabletop, and the system simulates the built design in real time, projecting colors onto the Lego pieces that reflect their performance with respect to three urban performance metrics: operational energy consumption, neighborhood walkability, and building daylighting availability. The system requires little training, allowing novice users to explore the design tradeoffs associated with urban density. The simulation method uses a novel precalculation method to quickly approximate the results of existing, validated simulation tools. The game is presented in the context of a case study that took place at the planning commission of Riyadh, Saudi Arabia in March 2015. Post-game analysis indicates that the precalculation method performs suitable approximations in the Saudi climate, and that users were able to use the interface to improve their neighborhoods' performance with respect to two of the three offered performance metrics. Furthermore, users demonstrated substantial enthusiasm for interactive, tangible, urban design of the sort provided. Improvements to future versions of the design game based on the case study are suggested, but overall, the work presented indicates that collaborative, interactive design tools for diverse stakeholders are an excellent path forward for sustainable design.
by Cody M. Rose.
S.M. in Building Technology
18

Edge, D. K. « Tangible user interfaces for peripheral interaction : episodic engagement with objects of physical, digital & ; social significance ». Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598755.

Full text
Abstract:
This dissertation describes how TUIs (Tangible User Interfaces) can support a “peripheral” style of interaction, in which users engage in short, dispersed episodes of low-attention interaction with digitally-augmented physical tokens. The application domain in which I develop this concept is the office context, where physical tokens can represent items of common interest to members of a team whose work is mutually interrelated, but predominantly performed independently by individuals at their desks. An “analytic design process” is introduced as a way of developing TUI designs appropriate for their intended contexts of use. This process is then used to present the design of a bimanual desktop TUI that complements the existing workstation, and encourages peripheral interaction in parallel with workstation-intensive tasks. Implementation of a prototype TUI is then described, comprising “task” tokens for work-time management, “document” tokens for face-to-face sharing of collaborative documents, and “contact” tokens for awareness of other team members’ status and workload. Finally, evaluation of this TUI is presented via description of its extended deployment in a real office context. The main empirically-grounded results of this work are a categorisation of the different ways in which users can interact with physical tokens, and an identification of the qualities of peripheral interaction that differentiate it from other interaction styles. The foremost benefits of peripheral interaction were found to arise from the freedom with which tokens can be appropriated to create meaningful information structures of both cognitive and social significance, in the physical desktop environment and beyond.
19

Finkelstein, Ali (Ali S.). « Implementation and design of an updated user interface for an interactive 3D printed tangible map of MIT ». Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112848.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 29).
The Tangible Map is a unique exhibit created by the MIT Mobile Experience Lab, located in the Atlas Welcome Service Center, to display dynamic data, like shuttle status and event information, collected across the campus of MIT in an interactive manner. Now, in the final stage of implementation, extensive research on the user interface demonstrated that it was lacking the features needed to provide a rich and engaging experience with the Tangible Map. In this study, I explore how an updated user interface improves the user's interactions when accessing live data and provides a richer experience through enhanced map functionalities. Additionally, I further discuss the establishment and implementation of a new user interface that provides this updated experience.
by Ali Finkelstein.
M. Eng.
20

Schmidt, Toni. « Interaction Concepts for Multi-Touch User Interfaces : Design and Implementation ». [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:352-opus-72395.

Full text
21

Rivière, Guillaume. « Interaction tangible sur table interactive : application aux géosciences ». Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13837/document.

Full text
Abstract:
This thesis focuses on tangible user interfaces (TUIs). The first part of this manuscript is about tangible interaction on tabletops. We first introduce TUIs and tabletops. We validate a hypothesis about the specialization of the form of the tangible objects, and draw from it consequences for TUI design. We propose the solution of a button box to offload some operations in the context of a tabletop TUI. We present the construction and development of a transportable and low-cost tabletop TUI system that allows rapid TUI prototyping. We end by pointing out the special features of user experiments with TUIs. The second part of this manuscript deals with an application case of a TUI for geoscience: GeoTUI. We start by presenting the context of the geophysicists' work and their need for new ways of interaction. We present the results of our design of a TUI for geoscience. We detail the development of our prototype. To finish, we present two user experiments we conducted to validate our design choices.
22

Reeves, Leah. « OPTIMIZING THE DESIGN OF MULTIMODAL USER INTERFACES ». Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4130.

Full text
Abstract:
Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated with a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources. These results provide initial empirical support for validation of the overall AMMO model and a sub-set of the principle-driven multimodal design guidelines derived from it. The empirically-validated multimodal design guidelines may be applicable to a wide range of information-intensive computer-based multitasking environments.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering PhD
23

Micheloni, Edoardo. « Models and methods for sound-based input in Natural User Interfaces ». Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3422847.

Full text
Abstract:
In the last years, Multimodal Interfaces and Natural User Interfaces (NUIs) have been finding more and more applications, thanks to the diffusion of mobile devices and smart objects that do not allow a traditional WIMP interaction. In these contexts, the most used interaction modes are natural language and gesture recognition. The objective of this thesis is to explore innovative interfaces based on non-verbal sounds produced by the interaction of the user with common objects. The potentialities and the problems related to the design and implementation of this type of interface are discussed through three case studies, in which non-verbal sounds are used for interaction with embedded systems developed for the valorization of cultural heritage. The sounds analysed in these projects are i) broadband noises, ii) impulses and iii) pitched sounds. The obtained results, thanks to a strongly multidisciplinary approach, opened the way to a fruitful technology transfer between the university and the companies and institutions involved. First of all, the study of broadband noisy sounds was addressed through the interpretation of an air-blown signal. The resulting sensor-equipped system was included in a multimedia installation for the valorization of an ancient Pan flute preserved at the Museum of Archaeological Sciences and Art of Padova (Italy). Secondly, impulsive sounds were studied through footstep detection on a wooden runway in order to realize a real-time position-mapping technology. The resulting system was used for the 3D exploration of a conventional 2D painting exhibited during "The European Researchers' Night 2018" in Padova (Italy). Finally, pitched sound signals were studied by analysing notes produced by an acoustic piano. The resulting algorithm for real-time note detection was applied to the video game Musa, whose goal is to teach children how to play the piano. In these projects, both the algorithms, by means of quantitative analysis, and the interfaces between user and computer, by means of qualitative analysis, were validated to assess the "naturalness" of the interaction.
24

Jain, Nibha. « Exploring interactive tangrams for teaching basic school physics ». Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34755.

Full text
Abstract:
This thesis explores the application of Tangible User Interfaces to education. For this, a research study was conducted by building and testing an interactive game called Tangram Bridge. This tangram-based game was designed to teach players basic physics principles such as balance, friction and motion on inclined planes. The focus of Tangram Bridge is middle-school physics, and it therefore concerns children aged 11 years and up, their instructors and caregivers. This research also places strong emphasis on constructive play amongst children. Tangram Bridge is a versatile platform that can be scaled for younger or older populations. A comparative study of existing Tangible User Interfaces (TUIs) revealed opportunity spaces for this project. Through a compilation of related research in the fields of education, hands-on learning, tangible interaction and the understanding of play and learning amongst children, constructionist views on learning are explored as guidelines for the design of this study. Through the analysis of comparative research studies, trends on TUIs in relation to education emerged, informing the design process for Tangram Bridge. This research study discusses the application of tangible user interfaces to education. It combines the research data collected through market research, user testing and literature reviews to explore the efficacy of TUIs as a teaching tool for abstract concepts that require imagination and experimentation.
Styles APA, Harvard, Vancouver, ISO, etc.
25

Took, Roger Kenton. « Surface interaction : separating direct manipulation interfaces from their applications ». Thesis, University of York, 1990. http://etheses.whiterose.ac.uk/13997/.

Texte intégral
Résumé :
To promote both quality and economy in the production of applications and their interactive interfaces, it is desirable to delay their mutual binding. The later the binding, the more separable the interface from its application. An ideally separated interface can factor tasks from a range of applications, can provide a level of independence from hardware I/O devices, and can be responsive to end-user requirements. Current interface systems base their separation on two different abstractions. In linguistic architectures, for example User Interface Management Systems in the Seeheim model, the dialogue or syntax of interaction is abstracted in a separate notation. In agent architectures like Toolkits, interactive devices, at various levels of complexity, are abstracted into a class or call hierarchy. This thesis identifies an essential feature of the popular notion of direct manipulation: directness requires that the same object be used both for output and input. In practice this compromises the separation of both dialogue and devices. In addition, dialogue cannot usefully be abstracted from its application functionality, while device abstraction reduces the designer's expressive control by binding presentation style to application semantics. This thesis proposes an alternative separation, based on the abstraction of the medium of interaction, together with a dedicated user agent which allows direct manipulation of the medium. This interactive medium is called the surface. The thesis proposes two new models for the surface, the first of which has been implemented as Presenter, the second of which is an ideal design permitting document-quality interfaces. The major contribution of the thesis is a precise specification of an architecture (UMA), whereby a separated surface can preserve directness without binding in application semantics, and at the same time an application can express its semantics on the surface without needing to manage all the details of interaction. Thus UMA partitions interaction into Surface Interaction and deep interaction. Surface Interaction factors out a large portion of the task of maintaining a highly manipulable interface, and brings the roles of user and application designer closer.
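To make the proposed separation concrete, the sketch below (purely illustrative; the class and method names such as Surface and register are assumptions, not Took's UMA specification) shows a surface that owns presentation objects and handles direct manipulation locally, forwarding only semantic events to the application:

    from typing import Callable, Dict, Tuple

    Position = Tuple[float, float]

    class Surface:
        """Owns display objects and manipulation; knows nothing about application semantics."""
        def __init__(self) -> None:
            self.objects: Dict[str, Position] = {}
            self.handlers: Dict[str, Callable[[str, Position], None]] = {}

        def register(self, event: str, handler: Callable[[str, Position], None]) -> None:
            self.handlers[event] = handler                    # deep (application) interaction hook

        def drag(self, obj_id: str, new_pos: Position) -> None:
            self.objects[obj_id] = new_pos                    # surface interaction handled locally
            if "moved" in self.handlers:
                self.handlers["moved"](obj_id, new_pos)       # only the semantic result is forwarded

    # The application expresses its semantics without managing the details of manipulation.
    surface = Surface()
    surface.objects["icon-1"] = (0.0, 0.0)
    surface.register("moved", lambda oid, pos: print(f"application notified: {oid} moved to {pos}"))
    surface.drag("icon-1", (120.0, 45.0))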
Styles APA, Harvard, Vancouver, ISO, etc.
26

Carden, Benjamin J. « Do touch : The impact of tangible interaction on situated community engagement ». Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/127641/1/Benjamin_Carden_Thesis.pdf.

Texte intégral
Résumé :
This research investigated how the use of tangible interaction and map-based interfaces can affect the quality of participants' responses in situated community engagement. This was done by creating two prototypes of a tangible mapping interface that were deployed as urban probes for community engagement. Results from the studies suggest that tangible interfaces encourage playful engagement and discussion, which in turn result in participants putting more thought into their responses and generating richer data.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Sadun, Erica. « Djasa : a language, environment and methodology for interaction design ». Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/9250.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
28

Mathew, Justin D. « A design framework for user interfaces of 3D audio production tools ». Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS328/document.

Texte intégral
Résumé :
Il y a un intérêt important et croissant à procurer des expériences d’écoute immersives pour une variété d’applications, et les améliorations constantes des technologies de reproduction audio 3D permettent de produire des scènes auditives immersives à la fois créatives et réalistes. Mais bien que ces technologies de rendu audio 3D soient maintenant relativement disponibles pour les consommateurs, la production et la création des contenus adéquats restent difficiles en raison de la variété des techniques de rendu, des considérations perceptives et des limites des interfaces utilisateur disponibles. Cette thèse traite de ces problèmes en développant un cadre de conception basé sur deux points de vue : l’analyse morphologique des méthodes et des pratiques audio 3D, et la conception d’interaction. À partir du recueil de données ethnographiques sur les outils, les méthodes et les pratiques pour la production de contenu audio 3D, de considérations sur la perception spatiale liée à l’audio 3D, et d’une analyse morphologique sur les objets d’intérêt connexes (objets audio 3D, paramètres interactifs et techniques de rendu), nous avons identifié les taches que doivent supporter les interfaces utilisateur audio 3D et proposé un cadre de conception qui caractérise la création et la manipulation des objets audio. Ensuite, nous avons conçu plusieurs techniques d’interaction pour la création audio 3D et avons étudié leurs performances et leur facilité d’utilisation selon différentes caractéristiques des méthodes d’entrée et de ’mapping’ (multiplexage, intégralité, ’directitude’). Nous avons observé des différences de performances lors de la création et de l’édition de trajectoires audio suggérant que l’augmentation de la sensibilité de la technique de ’mapping’ améliore les performances, et qu’un équilibre entre la séparabilité et l’intégralité des méthodes d’entrée peut résulter en un compromis satisfaisant entre la performance de l’utilisateur et la simplicité matérielle de la solution. Plus généralement, à partir de ces perspectives, nous avons identifié les critères de conception requis pour les interfaces utilisateur audio 3D en vue de compléter notre cadre de conception. Ce dernier, associé à nos résultats expérimentaux, sont un moyen d’aider les concepteurs à mieux prendre en compte les dimensions importantes dans le processus de conception, analyser les fonctionnalités et améliorer les interfaces utilisateur pour les outils de production audio 3D
There has been significant interest in providing immersive listening experiences for a variety of applications, and recent improvements in audio production have given 3D audio practitioners the capability to produce realistic and imaginative immersive auditory scenes. Even though technologies to reproduce 3D audio content are becoming readily available to consumers, producing and authoring this type of content is difficult due to the variety of rendering techniques, perceptual considerations, and limitations of available user interfaces. This thesis examines these issues through the development of a framework of design spaces that classifies how 3D audio objects can be created and manipulated from two different viewpoints: Morphological Analysis of 3D Audio Methods and Practices, and Interaction Design. By gathering ethnographic data on the tools, methods and practices of 3D audio practitioners, reviewing spatial perception related to 3D audio, and conducting a morphological analysis of related objects of interest (3D audio objects, interactive parameters, and rendering techniques), we identified the tasks required to produce 3D audio content and how 3D audio objects can be created and manipulated. This work provided the dimensions of two design spaces that identify the interactive spatial parameters of audio objects by their recording and rendering methods, describing how user interfaces provide visual feedback on and control of these interactive parameters. Lastly, we designed several interaction techniques for 3D audio authoring and studied their performance and usability according to different characteristics of input and mapping methods (multiplexing, integrality, directness). We observed performance differences when creating and editing audio trajectories, suggesting that increasing the directness of the mapping technique improves performance and that a balance between separability and integrality of input methods can result in a satisfactory trade-off between user performance and cost of equipment. This study provided results that inform designers about what they might expect in terms of usability when designing input and mapping methods for 3D audio trajectory authoring tasks. From these viewpoints, we proposed the design criteria required for user interfaces for 3D audio production, which extended and refined the framework of design spaces. We believe this framework and the results of our studies could help designers better account for important dimensions in the design process, analyze functionalities in current tools, and improve the usability of user interfaces for 3D audio production tools.
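Although the thesis is about interfaces and interaction techniques rather than rendering code, the interactive spatial parameters it refers to are commonly expressed as azimuth, elevation and distance relative to the listener; a minimal, purely illustrative conversion (the function name and angle conventions are assumptions, not taken from the thesis) is:

    import math
    from typing import Tuple

    def to_spherical(x: float, y: float, z: float) -> Tuple[float, float, float]:
        """Cartesian position (metres, listener at the origin) -> (azimuth deg, elevation deg, distance m)."""
        distance = math.sqrt(x * x + y * y + z * z)
        azimuth = math.degrees(math.atan2(y, x))              # 0 deg along +x, counter-clockwise
        elevation = math.degrees(math.asin(z / distance)) if distance > 0 else 0.0
        return azimuth, elevation, distance

    # An audio object two metres to the front-left of the listener, slightly above ear height
    print(to_spherical(1.5, 1.5, 0.5))                        # roughly (45.0, 13.3, 2.18)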
Styles APA, Harvard, Vancouver, ISO, etc.
29

Santos, Lages Wallace. « Walk-Centric User Interfaces for Mixed Reality ». Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84460.

Texte intégral
Résumé :
Walking is a natural part of our lives and is also becoming increasingly common in mixed reality. Wireless headsets and improved tracking systems allow us to easily navigate real and virtual environments by walking. In spite of the benefits, walking brings challenges to the design of new systems. In particular, designers must be aware of cognitive and motor requirements so that walking does not negatively impact the main task. Unfortunately, those demands are not yet fully understood. In this dissertation, we present new scientific evidence, interaction designs, and analyses of the role of walking in different mixed reality applications. We evaluated the difference in performance between users walking and users manipulating a dataset during visual analysis. This is an important task, since virtual reality is increasingly being used as a way to make sense of progressively complex datasets. Our findings indicate that neither option is absolutely better: the optimal design choice should consider both the user's experience with controllers and the user's inherent spatial ability. Participants with reasonable game experience and low spatial ability performed better using the manipulation technique. However, we found that walking can still enable higher performance for participants with low spatial ability and without significant game experience. In augmented reality, specifying points in space is an essential step in creating content that is registered with the world. However, this task can be challenging when information about the depth or geometry of the target is not available. We evaluated different augmented reality techniques for point marking that do not rely on any model of the environment. We found that triangulation by physically walking between points provides higher accuracy than purely perceptual methods. However, precision may be affected by head-pointing tremor. To increase precision, we designed a new technique that uses multiple samples to obtain a better estimate of the target position. This technique can also be used to mark points while walking. The effectiveness of this approach was demonstrated with a controlled augmented reality simulation and actual outdoor tests. Moving into the future, augmented reality will eventually replace our mobile devices as the main method of accessing information. Nonetheless, to achieve its full potential, augmented reality interfaces must support the fluid way we move in the world. We investigated the potential of adaptation in achieving this goal. We conceived and implemented an adaptive workspace system, based on a study of the design space and on contextual user studies. Our final design consists of a minimal set of techniques to support mobility and integration with the real world. We also identified a set of key interaction patterns and desirable properties of adaptation-based techniques, which can be used to guide the design of the next generation of walking-centered workspaces.
Ph. D.
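The triangulation-by-walking idea can be illustrated with a standard two-ray estimate: record the observer's position and head-pointing direction at two locations and take the midpoint of the shortest segment between the two rays. The sketch below is not the dissertation's implementation; the function name and inputs are assumptions.

    import numpy as np

    def triangulate(p1, d1, p2, d2):
        """Estimate a target from two bearing rays (observer positions p1, p2 and unit directions d1, d2)."""
        p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
        # Solve for the ray parameters t1, t2 minimising |(p1 + t1*d1) - (p2 + t2*d2)|
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        w = p1 - p2
        denom = a * c - b * b                                 # near zero when the rays are almost parallel
        t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
        t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
        return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0        # midpoint of the closest approach

    # Two observations of a target at (5, 5, 0) taken a few metres apart
    print(triangulate([0, 0, 0], [0.7071, 0.7071, 0], [4, 0, 0], [0.1961, 0.9806, 0]))   # ~ [5. 5. 0.]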
Styles APA, Harvard, Vancouver, ISO, etc.
30

Nobelius, Jörgen. « Fysisk, känslomässig och social interaktion : En analys av upplevelserna av robotsälen Paro hos kognitivt funktionsnedsatta och på äldreboende ». Thesis, Södertörns högskola, Institutionen för kommunikation, medier och it, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-15096.

Texte intégral
Résumé :
This field study examined how elderly and cognitively disabled people used and experienced a social companion robot. The following pages explore the question: what are the physical, social and affective qualities of the interaction? The aim was, through observations, to see how qualities of the interaction could activate different forms of behavior. The results show that motion, sound and the eyes together created communicative and emotional changes in users, who showed joy and were willing to share the activity with others. To some extent the robot stimulated users to create their own imaginative experiences, but it often failed to involve the user or the group for any length of time and was also considered too large and heavy to handle.
Denna fältstudie undersökte hur äldre och kognitivt funktionsnedsatta personer använde och upplevde en social robot. Följande sidor utforskar frågorna: Vilka fysiska, sociala och affektiva kvaliteter finns i interaktionen? Målet var att genom observationer se hur kvaliteterna i interaktionen kunde aktivera olika typer av beteenden. Resultatet visar att rörelse, ljud och ögon tillsammans skapade kommunikativa och känslomässiga förändringar hos användarna som visade glädje och som gärna delade upplevelsen med andra. Roboten stimulerade till viss del användarna att skapa egna fantasifulla upplevelser men lyckades inte ofta involvera användare eller grupp under någon längre tid och ansågs även vara för stor och tung att hantera.
Styles APA, Harvard, Vancouver, ISO, etc.
31

CAMPANA, JULIA RAMOS. « DESIGN OF GRAPHICAL ROBOT USER INTERFACES : A STUDY OF USABILITY AND HUMAN-MACHINE INTERACTION ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=34356@1.

Texte intégral
Résumé :
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
Hoje, os constantes avanços tecnológicos em interfaces digitais, e por consequência as interfaces gráficas do usuário, se fazem cada vez mais presentes na interação humano-máquina. Porém, num contexto em que sistemas inteligentes, a exemplo dos sistemas robóticos, já são uma realidade, ainda restam lacunas a serem preenchidas quando se pensa em integrar, com fluidez, robôs a trabalhos customizados e complexos. Esta pesquisa tem como foco a análise da usabilidade de interfaces de usuário específicas para a interação com robôs remotos também conhecidas como Robot User Interfaces (RUIs). Quando bem executadas, tais interfaces permitem aos operadores realizar remotamente tarefas em ambientes complexos. Para tanto, trabalha-se com a hipótese de que, se RUIs forem concebidas considerando as especificidades desses modelos de interação, as falhas operacionais serão reduzidas. O objetivo desta pesquisa foi avaliar diretrizes específicas para sistemas robóticos, compreendendo a relevância destas na usabilidade de interfaces. Para uma base teórica, foram levantados os modelos já existentes de interação com robôs e sistemas automatizados; e os princípios de design que se aplicam a estes modelos. Após a revisão bibliográfica, foram realizadas entrevistas contextuais com usuários de sistemas robóticos e testes de usabilidade, a fim de reproduzir, em interfaces com e sem diretrizes de RUIs, os processos de interação na realização de tarefas. Os resultados finais das técnicas aplicadas apontaram para a validade da hipótese - se interfaces específicas para sistemas robóticos forem concebidas considerando as especificidades dos modelos de interação humano-robô, as falhas operacionais na interação serão reduzidas - à medida que os sistemas desenvolvidos com interfaces específicas ao contexto de interação com robôs proporcionaram uma melhor usabilidade e mitigaram a ocorrência de uma série de possíveis falhas humanas.
Nowadays, constant technological advances, and consequently graphical user interfaces, are becoming more and more present in human-machine interaction. However, in a context where intelligent systems, such as robotic systems, are already a reality, there are still gaps to be filled when we think about integrating robots with custom and complex activities. This research focuses on the analysis of the usability of Robot User Interfaces (RUIs). When well executed, such interfaces allow operators to perform tasks remotely in complex environments. To that end, our hypothesis is that, if RUIs are conceived considering the specificities of these interaction models, operational failures will be reduced. The main goal of this research was to evaluate specific guidelines for robotic systems and to understand their relevance to usability. For a theoretical basis, the existing models of interaction with robots and autonomous systems were surveyed, as well as the design principles that apply to these models. After a bibliographic review, we conducted contextual interviews with users of robotic systems, and usability tests to reproduce, in interfaces with and without RUI guidelines, the interaction processes involved in task completion. The final results of the applied techniques supported the validity of the hypothesis, as the systems developed with interfaces specific to the context of interaction with robots provided better usability and mitigated the occurrence of an array of human errors.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Lindberg, Martin. « Introducing Gestures : Exploring Feedforward in Touch-Gesture Interfaces ». Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23555.

Texte intégral
Résumé :
This interaction design thesis aimed to explore how users could be introduced to the different functionalities of a gesture-based touch-screen interface. This was done through a user-centred design research process in which experienced users taught the designer how to use different artefacts. Insights from this process laid the foundation for an interactive, digital gesture-introduction prototype. Testing said prototype with users yielded this study's results. While the prototype contained several areas for improvement regarding implementation and behaviour, its base methods and qualities were well received. Further development would be needed to fully assess its viability. The user-centred research methods used in this project proved valuable for the later ideation and prototyping stages. Activities and results from this project indicate a potential for designers to further explore ways of ensuring the discoverability of touch-gesture interactions. For future projects the author suggests more extensive research and testing using a larger sample size and a wider demographic.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Seichter, Hartmut. « Augmented reality aided design ». Thesis, View the Table of Contents & Abstract, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38289052.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
34

Fernández, Baena Adso. « Animation and Interaction of Responsive, Expressive, and Tangible 3D Virtual Characters ». Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/311800.

Texte intégral
Résumé :
This thesis is framed within the field of 3D character animation. Virtual characters are used in many human-computer interaction applications, such as video games and serious games. Within these virtual worlds they move and act in ways similar to humans, controlled by users through some form of interface or by artificial intelligence. This work addresses the challenges of developing smoother movements and more natural behaviors, and of driving motion in real time, intuitively, and accurately. The interaction between virtual characters and intelligent objects is also explored. By researching these subjects, the work contributes to creating more responsive, expressive, and tangible virtual characters. Navigation within virtual worlds uses locomotion such as walking, running, etc. To achieve maximum realism, actors' movements are captured and used to animate virtual characters. This is the philosophy of motion graphs: a structure that embeds movements and from which a continuous motion stream is generated by concatenating motion pieces. However, locomotion synthesis using motion graphs involves a trade-off between the number of possible transitions between different kinds of locomotion and the quality of these transitions, meaning smooth transitions between poses. To overcome this drawback, we propose the method of progressive transitions using Body Part Motion Graphs (BPMGs). This method deals with partial movements and generates specific, synchronized transitions for each body part (group of joints) within a window of time. Therefore, the connectivity within the system is not linked to the similarity between global poses, allowing us to find more and better-quality transition points while increasing the speed of response and execution of these transitions, in contrast to the standard motion graph method. Secondly, beyond achieving faster transitions and smoother movements, virtual characters also interact with each other and with users by speaking. This interaction requires the creation of gestures appropriate to the voice they reproduce. Gestures are the nonverbal language that accompanies spoken language. The credibility of virtual characters when speaking is linked to the naturalness of their movements and their synchrony with the voice, in both speech and intonation. Consequently, we analyzed the relationship between gestures and speech, and the generation of gestures according to that speech. We defined intensity indicators for both gestures (GSI, Gesture Strength Indicator) and speech (PSI, Pitch Strength Indicator). We studied the relationship in time and intensity between these cues in order to establish synchrony and intensity rules. We then applied these rules to select gestures appropriate to the speech input (tagged text from the speech signal) in the Gesture Motion Graph (GMG). The evaluation of the resulting animations shows the importance of relating the intensity of speech and gestures, beyond time synchronization, to generate believable animations. Subsequently, we present a system for the automatic generation of gestures and facial animation from a speech signal: BodySpeech. This system also includes animation improvements such as increased use of the input data, more flexible time synchronization, and new features like editing the style of the output animations. In addition, the facial animation takes speech intonation into account. Finally, we have moved virtual characters from virtual environments to the physical world in order to explore their possibilities for interaction with real objects. 
To this end, we present AvatARs, virtual characters that have a tangible representation and are integrated into reality through augmented reality apps on mobile devices. Users manipulate a physical object in order to control the animation; they can select and configure the animation, and the object also serves as a support for the represented virtual character. We then explored the interaction of AvatARs with intelligent physical objects such as the Pleo social robot. Pleo is used to assist hospitalized children in therapy or simply for playing. Despite its benefits, there is a lack of emotional relationship and interaction between the children and Pleo, which eventually makes children lose interest. This is why we have created a mixed reality scenario where Vleo (an AvatAR in the form of Pleo, the virtual element) and Pleo (the real element) interact naturally. This scenario has been tested, and the results show that AvatARs enhance children's motivation to play with Pleo, opening a new horizon in the interaction between virtual characters and robots.
Aquesta tesi s'emmarca dins del món de l'animació de personatges virtuals tridimensionals. Els personatges virtuals s'utilitzen en moltes aplicacions d'interacció home màquina, com els videojocs o els serious games, on es mouen i actuen de forma similar als humans dins de mons virtuals, i on són controlats pels usuaris per mitjà d'alguna interfície, o d'altra manera per sistemes intel·ligents. Reptes com aconseguir moviments fluids i comportament natural, controlar en temps real el moviment de manera intuitiva i precisa, i inclús explorar la interacció dels personatges virtuals amb elements físics intel·ligents; són els que es treballen a continuació amb l'objectiu de contribuir en la generació de personatges virtuals responsius, expressius i tangibles. La navegació dins dels mons virtuals fa ús de locomocions com caminar, córrer, etc. Per tal d'aconseguir el màxim de realisme, es capturen i reutilitzen moviments d'actors per animar els personatges virtuals. Així funcionen els motion graphs, una estructura que encapsula moviments i per mitjà de cerques dins d'aquesta, els concatena creant un flux continu. La síntesi de locomocions usant els motion graphs comporta un compromís entre el número de transicions entre les diferents locomocions, i la qualitat d'aquestes (similitud entre les postures a connectar). Per superar aquest inconvenient, proposem el mètode transicions progressives usant Body Part Motion Graphs (BPMGs). Aquest mètode tracta els moviments de manera parcial, i genera transicions específiques i sincronitzades per cada part del cos (grup d'articulacions) dins d'una finestra temporal. Per tant, la conectivitat del sistema no està lligada a la similitud de postures globals, permetent trobar més punts de transició i de més qualitat, i sobretot incrementant la rapidesa en resposta i execució de les transicions respecte als motion graphs estàndards. En segon lloc, més enllà d'aconseguir transicions ràpides i moviments fluids, els personatges virtuals també interaccionen entre ells i amb els usuaris parlant, creant la necessitat de generar moviments apropiats a la veu que reprodueixen. Els gestos formen part del llenguatge no verbal que acostuma a acompanyar a la veu. La credibilitat dels personatges virtuals parlants està lligada a la naturalitat dels seus moviments i a la concordança que aquests tenen amb la veu, sobretot amb l'entonació d'aquesta. Així doncs, hem realitzat l'anàlisi de la relació entre els gestos i la veu, i la conseqüent generació de gestos d'acord a la veu. S'han definit indicadors d'intensitat tant per gestos (GSI, Gesture Strength Indicator) com per la veu (PSI, Pitch Strength Indicator), i s'ha estudiat la relació entre la temporalitat i la intensitat de les dues senyals per establir unes normes de sincronia temporal i d'intensitat. Més endavant es presenta el Gesture Motion Graph (GMG), que selecciona gestos adients a la veu d'entrada (text anotat a partir de la senyal de veu) i les regles esmentades. L'avaluació de les animaciones resultants demostra la importància de relacionar la intensitat per generar animacions cre\"{ibles, més enllà de la sincronització temporal. Posteriorment, presentem un sistema de generació automàtica de gestos i animació facial a partir d'una senyal de veu: BodySpeech. Aquest sistema també inclou millores en l'animació, major reaprofitament de les dades d'entrada i sincronització més flexible, i noves funcionalitats com l'edició de l'estil les animacions de sortida. 
A més, l'animació facial també té en compte l'entonació de la veu. Finalment, s'han traslladat els personatges virtuals dels entorns virtuals al món físic per tal d'explorar les possibilitats d'interacció amb objectes reals. Per aquest fi, presentem els AvatARs, personatges virtuals que tenen representació tangible i que es visualitzen integrats en la realitat a través d'un dispositiu mòbil gràcies a la realitat augmentada. El control de l'animació es duu a terme per mitjà d'un objecte físic que l'usuari manipula, seleccionant i parametritzant les animacions, i que al mateix temps serveix com a suport per a la representació del personatge virtual. Posteriorment, s'ha explorat la interacció dels AvatARs amb objectes físics intel·ligents com el robot social Pleo. El Pleo s'utilitza per a assistir a nens hospitalitzats en teràpia o simplement per jugar. Tot i els seus beneficis, hi ha una manca de relació emocional i interacció entre els nens i el Pleo que amb el temps fa que els nens perdin l'interès en ell. Així doncs, hem creat un escenari d'interacció mixt on el Vleo (un AvatAR en forma de Pleo; element virtual) i el Pleo (element real) interactuen de manera natural. Aquest escenari s'ha testejat i els resultats conclouen que els AvatARs milloren la motivació per jugar amb el Pleo, obrint un nou horitzó en la interacció dels personatges virtuals amb robots.
Esta tesis se enmarca dentro del mundo de la animación de personajes virtuales tridimensionales. Los personajes virtuales se utilizan en muchas aplicaciones de interacción hombre máquina, como los videojuegos y los serious games, donde dentro de mundo virtuales se mueven y actúan de manera similar a los humanos, y son controlados por usuarios por mediante de alguna interfaz, o de otro modo, por sistemas inteligentes. Retos como conseguir movimientos fluidos y comportamiento natural, controlar en tiempo real el movimiento de manera intuitiva y precisa, y incluso explorar la interacción de los personajes virtuales con elementos físicos inteligentes; son los que se trabajan a continuación con el objetivo de contribuir en la generación de personajes virtuales responsivos, expresivos y tangibles. La navegación dentro de los mundos virtuales hace uso de locomociones como andar, correr, etc. Para conseguir el máximo realismo, se capturan y reutilizan movimientos de actores para animar los personajes virtuales. Así funcionan los motion graphs, una estructura que encapsula movimientos y que por mediante búsquedas en ella, los concatena creando un flujo contínuo. La síntesi de locomociones usando los motion graphs comporta un compromiso entre el número de transiciones entre las distintas locomociones, y la calidad de estas (similitud entre las posturas a conectar). Para superar este inconveniente, proponemos el método transiciones progresivas usando Body Part Motion Graphs (BPMGs). Este método trata los movimientos de manera parcial, y genera transiciones específicas y sincronizadas para cada parte del cuerpo (grupo de articulaciones) dentro de una ventana temporal. Por lo tanto, la conectividad del sistema no está vinculada a la similitud de posturas globales, permitiendo encontrar más puntos de transición y de más calidad, incrementando la rapidez en respuesta y ejecución de las transiciones respeto a los motion graphs estándards. En segundo lugar, más allá de conseguir transiciones rápidas y movimientos fluídos, los personajes virtuales también interaccionan entre ellos y con los usuarios hablando, creando la necesidad de generar movimientos apropiados a la voz que reproducen. Los gestos forman parte del lenguaje no verbal que acostumbra a acompañar a la voz. La credibilidad de los personajes virtuales parlantes está vinculada a la naturalidad de sus movimientos y a la concordancia que estos tienen con la voz, sobretodo con la entonación de esta. Así pues, hemos realizado el análisis de la relación entre los gestos y la voz, y la consecuente generación de gestos de acuerdo a la voz. Se han definido indicadores de intensidad tanto para gestos (GSI, Gesture Strength Indicator) como para la voz (PSI, Pitch Strength Indicator), y se ha estudiado la relación temporal y de intensidad para establecer unas reglas de sincronía temporal y de intensidad. Más adelante se presenta el Gesture Motion Graph (GMG), que selecciona gestos adientes a la voz de entrada (texto etiquetado a partir de la señal de voz) y las normas mencionadas. La evaluación de las animaciones resultantes demuestra la importancia de relacionar la intensidad para generar animaciones creíbles, más allá de la sincronización temporal. Posteriormente, presentamos un sistema de generación automática de gestos y animación facial a partir de una señal de voz: BodySpeech. 
Este sistema también incluye mejoras en la animación, como un mayor aprovechamiento de los datos de entrada y una sincronización más flexible, y nuevas funcionalidades como la edición del estilo de las animaciones de salida. Además, la animación facial también tiene en cuenta la entonación de la voz. Finalmente, se han trasladado los personajes virtuales de los entornos virtuales al mundo físico para explorar las posibilidades de interacción con objetos reales. Para este fin, presentamos los AvatARs, personajes virtuales que tienen representación tangible y que se visualizan integrados en la realidad a través de un dispositivo móvil gracias a la realidad aumentada. El control de la animación se lleva a cabo mediante un objeto físico que el usuario manipula, seleccionando y configurando las animaciones, y que a su vez sirve como soporte para la representación del personaje. Posteriormente, se ha explorado la interacción de los AvatARs con objetos físicos inteligentes como el robot Pleo. Pleo se utiliza para asistir a niños en terapia o simplemente para jugar. Todo y sus beneficios, hay una falta de relación emocional y interacción entre los niños y Pleo que con el tiempo hace que los niños pierdan el interés. Así pues, hemos creado un escenario de interacción mixto donde Vleo (AvatAR en forma de Pleo; virtual) y Pleo (real) interactúan de manera natural. Este escenario se ha testeado y los resultados concluyen que los AvatARs mejoran la motivación para jugar con Pleo, abriendo un nuevo horizonte en la interacción de los personajes virtuales con robots.
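The motion-graph idea summarised above rests on finding frames in different clips whose poses are similar enough to transition between; a minimal sketch of that candidate search (the Euclidean pose distance and whole-body comparison are simplifications of the per-body-part BPMG method) is:

    import numpy as np

    def transition_candidates(clip_a: np.ndarray, clip_b: np.ndarray, threshold: float):
        """Return (frame_in_a, frame_in_b, distance) triples whose pose distance is below threshold.

        Each clip is an (n_frames, n_dofs) array of joint angles."""
        candidates = []
        for i, pose_a in enumerate(clip_a):
            dists = np.linalg.norm(clip_b - pose_a, axis=1)   # distance to every frame of clip_b
            for j in np.flatnonzero(dists < threshold):
                candidates.append((i, int(j), float(dists[j])))
        return candidates

    # Toy example: two short "clips" with four degrees of freedom per pose
    walk = np.array([[0.0, 0.1, 0.2, 0.0], [0.1, 0.2, 0.3, 0.1]])
    run = np.array([[0.9, 0.8, 0.7, 0.9], [0.12, 0.21, 0.33, 0.08]])
    print(transition_candidates(walk, run, threshold=0.2))    # only the last poses are close enough to join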
Styles APA, Harvard, Vancouver, ISO, etc.
35

Jansen, Yvonne. « Physical and tangible information visualization ». Phd thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-00983501.

Texte intégral
Résumé :
Visualizations in the most general sense of external, physical representations of information are older than the invention of writing. Generally, external representations promote external cognition and visual thinking, and humans developed a rich set of skills for crafting and exploring them. Computers have immensely increased the amount of data we can collect and process, and have diversified the ways we can represent it visually. Computer-supported visualization systems, studied in the field of information visualization (infovis), have become powerful and complex, and sophisticated interaction techniques are now necessary to control them. With the widening of technological possibilities beyond classic desktop settings, new opportunities have emerged. Not only can display surfaces of arbitrary shapes and sizes be used to show richer visualizations, but new input technologies can also be used to manipulate them. For example, tangible user interfaces are an emerging input technology that capitalizes on humans' abilities to manipulate physical objects. However, these technologies have barely been studied in the field of information visualization. A first problem is a poorly defined terminology. In this dissertation, I define and explore the conceptual space of embodiment for information visualization. For visualizations, embodiment refers to the level of congruence between the visual elements of the visualization and their physical shape. This concept subsumes previously introduced concepts such as tangibility and physicality. For example, tangible computing aims to represent virtual objects through a physical form, but the form is not necessarily congruent with the virtual object. A second problem is the scarcity of convincing applications of tangible user interfaces for infovis purposes. In information visualization, standard computer displays and input devices are still widespread and considered the most effective. Both of these, however, provide opportunities for embodiment: input devices can be specialized and adapted so that their physical shape reflects their functionality within the system; computer displays can be substituted by transformable, shape-changing displays or, eventually, by programmable matter which can take any physical shape imaginable. Research on such shape-changing interfaces has so far been technology-driven, while the utility of such interfaces for information visualization has remained unexploited. In this thesis, I suggest embodiment as a design principle for infovis purposes, and I demonstrate and validate the efficiency and usability of both embodied visualization controls and embodied visualization displays through three controlled user experiments. I then present a conceptual interaction model and visual notation system that facilitates the description, comparison and criticism of various types of visualization systems and illustrate it through case studies of currently existing point solutions. Finally, to aid the creation of physical visualizations, I present a software tool that supports users in building their own visualizations. The tool is suitable for users new to both visualization and digital fabrication, and can help to increase users' awareness of and interest in data in their everyday lives. In summary, this thesis contributes to the understanding of the value of emerging physical representations for information visualization.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Parra, González Luis Otto. « gestUI : a model-driven method for including gesture-based interaction in user interfaces ». Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/89090.

Texte intégral
Résumé :
The research reported and discussed in this thesis represents a novel approach to defining custom gestures and including gesture-based interaction in the user interfaces of software systems, with the aim of helping to solve the problems reported in the related literature on the development of gesture-based user interfaces. The research is conducted according to the Design Science methodology, which is based on the design and investigation of artefacts in a context. In this thesis, the new artefact is the model-driven method for including gesture-based interaction in user interfaces. The methodology considers two cycles: the main cycle is an engineering cycle, in which we design a model-driven method to include gesture-based interaction; within the research cycle, we define two sub-cycles: the first corresponds to the validation of the proposed method through an empirical evaluation, and the second to a technical action research study that validates the method in an industrial context. Additionally, Design Science provides clues on how to conduct the research, be rigorous, and put scientific rules into practice. Design Science has also been key to organising our research, and we acknowledge the application of this framework since it has helped us to report our findings clearly. The thesis presents a theoretical framework introducing concepts related to the research performed, followed by a state of the art covering related work in three areas: Human-Computer Interaction, the model-driven paradigm in Human-Computer Interaction, and Empirical Software Engineering. The design and implementation of gestUI are presented following the model-driven paradigm and the Model-View-Controller design pattern. Then, we performed two evaluations of gestUI: (i) an empirical evaluation based on ISO 25062-2006 to evaluate usability in terms of effectiveness, efficiency and satisfaction, where satisfaction is measured through perceived ease of use, perceived usefulness and intention to use; and (ii) a technical action research study to evaluate user experience and usability. We used the Model Evaluation Method, the User Experience Questionnaire and Microsoft Reaction Cards as guides to perform the aforementioned evaluations. The contributions of our thesis and the limitations of the approach and its tool support are discussed, and further work is presented.
La investigación reportada y discutida en esta tesis representa un método nuevo para definir gestos personalizados y para incluir interacción basada en gestos en interfaces de usuario de sistemas software con el objetivo de ayudar a resolver los problemas encontrados en la literatura relacionada respecto al desarrollo de interfaces basadas en gestos de usuarios. Este trabajo de investigación ha sido realizado de acuerdo a la metodología Ciencia del Diseño, que está basada en el diseño e investigación de artefactos en un contexto. En esta tesis, el nuevo artefacto es el método dirigido por modelos para incluir interacción basada en gestos en interfaces de usuario. Esta metodología considera dos ciclos: el ciclo principal, denominado ciclo de ingeniería, donde se ha diseñado un método dirigido por modelos para incluir interacción basada en gestos. El segundo ciclo es el ciclo de investigación, donde se definen dos ciclos de este tipo. El primero corresponde a la validación del método propuesto con una evaluación empírica y el segundo ciclo corresponde a un Technical Action Research para validar el método en un contexto industrial. Adicionalmente, Ciencia del Diseño provee las claves sobre como conducir la investigación, sobre cómo ser riguroso y poner en práctica reglas científicas. Además, Ciencia del Diseño ha sido un recurso clave para organizar la investigación realizada en esta tesis. Nosotros reconocemos la aplicación de este marco de trabajo puesto que nos ayuda a reportar claramente nuestros hallazgos. Esta tesis presenta un marco teórico introduciendo conceptos relacionados con la investigación realizada, seguido por un estado del arte donde conocemos acerca del trabajo relacionado en tres áreas: Interacción Humano-Ordenador, paradigma dirigido por modelos en Interacción Humano-Ordenador e Ingeniería de Software Empírica. El diseño e implementación de gestUI es presentado siguiendo el paradigma dirigido por modelos y el patrón de diseño Modelo-Vista-Controlador. Luego, nosotros hemos realizado dos evaluaciones de gestUI: (i) una evaluación empírica basada en ISO 25062-2006 para evaluar la usabilidad considerando efectividad, eficiencia y satisfacción. Satisfacción es medida por medio de la facilidad de uso percibida, utilidad percibida e intención de uso; y, (ii) un Technical Action Research para evaluar la experiencia del usuario y la usabilidad. Nosotros hemos usado Model Evaluation Method, User Experience Questionnaire y Microsoft Reaction Cards como guías para realizar las evaluaciones antes mencionadas. Las contribuciones de nuestra tesis, limitaciones del método y de la herramienta de soporte, así como el trabajo futuro son discutidas y presentadas.
La investigació reportada i discutida en aquesta tesi representa un mètode per definir gests personalitzats i per incloure interacció basada en gests en interfícies d'usuari de sistemes de programari. L'objectiu és ajudar a resoldre els problemes trobats en la literatura relacionada al desenvolupament d'interfícies basades en gests d'usuaris. Aquest treball d'investigació ha sigut realitzat d'acord a la metodologia Ciència del Diseny, que està basada en el disseny i investigació d'artefactes en un context. En aquesta tesi, el nou artefacte és el mètode dirigit per models per incloure interacció basada en gests en interfícies d'usuari. Aquesta metodologia es considerada en dos cicles: el cicle principal, denominat cicle d'enginyeria, on es dissenya un mètode dirigit per models per incloure interacció basada en gestos. El segon cicle és el cicle de la investigació, on es defineixen dos cicles d'aquest tipus. El primer es correspon a la validació del mètode proposat amb una avaluació empírica i el segon cicle es correspon a un Technical Action Research per validar el mètode en un context industrial. Addicionalment, Ciència del Disseny proveeix les claus sobre com conduir la investigació, sobre com ser rigorós i ficar en pràctica regles científiques. A més a més, Ciència del Disseny ha sigut un recurs clau per organitzar la investigació realitzada en aquesta tesi. Nosaltres reconeixem l'aplicació d'aquest marc de treball donat que ens ajuda a reportar clarament les nostres troballes. Aquesta tesi presenta un marc teòric introduint conceptes relacionats amb la investigació realitzada, seguit per un estat del art on coneixem a prop el treball realitzat en tres àrees: Interacció Humà-Ordinador, paradigma dirigit per models en la Interacció Humà-Ordinador i Enginyeria del Programari Empírica. El disseny i implementació de gestUI es presenta mitjançant el paradigma dirigit per models i el patró de disseny Model-Vista-Controlador. Després, nosaltres hem realitzat dos avaluacions de gestUI: (i) una avaluació empírica basada en ISO 25062-2006 per avaluar la usabilitat considerant efectivitat, eficiència i satisfacció. Satisfacció es mesura mitjançant la facilitat d'ús percebuda, utilitat percebuda i intenció d'ús; (ii) un Technical Action Research per avaluar l'experiència del usuari i la usabilitat. Nosaltres hem usat Model Evaluation Method, User Experience Questionnaire i Microsoft Reaction Cards com guies per realitzar les avaluacions mencionades. Les contribucions de la nostra tesi, limitacions del mètode i de la ferramenta de suport així com el treball futur són discutides i presentades.
Parra González, LO. (2017). gestUI: a model-driven method for including gesture-based interaction in user interfaces [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/89090
TESIS
Styles APA, Harvard, Vancouver, ISO, etc.
37

Büring, Thorsten. « Zoomable user interfaces on small screens presentation and interaction design for pen-operated mobile devices / ». [S.l. : s.n.], 2007. http://nbn-resolving.de/urn:nbn:de:bsz:352-opus-32080.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

Hinson, Kenneth Paul. « A foundation for translating user interaction designs into OSF/Motif-based software ». Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/40635.

Texte intégral
Résumé :
The user interface development process occurs in a behavioral domain and in a constructional domain. The development process in the behavioral domain focuses on the "look and feel" of the user interface and its behavior in response to user actions. The development process in the constructional domain focuses on developing software to implement the user interface. Although one may attempt to design a user interface from a constructional view, it is important to concentrate design efforts in the behavioral domain to improve software usability.

User Action Notation (UAN) is a useful technique for representing user interaction designs in the behavioral domain. Primary abstractions in UAN-expressed designs are user tasks. Information about interface objects is encapsulated in user task descriptions and scenarios. Primary abstractions in a GUI such as Motif™ are interface objects. Motif implements objects' behavior and appearance using system functions that are encapsulated within pre-defined object classes. Therefore, user interaction developers and software developers must communicate well to translate UAN-expressed interaction designs into Motif-based software designs. Translation is not trivial since it is a translation between two significantly different domains.

This thesis contributes to the understanding of the user interface development process by developing a foundation to assist translation of user interaction designs into Motif-based software designs. This thesis develops the foundation as follows: 1. Adapt UAN for use with Motif. 2. Summarize Motif concepts about objects and object relationships. 3. Develop new approaches for discussing objects and object relationships. 4. Develop a partial translation guide containing UAN descriptions of selected Motif abstractions.
Master of Science

Styles APA, Harvard, Vancouver, ISO, etc.
39

Fröjdman, Sofia. « User experience guidelines for design of virtual reality graphical user interfaces controlled by head orientation input ». Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-12522.

Texte intégral
Résumé :
With the recent release of head-mounted displays for consumers, virtual reality experiences are more accessible than ever. However, there is still a shortage of research concerning how to design user interfaces in virtual reality for good experiences. This thesis focuses on which aspects should be considered when designing a graphical user interface in virtual reality - controlled by head orientation input - for a qualitative user experience. The research included a heuristic evaluation, interviews, usability tests, and a survey. A virtual reality prototype of a video-on-demand service was investigated and served as the application for the research. The analysis of the data identified application-specific pragmatic and hedonic goals of the users, relevant to the subjective user experience, as well as current user experience problems with the tested prototype. In combination with previous recommendations, the results led to the development of seven guidelines. However, these guidelines should be seen only as a foundation for future research, since they still need to be validated. New head-mounted displays and virtual reality applications are released every day, and with the increasing number of users there will be a continuing need for more research.
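Head-orientation input of the kind studied here is commonly paired with dwell-time selection; the sketch below is a generic illustration of that mechanism (the cone and dwell thresholds and the class name are assumptions, not the thesis prototype):

    import math

    class DwellSelector:
        """Select a target once the head-gaze direction stays inside a cone for long enough."""
        def __init__(self, cone_deg: float = 5.0, dwell_s: float = 1.0) -> None:
            self.cone_deg, self.dwell_s, self.elapsed = cone_deg, dwell_s, 0.0

        def update(self, gaze_dir, target_dir, dt: float) -> bool:
            """gaze_dir and target_dir are unit 3-vectors; dt is the frame time in seconds."""
            dot = max(-1.0, min(1.0, sum(g * t for g, t in zip(gaze_dir, target_dir))))
            if math.degrees(math.acos(dot)) <= self.cone_deg:
                self.elapsed += dt                            # still dwelling on the target
            else:
                self.elapsed = 0.0                            # gaze left the target: reset the timer
            return self.elapsed >= self.dwell_s               # True once the dwell time is reached

    # Looking straight at a menu item placed along +z, updated at roughly 60 Hz
    selector = DwellSelector()
    for frame in range(90):
        if selector.update((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), dt=1 / 60):
            print(f"selected after {frame + 1} frames")
            break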
Styles APA, Harvard, Vancouver, ISO, etc.
40

Gabbard, Joseph L. « Usability Engineering of Text Drawing Styles in Augmented Reality User Interfaces ». Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/29093.

Texte intégral
Résumé :
In the coming years, augmented reality, mobile computing, and related technologies have the potential to completely redefine how we interact with and use computers. No longer will we be bound to desktops and laptops, nor will we be bound to monitors, two-dimensional (2D) screens, and graphical user interface (GUI) backgrounds. Instead we will employ wearable systems to move about and augmented reality displays to overlay 2D and three-dimensional (3D) graphics onto the real world. When the computer graphics and user interface communities evolved from text-based user interfaces to 2D GUIs, many in the field noted the need for "new analyses and metrics" [Shneiderman et al., 1995]; the same is equally true today as we shift from 2D GUI-based user interfaces and environments to 3D, stereoscopic virtual (VR) and augmented reality (AR) environments. As we rush to advance the state of AR technology and its capabilities, we need to advance the processes by which these environments are designed, built, and evaluated. Along these lines, this dissertation provides insight into the processes and products of AR usability evaluation. Despite the fact that this technology fundamentally changes the way we visualize, use, and interact with information, very little HCI work in general, and user-centered design and evaluation in particular, has been done to date specifically in AR [Swan & Gabbard, 2005]. While traditional HCI methods can be successfully applied in AR to determine what information should be presented to the user [Gabbard, 2002], these approaches do not tell us how information should be presented to the user, and this has not been researched to date. A difficulty in producing effective AR user interfaces (UIs) in outdoor AR settings lies in the wide range of environmental conditions that may be present, specifically large-scale fluctuations in natural lighting and wide variations in likely backgrounds or objects in the scene. In many cases, a carefully designed AR user interface may be easily legible under some lighting and background conditions, and minutes later be totally illegible in others. Since lighting and background conditions may vary from minute to minute in dynamic AR usage contexts, there is a need for basic research to understand the relationship between real-world backgrounds and objects and the associated augmenting text drawing styles. This research identifies characteristics of AR text drawing styles that affect legibility on common real-world backgrounds. We present the concept of active text drawing styles that adapt in real time to changes in the real-world background. We also present lessons learned in applying traditional usability engineering techniques to outdoor AR application development and propose a modified usability engineering process to support user interface design for novel technologies such as AR. The results of this research provide the following scientific contributions to the field of AR: empirical evidence regarding the effectiveness of various text drawing styles in affording legibility to outdoor AR users; empirical evidence that real-world backgrounds have an effect on the legibility of text drawing styles; guidelines to aid AR user interface designers in choosing among the various text drawing styles and drawing style characteristics produced by the pilot and user-based studies described in this dissertation; candidate drawing style algorithms to support an active, real-time AR display system, where sensors interpret real-world backgrounds to determine appropriate values for display drawing style characteristics; lessons learned in applying traditional usability engineering processes to outdoor AR; and a modified usability engineering process to assist developers in identifying effective UI designs vis-à-vis user-based studies.
Ph. D.
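As a concrete illustration of what an "active" text drawing style might do, the toy sketch below picks a text colour and outline from the approximate luminance of the real-world background behind the label; the thresholds and the black/white switching rule are illustrative assumptions, not the algorithms evaluated in the dissertation.

    def active_text_style(background_rgb):
        """Choose a text colour and outline for an AR label from the background colour behind it."""
        r, g, b = (channel / 255.0 for channel in background_rgb)
        luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b      # approximate luminance (BT.709 weights)
        text = (0, 0, 0) if luminance > 0.5 else (255, 255, 255)
        outline = (255, 255, 255) if text == (0, 0, 0) else (0, 0, 0)
        return {"text": text, "outline": outline}

    print(active_text_style((200, 220, 255)))   # bright sky   -> black text, white outline
    print(active_text_style((30, 60, 25)))      # dark foliage -> white text, black outline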
Styles APA, Harvard, Vancouver, ISO, etc.
41

Wentzel, Alicia Veronica. « User interface design guidelines for digital television virtual remote controls ». Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/d1020617.

Texte intégral
Résumé :
The remote control is a pivotal component in households worldwide. It helps users enjoy leisurely television (TV) viewing. The remote control has various user interfaces that people interact with. For example, the physical user interface includes the shape of the remote and the physical buttons; the logical user interface refers to how the information is laid out; and the graphical user interface refers to the colours and aesthetic features of the remote control. All of these user interfaces, together with the context of use, cultural factors, social factors, and the prior experiences of the user, influence the ways people interact with their remote control and ultimately have an effect on their user experiences. Advances in the broadcasting sector and transformations of the physical TV remote control have turned the simple remote control into a multifaceted, indispensable device overcrowded with buttons. The usability, and ultimately the user experience, of physical remote controls (PRCs) have been affected by this overloaded functionality and by small button sizes. The usability issues with current PRCs, the evolution of mobile phones into touch-screen smartphones, and the trend of global companies moving towards virtual remote controls (VRCs) prompted this research to discover which user interface design features contribute towards an enhanced user experience for digital TV VRCs. This research used the design science research process model (DSRP), which comprised six steps, to investigate this topic further. A review of the domain literature pertaining to mobile user experience (MUX) and all its encompassing factors, mobile human-computer interaction (MHCI), and the physical, logical, graphical and natural user interfaces was completed, as well as a review of the literature regarding the usability issues of PRCs and VRCs. A contextual task analysis (CTA) of a single South African digital TV PRC was used to identify how users utilise PRCs to perform tasks, and the usability issues they encountered during those tasks. Brainstorming focus groups were used to understand how to represent certain user interface elements and to source ideas from users about what functionality digital TV VRCs should contain. These findings, together with all the other results gathered in the previous chapters, were amalgamated into a set of user interface design guidelines for digital TV VRCs. The proposed guidelines were used to instantiate a digital TV VRC prototype that underwent usability testing in order to validate them. The results of the usability testing revealed that the user interface design guidelines for digital TV VRCs were successful, with the addition of one guideline that was discovered during the usability testing.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Van, Tonder Bradley Paul. « Enhanced sensor-based interaction techniques for mobile map-based applications ». Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1012995.

Texte intégral
Résumé :
Mobile phones are increasingly being equipped with a wide range of sensors which enable a variety of interaction techniques. Sensor-based interaction techniques are particularly promising for domains such as map-based applications, where the user is required to interact with a large information space on the small screen of a mobile phone. Traditional interaction techniques have several shortcomings for interacting with mobile map-based applications. Keypad interaction offers limited control over panning speed and direction. Touch-screen interaction is often a two-handed form of interaction and results in the display being occluded during interaction. Sensor-based interaction provides the potential to address many of these shortcomings, but currently suffers from several limitations. The aim of this research was to propose enhancements to address the shortcomings of sensor-based interaction, with a particular focus on tilt interaction. A comparative study between tilt and keypad interaction was conducted using a prototype mobile map-based application. This user study was conducted in order to identify shortcomings and opportunities for improving tilt interaction techniques in this domain. Several shortcomings, including controllability, mental demand and practicality concerns, were highlighted. Several enhanced tilt interaction techniques were proposed to address these shortcomings. These techniques were the use of visual and vibrotactile feedback, attractors, gesture zooming, sensitivity adaptation and dwell-time selection. The results of a comparative user study showed that the proposed techniques achieved several improvements in terms of the problem areas identified earlier. The use of sensor fusion for tilt interaction was compared to an accelerometer-only approach which has been widely applied in existing research. This evaluation was motivated by advances in mobile sensor technology which have led to the widespread adoption of digital compass and gyroscope sensors. The results of a comparative user study between sensor fusion and accelerometer-only implementations of tilt interaction showed several advantages for the use of sensor fusion, particularly in a walking context of use. Modifications to sensitivity adaptation and the use of tilt to perform zooming were also investigated. These modifications were designed to address controllability shortcomings identified in earlier experimental work. The results of a comparison between tilt zooming and gesture zooming indicated that tilt zooming offered better results, both in terms of performance and subjective user ratings. Modifications to the original sensitivity adaptation algorithm were only partly successful. Greater accuracy improvements were achieved for walking tasks, but the use of dynamic dampening factors was found to be confusing. The results of this research were used to propose a framework for mobile tilt interaction. This framework provides an overview of the tilt interaction process and highlights how the enhanced techniques proposed in this research can be integrated into the design of tilt interaction techniques. The framework also proposes an application architecture which was implemented as an Application Programming Interface (API). This API was successfully used in the development of two prototype mobile applications incorporating tilt interaction.
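As a rough illustration of the kind of accelerometer/gyroscope sensor fusion and tilt-to-pan mapping the abstract describes, the sketch below uses a simple complementary filter plus a dead zone for controllability. This is not the thesis's API or algorithm; the class name, constants, and mapping are assumptions chosen for the example.

```python
# Illustrative sketch only: complementary-filter sensor fusion for tilt,
# mapped to map panning speed with a dead zone. Names/constants assumed.
import math

class TiltPanController:
    def __init__(self, alpha=0.98, dead_zone_deg=3.0,
                 max_pan_px_s=400.0, max_tilt_deg=30.0):
        self.alpha = alpha                  # weight given to the gyroscope integral
        self.dead_zone_deg = dead_zone_deg  # small tilts ignored for controllability
        self.max_pan_px_s = max_pan_px_s    # pan speed reached at max_tilt_deg
        self.max_tilt_deg = max_tilt_deg
        self.pitch_deg = 0.0                # fused forward/back tilt estimate

    def update(self, accel, gyro_pitch_rate_dps, dt):
        """Fuse one sample: accel = (ax, ay, az) in g, gyro rate in deg/s, dt in s."""
        ax, ay, az = accel
        # Tilt from gravity alone: noisy but drift-free.
        accel_pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
        # Complementary filter: trust the gyro short-term, the accelerometer long-term.
        self.pitch_deg = (self.alpha * (self.pitch_deg + gyro_pitch_rate_dps * dt)
                          + (1.0 - self.alpha) * accel_pitch)
        return self.pan_speed()

    def pan_speed(self):
        """Map the fused tilt angle to a panning speed in pixels per second."""
        tilt = self.pitch_deg
        if abs(tilt) < self.dead_zone_deg:
            return 0.0
        tilt = max(-self.max_tilt_deg, min(self.max_tilt_deg, tilt))
        return self.max_pan_px_s * tilt / self.max_tilt_deg

# Example: 50 samples at 50 Hz while the phone is held tilted slightly forward.
ctrl = TiltPanController()
speed = 0.0
for _ in range(50):
    speed = ctrl.update(accel=(-0.17, 0.0, 0.98), gyro_pitch_rate_dps=0.0, dt=0.02)
print(round(ctrl.pitch_deg, 1), round(speed, 1))
```

An accelerometer-only implementation would simply use `accel_pitch` directly, which is one plausible reason the fused estimate behaves better while walking: the gyroscope term smooths out the gravity estimate corrupted by gait-induced accelerations.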
Styles APA, Harvard, Vancouver, ISO, etc.
43

Yang, Grant. « WIMP and Beyond : The Origins, Evolution, and Awaited Future of User Interface Design ». Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1126.

Texte intégral
Résumé :
The field of computer user interface design is rapidly changing and diversifying as new devices are developed every day. Technology has risen to become an integral part of life for people of all ages around the world. Modern life as we know it depends on computers, and understanding the interfaces through which we communicate with them is critically important in an increasingly digital age. The first part of this paper examines the technological origins and historical background driving the development of graphical user interfaces from their earliest incarnations to today. Hardware advancements and key turning points are presented and discussed. In the second part of this paper, skeuomorphism and flat design, two of the most common design trends today, are analyzed and explained. Finally, the future course of user interface design is predicted based on emergent technologies such as the Apple Watch, Google Glass, Microsoft HoloLens, and Microsoft PixelSense. Through understanding the roots and current state of computer user interface design, engineers, designers, and scientists can help us get the most out of our ever-changing world of advanced technology as it becomes further intertwined with our existence.
Styles APA, Harvard, Vancouver, ISO, etc.
44

Levine, Jonathan. « Computer based dialogs : theory and design / ». Online version of thesis, 1990. http://hdl.handle.net/1850/10590.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

Mwanza, Daisy. « Towards an activity-oriented design method for HCI research and practice ». Thesis, n.p, 2002. http://ethos.bl.uk/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
46

Green, Anders. « Designing and Evaluating Human-Robot Communication : Informing Design through Analysis of User Interaction ». Doctoral thesis, KTH, Människa-datorinteraktion, MDI, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9917.

Texte intégral
Résumé :
This thesis explores the design and evaluation of human-robot communication for service robots that use natural language to interact with people. The research is centred around three themes: the design of human-robot communication; the evaluation of miscommunication in human-robot communication; and the analysis of spatial influence as an empirical phenomenon and design element. The method has been to put users in situations of future use by means of Hi-fi simulation. Several scenarios were enacted using the Wizard-of-Oz technique: a robot intended for fetch-and-carry services in an office environment, and a robot acting in what can be characterised as a home tour, where the user teaches objects and locations to the robot. Using these scenarios, a corpus of human-robot communication was developed and analysed. The analysis of the communicative behaviours led to the following observations: the users communicate with the robot in order to solve a main task goal. In order to fulfil this goal, they take over service actions that the robot is incapable of performing. Once users have understood that the robot is capable of performing actions, they explore its capabilities. During the interactions the users continuously monitor the behaviour of the robot, attempting to elicit feedback or to draw its perceptual attention to their communicative behaviour. Information related to the communicative status of the robot seems to have a fundamental impact on the quality of interaction: large portions of the miscommunication that occurs in the analysed scenarios can be attributed to ill-timed, lacking or irrelevant feedback from the robot. The analysis of the corpus data also showed that the users’ spatial behaviour seemed to be influenced by the robot’s communicative behaviour, embodiment and positioning; in robot design, we can therefore consider using strategies for spatial prompting to influence the users’ spatial behaviour. The understanding of the importance of continuously providing information about the robot’s communicative status to its users leaves us with an intriguing design challenge for the future: when designing communication for a service robot, we need to design communication for the robot’s work tasks and, simultaneously, provide information based on the system’s communicative status so that users are continuously aware of the robot’s communicative capability.
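The design recommendation above, that the robot should continuously signal its communicative status, can be made concrete with a very small sketch. This is not the thesis's system; the states and phrasings below are purely hypothetical illustrations of "feedback at every transition" rather than ill-timed or missing feedback.

```python
# Minimal sketch (not from the thesis): every communicative state of the
# robot maps to an explicit, user-directed signal. All names are assumed.
from enum import Enum, auto

class CommStatus(Enum):
    IDLE = auto()        # waiting for the user
    HEARD = auto()       # speech detected; acknowledge immediately
    UNDERSTOOD = auto()  # utterance parsed into a task
    EXECUTING = auto()   # carrying out the task; report progress
    FAILED = auto()      # could not hear/parse/execute; say so promptly

def feedback_for(status, detail=""):
    """Map each communicative status to an explicit feedback signal."""
    return {
        CommStatus.IDLE: "(idle indicator light on)",
        CommStatus.HEARD: "Mm-hm?",  # back-channel acknowledgement
        CommStatus.UNDERSTOOD: f"OK, I will {detail}.",
        CommStatus.EXECUTING: f"Still working on: {detail}",
        CommStatus.FAILED: f"Sorry, I didn't get that ({detail}). Could you repeat?",
    }[status]

# Example run: the point is that every transition produces feedback.
for status, detail in [(CommStatus.HEARD, ""),
                       (CommStatus.UNDERSTOOD, "fetch the coffee"),
                       (CommStatus.EXECUTING, "fetch the coffee"),
                       (CommStatus.FAILED, "unknown object")]:
    print(feedback_for(status, detail))
```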
Styles APA, Harvard, Vancouver, ISO, etc.
47

Hawthorn, Dan. « Designing Effective Interfaces for Older Users ». The University of Waikato, 2006. http://hdl.handle.net/10289/2538.

Texte intégral
Résumé :
The thesis examines the factors that need to be considered in order to undertake successful design of user interfaces for older users. The literature on aging is surveyed for age-related changes that are of relevance to interface design. The findings from the literature review are extended and placed in a human context using observational studies of older people and their supporters as these older people attempted to learn about and use computers. These findings are then applied in three case studies of interface design and product development for older users. These case studies are reported and examined in depth. For each case study, results are presented on the acceptance of the final product by older people. These results show that, for each case study, the interfaces used led to products that the older people evaluating them rated as unusually suitable to their needs as older users. The relationship between the case studies and the overall research aims is then examined in a discussion of the research methodology. In the case studies, an evolving approach was used in developing the interface designs, including intensive contribution by older people to the shaping of the interface design. This approach is analyzed and presented as an approach to designing user interfaces for older people. It was found that a number of non-standard techniques were useful in order to maximize the benefit from the involvement of the older contributors and to ensure their ethical treatment. These techniques and the rationale behind them are described. Finally, the interface design approach that emerged has strong links to the approach used by the UTOPIA team based at the University of Dundee. The extent to which the thesis provides support for the UTOPIA approach is discussed.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Wilson, Rory Howard 1957. « An assessment of the impact of grouped item prompts versus single item prompts for human computer interface design ». Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276934.

Texte intégral
Résumé :
Current research in screen design for human-computer interaction has demonstrated that user task performance is influenced by placement, prompting methodology, and screen complexity. To assess the difference between a grouped-item screen prompt and a series of single-item screen prompts, a field experiment in a semiconductor manufacturing facility was designed. Subjects were randomly assigned to one of two groups to use a data entry system. Seven of the screen prompts differed between the two groups. During the four weeks of the study, a significant difference was measured between the groups: the grouped-screen users had lower task times for all four weeks. No significant correlation was found between task time and work experience, performance review scores, or designated work shift. A strong negative correlation was found between frequency of system usage and task time. No difference was noted in measured errors. Subjective scores significantly favored the grouped screen design.
Styles APA, Harvard, Vancouver, ISO, etc.
49

Rossa, Michael. « System images : user's understanding and system structure in the design of information tools ». Thesis, Royal College of Art, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.602326.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
50

Bentley, Brian Todd. « Quality in use addressing and validating affective requirements / ». Australasian Digital Theses Program, 2006. http://adt.lib.swin.edu.au/public/adt-VSWT20070214.143122/index.html.

Texte intégral
Résumé :
Thesis (PhD) - Swinburne University of Technology, 2006.
[Submitted for the degree of Doctor of Philosophy, Swinburne University of Technology - 2006]. Typescript. Includes bibliographical references (p. 218-231).
Styles APA, Harvard, Vancouver, ISO, etc.
