Dissertations / Theses on the topic 'Mixed reality interfaces'

Consult the top 27 dissertations / theses for your research on the topic 'Mixed reality interfaces.'

1

Lages, Wallace Santos. "Walk-Centric User Interfaces for Mixed Reality." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84460.

Full text
Abstract:
Walking is a natural part of our lives and is also becoming increasingly common in mixed reality. Wireless headsets and improved tracking systems allow us to easily navigate real and virtual environments by walking. In spite of the benefits, walking brings challenges to the design of new systems. In particular, designers must be aware of cognitive and motor requirements so that walking does not negatively impact the main task. Unfortunately, those demands are not yet fully understood. In this dissertation, we present new scientific evidence, interaction designs, and analysis of the role of walking in different mixed reality applications. We evaluated the difference in performance of users walking vs. manipulating a dataset during visual analysis. This is an important task, since virtual reality is increasingly being used as a way to make sense of progressively complex datasets. Our findings indicate that neither option is absolutely better: the optimal design choice should consider both user's experience with controllers and user's inherent spatial ability. Participants with reasonable game experience and low spatial ability performed better using the manipulation technique. However, we found that walking can still enable higher performance for participants with low spatial ability and without significant game experience. In augmented reality, specifying points in space is an essential step to create content that is registered with the world. However, this task can be challenging when information about the depth or geometry of the target is not available. We evaluated different augmented reality techniques for point marking that do not rely on any model of the environment. We found that triangulation by physically walking between points provides higher accuracy than purely perceptual methods. However, precision may be affected by head pointing tremors. 
To increase the precision, we designed a new technique that uses multiple samples to obtain a better estimate of the target position. This technique can also be used to mark points while walking. The effectiveness of this approach was demonstrated with a controlled augmented reality simulation and actual outdoor tests. Looking to the future, augmented reality will eventually replace our mobile devices as the main method of accessing information. Nonetheless, to achieve its full potential, augmented reality interfaces must support the fluid way we move in the world. We investigated the potential of adaptation in achieving this goal. We conceived and implemented an adaptive workspace system, based on a study of the design space and on contextual user studies. Our final design consists of a minimal set of techniques to support mobility and integration with the real world. We also identified a set of key interaction patterns and desirable properties of adaptation-based techniques, which can be used to guide the design of the next generation of walking-centered workspaces.
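The multi-sample marking idea above can be illustrated with a standard least-squares ray intersection: each head-pointing sample taken while walking is a ray, and aggregating many noisy rays damps head-tremor error. This is an illustrative sketch of that general technique, not the dissertation's actual algorithm.

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares point closest to a bundle of pointing rays.

    Solves sum_i ||(I - d_i d_i^T)(x - p_i)||^2 -> min, the classic
    closed-form intersection of skew rays. With many samples, random
    pointing noise averages out.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Two observation points on the ground, both looking at a target at (0, 0, 5):
target = np.array([0.0, 0.0, 5.0])
origins = [np.array([-2.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
directions = [target - o for o in origins]
print(triangulate_rays(origins, directions))  # ≈ [0. 0. 5.]
```

With noisy real-world rays the same call simply takes more (origin, direction) pairs; the estimate improves as the walking baseline between samples grows.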
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
2

Lacoche, Jérémy. "Plasticity for user interfaces in mixed reality." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S034/document.

Full text
Abstract:
This PhD thesis focuses on plasticity for Mixed Reality (MR) user interfaces, which include Virtual Reality (VR), Augmented Reality (AR) and Augmented Virtuality (AV) applications. Today, there is growing interest in this kind of application thanks to the generalization of devices such as head-mounted displays, depth sensors and tracking systems. Mixed Reality applications can be used in a wide variety of domains such as entertainment, data visualization, education and training, and engineering. Plasticity refers to the capacity of an interactive system to withstand variations in both its physical characteristics and its environment while preserving usability. The usability continuity of a plastic interface is ensured whatever the context of use. We therefore propose a set of software models, integrated in a software solution named 3DPlasticToolkit, which allow any developer to create plastic MR user interfaces. First, we propose three models for describing adaptation sources: a model for display devices and interaction devices, a model for users and their preferences, and a model for data structure and semantics. These adaptation sources are taken into account by an adaptation process that deploys application components suited to the context of use thanks to a scoring system. The deployment of these components lets the system adapt both the application's interaction techniques and its content presentation. We also propose a redistribution process that allows the end user to change the distribution of application components across three dimensions: display, user and platform. It thus allows the end user to switch platforms dynamically or to combine multiple platforms. The implementation of these models in 3DPlasticToolkit provides developers with a ready-to-use solution that already supports current Mixed Reality devices and includes multiple interaction techniques, visual effects and data visualization metaphors.
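The scoring-based adaptation process described above can be sketched as follows: each candidate interaction component rates the current context of use, and the highest-scoring one is deployed. All names and scoring rules here are illustrative assumptions, not 3DPlasticToolkit's real API.

```python
# Hypothetical sketch of scoring-based component selection: candidate
# interaction techniques score the context (devices, display), and the
# adaptation process deploys the best-scoring one.

def score_ray_casting(ctx):
    # A ray-casting technique is only useful with a tracked 6-DoF controller.
    return 2 if ctx["has_6dof_controller"] else 0

def score_gaze_pointing(ctx):
    # Gaze pointing is the best fallback on an HMD without controllers.
    return 2 if ctx["display"] == "hmd" and not ctx["has_6dof_controller"] else 1

candidates = {
    "ray_casting": score_ray_casting,
    "gaze_pointing": score_gaze_pointing,
}

def deploy(ctx):
    """Pick the component whose score is highest for this context of use."""
    return max(candidates, key=lambda name: candidates[name](ctx))

print(deploy({"display": "hmd", "has_6dof_controller": False}))   # gaze_pointing
print(deploy({"display": "desktop", "has_6dof_controller": True}))  # ray_casting
```

The same pattern extends naturally to the thesis's other adaptation sources (user preferences, data semantics) by adding terms to each scoring function.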
3

Marchesi, Marco <1977>. "Advanced Technologies for Human-Computer Interfaces in Mixed Reality." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7522/.

Full text
Abstract:
As human beings, we trust our five senses, which allow us to experience the world and communicate. From birth, the amount of data we acquire every day is impressive, and such richness reflects the complexity of humankind in arts, technology, and beyond. The advent of computers and the consequent progress in Data Science and Artificial Intelligence showed that large amounts of data can contain some sort of "intelligence" themselves. Machines learn and create a superimposed layer of reality. How are data generated by humans and machines related today? To give an answer, we present three projects in the context of "Mixed Reality", the ideal place where Reality, Virtual Reality and Augmented Reality become increasingly connected as data enhance digital experiences, making them more "real". We start with BRAVO, a tool that exploits brain activity to improve the user's learning process in real time by means of a Brain-Computer Interface that acquires EEG data. We then present AUGMENTED GRAPHICS, a framework for detecting objects in the real world so that they can be easily captured and inserted into any digital scenario. Based on the theory of moment invariants, it is particularly suited to mobile devices, as it relies on a lightweight approach to object detection and works without any training set. The third project is GLOVR, a wearable hand controller that uses inertial sensors to offer directional controls and to recognize gestures, particularly suitable for Virtual Reality applications. It features a microphone to record voice commands, which are then translated into tasks by means of a natural-language web service. For each project we summarize the main results and trace some future directions of research and development.
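The moment-invariants theory mentioned for AUGMENTED GRAPHICS can be shown in a few lines: central moments of a shape are invariant to translation, and after normalization also to scale. Below is a minimal sketch of the first two Hu invariants on a binary image; it illustrates the general theory, not the thesis's actual detector.

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants of a binary image.

    Central moments make the descriptor translation-invariant;
    the eta normalization makes it scale-invariant as well.
    """
    ys, xs = np.nonzero(img)
    m00 = len(xs)                      # shape area in pixels
    xbar, ybar = xs.mean(), ys.mean()  # centroid
    def mu(p, q):                      # central moment
        return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()
    def eta(p, q):                     # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

# The invariants are unchanged when the same shape is translated:
a = np.zeros((40, 40)); a[5:15, 5:25] = 1    # 10x20 rectangle
b = np.zeros((40, 40)); b[20:30, 10:30] = 1  # same rectangle, shifted
print(np.allclose(hu_first_two(a), hu_first_two(b)))  # True
```

Because no training set is needed to compute such invariants, matching a captured object against a template reduces to comparing a handful of numbers, which is what makes this family of methods attractive on mobile hardware.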
4

Yoo, Yong Ho [Verfasser]. "Mixed Reality Design Using Unified Energy Interfaces / Yong Ho Yoo." Aachen : Shaker, 2007. http://d-nb.info/1166511200/34.

Full text
5

Englmeier, David [Verfasser], and Andreas [Akademischer Betreuer] Butz. "Spherical tangible user interfaces in mixed reality / David Englmeier ; Betreuer: Andreas Butz." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2021. http://d-nb.info/1238017150/34.

Full text
6

Panahi, Aliakbar. "Big Data Visualization Platform for Mixed Reality." VCU Scholars Compass, 2017. https://scholarscompass.vcu.edu/etd/5198.

Full text
Abstract:
The visualization of data helps provide faster and deeper insight into the data. In this work, a system for visualizing and analyzing big data in an interactive mixed reality environment is proposed. Such a system can represent different types of data, such as temporal, geospatial, network-graph, and high-dimensional data. Implementations of this system were created for four data types (network, volumetric, high-dimensional, and spectral) on mixed reality devices such as Microsoft HoloLens, Oculus Rift, Samsung Gear VR, and Android ARCore. It was shown that such a system can store and use billions of samples and represent millions of them at once.
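Rendering millions of points out of billions stored implies some form of streaming downsampling. One standard way to do this, shown here purely as an assumption about how such a platform might work, is reservoir sampling, which keeps a uniform sample of fixed size from a stream too large to hold in memory.

```python
import random

def reservoir_sample(stream, k):
    """Uniform reservoir sampling: keep k items from an arbitrarily
    long stream, each with equal probability, in a single pass."""
    reservoir = []
    for i, x in enumerate(stream):
        if i < k:
            reservoir.append(x)          # fill the reservoir first
        else:
            j = random.randint(0, i)     # replace with decaying probability
            if j < k:
                reservoir[j] = x
    return reservoir

# Reduce a million samples to a renderable thousand:
sample = reservoir_sample(range(1_000_000), 1000)
print(len(sample))  # 1000
```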
7

Yannier, Nesra. "Bridging Physical and Virtual Learning: A Mixed-Reality System for Early Science." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/752.

Full text
Abstract:
Tangible interfaces and mixed-reality environments have potential to bring together the advantages of physical and virtual environments to improve children’s learning and enjoyment. However, there are too few controlled experiments that investigate whether interacting with physical objects in the real world accompanied by interactive feedback may actually improve student learning compared to flat-screen interaction. Furthermore, we do not have a sufficient empirical basis for understanding how a mixed-reality environment should be designed to maximize learning and enjoyment for children. I created EarthShake, a mixed-reality game bridging physical and virtual worlds via a Kinect depth-camera and a specialized computer vision algorithm to help children learn physics. I have conducted three controlled experiments with EarthShake that have identified features that are more and less important to student learning and enjoyment. The first experiment examined the effect of observing physical phenomena and collaboration (pairs versus solo), while the second experiment replicated the effect of observing physical phenomena while also testing whether adding simple physical control, such as shaking a tablet, improves learning and enjoyment. The experiments revealed that observing physical phenomena in the context of a mixed-reality game leads to significantly more learning (5 times more) and enjoyment compared to equivalent screen-only versions, while adding simple physical control or changing group size (solo or pairs) do not have significant effects. Furthermore, gesture analysis provides insight as to why experiencing physical phenomena may enhance learning. My thesis work further investigates what features of a mixed-reality system yield better learning and enjoyment, especially in the context of limited experimental results from other mixed-reality learning research. 
Most mixed-reality environments, including tangible interfaces (where users manipulate physical objects to create an interactive output), currently emphasize open-ended exploration and problem solving, and are claimed to be most effective when used in a discovery-learning mode with minimal guidance. I investigated how critical interactive guidance and feedback (e.g., a predict/observe/explain prompting structure with interactive feedback) are to learning and enjoyment in the context of EarthShake. In a third experiment, I compared the learning and enjoyment outcomes of children interacting with a version of EarthShake that supports guided discovery, another version that supports exploration in discovery-learning mode, and a version that combines both. The results reveal that the Guided-discovery and Combined conditions, where children are exposed to guided-discovery activities with the predict-observe-explain cycle and interactive feedback, yield better explanation and reasoning. Thus, having guided discovery in a mixed-reality environment helps children formulate explanatory theories. However, the results also suggest that children activate explanatory theory in action better when the guided-discovery activities are combined with exploratory activities in the mixed-reality system. Adding exploration to guided-discovery activities not only fosters better learning of the balance/physics principles, but also better application of those principles in a hands-on, constructive problem-solving task. My dissertation contributes to the literature on the effects of physical observation and mixed-reality interaction on students' science learning outcomes.
Specifically, I have shown that a mixed-reality system (i.e., one combining physical and virtual environments) can lead to better learning and enjoyment outcomes than screen-only alternatives, based on different measures. My work also contributes to the literature on exploration and guided-discovery learning by demonstrating that guided-discovery activities in a mixed-reality setting can improve children's learning of fundamental principles by helping them formulate explanations. It also shows that combining an engineering approach with scientific thinking practice (by combining exploration and guided-discovery activities) can lead to better engineering outcomes, such as transfer to constructive hands-on activities in the real world. Lastly, my work contributes from a design perspective by creating a new mixed-reality educational system that bridges physical and virtual environments to improve children's learning and enjoyment collaboratively, fostering productive dialogue and scientific curiosity in museum and school settings, through an iterative design methodology to ensure effective learning and enjoyment outcomes in these settings.
8

Dahl, Tyler. "Real-Time Object Removal in Augmented Reality." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1905.

Full text
Abstract:
Diminished reality, as a sub-topic of augmented reality where digital information is overlaid on an environment, is the perceived removal of an object from an environment. Previous approaches to diminished reality used digital replacement techniques, inpainting, and multi-view homographies. However, few used a virtual representation of the real environment, limiting their domains to planar environments. This thesis provides a framework to achieve real-time diminished reality on an augmented reality headset. Using state-of-the-art hardware, we combine a virtual representation of the real environment with inpainting to remove existing objects from complex environments. Our work is found to be competitive with previous results, with a similar qualitative outcome under the limitations of available technology. Additionally, by implementing new texturing algorithms, a more detailed representation of the real environment is achieved.
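The inpainting step at the heart of a diminished-reality pipeline can be illustrated with a toy diffusion scheme: masked pixels are repeatedly replaced by the average of their neighbours until the hole blends with its surroundings. This is a much-simplified stand-in for the thesis's method, shown only to make the idea concrete.

```python
import numpy as np

def inpaint(img, mask, iters=500):
    """Toy diffusion-based inpainting.

    Masked pixels evolve toward the average of their 4-neighbours
    (a discrete Laplace smoothing); unmasked pixels are fixed.
    """
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()  # crude initial guess for the hole
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4
        out[mask] = avg[mask]      # only masked pixels are updated
    return out

# "Remove" a bright object from a flat grey background:
img = np.full((16, 16), 100.0)
img[6:10, 6:10] = 255.0            # the object to diminish
mask = img == 255.0                # pixels to fill in
restored = inpaint(img, mask)
print(np.allclose(restored, 100.0, atol=1e-3))  # True
```

Real systems replace this smoothing with structure-aware inpainting and, as in the thesis, use a virtual model of the environment to fill holes with plausible geometry rather than flat color.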
9

Ens, Barrett. "Spatial Analytic Interfaces." ACM, 2014. http://hdl.handle.net/1993/31595.

Full text
Abstract:
We propose the concept of spatial analytic interfaces (SAIs) as a tool for performing in-situ, everyday analytic tasks. Mobile computing is now ubiquitous and provides access to information at nearly any time or place. However, current mobile interfaces do not easily enable the type of sophisticated analytic tasks that are now well-supported by desktop computers. Conversely, desktop computers, with large available screen space to view multiple data visualizations, are not always available at the ideal time and place for a particular task. Spatial user interfaces, leveraging state-of-the-art miniature and wearable technologies, can potentially provide intuitive computer interfaces that deal with the complexity needed to support everyday analytic tasks. These interfaces can be implemented with versatile form factors that provide mobility for doing such taskwork in situ, that is, at the ideal time and place. We explore the design of spatial analytic interfaces for in-situ analytic tasks that leverage the benefits of an upcoming generation of lightweight, see-through, head-worn displays. We propose how such a platform can meet the five primary design requirements for personal visual analytics: mobility, integration, interpretation, multiple views and interactivity. We begin with a design framework for spatial analytic interfaces based on a survey of existing designs of spatial user interfaces. We then explore how to best meet these requirements through a series of design concepts, user studies and prototype implementations. Our result is a holistic exploration of the spatial analytic concept on a head-worn display platform.
October 2016
10

Pederson, Thomas. "From Conceptual Links to Causal Relations — Physical-Virtual Artefacts in Mixed-Reality Space." Doctoral thesis, Umeå : Univ, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-137.

Full text
11

Pouke, M. (Matti). "Augmented virtuality: transforming real human activity into virtual environments." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526208343.

Full text
Abstract:
The topic of this work is the transformation of real-world human activity into virtual environments. More specifically, it concerns the process of identifying various aspects of visible human activity with sensor networks and studying the different ways the identified activity can be visualized in a virtual environment. The transformation of human activities into virtual environments is a rather new research area. While there is existing research on sensing and visualizing human activity in virtual environments, it usually focuses on a specific type of human activity, such as basic actions and locomotion. However, different types of sensors can provide very different human activity data, as well as lend themselves to very different use cases. This work is among the first to study the transformation of human activities on a larger scale, comparing various types of transformations from multiple theoretical viewpoints. It utilizes constructs built for use cases that require the transformation of human activity for various purposes. Each construct is a mixed reality application that utilizes a different type of source data and visualizes human activity in a different way. The constructs are evaluated from practical as well as theoretical viewpoints. The results imply that different types of activity transformations have significantly different characteristics. The most distinct theoretical finding is that there is a relationship between the level of detail of the transformed activity, the specificity of the sensors involved, and the extent of world knowledge required to transform the activity. The results also provide novel insights into using human activity transformations for various practical purposes. Transformations are evaluated as control devices for virtual environments, as well as in the context of visualization and simulation tools in elderly home care and urban studies.
12

Bambušek, Daniel. "User Interface for ARTable and Microsoft Hololens." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-386023.

Full text
Abstract:
This thesis focuses on the usability of the Microsoft HoloLens augmented reality headset in a prototype workspace for human-robot collaboration, the "ARTable". The use of the headset is demonstrated through a user interface that helps users understand the ARTable system better and faster. It makes it possible to visualize learned programs spatially, without having to run the robot itself. The user is guided through each program by a 3D animation and by the device's voice, which helps them form a clear idea of what would happen if the program were run directly on the robot. The implemented solution can also interactively guide the user through the whole process of programming the robot. Among other things, the headset can display valuable spatial information, such as the robot's vision, i.e., highlighting the objects the robot has detected.
13

Juntunen, J. (Johan). "A cyber-physical system with a mixed reality interface." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201606092492.

Full text
Abstract:
This thesis presents a cyber-physical system that mirrors and augments real-world devices and their functionality in a virtual representation of the real world. The energy characteristics of real-world devices are measured and presented in the virtual reality interface in real time. The virtual reality interface augments the functionality of the real-world system with virtual inputs and outputs. The system is designed and implemented using a multi-agent software model. The challenge of keeping the real and virtual worlds synchronous and consistent is solved by introducing a synchronisation block. The synchronisation and consistency of the real and virtual worlds were evaluated from a technical perspective. Synchronisation was evaluated with timing measurements, which revealed that the system operated most of the time without breaking its timing conditions; still, occasional larger delays showed that the system was not synchronous at all times. Consistency was verified with empirical measurements of visually observable states. The system's responses to user actions were mainly consistent, although a malfunction leading to inconsistency between the states of the virtual and real devices was found. The results demonstrate that the real-world devices and their corresponding virtual representations are controlled synchronously and that their visually observable states remain consistent. The thesis shows that it is feasible to augment a cyber-physical system with virtual objects, for example virtual sensors, which operate and interact synchronously with the real-world system. The thesis concludes with a discussion of findings and future work.
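The timing evaluation described above can be sketched as a simple check on event logs: compare the timestamp at which a physical device changed state with the timestamp at which its virtual twin reflected the change, against a deadline. The names and the 200 ms bound below are illustrative assumptions, not the thesis's actual parameters.

```python
# Hypothetical sketch of a synchronisation timing check for a
# cyber-physical system with mirrored real and virtual devices.
from dataclasses import dataclass

@dataclass
class SyncEvent:
    device: str
    real_ts: float     # seconds: when the physical device changed state
    virtual_ts: float  # seconds: when the virtual twin reflected it

def missed_deadline(events, max_lag=0.2):
    """Return the events whose real-to-virtual propagation delay
    exceeded the allowed lag (a timing-condition violation)."""
    return [e for e in events if abs(e.virtual_ts - e.real_ts) > max_lag]

log = [
    SyncEvent("lamp",  10.00, 10.05),  # 50 ms lag: within bounds
    SyncEvent("meter", 11.00, 11.35),  # 350 ms lag: violation
]
print([e.device for e in missed_deadline(log)])  # ['meter']
```

In a running system the same check would feed the synchronisation block, flagging devices whose virtual state has drifted so it can be re-synchronised.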
14

García, Sanjuan Fernando. "CREAME: CReation of Educative Affordable Multi-surface Environments." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/101942.

Full text
Abstract:
Collaborative serious games have a positive impact on behavior and learning, but the majority are still being developed for traditional technological platforms, e.g., video consoles and desktop/laptop computers, which several studies have deemed suboptimal for children. Instead, the use of handheld devices such as tablets and smartphones presents several advantages: they are affordable, widespread, and mobile, which enables physical activity and lets users engage in a game without gathering at a fixed, dedicated location. Moreover, combining several of these devices and coordinating interactions across them in what is called a Multi-Display Environment (MDE) brings additional benefits to collaboration, such as higher scalability, workspace awareness, parallelism, and fluidity of interaction. How to interact with these multi-tablet environments is therefore a critical issue. Mobile devices are designed to be operated mainly via touch, which is very straightforward but usually limited to the small area of the displays, which can lead to occlusion of the screen and underuse of the peripheral space. For this reason, this thesis focuses on exploring another interaction mechanism that can complement touch: tangible around-device interactions. Tangible interactions are based on the manipulation of physical objects, which have added value in childhood education as they resonate with traditional learning manipulatives and enable exploration of the physical world. 
The exploitation of the space surrounding the displays also has several potential benefits for collaborative-learning activities: reduced on-screen occlusion (which may increase workspace awareness), the use of tangible objects as containers of digital information that can be seamlessly moved across devices, and the identification of a given student by encoding their ID in a tangible manipulator (which facilitates tracking their actions and progress throughout the game). This thesis describes two different approaches to building collaborative-learning games for MDEs using tangible around-device interactions. One, called MarkAirs, is a mid-air optical solution requiring no hardware beyond the tablets other than several printed cardboard cards. The other, Tangibot, introduces a tangibly controlled robot and other physical props into the environment and is based on RFID technology. Both interactions are evaluated: MarkAirs proves usable and undemanding both for adults and for children, and fine-grained gestures above the tablets can be successfully performed with it. Moreover, when applied to collaborative games, it can help reduce screen occlusion and interference among users' actions, a problem that may arise in such settings when only touch interactions are available. A collaborative learning game with MarkAirs is evaluated with primary school children, revealing this mechanism as capable of creating collaborative learning experiences and adding value in terms of user experience, although not in performance. With respect to Tangibot, we show that children from three years of age, and even elderly people with mild cognitive impairment, can collaboratively control a mobile robot with tangible paddles with reasonable precision. Furthermore, it provides a fun experience for children and keeps them in a constant state of flow.
García Sanjuan, F. (2018). CREAME: CReation of Educative Affordable Multi-surface Environments [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/101942
TESIS
APA, Harvard, Vancouver, ISO, and other styles
15

Zhao, Chen. "HUMAN POINT-TO-POINT REACHING AND SWARM-TEAMING PERFORMANCE IN MIXED REALITY." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1607079526355402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Johansson, Daniel. "Convergence in mixed reality-virtuality environments : facilitating natural user behavior." Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-21054.

Full text
Abstract:
This thesis addresses the subject of converging real and virtual environments into a combined entity that can provide physiologically compliant interfaces for the purpose of training. Based on the mobility and physiological demands of dismounted soldiers, the base assumption is that greater immersion means better learning and potentially higher training transfer. As the user can interface with the system in a natural way, more focus and energy can be devoted to training rather than to control itself. Identified requirements on a simulator relating to physical and psychological user aspects are support for unobtrusive and wireless use, high field of view, high-performance tracking, use of authentic tools, the ability to see other trainees, unrestricted movement, and physical feedback. Using only commercially available systems would be prohibitively expensive while not providing a solution fully optimized for the simulator's target group. For this reason, most of the systems composing the simulator are custom made, both to accommodate physiological human aspects and to bring down costs. Using chroma keying, a cylindrical simulator room, and parallax-corrected high-field-of-view video see-through head-mounted displays, the real and virtual realities are mixed. This enables the use of real tools as well as the layering and manipulation of real and virtual objects. Furthermore, a novel omnidirectional floor and an accompanying interface scheme are developed to allow limitless physical walking to be used for virtual translation. A physically confined real space is thereby transformed into an infinite converged environment. The omnidirectional floor regulation algorithm can also provide physical feedback by adjusting velocity in order to synchronize virtual obstacles with the surrounding simulator walls. As an alternative simulator use, an omnidirectional robotic platform has been developed that can match the user's movements. 
This can be utilized to increase situation awareness in telepresence applications.
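The omnidirectional floor regulation described in this abstract adjusts floor velocity to keep the walking user inside the confined simulator room. As a minimal, hypothetical sketch (the thesis's actual regulation algorithm is not given here; the proportional gain and speed cap below are invented parameters), such a controller might look like:

```python
def floor_velocity_command(user_pos, gain=1.5, v_max=2.0):
    """Proportional floor regulation: command a belt velocity that pulls the
    user back toward the room's center, clamped to the floor's maximum speed.

    user_pos: (x, y) displacement of the user from the room center, in meters.
    Returns the (vx, vy) floor velocity command in meters per second.
    """
    vx = max(-v_max, min(v_max, gain * user_pos[0]))
    vy = max(-v_max, min(v_max, gain * user_pos[1]))
    return (vx, vy)

# A user 0.5 m off-center is pulled back proportionally;
# a large offset saturates at the floor's speed limit.
small_offset = floor_velocity_command((0.5, -0.2))
large_offset = floor_velocity_command((10.0, 0.0))
```

Synchronizing virtual obstacles with the simulator walls, as the abstract mentions, would then amount to superimposing a time-varying offset on this same velocity command.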
APA, Harvard, Vancouver, ISO, and other styles
17

Redfearn, Brady Edwin. "Rapid Design and Prototyping Methods for Mobile Head-Worn Mixed Reality (MR) Interface and Interaction Systems." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82056.

Full text
Abstract:
As Mixed Reality (MR) technologies become more prevalent, it is important for researchers to design and prototype the kinds of user interface and user interactions that are most effective for end-user consumers. Creating these standards now will aid in technology development and adoption in MR overall. In the current climate of this domain, however, the interface elements and user interaction styles are unique to each hardware and software vendor and are generally proprietary in nature. This results in confusion for consumers. To explore the MR interface and interaction space, this research employed a series of standard user-centered design (UCD) methods to rapidly prototype 3D head-worn display (HWD) systems in the first responder domain. These methods were performed across a series of 13 experiments, resulting in an in-depth analysis of the most effective methods experienced herein and providing suggested paths forward for future researchers in 3D MR HWD systems. Lessons learned from each individual method and across all of the experiments are shared. Several characteristics are defined and described as they relate to each experiment, including interface, interaction, and cost.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
18

Orliac, Charlotte. "Modèles et outils pour la conception de Learning Games en Réalité Mixte." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00952892.

Full text
Abstract:
Learning Games are learning environments, often computer-based, that use game mechanics to capture learners' attention and thereby facilitate learning. They have undeniable strengths but also certain limitations, such as overly artificial learning situations. These limitations can be overcome by integrating Mixed Reality interactions into Learning Games, which we then call Mixed Reality Learning Games (MRLGs). Mixed Reality, which combines digital environments with real objects, opens up new possibilities for interaction and learning that erase the previous limitations and that must be identified and explored. In this context, we focus on the design process of MRLGs. First, we present a study of the use of Mixed Reality in the fields of learning and play, including a state of the art of MRLGs. This study shows that, despite many strengths, MRLG design remains difficult to master: no method or tool exists that is suited to designing this type of environment. Second, we analyze and model the MRLG design activity through the literature and through design experiments, including one conducted within the SEGAREM project. This approach reveals specific obstacles such as the lack of support for modeling (or formalization), for creativity, and for checking the consistency of ideas. We inform our answers to these needs with a survey of the tools used in the domains related to MRLGs: learning situations, games, and Mixed Reality environments. This leads us to propose two conceptual tools: a description model for MRLGs (f-MRLG) and creativity aids in the form of suggestions and recommendations. 
The description model aims to formalize all the elements that make up an MRLG, but also to serve as a means of identifying the elements to be defined and of structuring and checking ideas. The lists of suggestions and recommendations aim to help the designer make choices that are consistent with the targeted learning situation, in particular regarding game types and Mixed Reality devices. A first evaluation of these proposals led to their improvement. These proposals drove the design and development of a computerized authoring tool: MIRLEGADEE (Mixed Reality Learning Game DEsign Environment). MIRLEGADEE is based on LEGADEE, an authoring environment for designing Learning Games. An experiment with 20 teachers and training designers validated the soundness of this tool, which effectively guides designers through the early phases of the MRLG design process, despite limitations in supporting complex tasks.
APA, Harvard, Vancouver, ISO, and other styles
19

Mahieux, Pierre. "Interactions tangibles pour naviguer spatialement et temporellement en environnements virtuels. : application à la médiation culturelle en histoire des sciences et techniques." Thesis, École nationale d'ingénieurs de Brest, 2022. https://nuxeo.enib.fr/nuxeo/nxpath/default/default-domain/workspaces/D%C3%A9p%C3%B4t%20des%20th%C3%A8ses@view_documents?tabIds=%3A&old_conversationId=0NXMAIN1.

Full text
Abstract:
Cultural mediation institutions, especially museums, are increasingly using new technologies to attract visitors. On the one hand, Mixed Reality allows visitors to explore reconstructions of past or inaccessible places, and to navigate spatially and temporally within these reconstructions. On the other hand, tangible interfaces are used to provide innovative and engaging interactive experiences. In this thesis we hypothesize that the use of tangible interfaces facilitates spatio-temporal navigation across several scales within Virtual Environments. Our work has two objectives: 1) to propose a model representing space and time on several scales; 2) to propose a tangible interface for navigating these different scales. In response to the first objective, our representation of time and space is based on notions used in the History of Science and Technology and proposes four scales. We build on this model to address the second objective, for which we set up a co-design process involving cultural mediation experts. The result of this approach is SABLIER, a tangible interactor for navigating spatially and temporally within a Virtual Environment.
APA, Harvard, Vancouver, ISO, and other styles
20

Marinelli, Federico. "Un framework per lo sviluppo di interfacce utente innovative in applicazioni hands-free basate su smartglass: ideazione e sperimentazione in un caso di studio." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10998/.

Full text
Abstract:
Moore's law states that "the complexity of a microcircuit, measured for example by the number of transistors per chip, doubles every 18 months." From this premise it follows that information technologies have undergone numerous remarkable evolutions over recent decades. In particular, recent years have seen the appearance of embedded devices that are ever smaller and more portable, to the point of being wearable by the user. These devices have reached computing power comparable to that of a mid-range desktop computer and have radically changed the way an average user interacts with them. The goal of this thesis was initially the study of wearable devices, in particular smartglasses, which are the embedded version of a simple pair of glasses. Specifically, we set out to produce an innovative user interface whose purpose is to simplify the user's interaction with a smartglass device, bringing the usage experience closer to the wearer's normal visual experience, so as to make the use of this kind of technology as simple, useful, and functional as possible. Following this guideline, we also studied a way to interact with these systems through a hands-free human-computer interaction process, i.e., one that does not require the use of the hands. The objective of this thesis project is therefore to conceive and develop a new way of interfacing with these emerging technologies, analogous to what happened in past years with the advent of the smartphone.
APA, Harvard, Vancouver, ISO, and other styles
21

Lennerton, Mark J. "Exploring a chromakeyed augmented virtual environment for viability as an embedded training system for military helicopters." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FLennerton.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 2004.
Thesis advisor(s): Rudolph Darken, Joseph A. Sullivan. Includes bibliographical references (p. 103-104). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
22

Hsu, Wei-Che, and 徐瑋澤. "Applying Mixed Reality to Construction Industry: The Issues in Technology and in Human-Machine Interfaces." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/na3s4m.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Civil Engineering
106
As software has grown more capable and hardware ever faster, Virtual Reality (VR), a technology in development for more than two decades, can finally be put into practice. More and more industries hope to use virtual reality's immersive displays to improve efficiency and solve problems encountered in current projects. Building on virtual reality, two further visualization technologies have emerged: Augmented Reality (AR) and Mixed Reality (MR). This research uses the newer visualization technique, Mixed Reality, to develop functionality and explore its application in the construction industry, so that the industry can draw on our findings in the future. Construction firms usually work from 2D CAD drawings. In recent years, Building Information Modeling (BIM) has developed to address the shortcomings of 2D CAD: 2D CAD can now be converted effectively into 3D models that store numerical data in building objects, letting users query information more easily. However, only large consulting and construction companies have sufficient resources for R&D, and it is difficult for small and medium-sized firms to develop BIM techniques. This research implements a Mixed Reality technique and proposes a development process based on a Revit model loaded into Unity, a game engine. We expect this simpler development mode to reduce the cost of visualization and to offer an alternative path for developing this new technology. The construction life cycle is divided into six phases; this research focuses on the planning and design phase. We concentrate on the interactive functions MR supports and provide functions that let users operate on the model intuitively, including displaying information on each object and designing their own layouts. 
With this Mixed Reality system structure, users can clearly perceive the space they will occupy in the future. In addition, this research explores the opportunities Mixed Reality offers the construction industry. We conducted interviews with different kinds of construction companies, using video and hands-on operation of the software and hardware to explain our work. Based on the experts' feedback, we suggest research directions for follow-up work to improve this Mixed Reality system.
APA, Harvard, Vancouver, ISO, and other styles
23

Manuaba, Ida Bagus Kerthyayana. "Evaluation of gaming environments for mixed reality interfaces and human supervisory control in telerobotics." Phd thesis, 2014. http://hdl.handle.net/1885/11790.

Full text
Abstract:
Telerobotics refers to a branch of technology that deals with controlling a robot from a distance. It is commonly used to access difficult environments, reduce operating costs, and improve comfort and safety. However, difficulties have emerged in telerobotics development. Effective telerobotics requires maximising operator performance, and previous research has identified issues that reduce it, such as operator attention being divided across numerous custom-built interfaces, and continuous operator involvement in high-workload situations potentially causing exhaustion and subsequent operator error. This thesis evaluates mixed reality and human supervisory control concepts in a gaming engine environment for telerobotics. This concept is proposed in order to improve the effectiveness of current telerobotic interface technology. Four experiments are reported in this thesis, covering virtual gaming environments, mixed reality interfaces, and human supervisory control, and aiming to advance telerobotics technology. This thesis argues that gaming environments are useful for building telerobotic interfaces and examines the properties required for telerobotics. A useful feature provided by gaming environments is that of overlaying video on virtual objects to support mixed reality interfaces. Experiments in this thesis show that mixed reality interfaces provide useful information without distracting the operator from the task. This thesis introduces two response models based on the planning process of human supervisory control: the Adaptation and Queue response models. The experimental results show superior user performance under these two response models compared to direct/manual control. In the final experiment, a large number of novice users with a diversity of backgrounds used a robot arm to push blocks into a hole using these two response models. 
Further analysis found that user performance on the interfaces under the two response models was well fitted by a Weibull distribution. Operators preferred the interface with the Queue response model over the one with the Adaptation response model, and human supervisory control over direct/manual control. The sophistication of control commands in a production system can be expected to be greater than in those tested in this thesis, where limited time was available for automation development; where that is the case, the gains in human productivity from human supervisory control found in this experiment can be expected to be larger. The research conducted here has shown that mixed reality in gaming environments, when combined with human supervisory control, offers a good route for overcoming limitations in current telerobotics technology. Practical applications would benefit from these methods, making it possible for the operator to have the necessary information available in a convenient and non-distracting form, considerably improving productivity.
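The Weibull fit reported in this abstract can be illustrated with a stdlib-only sketch. The example below fits synthetic task-completion times by median-rank regression; the shape/scale values and the fitting method are assumptions for illustration, not the thesis's actual data or procedure:

```python
import math
import random

def fit_weibull(samples):
    """Estimate Weibull shape k and scale lam by median-rank regression.

    Linearizing the CDF F(t) = 1 - exp(-(t/lam)**k) gives
    ln(-ln(1 - F)) = k*ln(t) - k*ln(lam), a straight line in ln(t),
    so an ordinary least-squares fit recovers k (slope) and lam.
    """
    n = len(samples)
    xs = [math.log(t) for t in sorted(samples)]
    # Bernard's median-rank approximation of the empirical CDF
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    lam = math.exp(mx - my / k)  # intercept = -k*ln(lam)
    return k, lam

# Synthetic completion times drawn from Weibull(k=1.5, lam=30) by inverse transform
rng = random.Random(7)
data = [30.0 * (-math.log(1.0 - rng.random())) ** (1.0 / 1.5) for _ in range(1000)]
k_hat, lam_hat = fit_weibull(data)  # estimates should land near k=1.5, lam=30
```

A shape parameter k greater than 1, as here, corresponds to completion times whose hazard rises over time, which is the qualitative pattern such performance analyses typically report.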
APA, Harvard, Vancouver, ISO, and other styles
24

Driewer, Frauke. "Teleoperation Interfaces in Human-Robot Teams." Doctoral thesis, 2008. https://nbn-resolving.org/urn:nbn:de:bvb:20-opus-36351.

Full text
Abstract:
This work deals with teams in teleoperation scenarios, where one human team partner (the supervisor) guides and controls multiple remote entities (either robotic or human) and coordinates their tasks. Such a team needs an appropriate infrastructure for sharing information and commands. The robots need a level of autonomy that matches the assigned task, and the humans in the team have to be provided with autonomous support, e.g., for information integration. The design and capabilities of the human-robot interfaces strongly influence the performance of the team as well as the subjective experience of the human team partners. Here, it is important to elaborate the information demand as well as how information is presented. Such human-robot systems need to allow the supervisor to gain an understanding of what is going on in the remote environment (situation awareness) by providing the necessary information. This includes fast assessment of the robot's or remote human's state. Processing, integration, and organization of data, together with suitable autonomous functions, support decision making and task allocation and help decrease the workload in this multi-entity teleoperation task. Interaction between humans and robots is improved by a common world model and responsive systems and robots. The remote human profits from a simplified user interface providing exactly the information needed for the task at hand. The topic of this thesis is the investigation of such teleoperation interfaces in human-robot teams, especially for high-risk, time-critical, and dangerous tasks. The aim is to provide a suitable human-robot team structure and to analyze the demands on the user interfaces. On the one hand, the theoretical background (model, interactions, and information demand) is examined; on the other, real implementations of the system, robots, and user interfaces are presented and evaluated as testbeds for the claimed requirements. 
Rescue operations, more precisely fire-fighting, were chosen as the exemplary application scenario for this work. The challenges in such scenarios are high (highly dynamic environments, high risk, time criticality, etc.), and it can be expected that the results transfer to other applications with less strict requirements. The present work contributes to the introduction of human-robot teams in task-oriented scenarios, such as working in high-risk domains, e.g., fire-fighting. It covers the theoretical background of the required system, the analysis of related human-factors concepts, and discussions on implementation. An emphasis is placed on user interfaces, their design, requirements, and user testing, as well as on the techniques used (three-dimensional sensor data representation, mixed reality, and user interface design guidelines). Further, the potential integration of 3D sensor data and visualization on stereo visualization systems are introduced.
APA, Harvard, Vancouver, ISO, and other styles
25

Johnson, David. "MusE-XR: musical experiences in extended reality to enhance learning and performance." Thesis, 2019. http://hdl.handle.net/1828/10988.

Full text
Abstract:
Integrating state-of-the-art sensory and display technologies with 3D computer graphics, extended reality (XR) affords capabilities to create enhanced human experiences by merging virtual elements with the real world. To better understand how Sound and Music Computing (SMC) can benefit from the capabilities of XR, this thesis presents novel research on the design of musical experiences in extended reality (MusE-XR). Integrating XR with research on computer assisted musical instrument tutoring (CAMIT) as well as New Interfaces for Musical Expression (NIME), I explore the MusE-XR design space to contribute to a better understanding of the capabilities of XR for SMC. The first area of focus in this thesis is the application of XR technologies to CAMIT enabling extended reality enhanced musical instrument learning (XREMIL). A common approach in CAMIT is the automatic assessment of musical performance. Generally, these systems focus on the aural quality of the performance, but emerging XR related sensory technologies afford the development of systems to assess playing technique. Employing these technologies, the first contribution in this thesis is a CAMIT system for the automatic assessment of pianist hand posture using depth data. Hand posture assessment is performed through an applied computer vision (CV) and machine learning (ML) pipeline to classify a pianist's hands captured by a depth camera into one of three posture classes. Assessment results from the system are intended to be integrated into a CAMIT interface to deliver feedback to students regarding their hand posture. One method to present the feedback is through real-time visual feedback (RTVF) displayed on a standard 2D computer display, but this method is limited by a need for the student to constantly shift focus between the instrument and the display. 
XR affords new methods to potentially address this limitation through capabilities to directly augment a musical instrument with RTVF by overlaying 3D virtual objects on the instrument. Due to limited research evaluating effectiveness of this approach, it is unclear how the added cognitive demands of RTVF in virtual environments (VEs) affect the learning process. To fill this gap, the second major contribution of this thesis is the first known user study evaluating the effectiveness of XREMIL. Results of the study show that an XR environment with RTVF improves participant performance during training, but may lead to decreased improvement after the training. On the other hand, interviews with participants indicate that the XR environment increased their confidence, leading them to feel more engaged during training. In addition to enhancing CAMIT, the second area of focus in this thesis is the application of XR to NIME enabling virtual environments for musical expression (VEME). Development of VEME requires a workflow that integrates XR development tools with existing sound design tools. This presents numerous technical challenges, especially to novice XR developers. To simplify this process and facilitate VEME development, the third major contribution of this thesis is an open source toolkit, called OSC-XR. OSC-XR makes VEME development more accessible by providing developers with readily available Open Sound Control (OSC) virtual controllers. I present three new VEMEs, developed with OSC-XR, to identify affordances and guidelines for VEME design. The insights gained through these studies exploring the application of XR to musical learning and performance lead to new affordances and guidelines for the design of effective and engaging MusE-XR.
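As a rough illustration of the kind of message an OSC virtual controller emits, the sketch below encodes a single-float Open Sound Control message following the OSC 1.0 wire format (NUL-padded address, type tag string, big-endian float32). The address `/slider/1` is a hypothetical example, not an address defined by the OSC-XR toolkit.

```python
import struct

def _osc_string(s: str) -> bytes:
    """OSC string: ASCII bytes, NUL-terminated, padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_float_message(address: str, value: float) -> bytes:
    """Encode one OSC message carrying a single float argument:
    address pattern, type tag string ',f', then a big-endian float32."""
    return _osc_string(address) + _osc_string(",f") + struct.pack(">f", value)

# A virtual slider update would be one such datagram, sent over UDP
# to the sound engine listening on an agreed port.
msg = osc_float_message("/slider/1", 0.5)
```

In practice a toolkit like OSC-XR would wrap this encoding and transport behind its controller objects; the point of the sketch is only the wire format that connects the VE to existing sound design tools.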
Graduate
APA, Harvard, Vancouver, ISO, and other styles
26

Silva, Hugo João Leitão. "Mixed reality application to support building maintenance." Master's thesis, 2018. http://hdl.handle.net/10071/18793.

Full text
Abstract:
This dissertation presents two mixed reality (MR) applications developed for the head-mounted display Microsoft HoloLens (MH) – InOffice and InSitu – which support building maintenance tasks in buildings with complex infrastructure. These solutions are intended to help maintenance workers when they need to track and fix part of the infrastructure by revealing hidden components, displaying additional information and guiding them through complex tasks. The applications have the potential to improve maintenance workers' performance, as they help them work faster and with higher accuracy. The work presented explores the creation of the applications and discusses the methodologies used to build user-friendly tools. Both MR applications were tested with active maintenance professionals, and results revealed that each solution is useful for supporting building maintenance in different types of situations. InOffice, which displays an interactive, reduced-scale version of the building being maintained, is suited to off-site work, planning and remote assistance. In InSitu the user visualizes a 1:1 scaled hologram of the building aligned with the real world, making it better suited for maintenance that requires manual tasks on site. The methodology was based on design science research: an improvement need, and not necessarily a problem, was identified, and from there a solution was conceived. MR is being applied successfully in several areas, and this work can inform many future solutions using mixed reality or MH to build novel and better applications that improve tasks in work or domestic environments.
APA, Harvard, Vancouver, ISO, and other styles
27

Chang, Yi-Jia, and 張益嘉. "Integrating Motion-Sensing User Interface into Spatial Perception Learning based on Mixed-Reality Environment." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/37213291550422084238.

Full text
Abstract:
Master's thesis
National Hsinchu University of Education
Graduate Institute of e-Learning Technology
Academic year 101 (2012–2013)
Enhancing one's spatial perception is key to exploring the world, and learning outside the classroom is the best way to experience the world and develop that perception. However, outside learning is a time-consuming and costly activity when travel is involved, which puts experiencing the world out of reach for most students. This thesis constructs a learning platform combining a three-dimensional GIS with Augmented Reality technology, which makes the learning environment resemble the real world. Students can enhance their spatial perception while travelling through this platform and learn about Taiwan's railway and its stations. Moreover, a Motion-Sensing User Interface designed around flow experience is provided so students can operate the system and become immersed in the learning activity. Learners in this study were divided into an experimental group, which used the Motion-Sensing User Interface, and a control group, which used a keyboard and mouse. Spatial perception improved in both groups, and a flow-experience questionnaire showed that learning through the Motion-Sensing User Interface helped students become more immersed in the learning activities than students using the keyboard and mouse.
APA, Harvard, Vancouver, ISO, and other styles
