Dissertations / Theses on the topic 'Virtual and mixed reality'

Consult the top 50 dissertations / theses for your research on the topic 'Virtual and mixed reality.'

1

Filip, Mori. "A 2D video player for Virtual Reality and Mixed Reality." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217359.

Abstract:
While 360-degree video has recently been an object of research, 2D flat-frame videos in virtual environments (VEs) seemingly have not received the same amount of attention. Specifically, 2D video playback in Virtual Reality (VR) and Mixed Reality (MR) appears to lack exploration in both features and qualities of resolution, audio and interaction, which ultimately contribute to presence. This paper reflects on the definitions of Virtual Reality and Mixed Reality, while extending the known concepts of immersion and presence to 2D videos in VEs. Relevant attributes of presence that can be applied to 2D videos were then investigated in the literature. The main problem was to find out the components and processes of the playback software in VR and MR, with company-requested features and delimitations in consideration, and possibly how to adjust those components to induce a greater presence primarily within the 2D video and secondarily within the VE, although these mediums of visual information are related and thus influence each other. The thesis work took place at Advrty, a company developing a brand advertising platform for VR and MR. The exploration and testing of the components was done through a first increment of creating a basic standalone 2D video player, then through a second increment of implementing a video player into VR and MR. Comparisons were made between the proof-of-concept video players in VR and MR and the standalone video player. The results of the study show a feasible way of making a video player for VR and MR. In the discussion of the work, the use of open-source libraries in commercial software, the technical limitations of current VR and MR head-mounted displays (HMDs), relevant presence-inducing attributes, and the choice of method were reflected upon.
2

Koleva, Boriana. "The properties of mixed reality boundaries." Thesis, University of Nottingham, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391397.

3

Kadavasal, Sivaraman Muthukkumar. "Virtual reality based multi-modal teleoperation using mixed autonomy." [Ames, Iowa : Iowa State University], 2009.

4

Hilton, David. "Virtual reality and stroke rehabilitation : a mixed reality simulation of an everyday task." Thesis, University of Nottingham, 2008. http://eprints.nottingham.ac.uk/13252/.

Abstract:
This thesis is about the process of designing a computer simulation as a treatment tool for stroke rehabilitation. A stroke is a debilitating disease that is characterised by focal neural damage usually leading to physical and cognitive impairments. These impairments may severely compromise the stroke survivor's ability to perform everyday tasks of self-care such as dressing, washing and preparing meals. Safety issues are also an important consideration for the rehabilitation of the stroke survivor. Some everyday tasks can be hazardous, particularly when electrical equipment or hot liquids are involved. Computer simulations are gaining interest as a tool for stroke rehabilitation because they offer a means to replicate assessments and everyday tasks within ecologically valid environments. Training the motor skills required to perform everyday tasks together with the cognitive component of the activity is desirable; however, this is not always achieved due to the limitations of the human-computer interface. These limitations are addressed by a simulation that is presented in this thesis. Stakeholders in stroke care contributed to the design and development of the simulation in order to ensure that it conformed to their requirements. The development culminated in a mixed reality system with a unique method of interaction in which real household objects were monitored by various electronic sensing technologies. The purpose of controlling the computer simulation using real objects was to encourage users to practice an everyday task (making a hot drink) using naturalistic upper limb movement whilst performing the task in a safe and controlled environment. The role of the computer was to monitor and score users' progress, and to intervene with prompts and demonstrations as required. The system was installed on a hospital stroke unit and tested by patients, something that had previously not been achieved. It was found to be acceptable and usable as a means of practicing making a hot drink. The system design, limitations and recommendations for future developments are discussed.
5

Savage, Ruthann. "TRAINING WAYFINDING: NATURAL MOVEMENT IN MIXED REALITY." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3929.

Abstract:
The Army needs a distributed training environment that can be accessed whenever and wherever required for training and mission rehearsal. This paper describes an exploratory experiment designed to investigate the effectiveness of a prototype of such a system in training a navigation task. A wearable computer, acoustic tracking system, and see-through head mounted display (HMD) were used to wirelessly track users' head position and orientation while presenting a graphic representation of their virtual surroundings, through which the user walked using natural movement. As previous studies have shown that virtual environments can be used to train navigation, the ability to add natural movement to a type of virtual environment may enhance that training, based on the proprioceptive feedback gained by walking through the environment. Sixty participants were randomly assigned to one of three conditions: route drawing on a printed floor plan, rehearsal in the actual facility, and rehearsal in a mixed reality (MR) environment. Participants, divided equally between male and female in each group, studied verbal directions of the route, then performed three rehearsals of the route, with those in the map condition drawing it onto three separate printed floor plans, those in the practice condition walking through the actual facility, and participants in the MR condition walking through a three dimensional virtual environment, with landmarks, waypoints and virtual footprints. A scaling factor was used, with each step in the MR environment equal to three steps in the real environment, with the MR environment also broken into "tiles", like pages in an atlas, through which participants progressed, entering each tile in succession until they completed the entire route. Transfer-of-training testing, which consisted of a timed traversal of the route through the actual facility, showed a significant difference in route knowledge based on the total time to complete the route, and the number of errors committed while doing so, with "walkers" performing better than participants in the paper map or MR condition, although the effect was weak. Survey knowledge showed little difference among the three rehearsal conditions. Three standardized tests of spatial abilities did not correlate with route traversal time, or errors, or with 3 of the 4 orientation localization tasks. Within the MR rehearsal condition there was a clear performance improvement over the three rehearsal trials as measured by the time required to complete the route in the MR environment, which was accepted as an indication that learning occurred. As measured using the Simulator Sickness Questionnaire, there were no incidents of simulator sickness in the MR environment. Rehearsal in the actual facility was the most effective training condition; however, it is often not an acceptable form of rehearsal given an inaccessible or hostile environment. Performance between participants in the other two conditions was indistinguishable, pointing toward continued experimentation that should include the combined effect of paper map rehearsal with mixed reality, especially as it is likely to be the more realistic case for mission rehearsal, since there is no indication that maps should be eliminated.
Walking through the environment beforehand can enhance the Soldiers' understanding of their surroundings, as was evident through the comments from participants as they moved from MR to the actual space: "This looks like I was just here", and "There's that pole I kept having trouble with". Such comments lead one to believe that this is a tool to continue to explore and apply. While additional research on the scaling and tiling factors is likely warranted, to determine if the effect can be applied to other environments or tasks, it should be pointed out that this is not a new task for most adults who have interacted with maps, where a scaling factor of 1 to 15,000 is common in orienteering maps, and 1 to 25,000 in military maps. Rehearsal time spent in the MR condition varied widely, some of which could be blamed on an issue referred to as "avatar excursions", a system anomaly that should be addressed in future research. The proprioceptive feedback in MR was expected to positively impact performance scores. It is very likely that proprioceptive feedback is what led to the lack of simulator sickness among these participants. The design of the HMD may have aided in the minimal reported symptoms as it allowed participants some peripheral vision that provided orientation cues as to their body position and movement. Future research might include a direct comparison between this MR and a virtual environment system through which users move by manipulating an input device such as a mouse or joystick, while physically remaining stationary. The exploration and confirmation of the training capabilities of MR is an important step in the development and application of the system to the U.S. Army training mission. This experiment was designed to examine one potential training area in a small controlled environment, which can be used as the foundation for experimentation with more complex tasks such as wayfinding through an urban environment, and/or in direct comparison to more established virtual environments to determine strengths, as well as areas for improvement, to make MR an effective addition to the Army training mission.
Ph.D., Department of Psychology
6

Santos, Lages Wallace. "Walk-Centric User Interfaces for Mixed Reality." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84460.

Abstract:
Walking is a natural part of our lives and is also becoming increasingly common in mixed reality. Wireless headsets and improved tracking systems allow us to easily navigate real and virtual environments by walking. In spite of the benefits, walking brings challenges to the design of new systems. In particular, designers must be aware of cognitive and motor requirements so that walking does not negatively impact the main task. Unfortunately, those demands are not yet fully understood. In this dissertation, we present new scientific evidence, interaction designs, and analysis of the role of walking in different mixed reality applications. We evaluated the difference in performance of users walking vs. manipulating a dataset during visual analysis. This is an important task, since virtual reality is increasingly being used as a way to make sense of progressively complex datasets. Our findings indicate that neither option is absolutely better: the optimal design choice should consider both the user's experience with controllers and the user's inherent spatial ability. Participants with reasonable game experience and low spatial ability performed better using the manipulation technique. However, we found that walking can still enable higher performance for participants with low spatial ability and without significant game experience. In augmented reality, specifying points in space is an essential step to create content that is registered with the world. However, this task can be challenging when information about the depth or geometry of the target is not available. We evaluated different augmented reality techniques for point marking that do not rely on any model of the environment. We found that triangulation by physically walking between points provides higher accuracy than purely perceptual methods. However, precision may be affected by head pointing tremors. To increase the precision, we designed a new technique that uses multiple samples to obtain a better estimate of the target position. This technique can also be used to mark points while walking. The effectiveness of this approach was demonstrated with a controlled augmented reality simulation and actual outdoor tests. Moving into the future, augmented reality will eventually replace our mobile devices as the main method of accessing information. Nonetheless, to achieve its full potential, augmented reality interfaces must support the fluid way we move in the world. We investigated the potential of adaptation in achieving this goal. We conceived and implemented an adaptive workspace system, based on a study of the design space and on contextual user studies. Our final design consists of a minimal set of techniques to support mobility and integration with the real world. We also identified a set of key interaction patterns and desirable properties of adaptation-based techniques, which can be used to guide the design of the next-generation walking-centered workspaces.
Ph. D.
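The point-marking result above, triangulating a target by sighting it from two positions reached by walking, can be illustrated with a small geometric sketch. This is a generic closest-point-between-two-rays computation, not code from the dissertation; the function name and the sample positions are illustrative assumptions.

```python
import numpy as np

def triangulate_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two sighting rays (point + direction)."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # approaches 0 as the rays become parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two sightings of a target near (5, 0, 1.6) m, taken a few metres of walking apart.
pos_a, pos_b = np.array([0.0, 0.0, 1.6]), np.array([2.0, 3.0, 1.6])
target = np.array([5.0, 0.0, 1.6])
print(triangulate_rays(pos_a, target - pos_a, pos_b, target - pos_b))  # ~[5. 0. 1.6]
```

In practice each ray would come from a tracked head pose, and head-pointing tremor would be reduced by averaging many such estimates, as the abstract describes.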
7

Guidi, Eleonora. "Ambiente di Mixed Reality per l'insegnamento della medicina veterinaria." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12203/.

Abstract:
The project thesis described in this volume originates from an academic project developed in collaboration with lecturers of the Faculty of Veterinary Medicine of the University of Bologna. The objective of the thesis is the development of a prototype of a system for creating a virtual and interactive e-learning environment for educational use. The implemented project made it possible to create a Mixed Reality-based system that uses 360° photographic images of the barn of the Faculty of Veterinary Medicine, onto which animated 3D models of animals are placed. The user can thus visit the farm, moving through the various environments in virtual tour mode.
8

Karlgren, Kasper. "Perceived physical presence in Mixed reality embodiment vs Augmented reality robot interaction." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-265568.

Abstract:
This thesis presents a novel interaction model using mixed reality to simulate robot-human interaction; a clay embodiment is overlaid with animated facial features using mobile augmented reality. One of the challenges when building a social agent, whether it is for education or solely social interaction, is to achieve social presence. One way to increase the feeling of presence is to have the agent physically embodied by using a robot. Earlier research has found that users listen more to robots that are present than to robots that are presented through a screen. But there are problems that come with robots that are not yet solved. Robots are expensive, they break, they are hard to update and they are very limited to the realm and problems they are built for: even standing up can be a challenge. This thesis tests whether embodiment can be used as a tool to heighten presence even if the robot and the interaction are only present on a screen. The clay embodiment is built by hand and later 3D scanned. The clay embodiment is tracked using Vuforia's object recognition of the scan and is given an animatable face in a mixed reality setting through Unity. The comparison interaction, which forms the basis of evaluation, consists of a fully virtual robot head placed in 3D space using ground plane tracking. These interactions are compared separately and test subjects are only exposed to one type of interaction. In the study, the participants interacting with the clay embodiment rated the experience higher with respect to physical presence and showed a better ability to recall details than those in the fully augmented robot-human interaction. The results were significant and indicate, with the reservation of possible false positives given the small participant sample, that mobile augmented reality agent interactions are improved, with respect to attention allocation and physical presence, by the use of mixed reality embodiments. Overall the interaction was very well perceived. Both conditions were highly enjoyed and critique mostly focused on the lack of complexity in the dialogue: the participants wanted more. Initial positive feedback suggests that this can and should be tested further.
9

Göbel, Gunther, and Ralph Sonntag. "Erfahrungen zur Nutzung von Mixed und Virtual Reality im Lehralltag an der HTW Dresden." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234638.

Abstract:
The use of immersive systems, that is, Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) systems, in teaching is an obvious step. One's own interactive experience of an activity is always preferable to purely receptive observation or verbal explanation. Nevertheless, today's teaching is to a very large extent passive, even in practical courses and exercises; independent execution, such as operating a plant or synthesising a chemical oneself, can often only rarely be employed for reasons of time, availability, safety concerns and cost. Until now, the use of the above-mentioned new immersive technologies has been hindered not only by the considerable effort required to create the corresponding simulations; above all, the hardware effort combined with a less than optimal degree of immersiveness left hardly any options open. Giving each student individually sufficient time in an expensive and large CAVE environment to operate technical plants virtually is unsuitable for larger numbers of students. [... from the introduction]
10

Vadruccio, Alessandro. "Mixed Reality techniques applied to a Virtual and Georeferenced tour for displaying street art content." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15144/.

Abstract:
In recent years the use of technology has increased considerably, with a parallel increase in the data created. This growth has led to the need for new technologies that allow us to represent the collected data in a real-world context. Mixed Reality is a technology that makes it possible to insert virtual objects into the real environment around us. HoloLens is a device made by Microsoft that allows Mixed Reality applications to run. Thanks to this technology it is possible to map the surrounding environment in order to create virtual surfaces on which virtual objects can be placed. In this thesis work we first carried out a study of the potential and limits of the device and subsequently developed an application to test the device's performance in a real case. The application allows a virtual tour of the street art content of the city of Bologna. It can be used in two ways. The first lets the user view images of graffiti selected from a three-dimensional map. The second allows HoloLens to be used in the urban area in order to view street art works that have been removed, directly on the wall on which they were located.
11

Sela, Sebastian, and Elliot Gustafsson. "Interactive Visualization of Underground Infrastructures via Mixed Reality." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-39771.

Abstract:
Visualization of underground infrastructures, such as pipes and cables, can be useful for infrastructure providers and can be utilized for both planning and maintenance. The purpose of this project is therefore to develop a system that provides interactive visualization of underground infrastructures using mixed reality. This requires positioning the user and virtual objects outdoors, as well as optimizing the system for outdoor use. To accomplish these, GPS coordinates must be known so the system is capable of accurately drawing virtual underground infrastructures in real time in relation to the real world. To get GPS data into the system, a lightweight web server written in Python was developed to run on GPS-enabled Android devices, which responds to a given HTTP request with the current GPS coordinates of the device. A mixed reality application was developed in Unity and written in C# for the Microsoft HoloLens. This requests the coordinates via HTTP in order to draw virtual objects, commonly called holograms, representing the underground infrastructure. The application uses the Haversine formula to calculate distances using GPS coordinates. Data, including GPS coordinates, pertaining to real underground infrastructures have been provided by Halmstad Energi och Miljö. The result is therefore a HoloLens application which, in combination with a Python script, draws virtual objects based on real data (type of structures, size, and their corresponding coordinates) to enable the user to view the underground infrastructure. The user can customize the experience by choosing to display certain types of pipes, or changing the chosen navigational tool. Users can also view the information of valves, such as their ID, type, and coordinates. Although the developed application is fully functional, the visualization of holograms with HoloLens outdoors is problematic because of the brightness of natural light affecting the application's visibility, and lack of points for tracking of its surroundings causing the visualization to be wrongly displayed.
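The distance computation mentioned above can be sketched with the standard haversine formula. This is a generic Python version, not the authors' Unity/C# code; the function name, Earth-radius constant, and sample coordinates are illustrative assumptions.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude pairs (degrees)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Placeholder coordinates of two nearby points; prints roughly 190 m.
print(haversine_m(56.6745, 12.8570, 56.6750, 12.8600))
```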
12

Lucas, Stephen 1985. "Virtual Stage: Merging Virtual Reality Technologies and Interactive Audio/Video." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984124/.

Abstract:
Virtual Stage is a project to use Virtual Reality (VR) technology as an audiovisual performance interface. The depth of control, modularity of design, and user immersion aim to solve some of the representational problems in interactive audiovisual art and the control problems in digital musical instruments. Creating feedback between interaction and perception, the VR environment references the viewer's behavioral intuition developed in the real world, facilitating clarity in the understanding of artistic representation. The critical essay discusses interactive behavior, game mechanics, interface implementations, and technical developments to express the structures and performance possibilities. This discussion uses Virtual Stage as an example with specific aesthetic and technical solutions, but addresses archetypal concerns in interactive audiovisual art. The creative documentation lists the interactive functions present in Virtual Stage as well as code reproductions of selected technical solutions. The included code excerpts document novel approaches to virtual reality implementation and acoustic physical modeling of musical instruments.
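The thesis's own code excerpts are not reproduced here; as a generic illustration of what acoustic physical modeling of an instrument can look like, a minimal Karplus-Strong plucked-string sketch follows. The function name and parameter values are assumptions, not the author's implementation.

```python
import numpy as np

def karplus_strong(freq=220.0, duration=1.0, sample_rate=44100, decay=0.996):
    """Plucked-string model: a noise burst circulating through a damped delay line."""
    n = int(sample_rate / freq)                # delay-line length sets the pitch
    buf = np.random.uniform(-1.0, 1.0, n)      # initial excitation (the "pluck")
    out = np.empty(int(duration * sample_rate))
    for i in range(out.size):
        j = i % n
        out[i] = buf[j]
        # Averaging adjacent samples acts as a low-pass filter; decay damps the string.
        buf[j] = decay * 0.5 * (buf[j] + buf[(j + 1) % n])
    return out

tone = karplus_strong(freq=330.0)              # one second of a plucked tone near E4
```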
13

Göbel, Gunther, and Ralph Sonntag. "Erfahrungen zur Nutzung von Mixed und Virtual Reality im Lehralltag an der HTW Dresden." TUDpress, 2017. https://tud.qucosa.de/id/qucosa%3A30911.

Abstract:
The use of immersive systems, that is, Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) systems, in teaching is an obvious step. One's own interactive experience of an activity is always preferable to purely receptive observation or verbal explanation. Nevertheless, today's teaching is to a very large extent passive, even in practical courses and exercises; independent execution, such as operating a plant or synthesising a chemical oneself, can often only rarely be employed for reasons of time, availability, safety concerns and cost. Until now, the use of the above-mentioned new immersive technologies has been hindered not only by the considerable effort required to create the corresponding simulations; above all, the hardware effort combined with a less than optimal degree of immersiveness left hardly any options open. Giving each student individually sufficient time in an expensive and large CAVE environment to operate technical plants virtually is unsuitable for larger numbers of students. [... from the introduction]
14

Panahi, Aliakbar. "Big Data Visualization Platform for Mixed Reality." VCU Scholars Compass, 2017. https://scholarscompass.vcu.edu/etd/5198.

Abstract:
The visualization of data helps to provide faster and deeper insight into the data. In this work, a system for visualizing and analyzing big data in an interactive mixed reality environment is proposed. Such a system can be used for representing different types of data, such as temporal, geospatial, network graph, and high-dimensional data. Implementations of this system were created for four data types (network, volumetric, high-dimensional, and spectral data) targeting different mixed reality devices such as Microsoft HoloLens, Oculus Rift, Samsung Gear VR, and Android ARCore. It was shown that such a system could store and use billions of samples and represent millions of them at once.
15

Yannier, Nesra. "Bridging Physical and Virtual Learning: A Mixed-Reality System for Early Science." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/752.

Abstract:
Tangible interfaces and mixed-reality environments have potential to bring together the advantages of physical and virtual environments to improve children's learning and enjoyment. However, there are too few controlled experiments that investigate whether interacting with physical objects in the real world accompanied by interactive feedback may actually improve student learning compared to flat-screen interaction. Furthermore, we do not have a sufficient empirical basis for understanding how a mixed-reality environment should be designed to maximize learning and enjoyment for children. I created EarthShake, a mixed-reality game bridging physical and virtual worlds via a Kinect depth-camera and a specialized computer vision algorithm to help children learn physics. I have conducted three controlled experiments with EarthShake that have identified features that are more and less important to student learning and enjoyment. The first experiment examined the effect of observing physical phenomena and collaboration (pairs versus solo), while the second experiment replicated the effect of observing physical phenomena while also testing whether adding simple physical control, such as shaking a tablet, improves learning and enjoyment. The experiments revealed that observing physical phenomena in the context of a mixed-reality game leads to significantly more learning (5 times more) and enjoyment compared to equivalent screen-only versions, while adding simple physical control or changing group size (solo or pairs) does not have significant effects. Furthermore, gesture analysis provides insight as to why experiencing physical phenomena may enhance learning. My thesis work further investigates what features of a mixed-reality system yield better learning and enjoyment, especially in the context of limited experimental results from other mixed-reality learning research. Most mixed-reality environments, including tangible interfaces (where users manipulate physical objects to create an interactive output), currently emphasize open-ended exploration and problem solving, and are claimed to be most effective when used in a discovery-learning mode with minimal guidance. I investigated, in the context of EarthShake, how critical interactive guidance and feedback (e.g. a predict/observe/explain prompting structure with interactive feedback) are to learning and enjoyment. In a third experiment, I compared the learning and enjoyment outcomes of children interacting with a version of EarthShake that supports guided-discovery, another version that supports exploration in discovery-learning mode, and a version that is a combination of both guided-discovery and exploration. The results of the experiment reveal that the Guided-discovery and Combined conditions, where children are exposed to guided-discovery activities with the predict-observe-explain cycle and interactive feedback, yield better explanation and reasoning. Thus, having guided-discovery in a mixed-reality environment helps with formulating explanation theories in children's minds. However, the results also suggest that children are able to activate explanatory theory in action better when the guided discovery activities are combined with exploratory activities in the mixed-reality system. Adding exploration to guided-discovery activities not only fosters better learning of the balance/physics principles, but also better application of those principles in a hands-on, constructive problem-solving task.
My dissertation contributes to the literature on the effects of physical observation and mixed-reality interaction on students' science learning outcomes in learning technologies. Specifically, I have shown that a mixed-reality system (i.e., combining physical and virtual environments) can lead to better learning and enjoyment outcomes than screen-only alternatives, based on different measures. My work also contributes to the literature on exploration and guided-discovery learning, by demonstrating that having guided-discovery activities in a mixed-reality setting can improve children's fundamental principle learning by helping them formulate explanations. It also shows that combining an engineering approach with scientific thinking practice (by combining exploration and guided-discovery activities) can lead to better engineering outcomes such as transfer to constructive hands-on activities in the real world. Lastly, my work aims to make a contribution from the design perspective by creating a new mixed-reality educational system that bridges physical and virtual environments to improve children's learning and enjoyment in a collaborative way, fostering productive dialogue and scientific curiosity in museum and school settings, through an iterative design methodology to ensure effective learning and enjoyment outcomes in these settings.
16

Hollands, Robin. "Modelling and visualisation of systems with mixed-mode dynamics." Thesis, University of Sheffield, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319415.

17

Le, Chenechal Morgan. "Awareness Model for Asymmetric Remote Collaboration in Mixed Reality." Thesis, Rennes, INSA, 2016. http://www.theses.fr/2016ISAR0006/document.

Abstract:
Being able to collaborate remotely with other people can provide valuable capabilities for performing tasks that require multiple users to be achieved. Moreover, Mixed Reality (MR) technologies are great tools to develop new kinds of applications with more natural interactions and perception abilities compared to classical desktop setups. In this thesis, we propose to improve remote collaboration using these MR technologies, which take advantage of our natural skills to perform tasks in 3D environments. In particular, we focus on the asymmetrical aspects involved in this kind of collaboration: roles, point of view (PoV), devices and level of virtuality of the MR application. First, we focus on awareness issues and we propose a generic model able to accurately describe a collaborative MR application taking into account potential asymmetry dimensions. In order to address all these dimensions, we split our final model into two layers that separate real and virtual spaces for each user. In this model, each user can generate different kinds of input and receive feedback with different meanings in order to maintain their own awareness of the shared Virtual Environment (VE). Then, we conduct an exploratory user study to explore the consequences of asymmetric PoVs and the role of users' representation in the level of awareness of the other collaborators. Second, we apply our findings to a remote guiding context in which a remote guide helps an operator to perform a maintenance task. For this use case, we propose that the expert use a Virtual Reality (VR) interface in order to help the operator through an Augmented Reality (AR) interface. We contribute to this field by enhancing the expert's perceptual abilities in the remote workspace as well as by providing more natural interactions to guide the operator through non-intrusive guiding cues integrated into the real world. Last, we address an even more awareness-sensitive situation in remote collaboration: virtual co-manipulation. It requires perfect synchronization between collaborators in order to achieve the task efficiently. Thus, the system needs to provide appropriate feedback to maintain a high level of awareness, especially about what others are currently doing. In particular, we propose a hybrid co-manipulation technique, inspired by our previous remote guiding use case, that mixes manipulation of a virtual object and of another user's PoV at the same time.
18

Pederson, Thomas. "From Conceptual Links to Causal Relations — Physical-Virtual Artefacts in Mixed-Reality Space." Doctoral thesis, Umeå : Univ, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-137.

19

Sra, Misha. "A framework for enhancing the sense of presence in virtual and mixed reality." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119074.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2018.
The vision of virtual reality has always been to create worlds that look, sound, act, and feel real. However, researchers and developers have largely favored visual perception over other senses. This over-valuation of the visual may be traced back to a partial interpretation of the seminal work on visual perception by psychologist JJ Gibson. Oculocentrism in design overlooks the fact that Gibson's theory of perception encompasses the entire range of perceptual processes integrated with action, including kinesthesia and affordances of the environment. Starting with Gibson's ecological approach to the reality of experience, I develop a four-dimensional framework for creating immersive experiences that blend extrinsic elements, meaning elements related to the user's real world context, and intrinsic elements, i.e., those related to the device, application and content. I present a series of novel methods and techniques, demonstrated through implemented systems to show how transferring real world affordances to virtual experiences can enhance the sense of presence, while also arguing for a shift from oculocentrism to sensorimotor processes and to the experiential modalities of touch, proprioception, and kinesthesia. My work contrasts with the currently dominant design approach premised on the notion that the richness of sensory perception can be recreated with vision alone. The hybrid systems described in this thesis present techniques for integrating space, kinesthesia, touch and other sensations, social interaction, and the user's physiology into the virtual experience.
by Misha Sra.
Ph. D.
20

Bekele, Mafkereseb Kassahun. "Collaborative and Multi-Modal Mixed Reality for Enhancing Cultural Learning in Virtual Heritage." Thesis, Curtin University, 2022. http://hdl.handle.net/20.500.11937/89298.

21

Widengren, Mattias. "A HOPE FOR STROKE REHABILITATION : EXPLORING THE REHATT MIXED REALITY APPLICATION." Thesis, Umeå universitet, Institutionen för psykologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185272.

Abstract:
Unilateral spatial neglect (USN) is a common disorder after stroke. An application developed especially for stroke rehabilitation, RehAtt Mixed Reality (MR), is intended to train cognitive and motor functions that are affected by USN by means of interactive 3D games played in mixed reality through smart glasses. The present study targets one specific cognitive function, namely spatial attention, and compares individual performance in one of the games (scenarios) to performance in a widely used test for the detection of deficits in spatial attention, the Posner test. The hypothesis is that user reaction times in the RehAtt MR scenario correlate with user reaction times in the Posner test. Another test, including a questionnaire, is also conducted to validate the usability of the RehAtt MR. The sample for the usability test and questionnaire includes a total of 74 participants (47 women, 27 men, M = 39.6 years of age), of whom 29 individuals (13 women, 16 men, M = 35 years of age) carried out the experimental part of the study. The results suggest that there is a significant correlation, r(27) = .411, p = .027, between reaction times in the Posner test and the scenario in the RehAtt MR, and that the product shows high usability. It is concluded that the results support the idea that the scenario explored in the RehAtt MR trains spatial attention, although further research is needed for full validation.
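The reported statistic, r(27) = .411, is a Pearson correlation over paired reaction times. A minimal sketch of how such a value is computed is shown below; the reaction-time arrays are invented placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical paired mean reaction times (seconds), one pair per participant;
# these values are placeholders, not the study's data.
posner_rt = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44, 0.58, 0.49, 0.53])
rehatt_rt = np.array([0.65, 0.81, 0.59, 0.92, 0.70, 0.77, 0.68, 0.85, 0.72, 0.80])

r, p = stats.pearsonr(posner_rt, rehatt_rt)
print(f"r({posner_rt.size - 2}) = {r:.3f}, p = {p:.3f}")  # df = n - 2 for Pearson's r
```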
22

Zhao, Chen. "HUMAN POINT-TO-POINT REACHING AND SWARM-TEAMING PERFORMANCE IN MIXED REALITY." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1607079526355402.

23

Reynard, Gail Teresa. "A framework for awareness driven video quality service in collaborative virtual environments." Thesis, University of Nottingham, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285636.

24

Lafargue, David. "The Influence of Mixed Reality Learning Environments in Higher Education STEM Programs| A Study of Student Perceptions of Mixed Reality Self-Efficacy, Engagement, and Motivation Using Augmented and Virtual Reality." Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10932912.

Abstract:

Mixed Reality is a quickly advancing technology that is becoming more readily available to the average consumer. The continually improving availability of Mixed Reality technology is due to advancements in software platforms and the integration of miniaturized hardware in mobile devices. Mixed Reality is becoming more available for use within higher education, but limited data is available supporting the relevance and effectiveness of this technology for helping students to learn.

The intent of this study was to explore how Mixed Reality influences learning within a Science, Technology, Engineering, and Mathematics (STEM) higher education program when learning takes place within a Mixed Reality Learning Environment (MRLE). Mixed Reality Self-efficacy, student engagement, and student motivation were used as part of the Mixed Reality Self-efficacy, Engagement, and Motivation (MRSEM) survey. The MRSEM survey captured demographic information but primarily focused on the variables of self-efficacy, engagement, and motivation of post-secondary STEM students within a MRLE.

The results from this study provided data indicating how gender influences student acceptance of Mixed Reality, as well as significant relationships between student engagement and student motivation when using Mixed Reality, along with observed mobile device usage. These findings can provide administrators with useful information needed to target specific population groups to effectively integrate this new technology. Incorporating Mixed Reality as a learning resource is an approach that, if done correctly, can reap benefits for all stakeholders involved.

The findings and observations resulted in the development of a best-practices guide and recommendations for administrators and practitioners considering Mixed Reality. The guide and recommendations are intended for stakeholders within STEM areas of concentration considering this technology as a resource to improve instructional methods by engaging, motivating, and retaining students and ultimately improving a student's Mixed Reality Self-efficacy (MRSe).

25

Adwernat, Stefan, and Matthias Neges. "Mixed Reality Assistenzsystem zur visuellen Qualitätsprüfung mit Hilfe digitaler Produktfertigungsinformationen." Thelem Universitätsverlag & Buchhandlung GmbH & Co. KG, 2019. https://tud.qucosa.de/id/qucosa%3A36940.

Abstract:
In industrial manufacturing, product properties and parameters are subject to a certain amount of variation, regardless of the manufacturing process used. Quality inspection therefore determines to what extent the specified quality requirements for the product or workpiece are met despite this manufacturing variation (Brunner et al. 2011) [...] Particularly in visual inspection by humans, however, the result depends strongly on the individual inspector. The main factors for detection performance are the inspector's experience, qualification and fatigue, environmental conditions such as lighting, dirt or acoustic interference, but also the number and weighting of the features to be assessed (Keferstein et al. 2018). As a consequence, the reliability and reproducibility of the inspection results can be negatively affected. The same applies to the complete and consistent documentation of the visual inspection [...] Against this background, a Mixed Reality-based assistance system is being developed which is intended to support the inspector in carrying out and documenting the visual inspection. The requirements of this approach are derived from a cooperation project in the automotive industry. The assistance system presented is therefore part of higher-level activities related to 3D-Master and drawing-free product documentation. [... from the introduction]
26

Nguyen, Long. "DIRECT MANIPULATION OF VIRTUAL OBJECTS." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2898.

Abstract:
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities--proprioception, haptics, and audition--and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum--Immersive Virtual Environment (IVE) and Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
27

Davis, Jr Larry Dennis. "CONFORMAL TRACKING FOR VIRTUAL ENVIRONMENTS." Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4393.

Full text
Abstract:
A virtual environment is a set of surroundings that appears to exist to a user through sensory stimuli provided by a computer. By virtual environment, we mean to include environments supporting the full range from VR to pure reality. A necessity for virtual environments is knowledge of the location of objects in the environment. This is referred to as the tracking problem, which points to the need for accurate and precise tracking in virtual environments. Marker-based tracking is a technique which employs fiduciary marks to determine the pose of a tracked object. A collection of markers arranged in a rigid configuration is called a tracking probe. The performance of marker-based tracking systems depends upon the fidelity of the pose estimates provided by tracking probes. The realization that tracking performance is linked to probe performance necessitates investigation into the design of tracking probes for proponents of marker-based tracking. The challenges involved with probe design include prediction of the accuracy and precision of a tracking probe, the creation of arbitrarily shaped tracking probes, and the assessment of the newly created probes. To address these issues, we present a pioneering framework for designing conformal tracking probes. Conformal in this work means to adapt to the shape of the tracked objects and to the environmental constraints. As part of the framework, the accuracy in position and orientation of a given probe may be predicted given the system noise. The framework is a methodology for designing tracking probes based upon performance goals and environmental constraints. After presenting the conformal tracking framework, the elements used for completing the steps of the framework are discussed. We start with the application of optimization methods for determining the probe geometry. Two overall methods for mapping markers on tracking probes are presented, the Intermediary Algorithm and the Viewpoints Algorithm. Next, we examine the method used for pose estimation and present a mathematical model of error propagation used for predicting probe performance in pose estimation. The model uses a first-order error propagation, perturbing the simulated marker locations with Gaussian noise. The marker locations with error are then traced through the pose estimation process and the effects of the noise are analyzed. Moreover, the effects of changing the probe size or the number of markers are discussed. Finally, the conformal tracking framework is validated experimentally. The assessment methods are divided into simulation and post-fabrication methods. Under simulation, we discuss testing of the performance of each probe design. Then, post-fabrication assessment is performed, including accuracy measurements in orientation and position. The framework is validated with four tracking probes. The first probe is a six-marker planar probe. The predicted accuracy of the probe was 0.06 deg and the measured accuracy was 0.083 ± 0.015 deg. The second probe was a pair of concentric, planar tracking probes mounted together. The smaller probe had a predicted accuracy of 0.206 deg and a measured accuracy of 0.282 ± 0.03 deg. The larger probe had a predicted accuracy of 0.039 deg and a measured accuracy of 0.017 ± 0.02 deg. The third tracking probe was a semi-spherical head tracking probe. The predicted accuracy in orientation and position was 0.54 ± 0.24 deg and 0.24 ± 0.1 mm, respectively. The experimental accuracy in orientation and position was 0.60 ± 0.03 deg and 0.225 ± 0.05 mm, respectively. The last probe was an integrated, head-mounted display probe, created using the conformal design process. The predicted accuracy of this probe was 0.032 ± 0.02 deg in orientation and 0.14 ± 0.08 mm in position. The measured accuracy of the probe was 0.028 ± 0.01 deg in orientation and 0.11 ± 0.01 mm in position. These results constitute an order of magnitude improvement over current marker-based tracking probes in orientation, indicating the benefits of a conformal tracking approach. Also, this result translates to a predicted positional overlay error, for a virtual object presented at 1 m, of less than 0.5 mm, which is well beyond reported overlay performance in virtual environments.
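The error-propagation idea summarized above (perturbing simulated marker locations with Gaussian noise and tracing the perturbed markers through pose estimation) can also be illustrated with a brute-force variant of the same approach. The sketch below is not the thesis's analytic first-order model; it is a minimal Monte Carlo stand-in in Python, and the six-marker geometry, noise level, and use of an SVD-based rigid fit are assumptions made purely for illustration.

```python
import numpy as np

def estimate_pose(reference, observed):
    """Rigid (rotation + translation) fit of observed markers to the
    reference probe geometry, using the SVD-based Kabsch method."""
    ref_c = reference - reference.mean(axis=0)
    obs_c = observed - observed.mean(axis=0)
    u, _, vt = np.linalg.svd(obs_c.T @ ref_c)
    d = np.sign(np.linalg.det(u @ vt))          # guard against reflections
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    trans = observed.mean(axis=0) - rot @ reference.mean(axis=0)
    return rot, trans

def orientation_error_deg(rot_true, rot_est):
    """Angle of the residual rotation between true and estimated pose."""
    residual = rot_est @ rot_true.T
    angle = np.arccos(np.clip((np.trace(residual) - 1.0) / 2.0, -1.0, 1.0))
    return np.degrees(angle)

# Hypothetical 6-marker planar probe (coordinates in mm) and noise level.
probe = np.array([[0, 0, 0], [60, 0, 0], [120, 0, 0],
                  [0, 40, 0], [60, 40, 0], [120, 40, 0]], dtype=float)
sigma_mm = 0.1          # assumed 1-sigma marker position noise
rot_true = np.eye(3)    # identity pose, for simplicity
trials = 10_000

rng = np.random.default_rng(0)
errors = []
for _ in range(trials):
    observed = probe @ rot_true.T + rng.normal(0.0, sigma_mm, probe.shape)
    rot_est, _ = estimate_pose(probe, observed)
    errors.append(orientation_error_deg(rot_true, rot_est))

print(f"mean orientation error: {np.mean(errors):.3f} deg "
      f"(+/- {np.std(errors):.3f})")
```

Enlarging the hypothetical probe span or adding markers in this simulation reduces the simulated orientation error, which mirrors the qualitative effect of probe size and marker count that the abstract discusses.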
Ph.D.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Electrical & Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
28

Krug, Dominik. "Far Above Far Beyond." Thesis, Umeå universitet, Designhögskolan vid Umeå universitet, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-139536.

Full text
Abstract:
This project aims to explore what the brand Land Rover could stand for in the future. The brand's rich history of exploring unconquered terrain has earned it admiration and desirability all around the world. Extending that reach onto new worlds is now within sight. In the 2030s the first manned missions to Mars are planned. The first arrivals will have exploration vehicles that are limited in range and capability. To really explore the planet, vehicles with greater off-road capability and range will be needed. The vehicles also need to allow the expedition crews to stay in the vehicle comfortably for longer periods and to offer extended life support on multi-week journeys. With this project I am exploring possible answers for facing the harsh conditions on Mars. Furthermore, the vehicle and its features project a vision of what a future off-road driving experience could be.
APA, Harvard, Vancouver, ISO, and other styles
29

Gådin, Valter. "Factors for Good Text Legibility : Eye-tracking in Virtual Reality." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447167.

Full text
Abstract:
Living with a hearing impairment can have a large impact on a person’s life. There already exist many different aids to help in daily life, but as technology advances new solutions can be created to further improve the quality of life for everyone. Two technologies that have advanced and become more affordable are Virtual Reality (VR) and Augmented Reality (AR). A potential aid for those with a hearing impairment could be a system where speech is converted to text and presented in AR to the user. Such a system must have easily read and legible text. In this master thesis, legibility and user perception are studied for different text presentations in VR. VR enables a more controlled environment than AR. Reading speed, subjective scoring and eye-movement data are used to analyze the presentations. Lastly, some design recommendations based on the findings are presented. The results showed that legibility was affected by many factors. Middle layers (a layer between the foreground and background) improved legibility, especially over complex backgrounds. The size of the text also affected legibility, where the larger text performed the worst. The optimal number of lines of text seems to be two. There were variations between the preferred presentations, indicating that a future system might seek to accommodate this by some level of customization.
APA, Harvard, Vancouver, ISO, and other styles
30

Davies, C. J. "Parallel reality : tandem exploration of real and virtual environments." Thesis, University of St Andrews, 2016. http://hdl.handle.net/10023/8098.

Full text
Abstract:
Alternate realities have fascinated mankind since early prehistory, and with the advent of the computer and the smartphone we have seen the rise of many different categories of alternate reality that seek to augment, diminish, mix with or ultimately replace our familiar real world in order to expand our capabilities and our understanding. This thesis presents parallel reality as a new category of alternate reality which further addresses the vacancy problem that manifests in many previous alternate reality experiences. Parallel reality describes systems comprising two environments that the user may freely switch between, one real and the other virtual, both complete unto themselves. Parallel reality is framed within the larger ecosystem of previously explored alternate realities through a thorough review of existing categorisation techniques and taxonomies, leading to the introduction of the combined Milgram/Waterworth model and an extended definition of the vacancy problem for better visualising experience in alternate reality systems. Investigation into whether an existing state-of-the-art alternate reality modality (Situated Simulations) could allow for parallel reality investigation via the Virtual Time Windows project was followed by the development of a bespoke parallel reality platform called Mirrorshades, which combined the modern virtual reality hardware of the Oculus Rift with the novel indoor positioning system of IndoorAtlas. Users were thereby granted the ability to walk through their real environment and to switch, at any point, their view to the equivalent vantage point within an immersive virtual environment. The benefits that such a system provides, by granting users the ability to mitigate the effects of the extended vacancy problem and explore parallel real and virtual environments in tandem, were experimentally shown through application to a use case within the realm of cultural heritage at a 15th-century chapel. Evaluation of these user studies led to the establishment of a number of best practice recommendations for future parallel reality endeavours.
APA, Harvard, Vancouver, ISO, and other styles
31

Hanson, Kami M. "The Utilization of Mixed-Reality Technologies to Teach Techniques for Administering Local Anesthesia." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/850.

Full text
Abstract:
The ability to perform local anesthesia on dental patients is an important clinical skill for a dental hygienist. When learning this procedure in an academic situation, students often practice on their peers to build their skills. There are multiple reasons why the peer practice is not ideal; consequently, educators have sought the means to simulate the practice of local anesthetic procedures without endangering others. Mixed-reality technologies offer a potential solution to the simulated procedure problem. The purpose of this research was to determine if students could learn the techniques for providing local anesthesia using a mixed-reality system that allows them to manipulate 3D objects in virtual space. Guiding research questions were: In what ways do using 3D objects allow for a greater understanding of anatomical, spatial, and dimensional acuity? Will students develop conceptual understandings regarding the application of anatomical and technical concepts through iteration? Will students demonstrate the proper technique and verbalize a level of confidence for administering local anesthesia after using the mixed-reality system? Design-based research methods allowed for multiple iterations of design, enactment, analysis, and redesign. The first iteration focused on building a knowledge base for designing and developing virtual reality technologies for use in dental hygiene education. The second phase of research increased in technical sophistication and involved a virtual system that allowed for student interaction and manipulation of 3D objects. The interactions supported students' learning through the association of anatomical, spatial, and dimensional acuity. Built-in learner prompts promoted the understanding and identification of anatomical landmarks for performing an injection for the lower jaw. Further, the system promoted self-controlled practice and iterative learning processes. Redesign and development in the final iteration focused on design improvements of the system that included an output metric for assessing student performance, a data glove, and a marker to assist in following student interactions. Results support that students learned "while doing" in a specific immersive environment designed for dental hygiene education and they increased their level of confidence for performing a specific procedure.
APA, Harvard, Vancouver, ISO, and other styles
32

Pouke, M. (Matti). "Augmented virtuality:transforming real human activity into virtual environments." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526208343.

Full text
Abstract:
The topic of this work is the transformation of real-world human activity into virtual environments. More specifically, the topic is the process of identifying various aspects of visible human activity with sensor networks and studying the different ways in which the identified activity can be visualized in a virtual environment. The transformation of human activities into virtual environments is a rather new research area. While there is existing research on sensing and visualizing human activity in virtual environments, that research usually focuses on a specific type of human activity, such as basic actions and locomotion. However, different types of sensors can provide very different human activity data, as well as lend themselves to very different use-cases. This work is among the first to study the transformation of human activities on a larger scale, comparing various types of transformations from multiple theoretical viewpoints. This work utilizes constructs built for use-cases that require the transformation of human activity for various purposes. Each construct is a mixed reality application that utilizes a different type of source data and visualizes human activity in a different way. The constructs are evaluated from practical as well as theoretical viewpoints. The results imply that different types of activity transformations have significantly different characteristics. The most distinct theoretical finding is that there is a relationship between the level of detail of the transformed activity, the specificity of the sensors involved, and the extent of world knowledge required to transform the activity. The results also provide novel insights into using human activity transformations for various practical purposes. Transformations are evaluated as control devices for virtual environments, as well as in the context of visualization and simulation tools in elderly home care and urban studies.
Tiivistelmä Tämän väitöskirjatyön aiheena on ihmistoiminnan muuntaminen todellisesta maailmasta virtuaalitodellisuuteen. Työssä käsitellään kuinka näkyvästä ihmistoiminnasta tunnistetaan sensoriverkkojen avulla erilaisia ominaisuuksia ja kuinka nämä ominaisuudet voidaan esittää eri tavoin virtuaaliympäristöissä. Ihmistoiminnan muuntaminen virtuaaliympäristöihin on kohtalaisen uusi tutkimusalue. Olemassa oleva tutkimus keskittyy yleensä kerrallaan vain tietyntyyppisen ihmistoiminnan, kuten perustoimintojen tai liikkumisen, tunnistamiseen ja visualisointiin. Erilaiset anturit ja muut datalähteet pystyvät kuitenkin tuottamaan hyvin erityyppistä dataa ja siten soveltuvat hyvin erilaisiin käyttötapauksiin. Tämä työ tutkii ensimmäisten joukossa ihmistoiminnan tunnistamista ja visualisointia virtuaaliympäristössä laajemmassa mittakaavassa ja useista teoreettisista näkökulmista tarkasteltuna. Työssä hyödynnetään konstrukteja jotka on kehitetty eri käyttötapauksia varten. Konstruktit ovat sekoitetun todellisuuden sovelluksia joissa hyödynnetään erityyppistä lähdedataa ja visualisoidaan ihmistoimintaa eri tavoin. Konstrukteja arvioidaan sekä niiden käytännön sovellusalueen, että erilaisten teoreettisten viitekehysten kannalta. Tulokset viittaavat siihen, että erilaisilla muunnoksilla on selkeästi erityyppiset ominaisuudet. Selkein teoreettinen löydös on, että mitä yksityiskohtaisemmasta toiminnasta on kyse, sitä vähemmän tunnistuksessa voidaan hyödyntää kontekstuaalista tietoa tai tavanomaisia datalähteitä. Tuloksissa tuodaan myös uusia näkökulmia ihmistoiminnan visualisoinnin hyödyntämisestä erilaisissa käytännön sovelluskohteissa. Sovelluskohteina toimivat ihmiskehon käyttäminen ohjauslaitteena sekä ihmistoiminnan visualisointi ja simulointi kotihoidon ja kaupunkisuunnittelun sovellusalueilla
APA, Harvard, Vancouver, ISO, and other styles
33

Knierim, Pascal [Verfasser], and Albrecht [Akademischer Betreuer] Schmidt. "Enhancing interaction in mixed reality : the impact of modalities and interaction techniques on the user experience in augmented and virtual reality / Pascal Knierim ; Betreuer: Albrecht Schmidt." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2020. http://d-nb.info/123491199X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Peillard, Etienne. "Vers une caractérisation des biais perceptifs en réalité mixte : une étude de facteurs altérant la perception des distances." Thesis, Ecole centrale de Nantes, 2020. http://www.theses.fr/2020ECDN0030.

Full text
Abstract:
Réalité Virtuelle, Réalité Augmentée, Réalité Mixte, ces mots comme les applications qui les accompagnent entrent peu à peu dans l’usage commun. Cependant, la réalité proposée par ces technologies n’est pas identique à notre réalité ordinaire. Le présent ouvrage se propose de mettre en évidence certains biais perceptifs en réalité mixte. Dans un premier temps nous étudierons un biais perceptif lié à l’observateur : l’anisotropie de la perception egocentrique des distances en réalité virtuelle. Dans un second temps nous examinerons la perception exocentrique des distances en Réalité Augmentée (RA). En effet la sous-estimation des distances egocentriques est un phénomène souvent observé et il est donc intéressant d’étudier son potentiel transfert à la perception exocentrique. Puis nous étudierons plus en avant d’autres biais potentiels en RA en s’attachant en particulier à évaluer l’impact des indices de profondeur sur la perception des distances. En particulier, nous analyserons dans ce chapitre l’effet de deux indices de profondeurs en RA: l’impact de la position et de la forme des ombres sur la perception des distances, puis l’influence de l’accommodation sur la perception des distances en utilisant une technologie d’affichage spécifique : les dispositifs de projection rétinienne. Enfin nous discuterons le potentiel impact des techniques d’interaction sur la perception des distances et proposerons un protocole permettant d’évaluer l’effet de certaines interactions sur la perception des distances en RA, afin peut-être de parvenir à rapprocher celle-ci de la perception réelle
Virtual Reality, Augmented Reality, Mixed Reality: these words, as well as their applications, are gradually coming into common usage. However, the reality proposed by these technologies is not identical to our regular reality. This work aims to highlight some perceptual biases in Mixed Reality. First we study a perceptual bias linked to the observer: the anisotropy of egocentric distance perception in virtual reality. In a second part, we study the exocentric perception of distances in Augmented Reality (AR). Indeed, the underestimation of egocentric distances is a frequently observed phenomenon, and it is therefore interesting to consider its potential transfer to exocentric perception. Then we further study other potential biases in AR by focusing in particular on evaluating the impact of depth cues on the perception of distances. In particular, we investigate the effect of two depth cues in AR: the impact of the position and shape of shadows on distance perception, and the influence of accommodation on distance perception using a specific display technology, retinal projection devices. Finally, we discuss the potential impact of interaction techniques on distance perception and propose a protocol to evaluate the effect of certain interactions on distance perception in AR, in order perhaps to bring it closer to real perception.
APA, Harvard, Vancouver, ISO, and other styles
35

Vagnoni, Alberto. "Tecnologie digitali per l'educazione e fruizione museale in ambito scientifico." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
A bibliographic survey is presented of Internet of Things, Virtual Reality, Augmented Reality and Mixed Reality technologies, together with some examples of how they can be used in an educational context within science museums and beyond. Finally, one possible development path is proposed for an Augmented Reality application with playful and educational potential.
APA, Harvard, Vancouver, ISO, and other styles
36

Dreifaldt, Ulrika, and Erik Lövquist. "The construction of a Haptic application in a Virtual Environment as a post-Stroke arm Rehabilitation exercise." Thesis, Linköping University, Department of Science and Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6155.

Full text
Abstract:

This thesis describes a six-month project based on stroke rehabilitation and involves designing with medical doctors, a physiotherapist and an occupational therapist, prototyping and evaluating with both stroke patients and other users. Our project involves the construction of a rehabilitation exercise system, based on virtual environments (VE) and haptics, designed for stroke patients. Our system uses a commercially available haptic device called the PHANTOM Omni, which has the possibility of being used as a rehabilitation tool to interact with virtual environments. The PHANTOM Omni is used in combination with our own developed software based on the platform H3D API. Our goal is to construct an application which will motivate the stroke patient to start using their arm again.

We give a review of the different aspects of stroke, rehabilitation, VE and haptics and how these have previously been combined. We describe our findings from our literature studies and from informal interviews with medical personnel. From these conclusions we attempt to take the research area further by suggesting and evaluating designs of different games/genres that can be used with the PHANTOM Omni as possible haptic exercises for post-stroke arm rehabilitation. We then present two different implementations to show how haptic games can be constructed. We mainly focus on a game we built called "The Labyrinth", developed using an iterative design process based on studies conducted during the project. The game is used to show many of the different aspects that have to be taken into account when designing haptic games for stroke patients. From a study with three stroke patients we have seen that "The Labyrinth" has the potential to be a stimulating, encouraging and fun exercise complement to traditional rehabilitation. Through the design process and the knowledge we acquired during this thesis we have created a set of general design guidelines that we believe can help in the future software development of haptic games for post-stroke arm rehabilitation.

APA, Harvard, Vancouver, ISO, and other styles
37

Tahar, Aissa Safia. "Improving well-being with virtual reality for frail elderly people : a mixed method approach letting them into the three-dimensional world." Thesis, KTH, Medicinteknik och hälsosystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235419.

Full text
Abstract:
Introduction: The Swedish population is ageing, resulting in an increase in the number of elderly people and in the socioeconomic demands that society must meet to support them. In Sweden, frail elderly people with, for example, mobility and cognitive problems have the opportunity to attend a day care center where they can join activities and socialize. Purpose: The purpose of this thesis was to investigate to what extent virtual reality technology could contribute to improved well-being for frail elderly at day care centers. Method: This study was conducted via a mixed method consisting of a survey and a semi-structured interview. 19 participants (15 male and 4 female) from three day care centers in Södertälje participated in this study. Results: By allowing frail elderly at day care centers to experience virtual reality, quantitative and qualitative data were collected, both indicating that the experience of using virtual reality was positive and comfortable. Seven themes were identified through a thematic analysis, capturing what was repeatedly mentioned by the participants. The themes were: (1) immersion & interaction, (2) usage, (3) nature movies, (4) visiting places, (5) talking about things that are dear to them, (6) being limited and (7) thinking that VR could affect well-being. Conclusion: This study showed that the subjective well-being of frail elderly was arguably partially improved with virtual reality. The participants were overall positive and enjoyed the experience, with a sense of immersion and of memories being awakened.
Introduktion: Den svenska populationen åldras - vilket resulterar i en ökning av äldre personer och högre socioekonomiska krav som samhället måste stödja dem med. I Sverige har äldre personer med till exempel rörlighet och kognitiva problem möjlighet att delta i dagverksamheter där de kan delta i aktiviteter och umgås. Syfte: Syftet med detta examensarbete var att undersöka i vilken utsträckning virtuell verklighet skulle kunna bidra till förbättrad välmående hos äldre på dagverksamheter. Metod: Detta projekt var genomfört med hjälp av en mixed method så som en enkät och en semi- strukturerad intervju. 19 äldre person (15 män och 4 kvinnor) från tre dagverksamheter i Södertälje deltog i denna studie. Resultat: Genom att låta äldre personer vid dagverksamheterna uppleva virtuell verklighet, så samlades det in kvantitativ och kvalitativ data. Där båda indikerar att det, till exempel, var en positiv och bekväm erfarenhet. 7 teman identifierades genom en tematisk analys som illustrerade vad som ofta upprepades av deltagarna. Teman var: (1) immission & interaktion, (2) användning (3) naturfilmer, (4) besöka platser, (5) pratar om saker som ligger dem kärt om hjärtat, (6) att vara begränsad och (7) tror att VR kan påverka välmående. Slutsats: Denna studie visade att det subjektiva välmåendet hos äldre delvist var förbättrad med virtuell verklighet. Deltagarna var generellt positiva, njöt av upplevelsen av immersion samt minnen som väcktes.
APA, Harvard, Vancouver, ISO, and other styles
38

de, Cabo Portugal Sebastian. "Non Visuals : Material exploration of non-visual interaction design." Thesis, Umeå universitet, Designhögskolan vid Umeå universitet, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-182466.

Full text
Abstract:
Design is all about visuals, or that is what I have found out during this thesis: from the process materials to the outcome, our main entry point to any problem is how we will solve it visually so it is understandable for the general user. This aspect is problematic in itself due to the fact that we, as humans, understand the world and the things around us using all our senses continuously, even though we can forget this, as visuals are so overpowering. There is a huge opportunity area in exploring our other senses and bringing them back to technology, and this can be seen in past works like Tangible Interactions [1] or Natural User Interfaces [2]. But in this moment in time, where all these new technologies like VR/AR and IoT are about to enter our lives and change them forever, this topic is more important than ever. We have already seen what happens when we turn humans into mere machines with some fingers as interactive inputs, and barely any senses to process all the information given to us. Now that these technologies are still young and malleable, we can direct the future to where we want it instead of being guided by the technology itself. To do this we need to reimagine the design process, not reinvent the wheel, but add experts whom we currently leave behind and who, I argue, are key to unlocking these technologies: experts not only on the technological side of things but on the human side too, like physiotherapists and dancers. We should also add people whom we never think about when we think of VR, like visually impaired users, which could make these technologies inclusive from early on, instead of as an afterthought like we usually do. Not only people: we also need to add new materials to understand how we use our senses and explore ways to understand and explore them differently, like bodystorming and improv theatre, because when things aren’t visual, how do you sketch them? A sketch turns into a video about movement. The end result provides a wide breadth of examples of the types of innovations that can come out of using these new design materials and of opening new frontiers: from a VR game with no visuals whatsoever, to an AR location-based story game, to a home-sized multimodal operating system containing several different apps controlled through physical movement. The examples open up the space instead of closing into a single solution. This is just the tip of the iceberg, with the hope that others will be inspired by it and continue this journey that has just started, to guide the future into one that is more technological and at the same time more human than ever before. What we know is that VR does not equate to Visual Reality.
APA, Harvard, Vancouver, ISO, and other styles
39

Martins, Ricardo F. "A wearable head-mounted projection display." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4524.

Full text
Abstract:
Conventional head-mounted projection displays (HMPDs) consist of a pair of miniature projection lenses, beamsplitters, and miniature displays mounted on the helmet, as well as a retro-reflective screen placed strategically in the environment. We have extended the HMPD technology by integrating the screen into a fully mobile embodiment. Some initial efforts at demonstrating this technology have been captured, followed by an investigation of the diffraction effects versus image degradation caused by integrating the retro-reflective screen within the HMPD. The key contribution of this research is the conception and development of a mobile HMPD (M-HMPD). We have included an extensive analysis of the macro- and microscopic properties of the retro-reflective screen. Furthermore, the overall performance of the optics is assessed both in object space, for the optical designer, and in visual space, for the prospective users of this technology. This research effort is also focused on conceiving an M-HMPD aimed at dual indoor/outdoor applications. The M-HMPD shares known advantages such as ultra-lightweight optics (8 g per eye), imperceptible distortion (≤ 2.5%), and a lightweight headset (≤ 2.5 lbs) compared with eyepiece-type head-mounted displays (HMDs) of equal eye relief and field of view. In addition, the M-HMPD presents an advantage over the preexisting HMPD in that it does not require a retro-reflective screen placed strategically in the environment. This newly developed M-HMPD has the ability to project clear images at three different locations within near- or far-field observation depths without loss of image quality. This particular M-HMPD embodiment was targeted at mixed reality, augmented reality, and wearable display applications.
ID: 029050744; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2010; Includes bibliographical references (p. 113-121).
Ph.D.
Doctorate
Department of Modeling and Simulation
Sciences
APA, Harvard, Vancouver, ISO, and other styles
40

Smith, Michael Sterling. "Strategies for the Creation of Spatial Audio in Electroacoustic Music." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1404593/.

Full text
Abstract:
This paper discusses technical and conceptual approaches to incorporate 3D spatial movement in electroacoustic music. The Ambisonic spatial audio format attempts to recreate a full sound field (with height information) and is currently a popular choice for 3D spatialization. While tools for Ambisonics are typically designed for the 2D computer screen and keyboard/mouse, virtual reality offers new opportunities to work with spatial audio in a 3D computer generated environment. An overview of my custom virtual reality software, VRSoMa, demonstrates new possibilities for the design of 3D audio. Created in the Unity video game engine for use with the HTC Vive virtual reality system, VRSoMa utilizes the Google Resonance SDK for spatialization. The software gives users the ability to control the spatial movement of sound objects by manual positioning, a waypoint system, animation triggering, or through gravity simulations. Performances can be rendered into an Ambisonic file for use in digital audio workstations. My work Discords (2018) for 3D audio facilitates discussion of the conceptual and technical aspects of spatial audio for use in musical composition. This includes consideration of human spatial hearing, technical tools, spatial allusion/illusion, and blending virtual/real spaces. The concept of spatial gestures has been used to categorize the various uses of spatial motion within a musical composition.
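As context for the Ambisonic rendering step mentioned above, the following is a minimal first-order (B-format) encoder. It is an illustrative sketch in Python rather than part of VRSoMa or the Google Resonance SDK, and the FuMa-style channel weighting, test signal, and trajectory are assumptions chosen only for the example.

```python
import numpy as np

def encode_foa(mono, azimuth_rad, elevation_rad):
    """Encode a mono signal into first-order B-format (FuMa W, X, Y, Z).

    azimuth_rad / elevation_rad may be scalars or per-sample arrays,
    which allows the source to move over time."""
    az = np.broadcast_to(azimuth_rad, mono.shape)
    el = np.broadcast_to(elevation_rad, mono.shape)
    w = mono * (1.0 / np.sqrt(2.0))       # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)    # front-back
    y = mono * np.sin(az) * np.cos(el)    # left-right
    z = mono * np.sin(el)                 # up-down
    return np.stack([w, x, y, z], axis=-1)

# Example: a 2-second, 440 Hz tone that circles the listener once.
sr = 48_000
t = np.arange(2 * sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
azimuth = 2 * np.pi * t / t[-1]           # one full revolution
bformat = encode_foa(tone, azimuth, 0.0)  # elevation fixed at ear level
print(bformat.shape)                      # (96000, 4) -> W, X, Y, Z channels
```

A moving source is encoded simply by supplying per-sample azimuth and elevation arrays, which is the kind of trajectory data a waypoint system or gravity simulation could produce before the result is written out as a multichannel Ambisonic file.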
APA, Harvard, Vancouver, ISO, and other styles
41

Redfearn, Brady Edwin. "Rapid Design and Prototyping Methods for Mobile Head-Worn Mixed Reality (MR) Interface and Interaction Systems." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82056.

Full text
Abstract:
As Mixed Reality (MR) technologies become more prevalent, it is important for researchers to design and prototype the kinds of user interface and user interactions that are most effective for end-user consumers. Creating these standards now will aid in technology development and adoption in MR overall. In the current climate of this domain, however, the interface elements and user interaction styles are unique to each hardware and software vendor and are generally proprietary in nature. This results in confusion for consumers. To explore the MR interface and interaction space, this research employed a series of standard user-centered design (UCD) methods to rapidly prototype 3D head-worn display (HWD) systems in the first responder domain. These methods were performed across a series of 13 experiments, resulting in an in-depth analysis of the most effective methods experienced herein and providing suggested paths forward for future researchers in 3D MR HWD systems. Lessons learned from each individual method and across all of the experiments are shared. Several characteristics are defined and described as they relate to each experiment, including interface, interaction, and cost.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
42

Nothnagel, Terese. "Impact of using mixed reality visualization to augment the exploration and analysis of water contamination events in a simulated car engine." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-220345.

Full text
Abstract:
The aim of this thesis is to understand the impact of virtual reality (VR) on users' performance and experience in practical industrial applications. In particular, I collaborated with Volvo Cars to support their current engineering design practices. VR has the potential to augment and thereby improve a person's observational analysis. Current VR publications focus on VR applications and their potential benefits and improvements, mainly in science, industry, medicine, and education. Yet no research has been done on the impact of augmenting the exploration and analysis of water contamination events in a simulated car engine. In this thesis, this issue has been explored by performing a task-based controlled user study with two conditions: the Workstation and the Wall. The Workstation is the current system, in which contamination events are examined in a video with keyboard and mouse on a normal desktop screen. The Wall is the newly developed system, in which contamination events are examined through interaction with an Xbox controller and Kinect head-tracking motion control on a 4K screen (400 x 240 cm projection surface) with stereoscopy using passive 3D glasses. This thesis is a pilot to discover meaningful differences and trade-offs for future studies. The results of the study show significant differences between the two conditions. Where the Workstation is faster but less precise, the Wall is more precise and gives more options for analysis. Nevertheless, the current version of the Wall has some weaknesses, in particular lag and discrete steps in the rotation following the head-tracking, which increased the time needed to perform the task. Overall, the high level of interaction using the Wall, the details from the 4K screen, and the stereoscopic rendering were seen as helpful in solving the task. However, the participants were not unanimous on which features were most important or whether every feature was essential. Therefore, research beyond the scope of this thesis needs to be done to examine how these individual factors affect task performance.
Syftet med detta examensarbete är att studera vilken effekt virtual reality (VR) har på användares prestation och upplevelse när de använder industriella applikationer avsedda för att utföra uppgiften i den verkliga världen. Jag samarbetade med Volvo Cars i att stödja deras nuvarande tekniska designmetod. VR har möjligheten att förstärka och därigenom förbättra analysen av en persons observationer. Nuvarande forskning rörande VR fokuserar på potentiella fördelar och möjliga förbättringar i VR applikationer, riktad mot vetenskap, industri, medicin, utbildning m.m. Emellertid har inte någon forskning gjorts över effekterna av att förstärka sättet att utforska och analysera vattenföroreningshändelser i en simulerad bilmotor. I detta examensarbete har denna fråga studerats genom en uppgiftsbaserad användarstudie med två olika metoder, nämligen Workstation och Wall. Workstation är det system som idag används för att leta efter vattenföroreningshändelser, vilket sker genom en video med tangentbord och mus på en vanlig datorskärm. Wall är ett mer utvecklat system där förekomsten av vattenföroreningshändelser kontrolleras med hjälp av en Xbox-spelkontroll och Kinect "head-tracking" rörelsekontroller på en 4K skärm (400 x 240 cm projektionsyta) med stereoskopi med 3D-glasögon. Examensarbetet är en pilotstudie för att upptäcka skillnader mellan dessa metoder i syfte att hitta infallsvinklar inför framtida studier. Resultatet av studien visar en signifikant skillnad mellan de två metoderna. Workstation är snabbare men mindre exakt medan Wall är mer noggrann och ger fler analysmöjligheter. Den nuvarande versionen av Wall har dock problem med bl.a. fördröjningar som gör att tidsåtgången för att utföra en specifik uppgift förlängs. Sammanfattningsvis ansågs interaktionen i Wall, detaljerna från 4K skärmen och 3D som hjälpsamma för att lösa uppgiften, men deltagarna var inte eniga vilken funktion som var viktigast eller huruvida varje funktion var nödvändig. Ytterligare forskning behöver därför utföras för att utröna vilka enskilda faktorer som har betydelse för deltagarnas prestation.
APA, Harvard, Vancouver, ISO, and other styles
43

Paakkola, Dennis, and Robin Rännar. "Ökad användarberedskap för digitala miljösimuleringar : Kravställning,utveckling och utvärdering av digital prototyp för användarintroduktion." Thesis, Mittuniversitetet, Institutionen för data- och systemvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-38021.

Full text
Abstract:
Digital environmental simulations can be performed with different techniques, the most common being virtual reality, augmented reality and mixed reality. Digital environmental simulations have proven to be effective for practicing surgery, industrial activities and military exercises. Previous studies have shown that technology habits are a factor that affects whether digital environmental simulations can be used effectively. Thus, the purpose of the study was to investigate how users can be introduced to digital environmental simulations. To achieve this purpose, the following question needed to be answered: How can a digital prototype be designed to introduce users to digital environmental simulations based on user needs? The study was based on design science as a research strategy, which meant that the study was carried out in three phases: development of requirements, development of a digital prototype, and evaluation of the prototype. The requirements were developed through qualitative data collection in the form of semi-structured interviews. The interview questions were developed using a theoretical framework on digital competence. The interviews resulted in a requirement specification containing 15 user stories that were prioritized. Based on the requirement specification, a digital prototype was developed in the Unity development environment. The evaluation of the digital prototype was carried out in two stages, the first internal and the second external. The external evaluation was conducted with respondents who carried out a use test of the digital prototype, which resulted in proposals for further development. It also resulted in users having increased knowledge of, and an increased ability to see opportunities with, digital environmental simulations. The conclusion is that users can be introduced to digital environmental simulations through a digital prototype designed based on user needs.
Digitala miljösimuleringar kan utföras med olika tekniker och de vanligaste teknikerna är virtual reality, augmented reality och mixed reality. Digitala miljösimuleringar har visat sig vara effektiva för att öva på kirurgi, industrimoment samt för militärövningar. Tidigare studier har visat att teknikvana är en faktor som påverkar om digitala miljösimuleringar kan användas effektivt. Således var syftet med studien att undersöka hur användare kan introduceras till digitala miljösimuleringar. För att uppnå syftet behövdes följande frågeställning besvaras: Hur kan en digital prototyp utformas för att introducera användare till digitala miljösimuleringar baserat på användares behov? Studien har utgått från design science som forskningsstrategi, vilket medförde att studien har utförts i tre faser: framtagning av krav, utveckling och utvärdering av digital prototyp. Framtagning av krav skedde genom en kvalitativ datainsamling i form av semistrukturerade intervjuer. Intervjufrågorna togs fram med hjälp av ett teoretiskt ramverk om digital kompetens. Intervjuerna resulterade i en kravspecifikation innehållande 15 användarberättelser som prioriterades. Utifrån kravspecifikationen utvecklades en digital prototyp i utvecklingsmiljön Unity. Utvärderingen av den digitala prototypen genomfördes i två steg, där det första var att utvärdera internt och det andra steget var att utvärdera externt. Den externa utvärderingen genomfördes med respondenter som utförde ett användningstest av den digitala prototypen som resulterade i förslag till vidareutveckling. Men det resulterade även i att användare fick ökad kunskap och förmåga att se möjligheter med digitala miljösimuleringar. Slutsatsen är att användare kan introduceras till digitala miljösimuleringar genom en digital prototyp som utformats baserat på användares behov.
APA, Harvard, Vancouver, ISO, and other styles
44

Lennerton, Mark J. "Exploring a chromakeyed augmented virtual environment for viability as an embedded training system for military helicopters." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FLennerton.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 2004.
Thesis advisor(s): Rudolph Darken, Joseph A. Sullivan. Includes bibliographical references (p. 103-104). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
45

Kramer, Alice. "DIMENSIONS OF IDENTITY." Master's thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2896.

Full text
Abstract:
Imagination and fantasy environments created by writers and artists have always drawn people into their worlds. Advances in technology have blurred the lines between reality and imagination. My interest has always been to question the validity of these worlds and their cultures and to transcend the evolving virtual dimension by fusing it with what we perceive to be reality.
M.F.A.
Department of Art
Arts and Humanities
Studio Art and the Computer MFA
APA, Harvard, Vancouver, ISO, and other styles
46

Hansen, Simon. "TEXTILE - Augmenting Text in Virtual Space." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23172.

Full text
Abstract:
Three-dimensional literature is a virtually non-existent or in any case very rare and emergent digital art form, defined by the author as a unit of text which is not confined to the two-dimensional layout of print literature, but instead mediated across all three axes of a virtual space. In collaboration with two artists, the author explores through a bodystorming workshop how writers and readers could create and experience three-dimensional literature in mixed reality, using mobile devices equipped with motion sensors, which enable users to perform embodied interactions as an integral part of the literary experience. For documenting the workshop, the author used body-mounted action cameras in order to record the point of view of the participants. This choice turned out to generate promising knowledge on using point-of-view footage as an integral part of the methodological approach. The author has found that by engaging creatively with such footage, the designer gains a profound understanding and vivid memory of complex design activities. As the outcome of the various design activities, the author developed a concept for an app called TEXTILE. It enables users to build three-dimensional texts by positioning words in a virtual bubble of space around the user and to share them, either on an online platform or at site-specific places. A key finding of this thesis is that the creation of three-dimensional literature on a platform such as TEXTILE is not just an act of writing – it is an act of sculpture and an act of social performance.
APA, Harvard, Vancouver, ISO, and other styles
47

Roo, Joan sol. "one reality : augmenting the human experience through the combination of physical and digital worlds." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0874/document.

Full text
Abstract:
Alors que le numérique a longtemps été réservé à des usages experts, il fait aujourd’hui partie intégrante de notre quotidien, au point, qu’il devient difficile de considérer le monde physique dans lequel nous vivons indépendamment du monde numérique. Pourtant, malgré cette évolution, notre manière d’interagir avec le monde numérique a très peu évolué, et reste toujours principalement basé sur l’utilisation d’écrans, de claviers et de souris. Dans les nouveaux usages rendus possible par le numérique, ces interfaces peuvent se montrer inadaptées, et continuent à préserver la séparation entre le monde physique et le monde numérique. Au cours de cette thèse, nous nous sommes concentrés à rendre cette frontière entre mondes physique et numérique plus subtil au point de la faire disparaître. Cela est rendu possible en étendant la portée des outils numériques dans le monde physique, puis en concevant des artefacts hybrides (des objets aux propriétés physique et numérique), et enfin en permettant les transitions dans une réalité mixte (physique-numérique), laissant le choix du niveau d’immersion à l’utilisateur en fonction de ses besoins. L’objectif final de ce travail est d’augmenter l’expérience de la réalité. Cela comprend non seulement le support de l’interaction avec le monde extérieur, mais aussi avec notre monde intérieur. Cette thèse fournit aux lecteurs les informations contextuelles et les connaissances techniques requises pour pouvoir comprendre et concevoir des systèmes de réalité mixte. A partir de ces fondements, nos contributions, ayant pour but de fusionner le monde physique et le monde virtuel, sont présentées. Nous espérons que ce document inspirera et facilitera des travaux futurs ayant pour vision d’unifier le physique et le virtuel
In recent history, computational devices evolved from simple calculators to the now pervasive artefacts with which we share most aspects of our lives, and it is hard to imagine otherwise. Yet this change in the role of computers was not accompanied by an equivalent redefinition of the interaction paradigm: we still mostly depend on screens, keyboards and mice. Even though these legacy interfaces have proven efficient for traditional tasks, we agree with those who argue that they are not necessarily fitting for their new roles. Even more so, traditional interfaces preserve the separation between digital and physical realms, now counterparts of our reality. During this PhD, we focused on the dissolution of the separation between physical and digital: first by extending the reach of digital tools into the physical environment, followed by the creation of hybrid artefacts (physical-digital emulsions), and finally by supporting the transition between different mixed realities, increasing immersion only when needed. The final objective of this work is to augment the experience of reality. This comprises support not only for interaction with the external world, but also with the internal one. This thesis provides the reader with contextual information along with the technical knowledge required to understand and build mixed reality systems. Once the theoretical and practical knowledge is provided, our contributions towards the overarching goal of merging physical and digital realms are presented. We hope this document will inspire and help others to work towards a world where the physical and digital, and humans and their environment, are not opposites but counterparts of a unified reality.
APA, Harvard, Vancouver, ISO, and other styles
48

Halinár, Matej. "Architektura virtuálna." Master's thesis, Vysoké učení technické v Brně. Fakulta architektury, 2017. http://www.nusl.cz/ntk/nusl-316302.

Full text
Abstract:
Architecture Jail Escape It is a specific device for futuroptimist people based on the philosophy of posthumanism and transhumanism, a version of their own faith in endless life on the net. It is a belief in the possibility of technological transformation of humanity that will allow us to overcome our physical and biological limits. Clause 2.0 is architecture for pioneers - the protagonist of this transformation - enabling the longest and most complete stay in virtual reality. This avant-garde is anxious 2.0. Escapist personalities of digital age soldiers are looking for a haven and their own version of the world in the cyberspace. They create a vision of paradise and colonize (cyber) space without the political consequences of the finiteness of the physical world and the exhaustion of natural resources. They live on the frontier of the being, and they want to unburden themselves and merge with the world they understand more. They fight with their own brain and body that cannot break away from the world. The endlessness of the virtual space has the limits of body and senses. Long-term stay in a cyberspace is a loss of sense of time and space. This monastic life in clause 2.0 is able to keep them in shape, by observing the ritual, the physical performance of walking that they must undergo so that they can exist every day in their version of the digital monastery. These versions are infinite, and they can be ritually traced among them. Clause geometry isolates them from one another. The clause is a monastic concept that allows the people to live hermetically, as well as the physical world. The gateway to the virtual space is a "zero architecture" - a room, a cell, a cube on a 4x4 meter plan, rid of any visual architectural site. It provides only a flat floor as the reflection point for an endless virtual world and four walls and a ceiling with a corresponding thickness for a sufficient separation from the outside world. The world of infinite freedom opens behind this "zero architecture". It seems that not through "architectural innovation and political subversion" a modern architect's dream of architecture will be realized as machines for the liberation of man but through the abandonment of physical architecture as such. The prospect of "zero architecture" opens up a space where the new architecture will no longer be "luxuries and good homes, not the architecture of separation and imprisonment, but it will ultimately be the architecture of freedom.
APA, Harvard, Vancouver, ISO, and other styles
49

Nardicchia, Giulia. "Studio e sviluppo prototipale di un'applicazione di realtà mista in ambito healthcare con dispositivo hololens 2." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25623/.

Full text
Abstract:
This thesis aims at the study and prototype development of a mixed reality application in the healthcare domain. First, the theoretical foundations for understanding the meaning of Extended Reality are given, presenting the technologies that compose it, namely Virtual Reality, Augmented Reality and Mixed Reality. The different types of devices capable of enabling mixed reality are then described; in particular, HoloLens 2 and MagicLeapOne are presented. An important part of the thesis is the discussion of the different aspects to be taken into account when designing MR applications. Some examples of medical applications already present in the literature, whose primary purpose is diagnosis and support for healthcare, are then shown. The general purpose of the application is described, along with the main objectives of the project and the concepts of DICOM and 3D models and why they are important. An overview is given of the technologies used in the project for developing mixed reality applications: Unity, MRTK and OpenXR. The discussion then goes into more detail on the design and implementation of the prototype. The body of the project includes a practical example showing all the steps: a demonstration of what happens when the application starts, how one can interact with the menu, and how one can interact with the hologram representing the DICOM file and the 3D model. Finally, the problems encountered during development are highlighted, and ideas for improvements and new features for future work are presented.
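Since the abstract leans on the notions of DICOM data and the 3D model derived from it, a small illustration may help: the sketch below reads a DICOM series into a voxel volume that could later be meshed for display in mixed reality. It is not part of the thesis's Unity/MRTK prototype; the folder name, the use of the pydicom and NumPy libraries, and the assumption of a CT series carrying position and rescale tags are all illustrative.

```python
import pathlib
import numpy as np
import pydicom  # third-party library: pip install pydicom

def load_dicom_volume(folder):
    """Read all DICOM slices in a folder and stack them into a 3D array,
    sorted by their position along the scan axis."""
    slices = [pydicom.dcmread(p) for p in pathlib.Path(folder).glob("*.dcm")]
    # Assumes the slices carry ImagePositionPatient; index 2 is the z position.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Rescale raw values to Hounsfield-like units when the tags are present.
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    return volume * slope + intercept

# Hypothetical usage: the resulting (slices, rows, cols) array could then be
# thresholded and meshed (e.g. with marching cubes) to obtain a 3D model
# suitable for rendering as a hologram.
volume = load_dicom_volume("ct_series/")
print(volume.shape, volume.dtype)
```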
APA, Harvard, Vancouver, ISO, and other styles
50

Vu, Dieu An. "The AR in Architecture." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22742.

Full text
Abstract:
En vanlig metod inom arkitektonisk visualisering idag är produktion av stillbilder som skapas med 3D-modelleringsprogram. Med sådan avancerad teknik blir det enkelt och effektivt att styra och manipulera vad som visas på stillbilderna, vilket ökar säljbarheten av arkitektoniska projekt. Men vad händer om vi tar det ett steg längre, med hjälp av Alternative Reality-teknik? AR eller Augmented Reality kan vara en annan användbar visualiseringsmetod, men vilka konsekvenser kommer det med, speciellt för de icke-professionella användarna? Om vi inte tänker på vilka konsekvenser det kan ha, på samma sätt som med stillbilder, blir det bara ett annat verktyg för att öka säljbarheten för arkitektoniska projekt. Denna studie kommer därför att försöka svara på frågan ”Hur implementerar vi AR inom arkitektonisk visualisering på ett sätt som är gynnsamt för de icke-professionella användarna?”De centrala begreppen som bör tas till hänsyn när man talar om arkitektonisk visualisering är autonomi, tid, medborgarnas tidiga medverkan, ocularcentrism och konceptet av verklighet. Eftersom arkitekturen måste bero på vardagens sammanhang, bör visualiseringen inte stänga av världen för att skapa ett fint ideal som bara fungerar som falsk annonsering. Att stänga ut medborgarnas röster leder också till att skapa en metaforisk mur mellan människorna inom fältet och människorna utanför, vilket leder till förlust av utbyte av insikter och perspektiv. En av rösterna som talar starkt mot den autonoma synen är Jeremy Till, hans ord från boken Architecture depends kommer därför att spela en central roll i det teoretiska perspektivet av denna studie.För att svara på frågorna i studien kommer observationslinsen vändas till både den professionella sidan och den icke-professionella sidan angående ämnet Alternative Reality inom arkitektur. Detta görs via metoden cyber-etnografi, där Internet kommer att vara det öppna fältet att observera. Potentialerna för AR som uttrycks av de professionella kommer att användas för att jämföras med de icke-professionellas perspektiv och oron. Resultaten av observationerna kommer att användas till ett förslag av en AR-applikation, vilket är denna studies bidrag till diskussionen av vilka sätt AR kan genomföras för de icke-professionella användarnas skull.
A common method within architectural visualization today is the production of still images made with 3D-modeling software. With such advanced technology, it is made easy and efficient to control and manipulate what is shown on those still images, increasing the salability of architectural projects. But what if we take it a step further, using alternative reality technologies? AR, or Augmented Reality, can be another useful visualization method, but what implications does it come with, especially for the non-professional users? If we do not consider the impacts it might have, similarly to still images, it will just turn into another tool to increase the salability of architectural projects. This study will therefore seek to answer the question of “How do we implement AR within architectural visualization in a way that is beneficial for the non-professional users?” The central concepts to consider when talking about architectural visualization are autonomy, time, early involvement of citizens, ocularcentrism and the concept of reality. As architecture has to depend on the contexts of our daily lives, the visualization should not shut out the world to create a pretty ideal that only serves as false advertisement. Shutting out the voices of the citizens also serves to create a metaphorical wall between the people within the field and the people outside of it, causing a loss of exchange of insights and perspectives. One of the voices that speak strongly against the autonomous view is Jeremy Till; his words from the book Architecture Depends will therefore play a central role in the theoretical perspective of this study. To answer the questions of this study, the observation lens will be turned to both the professional side and the non-professional side regarding the subject of alternative reality usage within architecture. This is done via the method of cyber-ethnography, in which the Internet will be the open field to observe. The potentials of AR expressed by the professionals will be compared with the perspectives and worries of the non-professionals. The results of the observations will be of use towards a proposal for an AR application, which is this study’s contribution to the discussion of the ways in which AR can be implemented for the sake of the non-professional users.
APA, Harvard, Vancouver, ISO, and other styles
