Academic literature on the topic 'VIRTUAL REALITY 3D GRAPHIC INTERFACES'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'VIRTUAL REALITY 3D GRAPHIC INTERFACES.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "VIRTUAL REALITY 3D GRAPHIC INTERFACES"

1

Kim, Hadong, and Malrey Lee. "An Open Module Development Environment (OMDE) for Interactive Virtual Reality Systems." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 06 (September 2010): 947–60. http://dx.doi.org/10.1142/s0218001410008251.

Full text
Abstract:
Graphic designers and developers would like to use virtual reality (VR) systems with a friendly Graphical User Interface (GUI) and a development environment that provides efficient creation, modification and deletion functions. Although current VR graphical design systems incorporate the most up-to-date features, graphic designers are not able to specify the interface features that they desire or the features that are most suitable for specific design and development tasks. This paper proposes an Open Module Development Environment (OMDE) for VR systems that can provide interactive functions reflecting graphic designers' requirements. OMDE allows graphic designers to specify their desired interface features and functions, and the system is configured by utilizing plug-in modules. Hence a dynamically created development environment is provided that is tailored to the graphic designer's requirements and facilitates graphical composition and editing. The functions of the graphical interface modules and the OMDE system specifications are identified. The system implementation environment and the structure of the 3D VR software are described, and the implementation is evaluated for performance as an improved 3D graphic design tool.
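To make the plug-in idea concrete, here is a minimal C++ sketch of how a VR design tool could assemble its development environment dynamically from designer-selected modules. The paper does not publish OMDE's API, so every name below (IGraphicModule, ModuleRegistry) is hypothetical.

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical plug-in contract: each interface module a designer selects
// (creation, modification, deletion, ...) implements this interface.
class IGraphicModule {
public:
    virtual ~IGraphicModule() = default;
    virtual std::string name() const = 0;
    virtual void install() = 0;   // add the module's widgets/tools to the GUI
    virtual void uninstall() = 0; // remove them when deselected
};

// Hypothetical registry: the development environment is assembled at
// runtime from whichever modules the designer enables or disables.
class ModuleRegistry {
    std::map<std::string, std::unique_ptr<IGraphicModule>> modules_;
public:
    void add(std::unique_ptr<IGraphicModule> m) {
        m->install();
        modules_[m->name()] = std::move(m);
    }
    void remove(const std::string& name) {
        auto it = modules_.find(name);
        if (it != modules_.end()) {
            it->second->uninstall();
            modules_.erase(it);
        }
    }
};
```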
APA, Harvard, Vancouver, ISO, and other styles
2

Yilmaz, Bulent, and Muge Goken. "Virtual reality (VR) technologies in education of industrial design." New Trends and Issues Proceedings on Humanities and Social Sciences 2, no. 1 (February 19, 2016): 498–503. http://dx.doi.org/10.18844/prosoc.v2i1.336.

Full text
Abstract:
Design is an art and art is a design. Today, all industrial products are the result of a design process. Industrial design is a multi-disciplinary field of study whose goal is to create and produce new objects; it focuses on designing products by using knowledge from applied science as well as applied arts and various engineering disciplines. Academic programs related to industrial design focus on achieving the proper balance between practicality and aesthetic pleasure. Courses may include graphic and industrial design basics, manufacturing, modelling and visualization, and environmental and human interaction in design. Computer-aided design software is strongly emphasized. Students constantly observe, model and test their creations. They investigate the optimal ways to design virtually any type of product, including computer interfaces, appliances, furniture, transportation and recreational items. The development of new interactive technologies has inevitably affected the education of design and art in recent years. VR is an interdisciplinary emerging high technology. VR interfaces, interaction techniques, and devices have been improved greatly in order to provide more natural and obvious modes of interaction and motivational elements, and it is an integrated technology combining 3D graphics, human-computer interaction, sensors, simulation, display, artificial intelligence and network parallel processing. This study presents notable VR systems that have been developed for education and the methods of design, such as modelling and visualization. Keywords: industrial design, interactive technologies, modelling and visualization, environmental and human interaction, virtual reality
APA, Harvard, Vancouver, ISO, and other styles
3

Mironenko, Maxim, Viktor Chertopolokhov, and Margarita Belousova. "Virtual Reality Technologies and Universal 3D Reconstruction Interface Development." Историческая информатика, no. 4 (April 2020): 192–205. http://dx.doi.org/10.7256/2585-7797.2020.4.34671.

Full text
Abstract:
The article summarizes the results of a two-year study of issues related to the use of virtual reality and augmented reality technologies to virtually reconstruct Moscow's Bely Gorod in the 16th-18th centuries. The authors describe the mathematical methods, software and hardware which grant access to the reconstruction of historical urban landscapes. An important feature of the reconstruction is the source verification module which was used to construct three-dimensional models of the landscape, buildings and the general scenery. The article names the basic principles on which the verification module and its interface are based and considers some optimization problems solved when constructing the interface. The project uses a hybrid motion tracking system combining optical and inertial data. The archival sources used in the reconstruction process are presented in the virtual environment by means of a 3D graphical user interface for virtual reality. The information displayed is generated from a database of historical sources which includes information about the urban development and individual buildings of Bely Gorod, their parts, location, purpose, owners and construction date. The database contains both textual and graphic historical sources. The results obtained also include new algorithms, software and hardware systems as well as the experiment results.
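The abstract mentions a hybrid motion tracking system combining optical and inertial data. A common fusion scheme for such hybrids (not necessarily the authors' method) is a complementary filter: a fast but drifting inertial estimate is periodically corrected by a slower, drift-free optical measurement. A minimal one-axis sketch in C++, with illustrative rates and gains:

```cpp
#include <cstdio>

// Minimal complementary filter for one orientation axis (degrees).
// The gyro rate is integrated every frame; the optical measurement,
// when available, pulls the estimate back toward truth (gain alpha).
struct ComplementaryFilter {
    double angle = 0.0;  // fused estimate
    double alpha = 0.02; // optical correction weight (assumed value)

    void predict(double gyroRateDegPerSec, double dt) {
        angle += gyroRateDegPerSec * dt;            // fast inertial path
    }
    void correct(double opticalAngleDeg) {
        angle += alpha * (opticalAngleDeg - angle); // slow drift-free path
    }
};

int main() {
    ComplementaryFilter f;
    for (int i = 0; i < 100; ++i) {
        f.predict(1.0, 0.01);                 // 100 Hz gyro at 1 deg/s
        if (i % 10 == 0) f.correct(0.01 * i); // 10 Hz optical fixes
    }
    std::printf("fused angle: %f deg\n", f.angle);
}
```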
APA, Harvard, Vancouver, ISO, and other styles
4

Sung, Jung-Hwan, Dae-Young Lee, and Hyung-Koo Kim. "Difference of GUI Efficiency based on 3D and 2D Graphic -Imaginary 3D IPTV Interface Development Using Virtual Reality Theory-." Journal of the Korea Contents Association 7, no. 7 (July 28, 2007): 87–95. http://dx.doi.org/10.5392/jkca.2007.7.7.087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lim, Enmi, Kiyoshi Umeki, and Tsuyoshi Honjo. "Application of Virtual Reality and Plant Modeling for Participation in Urban Planning and Design." International Journal of Virtual Reality 8, no. 2 (January 1, 2009): 91–99. http://dx.doi.org/10.20870/ijvr.2009.8.2.2730.

Full text
Abstract:
In landscape planning, visualization of the landscape is a powerful tool for public understanding and for the selection of alternative plans. In recent years, three-dimensional (3D) computer graphics (CG) have been used as environmental visualization tools because of their ability to accurately simulate the changes caused by a proposed plan. In particular, virtual reality (VR), which enables users to walk through a modeled park or a visualized forest, is considered an advanced technique of landscape visualization. In this study, we developed a landscape visualization system with a graphical user interface, which we named VR-Terrain (GUI version), to generate virtual reality images easily by using VRML (Virtual Reality Modeling Language) and plant modeling techniques. In order to test the feasibility of the landscape visualization system, we applied the system to the Ichinoe Urban Design Plan. Ichinoe is located in Edogawa Ward, Tokyo. There is a Sakaikawa Shinsui Park with a water space surrounded by a large amount of greenery. In the case study, we simulated the landscape of Sakaikawa Shinsui Park with about 200 plants and 300 buildings. We used the images simulated by VR-Terrain to explain the concept (such as the building height limit) to the residents in public meetings. It took about 30 hours to make the 3D model of the town. After ten minutes of training, anybody could walk through the simulated town freely. The results showed that the VR images produced by the system helped the public understand the concept of the urban plan.
APA, Harvard, Vancouver, ISO, and other styles
6

구상권 and 민수홍. "Considerations on Rationalization of 3D Computer Graphic Rendering Technology and Appropriation of the Role of Fake Interface on Virtual Reality." Journal of Digital Design 7, no. 2 (April 2007): 121–32. http://dx.doi.org/10.17280/jdd.2007.7.2.012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Vörös, Viktor, Ruixuan Li, Ayoob Davoodi, Gauthier Wybaillie, Emmanuel Vander Poorten, and Kenan Niu. "An Augmented Reality-Based Interaction Scheme for Robotic Pedicle Screw Placement." Journal of Imaging 8, no. 10 (October 6, 2022): 273. http://dx.doi.org/10.3390/jimaging8100273.

Full text
Abstract:
Robot-assisted surgery is becoming popular in the operation room (OR) for, e.g., orthopedic surgery (among other surgeries). However, robotic executions related to surgical steps cannot simply rely on preoperative plans. Using pedicle screw placement as an example, extra adjustments are needed to adapt to the intraoperative changes when the preoperative planning is outdated. During surgery, adjusting a surgical plan is non-trivial and typically rather complex since the available interfaces used in current robotic systems are not always intuitive to use. Recently, thanks to technical advancements in head-mounted displays (HMD), augmented reality (AR)-based medical applications are emerging in the OR. The rendered virtual objects can be overlapped with real-world physical objects to offer intuitive displays of the surgical sites and anatomy. Moreover, the potential of combining AR with robotics is even more promising; however, it has not been fully exploited. In this paper, an innovative AR-based robotic approach is proposed and its technical feasibility in simulated pedicle screw placement is demonstrated. An approach for spatial calibration between the robot and HoloLens 2 without using an external 3D tracking system is proposed. The developed system offers an intuitive AR–robot interaction approach between the surgeon and the surgical robot by projecting the current surgical plan to the surgeon for fine-tuning and transferring the updated surgical plan immediately back to the robot side for execution. A series of bench-top experiments were conducted to evaluate system accuracy and human-related errors. A mean calibration error of 3.61 mm was found. The overall target pose error was 3.05 mm in translation and 1.12° in orientation. The average execution time for defining a target entry point intraoperatively was 26.56 s. This work offers an intuitive AR-based robotic approach, which could facilitate robotic technology in the OR and boost synergy between AR and robots for other medical applications.
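The spatial calibration step, estimating the rigid transform between the robot and the HoloLens 2 from paired 3D points, is commonly solved with the SVD-based Kabsch method. The sketch below, using the Eigen library, shows that standard construction; the paper may well use a different estimator, and the function name is hypothetical.

```cpp
#include <Eigen/Dense>
#include <vector>

// Estimate the rigid transform (R, t) mapping points p_i (robot frame)
// onto q_i (headset frame) with the SVD/Kabsch method.
void estimateRigidTransform(const std::vector<Eigen::Vector3d>& p,
                            const std::vector<Eigen::Vector3d>& q,
                            Eigen::Matrix3d& R, Eigen::Vector3d& t) {
    Eigen::Vector3d cp = Eigen::Vector3d::Zero(), cq = Eigen::Vector3d::Zero();
    for (size_t i = 0; i < p.size(); ++i) { cp += p[i]; cq += q[i]; }
    cp /= p.size(); cq /= q.size();

    Eigen::Matrix3d H = Eigen::Matrix3d::Zero(); // cross-covariance
    for (size_t i = 0; i < p.size(); ++i)
        H += (p[i] - cp) * (q[i] - cq).transpose();

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d U = svd.matrixU(), V = svd.matrixV();
    double d = (V * U.transpose()).determinant(); // guard against reflection
    R = V * Eigen::Vector3d(1, 1, d).asDiagonal() * U.transpose();
    t = cq - R * cp;
}
```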
APA, Harvard, Vancouver, ISO, and other styles
8

Yue, Xue Jun, Tian Sheng Hong, Xing Xu, and Wei Bin Wu. "Study on 3D Virtual Reality Modeling." Advanced Materials Research 129-131 (August 2010): 1296–300. http://dx.doi.org/10.4028/www.scientific.net/amr.129-131.1296.

Full text
Abstract:
Based on virtual reality, a computer simulation technology, we build a vivid and realistic virtual environment so that users can interact directly with it through a human-machine interface. This paper studies 3D virtual model development and integration, including the system design, the realization of its functions and systems integration. In the implementation, we use a modeling tool, 3D RC, to construct the three-dimensional model, and finally we use the Virtools tool to establish the virtual scene model. Virtual reality (VR) is an advanced computer human-machine interface whose basic features are immersion, interactivity and constructiveness [1,2]. Specifically, virtual reality is a computer-created stereoscopic space in which users can interact with objects, watch the operation of some parts of the objects in the space, and move freely at will, so that a sense of integration and participation is produced [3,4]. It is a modern high technology with computer technology at its core, which builds a realistic virtual environment integrating sight, hearing and touch; with the necessary equipment, users interact with and influence the virtual environment, producing the feeling of being "immersed" in a true environment [5-7]. VR technology integrates computer technology, computer graphics, computer simulation, visual physiology and psychology, microelectronics, visual display technology, stereo display technology, sensing and measurement technology, information technology, voice recognition, software engineering, human-machine interfacing, network technology, artificial intelligence and the achievements of other high technologies. Since the birth of virtual reality technology, it has found application in many areas, such as the economy, the military, the Internet and multimedia, and it is regarded as one of the three big technologies of the 21st century.
APA, Harvard, Vancouver, ISO, and other styles
9

Hwang, Jane, Jaehoon Jung, Sunghoon Yim, Jaeyoung Cheon, Sungkil Lee, Seungmoon Choi, and Gerard J. Kim. "Requirements, Implementation and Applications of Hand-held Virtual Reality." International Journal of Virtual Reality 5, no. 2 (January 1, 2006): 59–66. http://dx.doi.org/10.20870/ijvr.2006.5.2.2689.

Full text
Abstract:
While hand-held computing devices are capable of rendering advanced 3D graphics and processing multimedia data, they are not designed to provide and induce a sufficient sense of immersion and presence for virtual reality. In this paper, we propose minimal requirements for realizing VR on a hand-held device. Furthermore, based on the proposed requirements, we have designed and implemented a low-cost hand-held VR platform by adding multimodal sensors and display components to a hand-held PC. The platform enables a motion-based interface, an essential part of realizing VR on a small hand-held device, and provides outputs in three modalities, visual, aural and tactile/haptic, for a reasonable sensory experience. We showcase our platform and demonstrate the possibilities of hand-held VR through three VR applications: a typical virtual walkthrough, a 3D multimedia contents browser, and a motion-based racing game.
APA, Harvard, Vancouver, ISO, and other styles
10

Novak-Marcincin, Jozef. "Virtual Reality Modeling Language as Tool for Automated Workplaces Simulation." Applied Mechanics and Materials 309 (February 2013): 372–79. http://dx.doi.org/10.4028/www.scientific.net/amm.309.372.

Full text
Abstract:
Virtual Reality Modelling Language (VRML) is a description language belonging to the field of virtual reality (VR) systems. A file in VRML format can be interpreted by a VRML browser as a three-dimensional scene. VRML was created with the aim of representing virtual reality on the Internet more easily. The development of 3D graphics is connected with Silicon Graphics Corporation. VRML 2.0 is a file format for describing interactive 3D scenes and objects. It can be used in collaboration with the WWW and for creating complex 3D representations of scenes, products or VR applications. VRML 2.0 can represent both static and animated objects. An interesting application of VRML is in the area of automated workplace simulation.
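For readers unfamiliar with the format, the C++ snippet below writes a minimal VRML 2.0 file of the kind a VRML browser interprets as a three-dimensional scene; the file name and geometry are illustrative only.

```cpp
#include <fstream>

// Write a minimal VRML 2.0 scene: a single red box. A VRML browser
// opening "workcell.wrl" renders it as an interactive 3D scene.
int main() {
    std::ofstream out("workcell.wrl");
    out << "#VRML V2.0 utf8\n"
           "Shape {\n"
           "  appearance Appearance {\n"
           "    material Material { diffuseColor 1 0 0 }\n"
           "  }\n"
           "  geometry Box { size 2 1 1 }\n"
           "}\n";
}
```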
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "VIRTUAL REALITY 3D GRAPHIC INTERFACES"

1

Chen, Yenan. "Advanced Multi-modal User Interfaces in 3D Computer Graphics and Virtual Reality." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75889.

Full text
Abstract:
Computers are continuously developed to satisfy human demands and are typical tools used everywhere, ranging from daily life to all kinds of research. Virtual Reality (VR), a virtual environment simulated to present physical presence in the real world and in imaginary worlds, has been widely applied to simulate virtual environments. People's perception is limited to the visual when only computers are applied for simulations, since computers are limited to displaying visualizations of data, while the human senses include sight, smell, hearing, taste, touch and so on. Other devices can be applied, such as haptics, a device for the sense of touch, to enhance human perception in the virtual environment. A good way to deploy VR applications is to place them in a virtual display system, a system with multiple tools that displays a virtual environment engaging different human senses, to enhance people's feeling of being immersed in a virtual environment. Such virtual display systems include the VR dome, the recursively named CAVE, the VR workbench, the VR workstation and so on. Menus, with their many advantages in manipulating applications, are common in conventional systems, operating systems and other computer systems. Normally a system will not be usable without them. Although VR applications are more natural and intuitive, they are much less usable, or not usable at all, without menus. Yet very few studies have focused on user interfaces in VR. This situation motivated us to work further in this area. We wanted to create two models for different purposes. One is inspired by menus in conventional systems and the sense of touch. The other is designed around the spatial presence of VR. The first model is a two-dimensional pie menu in pop-up style with spring force feedback. This model is in a pie shape with eight options on the root menu, and a pop-up style hierarchical menu belongs to each option on the root menu. When the haptics device is near an option on the root menu, the spring force will pull the haptics device towards the center of the option and that option will be selected; then a sub menu with nine options will pop up. The pie shape together with the spring force effect is expected to both increase the speed of selection and decrease the error rate of selection. The other model is a semiautomatic three-dimensional cube menu. This cube menu is designed with the aim of providing a simple, elegant, efficient and accurate user interface approach. The model has four faces, the front, back, left and right faces of the cube. Each face represents a category and has nine widgets, so users can make selections in different categories. An efficient way to change between categories is to rotate the cube automatically. Thus, a navigable rotation animation system is built which rotates the cube horizontally by ninety degrees each time, so one of the faces always faces the user. These two models are built with H3DAPI, an open source haptics software development platform, and its UI toolkit, a user interface toolkit. After the implementation, we made a pilot study, a formative study, to evaluate the feasibility of both menus. The pilot study included a list of tasks for each menu, a questionnaire regarding menu performance for each subject and a discussion with each subject. Six students participated as test subjects.
In the pie menu, most of the subjects felt that the spring force guided them to the target option and that they could control the haptics device comfortably under such force. In the cube menu, the navigation rotation system worked well and the cube rotated accurately and efficiently. The results of the pilot study show that the models work as we initially expected. The recorded task completion time for each menu shows that, with the same number of tasks and similar difficulty, subjects spent more time on the cube menu than on the pie menu. This may indicate that the pie menu is a faster approach compared to the cube menu. We further consider that both the pie shape and the force feedback may help reduce selection time. The result of the option selection error rate test on the cube menu may indicate that option selection without any force feedback can also achieve a considerably good effect. According to the answers to the questionnaire, both menus are comfortable to use and easy to control.
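The spring force that guides the haptic device toward an option's center is essentially Hooke's law, F = -k(x - c). A minimal standalone sketch follows; the thesis builds on H3DAPI, but this helper and its parameters are hypothetical.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Spring force pulling the haptic proxy toward the selected option's
// center, F = -k * (pos - center), active only within a capture radius.
Vec3 optionSpringForce(const Vec3& pos, const Vec3& center,
                       double k, double captureRadius) {
    Vec3 d{pos.x - center.x, pos.y - center.y, pos.z - center.z};
    double dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (dist > captureRadius) return {0.0, 0.0, 0.0}; // outside the option
    return {-k * d.x, -k * d.y, -k * d.z};
}
```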
APA, Harvard, Vancouver, ISO, and other styles
2

Míchal, Vít. "3D video browser." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-235486.

Full text
Abstract:
The aim of this project is to design and create an application for the visualization of interconnected video data. Visualization takes place in 3D space and seeks to exploit its advantages (such as depth perception). This document contains a survey of different categories of spatial user interfaces. In addition, it includes three possible designs of the user interface and controls. Implementation details and the usability tests performed are also described. The application is implemented in C++ using Open Inventor. The document concludes with an evaluation of the results and of the tests.
APA, Harvard, Vancouver, ISO, and other styles
3

Fernández, Baena Adso. "Animation and Interaction of Responsive, Expressive, and Tangible 3D Virtual Characters." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/311800.

Full text
Abstract:
This thesis is framed within the field of 3D Character Animation. Virtual characters are used in many Human Computer Interaction applications such as video games and serious games. Within these virtual worlds they move and act in similar ways to humans controlled by users through some form of interface or by artificial intelligence. This work addresses the challenges of developing smoother movements and more natural behaviors driving motions in real-time, intuitively, and accurately. The interaction between virtual characters and intelligent objects will also be explored. With these subjects researched the work will contribute to creating more responsive, expressive, and tangible virtual characters. The navigation within virtual worlds uses locomotion such as walking, running, etc. To achieve maximum realism, actors' movements are captured and used to animate virtual characters. This is the philosophy of motion graphs: a structure that embeds movements where the continuous motion stream is generated from concatenating motion pieces. However, locomotion synthesis, using motion graphs, involves a tradeoff between the number of possible transitions between different kinds of locomotion, and the quality of these, meaning smooth transition between poses. To overcome this drawback, we propose the method of progressive transitions using Body Part Motion Graphs (BPMGs). This method deals with partial movements, and generates specific, synchronized transitions for each body part (group of joints) within a window of time. Therefore, the connectivity within the system is not linked to the similarity between global poses allowing us to find more and better quality transition points while increasing the speed of response and execution of these transitions in contrast to standard motion graphs method. Secondly, beyond getting faster transitions and smoother movements, virtual characters also interact with each other and with users by speaking. This interaction requires the creation of appropriate gestures according to the voice that they reproduced. Gestures are the nonverbal language that accompanies voiced language. The credibility of virtual characters when speaking is linked to the naturalness of their movements in sync with the voice in speech and intonation. Consequently, we analyzed the relationship between gestures, speech, and the performed gestures according to that speech. We defined intensity indicators for both gestures (GSI, Gesture Strength Indicator) and speech (PSI, Pitch Strength Indicator). We studied the relationship in time and intensity of these cues in order to establish synchronicity and intensity rules. Later we adapted the mentioned rules to select the appropriate gestures to the speech input (tagged text from speech signal) in the Gesture Motion Graph (GMG). The evaluation of resulting animations shows the importance of relating the intensity of speech and gestures to generate believable animations beyond time synchronization. Subsequently, we present a system that leads automatic generation of gestures and facial animation from a speech signal: BodySpeech. This system also includes animation improvements such as: increased use of data input, more flexible time synchronization, and new features like editing style of output animations. In addition, facial animation also takes into account speech intonation. Finally, we have moved virtual characters from virtual environments to the physical world in order to explore their interaction possibilities with real objects. 
To this end, we present AvatARs, virtual characters that have tangible representation and are integrated into reality through augmented reality apps on mobile devices. Users choose a physical object to manipulate in order to control the animation. They can select and configure the animation, which serves as a support for the virtual character represented. Then, we explored the interaction of AvatARs with intelligent physical objects like the Pleo social robot. Pleo is used to assist hospitalized children in therapy or simply for playing. Despite its benefits, there is a lack of emotional relationship and interaction between the children and Pleo which makes children lose interest eventually. This is why we have created a mixed reality scenario where Vleo (AvatAR as Pleo, virtual element) and Pleo (real element) interact naturally. This scenario has been tested and the results conclude that AvatARs enhances children's motivation to play with Pleo, opening a new horizon in the interaction between virtual characters and robots.
APA, Harvard, Vancouver, ISO, and other styles
4

Terziman, Léo. "Contribution à l'Étude des Techniques d'Interaction 3D et des Retours Sensoriels pour Améliorer la Navigation et la Marche en Réalité Virtuelle." Phd thesis, INSA de Rennes, 2012. http://tel.archives-ouvertes.fr/tel-00767488.

Full text
Abstract:
First-person navigation in Virtual Environments (VEs) is essential to many Virtual Reality (VR) applications, such as training simulations or virtual tours of museums and architectural projects. Navigation techniques must provide an efficient and ecological way of exploring VEs. Moreover, as in any other VR application, user immersion is also paramount to achieving a good simulation. In this thesis, we proposed new 3D interaction techniques and sensory feedback to improve navigation. Our contributions can be divided into two parts: (1) a new interaction technique for efficient and ecological navigation in VR, and (2) new sensory feedback techniques designed to improve users' immersion and walking sensation during navigation. For each proposed technique, we conducted extensive evaluations to validate that it achieves its goals. In the first part, we proposed a new navigation technique for VR, the Shake-Your-Head (SYH). This technique tracks the user's head movements while walking in place in front of the screen to produce navigation that simulates walking, as well as jumping or crawling. We found that our technique can be used efficiently on complex trajectories and is easy to learn. Moreover, the technique was highly appreciated by users. In the second part, we proposed a technique, the King Kong Effects (KKE), to simulate the visual and vibrotactile cues produced at each step. We also proposed new, improved Camera Motions (CM) to simulate head movements while walking, running and sprinting. Furthermore, our CMs adapt to the age, gender, weight and fitness of the virtual human, as well as to the slope of the VE. The KKE improve the sensation of walking in the VE and were also highly appreciated by users. Finally, we showed that the different CM parameters are correctly perceived by users.
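Camera Motions of this kind are typically modeled as sinusoidal oscillations at the step frequency, with amplitude and frequency scaled by the simulated gait. A minimal sketch; the parameter values are illustrative, not the thesis's calibrated profiles:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Vertical head-bob offset of a walking camera: a sinusoid at the step
// frequency. In a KKE-like system, amplitude and frequency would be
// adapted to the virtual human's age, gender, weight and fitness.
double cameraBobOffset(double timeSec, double stepsPerSec, double amplitudeM) {
    return amplitudeM * std::sin(2.0 * kPi * stepsPerSec * timeSec);
}

// Example: a 2 steps/s walk with 2 cm amplitude.
// double y = cameraBobOffset(t, 2.0, 0.02);
```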
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Jia. "Isometric versus Elastic Surfboard Interfaces for 3D Travel in Virtual Reality." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/874.

Full text
Abstract:
" Three dimensional travel in immersive virtual environments (IVE) has been a difficult problem since the beginning of virtual reality (VR), basically due to the difficulty of designing an intuitive, efficient, and precise three degrees of freedom (DOF) interface which can map the user's finite local movements in the real world to a potentially infinite virtual space. Inspired by the Silver Surfer Sci-Fi movie and the popularity of the Nintendo Wii Balance Board interface, a surfboard interface appears to be a good solution to this problem. Based on this idea, I designed and developed a VR Silver Surfer system which allows a user to surf in the sky of an infinite virtual environment, using either an isometric balance board or an elastic tilt board. Although the balance board is the industrial standard of board interface, the tilt board seems to provide the user more intuitive, realistic and enjoyable experiences, without any sacrifice of efficiency or precision. To validate this hypothesis we designed and conducted a user study that compared the two board interfaces in three independent experiments that break the travel procedure into separate DOFs. The results showed that in all experiments, the tilt board was not only as efficient and precise as the balance board, but also more intuitive, realistic and fun. In addition, despite the popularity of the balance board in the game industry, most subjects in the study preferred the tilt board in general, and in fact complained that the balance board could have been the cause of possible motion sickness. "
APA, Harvard, Vancouver, ISO, and other styles
6

Santos, Lages Wallace. "Walk-Centric User Interfaces for Mixed Reality." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84460.

Full text
Abstract:
Walking is a natural part of our lives and is also becoming increasingly common in mixed reality. Wireless headsets and improved tracking systems allow us to easily navigate real and virtual environments by walking. In spite of the benefits, walking brings challenges to the design of new systems. In particular, designers must be aware of cognitive and motor requirements so that walking does not negatively impact the main task. Unfortunately, those demands are not yet fully understood. In this dissertation, we present new scientific evidence, interaction designs, and analysis of the role of walking in different mixed reality applications. We evaluated the difference in performance of users walking vs. manipulating a dataset during visual analysis. This is an important task, since virtual reality is increasingly being used as a way to make sense of progressively complex datasets. Our findings indicate that neither option is absolutely better: the optimal design choice should consider both the user's experience with controllers and the user's inherent spatial ability. Participants with reasonable game experience and low spatial ability performed better using the manipulation technique. However, we found that walking can still enable higher performance for participants with low spatial ability and without significant game experience. In augmented reality, specifying points in space is an essential step to create content that is registered with the world. However, this task can be challenging when information about the depth or geometry of the target is not available. We evaluated different augmented reality techniques for point marking that do not rely on any model of the environment. We found that triangulation by physically walking between points provides higher accuracy than purely perceptual methods. However, precision may be affected by head pointing tremors. To increase the precision, we designed a new technique that uses multiple samples to obtain a better estimate of the target position. This technique can also be used to mark points while walking. The effectiveness of this approach was demonstrated with a controlled augmented reality simulation and actual outdoor tests. Moving into the future, augmented reality will eventually replace our mobile devices as the main method of accessing information. Nonetheless, to achieve its full potential, augmented reality interfaces must support the fluid way we move in the world. We investigated the potential of adaptation in achieving this goal. We conceived and implemented an adaptive workspace system, based on a study of the design space and on contextual user studies. Our final design consists of a minimal set of techniques to support mobility and integration with the real world. We also identified a set of key interaction patterns and desirable properties of adaptation-based techniques, which can be used to guide the design of next-generation walking-centered workspaces.
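The triangulation-by-walking technique reduces to intersecting two pointing rays recorded from different standing positions; because head-pointing tremor means the rays rarely intersect exactly, a standard estimate (not necessarily the dissertation's exact estimator) is the midpoint of the shortest segment between them:

```cpp
#include <Eigen/Dense>
#include <cmath>

// Closest-point triangulation of two pointing rays o1 + s*d1 and o2 + t*d2
// (d1, d2 unit length): returns the midpoint of the shortest segment
// between them, a robust target estimate when the rays do not intersect.
Eigen::Vector3d triangulate(const Eigen::Vector3d& o1, const Eigen::Vector3d& d1,
                            const Eigen::Vector3d& o2, const Eigen::Vector3d& d2) {
    const Eigen::Vector3d w = o1 - o2;
    const double b = d1.dot(d2);
    const double denom = 1.0 - b * b; // ~0 when the rays are parallel
    double s = 0.0, t = 0.0;
    if (std::abs(denom) > 1e-12) {
        s = (b * d2.dot(w) - d1.dot(w)) / denom;
        t = (d2.dot(w) - b * d1.dot(w)) / denom;
    }
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2));
}
```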
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
7

Silveira, Junior Wedson Gomes da. "Manipulação de objetos 3D em ambientes colaborativos com o uso do dispositivo Kinect." Universidade Federal de Uberlândia, 2013. https://repositorio.ufu.br/handle/123456789/14523.

Full text
Abstract:
Research on natural interaction has been growing significantly since, with the spread of personal computers, there is an increasing demand for interfaces that maximize productivity. Among these we can highlight Virtual Reality and Augmented Reality interfaces (KIRNER, TORI e SISCOUTO, 2006), where users can perform simple tasks such as choosing a 3D object and applying translation, rotation and scaling to it. These tasks are usually performed through devices such as the keyboard and mouse, and the user may thus lose immersion in the virtual environment. Investigating methodologies for natural interaction in these environments can therefore help increase user immersion in the virtual environment. Another issue that has been the focus of much research is Collaborative Virtual Environments (KIRNER e TORI, 2004), which allow users to communicate and share information. These users can be physically close or not. The main focus of this work is precisely the communication among remotely dispersed users. Thus, this work proposes a system in which it is possible to manipulate 3D objects using natural gestures and to share data among remotely dispersed users.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
8

Hachet, Martin. "Interfaces utilisateur 3D, des terminaux mobiles aux environnements virtuels immersifs." Habilitation à diriger des recherches, Université Sciences et Technologies - Bordeaux I, 2010. http://tel.archives-ouvertes.fr/tel-00576663.

Full text
Abstract:
Improving the interaction between a user and a 3D environment is a key research challenge for the positive development of interactive 3D technologies in many areas of our societies, such as education. In this document, I present 3D user interfaces that we have developed and that contribute to this general quest. The first chapter focuses on 3D interaction for mobile devices. In particular, I present techniques dedicated to key-based interaction and to gesture-based interaction on the touchscreens of mobile devices. I then present two prototypes with several degrees of freedom based on the use of video streams. In the second chapter, I concentrate on 3D interaction with touchscreens in general (tabletops, interactive displays). I present Navidget, an example of an interaction technique dedicated to virtual camera control from 2D gestures, and I discuss the challenges of 3D interaction on multi-touch screens. Finally, the third chapter of this document is dedicated to immersive virtual environments, with a special emphasis on musical interfaces. I present the new directions we have explored to improve the interaction between musicians, the audience, sound, and interactive 3D environments. I conclude by discussing the future of 3D user interfaces.
APA, Harvard, Vancouver, ISO, and other styles
9

Pouke, M. (Matti). "Augmented virtuality:transforming real human activity into virtual environments." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526208343.

Full text
Abstract:
The topic of this work is the transformation of real-world human activity into virtual environments. More specifically, the topic is the process of identifying various aspects of visible human activity with sensor networks and studying the different ways the identified activity can be visualized in a virtual environment. The transformation of human activities into virtual environments is a rather new research area. While there is existing research on sensing and visualizing human activity in virtual environments, it is usually carried out within a specific type of human activity, such as basic actions and locomotion. However, different types of sensors can provide very different human activity data, as well as lend themselves to very different use-cases. This work is among the first to study the transformation of human activities on a larger scale, comparing various types of transformations from multiple theoretical viewpoints. This work utilizes constructs built for use-cases that require the transformation of human activity for various purposes. Each construct is a mixed reality application that utilizes a different type of source data and visualizes human activity in a different way. The constructs are evaluated from practical as well as theoretical viewpoints. The results imply that different types of activity transformations have significantly different characteristics. The most distinct theoretical finding is that there is a relationship between the level of detail of the transformed activity, the specificity of the sensors involved and the extent of world knowledge required to transform the activity. The results also provide novel insights into using human activity transformations for various practical purposes. Transformations are evaluated as control devices for virtual environments, as well as in the context of visualization and simulation tools in elderly home care and urban studies.
APA, Harvard, Vancouver, ISO, and other styles
10

Barnes, Evans Katie. "Beyond the Screen: Embedded Interfaces as Retail Wayfinding Tools." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1493251709396537.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "VIRTUAL REALITY 3D GRAPHIC INTERFACES"

1

Xuelei, Qian, and ebrary Inc, eds. OpenSceneGraph 3.0: Beginner's guide: create high-performance virtual reality applications with OpenSceneGraph, one of the best 3D graphics engines. Birmingham, U.K.: Packt Open Source, 2010.
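For orientation, the kind of minimal program this beginner's guide builds up to looks like the following: a standard OpenSceneGraph "load a model and view it" sketch (the model file name comes from the OSG sample data and is illustrative).

```cpp
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

// Minimal OpenSceneGraph application: load a model file and hand it to
// an interactive viewer loop.
int main() {
    osg::ref_ptr<osg::Node> model = osgDB::readNodeFile("cessna.osg");
    if (!model) return 1; // model not found
    osgViewer::Viewer viewer;
    viewer.setSceneData(model.get());
    return viewer.run();
}
```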

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cai, Yiyu. 3D Immersive and Interactive Learning. Singapore: Springer Singapore, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

IEEE Symposium on 3D User Interfaces (2008, Reno, Nev.). 3DUI: IEEE Symposium on 3D User Interfaces 2008: Reno, Nevada, USA, March 8-9, 2008: proceedings. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

IEEE Symposium on 3D User Interfaces (9th, 2014, Minneapolis, Minn.). 2014 IEEE Symposium on 3D User Interfaces (3DUI 2014): Minneapolis, Minnesota, USA, 29-30 March 2014. Piscataway, NJ: IEEE, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

IEEE Symposium on 3D User Interfaces (2nd, 2007, Charlotte, N.C.). 3DUI: IEEE Symposium on 3D User Interfaces 2007: proceedings, Charlotte, North Carolina, USA, March 10-11, 2007. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

IEEE Symposium on 3D User Interfaces (8th, 2013, Orlando, Fla.). 2013 IEEE Symposium on 3D User Interfaces (3DUI 2013): Orlando, Florida, USA, 16-17 March 2013. Piscataway, NJ: IEEE, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cai, Yiyu. 3D Immersive and Interactive Learning. Springer, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cai, Yiyu. 3D Immersive and Interactive Learning. Springer, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Staff, IEEE. 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Staff, IEEE. 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "VIRTUAL REALITY 3D GRAPHIC INTERFACES"

1

Lacoche, Jérémy, Thierry Duval, Bruno Arnaldi, Eric Maisel, and Jérôme Royan. "3DPlasticToolkit: Plasticity for 3D User Interfaces." In Virtual Reality and Augmented Reality, 62–83. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31908-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lacoche, Jérémy, Thierry Duval, Bruno Arnaldi, Eric Maisel, and Jérôme Royan. "Machine Learning Based Interaction Technique Selection for 3D User Interfaces." In Virtual Reality and Augmented Reality, 33–51. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31908-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cannavò, Alberto, Davide Calandra, Aidan Kehoe, and Fabrizio Lamberti. "Evaluating Consumer Interaction Interfaces for 3D Sketching in Virtual Reality." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 291–306. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73426-8_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Riddershom Bargum, Anders, Oddur Ingi Kristjánsson, Péter Babó, Rasmus Eske Waage Nielsen, Simon Rostami Mosen, and Stefania Serafin. "Spatial Audio Mixing in Virtual Reality." In Sonic Interactions in Virtual Environments, 269–302. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04021-4_9.

Full text
Abstract:
The development of Virtual Reality (VR) systems and multimodal simulations presents possibilities in spatial-music mixing, be it in virtual spaces, for ensembles and orchestral compositions or for surround sound in film and music. Traditionally, user interfaces for mixing music have employed the channel-strip metaphor for controlling volume, panning and other audio effects, aspects that have also grown into the culture of mixing music spatially. Simulated rooms and two-dimensional panning systems are simply implemented on computer screens to facilitate the placement of sound sources within space. In this chapter, we present design aspects for mixing in VR, investigating already existing virtual music mixing products and creating a framework from which a virtual spatial-music mixing tool can be implemented. Finally, the tool will be tested against a similar computer version to examine whether or not the sensory benefits and palpable spatial proportions of a VE can improve the process of mixing 3D sound.
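Beneath any panning metaphor the chapter discusses sits a gain law. The classic constant-power stereo pan, a textbook formula rather than anything specific to the chapter's VR tool, keeps perceived loudness constant as a source moves:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Constant-power stereo panning: pan in [-1 (left), +1 (right)] maps to
// an angle in [0, pi/2], so that gl^2 + gr^2 == 1 and perceived loudness
// stays constant while the source moves across the stereo field.
void constantPowerPan(double pan, double& gainLeft, double& gainRight) {
    const double theta = (pan + 1.0) * kPi / 4.0;
    gainLeft  = std::cos(theta);
    gainRight = std::sin(theta);
}
```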
5

Andujar, Carlos, and Pere Brunet. "A Critical Analysis of Human-Subject Experiments in Virtual Reality and 3D User Interfaces." In Lecture Notes in Computer Science, 79–90. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17043-5_5.

6

Zappi, Victor, Dario Mazzanti, and Florent Berthaut. "From the Lab to the Stage: Practical Considerations on Designing Performances with Immersive Virtual Musical Instruments." In Sonic Interactions in Virtual Environments, 383–424. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04021-4_13.

Abstract:
Immersive virtual musical instruments (IVMIs) lie at the intersection between music technology and virtual reality. Being both digital musical instruments (DMIs) and elements of virtual environments (VEs), IVMIs have the potential to transport the musician into a world of imagination and unprecedented musical expression. But when the final aim is to perform live on stage, the employment of these technologies is anything but straightforward, for sharing the virtual musical experience with the audience gets quite arduous. In this chapter, we assess in detail the several technical and conceptual challenges linked to the composition of IVMI performances on stage, i.e., their scenography, providing a new critical perspective on IVMI performance and design. We first propose a set of dimensions meant to analyse IVMI scenographies, as well as to evaluate their compatibility with different instrument metaphors and performance rationales. Such dimensions are built from the specifics and constraints of DMIs and VEs; they include the level of immersion of musicians and spectators and provide an insight into the interaction techniques afforded by 3D user interfaces in the context of musical expression. We then analyse a number of existing IVMIs and stage setups, and finally suggest new ones, with the aim to facilitate the design of future immersive performances.
7

Jasmine, S. Graceline, L. Jani Anbarasi, Modigari Narendra, and Benson Edwin Raj. "Augmented and Virtual Reality and Its Applications." In Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality, 68–85. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4703-8.ch003.

Abstract:
Augmented reality (AR) overlays digitally created materials directly onto real-world materials. This chapter addresses the technological and design frameworks required to create realistic motion-tracking environments, realistic audio, 3D graphical interactions, multimodal sensory integration, and user interfaces and games spanning virtual reality to augmented reality. Similarly, the portfolio required to build a personal VR or AR application is detailed. Innovative technologies to which the virtual and augmented reality industry is committed, and which can be explored in the fields of entertainment, education, training, and medical and industrial innovation, are examined along with their development. Augmented reality (AR) allows the physical world to be enhanced by incorporating, in real time, digital knowledge generated by virtual machines. A few applications that have used augmented and virtual reality in real-world settings are also discussed.
8

Santoianni, Flavia, and Alessandro Ciasullo. "Digital and Spatial Education Intertwining in the Evolution of Technology Resources for Educational Curriculum Reshaping and Skills Enhancement." In Virtual Reality in Education, 330–47. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8179-6.ch016.

Abstract:
The aim of this research is to examine how digital education has been intertwined with spatial education throughout the evolution of technology resources. In recent years, the user experience has been improved by open-source, collaborative user-generated, and immersive content, from multimedia/hypermedia architectures to synthetic learning environments. This research analyses which spatial design principles have influenced multimedia/hypermedia, collaborative Web 2.0 interfaces, and, more recently, the synthetic environments of virtual worlds. The evolution of technology resources supports the hypothesis of a continuous intertwining between digital and spatial education, beginning with multimedia/hypermedia architectures, in which spatial knowledge may play a significant role in web-based design according to individual differences in hypermedia use, prior knowledge in the field, and personal experience in web-based instruction. In collaborative user-generated content technology, visual presentation facilitates the co-construction of learning, and spaces are understood as synchronous and asynchronous virtual knowledge spaces for communication. In 3D virtual learning environments, spatial interaction is fully developed and may open accessibility to further studies on digital and spatial education. In the joint field of learning and ICT, the main aim of digital technology knowledge sharing and re-shaping is the enhancement of digital skills based on experiences in educational activities, and the re-thinking of the nature and format of the educational curriculum to implement more experiences in the digital and, possibly, spatial fields.
9

Armiano, Ioana. "Creative Interfaces." In Innovative Design and Creation of Visual Interfaces, 192–219. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0285-4.ch014.

Abstract:
Recent developments in process interaction solutions are helping companies and educational institutions to reduce training costs, enhance visualization, and increase communication. Service personnel can make more informed decisions when a broad range of employees can access data instantly. New 3D interactive technologies incorporated into training applications and learning environments, together with the introduction of the one-projector 3D solution, are rapidly changing the landscape for education. Over the last 10 years, virtual reality applications have been applied in various industries: medicine, aircraft computer modeling, training simulations for offshore drilling platforms, product configuration, and 3D visualization solutions for education and R&D. This paper examines emergent visualization technologies, their influence on market growth, and new perceptions of learning and teaching. It describes the interrelationship between technology development, technology providers, product launches, R&D, and the motivation to learn and teach new skills. The paper incorporates social, technological, and global market growth drivers, describing the pull-and-drag synergy between these forces.
10

Gaspar, Filipe, Rafael Bastos, and Miguel Sales. "Accurate Infrared Tracking System for Immersive Virtual Environments." In Innovative Design and Creation of Visual Interfaces, 318–43. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0285-4.ch020.

Abstract:
In large-scale immersive virtual reality (VR) environments, such as a CAVE, one of the most common problems is tracking the position of the user's head while he or she is immersed in this environment to reflect perspective changes in the synthetic stereoscopic images. In this paper, the authors describe the theoretical foundations and engineering approach adopted in the development of an infrared-optical tracking system designed for large-scale immersive Virtual Environments (VE) or Augmented Reality (AR) settings. The system is capable of tracking independent retro-reflective markers arranged in a 3D structure in real time, recovering all possible 6DOF. These artefacts can be adjusted to the user's stereo glasses to track his or her head while immersed, or used as a 3D input device for rich human-computer interaction (HCI). The hardware configuration consists of 4 shutter-synchronized cameras attached with band-pass infrared filters and illuminated by infrared array-emitters. Pilot lab results have shown a latency of 40 ms when simultaneously tracking the pose of two artefacts with 4 infrared markers, achieving a frame rate of 24.80 fps and showing a mean accuracy of 0.93 mm/0.51° and a mean precision of 0.19 mm/0.04°, respectively, in overall translation/rotation, fulfilling the requirements initially defined.

Conference papers on the topic "VIRTUAL REALITY 3D GRAPHIC INTERFACES"

1

"IEEE Visualization and Graphics Technical Committee (VGTC)." In 2021 IEEE Virtual Reality and 3D User Interfaces (VR). IEEE, 2021. http://dx.doi.org/10.1109/vr50410.2021.00008.

2

Riecke, Bernhard E., Joseph J. LaViola, and Ernst Kruijff. "3D user interfaces for virtual reality and games." In SIGGRAPH '18: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3214834.3214869.

3

Muller, Christoph, Matthias Braun, and Thomas Ertl. "Optimised Molecular Graphics on the HoloLens." In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2019. http://dx.doi.org/10.1109/vr.2019.8798111.

4

"IEEE Visualization and Graphics Technical Committee (VGTC)." In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2018. http://dx.doi.org/10.1109/vr.2018.8446353.

5

"IEEE Visualization and Graphics Technical Committee (VGTC)." In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2019. http://dx.doi.org/10.1109/vr.2019.8798109.

6

Hobson, Tanner, Jeremiah Duncan, Mohammad Raji, Aidong Lu, and Jian Huang. "Alpaca: AR Graphics Extensions for Web Applications." In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2020. http://dx.doi.org/10.1109/vr46266.2020.00036.

7

Kaur, Akriti, and Pradeep G. Yammiyavar. "A comparative study of 2D and 3D mobile keypad user interaction preferences in virtual reality graphic user interfaces." In VRST '17: 23rd ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3139131.3141221.

8

Whitlock, Matt, Stephen Smart, and Danielle Albers Szafir. "Graphical Perception for Immersive Analytics." In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2020. http://dx.doi.org/10.1109/vr46266.2020.00084.
