Journal articles on the topic "Mixed reality interfaces"

To see other types of publications on this topic, follow the link: Mixed reality interfaces.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Mixed reality interfaces".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, whenever these details are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Yoo, Yong-Ho, and Wilhelm Bruns. "ENERGY INTERFACES FOR MIXED REALITY." IFAC Proceedings Volumes 39, no. 3 (2006): 249–54. http://dx.doi.org/10.3182/20060517-3-fr-2903.00140.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Lindlbauer, David. "The future of mixed reality is adaptive." XRDS: Crossroads, The ACM Magazine for Students 29, no. 1 (September 2022): 26–31. http://dx.doi.org/10.1145/3558191.

Full text of the source
Abstract:
In a future where we replace our smartphones and notebooks with mixed reality headsets, the way we create user interfaces will change drastically. Future interfaces will need to adapt automatically to users' context, guided by optimization-based methods and machine learning, to become beneficial for end-users.
APA, Harvard, Vancouver, ISO, and other styles
3

White, Martin, Panagiotis Petridis, Fotis Liarokapis, and Daniel Pletinckx. "Multimodal Mixed Reality Interfaces for Visualizing Digital Heritage." International Journal of Architectural Computing 5, no. 2 (June 2007): 321–37. http://dx.doi.org/10.1260/1478-0771.5.2.322.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Mountain, David, and Fotis Liarokapis. "Mixed reality (MR) interfaces for mobile information systems." Aslib Proceedings 59, no. 4/5 (July 12, 2007): 422–36. http://dx.doi.org/10.1108/00012530710817618.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Poustinchi, Ebrahim. "Mixed Robotic Interface Г: Searching for a hybrid cyber-physical design/experience interface using virtual/actual robots." SHS Web of Conferences 64 (2019): 01008. http://dx.doi.org/10.1051/shsconf/20196401008.

Full text of the source
Abstract:
Mixed Robotic Interface is a project-based design-research investigation, studying new ways of creating hybridized cyber-physical design and experience interfaces at the intersection of robotics—as its core component—and augmented reality, game design, projection mapping, and digital fabrication. Mixed Robotic Interface Г—as part of the Mixed Robotic Interface series of research projects—focuses on using “actual” and “virtual” robot arms as a possible creative medium and as extensions of the design/gaming environment, creating immersive atmospheres for “experiencing” design. This research questions the possibilities of creating an architectural/spatial atmosphere through digitally enhanced experiences. Different from some of the current experiments with augmented reality (AR), virtual reality (VR) and projection mapping in architecture, Mixed Robotic Interface Г is not looking into “immersive” experience as a way to “blur” the boundaries of digital and physical—similar to virtual reality experiences with headsets. Instead, Mixed Robotic Interface Г creates a recognizable gap between real and virtual to open up a creative space for the user/audience to be involved between these two mediums. Mixed Robotic Interface Г uses para-fictional storytelling as a way to engage the audience with the experience and to create continuous atmospheric qualities.
APA, Harvard, Vancouver, ISO, and other styles
6

Osipov, Ilya V. "Cubios Transreality Puzzle as a Mixed Reality Object." International Journal of Virtual and Augmented Reality 1, no. 2 (July 2017): 1–17. http://dx.doi.org/10.4018/ijvar.2017070101.

Full text of the source
Abstract:
The author proposes to establish a separate class of electronic mechanical puzzles as objects of “mixed reality”, presents a self-engineered example of such a device, and reviews similar devices produced by other developers. Close relationships of such devices with tangible user interfaces are described. A Cubios device is presented as an illustration of a mixed reality puzzle, along with its variants developed by the author. The purpose of this paper is to present a new mixed reality device, review similar devices, propose a classification of such devices, identify their relationships with tangible user interfaces, and discuss the prospects of their development.
APA, Harvard, Vancouver, ISO, and other styles
7

Andolina, Salvatore, Yi-Ta Hsieh, Denis Kalkofen, Antti Nurminen, Diogo Cabral, Anna Spagnolli, Luciano Gamberini, Ann Morrison, Dieter Schmalstieg, and Giulio Jacucci. "Designing for Mixed Reality Urban Exploration." Interaction Design and Architecture(s), no. 48 (June 10, 2021): 33–49. http://dx.doi.org/10.55612/s-5002-048-002.

Full text of the source
Abstract:
This paper introduces a design framework for mixed reality urban exploration (MRUE), based on a concrete implementation in a historical city. The framework integrates different modalities, such as virtual reality (VR), augmented reality (AR), and haptics-audio interfaces, as well as advanced features such as personalized recommendations, social exploration, and itinerary management. It makes it possible to address a number of concerns regarding information overload, safety, and quality of the experience, which are not sufficiently tackled in traditional non-integrated approaches. This study presents an integrated mobile platform built on top of this framework and reflects on the lessons learned.
APA, Harvard, Vancouver, ISO, and other styles
8

Nakayama, Angelica, Daniel Ruelas, Jesus Savage, and Ernesto Bribiesca. "Teleoperated Service Robot with an Immersive Mixed Reality Interface." Informatics and Automation 20, no. 6 (September 10, 2021): 1187–223. http://dx.doi.org/10.15622/ia.20.6.1.

Full text of the source
Abstract:
Teleoperated service robots can perform more complex and precise tasks as they combine robot skills and human expertise. Communication between the operator and the robot is essential for remote operation and strongly affects system efficiency. Immersive interfaces are being used to enhance the teleoperation experience. However, latency or time delay can impair the performance of the robot operation. Since remote visualization involves transmitting a large amount of video data, the challenge is to decrease communication instability. Thus, an efficient teleoperation system must have a suitable operation interface capable of visualizing the remote environment, controlling the robot, and having a fast response time. This work presents the development of a service robot teleoperation system with an immersive mixed reality operation interface where the operator can visualize the real remote environment or a virtual 3D environment representing it. The virtual environment aims to reduce communication latency by reducing the amount of information sent over the network and to improve the user experience. The robot can perform navigation and simple tasks autonomously or change to the teleoperated mode for more complex tasks. The system was developed using ROS, Unity 3D, and sockets so that it can be exported with ease to different platforms. The experiments suggest that having an immersive operation interface provides improved usability for the operator. The latency appears to improve when using the virtual environment. The user experience seems to benefit from the use of mixed reality techniques; this may lead to the broader use of teleoperated service robot systems.
APA, Harvard, Vancouver, ISO, and other styles
9

Mourtzis, Dimitris, John Angelopoulos, and Nikos Panopoulos. "Closed-Loop Robotic Arm Manipulation Based on Mixed Reality." Applied Sciences 12, no. 6 (March 14, 2022): 2972. http://dx.doi.org/10.3390/app12062972.

Full text of the source
Abstract:
Robotic manipulators have become part of manufacturing systems in recent decades. However, in the realm of Industry 4.0, a new type of manufacturing cell has been introduced—the so-called collaborative manufacturing cell. In such collaborative environments, communication between a human operator and robotic manipulators must be flawless, so that smooth collaboration, i.e., human safety, is ensured constantly. Therefore, engineers have focused on the development of suitable human–robot interfaces (HRI) in order to tackle this issue. This research work proposes a closed-loop framework for the human–robot interface based on the utilization of digital technologies, such as Mixed Reality (MR). Concretely, the framework can be realized as a methodology for the remote and safe manipulation of the robotic arm in near real-time, while, simultaneously, safety zones are displayed in the field of view of the shop-floor technician. The method is based on the creation of a Digital Twin of the robotic arm and the setup of a suitable communication framework for continuous and seamless communication between the user interface, the physical robot, and the Digital Twin. The development of the method is based on the utilization of a ROS (Robot Operating System) for the modelling of the Digital Twin, a Cloud database for data handling, and Mixed Reality (MR) for the Human–Machine Interface (HMI). The developed MR application is tested in a laboratory-based machine shop, incorporating collaborative cells.
APA, Harvard, Vancouver, ISO, and other styles
10

Pfeiffer, Thies, and Nadine Pfeiffer-Leßmann. "Virtual Prototyping of Mixed Reality Interfaces with Internet of Things (IoT) Connectivity." i-com 17, no. 2 (August 28, 2018): 179–86. http://dx.doi.org/10.1515/icom-2018-0025.

Full text of the source
Abstract:
One key aspect of the Internet of Things (IoT) is that human machine interfaces are disentangled from the physicality of the devices. This provides designers with more freedom, but may also lead to more abstract interfaces, as they lack the natural context created by the presence of the machine. Mixed Reality (MR), on the other hand, is a key technology that enables designers to create user interfaces anywhere, either linked to a physical context (augmented reality, AR) or embedded in a virtual context (virtual reality, VR). Especially today, designing MR interfaces is a challenge, as there is not yet a common design language, nor a set of standard functionalities or patterns. In addition to that, neither customers nor future users have substantial experience in using MR interfaces. Prototypes can contribute to overcoming this gap, as they continuously provide user experiences of increasing realism along the design process. We present ExProtoVAR, a tool that supports quick and lightweight prototyping of MR interfaces for IoT using VR technology.
APA, Harvard, Vancouver, ISO, and other styles
11

Yoo, Yong-Ho, and Wilhelm Bruns. "BI-DIRECTIONAL ENERGY INTERFACES FOR MIXED REALITY DESIGN – VIRTUAL EQUIVALENCE –." IFAC Proceedings Volumes 38, no. 1 (2005): 61–65. http://dx.doi.org/10.3182/20050703-6-cz-1902.01392.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
12

Liccardo, Annalisa, and Francesco Bonavolontà. "VR, AR, and 3-D User Interfaces for Measurement and Control." Future Internet 15, no. 1 (December 29, 2022): 18. http://dx.doi.org/10.3390/fi15010018.

Full text of the source
Abstract:
The topics of virtual, mixed, and extended reality have now become key areas in various fields of scientific and industrial applications, and the interest in them is made tangible by the numerous papers available in the scientific literature. In this regard, the Special Issue “VR, AR, and 3-D User Interfaces for Measurement and Control” received a fair number of varied contributions that analyzed different aspects of the implementation of virtual, mixed, and extended reality systems and approaches in the real world. They range from investigating the requirements of potential new technologies to verifying the effectiveness and benefits of their use, and from analyzing the difficulties of interaction with graphical interfaces to performing complex and risky tasks (such as surgical operations) using mixed reality viewers. All contributions were of a high standard and mainly highlight that measurement and control applications based on the new models of interaction with reality are by now increasingly ready to leave laboratory spaces and become objects and features of everyday life. The significant benefits of this technology will radically change the way we live and interact with information and the reality around us, and it will surely be worthy of further exploration, maybe even in a new Special Issue of Future Internet.
APA, Harvard, Vancouver, ISO, and other styles
13

Domingues, Christophe, Mouna Essabbah, Nader Cheaib, Samir Otmane, and Alain Dinis. "Human-Robot-Interfaces based on Mixed Reality for Underwater Robot Teleoperation." IFAC Proceedings Volumes 45, no. 27 (2012): 212–15. http://dx.doi.org/10.3182/20120919-3-it-2046.00036.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Frank, Jared A., and Vikram Kapila. "Mixed-reality learning environments: Integrating mobile interfaces with laboratory test-beds." Computers & Education 110 (July 2017): 88–104. http://dx.doi.org/10.1016/j.compedu.2017.02.009.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Dunston, Phillip S., and Xiangyu Wang. "Mixed Reality-Based Visualization Interfaces for Architecture, Engineering, and Construction Industry." Journal of Construction Engineering and Management 131, no. 12 (December 2005): 1301–9. http://dx.doi.org/10.1061/(asce)0733-9364(2005)131:12(1301).

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
16

Hendery, Rachel, and Andrew Burrell. "Playful interfaces to the archive and the embodied experience of data." Journal of Documentation 76, no. 2 (December 23, 2019): 484–501. http://dx.doi.org/10.1108/jd-05-2019-0078.

Full text of the source
Abstract:
Purpose: The purpose of this paper is to demonstrate the possibility for the galleries, libraries, archives and museums sector to employ playful, immersive discovery interfaces for their collections and raise awareness of some of the considerations that go into the decision to use such technology and the creation of the interfaces.
Design/methodology/approach: This is a case study approach using the methodology of research through design. The paper introduces two examples of immersive interfaces to archival data created by the authors, using these as a springboard for discussing the different kinds of embodied experiences that users have with different kinds of immersion, for example, the exploration of the archive on a flat screen, a data “cave” or arena, or virtual reality.
Findings: The role of such interfaces in communicating with the audience of an archive is considered, for example, in allowing users to detect structure in data, particularly in understanding the role of geographic or other spatial elements in a collection, and in shifting the locus of knowledge production from individual to community. It is argued that these different experiences draw on different metaphors in terms of users’ prior experience with more well-known technologies, for example, “a performance” vs “a tool” vs “a background to a conversation”.
Originality/value: The two example interfaces discussed here are original creations by the authors of this paper. They are the first uses of mixed reality for interfacing with the archives in question. One is the first mixed reality interface to an audio archive. The discussion has implications for the future of interfaces to galleries, archives, libraries and museums more generally.
APA, Harvard, Vancouver, ISO, and other styles
17

Zhang, Yiyi, and Tatsuo Nakajima. "Exploring the Design of a Mixed-Reality 3D Minimap to Enhance Pedestrian Satisfaction in Urban Exploratory Navigation." Future Internet 14, no. 11 (November 10, 2022): 325. http://dx.doi.org/10.3390/fi14110325.

Full text of the source
Abstract:
The development of ubiquitous computing technology and the emergence of XR could provide pedestrian navigation with more options for user interfaces and interactions. In this work, we aim to investigate the role of a mixed-reality map interface in urban exploration in enhancing pedestrians’ mental satisfaction. We propose a mixed-reality 3D minimap as a part of the navigation interface which pedestrians can refer to and interact with during urban exploration. To further explore the different levels of detail of the map interface, we conducted a user study (n = 28, two groups with two tasks). We designed two exploratory activities as experimental tasks with two map modes (a normal one and a simplified one) to discuss the detailed design of the minimap interface. The results indicated that participants showed a positive attitude toward our method. The simplified map mode could result in a lower perceived workload in both tasks while enhancing performance in specific navigation tasks, such as wayfinding. However, we also found that pedestrians’ preference for the level of detail of the minimap interface is dynamic in navigation. Thus, we suggest discussing the different levels of detail further in specific scenarios. Finally, we also summarize some findings observed during the user study to inspire the study of virtual map interfaces for future mixed-reality navigation in urban exploration across various scenarios.
APA, Harvard, Vancouver, ISO, and other styles
18

Sanna, Andrea, Federico Manuri, and Francesco De Pace. "Special Issue “Wearable Augmented and Mixed Reality Applications”." Information 10, no. 10 (September 20, 2019): 289. http://dx.doi.org/10.3390/info10100289.

Full text of the source
Abstract:
Until a few years ago, there was a lack of affordable devices that allowed researchers to design and implement augmented and mixed reality applications. Thanks to technology improvements, it is now possible to find several wearable devices on the market that greatly improve the synergy between the real and virtual world, thus allowing users to be part of an immersive experience in which they can easily interact with 3D assets and real objects at the same time. Thus, the number of augmented reality and mixed reality (AR/MR) applications has greatly increased, from educational or cultural heritage applications to industrial ones, such as maintenance-assembly-repair processes or product inspection and building monitoring. Indeed, several improvements have been made in order to enhance this technology, but some issues still need to be tackled, namely: limited field of view, tracking stability, the effectiveness of user interfaces for interacting with 3D content, and many others.
APA, Harvard, Vancouver, ISO, and other styles
19

Ohshima, Toshikazu, and Kenzo Kojima. "MitsuDomoe: Ecosystem Simulation of Virtual Creatures in Petri Dish Display." International Journal of Virtual Reality 17, no. 2 (January 1, 2017): 65–71. http://dx.doi.org/10.20870/ijvr.2017.17.2.2892.

Full text of the source
Abstract:
In this study, we propose the use of mixed reality (MR) for the purposes of biological education. Our objective is to create an interactive edutainment MR framework for users to learn about nature and human beings. MitsuDomoe, an interactive ecosystem simulator of virtual creatures in a petri dish, comprises three species of primitive artificial creatures. MitsuDomoe simulates the predation chain of the virtual creatures in the petri dish, and users can interact with this ecosystem via the petri dish interface. Users can also experience immersive observation by wearing an HMD. By combining the MR petri dish and immersive virtual reality (VR) interfaces, we synergistically improve user understanding of the experience.
APA, Harvard, Vancouver, ISO, and other styles
20

Prada, Rui. "Artificial Intelligence test agents for automated testing of Extended Reality (XR)." Open Access Government 37, no. 1 (January 6, 2023): 344–45. http://dx.doi.org/10.56367/oag-037-10543.

Full text of the source
Abstract:
Extended Reality (XR) is an umbrella term for advanced interactive systems such as Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and systems with advanced 3D User Interfaces. These systems have emerged in various domains, from entertainment and education to combat training and mission-critical applications. XR systems typically involve a representation of a virtual world, are highly interactive and tend to be more immersive than other technologies.
APA, Harvard, Vancouver, ISO, and other styles
21

Piskorz, Joanna, Marcin Czub, Katarzyna Urbańska, Małgorzata Mrula, Paweł Hodowaniec, and Mateusz Misiewicz. "How does interface influence the level of analgesia when Virtual Reality distraction is used?" Polish Journal of Applied Psychology 12, no. 1 (March 1, 2014): 45–57. http://dx.doi.org/10.1515/pjap-2015-0003.

Full text of the source
Abstract:
This study investigates the effectiveness of virtual reality (VR) technology in distracting attention from pain. We tested how body engagement related to navigating the virtual environment (VE) influences the intensity of pain. Two different interfaces were used to play the same VE, and a cold pressor test was used for pain stimulation. A mixed design was used for the experiment. Sixty-six undergraduate students participated. One group navigated the game using a rotation sensor, head tracker and foot pedals (Body Movement Interface). Another group navigated only using their hands (Hand Movement Interface). Objective and subjective measures of pain were collected: the amount of time participants kept their hand in a container with cold water, and the participants’ assessment of the pain intensity on a visual analog scale (VAS). Participants also filled in questionnaires designed to measure feelings of presence in the VE and emotional attitudes towards the game. We found no significant difference between the two interfaces in their analgesic efficacy. In both groups, participants showed significantly higher levels of pain endurance during VR distraction than without it.
APA, Harvard, Vancouver, ISO, and other styles
22

Schäfer, Alexander, Gerd Reis, and Didier Stricker. "AnyGesture: Arbitrary One-Handed Gestures for Augmented, Virtual, and Mixed Reality Applications." Applied Sciences 12, no. 4 (February 11, 2022): 1888. http://dx.doi.org/10.3390/app12041888.

Full text of the source
Abstract:
Natural user interfaces based on hand gestures are becoming increasingly popular. For a long time, the need for expensive hardware left the wide range of interaction possibilities that hand tracking enables largely unexplored. Recently, however, hand tracking has been built into inexpensive and widely available hardware, giving more and more people access to this technology. This work provides researchers and users with a simple yet effective way to implement various one-handed gestures to enable deeper exploration of gesture-based interactions and interfaces. To this end, this work provides a framework for the design, prototyping, testing, and implementation of one-handed gestures. The proposed framework was implemented with two main goals: first, it should be able to recognize any one-handed gesture; second, the design and implementation of gestures should be as simple as performing the gesture and pressing a button to record it. The contribution of this paper is a simple yet unique way to record and recognize static and dynamic one-handed gestures. A static gesture can be captured with a template matching approach, while dynamic gestures use previously captured spatial information. The presented approach was evaluated in a user study with 33 participants, and the implemented gestures achieved high accuracy and user acceptance.
APA, Harvard, Vancouver, ISO, and other styles
23

Jing, Allison, Kieran May, Brandon Matthews, Gun Lee, and Mark Billinghurst. "The Impact of Sharing Gaze Behaviours in Collaborative Mixed Reality." Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–27. http://dx.doi.org/10.1145/3555564.

Full text of the source
Abstract:
In a remote collaboration involving a physical task, visualising gaze behaviours may compensate for other unavailable communication channels. In this paper, we report on a 360° panoramic Mixed Reality (MR) remote collaboration system that shares gaze behaviour visualisations between a local user in Augmented Reality and a remote collaborator in Virtual Reality. We conducted two user studies to evaluate the design of MR gaze interfaces and the effect of gaze behaviour (on/off) and gaze style (bi-/uni-directional). The results indicate that gaze visualisations amplify meaningful joint attention and improve co-presence compared to a no-gaze condition. Gaze behaviour visualisations enable communication to be less verbally complex, therefore lowering collaborators' cognitive load while improving mutual understanding. Users felt that bi-directional behaviour visualisation, showing both collaborators' gaze states, was the preferred condition since it enabled easy identification of shared interests and task progress.
APA, Harvard, Vancouver, ISO, and other styles
24

Krupke, Dennis, Jianwei Zhang, and Frank Steinicke. "IMPAct: A Holistic Framework for Mixed Reality Robotic User Interface Classification and Design." Multimodal Technologies and Interaction 3, no. 2 (April 11, 2019): 25. http://dx.doi.org/10.3390/mti3020025.

Full text of the source
Abstract:
The number of scientific publications combining robotic user interfaces and mixed reality has increased considerably during the 21st century. Counting the number of yearly added publications containing the keywords “mixed reality” and “robot” listed on Google Scholar indicates exponential growth. The interdisciplinary nature of mixed reality robotic user interfaces (MRRUI) makes them very interesting and powerful, but also very challenging to design and analyze. Many individual aspects have already been successfully provided with theoretical structure, but to the best of our knowledge, there is no contribution combining everything into an MRRUI taxonomy. In this article, we present the results of an extensive investigation of relevant aspects from prominent classifications and taxonomies in the scientific literature. During a card-sorting experiment with professionals from the field of human–computer interaction, these aspects were clustered into named groups to provide a new structure. Further categorization of these groups into four different categories was obvious and revealed a memorable structure. Thus, this article provides a framework of objective, technical factors, which finds its application in the precise description of MRRUIs. An example shows the effective use of the proposed framework for precise system description, therefore contributing to a better understanding, design, and comparison of MRRUIs in this growing field of research.
APA, Harvard, Vancouver, ISO, and other styles
25

Kettle, Liam, and Yi-Ching Lee. "Augmented Reality for Vehicle-Driver Communication: A Systematic Review." Safety 8, no. 4 (December 13, 2022): 84. http://dx.doi.org/10.3390/safety8040084.

Full text of the source
Abstract:
Capabilities for automated driving system (ADS)-equipped vehicles have been expanding over the past decade. Research has explored integrating augmented reality (AR) interfaces in ADS-equipped vehicles to improve drivers’ situational awareness, performance, and trust. This paper systematically reviewed AR visualizations for in-vehicle vehicle-driver communication from 2012 to 2022. The review first identified meta-data and methodological trends before aggregating findings from distinct AR interfaces and corresponding subjective and objective measures. Prominent subjective measures included acceptance, trust, and user experience; objective measures comprised various driving behavior or eye-tracking metrics. Research more often evaluated simulated AR interfaces, presented through windshields, that communicated object detection or intended maneuvers in level 2 ADS. For object detection, key visualizations included bounding shapes, highlighting, or symbols. For intended routes, mixed results were found for world-fixed versus screen-fixed arrows. Regardless of the AR design, communicating the ADS’ actions or environmental elements was beneficial to drivers, though presenting clear, relevant information was more favorable. Gaps in the literature that have yet to be addressed include longitudinal effects, impaired visibility, contextual user needs, system reliability, and, most notably, inclusive design. Regardless, the review supports that integrating AR interfaces in ADS-equipped vehicles can lead to higher trust, acceptance, and safer driving performance.
APA, Harvard, Vancouver, ISO, and other styles
26

Kim, Yong Min, and Ilsun Rhiu. "A comparative study of navigation interfaces in virtual reality environments: A mixed-method approach." Applied Ergonomics 96 (October 2021): 103482. http://dx.doi.org/10.1016/j.apergo.2021.103482.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
27

Sherstyuk, Andrei, Anton Treskunov, and Benjamin Berg. "SEMI-AUTOMATIC SURFACE SCANNER FOR MEDICAL TANGIBLE USER INTERFACES." International Journal of Image and Graphics 10, no. 02 (April 2010): 219–33. http://dx.doi.org/10.1142/s0219467810003743.

Full text of the source
Abstract:
Mixing real and virtual components into one consistent environment often involves creating geometry models of physical objects. Traditional approaches include manual modeling by 3D artists or the use of dedicated devices. Both approaches require special skills or special hardware and may be costly. We propose a new method for fast semi-automatic 3D geometry acquisition, based upon unconventional use of motion tracking equipment. The proposed method is intended for quick surface prototyping for Virtual, Augmented and Mixed reality applications (VR/AR/MR), where quality of visualization of scanned objects is not required or is of low priority.
APA, Harvard, Vancouver, ISO, and other styles
28

Riedlinger, Urs, Florian Klein, Marcos Hill, Christian Lambracht, Sonja Nieborowski, Ralph Holst, Sascha Bahlau, and Leif Oppermann. "Evaluation of Mixed Reality Support for Bridge Inspectors Using BIM Data." i-com 21, no. 2 (July 19, 2022): 253–67. http://dx.doi.org/10.1515/icom-2022-0019.

Full text of the source
Abstract:
Bridge inspectors work for the safety of our infrastructure and mobility. At regular intervals, they conduct structural inspections – a manual task with a long-lasting and firmly normed analogue tradition. In our collaborative research project, we developed Mixed Reality (MR) and Virtual Reality (VR) prototypes to support that work digitally. We propose a mixed analogue and digital workflow using Building Information Modeling (BIM) data that can be ready-to-hand for bridge inspectors during their work on-site at a bridge. In this paper, we describe the system and the evaluation results of our final MR demonstrator at an autobahn bridge in Germany. We identified a need for a digital MR tool to support bridge inspection in-situ. In general, this matches the trend of bringing computer-supported office work out into the real world. However, there are also challenges to consider, like lacking BIM data for existing bridges and structures, appropriate user interfaces in this new application domain, or the need to adopt norms and guidelines for public tender. We argue for a user-centered design approach for future developments to best profit from the bridge inspectors’ as well as the MR and CSCW researchers’ expertise, and ultimately increase the acceptance of the developed information systems.
APA, Harvard, Vancouver, ISO, and other styles
29

Rydvanskiy, Ruslan, and Nick Hedley. "Mixed Reality Flood Visualizations: Reflections on Development and Usability of Current Systems." ISPRS International Journal of Geo-Information 10, no. 2 (February 18, 2021): 82. http://dx.doi.org/10.3390/ijgi10020082.

Full text of the source
Abstract:
Interest in and use of 3D visualizations for analysis and communication of flooding risks has been increasing. At the same time, an ecosystem of 3D user interfaces has also been emerging. Together, they offer exciting potential opportunities for flood visualization. In order to understand how we turn potential into real value, we need to develop better understandings of technical workflows, capabilities of the resulting systems, their usability, and implications for practice. Starting with existing geospatial datasets, we develop single-user and collaborative visualization prototypes that leverage the capabilities of the state-of-the-art HoloLens 2 mixed reality system. By using the 3D displays, positional tracking, spatial mapping, and hand- and eye-tracking, we seek to unpack the capabilities of these tools for meaningful spatial data practice. We reflect on the user experience, hardware performance, and usability of these tools and discuss the implications of these technologies for flood risk management and broader spatial planning practice.
APA, Harvard, Vancouver, ISO, and other styles
30

Lindemann, Patrick, Tae-Young Lee, and Gerhard Rigoll. "Catch My Drift: Elevating Situation Awareness for Highly Automated Driving with an Explanatory Windshield Display User Interface." Multimodal Technologies and Interaction 2, no. 4 (October 11, 2018): 71. http://dx.doi.org/10.3390/mti2040071.

Full text of the source
Abstract:
Broad access to automated cars (ACs) that can reliably and unconditionally drive in all environments is still some years away. Urban areas pose a particular challenge to ACs, since even perfectly reliable systems may be forced to execute sudden reactive driving maneuvers in hard-to-predict hazardous situations. This may negatively surprise the driver, possibly causing discomfort, anxiety or loss of trust, which might be a risk for the acceptance of the technology in general. To counter this, we suggest an explanatory windshield display interface with augmented reality (AR) elements to support driver situation awareness (SA). It provides the driver with information about the car’s perceptive capabilities and driving decisions. We created a prototype in a human-centered approach and implemented the interface in a mixed-reality driving simulation. We conducted a user study to assess its influence on driver SA. We collected objective SA scores and self-ratings, both of which yielded a significant improvement with our interface in good (medium effect) and in bad (large effect) visibility conditions. We conclude that explanatory AR interfaces could be a viable measure against unwarranted driver discomfort and loss of trust in critical urban situations by elevating SA.
APA, Harvard, Vancouver, ISO, and other styles
31

Cruz Ulloa, Christyan, David Domínguez, Jaime Del Cerro, and Antonio Barrientos. "A Mixed-Reality Tele-Operation Method for High-Level Control of a Legged-Manipulator Robot." Sensors 22, no. 21 (October 24, 2022): 8146. http://dx.doi.org/10.3390/s22218146.

Full text of the source
Abstract:
In recent years, legged (quadruped) robots have been the subject of technological study and continuous development. These robots have a leading role in applications that require high mobility skills in complex terrain, as is the case in Search and Rescue (SAR). These robots stand out for their ability to adapt to different terrains, overcome obstacles and move within unstructured environments. Most of the implementations recently developed are focused on data collection with sensors, such as lidar or cameras. This work seeks to integrate a 6DoF arm manipulator into the quadruped robot ARTU-R (A1 Rescue Tasks UPM Robot) by Unitree to perform manipulation tasks in SAR environments. The main contribution of this work is focused on the high-level control of the robotic set (legged robot + manipulator) using Mixed Reality (MR). An optimization phase of the robotic set's workspace was previously developed in Matlab for the implementation, as well as a simulation phase in Gazebo to verify the dynamic functionality of the set in reconstructed environments. The first and second generations of HoloLens glasses have been used and contrasted with a conventional interface to develop the MR control part of the proposed method. Manipulations of first-aid equipment have been carried out to evaluate the proposed method. The main results show that the proposed method allows better control of the robotic set than conventional interfaces, improving operator efficiency in performing robotic handling tasks and increasing confidence in decision-making. On the other hand, HoloLens 2 showed a better user experience concerning graphics and latency time.
APA, Harvard, Vancouver, ISO, and other styles
32

Steinicke, Frank, and Katrin Wolf. "New Digital Realities – Blending our Reality with Virtuality." i-com 19, no. 2 (August 26, 2020): 61–65. http://dx.doi.org/10.1515/icom-2020-0014.

Full text of the source
Abstract:
New digital reality, a spectrum of technologies and experiences that digitally simulate and extend reality in one way or another across different human senses, has received considerable attention in recent years. In particular, we have witnessed great advances in mixed reality (MR) technologies, such as Virtual Reality (VR) and Augmented Reality (AR) technology, which provide enormous potential for application domains like training, simulation, education, entertainment, health, and sports. However, other forms of digitally enhanced reality (XR) also support novel forms of immersion and experiences while generating, visualizing and interacting with digital content either displayed in fully-immersive virtual environments or superimposed onto our view of the real world, and will significantly change the way we work, travel, play, and communicate. Consequently, we face dramatic changes in interactive media creation, access, and perception. In this special issue, we solicit work that addresses novel interaction design, interfaces, and implementation of new digital reality in which our reality is blended with virtuality, with a focus on users’ needs, joy, and visions.
APA, Harvard, Vancouver, ISO, and other styles
33

Pietroni, Eva, Alfonsina Pagano, Luigi Biocca, and Giacomo Frassineti. "Accessibility, Natural User Interfaces and Interactions in Museums: The IntARSI Project." Heritage 4, no. 2 (April 4, 2021): 567–84. http://dx.doi.org/10.3390/heritage4020034.

Full text of the source
Abstract:
In a museum context, people have specific needs in terms of physical, cognitive, and social accessibility that cannot be ignored. Therefore, we need to find a way to make art and culture accessible to them through the aid of Universal Design principles, advanced technologies, and suitable interfaces and contents. Integration of such factors is a priority of the Museums General Direction of the Italian Ministry of Cultural Heritage, within the wider strategy of museum exploitation. In accordance with this issue, the IntARSI project, publicly funded, consists of a pre-evaluation and a report of technical specifications for a new concept of museology applied to the new Museum of Civilization in Rome (MuCIV). It relates to planning of multimedia, virtual, and mixed reality applications based on the concept of “augmented” and multisensory experience, innovative tangible user interfaces, and storytelling techniques. An inclusive approach is applied, taking into account the needs and attitudes of a wide audience with different ages, cultural interests, skills, and expectations, as well as cognitive and physical abilities.
APA, Harvard, Vancouver, ISO, and other styles
34

Ehrlich, Nea. "The Animated Document: Animation’s Dual Indexicality in Mixed Realities." Animation 15, no. 3 (November 2020): 260–75. http://dx.doi.org/10.1177/1746847720974971.

Full text of the source
Abstract:
Animation has become ubiquitous within digital visual culture and fundamental to knowledge production. As such, its status as potentially reliable imagery should be clarified. This article examines how animation’s indexicality (both as trace and deixis) changes in mixed realities where the physical and the virtual converge, and how this contributes to the research of animation as documentary and/or non-fiction imagery. In digital culture, animation is used widely to depict both physical and virtual events and actions. As a result, animation is no longer an interpretive visual language. Instead, animation in virtual culture acts as real-time visualization of computer-mediated actions, their capture and documentation. Now that animation includes both captured and generated imagery, not only do its definitions change, but its link to the realities depicted and the documentary value of animated representations require rethinking. This article begins with definitions of animation and their relation to the perception of animation’s validity as documentary imagery; thereafter it examines indexicality and the strength of indexical visualizations, introducing a continuum of strong and weak indices to theorize the hybrid and complex forms of indexicality in animation, ranging from graphical user interfaces (GUI) to data visualization. The article concludes by examining four indexical connections in relation to physical and virtual reality, offering a theoretical framework with which to conceptualize animation’s indexing abilities in today’s mixed realities.
APA, Harvard, Vancouver, ISO, and other styles
35

Atzenbeck, Claus. "Interview with Beat Signer." ACM SIGWEB Newsletter, Winter (January 2021): 1–5. http://dx.doi.org/10.1145/3447879.3447881.

Full text of the source
Abstract:
Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) and co-director of the Web & Information Systems Engineering (WISE) research lab. He received a PhD in Computer Science from ETH Zurich where he has also been leading the Interactive Paper lab as a senior researcher for four years. He is an internationally distinguished expert in cross-media technologies and interactive paper solutions. His further research interests include human-information interaction, document engineering, data physicalisation, mixed reality as well as multimodal interaction. He has published more than 100 papers on these topics at international conferences and journals, and received multiple best paper awards. Beat has 20 years of experience in research on cross-media information management and multimodal user interfaces. As part of his PhD research, he investigated the use of paper as an interactive user interface and developed the resource-selector-link (RSL) hypermedia metamodel. With the interactive paper platform (iPaper), he strongly contributed to the interdisciplinary European Paper++ and PaperWorks research projects and the seminal research on paper-digital user interfaces led to innovative cross-media publishing solutions and novel forms of paper-based human-computer interaction. The RSL hypermedia metamodel is nowadays widely applied in his research lab and has, for example, been used for cross-media personal information management, an extensible cross-document link service, the MindXpres presentation platform as well as in a framework for cross-device and Internet of Things applications. For more details, please visit https://beatsigner.com.
APA, Harvard, Vancouver, ISO, and other styles
36

Zimmerer, Chris, Martin Fischbach, and Marc Latoschik. "Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks." Multimodal Technologies and Interaction 2, no. 4 (December 6, 2018): 81. http://dx.doi.org/10.3390/mti2040081.

Full text of the source
Abstract:
Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial for implementing semantic fusion. They are compliant with rapid development cycles that are common for the development of user interfaces, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as the support of chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution for fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills this gap among previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation was and is used in various student projects, theses, as well as master-level courses. It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
APA, Harvard, Vancouver, ISO, and other styles
37

Perelman, Gary, Emmanuel Dubois, Alice Probst, and Marcos Serrano. "Visual transitions around tabletops in mixed reality: study on a visual acquisition task between vertical virtual displays and horizontal tabletops." Proceedings of the ACM on Human-Computer Interaction 6, ISS (November 14, 2022): 660–79. http://dx.doi.org/10.1145/3567738.

Full text of the source
Abstract:
See-through Head-Mounted Displays (HMDs) offer interesting opportunities to augment the interaction space around screens, especially around horizontal tabletops. In such contexts, HMDs can display surrounding vertical virtual windows to complement the tabletop content with data displayed in close vicinity. However, the effects of such a combination on the visual acquisition of targets in the resulting combined display space have scarcely been explored. In this paper we conduct a study to explore visual acquisitions in such contexts, with a specific focus on the analysis of visual transitions between the horizontal tabletop display and the vertical virtual displays (in front and on the side of the tabletop). To further study the possible visual perception of the tabletop content out of the HMD and its impact on visual interaction, we distinguished two solutions for displaying information on the horizontal tabletop: using the see-through HMD to display virtual content over the tabletop surface (virtual overlay), i.e. the content is only visible inside the HMD’s FoV, or using the tabletop itself (tabletop screen). Twelve participants performed visual acquisition tasks involving the horizontal and vertical displays. We measured the time to perform the task, the head movements, the portions of the displays visible in the HMD’s field of view, the physical fatigue and the users’ preferences. Our results show that it is faster to acquire virtual targets in the front display than on the side. Results reveal that the use of the virtual overlay on the tabletop slows down the visual acquisition compared to the use of the tabletop screen, showing that users exploit the visual perception of the tabletop content in the peripheral visual space. We were also able to quantify when and to what extent targets on the tabletop can be acquired without being visible within the HMD's field of view when using the tabletop screen, i.e. by looking under the HMD. These results lead to design recommendations for more efficient, comfortable and integrated interfaces combining tabletop and surrounding vertical virtual displays.
APA, Harvard, Vancouver, ISO, and other styles
38

Noor, Ahmed K. "AI and the Future of the Machine Design." Mechanical Engineering 139, no. 10 (October 1, 2017): 38–43. http://dx.doi.org/10.1115/1.2017-oct-2.

Full text of the source
Abstract:
This article discusses the advantages of artificially intelligent (AI) systems and the future of machine design. Advances in AI, combined synergistically with other technologies such as cognitive computing, the Internet of Things, 3D (or even 4D) printing, advanced robotics, virtual and mixed reality, and human–machine interfaces, are transforming what, where, and how products are designed, manufactured, assembled, distributed, serviced, and upgraded. The research and related activities may ultimately result in the development of self-repairing, self-healing, self-adaptive, self-reconfiguring systems—and products that ‘operationally improve’ themselves. Instead of depreciating in value and capability, such products could improve over time. In time, the role of the human engineer may be that of a director rather than of a producer. Much of the technical aspect of engineering will be moved to the machine-based design system, just as one need not be able to operate a slide rule or complete an isometric drawing to be a successful engineer today.
APA, Harvard, Vancouver, ISO, and other styles
39

Kolo, Castulus, and Florian Haumer. "Technological advances and the future of corporate and marketing communication: an international foresight study among experts from different professional backgrounds." Journal of Creative Industries and Cultural Studies 6 (2020): 18–35. http://dx.doi.org/10.56140/jocis-v6-1.

Full text of the source
Abstract:
This study strives to shed light on the potential impact of technological advances on specific aspects of corporate and marketing communication. It is based on an international survey among experts from different professional backgrounds (n=470) and a follow-up group discussion of the survey results with selected media industry experts to identify required actions by companies to cope with various aspects of change. Our foresight approach, with a time horizon until about 2030, suggests that artificial intelligence (AI), virtual, mixed, or augmented reality, new human machine interfaces in general, as well as the internet of things (IoT) and blockchain, will have the most substantial effects, along with the increasing complexity and need for integration of various communication activities. Furthermore, content creation will become automated to a considerable extent in this decade and thus will require process innovation to get it organized. As a result, companies should build internal expertise on AI at decision-making level to avoid dependencies on external consultancy and set up proactive partnering with suppliers of the other key technologies.
APA, Harvard, Vancouver, ISO, and other styles
40

Ormándi, Tamás, Balázs Varga, and Tamás Tettamanti. "Estimating Vehicle Suspension Characteristics for Digital Twin Creation with Genetic Algorithm." Periodica Polytechnica Transportation Engineering 49, no. 3 (September 1, 2021): 231–41. http://dx.doi.org/10.3311/pptr.18576.

Full text of the source
Abstract:
The use of simulation techniques like Vehicle-in-the-Loop, Scenario-in-the-Loop, and other mixed-reality systems is becoming inevitable in autonomous vehicle development, particularly in testing and validation. These methods rely on using digital twins: realistic representations of real vehicles and traffic in a carefully rebuilt virtual world. Recreating them precisely in a virtual ecosystem requires many parameters of real vehicles to follow their properties in a simulation. This is especially true for vehicle dynamics, where these parameters have a high impact on the simulation results. The paper's objective is to provide a method that can help reverse-engineer a real car's suspension characteristics with the help of a genetic algorithm. A detailed description of the method is presented, guiding the reader through the whole process, including the meta-heuristic function's settings and how it interfaces with IPG CarMaker. The paper also presents multiple measurements, which can be effortlessly recreated without expensive devices or the need to disassemble any vehicle parts. Measurements are reproduced in two separate simulation tools with special scenarios, providing an efficient way to analyze and verify the results. The provided method creates vehicle suspension characteristics of adequate quality, opening up the possibility of using them to create digital twins or virtual traffic with realistic vehicle dynamics for high-quality visualization. Results show satisfying accuracy when tested with OpenCRG.
APA, Harvard, Vancouver, ISO, and other styles
41

Kontopanagou, Katerina, Athanasios Tsipis, and Vasileios Komianos. "A Framework for Exploring Churches/Monuments/Museums of Byzantine Cultural Influence Exploiting Immersive Technologies in Real-Time Networked Environments." Technologies 9, no. 3 (August 9, 2021): 57. http://dx.doi.org/10.3390/technologies9030057.

Full text of the source
Abstract:
The unique art that was developed in Byzantine times is widely accepted as a precursor to the Renaissance and is still evident in monuments either from or influenced by Byzantium. For both tourists and scholars, visiting such a site is a unique experience due to the lavishly painted interior. Taking advantage of emerging 5G technologies, cloud/fog computing, and Augmented/Mixed Reality mechanisms, a common smart device (e.g., smartphone, tablet) could be employed to give a better experience to end-users. The proposed framework is intended to provide visitors with interpretative information regarding the visited monuments and their paintings. Under the framework introduced in this paper, camera input is uploaded to a cloud/fog computing infrastructure where appropriate algorithms and services provide monument and painting recognition, and the mobile application retrieves and projects the related information. In addition, the designed immersive user interfaces assist visitors in contributing in cases of monuments and paintings for which no information is available. This paper presents the state of the art in approaches for immersive experiences in Digital Culture, reviews current image recognition approaches and their suitability for Byzantine paintings, proposes interaction techniques appropriately designed for the observation of and interaction with paintings, and discusses the overall framework architecture needed to support the described functionality, while stressing the challenging issues of the above aspects, thus paving the road for future work and providing guidelines for a test-bed implementation.
APA, Harvard, Vancouver, ISO, and other styles
42

Varga, Balázs, Mátyás Szalai, Árpád Fehér, Szilárd Aradi, and Tamás Tettamanti. "Mixed-reality Automotive Testing with SENSORIS." Periodica Polytechnica Transportation Engineering 48, no. 4 (August 3, 2020): 357–62. http://dx.doi.org/10.3311/pptr.15851.

Full text of the source
Abstract:
Highly automated and autonomous vehicles are becoming more and more widespread, changing the classical way of testing and validation. Traditionally, the automotive industry has pursued testing either in real-world or in purely virtual simulation environments. As a new possibility, mixed-reality testing has also appeared, enabling an efficient combination of real and simulated elements of testing. Furthermore, vehicles from different OEMs will have a common interface to communicate with a test system. The paper presents a mixed-reality test framework for visualizing perception sensor feeds in real time in the Unity 3D game engine. Thereby, the digital twin of the tested vehicle and its environment are realized in the simulation. The communication between the sensors of the tested vehicle and the central computer running the test is realized via the standard SENSORIS interface. The paper outlines the hardware and software requirements for such a system in detail. To show the viability of the system, a vehicle-in-the-loop test has been carried out.
APA, Harvard, Vancouver, ISO, and other styles
43

Kang, Hyun. "A Mixed Reality Based Interface for Planing Layouts." Journal of the HCI Society of Korea 2, no. 2 (November 30, 2007): 45. http://dx.doi.org/10.17210/jhsk.2007.11.2.2.45.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Khatib, Maram, Khaled Al Khudir, and Alessandro De Luca. "Human-robot contactless collaboration with mixed reality interface." Robotics and Computer-Integrated Manufacturing 67 (February 2021): 102030. http://dx.doi.org/10.1016/j.rcim.2020.102030.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Egliston, Ben, and Marcus Carter. "‘The interface of the future’: Mixed reality, intimate data and imagined temporalities." Big Data & Society 9, no. 1 (January 2022): 205395172110636. http://dx.doi.org/10.1177/20539517211063689.

Full text of the source
Abstract:
This article examines discourses about mixed reality as a data-rich sensing technology, engaging specifically with discourses of time as framed by developers, engineers, and corporate PR and marketing in a range of public-facing materials. We focus on four main settings in which mixed reality is imagined to be used, and in which time was a dominant discursive theme: (1) the development of mixed reality by big tech companies, (2) the use of mixed reality for defence, (3) mixed reality as a technology for control of populations in civil society, and (4) mixed reality as a technology used in workplace settings. Across these settings, the broad narrative is that mixed reality technologies afford overwhelmingly positive benefits, such as efficiency and security, through their capture, relay, and rendition of data (about the environment, about the body, etc.), affording a form of anticipatory power to the user. This framing of temporality, we argue, is underlain by social and political values that represent certain interests but leave others out of the imagination of mixed reality's technological advance.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Wilbrink, Marc, Merle Lau, Johannes Illgner, Anna Schieben, and Michael Oehl. "Impact of External Human–Machine Interface Communication Strategies of Automated Vehicles on Pedestrians’ Crossing Decisions and Behaviors in an Urban Environment." Sustainability 13, no. 15 (July 27, 2021): 8396. http://dx.doi.org/10.3390/su13158396.

Full text of the source
Abstract:
The development of automated vehicles (AVs) and their integration into traffic are seen by many vehicle manufacturers and stakeholders, such as cities or transportation companies, as a revolution in mobility. In future urban traffic, AVs will most likely operate not in separated traffic spaces but in so-called mixed traffic environments, where different types of traffic participants interact. Therefore, AVs must be able to communicate with other traffic participants, e.g., pedestrians as vulnerable road users (VRUs), to resolve ambiguous traffic situations. To achieve well-working communication, and thereby safe interaction, between AVs and other traffic participants, the latest research discusses external human–machine interfaces (eHMIs) as promising communication tools. This study therefore examines the potential positive and negative effects of AVs equipped with static (displaying only the current vehicle automation status (VAS)) and dynamic (communicating an AV's perception and intention) eHMIs on the interaction with pedestrians, taking both subjective and objective measures into account. In a Virtual Reality (VR) simulator study, 62 participants were instructed to cross a street while interacting with non-automated vehicles (without eHMI) and automated vehicles (equipped with a static or a dynamic eHMI). The results reveal that a static eHMI had no effect on pedestrians' crossing decisions and behaviors compared to a non-automated vehicle without any eHMI. However, participants benefited from the additional information of a dynamic eHMI, deciding earlier to cross the street and reporting higher certainty in their decisions when interacting with an AV with a dynamic eHMI than with an AV with a static eHMI or a non-automated vehicle. Implications for a holistic evaluation of eHMIs as AV communication tools and their safe introduction into traffic are discussed on the basis of the results.
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Prieto, Pablo A., Francisco D. Soto, Marcos D. Zúñiga, Sheng-Feng Qin, and David K. Wright. "Three-dimensional immersive mixed-reality interface for structural design." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 226, no. 5 (January 25, 2012): 955–58. http://dx.doi.org/10.1177/0954405411432392.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Bruno, Fabio, Agostino Angilica, Francesco Cosco, and Maurizio Muzzupappa. "Reliable behaviour simulation of product interface in mixed reality." Engineering with Computers 29, no. 3 (October 25, 2012): 375–87. http://dx.doi.org/10.1007/s00366-012-0293-7.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Komenda, Tomas, and Franz Schauer. "REMLABNET – User Experience and Mixed Reality Continuum." International Journal of Online Engineering (iJOE) 14, no. 02 (February 28, 2018): 38. http://dx.doi.org/10.3991/ijoe.v14i02.7651.

Full text of the source
Abstract:
Our recent research on remote laboratory management systems (REMLABNET - www.remlabnet.eu) deals with questions such as how to strengthen the user experience and how to help users understand the complex phenomena behind remote experiments and the laws of physics governing them. At the current stage of technological development, we have both sufficiently powerful hardware and software for creating an impressive virtual user interface that could serve this mission. An extended mixed reality taxonomy for remote physical experiments is proposed to identify the goals of future REMLABNET research and development. The first part of this paper describes the classes of the taxonomy and the reasons why they were set up in this way. The second part presents the chosen research method and our current progress.
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Kováč, Juraj, František Ďurovský, and Jozef Varga. "Integrated System of Mixed Virtual Reality Based on Data Glove CyberGlove II and Robotic Arm MechaTE Robot." Applied Mechanics and Materials 611 (August 2014): 239–44. http://dx.doi.org/10.4028/www.scientific.net/amm.611.239.

Full text of the source
Abstract:
The paper describes the development of a low-cost CyberGlove II - MechaTE robotic hand interface intended for future use in virtual and mixed reality robot programming. The main goal is to explore the possibilities of controlling mechanical hands by means of data gloves and to gain programming experience with their interconnection to virtual reality modeling software. The first part of the paper describes recent progress in using virtual reality for intuitive robot programming; the second part gives an overview of recent developments in mechanical hand construction, as well as currently available data gloves. The last part provides details about the CyberGlove - MechaTE interface and its potential for methods of intuitive robot programming in virtual or mixed reality environments.
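To illustrate the kind of glove-to-hand mapping such an interface needs, here is a minimal sketch that scales normalized finger-flexion readings to servo target angles for a robotic hand. The joint ranges and the shape of the glove data are assumptions for illustration; the actual CyberGlove II SDK and the MechaTE control protocol are not shown.

```python
# A minimal sketch of a glove-to-hand mapping: normalized finger-flexion
# readings from a data glove are scaled to per-finger servo target angles.
# The joint ranges below are assumed placeholders, not MechaTE specifications.
from typing import Dict

JOINT_RANGE_DEG = {"thumb": 70.0, "index": 90.0, "middle": 90.0,
                   "ring": 90.0, "pinky": 85.0}  # assumed per-finger limits

def glove_to_servo(flexion: Dict[str, float]) -> Dict[str, float]:
    """Map normalized flexion values in [0, 1] to servo target angles in degrees."""
    return {
        finger: max(0.0, min(1.0, value)) * JOINT_RANGE_DEG[finger]
        for finger, value in flexion.items()
    }

# Example: a half-closed fist as it might be read from the glove driver.
print(glove_to_servo({"thumb": 0.5, "index": 0.5, "middle": 0.5,
                      "ring": 0.5, "pinky": 0.5}))
```

Clamping the input before scaling is the important design choice here: raw glove calibration drift should never be able to command a servo past its mechanical limit.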
Styles: APA, Harvard, Vancouver, ISO, etc.