
Journal articles on the topic 'Virtual multisensor'

Consult the top 50 journal articles for your research on the topic 'Virtual multisensor.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Emura, Satoru, and Susumu Tachi. "Multisensor Integrated Prediction for Virtual Reality." Presence: Teleoperators and Virtual Environments 7, no. 4 (August 1998): 410–22. http://dx.doi.org/10.1162/105474698565811.

Abstract:
Unconstrained measurement of human head motion is essential for HMDs (head-mounted displays) to be truly interactive. Polhemus sensors developed for this purpose suffer from critical latency and low sampling rates, and a delay for rendering virtual scenes is inevitable on top of that. This paper proposes methods that compensate for the latency and raise the effective sampling rate by integrating Polhemus and gyro sensors. Adopting a quaternion representation avoids the singularities and complicated boundary handling of rotational motion. The performance of the proposed methods under various rendering delays was evaluated in terms of RMS error and a new correlational technique, which makes it possible to check the latency and fidelity of a magnetic tracker and to assess the environment in which the magnetic tracker is used. A real-time implementation of the simpler method on personal computers is also reported in detail.
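For illustration, here is a minimal sketch (not the authors' actual filter) of the gyro-based prediction idea described above: propagating a head-orientation quaternion forward by a rendering delay using the latest angular rate. The (w, x, y, z) ordering and the helper names are assumptions.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def predict_orientation(q, omega, dt):
    """Rotate q by a constant angular rate omega (rad/s) over dt seconds,
    avoiding the singularities of Euler-angle extrapolation."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))
    q_pred = quat_mult(q, dq)
    return q_pred / np.linalg.norm(q_pred)

# Compensate a hypothetical 50 ms rendering delay with the latest gyro reading.
q_tracker = np.array([1.0, 0.0, 0.0, 0.0])  # last (slow) magnetic-tracker pose
gyro_rate = np.array([0.0, 0.5, 0.0])       # rad/s from the (fast) gyro
print(predict_orientation(q_tracker, gyro_rate, 0.050))
```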
2

Wenhao, Dong. "Multisensor Information Fusion-Assisted Intelligent Art Design under Wireless Virtual Reality Environment." Journal of Sensors 2021 (December 31, 2021): 1–10. http://dx.doi.org/10.1155/2021/6119127.

Abstract:
Against the background of intelligent technologies, art designers need to use information technology to assist art design and fully realize the integration of art design and information technology. Multisensor information fusion technology enables a more intuitive, visual, and comprehensive grasp of the design objectives, maximizes the positive effects of art design and achieves its overall optimization, and can also help art designers move beyond traditional, monolithic, and obsolete design concepts. Starting from multisensor information fusion technology in a wireless virtual reality environment and the principles of signal acquisition and preprocessing, feature extraction, and fusion calculation, we analyze the information processing pipeline of multisensor information fusion, carry out model construction and performance evaluation for intelligent art design, and propose an intelligent art design model based on multisensor information fusion technology. We then discuss the realization of the multisensor information fusion algorithm in intelligent art design and finally conduct a simulation experiment, taking the environment design of a parent-child restaurant as an example, and analyze its results. The results show that multisensor information fusion outperforms a single sensor in the environmental design of a parent-child restaurant and that force sensors yield a better environmental design effect than vibration sensors. Multisensor information fusion technology can automatically analyze observations from several sources obtained in time sequence under certain criteria and comprehensively process the information to complete the decision-making and estimation tasks required for intelligent art design.
3

Xie, Jiahao, Daozhi Wei, Shucai Huang, and Xiangwei Bu. "A Sensor Deployment Approach Using Improved Virtual Force Algorithm Based on Area Intensity for Multisensor Networks." Mathematical Problems in Engineering 2019 (February 27, 2019): 1–9. http://dx.doi.org/10.1155/2019/8015309.

Abstract:
Sensor deployment is one of the major concerns in multisensor networks. This paper proposes a sensor deployment approach using an improved virtual force algorithm based on area intensity, aiming at optimal multisensor deployment and better coverage. Building on a real-time sensor detection model, the algorithm uses the sensor area intensity to select the optimal deployment distance. To verify that the algorithm improves coverage quality, the standard virtual force algorithm (VFA) and a particle swarm optimization algorithm (PSOA) are selected for comparative analysis. The simulation results show that the algorithm achieves global coverage optimization and improves on the performance of the virtual force algorithm. It avoids the unstable coverage caused by heavy computation, slow convergence, and entrapment in local optima, providing a new idea for multisensor deployment.
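For readers unfamiliar with the virtual force idea, the sketch below shows the basic pairwise attraction/repulsion update that VFA-style deployment methods iterate. The threshold distance, weights, and step size are invented parameters; the paper's area-intensity refinement and boundary/obstacle forces are not modeled.

```python
import numpy as np

def virtual_force_step(pos, d_th, w_a=1.0, w_r=1.0, step=0.1):
    """One VFA iteration: nodes farther apart than the threshold d_th
    attract each other, closer nodes repel, pushing the deployment
    toward uniform coverage."""
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            d_vec = pos[j] - pos[i]
            d = np.linalg.norm(d_vec)
            if d < 1e-9:
                continue
            u = d_vec / d
            if d > d_th:
                forces[i] += w_a * (d - d_th) * u   # attraction
            else:
                forces[i] -= w_r * (d_th - d) * u   # repulsion
    return pos + step * forces

nodes = np.random.rand(20, 2) * 100.0  # 20 sensors in a 100 m x 100 m field
for _ in range(200):
    nodes = virtual_force_step(nodes, d_th=25.0)
```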
4

Di, Peng, Xuan Wang, Tong Chen, and Bin Hu. "Multisensor Data Fusion in Testability Evaluation of Equipment." Mathematical Problems in Engineering 2020 (November 30, 2020): 1–16. http://dx.doi.org/10.1155/2020/7821070.

Abstract:
The multisensor data fusion method has been extensively utilized in many practical applications involving testability evaluation. Owing to its flexibility and effectiveness in modeling and processing uncertain information, Dempster–Shafer evidence theory has been widely used in multisensor data fusion; however, it may produce wrong results when fusing conflicting multisensor data. To deal with this problem, a testability evaluation method for equipment based on multisensor data fusion is proposed. First, a novel multisensor data fusion method is proposed, based on an improvement of Dempster–Shafer evidence theory via the Lance distance and the belief entropy. Next, based on an analysis of testability multisensor data, such as testability virtual test data, testability test data of replaceable units, and testability growth test data, corresponding prior distribution conversion schemes are formulated according to their different characteristics. Finally, the testability evaluation method of equipment based on multisensor data fusion is proposed. The experimental results illustrate that the proposed method is feasible and effective in handling conflicting evidence; moreover, its fusion accuracy is higher and its evaluation results are more reliable than those of other testability evaluation methods, with the basic probability assignment of the true target reaching 94.71%.
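The classical combination step of Dempster–Shafer evidence theory, which the paper modifies via the Lance distance and belief entropy, can be sketched as follows. Mass functions are keyed by frozenset focal elements; the proposed method's weighting scheme is not shown.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal elements and
    renormalize by 1 - K, where K is the total conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
print(dempster_combine({A: 0.8, B: 0.2}, {A: 0.7, B: 0.3}))
# Mass concentrates on A; highly conflicting sources are exactly where
# the paper's Lance-distance/belief-entropy weighting is meant to help.
```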
5

Xu, Tao. "Performance of VR Technology in Environmental Art Design Based on Multisensor Information Fusion under Computer Vision." Mobile Information Systems 2022 (April 23, 2022): 1–10. http://dx.doi.org/10.1155/2022/3494535.

Abstract:
Multisensor information fusion technology is a symbol of scientific and technological progress. This paper discusses the performance of virtual reality (VR) technology in environmental art design based on multisensor information fusion under computer vision. After reviewing related work, the paper presents the relevant algorithms and models, such as a multisensor information fusion model based on VR instrument technology, and shows the principle of information fusion and the GPID bus structure. The multisensor information fusion algorithm is described in terms of Dempster-Shafer (DS) evidence theory: in this evidence-based decision framework, the fusion process amounts to computing qualitative-level and/or confidence-level functions, generally by calculating posterior distribution information. In addition to the algorithm, the paper illustrates the data flow of the multisensor information fusion system. It then explains the design and construction of a garden art environment based on an active panoramic stereo vision sensor, shows the relationships among the four coordinate frames, and presents the interactive experience of indoor and outdoor environmental art design. Finally, estimation simulation experiments based on the extended Kalman filter (EKF) are conducted, and the results show that the fused data are closer to the actual target motion data, with an accuracy rate better than 92%.
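A minimal linear Kalman filter over a constant-velocity target model gives the flavor of the EKF-based fusion experiment mentioned above (for a linear model the EKF reduces to this form; the matrices and noise levels here are invented):

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
H = np.array([[1.0, 0.0]])             # position-only measurements
Q = np.eye(2) * 1e-3                   # process noise (invented)
R = np.array([[0.5]])                  # measurement noise (invented)

x = np.zeros((2, 1))                   # state: [position, velocity]
P = np.eye(2)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.9, 2.1, 2.9, 4.2]:         # noisy positions of a moving target
    x, P = kf_step(x, P, np.array([[z]]))
print(x.ravel())                        # fused position/velocity estimate
```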
6

Gu, Yingjie, and Ye Zhou. "Application of Virtual Reality Based on Multisensor Data Fusion in Theater Space and Installation Art." Mobile Information Systems 2022 (August 28, 2022): 1–8. http://dx.doi.org/10.1155/2022/4101910.

Abstract:
The application of Virtual Reality (VR) in theater space and installation art is a general trend, and it can already be seen in large stage plays and installation art exhibitions. However, because current VR is not mature enough to perfectly fulfill the exhibition requirements of large theaters, this paper aims to change this situation by using VR based on multisensor data fusion. A multisensor data fusion algorithm is designed that improves the data transmission efficiency and latency of the VR system, so that VR can deliver a better viewing experience in theater space and installation art. Through questionnaires and interviews, the actual experiences of VR audiences in theater space and installation art are investigated. The experimental analysis shows that the proposed algorithm is highly reliable and can improve the VR experience. The interview and survey results show that the application of VR in theater space manifests in three main aspects: multiangle and all-round viewing, multiroute viewing, and human-machine interaction in art galleries. The application of VR in installation art is mainly reflected in the perception of installation materials.
7

Shen, Dongli. "Application of GIS and Multisensor Technology in Green Urban Garden Landscape Design." Journal of Sensors 2023 (March 27, 2023): 1–7. http://dx.doi.org/10.1155/2023/9730980.

Abstract:
To solve the problem of the low definition of the original 3D virtual imaging system, the author proposes an application of GIS and multisensor technology in green urban garden landscape design. A hardware design framework is formulated: an image collector is selected for image acquisition according to the framework, the image is filtered and denoised by a computer, the processed image is output through laser refraction, and a photoreceptor and a transparent transmission module are used for virtual imaging. A software design framework is then formulated: the collected image is denoised using a convolutional neural network, the feature points of the original image are obtained via pixel grayscale calculation, and the virtual imaging output is configured in C, completing the software design. Combining the hardware and software designs completes the 3D virtual imaging system for garden landscape design. A comparative experiment against the original system shows a significant improvement in clarity: the original system achieves 82%–85% image clarity, whereas the proposed system achieves 85%–90%. In conclusion, the proposed method is more effective.
8

Lee, Wonjun, Hyung-Jun Lim, and Mun Sang Kim. "Development for Multisensor and Virtual Simulator–Based Automatic Broadcast Shooting System." International Journal of Digital Multimedia Broadcasting 2022 (July 16, 2022): 1–13. http://dx.doi.org/10.1155/2022/2724804.

Abstract:
To overcome the complexity and repeatability limitations of existing broadcast filming systems, a new broadcast filming system was developed. For Korean music broadcasts in particular, the shooting sequence is stage and lighting installation, rehearsal, lighting effect production, and the main shoot; this sequence is complex and involves many people. We developed an automatic shooting system that can produce the same effect with a minimal crew, as the era of contactless ("untact") production has emerged because of COVID-19. The developed system comprises a simulator. After building a stage in the simulator, dancers' movements are acquired during rehearsal using UWB and two-dimensional (2D) LiDAR sensors. By inserting the acquired movement data into the virtual stage, camera effects are produced using a virtual camera installed in the simulator. The camera effects comprise pan, tilt, and zoom, and a camera director composes them while evaluating the movements of the virtual dancers on the virtual stage. In this study, four cameras were used: three for camera pan, tilt, and zoom control, and a fourth as a fixed camera for full shots. Video shooting is performed according to the pan, tilt, and zoom values of the three cameras and the switcher data. To assess lighting effects, the video of dancers recorded during rehearsal is overlapped in the simulator with the video produced by the lighting director via the existing broadcast filming process. The lighting director reviews the overlapped video and then corrects the parts that need to be corrected or emphasized. This method produced lighting effects better optimized for the music and choreography than existing lighting effect production methods. Finally, the performance and lighting effects of the developed simulator and system were confirmed by shooting a K-pop performance using the selected cameras' pan, tilt, and zoom control plan, the switcher sequence, and the lighting effects.
9

Oue, Mariko, Aleksandra Tatarevic, Pavlos Kollias, Dié Wang, Kwangmin Yu, and Andrew M. Vogelmann. "The Cloud-resolving model Radar SIMulator (CR-SIM) Version 3.3: description and applications of a virtual observatory." Geoscientific Model Development 13, no. 4 (April 21, 2020): 1975–98. http://dx.doi.org/10.5194/gmd-13-1975-2020.

Abstract:
Ground-based observatories use multisensor observations to characterize cloud and precipitation properties. One of the challenges is how to design strategies to best use these observations to understand these properties and evaluate weather and climate models. This paper introduces the Cloud-resolving model Radar SIMulator (CR-SIM), which uses output from high-resolution cloud-resolving models (CRMs) to emulate multiwavelength, zenith-pointing, and scanning radar observables and multisensor (radar and lidar) products. CR-SIM allows for direct comparison between an atmospheric model simulation and remote-sensing products using a forward-modeling framework consistent with the microphysical assumptions used in the atmospheric model. CR-SIM has the flexibility to easily incorporate additional microphysical modules, such as microphysical schemes and scattering calculations, and expand the applications to simulate multisensor retrieval products. In this paper, we present several applications of CR-SIM for evaluating the representativeness of cloud microphysics and dynamics in a CRM, quantifying uncertainties in radar–lidar integrated cloud products and multi-Doppler wind retrievals, and optimizing radar sampling strategy using observing system simulation experiments. These applications demonstrate CR-SIM as a virtual observatory operator on high-resolution model output for a consistent comparison between model results and observations to aid interpretation of the differences and improve understanding of the representativeness errors due to the sampling limitations of the ground-based measurements. CR-SIM is licensed under the GNU GPL package and both the software and the user guide are publicly available to the scientific community.
10

Bidaut, Luc. "Multisensor Imaging and Virtual Simulation for Assessment, Diagnosis, Therapy Planning, and Navigation." Simulation & Gaming 32, no. 3 (September 2001): 370–90. http://dx.doi.org/10.1177/104687810103200307.

11

Schutze, A., A. Gramm, and T. Ruhl. "Identification of Organic Solvents by a Virtual Multisensor System With Hierarchical Classification." IEEE Sensors Journal 4, no. 6 (December 2004): 857–63. http://dx.doi.org/10.1109/jsen.2004.833514.

12

Speller, Nicholas C., Noureen Siraj, Stephanie Vaughan, Lauren N. Speller, and Isiah M. Warner. "QCM virtual multisensor array for fuel discrimination and detection of gasoline adulteration." Fuel 199 (July 2017): 38–46. http://dx.doi.org/10.1016/j.fuel.2017.02.066.

13

Ilyas, Muhamad, Sangdeok Park, and Seung-Ho Baeg. "Improving unmanned ground vehicle navigation by exploiting virtual sensors and multisensor fusion." Journal of Mechanical Science and Technology 28, no. 11 (November 2014): 4369–79. http://dx.doi.org/10.1007/s12206-014-1004-7.

14

Wang, Rui, and Yi Huang. "Application of 3D Software Virtual Reality in Interior Designing." Mobile Information Systems 2022 (May 4, 2022): 1–8. http://dx.doi.org/10.1155/2022/5315262.

Abstract:
3D virtual reality has been a dynamic arena of research in recent years. Virtual technology is a novel field drawing on artificial intelligence, image processing, and multisensor knowledge, and scene formation in a 3D virtual reality system is the key foundation for simulating virtual world scenes. Aiming at the main problems that the effect of traditional interior space design is not ideal and cannot achieve practicability, this article proposes an interior space virtual design method based on three-dimensional vision. Initially, the features of the indoor scene are combined with a set of three points; the optimal combination of the main points and subpoints is then obtained through their iterative combination over the indoor scene. Information fusion of the color background and the vision of the indoor space is used for comprehensive interior design. Simulation experiments on virtual reality platform software show that the proposed method can effectively improve the effect and practicability of interior space design.
15

Zhang, Yimin D., Xizhong Shen, Ramazan Demirli, and Moeness G. Amin. "Ultrasonic Flaw Imaging via Multipath Exploitation." Advances in Acoustics and Vibration 2012 (August 1, 2012): 1–12. http://dx.doi.org/10.1155/2012/874081.

Abstract:
We consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and classification of flaws inside a structure. We utilize reflections of ultrasonic signals which occur when encountering different media and interior boundaries. These reflections can be cast as direct paths to the target corresponding to the virtual sensors appearing on the top and bottom side of the target. Some of these virtual sensors constitute a virtual aperture, whereas in others, the aperture changes with the transmitter position. Exploitations of multipath extended virtual array apertures provide enhanced imaging capability beyond the limitation of traditional multisensor approaches. The waveforms observed at the physical as well as the virtual sensors yield additional measurements corresponding to different aspect angles, thus allowing proper multiview imaging of flaws. We derive the wideband point spread functions for dominant multipaths and show that fusion of physical and virtual sensor data improves the flaw perimeter detection and localization performance. The effectiveness of the proposed multipath exploitation approach is demonstrated using real data.
16

Yan, Lingyun, Guowu Wei, Zheqi Hu, Haohua Xiu, Yuyang Wei, and Lei Ren. "Low-Cost Multisensor Integrated System for Online Walking Gait Detection." Journal of Sensors 2021 (August 14, 2021): 1–15. http://dx.doi.org/10.1155/2021/6378514.

Abstract:
A three-dimensional motion capture system is a useful tool for analysing gait patterns during walking or exercising, and it is frequently applied in biomechanical studies; however, most such systems are expensive. This study designs a low-cost gait detection system with high accuracy and reliability as an alternative, in the gait detection field, to the most widely used commercial system, the virtual user concept (Vicon) system. The proposed system integrates mass-produced low-cost sensors/chips in a compact package to collect kinematic data, and an x86 mini personal computer (PC) running at 100 Hz classifies motion data in real time. To guarantee gait detection accuracy, the embedded gait detection algorithm adopts a multilayer perceptron (MLP) model and a rule-based calibration filter to classify kinematic data into five distinct gait events: heel-strike, foot-flat, heel-off, toe-off, and initial-swing. To evaluate performance, volunteers were asked to walk on a treadmill at a regular walking speed of 4.2 km/h while kinematic data were recorded by the low-cost system and a Vicon system simultaneously. The gait detection accuracy and relative time error are estimated by comparing the classified gait events with the Vicon system as a reference. The results show that the proposed system obtains a high accuracy of 99.66% with a small time error (32 ms), demonstrating that it performs similarly to the Vicon system in the gait detection field.
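A rough sketch of the MLP-plus-postprocessing structure described above, using scikit-learn and placeholder data; the feature layout, network size, and labels are assumptions, and the paper's rule-based calibration filter is only stubbed:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

GAIT_EVENTS = ["heel-strike", "foot-flat", "heel-off", "toe-off", "initial-swing"]

# Placeholder kinematic features: one row per sensor window at 100 Hz.
X = np.random.randn(1000, 12)
y = np.random.choice(len(GAIT_EVENTS), size=1000)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
clf.fit(X, y)

def classify(window):
    """The paper applies a rule-based calibration filter on top of the MLP;
    here only the raw MLP label is returned."""
    return GAIT_EVENTS[int(clf.predict(window.reshape(1, -1))[0])]

print(classify(np.random.randn(12)))
```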
17

Groben, Dennis, Kittikhun Thongpull, Abhaya Chandra Kammara, and Andreas König. "Neural Virtual Sensors for Adaptive Magnetic Localization of Autonomous Dataloggers." Advances in Artificial Neural Systems 2014 (December 30, 2014): 1–17. http://dx.doi.org/10.1155/2014/394038.

Abstract:
The surging advance in micro- and nanotechnologies, allied with neural learning systems, allows the realization of miniaturized yet extremely powerful multisensor systems and networks for wide application fields, for example, in measurement, instrumentation, automation, and smart environments. Time and location context is particularly relevant to sensor swarms applied for distributed measurement in industrial environments such as fermentation tanks. Common RF solutions face limits here, which can be overcome by magnetic systems. Previously, we developed the electronic system for an integrated data logger swarm with magnetic localization and sensor node timebase synchronization. The focus of this work is on improving both localization accuracy and flexibility by applying artificial neural networks as virtual sensors and classifiers in a hybrid dedicated learning system. Including data from an industrial brewery environment, the best investigated neural virtual sensor approach improves localization accuracy by a factor of 4 over state-of-the-art numerical methods and thus achieves results on the order of less than 5 cm, meeting industrial expectations for a feasible solution for the presented integrated localization system.
18

Zhang, Di, Dinghan Jia, Lili Ren, Jiacun Li, Yan Lu, and Haiwei Xu. "Multisensor and Multiscale Data Integration Method of TLS and GPR for Three-Dimensional Detailed Virtual Reconstruction." Sensors 23, no. 24 (December 14, 2023): 9826. http://dx.doi.org/10.3390/s23249826.

Abstract:
Integrated TLS and GPR data can provide multisensor and multiscale spatial data for the comprehensive identification and analysis of surficial and subsurface information, but a reliable systematic methodology associated with data integration of TLS and GPR is still scarce. The aim of this research is to develop a methodology for the data integration of TLS and GPR for detailed, three-dimensional (3D) virtual reconstruction. GPR data and high-precision geographical coordinates at the centimeter level were simultaneously gathered using the GPR system and the Global Navigation Satellite System (GNSS) signal receiver. A time synchronization algorithm was proposed to combine each trace of the GPR data with its position information. In view of the improved propagation model of electromagnetic waves, the GPR data were transformed into dense point clouds in the geodetic coordinate system. Finally, the TLS-based and GPR-derived point clouds were merged into a single point cloud dataset using coordinate transformation. In addition, TLS and GPR (250 MHz and 500 MHz antenna) surveys were conducted in the Litang fault to assess the feasibility and overall accuracy of the proposed methodology. The 3D realistic surface and subsurface geometry of the fault scarp were displayed using the integration data of TLS and GPR. A total of 40 common points between the TLS-based and GPR-derived point clouds were implemented to assess the data fusion accuracy. The difference values in the x and y directions were relatively stable within 2 cm, while the difference values in the z direction had an abrupt fluctuation and the maximum values could be up to 5 cm. The standard deviations (STD) of the common points between the TLS-based and GPR-derived point clouds were 0.9 cm, 0.8 cm, and 2.9 cm. Based on the difference values and the STD in the x, y, and z directions, the field experimental results demonstrate that the GPR-derived point clouds exhibit good consistency with the TLS-based point clouds. Furthermore, this study offers a good future prospect for the integration method of TLS and GPR for comprehensive interpretation and analysis of the surficial and subsurface information in many fields, such as archaeology, urban infrastructure detection, geological investigation, and other fields.
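The time-synchronization step, pairing each GPR trace with a GNSS position, can be sketched as a simple interpolation of the GNSS track to the trace timestamps (assuming monotonically increasing timestamps; the paper's actual algorithm and the subsequent coordinate transformation are not reproduced):

```python
import numpy as np

def georeference_traces(trace_times, gnss_times, gnss_xyz):
    """Interpolate the GNSS track (n x 3 coordinates) to each GPR trace
    timestamp so every trace receives a position."""
    return np.stack(
        [np.interp(trace_times, gnss_times, gnss_xyz[:, k]) for k in range(3)],
        axis=1,
    )

gnss_t = np.array([0.0, 1.0, 2.0])
gnss_xyz = np.array([[0.0, 0.0, 10.0], [1.0, 0.5, 10.1], [2.0, 1.0, 10.0]])
print(georeference_traces(np.array([0.25, 1.5]), gnss_t, gnss_xyz))
```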
19

Song, Hongchang, and Tengfei Li. "Image Data Fusion Algorithm Based on Virtual Reality Technology and Nuke Software and Its Application." Journal of Sensors 2022 (March 23, 2022): 1–11. http://dx.doi.org/10.1155/2022/1569197.

Abstract:
As an important branch of multisensor information fusion, image fusion is widely used in various fields. Virtual reality technology, a hot research topic, can bring new levels of experience to image fusion, and the development of existing image processing software facilitates further analysis and processing of images. Image processing technology still faces many problems in today's market, and as technology advances, virtual technology is being applied in many fields, so combining virtual technology with images helps improve image processing technology. This article introduces an image fusion algorithm based on virtual reality technology and Nuke software and its applications. Through an analysis of virtual technology and Nuke software, the paper first proposes an image fusion model and an image fusion system and, on this basis, a particle algorithm and an image edge algorithm. It then studies optimal image fusion in Nuke software and finally analyzes the experimental results of the image fusion algorithm. The studies show that optimal image fusion greatly improves the security and privacy of the image, with a cracking difficulty as high as 80%. The experimental analysis of the image fusion algorithm shows that its execution time is greatly shortened, with time consumption reduced by about 50%, and that a good image fusion effect is achieved.
20

Chen, Junjie, Shuai Li, Donghai Liu, and Xueping Li. "AiRobSim: Simulating a Multisensor Aerial Robot for Urban Search and Rescue Operation and Training." Sensors 20, no. 18 (September 13, 2020): 5223. http://dx.doi.org/10.3390/s20185223.

Abstract:
Unmanned aerial vehicles (UAVs), equipped with a variety of sensors, are being used to provide actionable information to augment first responders’ situational awareness in disaster areas for urban search and rescue (SaR) operations. However, existing aerial robots are unable to sense the occluded spaces in collapsed structures, and voids buried in disaster rubble that may contain victims. In this study, we developed a framework, AiRobSim, to simulate an aerial robot to acquire both aboveground and underground information for post-disaster SaR. The integration of UAV, ground-penetrating radar (GPR), and other sensors, such as global navigation satellite system (GNSS), inertial measurement unit (IMU), and cameras, enables the aerial robot to provide a holistic view of the complex urban disaster areas. The robot-collected data can help locate critical spaces under the rubble to save trapped victims. The simulation framework can serve as a virtual training platform for novice users to control and operate the robot before actual deployment. Data streams provided by the platform, which include maneuver commands, robot states and environmental information, have potential to facilitate the understanding of the decision-making process in urban SaR and the training of future intelligent SaR robots.
21

Marchesotti, Luca, Carlo Regazzoni, Carlo Bonamico, and Fabio Lavagetto. "Video Processing and Understanding Tools for Augmented Multisensor Perception and Mobile User Interaction in Smart Spaces." International Journal of Image and Graphics 05, no. 03 (July 2005): 679–98. http://dx.doi.org/10.1142/s021946780500194x.

Abstract:
In this paper, a complete Smart Space architecture and a related system prototype are presented. The system is able to analyze situations of interest in a given environment and to produce related contextual information. Experimental results show that video information plays a major role in both situation perception and personalized context-aware communications. For this reason, the proposed multisensor system automatically extracts information from multiple cameras as well as from diverse sensors describing the environment's status. This information is then used to trigger personalized, context-aware video messages adaptively sent to users. A rule-based module is tasked with customizing video messages according to the user profile, the contextual situation, and the user's terminal. The system outputs graphically generated video messages consisting of an animated avatar (i.e., a virtual character), closing the loop with users. The results validate the conceptual schema behind the architecture and its successful adaptation to the analysis of different situations.
22

Ngoc, Trinh Minh, Nguyen Van Duy, Chu Manh Hung, Nguyen Duc Hoa, Hugo Nguyen, Matteo Tonezzer, and Nguyen Van Hieu. "Self-heated Ag-decorated SnO2 nanowires with low power consumption used as a predictive virtual multisensor for H2S-selective sensing." Analytica Chimica Acta 1069 (September 2019): 108–16. http://dx.doi.org/10.1016/j.aca.2019.04.020.

23

Lewis, Tyrell, and Kiran Bhaganagar. "Configurable simulation strategies for testing pollutant plume source localization algorithms using autonomous multisensor mobile robots." International Journal of Advanced Robotic Systems 19, no. 2 (March 1, 2022): 172988062210813. http://dx.doi.org/10.1177/17298806221081325.

Abstract:
In hazardous situations involving the dispersion of chemical, biological, radiological, and nuclear pollutants, timely containment of the emission is critical. A contaminant disperses as a dynamically evolving plume into the atmosphere, introducing complex difficulties in predicting the dispersion trajectory and potential evacuation sites. Strategies for predictive modeling of rapid contaminant dispersion demand localization of the emission source, a task performed effectively via unmanned mobile-sensing platforms. With vast possibilities in sensor configurations and source-seeking algorithms, platform deployment in real-world applications involves much uncertainty alongside opportunity. This work aims to develop a plume source detection simulator to offer a reliable comparison of source-seeking approaches and performance testing of ground-based mobile-sensing platform configurations prior to experimental field testing. Utilizing ROS, Gazebo, MATLAB, and Simulink, a virtual environment is developed for an unmanned ground vehicle with a configurable array of sensors capable of measuring plume dispersion model data mapped into the domain. For selected configurations, gradient-based and adaptive exploration algorithms were tested for source localization using Gaussian dispersion models in addition to large eddy simulation models incorporating the effects of atmospheric turbulence. A unique global search algorithm was developed to locate the true source with overall success allowing for further evaluation in field experiments. From the observations obtained in simulation, it is evident that source-seeking performance can improve drastically by designing algorithms for global exploration while incorporating measurements of meteorological parameters beyond solely concentration (e.g. wind velocity and vorticity) made possible by the inclusion of high-resolution large eddy simulation plume data.
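As a toy version of the gradient-based source seeking tested in the paper, the sketch below climbs the concentration field of an idealized Gaussian plume by finite differences. The plume parameters and step sizes are invented, and the paper's adaptive exploration and global search strategies are not shown.

```python
import numpy as np

def concentration(p, src=np.array([0.0, 0.0]), q=100.0, u=2.0):
    """Toy ground-level Gaussian plume (wind along +x), standing in for the
    virtual concentration sensor of the simulated robot."""
    dx, dy = p[0] - src[0], p[1] - src[1]
    if dx <= 0.0:
        return 0.0
    sigma = 0.2 * dx + 1e-6  # crude growth of plume spread downwind
    return q / (2.0 * np.pi * u * sigma**2) * np.exp(-dy**2 / (2.0 * sigma**2))

def gradient_step(p, step=1.0, eps=0.5):
    """Move one step up the locally estimated concentration gradient."""
    ex, ey = np.array([eps, 0.0]), np.array([0.0, eps])
    g = np.array([
        (concentration(p + ex) - concentration(p - ex)) / (2 * eps),
        (concentration(p + ey) - concentration(p - ey)) / (2 * eps),
    ])
    n = np.linalg.norm(g)
    return p if n < 1e-12 else p + step * g / n

robot = np.array([60.0, 12.0])
for _ in range(50):
    robot = gradient_step(robot)
print(robot)  # should have moved toward the source at the origin
```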
24

Maciel, Daniel, Evlyn Novo, Lino Sander de Carvalho, Cláudio Barbosa, Rogério Flores Júnior, and Felipe de Lucia Lobo. "Retrieving Total and Inorganic Suspended Sediments in Amazon Floodplain Lakes: A Multisensor Approach." Remote Sensing 11, no. 15 (July 24, 2019): 1744. http://dx.doi.org/10.3390/rs11151744.

Abstract:
Remote sensing imagery are fundamental to increasing the knowledge about sediment dynamics in the middle-lower Amazon floodplains. Moreover, they can help to understand both how climate change and how land use and land cover changes impact the sediment exchange between the Amazon River and floodplain lakes in this important and complex ecosystem. This study investigates the suitability of Landsat-8 and Sentinel-2 spectral characteristics in retrieving total (TSS) and inorganic (TSI) suspended sediments on a set of Amazon floodplain lakes in the middle-lower Amazon basin using in situ Remote Sensing Reflectance (Rrs) measurements to simulate Landsat 8/OLI (Operational Land Imager) and Sentinel 2/MSI (Multispectral Instrument) bands and to calibrate/validate several TSS and TSI empirical algorithms. The calibration was based on the Monte Carlo Simulation carried out for the following datasets: (1) All-Dataset, consisting of all the data acquired during four field campaigns at five lakes spread over the lower Amazon floodplain (n = 94); (2) Campaign-Dataset including samples acquired in a specific hydrograph phase (season) in all lakes. As sample size varied from one season to the other, n varied from 18 to 31; (3) Lake-Dataset including samples acquired in all seasons at a given lake with n also varying from 17 to 67 for each lake. The calibrated models were, then, applied to OLI and MSI scenes acquired in August 2017. The performance of three atmospheric correction algorithms was also assessed for both OLI (6S, ACOLITE, and L8SR) and MSI (6S, ACOLITE, and Sen2Cor) images. The impact of glint correction on atmosphere-corrected image performance was assessed against in situ glint-corrected Rrs measurements. After glint correction, the L8SR and 6S atmospheric correction performed better with the OLI and MSI sensors, respectively (Mean Absolute Percentage Error (MAPE) = 16.68% and 14.38%) considering the entire set of bands. However, for a given single band, different methods have different performances. The validated TSI and TSS satellite estimates showed that both in situ TSI and TSS algorithms provided reliable estimates, having the best results for the green OLI band (561 nm) and MSI red-edge band (705 nm) (MAPE < 21%). Moreover, the findings indicate that the OLI and MSI models provided similar errors, which support the use of both sensors as a virtual constellation for the TSS and TSI estimate over an Amazon floodplain. These results demonstrate the applicability of the calibration/validation techniques developed for the empirical modeling of suspended sediments in lower Amazon floodplain lakes using medium-resolution sensors.
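The accuracy metric quoted throughout the abstract is MAPE, which can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error (%), as quoted for the TSS/TSI models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

print(mape([10.0, 20.0, 30.0], [11.0, 18.0, 33.0]))  # -> 10.0
```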
25

Yen, Shih-Hsiang, Pei-Chong Tang, Yuan-Chiu Lin, and Chyi-Yeu Lin. "Development of a Virtual Force Sensor for a Low-Cost Collaborative Robot and Applications to Safety Control." Sensors 19, no. 11 (June 7, 2019): 2603. http://dx.doi.org/10.3390/s19112603.

Abstract:
To protect operators and conform to safety standards for human–machine interaction, the design of collaborative robot arms often incorporates flexible mechanisms and force sensors to detect and absorb external impact forces. However, this approach increases production costs, making it difficult to introduce such robot arms into low-cost service applications. This study proposes a low-cost, sensorless, rigid robot arm design that employs a virtual force sensor and stiffness control to enable safe collision detection and low-precision force control. In this design, when the robot arm is subjected to an external force while in motion, a contact force observer estimates the external torques on each joint from the motor current and the calculation errors of the system model, and these are then used to estimate the external contact force exerted on the arm's end-effector. Additionally, a torque saturation limiter is added to the servo drive of each axis to enable real-time adjustment of the joint torque output according to the estimated external force, regulation of system stiffness, and impedance control that can be applied to safety measures and force control. The developed design departs from the conventional multisensor flexible-mechanism approach: it is a low-cost, sensorless design that relies on model-based control for stiffness regulation, thereby improving safety and force control in robot arm applications.
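The core residual idea behind such a virtual force sensor, estimating external torque as the gap between motor torque and model-predicted torque and then limiting the command, can be sketched as follows (function names, signatures, and thresholds are invented):

```python
import numpy as np

def external_torque_estimate(i_motor, k_t, tau_model):
    """Residual observer: motor torque (current x torque constant) minus the
    torque predicted by the rigid-body model is attributed to contact."""
    return k_t * i_motor - tau_model

def limited_command(tau_cmd, tau_ext_est, tau_limit):
    """Torque saturation limiter: clamp the commanded joint torque when the
    estimated external load exceeds a safety threshold."""
    if abs(tau_ext_est) > tau_limit:
        return float(np.clip(tau_cmd, -tau_limit, tau_limit))
    return tau_cmd

# Example: a 2.0 N*m command is clamped once contact is detected.
tau_ext = external_torque_estimate(i_motor=1.8, k_t=0.9, tau_model=0.4)
print(limited_command(2.0, tau_ext, tau_limit=1.0))
```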
26

Li, Lei, Hanjun Ma, Mingyi Wei, Xuanbo Zhang, Qingxi Chen, and Yanqing Xin. "Thermal Power Plant Turbine Rotor Digital Twin Automation Construction and Monitoring System." Mathematical Problems in Engineering 2022 (October 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/8527281.

Abstract:
Based on digital twin technology, this article investigates a physical-rules fusion model of turbine rotor operation in thermal power plants, establishes a geometric behavior mapping method for the turbine rotor in the plant's virtual scenario, and develops a real-time, data-driven virtual monitoring system for rotor operation, realizing virtual control of the rotor operation process at both the physical and geometric levels. The 3D model created in Creo was imported into ADAMS in x_t format, constraints were added, and model data input and output interfaces were established in ADAMS to build the dynamics model, laying the foundation for joint simulation with the AMESim model. Information fusion based on D-S evidence theory, which fuses multisensor data with information from other channels, can describe the diagnostic object more accurately and comprehensively, enabling correct judgments and decisions in complex fault diagnosis. We propose an integrated modeling method for multiview control scenarios of manufacturing units based on digital twins and complete the construction of digital twin models of manufacturing units based on a defined multiview model collaboration mechanism, which provides model support for research on digital twin-driven manufacturing unit control technology. For the twin data perception and interaction problem, a unified, architecture-standardized communication protocol is established based on OPC UA technology to solve the difficulty of data perception and interaction caused by the nonuniform communication interface protocols of different devices on an automated production line. These models are intended to improve the visualization of digital production line monitoring and the operating efficiency of the turbine rotor. The experimental results show that applying the digital twin to thermal turbine rotor operation monitoring provides a new method for rotor vibration fault diagnosis, and that D-S evidence theory can fuse information from multiple aspects of a fault, improving the probability of correct fault diagnosis and reducing uncertainty.
27

Li, Xiaomin, Zhiyu Ma, Xuan Chu, and Yongxin Liu. "A Cloud-Assisted Region Monitoring Strategy of Mobile Robot in Smart Greenhouse." Mobile Information Systems 2019 (October 1, 2019): 1–10. http://dx.doi.org/10.1155/2019/5846232.

Abstract:
In smart agricultural systems, macroinformation sensing with a mobile robot carrying multiple types of sensors is a key step toward the sustainable development of agriculture. A region monitoring strategy that meets real-scene requirements also demands optimal operation of the mobile robots. In this paper, a cloud-assisted region monitoring strategy for mobile robots in a smart greenhouse is presented. First, a hybrid framework containing a cloud, a wireless network, and mobile multisensor robots is deployed to monitor a wide-region greenhouse. Then, a novel two-phase strategy is designed to ensure valid region monitoring and meet the time constraints of the mobile sensing robot. In the first phase, candidate region monitoring points are selected using improved virtual forces. In the second phase, a moving path for the mobile node is calculated based on Euclidean distance. The applicability of the proposed strategy is verified on a greenhouse test system. The verification results show that the proposed algorithm outperforms conventional methods: the number of monitoring points and the time consumption are reduced, while the valid monitoring region is enlarged.
28

Chen, Mengjunguang, and Jingjing Jiang. "Sports Analysis and Action Optimization in Physical Education Teaching Practice Based on Internet of Things Sensor Perception." Computational Intelligence and Neuroscience 2022 (June 30, 2022): 1–8. http://dx.doi.org/10.1155/2022/7152953.

Abstract:
With the progress of Internet of Things (IoT) technology in recent years, all aspects of people's lives have been affected, and more and more people are immersed in the virtual world while neglecting real activities. According to surveys, nearly 50% of people in China are in subhealth, mainly due to modern diseases caused by long-term inactivity. Good physical exercise habits must therefore be formed from an early age. Starting from physical education teaching in primary and secondary schools and from the perspective of modern technological facilities, this paper discusses sports analysis and action optimization in physical education teaching practice based on IoT sensor perception. We successively develop a multisensor motion acquisition algorithm, a motion pattern recognition algorithm, and a motion energy consumption algorithm, which provide modern equipment for motion analysis and optimization in physical education teaching practice and relieve teachers of much of the time and energy traditionally spent on manual assessment. Combining motion patterns with energy consumption makes it possible not only to analyze sports data accurately and in real time but also to optimize and predict students' exercise behavior. We hope that such technical means can supervise and encourage primary and secondary school students to exercise, improving both the quality of their exercise and their overall health.
29

Choi, Woong, Liang Li, Satoru Satoh, and Kozaburo Hachimura. "Multisensory Integration in the Virtual Hand Illusion with Active Movement." BioMed Research International 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/8163098.

Abstract:
Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality.
30

Bailey, Hudson Diggs, Aidan B. Mullaney, Kyla D. Gibney, and Leslie Dowell Kwakye. "Audiovisual Integration Varies With Target and Environment Richness in Immersive Virtual Reality." Multisensory Research 31, no. 7 (2018): 689–713. http://dx.doi.org/10.1163/22134808-20181301.

Abstract:
We are continually bombarded by information arriving to each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration as measured by the redundant signals effect (RSE) is observable in naturalistic environments using virtual reality and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets which varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect on multisensory integration as a function of target cue-richness, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. The results of this study have important implications for the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.
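The redundant signals effect is commonly tested against Miller's race-model inequality; below is a generic sketch of that comparison (not necessarily the authors' analysis), assuming arrays of reaction-time samples in milliseconds:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Compare the audiovisual RT distribution against the race-model bound
    F_A(t) + F_V(t); positive values indicate multisensory integration."""
    def cdf(rts, t):
        return np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
    return cdf(rt_av, t_grid) - bound

t = np.linspace(150, 600, 10)            # ms
rt_a = np.random.normal(380, 50, 200)    # placeholder reaction times
rt_v = np.random.normal(400, 60, 200)
rt_av = np.random.normal(330, 45, 200)
print(race_model_violation(rt_a, rt_v, rt_av, t))
```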
31

Berthiaume, Maxine, Giulia Corno, Kevin Nolet, and Stéphane Bouchard. "A Novel Integrated Information Processing Model of Presence." PRESENCE: Virtual and Augmented Reality 27, no. 4 (2018): 378–99. http://dx.doi.org/10.1162/pres_a_00336.

Abstract:
The objective of this article is to conduct a narrative literature review on multisensory integration and propose a novel information processing model of presence in virtual reality (VR). The first half of the article introduces basic multisensory integration (implicit information processing) and the integration of coherent stimuli (explicit information processing) in the physical environment, offering an explanation for people's reactions during VR immersions; this is an important component of our model. To help clarify these concepts, examples are provided. The second half of the article addresses multisensory integration in VR. Three models in the literature examine the role that multisensory integration plays in inducing various perceptual illusions and the relationship between embodiment and presence in VR; however, they do not relate specifically to presence and multisensory integration. We propose a novel model of presence using elements of these models and suggest that implicit and explicit information processing lead to presence. We refer to presence as a perceptual illusion that includes a plausibility illusion (the feeling that the scenario in the virtual environment is actually occurring) and a place illusion (the feeling of being in the place depicted in the virtual environment), based on efficient and congruent multisensory integration.
32

Aleksandrovich, Angelina, and Leonardo Mariano Gomes. "Shared multisensory sexual arousal in virtual reality (VR) environments." Paladyn, Journal of Behavioral Robotics 11, no. 1 (August 4, 2020): 379–89. http://dx.doi.org/10.1515/pjbr-2020-0018.

Abstract:
This research explores multisensory sexual arousal in men and women and how it can be implemented and shared between multiple individuals in Virtual Reality (VR). This is achieved through the stimulation of the human senses with immersive technology, including visual, olfactory, auditory, and haptic triggers. Participants are invited into VR to test various sensory triggers and assess whether they are sexually arousing. A literature review of VR experiments related to sexuality, the concepts of perception and multisensory experiments, and data collected from self-reports were used to draw conclusions. The goal of this research is to establish that sexual arousal is a multisensory event that may or may not be linked to the presence or thought of the intended object of desire (a sexual partner). By examining what stimulates arousal, we better understand the multisensory capacity of humans, leading not only to richer sexual experiences but also to the further development of wearable sextech products, soft robotics, and multisensory learning machines. This understanding supports other research on human-robot interaction, on affection detection and transmission in both physical and virtual realities, and on how VR technology can help design a new generation of sex robots.
33

Ross, Miriam. "Virtual Reality’s New Synesthetic Possibilities." Television & New Media 21, no. 3 (October 26, 2018): 297–314. http://dx.doi.org/10.1177/1527476418805240.

Abstract:
In its current, popular manifestation, Virtual Reality (VR) represents the culmination of more than two centuries of screen practice aimed at creating greater immersion. VR's optical illusions produce an expanded multisensory immersive experience that enhances the viewer's interior position within new space. This article asks where embodiment and disembodiment lie in VR's multisensory optical illusion, and whether the digital environment produces a different experience from the photographic, live-action environment. It takes into account our present moment in the history of VR, in which the fantasy of total bodily engagement and transference into the "machine" has not yet occurred. In doing so, this article considers the way VR uses synesthetic modes, rather than direct sensory stimuli, to engage more of the senses.
34

Rashidova, Dildora. "MULTISENSORY APPROACH IN CREATING A COMMUNICATIVE SPACE IN TEACHING ENGLISH LANGUAGE." Frontline Social Sciences and History Journal 03, no. 01 (January 1, 2023): 27–31. http://dx.doi.org/10.37547/social-fsshj-03-01-04.

Abstract:
This paper presents a methodology for using progressive tools and methods in teaching the English language within a language environment. It also takes into account the specific characteristics of virtual communication as a form of text creation and perception, including the anonymity of the participants.
35

Zorn, Elayne, and Natalie Underberg. "Multisensory Immersion in Virtual Spaces: PeruVine/PeruDigital." Anthropology News 50, no. 4 (April 2009): 18–19. http://dx.doi.org/10.1111/j.1556-3502.2009.50418.x.

36

Campos, Jennifer L., Graziella El-Khechen Richandi, Marge Coahran, Lindsey E. Fraser, Babak Taati, and Behrang Keshavarz. "Virtual Hand Illusion in younger and older adults." Journal of Rehabilitation and Assistive Technologies Engineering 8 (January 2021): 205566832110593. http://dx.doi.org/10.1177/20556683211059389.

Abstract:
Introduction: Embodiment involves experiencing ownership over our body and localizing it in space and is informed by multiple senses (visual, proprioceptive, and tactile). Evidence suggests that embodiment and multisensory integration may change with older age. The Virtual Hand Illusion (VHI) has been used to investigate multisensory contributions to embodiment but has never been evaluated in older adults. Spatio-temporal factors unique to virtual environments may differentially affect the embodied perceptions of older and younger adults. Methods: Twenty-one younger (18–35 years) and 19 older (65+ years) adults completed the VHI paradigm. Body localization was measured at baseline and again, with subjective ownership ratings, following synchronous and asynchronous visual-tactile interactions. Results: Higher ownership ratings were observed in the synchronous relative to the asynchronous condition, but no effects on localization/drift were found. No age differences were observed. Localization accuracy was biased in both age groups when the virtual hand was aligned with the real hand, indicating a visual mislocalization of the virtual hand. Conclusions: No age-related differences in the VHI were observed. Mislocalization of the hand in VR occurred for both groups, even when congruent and aligned; however, tactile feedback reduced localization biases. Our results expand the current understanding of age-related changes in multisensory embodiment within virtual environments.
37

Huang, Bingxin. "Examining Illusory Touch Perception in Virtual Reality among Virtual Reality Users." Journal of Education, Humanities and Social Sciences 26 (March 2, 2024): 1075–80. http://dx.doi.org/10.54097/a9tyq152.

Abstract:
The phenomenon of illusory touch perception in virtual reality (VR) is a subject of increasing intrigue within online VR communities, where users report experiencing tactile sensations in the absence of direct physical contact, triggered solely by visual stimuli. This paper presents an exploratory study aimed at understanding the underlying mechanisms that contribute to the development of such illusory touch perceptions in immersive virtual environments. Through an online survey targeting VR users, this study collected qualitative data on the nature and frequency of these experiences. The survey was complemented by a review of existing literature on touch perception and multisensory integration, providing a foundational understanding of how sensory information is processed and perceived. The findings suggest that certain visual cues in VR can evoke tactile sensations, a phenomenon that may be enhanced by the immersive quality of the virtual environment and the user's previous sensory experiences and expectations. This paper discusses potential contributors to the phenomenon, such as the realism of the virtual environment, the user's level of immersion and presence, and the congruence of multisensory stimuli. Although this study sheds light on the conditions that may facilitate illusory touch perception in VR, the complexity of human sensory processing necessitates further research. Controlled experimental studies are required to establish a causal relationship between specific factors and the experience of illusory touch in VR, which could have significant implications for the fields of virtual reality and sensory augmentation technology.
APA, Harvard, Vancouver, ISO, and other styles
38

Harding, C. "Modeling geoscience data in a multisensory virtual environment." Computing in Science & Engineering 6, no. 1 (January 2004): 89–92. http://dx.doi.org/10.1109/mcise.2004.1255828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Frisoli, Antonio, and Antonio Camurri. "Special Issue Editorial: Multisensory interaction in virtual environments." Virtual Reality 10, no. 1 (May 2006): 2–3. http://dx.doi.org/10.1007/s10055-006-0031-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Fisher, S. S., E. M. Wenzel, C. Coler, and M. W. McGreevy. "Virtual Interface Environment Workstations." Proceedings of the Human Factors Society Annual Meeting 32, no. 2 (October 1988): 91–95. http://dx.doi.org/10.1177/154193128803200219.

Full text
Abstract:
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.
APA, Harvard, Vancouver, ISO, and other styles
41

Voiskounsky, A. E. "Cyberpsychological Approach to the Analysis of Multisensory Integration." Консультативная психология и психотерапия 27, no. 3 (2019): 9–21. http://dx.doi.org/10.17759/cpp.2019270302.

Full text
Abstract:
The paper relates to the branch of cyberpsychology concerned with risk factors during immersion in a virtual environment. Specialists in the development and operation of virtual reality systems know that immersion in such an environment may be accompanied by symptoms similar to the "motion sickness" experienced by passengers of transport vehicles (ships, aircraft, cars). In the paper, these conditions are referred to as cybersickness (or cyberdisease). Three leading theories proposed to explain the causes of cybersickness are discussed: the theory of sensory conflict, the theory of postural instability (the inability to maintain equilibrium), and the evolutionary (aka toxin) theory. A frequent cause of cybersickness symptoms is a conflict between visual signals and signals from the vestibular system. It is shown that such conflicts can be induced in specially organized experiments (e.g., the illusion of an out-of-body experience) using virtual reality systems. When competing signals (visual, auditory, kinesthetic, tactile, etc.) reach the brain, data gained with virtual reality systems make it possible to hypothesize the localization of the specific brain area that ensures the integration of multisensory stimuli.
APA, Harvard, Vancouver, ISO, and other styles
42

Caola, Barbara, Martina Montalti, Alessandro Zanini, Antony Leadbetter, and Matteo Martini. "The Bodily Illusion in Adverse Conditions: Virtual Arm Ownership During Visuomotor Mismatch." Perception 47, no. 5 (February 23, 2018): 477–91. http://dx.doi.org/10.1177/0301006618758211.

Full text
Abstract:
Classically, body ownership illusions are triggered by cross-modal synchronous stimulations, and hampered by multisensory inconsistencies. Nonetheless, the boundaries of such illusions have been proven to be highly plastic. In this immersive virtual reality study, we explored whether it is possible to induce a sense of body ownership over a virtual body part during visuomotor inconsistencies, with or without the aid of concomitant visuo-tactile stimulations. From a first-person perspective, participants watched a virtual tube moving or an avatar’s arm moving, with or without concomitant synchronous visuo-tactile stimulations on their hand. Three different virtual arm/tube speeds were also investigated, while all participants kept their real arms still. The subjective reports show that synchronous visuo-tactile stimulations effectively counteract the effect of visuomotor inconsistencies, but at slow arm movements, a feeling of body ownership might be successfully induced even without concomitant multisensory correspondences. Possible therapeutic implications of these findings are discussed.
APA, Harvard, Vancouver, ISO, and other styles
43

Malighetti, Clelia, Maria Sansoni, Santino Gaudio, Marta Matamala-Gomez, Daniele Di Lernia, Silvia Serino, and Giuseppe Riva. "From Virtual Reality to Regenerative Virtual Therapy: Some Insights from a Systematic Review Exploring Inner Body Perception in Anorexia and Bulimia Nervosa." Journal of Clinical Medicine 11, no. 23 (November 30, 2022): 7134. http://dx.doi.org/10.3390/jcm11237134.

Full text
Abstract:
Despite advances in our understanding of the behavioral and molecular factors that underlie the onset and maintenance of Eating Disorders (EDs), it is still necessary to optimize treatment strategies and establish their efficacy. In this context, over the past 25 years, Virtual Reality (VR) has provided creative treatments for a variety of ED symptoms, including body dissatisfaction, craving, and negative emotions. Recently, different researchers suggested that EDs may reflect a broader impairment in multisensory body integration, and a particular VR technique—VR body swapping—has been used to repair it, but with limited clinical results. In this paper, we use the results of a systematic review following PRISMA guidelines that explores inner body perception in EDs (21 studies included), with the ultimate goal of analyzing the features of multisensory impairment associated with this clinical condition and providing possible solutions. Deficits in interoception, proprioception, and vestibular signals were observed across Anorexia and Bulimia Nervosa, suggesting that: (a) alteration of inner body perception might be a crucial feature of EDs, even if further research is needed; and (b) VR, to be effective with these patients, has to simulate/modify both the external and the internal body. Following this outcome, we introduce a new therapeutic approach—Regenerative Virtual Therapy—that integrates VR with different technologies and clinical strategies to regenerate a faulty bodily experience by stimulating the multisensory brain mechanisms and promoting self-regenerative processes within the brain itself.
APA, Harvard, Vancouver, ISO, and other styles
44

Salzman, Marilyn C., Chris Dede, R. Bowen Loftin, and Debra Sprague. "Assessing Virtual Reality's Potential for Teaching Abstract Science." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 2 (October 1997): 1208–12. http://dx.doi.org/10.1177/1071181397041002108.

Full text
Abstract:
Understanding how to leverage the features of immersive, three-dimensional (3-D) multisensory virtual reality to meet user needs presents a challenge for human factors researchers. This paper describes our approach to evaluating this medium's potential as a tool for teaching abstract science. It describes some of our early research outcomes and discusses an evaluation comparing a 3-D VR microworld to an alternative 2-D computer-based microworld. Both are simulations in which students learn about electrostatics. The outcomes of the comparison study suggest: 1) the immersive 3-D VR microworld facilitated conceptual and three-dimensional learning that the 2-D computer microworld did not, and 2) VR's multisensory information aided students who found the electrostatics concepts challenging. As a whole, our research suggests that VR's immersive representational abilities have promise for teaching and for visualization. It also demonstrates that characteristics of the learning experience, such as usability, motivation, and simulator sickness, are an important part of assessing this medium's potential.
APA, Harvard, Vancouver, ISO, and other styles
45

Serino, Andrea, Elisa Canzoneri, and Alessio Avenanti. "Fronto-parietal Areas Necessary for a Multisensory Representation of Peripersonal Space in Humans: An rTMS Study." Journal of Cognitive Neuroscience 23, no. 10 (October 2011): 2956–67. http://dx.doi.org/10.1162/jocn_a_00006.

Full text
Abstract:
A network of brain regions including the ventral premotor cortex (vPMc) and the posterior parietal cortex (PPc) is consistently recruited during processing of multisensory stimuli within peripersonal space (PPS). However, to date, information on the causal role of these fronto-parietal areas in multisensory PPS representation is lacking. Using low-frequency repetitive TMS (rTMS; 1 Hz), we induced transient virtual lesions to the left vPMc, PPc, and visual cortex (V1, control site) and tested whether rTMS affected audio–tactile interaction in the PPS around the hand. Subjects performed a timed response task to a tactile stimulus on their right (contralateral to rTMS) hand while concurrent task-irrelevant sounds were presented either close to the hand or 1 m away from it. When no rTMS was delivered, a sound close to the hand reduced RTs to tactile targets compared with when a far sound was presented. This space-dependent, auditory modulation of tactile perception was specific to a hand-centered reference frame. Such a specific form of multisensory interaction near the hand can be taken as a behavioral hallmark of PPS representation. Crucially, virtual lesions to vPMc and PPc, but not to V1, eliminated the speeding effect due to near sounds, showing a disruption of audio–tactile interactions around the hand. These findings indicate that multisensory interaction around the hand depends on the functions of vPMc and PPc, thus pointing to the necessity of this human fronto-parietal network in multisensory representation of PPS.
APA, Harvard, Vancouver, ISO, and other styles
46

Adão, Telmo, Tatiana Pinho, Luís Pádua, Luís G. Magalhães, Joaquim J. Sousa, and Emanuel Peres. "Prototyping IoT-Based Virtual Environments: An Approach toward the Sustainable Remote Management of Distributed Mulsemedia Setups." Applied Sciences 11, no. 19 (September 23, 2021): 8854. http://dx.doi.org/10.3390/app11198854.

Full text
Abstract:
Business models built upon multimedia/multisensory setups delivering user experiences within disparate contexts—entertainment, tourism, cultural heritage, etc.—usually comprise the installation and in-situ management of both equipment and digital contents. Considering that each setup is unique in its purpose, location, layout, equipment, and digital contents, monitoring and control operations may add up to a hefty cost over time. Software and hardware agnosticism may be of value to lessen complexity and provide more sustainable management processes and tools. Distributed computing under the Internet of Things (IoT) paradigm may enable management processes capable of providing both remote control and monitoring of multimedia/multisensory experiences made available in different venues. This paper presents prototyping software for performing IoT multimedia/multisensory simulations. It is fully based on virtual environments that enable the remote design, layout, and configuration of each experience in a transparent way, regardless of the underlying software and hardware. Furthermore, pipelines to deliver contents may be defined, managed, and updated in a context-aware environment. The software was tested in the laboratory and proved to be a sustainable approach to managing multimedia/multisensory projects. It is currently being field-tested by an international multimedia company for further validation.
APA, Harvard, Vancouver, ISO, and other styles
47

Taffou, Marine, Rachid Guerchouche, George Drettakis, and Isabelle Viaud-Delmon. "Auditory–Visual Aversive Stimuli Modulate the Conscious Experience of Fear." Multisensory Research 26, no. 4 (2013): 347–70. http://dx.doi.org/10.1163/22134808-00002424.

Full text
Abstract:
In a natural environment, affective information is perceived via multiple senses, mostly audition and vision. However, the impact of multisensory information on affect remains relatively unexplored. In this study, we investigated whether the auditory–visual presentation of aversive stimuli influences the experience of fear. We used the advantages of virtual reality to manipulate multisensory presentation and to display potentially fearful dog stimuli embedded in a natural context. We manipulated the affective reactions evoked by the dog stimuli by recruiting two groups of participants: dog-fearful and non-fearful participants. Sensitivity to dog fear was assessed psychometrically by a questionnaire and also at behavioral and subjective levels using a Behavioral Avoidance Test (BAT). Participants navigated virtual environments in which they encountered virtual dog stimuli presented through the auditory channel, the visual channel, or both, and were asked to report their fear using Subjective Units of Distress. We compared the fear evoked by unimodal (visual or auditory) and bimodal (auditory–visual) dog stimuli. Both dog-fearful and non-fearful participants reported more fear in response to bimodal audiovisual than to unimodal presentation of dog stimuli. These results suggest that fear is more intense when affective information is processed via multiple sensory pathways, which might be due to cross-modal potentiation. Our findings have implications for the field of virtual reality-based therapy of phobias: therapies could be refined and improved by manipulating the multisensory presentation of the feared situations.
APA, Harvard, Vancouver, ISO, and other styles
48

Huang, Fuxing, Jianping Huang, and Xiaoang Wan. "Influence of virtual color on taste: Multisensory integration between virtual and real worlds." Computers in Human Behavior 95 (June 2019): 168–74. http://dx.doi.org/10.1016/j.chb.2019.01.027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Swinkels, Lieke M. J., Harm Veling, and Hein T. van Schie. "The Redundant Signals Effect and the Full Body Illusion: not Multisensory, but Unisensory Tactile Stimuli Are Affected by the Illusion." Multisensory Research 34, no. 6 (April 9, 2021): 553–85. http://dx.doi.org/10.1163/22134808-bja10046.

Full text
Abstract:
During a full body illusion (FBI), participants experience a change in self-location towards a body that they see in front of them from a third-person perspective and experience touch to originate from this body. Multisensory integration is thought to underlie this illusion. In the present study we tested the redundant signals effect (RSE) as a new objective measure of the illusion that was designed to directly tap into the multisensory integration underlying the illusion. The illusion was induced by an experimenter who stroked and tapped the participant’s shoulder and underarm, while participants perceived the touch on the virtual body in front of them via a head-mounted display. Participants performed a speeded detection task, responding to visual stimuli on the virtual body, to tactile stimuli on the real body, and to combined (multisensory) visual and tactile stimuli. Analysis of the RSE with a race model inequality test indicated that multisensory integration took place in both the synchronous and the asynchronous condition. This surprising finding suggests that simultaneous bodily stimuli from different (visual and tactile) modalities will be transiently integrated into a multisensory representation even when no illusion is induced. Furthermore, this finding suggests that the RSE is not a suitable objective measure of body illusions. Interestingly, however, responses to the unisensory tactile stimuli in the speeded detection task were found to be slower and had a larger variance in the asynchronous condition than in the synchronous condition. The implications of this finding for the literature on body representations are discussed.
APA, Harvard, Vancouver, ISO, and other styles
50

Höst, Gunnar E., Konrad J. Schönborn, and Karljohan E. Lundin Palmerius. "A Case-Based Study of Students' Visuohaptic Experiences of Electric Fields around Molecules: Shaping the Development of Virtual Nanoscience Learning Environments." Education Research International 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/194363.

Full text
Abstract:
Recent educational research has suggested that immersive multisensory virtual environments offer learners unique and exciting knowledge-building opportunities for the construction of scientific knowledge. This paper delivers a case-based study of students’ immersive interaction with electric fields around molecules in a multisensory visuohaptic virtual environment. The virtual architecture presented here also connects conceptually with the pressing need, emphasized in contemporary literature, to communicate nanoscientific ideas to learners. Five upper secondary school students’ prior conceptual understanding of electric fields and their application of this knowledge to molecular contexts were probed prior to exposure to the virtual model. Subsequently, four students interacted with the visuohaptic model while performing think-aloud tasks. An inductive and heuristic treatment of videotaped verbal and behavioural data revealed distinct interrelationships between the interactive strategies students implemented when executing tasks in the virtual system and the nature of the conceptual knowledge they deployed. The obtained qualitative case study evidence could serve as an empirical basis for informing the rendering and communication of overarching nanoscale ideas. At the time of writing, the findings of this study were informing a broader project goal of developing educational virtual environments for depicting nanophenomena.
APA, Harvard, Vancouver, ISO, and other styles
