Selection of scientific literature on the topic "Virtual multisensor"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Virtual multisensor".

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Virtual multisensor"

1

Emura, Satoru, and Susumu Tachi. "Multisensor Integrated Prediction for Virtual Reality". Presence: Teleoperators and Virtual Environments 7, no. 4 (August 1998): 410–22. http://dx.doi.org/10.1162/105474698565811.

Full text of the source
Abstract:
Unconstrained measurement of human head motion is essential for HMDs (head-mounted displays) to be truly interactive. Polhemus sensors developed for that purpose suffer from critical latency and low sampling rates, and a further delay for rendering virtual scenes is inevitable. This paper proposes methods that compensate for the latency and raise the effective sampling rate by integrating Polhemus and gyro sensors. The adoption of a quaternion representation avoids singularities and the complicated boundary processing of rotational motion. The performance of the proposed methods under various rendering delays was evaluated in terms of RMS error and our new correlational technique, which makes it possible to check the latency and fidelity of a magnetic tracker and to assess the environment in which the magnetic tracker is used. A real-time implementation of the simpler method on personal computers is also reported in detail.
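The tracker-plus-gyro integration described above can be sketched with quaternion propagation: the high-rate gyro advances the orientation between low-rate magnetic-tracker fixes, avoiding Euler-angle singularities. This is an illustrative reconstruction, not the authors' code; the function names and parameters are our own.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def gyro_step(q, omega, dt):
    """Propagate orientation q by angular velocity omega (rad/s) over dt."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q_new = quat_mul(q, dq)
    return q_new / np.linalg.norm(q_new)   # renormalize against drift
```

Between Polhemus samples, `gyro_step` would be applied at the gyro rate; each tracker fix would then replace (or blend with) the propagated quaternion to bound gyro drift.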
APA, Harvard, Vancouver, ISO, and other citation styles
2

Wenhao, Dong. "Multisensor Information Fusion-Assisted Intelligent Art Design under Wireless Virtual Reality Environment". Journal of Sensors 2021 (December 31, 2021): 1–10. http://dx.doi.org/10.1155/2021/6119127.

Full text of the source
Abstract:
Under the background of intelligent technologies, art designers need to use information technology to assist the design of artistic factors and fully realize the integration of art design and information technology. Multisensor information fusion technology allows a more intuitive, visual, and comprehensive grasp of the objectives to be designed, maximizes the positive effects of art design, achieves its overall optimization, and can also help art designers move beyond traditional monolithic and obsolete design concepts. Based on multisensor information fusion technology in a wireless virtual reality environment and on the principles of signal acquisition and preprocessing, feature extraction, and fusion calculation, we analyze the information processing of multisensor information fusion, construct and evaluate a model for intelligent art design, propose an intelligent art design model based on multisensor information fusion technology, discuss the realization of the multisensor information fusion algorithm in intelligent art design, and finally carry out a simulation experiment and analyze its results, taking the environment design of a parent-child restaurant as an example. The study results show that using multisensor information fusion in the environmental design of a parent-child restaurant works better than using a single sensor; at the same time, force sensors yield a better environmental design effect than vibration sensors. Multisensor information fusion technology can automatically analyze the observation information of several sources obtained in time sequence under certain criteria and comprehensively perform information processing for the decision-making and estimation tasks required for intelligent art design.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Xie, Jiahao, Daozhi Wei, Shucai Huang, and Xiangwei Bu. "A Sensor Deployment Approach Using Improved Virtual Force Algorithm Based on Area Intensity for Multisensor Networks". Mathematical Problems in Engineering 2019 (February 27, 2019): 1–9. http://dx.doi.org/10.1155/2019/8015309.

Full text of the source
Abstract:
Sensor deployment is one of the major concerns in multisensor networks. This paper proposes a sensor deployment approach using an improved virtual force algorithm based on area intensity for multisensor networks, to realize the optimal deployment of multiple sensors and obtain better coverage. Based on a real-time sensor detection model, the algorithm uses the sensor area intensity to select the optimal deployment distance. In order to verify the effectiveness of this algorithm in improving coverage quality, VFA and PSOA are selected for comparative analysis. The simulation results show that the algorithm achieves global coverage optimization better and improves the performance of the virtual force algorithm. It avoids the unstable coverage caused by heavy computation, slow convergence, and easy trapping in local optima, which provides a new idea for multisensor deployment.
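The virtual force idea the abstract builds on can be sketched as follows: each node repels neighbors that are closer than a threshold distance and attracts ones that are farther away, iterating the layout toward an even spread. This is a minimal sketch under our own assumptions (force gains, threshold, step size); it is not the paper's improved area-intensity variant.

```python
import numpy as np

def virtual_force_step(nodes, d_th, k_a=1.0, k_r=1.0, step=0.1):
    """One iteration of a basic virtual force algorithm.

    nodes: (n, 2) array of sensor positions.
    d_th:  threshold distance; closer pairs repel, farther pairs attract.
    """
    n = len(nodes)
    forces = np.zeros_like(nodes)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = nodes[j] - nodes[i]
            d = np.linalg.norm(diff)
            if d < 1e-9:
                continue
            u = diff / d                       # unit vector from i toward j
            if d < d_th:                       # too close: push i away from j
                forces[i] -= k_r * (d_th - d) * u
            else:                              # too far: pull i toward j
                forces[i] += k_a * (d - d_th) * u
    return nodes + step * forces
```

Repeating the step until the forces vanish drives all pairwise distances toward `d_th`, which is the even-coverage condition the comparison algorithms (VFA, PSOA) also aim at.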
APA, Harvard, Vancouver, ISO, and other citation styles
4

Di, Peng, Xuan Wang, Tong Chen, and Bin Hu. "Multisensor Data Fusion in Testability Evaluation of Equipment". Mathematical Problems in Engineering 2020 (November 30, 2020): 1–16. http://dx.doi.org/10.1155/2020/7821070.

Full text of the source
Abstract:
The multisensor data fusion method has been extensively utilized in many practical applications involving testability evaluation. Due to the flexibility and effectiveness of Dempster–Shafer evidence theory in modeling and processing uncertain information, this theory has been widely used in various fields of multisensor data fusion. However, it may lead to wrong results when fusing conflicting multisensor data. In order to deal with this problem, a testability evaluation method of equipment based on multisensor data fusion is proposed. First, a novel multisensor data fusion method, based on an improvement of Dempster–Shafer evidence theory via the Lance distance and the belief entropy, is proposed. Next, based on the analysis of testability multisensor data, such as testability virtual test data, testability test data of replaceable units, and testability growth test data, the corresponding prior distribution conversion schemes of testability multisensor data are formulated according to their different characteristics. Finally, the testability evaluation method of equipment based on the multisensor data fusion method is proposed. The experimental results illustrate that the proposed method is feasible and effective in handling conflicting evidence; moreover, its fusion is more accurate and its evaluation more reliable than other testability evaluation methods, assigning a basic probability of 94.71% to the true target.
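Dempster's rule of combination, the baseline the paper improves on, combines two basic probability assignments (BPAs) by multiplying masses of intersecting hypotheses and renormalizing away the conflicting mass. A minimal sketch; the paper's Lance-distance and belief-entropy weighting is not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule.

    m1, m2 map frozenset hypotheses to masses summing to 1.
    Mass assigned to empty intersections is conflict and is
    removed, with the remainder renormalized by 1 - conflict.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}
```

With highly conflicting sources the `1 - conflict` denominator becomes tiny and the rule behaves counterintuitively, which is exactly the failure mode distance- and entropy-weighted variants like the paper's are designed to avoid.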
APA, Harvard, Vancouver, ISO, and other citation styles
5

Xu, Tao. "Performance of VR Technology in Environmental Art Design Based on Multisensor Information Fusion under Computer Vision". Mobile Information Systems 2022 (April 23, 2022): 1–10. http://dx.doi.org/10.1155/2022/3494535.

Full text of the source
Abstract:
Multisensor information fusion technology is a symbol of scientific and technological progress. This paper aims to discuss the performance of virtual reality (VR) technology in environmental art design based on multisensor information fusion technology. The paper first reviews related work and then presents the algorithms and models, such as the multisensor information fusion model based on VR instrument technology, and shows the principle of information fusion and the GPID bus structure. It describes the multisensor information fusion algorithm in terms of DS evidence theory. In evidence-based decision theory, the multisensor information fusion process is the calculation of the qualitative level and/or confidence level function, generally computing the posterior distribution information. In addition to presenting its algorithm, the paper also shows the data flow of the multisensor information fusion system through figures. It then explains the design and construction of a garden art environment based on an active panoramic stereo vision sensor, shows the relationship of the four coordinate systems, and presents the interactive experience of indoor and outdoor environmental art design. Finally, estimation simulation experiments based on the EKF are conducted, and the results show that the fused data obtained with the extended Kalman filter algorithm are closer to the actual target motion data, with an accuracy better than 92%.
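The EKF-based estimation the abstract evaluates can be illustrated in its linear special case: a constant-velocity Kalman filter tracking a 1-D target from position fixes. An EKF would replace `F` and `H` with Jacobians of nonlinear motion/measurement models; all matrices and noise values below are our own illustrative choices, not the paper's.

```python
import numpy as np

def kalman_fuse(z_seq, dt=0.1, q=1e-3, r=0.25):
    """Track a 1-D target (position, velocity) from noisy position fixes."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.zeros(2)                          # state: [position, velocity]
    P = np.eye(2)                            # state covariance
    estimates = []
    for z in z_seq:
        x = F @ x                            # predict state
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)  # update with measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return estimates
```

Starting from a wrong initial velocity, the filter converges onto the true trajectory within a few tens of steps, which is the "fused data closer to the actual target motion" behavior the abstract reports.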
APA, Harvard, Vancouver, ISO, and other citation styles
6

Gu, Yingjie, and Ye Zhou. "Application of Virtual Reality Based on Multisensor Data Fusion in Theater Space and Installation Art". Mobile Information Systems 2022 (August 28, 2022): 1–8. http://dx.doi.org/10.1155/2022/4101910.

Full text of the source
Abstract:
The application of Virtual Reality (VR) in theater space and installation art is a general trend, as can be seen in large stage plays and installation art exhibitions. However, as current VR is not mature enough, it is difficult to perfectly fulfill the exhibition requirements of large theaters, so this paper aims to change this situation by using VR based on multisensor data fusion. In this paper, a multisensor data fusion algorithm is designed that improves the data transmission efficiency and latency of the VR system, so that VR can offer a better viewing experience in theater space and installation art. Through a questionnaire survey and interviews, the actual experiences of VR audiences in theater space and installation art are investigated. The experimental analysis shows that the proposed algorithm is highly reliable and can improve the experience of using VR. The interview and survey results show that the main applications of VR in theater space lie in three aspects: multiangle and all-round viewing, multiroute viewing, and human-machine interaction in art galleries. The application of VR in installation art is mainly reflected in the perception of installation materials.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Shen, Dongli. "Application of GIS and Multisensor Technology in Green Urban Garden Landscape Design". Journal of Sensors 2023 (March 27, 2023): 1–7. http://dx.doi.org/10.1155/2023/9730980.

Full text of the source
Abstract:
In order to solve the problem of the low definition of the original 3D virtual imaging system, the author proposes an application method of GIS and multisensor technology in green urban garden landscape design. A hardware design framework is formulated: an image collector is selected for image acquisition, the image is filtered and denoised by computer, the processed image is output through laser refraction, and a photoreceptor and a transparent transmission module are used for virtual imaging. A software design framework is then formulated: the collected image is denoised through convolutional neural network computation, the feature points of the original image are obtained by pixel grayscale calculation, and C language is used to configure and output the virtual imaging, completing the software design. Combining the hardware and software designs completes the 3D virtual imaging system for garden landscape design. A comparative experiment against the original system showed a significant improvement in clarity: the original system's clarity is 82%–85%, while the proposed system's image clarity is 85%–90%. In conclusion, the proposed method is more effective.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Lee, Wonjun, Hyung-Jun Lim, and Mun Sang Kim. "Development for Multisensor and Virtual Simulator–Based Automatic Broadcast Shooting System". International Journal of Digital Multimedia Broadcasting 2022 (July 16, 2022): 1–13. http://dx.doi.org/10.1155/2022/2724804.

Full text of the source
Abstract:
To overcome the complexity and limited repeatability of existing broadcast filming systems, a new broadcast filming system was developed. For Korean music broadcasts in particular, the shooting sequence is stage and lighting installation, rehearsal, lighting effect production, and the main shoot; this sequence is complex and involves many people. As the "untact" era emerged because of COVID-19, we developed an automatic shooting system that can produce the same effect with a minimum number of people. The developed system comprises a simulator. After building a stage in the simulator, dancers' movements during rehearsal are acquired using UWB and two-dimensional (2D) LiDAR sensors. By inserting the acquired movement data into the developed stage, a camera effect is produced using a virtual camera installed in the simulator. The camera effect comprises pan, tilt, and zoom, and a camera director creates lighting effects while evaluating the movements of the virtual dancers on the virtual stage. In this study, four cameras were used: three for camera pan, tilt, and zoom control, and a fourth as a fixed camera for full shots. Video shooting is performed according to the pan, tilt, and zoom values of the three cameras and the switcher data. To assess lighting effects, the video of dancers recorded during rehearsal is overlapped in the developed simulator with the video produced by the lighting director via the existing broadcast filming process. The lighting director assesses the overlapped video and then corrects parts that need to be corrected or emphasized. This method produced lighting effects better optimized for music and choreography than existing lighting effect production methods.
Finally, the performance and lighting effects of the developed simulator and system were confirmed by shooting K-pop performances using the pan, tilt, and zoom control plan, switcher sequence, and lighting effects of the selected cameras.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Oue, Mariko, Aleksandra Tatarevic, Pavlos Kollias, Dié Wang, Kwangmin Yu, and Andrew M. Vogelmann. "The Cloud-resolving model Radar SIMulator (CR-SIM) Version 3.3: description and applications of a virtual observatory". Geoscientific Model Development 13, no. 4 (April 21, 2020): 1975–98. http://dx.doi.org/10.5194/gmd-13-1975-2020.

Full text of the source
Abstract:
Abstract. Ground-based observatories use multisensor observations to characterize cloud and precipitation properties. One of the challenges is how to design strategies to best use these observations to understand these properties and evaluate weather and climate models. This paper introduces the Cloud-resolving model Radar SIMulator (CR-SIM), which uses output from high-resolution cloud-resolving models (CRMs) to emulate multiwavelength, zenith-pointing, and scanning radar observables and multisensor (radar and lidar) products. CR-SIM allows for direct comparison between an atmospheric model simulation and remote-sensing products using a forward-modeling framework consistent with the microphysical assumptions used in the atmospheric model. CR-SIM has the flexibility to easily incorporate additional microphysical modules, such as microphysical schemes and scattering calculations, and expand the applications to simulate multisensor retrieval products. In this paper, we present several applications of CR-SIM for evaluating the representativeness of cloud microphysics and dynamics in a CRM, quantifying uncertainties in radar–lidar integrated cloud products and multi-Doppler wind retrievals, and optimizing radar sampling strategy using observing system simulation experiments. These applications demonstrate CR-SIM as a virtual observatory operator on high-resolution model output for a consistent comparison between model results and observations to aid interpretation of the differences and improve understanding of the representativeness errors due to the sampling limitations of the ground-based measurements. CR-SIM is licensed under the GNU GPL package and both the software and the user guide are publicly available to the scientific community.
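The forward-modeling idea behind CR-SIM (computing radar observables from model microphysics so model and observation can be compared in the same space) can be illustrated with the simplest textbook case: a Rayleigh-regime radar reflectivity factor integrated over a binned drop size distribution. This is a generic sketch, not CR-SIM code.

```python
import numpy as np

def reflectivity_dbz(n_d, d_mm, dd_mm):
    """Rayleigh-regime radar reflectivity factor from a drop size distribution.

    Z = sum over bins of N(D) * D^6 * dD  (units mm^6 m^-3), returned in dBZ.
    n_d:   number concentration per diameter bin (m^-3 mm^-1)
    d_mm:  bin-center diameters (mm)
    dd_mm: bin widths (mm)
    """
    z_lin = np.sum(n_d * d_mm**6 * dd_mm)   # sixth-moment weighting
    return 10.0 * np.log10(z_lin)           # convert to logarithmic dBZ
```

A full simulator like CR-SIM generalizes this one-liner with the model's own microphysical assumptions, scattering calculations at multiple wavelengths, and instrument geometry, which is what makes the model-to-observation comparison consistent.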
APA, Harvard, Vancouver, ISO, and other citation styles
10

Bidaut, Luc. "Multisensor Imaging and Virtual Simulation for Assessment, Diagnosis, Therapy Planning, and Navigation". Simulation & Gaming 32, no. 3 (September 2001): 370–90. http://dx.doi.org/10.1177/104687810103200307.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Dissertations on the topic "Virtual multisensor"

1

Pasika, Hugh Joseph Christopher. "Neural network sensor fusion: creation of a virtual sensor for cloud-base height estimation". *McMaster only, 1999.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Morie, Jacquelyn Ford. "Meaning and emplacement in expressive immersive virtual environments". Thesis, University of East London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.532661.

Full text of the source
Abstract:
From my beginnings as an artist, my work has always been created with the goal of evoking strong emotional responses from those who experience it. I wanted to wrap my work around the viewers, to have it encompass them completely. When virtual reality came along, I knew I had found my true medium. I could design the space, bring people inside and see what they did there. I was always excited to see what the work would mean to them, what they brought to it, what I added, and what they took away.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Fasthén, Patrick. "The Virtual Self: Sensory-Motor Plasticity of Virtual Body-Ownership". Thesis, Högskolan i Skövde, Institutionen för biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-10501.

Full text of the source
Abstract:
The distinction between the sense of body-ownership and the sense of agency has attracted considerable empirical and theoretical interest lately. However, the respective contributions of multisensory and sensorimotor integration to these two varieties of body experience are still the subject of ongoing research. In this study, I examine the various methodological problems encountered in the empirical study of body-ownership and agency with the use of novel immersive virtual environment technology to investigate the interplay between sensory and motor information. More specifically, the focus is on testing the relative contributions and possible interactions of visual-tactile and visual-motor contingencies implemented under the same experimental protocol. The effect of this is supported by physiological measurements obtained from skin conductance responses and heart rate. The findings outline a relatively simple method for identifying the necessary and sufficient conditions for the experience of body-ownership and agency, as studied with immersive virtual environment technology.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Chung, Tak-yin Jason, and 鍾德賢. "The virtual multisensory room: supplementary effect on students with severe mental handicap in a special school". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29624101.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Taffou, Marine. "Inducing feelings of fear with virtual reality: the influence of multisensory stimulation on negative emotional experience". Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066622/document.

Full text of the source
Abstract:
In a natural environment, affective events often convey emotional cues through multiple sensory modalities. Yet, the effect of multisensory affective events on the conscious emotional experience (feelings) they induce remains relatively unexplored. The present research exploited the unique advantages of virtual reality techniques to examine the negative emotional experience induced by auditory-visual aversive events embedded in a natural context. In natural contexts, the spatial distance between the perceiver and the affective stimuli is an important factor. Consequently, this research investigated the relationship between affect, multisensory presentation and space. A first study using virtual reality tested the influence of auditory-visual aversive stimuli on negative emotional experience. A second study explored the effect of excessive fear on the representation of close space. A third study examined the effect of auditory-visual stimuli on negative emotional experience as a function of their location at close or far distances from the perceiver. Overall, it was found that negative emotional experience is modulated by the sensory and spatial characteristics of aversive events: multisensory aversive events amplify negative feelings only when they are located at close distances from the perceiver. Moreover, excessive fear related to an event extends the space wherein the event is represented as close. Taken together, the present research provides new information about affective processing and establishes virtual reality as a relevant tool for the study of human affect.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Nierula, Birgit. "Multisensory processing and agency in VR embodiment: Interactions through BCI and their therapeutic applications". Doctoral thesis, Universitat de Barcelona, 2017. http://hdl.handle.net/10803/461771.

Full text of the source
Abstract:
Body ownership refers to the experience that this body is my body and is closely linked to consciousness. Multisensory integration processes play an important role in body ownership, as shown in the rubber hand illusion, which induces the illusory experience that a rubber hand is part of one's own body. Illusions of body ownership can also be experienced in immersive virtual reality (VR), which was used in all three experiments of this thesis. The first experiment aimed to investigate some of the underlying mechanisms of body ownership; specifically, we were interested in whether the body ownership illusion fluctuates over time and, if so, whether these fluctuations are related to spontaneous brain activity. The second experiment investigated the relation between body ownership illusions and pain perception. Looking at one's own body has been demonstrated to have analgesic properties. This well-known effect on people's real hand has been studied in illusorily owned hands with contradictory results: it has been replicated in VR embodiment, but there are controversial findings in the rubber hand illusion. One crucial difference is that in VR the real and virtual hands can be colocated, while this is not possible in the rubber hand illusion. We were interested in whether the distance between the real and surrogate hand can explain the controversial findings in the literature. When people experience high levels of body ownership over a virtual body, they can also feel agency over the actions of that virtual body. Agency has been described as the result of a match between the predicted and actual sensory feedback of a planned motor action, a process involving motor areas. However, situations in which strong body ownership gives us an illusion of agency raise the question of the involvement of motor areas in the sense of agency.
In the third experiment of this thesis, we explored this question in the context of brain-computer interfaces (BCIs). Altogether, these experiments investigated the underlying processes of body ownership and its influence on pain perception and agency. The findings have implications for pain management and neurological rehabilitation.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Cooper, N. "The role of multisensory feedback in the objective and subjective evaluations of fidelity in virtual reality environments". Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3007774/.

Full text of the source
Abstract:
The use of virtual reality in academic and industrial research has been expanding rapidly in recent years; evaluations of the quality and effectiveness of virtual environments are therefore required. The assessment process is usually done through user evaluation measured while the user engages with the system. The limitations of this method, in terms of its variability and user bias from pre- and post-experience, have been recognised in the research literature. There is therefore a need to design more objective measures of system effectiveness that could complement subjective measures and provide a conceptual framework for fidelity assessment in VR. Many technological and perceptual factors can influence the overall experience in virtual environments. The focus of this thesis was to investigate how multisensory feedback, provided during VR exposure, can modulate a user's qualitative and quantitative experience in the virtual environment. In a series of experimental studies, the role of visual, audio, haptic and motion cues in objective and subjective evaluations of fidelity in VR was investigated. In all studies, objective measures of performance were collected and compared to subjective measures of user perception. The results showed that the explicit evaluation of environmental and perceptual factors available within VR environments modulated user experience. In particular, the results showed that a user's postural responses can be used as a basis for an objective measure of fidelity. Additionally, the role of augmented sensory cues was investigated during a manual assembly task. By recording and analysing the objective and subjective measures, it was shown that augmented multisensory feedback modulated the user's acceptance of the virtual environment in a positive manner and increased overall task performance.
Furthermore, the presence of augmented cues mitigated the negative effects of inaccurate motion tracking and simulation sickness. In a follow-up study, the beneficial effects of virtual training with augmented sensory cues were observed in the transfer of learning when the same task was performed in a real environment. Similarly, when the effects of six-degrees-of-freedom motion cuing on user experience were investigated in a high-fidelity flight simulator, consistent findings between objective and subjective data were recorded. By measuring the pilot's accuracy in following the desired path during a slalom manoeuvre while perceived task demand was increased, it was shown that motion cuing is related to effective task performance and modulates the levels of workload, sickness and presence. The overall findings revealed that multisensory feedback plays an important role in the overall perception and fidelity evaluation of VR systems, and as such user experience needs to be included when investigating the effectiveness of sensory feedback signals. Throughout this thesis it was consistently shown that subjective measures of user perception in VR are directly comparable to objective measures of performance, and therefore both should be used in order to obtain robust results when investigating the effectiveness of VR systems. This conceptual framework provides an effective method to study human perception, which can in turn provide a deeper understanding of the environmental and cognitive factors that influence the overall user experience, in terms of fidelity requirements, in virtual reality environments.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Morati, Nicolas. "Système de détection ultra-sensible et sélectif pour le suivi de la qualité de l'air intérieur et extérieur". Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0200.

Full text of the source
Abstract:
Today the air is polluted by many chemicals, which occur in complex mixtures that are difficult to identify. Marker gases for this pollution include carbon monoxide (CO), ozone (O3) and nitrogen dioxide (NO2). It has therefore become imperative to design detection systems that are inexpensive yet highly sensitive and selective, in order to monitor air quality in real time. Metal oxide (MOX) gas sensors can meet these requirements. They are widely used in portable, low-cost gas detection devices. Very sensitive, stable and long-lived, MOX sensors nonetheless suffer from an inherent lack of selectivity, which can be overcome by integrating artificial intelligence. This thesis is concerned with the implementation of gas identification methods based on the analysis of experimental data. The objective is to discriminate three pollution marker gases, CO, O3, and NO2, with a single sensor under real conditions of use, i.e. in the permanent presence of a concentration of these gases in humid ambient air. For this, we use a tungsten oxide (WO3) gas sensor patented by the IM2NP laboratory and operated under a worldwide license by the company NANOZ. A complete experimental database was created from a protocol based on temperature modulation of the sensitive layer. From this database, we implemented two different feature extraction methods: the computation of temporal attributes and the wavelet transform. These two methods were evaluated on their gas discrimination capacity using several families of classification algorithms, such as support vector machines (SVM), decision trees, K nearest neighbours, and neural networks.
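The temporal-attribute pipeline described in the thesis can be sketched as follows: extract a few scalar features from each temperature-modulated response curve and classify by a simple distance rule (here a nearest-centroid stand-in for the SVM/KNN classifiers mentioned). The feature set, gas labels, and synthetic curves are illustrative assumptions, not the thesis's actual protocol.

```python
import numpy as np

def temporal_features(resp, dt=1.0):
    """Simple temporal attributes of a sensor response curve."""
    peak = resp.max()                       # maximum response amplitude
    t_peak = resp.argmax() * dt             # time to maximum response
    area = resp.sum() * dt                  # integral of the response
    slope = np.max(np.diff(resp)) / dt      # steepest rise
    return np.array([peak, t_peak, area, slope])

def nearest_centroid(train_feats, train_labels, feat):
    """Classify a feature vector by its closest class centroid."""
    labels = sorted(set(train_labels))
    centroids = {
        c: np.mean([f for f, l in zip(train_feats, train_labels) if l == c], axis=0)
        for c in labels
    }
    return min(labels, key=lambda c: np.linalg.norm(feat - centroids[c]))
```

In practice the features would be computed per temperature-modulation cycle and fed to a trained SVM or KNN; the nearest-centroid rule above only illustrates that distinct gases produce separable response shapes.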
APA, Harvard, Vancouver, ISO, and other citation styles
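The pipeline summarized in the abstract above (wavelet features extracted from a temperature-modulated sensor response, then a classifier such as SVM or KNN) can be sketched in a few lines. The snippet below is a minimal illustration under invented data, not the method of the thesis: it uses hand-rolled Haar wavelet detail energies as features and a plain k-nearest-neighbour vote, with both synthetic response shapes made up for the example.

```python
import math


def haar_features(signal, levels=3):
    """Per-level Haar detail energies of a 1-D signal: a compact
    descriptor of where the signal's variation lives in scale."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        pairs = [(approx[i], approx[i + 1]) for i in range(0, len(approx) - 1, 2)]
        detail = [(a - b) / math.sqrt(2) for a, b in pairs]
        approx = [(a + b) / math.sqrt(2) for a, b in pairs]
        feats.append(sum(d * d for d in detail))  # detail energy at this scale
    feats.append(sum(a * a for a in approx))      # remaining approximation energy
    return feats


def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbour majority vote in feature space."""
    ranked = sorted(train, key=lambda item: sum((x - y) ** 2 for x, y in zip(item[0], query)))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)


def response(gas, phase):
    """Synthetic response over one temperature cycle; both shapes are
    invented stand-ins for gas-specific signatures, not real data."""
    if gas == "CO":  # broad bump
        return [math.exp(-((i - 20 - phase) / 8.0) ** 2) for i in range(64)]
    return [0.5 + 0.5 * math.sin(0.4 * i + phase) for i in range(64)]  # "NO2": oscillation


train = [(haar_features(response(g, p)), g) for g in ("CO", "NO2") for p in (0, 1, 2)]
print(knn_predict(train, haar_features(response("CO", 1.5))))  # classify a held-out curve
```

Because the Haar energies are largely insensitive to the small phase shifts, the held-out curve lands near its own class in feature space, which is the same property the temporal-attribute and wavelet features in the thesis are meant to exploit.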
9

Boumenir, Yasmine. „Spatial navigation in real and virtual urban environments: performance and multisensory processing of spatial information in sighted, visually impaired, late and congenitally blind individuals“. PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2011. http://tel.archives-ouvertes.fr/tel-00632703.

Full text of the source
Annotation:
Previous studies investigating how humans build reliable spatial knowledge representations allowing them to find their way from one point to another in complex environments have been focused on comparing the relative importance of the two-dimensional visual geometry of routes and intersections, multi-dimensional data from direct exposure with the real world, or verbal symbols and/or instructions. This thesis sheds further light on the multi-dimensional and multi-sensorial aspects by investigating how the cognitive processing of spatial information derived from different sources of sensory and higher order input influences the performance of human observers who have to find their way from memory through complex and non-familiar real-world environments. Three experiments in large-scale urban environments of the real world, and in computer generated representations of these latter (Google Street View), were run to investigate the influence of prior exposure to 2D visual or tactile maps of an itinerary, compared with a single direct experience or verbal instructions, on navigation performances in sighted and/or visually deficient individuals, and in individuals temporarily deprived of vision. Performances were analyzed in terms of time from departure to destination, number of stops, number of wrong turns, and success rates. Potential strategies employed by individuals during navigation and mental mapping abilities were screened on the basis of questionnaires and drawing tests. Subjective levels of psychological stress (experiment 2) were measured to bring to the fore possible differences between men and women in this respect. The results of these experiments show that 2D visual maps, briefly explored prior to navigation, generate better navigation performances compared with poorly scaled virtual representations of a complex real-world environment (experiment 1), the best performances being produced by a single prior exposure to the real-world itinerary. 
However, brief familiarization with a reliably scaled virtual representation of a non-familiar real-world environment (Google Street View) not only generates optimal navigation in computer generated testing (virtual reality), but also produces better navigation performances when tested in the real-world environment and compared with prior exposure to 2D visual maps (experiment 2). Congenitally blind observers (experiment 3) who have to find their way from memory through a complex non-familiar urban environment perform swiftly and with considerable accuracy after exposure to a 2D tactile map of their itinerary. They are also able to draw a visual image of their itinerary on the basis of the 2D tactile map exposure. Other visually deficient or sighted but blindfolded individuals seem to have greater difficulty in finding their way again than congenitally blind people, regardless of the type of prior exposure to their test itinerary. The findings of this work here are discussed in the light of current hypotheses regarding the presumed intrinsic nature of human spatial representations, replaced herein within a context of working memory models. It is suggested that multi-dimensional temporary storage systems, capable of processing a multitude of sensory input in parallel and with a much larger general capacity than previously considered in terms of working memory limits, need to be taken into account for future research.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Christou, Maria. „Enaction, interaction multisensorielle : théorie, technologie et expériences pour les arts numériques“. Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENS019/document.

Full text of the source
Annotation:
This interdisciplinary research lies at the intersection of cognitive science, computer science, and the arts. It addresses questions of perception and understanding of an artistic experience in the context of digital technologies. We regard the computer as a powerful creative tool and ask how its role can be functionally introduced into the digital arts. One key to answering this question lies in the notion of embodiment, an aspect of human perception and cognition that cannot be approached directly, since it is an emergent process constructed through action. In this thesis we identified four criteria for qualifying, and then attempting to evaluate, embodiment processes in creative situations, whether of reception alone or of reception and action. These criteria are: the coherence of the sensory feedback offered by the technological system, for example between sound and image, or between sound, gesture, and image; the nature of the action as perceived or performed; the participants' sense of cognitive immersion; and the evocative potential of the sensorimotor situation offered to perception and/or action. We implemented a qualitative method for analysing multisensory and interactive experiences. Open interviews yielded a corpus of audiovisual recordings and transcribed texts. One objective of these interviews was to encourage subjects to express how they experienced the situation prior to, or even beyond, any aesthetic judgement. This method was used in two types of situations.
In the first, we interviewed spectators who attended a concert given during the Journées d'Informatique Musicale in Grenoble. We selected seven audiovisual pieces by different authors, either works performed on stage or recorded works. The second case involved interviews with participants in an interactive audio-visual-haptic work entitled „Geste réel sur matière simulée“ (real gesture on simulated matter). This installation was designed within the Créativité Instrumentale project to study how digital interactive simulation technologies transform the creative process. It comprises three multisensory simulation scenes produced by physical modelling and allowing instrumental interaction. Interviews took place during and after the experience. Analysis of the collected discourse allowed us to highlight the relationship between the technological tool and the human. In this thesis we propose a theoretical framework composed of four elements, Coherence, Immersion, Action, and Evocation, with which we analysed the discourse of subjects confronted with active digital multisensory situations and thereby characterised embodiment in such situations. Using these four elements in discourse analysis revealed a multitude of links between them that vary with the parameters of the virtual scenes. Different mechanisms for understanding these scenes emerge depending on how the senses are stimulated, and our analyses allowed us to qualify how the visual, auditory, and haptic modalities, taken separately or together, make it possible to apprehend the different dimensions of the scene in its complexity.
The author did not provide an English summary.
APA, Harvard, Vancouver, ISO, and other citation styles

Books on the topic "Virtual multisensor"

1

Cheok, Adrian David, and Kasun Karunanayaka. Virtual Taste and Smell Technologies for Multisensory Internet and Virtual Reality. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73864-2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Sutcliffe, Alistair. Multimedia and virtual reality: Designing usable multisensory user interfaces. Mahwah, N.J: Lawrence Erlbaum, 2003.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Cheok, Adrian David, and Kasun Karunanayaka. Virtual Taste and Smell Technologies for Multisensory Internet and Virtual Reality. Springer, 2018.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Cheok, Adrian David, and Kasun Karunanayaka. Virtual Taste and Smell Technologies for Multisensory Internet and Virtual Reality. Springer, 2019.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Sutcliffe, Alistair. Multimedia and Virtual Reality: Designing Multisensory User Interfaces. Taylor & Francis Group, 2003.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Sutcliffe, Alistair. Multimedia and Virtual Reality: Designing Multisensory User Interfaces. Taylor & Francis Group, 2003.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Sutcliffe, Alistair. Multimedia and Virtual Reality: Designing Multisensory User Interfaces. Taylor & Francis Group, 2003.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Sutcliffe, Alistair. Multimedia and Virtual Reality: Designing Multisensory User Interfaces. Lawrence Erlbaum, 2003.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Sutcliffe, Alistair. Multimedia and Virtual Reality: Designing Multisensory User Interfaces. Taylor & Francis Group, 2003.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Sutcliffe, Alistair. Multimedia and Virtual Reality: Designing Multisensory User Interfaces. Taylor & Francis Group, 2003.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Book chapters on the topic "Virtual multisensor"

1

Pai, Dinesh K. „Multisensory Interaction: Real and Virtual“. In Springer Tracts in Advanced Robotics, 489–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11008941_52.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Uhl, Jakob C., Barbara Prodinger, Markus Murtinger, and Armin Brysch. „A Journey for All Senses: Multisensory VR for Pre-travel Destination Experiences“. In Information and Communication Technologies in Tourism 2024, 128–39. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_13.

Full text of the source
Annotation:
The rapid advancement of Virtual Reality (VR) technologies, bolstered by cutting-edge hardware, has ushered in a new era that blurs the lines between the physical and virtual realms. As opportunities for immersive information absorption in virtual worlds grow, the tourism industry faces escalating pressure to stay competitive. Although traditional VR mainly engages audio-visual senses, this study examines whether multisensory VR in the pre-travel phase enhances users’ sense of presence and technology acceptance. Employing a mixed-methods, between-subjects design, we conducted an experiment with 103 participants divided into a multisensory VR group and an audio-visual VR group. Our aim was to investigate the impact on the sense of ’being there,’ technology acceptance, and the relationship between increased presence and acceptance. Results from tourism professionals reveal no significant variation in physical presence between the two groups; however, the multisensory VR group showed a notable difference in self-presence. Our findings suggest that the inclusion of multisensory stimuli makes VR more approachable and user-friendly, leading to greater self-presence and technology acceptance.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Serafin, Stefania. „Audio in Multisensory Interactions: From Experiments to Experiences“. In Sonic Interactions in Virtual Environments, 305–18. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04021-4_10.

Full text of the source
Annotation:
In the real and virtual world, we usually experience sounds in combination with at least an additional modality, such as vision, touch or proprioception. Understanding how sound enhances, substitutes or modifies the way we perceive and interact with the world is an important element when designing interactive multimodal experiences. In this chapter, we present an overview of sound in a multimodal context, ranging from basic experiments in multimodal perception to more advanced interactive experiences in virtual reality.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Toet, Alexander, Tina Mioch, Simon N. B. Gunkel, Camille Sallaberry, Jan B. F. van Erp, and Omar Niamut. „Holistic Quality Assessment of Mediated Immersive Multisensory Social Communication“. In Virtual Reality and Augmented Reality, 209–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62655-6_13.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Richards-Rissetto, Heather, Kristy E. Primeau, David E. Witt, and Graham Goodwin. „Multisensory Experiences in Archaeological Landscapes—Sound, Vision, and Movement in GIS and Virtual Reality“. In Capturing the Senses, 179–210. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-23133-9_9.

Full text of the source
Annotation:
Archaeologists are employing a variety of digital tools to develop new methodological frameworks that combine computational and experiential approaches, which is leading to new multisensory research. In this article, we explore vision, sound, and movement at the ancient Maya city of Copan from a multisensory and multiscalar perspective, bridging concepts and approaches from different archaeological paradigms. Our methods and interpretations employ theory-inspired variables from proxemics and semiotics to develop a methodological framework that combines computation with sensory perception. Using GIS, 3D, and acoustic tools we create multisensory experiences in VR with spatial sound using an immersive headset (Oculus Rift) and touch controllers (for movement). The case study simulates the late eighth and early ninth-century landscape of the ancient Maya city of Copan to investigate the role of landscape in facilitating movement, sending messages, influencing social interaction, and structuring cultural events. We perform two simulations to begin to study the impact of vegetation on viewsheds and soundsheds of a stela at ancient Copan. Our objectives are twofold: (1) design and test steps towards developing a GIS computational approach to analyse the impact of vegetation within urban agrarian landscapes on viewsheds and soundsheds, and (2) explore the cultural significance of Stela 12 and, more generally, the role of synesthetic experience in ancient Maya society using a multisensory approach that incorporates GIS and VR.
APA, Harvard, Vancouver, ISO, and other citation styles
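The viewshed question raised in the abstract above, how vegetation changes what is visible from a given point, can be illustrated with a toy grid computation. The sketch below is a generic line-of-sight test on invented data, not the authors' GIS workflow: canopy height is simply added to terrain elevation, so a tall tree can block an otherwise clear sight line.

```python
def visible(elev, veg, obs, target, eye_height=1.6):
    """True if the target cell is visible from the observer cell.
    Vegetation height is added to the terrain, so canopy blocks sight."""
    (r0, c0), (r1, c1) = obs, target
    z0 = elev[r0][c0] + eye_height
    z1 = elev[r1][c1]
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for s in range(1, steps):
        t = s / steps
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        sight_z = z0 + t * (z1 - z0)          # height of the sight line at this cell
        if elev[r][c] + veg[r][c] > sight_z:  # surface pokes above the line
            return False
    return True


def viewshed(elev, veg, obs):
    """Boolean visibility map of every cell from one observer."""
    return [[visible(elev, veg, obs, (r, c)) for c in range(len(elev[0]))]
            for r in range(len(elev))]


# Flat 5 x 5 terrain with a single 5 m tree at the grid centre.
flat = [[0.0] * 5 for _ in range(5)]
trees = [[0.0] * 5 for _ in range(5)]
trees[2][2] = 5.0
vs = viewshed(flat, trees, (2, 0))
print(vs[2][4], vs[0][4])  # cell directly behind the tree vs. cell off to the side
```

Rerunning the same map with the vegetation grid zeroed out is the toy analogue of the chapter's vegetated-versus-cleared comparison.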
6

Diaconu, Alexandru, Flaviu Vreme, Henrik Sæderup, Hans Pauli Arnoldson, Patrick Stolc, Anthony L. Brooks, and Michael Boelstoft Holte. „An Interactive Multisensory Virtual Environment for Developmentally Disabled“. In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 406–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-06134-0_44.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Prodinger, Barbara, and Barbara Neuhofer. „Multisensory VR Experiences in Destination Management“. In Information and Communication Technologies in Tourism 2022, 162–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-94751-4_15.

Full text of the source
Annotation:
The rapid development of information and communication technologies (ICTs) and the high level of consumer acceptance have made it increasingly complex to retain loyal customers. Virtual Reality (VR) has become a solution that allows tourism providers to design technology-enhanced experiences along the entire customer journey. While most VR offerings focus on pre-travel experiences, the potential of VR in the post-travel phase is still little explored. Considering that multisensory tourism experiences contribute to memory formation, the multisensory extension of VR (4D VR) in post-travel experiences is of interest. Thus, through a quantitative field experiment, this study aims to detect what effect the stimulation of different senses during the use of VR has on the overall experience and how this influences the brand relationship quality. The results revealed elevated levels of technology acceptance, which consequently enhances the traveler’s overall VR experience. The multisensory component positively affects one realm of an experience in the area of escapism and thus correlates with the overall experience. However, there is no significant difference between 3D and 4D regarding the level of brand relationship quality. The study expands the literature on 4D VR experiences and supports tourism practitioners in the implementation to strengthen the relationship between a destination and its guests.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Landesz, Tamás, and Karine Sargsyan. „Future of Sex and Gender“. In Future of Business and Finance, 113–22. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-36382-5_10.

Full text of the source
Annotation:
The sex tech industry is set to experience significant changes as it grows in value and attracts more users. The industry is often divided into five main branches: remote sex, robots, immersive entertainment, virtual sex, and augmentation. Virtual reality (VR) technology has the potential to transform sexual experiences, allowing people to explore different identities and experiment with new sensations. Haptic and multisensory experiences will revolutionize virtual sex, and virtual sexology will enhance people's sexual skills. The COVID-19 pandemic has had an impact on sexual lives, with people turning to sexting and sex toys. The article explores the possibility of humans falling in love, marrying, having sex with robots, and merging with machines. VR contact lenses may even enable dream-based sex. However, the article notes that human touch and contact remain crucial in sexual experiences and technology cannot fully replace them. Despite new technological developments, the future of sex will continue to be about the pursuit of pleasure, while genuine human relationships will remain essential.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Reimann, Peter, and Andreas Schütze. „Sensor Arrays, Virtual Multisensors, Data Fusion, and Gas Sensor Data Evaluation“. In Springer Series on Chemical Sensors and Biosensors, 67–107. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/5346_2013_52.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Encalada, Patricio, Johana Medina, Santiago Manzano, Juan P. Pallo, Dennis Chicaiza, Carlos Gordón, Carlos Núñez, and Diego F. Andaluz. „Virtual Therapy System in a Multisensory Environment for Patients with Alzheimer’s“. In Advances in Intelligent Systems and Computing, 767–81. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29513-4_57.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Conference papers on the topic "Virtual multisensor"

1

Haskamp, Klaus, Markus Kästner, and Eduard Reithmeier. „Fast virtual shadow projection system as part of a virtual multisensor assistance system“. In SPIE Optical Metrology, edited by Bernd Bodermann. SPIE, 2011. http://dx.doi.org/10.1117/12.883026.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Nicholson, Denise, Kathleen Bartlett, Robert Hoppenfeld, Margaret Nolan, and Sae Schatz. „A virtual environment for modeling and testing sensemaking with multisensor information“. In SPIE Defense + Security, edited by Gerald C. Holst, Keith A. Krapels, Gary H. Ballard, James A. Buford, and R. Lee Murrer. SPIE, 2014. http://dx.doi.org/10.1117/12.2050780.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Lan, Gongjin, Jiaming Sun, Chengyang Li, Zebin Ou, Ziyun Luo, Jinhao Liang, and Qi Hao. „Development of UAV based virtual reality systems“. In 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). IEEE, 2016. http://dx.doi.org/10.1109/mfi.2016.7849534.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Tang, Xiaojun T., Weiping Li, and Junhua Liu. „Virtual instrument for calibrating of multisensor testing system based on LabWindows/CVI“. In Fifth International Symposium on Instrumentation and Control Technology, edited by Guangjun Zhang, Huijie Zhao, and Zhongyu Wang. SPIE, 2003. http://dx.doi.org/10.1117/12.521362.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Liu, Chang, Ruipeng Cao, Sa Jia, Yanan Zhang, Bo Wang, and Qingjie Zhao. „The PTZ tracking algorithms evaluation virtual platform system“. In 2014 International Conference on Multisensor Fusion and Information Integration for Intelligent Systems (MFI). IEEE, 2014. http://dx.doi.org/10.1109/mfi.2014.6997643.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Sacharny, David, Thomas C. Henderson, and Vista Marston. „On-Demand Virtual Highways for Dense UAS Operations“. In 2021 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). IEEE, 2021. http://dx.doi.org/10.1109/mfi52462.2021.9591196.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Reimann, P., A. Dausend, S. Darsch, M. Schüler, A. Schütze, and Perena Gouma. „Improving MOS Virtual Multisensor Systems by Combining Temperature Cycled Operation with Impedance Spectroscopy“. In OLFACTION AND ELECTRONIC NOSE: PROCEEDINGS OF THE 14TH INTERNATIONAL SYMPOSIUM ON OLFACTION AND ELECTRONIC NOSE. AIP, 2011. http://dx.doi.org/10.1063/1.3626378.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Wu, Qingcong, and Xingsong Wang. „Development of an upper limb exoskeleton for rehabilitation training in virtual environment“. In 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). IEEE, 2017. http://dx.doi.org/10.1109/mfi.2017.8170425.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Bauer, Johannes, Jorge Davila-Chacon, Erik Strahl, and Stefan Wermter. „Smoke and mirrors — Virtual realities for sensor fusion experiments in biomimetic robotics“. In 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2012). IEEE, 2012. http://dx.doi.org/10.1109/mfi.2012.6343022.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Hoher, Patrick, Johannes Reuter, Felix Govaers, and Wolfgang Koch. „Extended Object Tracking and Shape Classification using Random Matrices and Virtual Measurement Models“. In 2023 IEEE Symposium Sensor Data Fusion and International Conference on Multisensor Fusion and Integration (SDF-MFI). IEEE, 2023. http://dx.doi.org/10.1109/sdf-mfi59545.2023.10361348.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Reports by organizations on the topic "Virtual multisensor"

1

Mills, Kathy, Elizabeth Heck, Alinta Brown, Patricia Funnell, and Lesley Friend. Senses together : Multimodal literacy learning in primary education : Final project report. Institute for Learning Sciences and Teacher Education, Australian Catholic University, 2023. http://dx.doi.org/10.24268/acu.8zy8y.

Full text of the source
Annotation:
[Executive summary] Literacy studies have traditionally focussed on the seen. The other senses are typically under-recognised in literacy studies and research, where the visual sense has been previously prioritised. However, spoken and written language, images, gestures, touch, movement, and sound are part of everyday literacy practices. Communication is no longer focussed on visual texts but is a multisensory experience. Effective communication depends then on sensory orchestration, which unifies the body and its senses. Understanding sensory orchestration is crucial to literacy learning in the 21st century where the combination of multisensory practices is both digital and multimodal. Unfortunately, while multimodal literacy has become an increasing focus in school curriculum, research has still largely remained focussed on the visual. The Sensory Orchestration for Multimodal Literacy Learning in Primary Education project, led by ARC Future Fellow Professor Kathy Mills, sought to address this research deficit. In addressing this gap, the project built an evidence base for understanding how students become critical users of sensory techniques to communicate through digital, virtual, and augmented-reality texts. The project has contributed to the development of new multimodal literacy programs and a next-generation approach to multimodality through the utilisation of innovative sensorial education programs in various educational environments including primary schools, digital labs, and art museums.
APA, Harvard, Vancouver, ISO, and other citation styles