Dissertations / Theses on the topic 'Virtual multisensor'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 17 dissertations / theses for your research on the topic 'Virtual multisensor.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.
Pasika, Hugh Joseph Christopher. "Neural network sensor fusion : creation of a virtual sensor for cloud-base height estimation." Thesis, McMaster University, 1999 (access restricted to McMaster).
Morie, Jacquelyn Ford. "Meaning and emplacement in expressive immersive virtual environments." Thesis, University of East London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.532661.
Fasthén, Patrick. "The Virtual Self : Sensory-Motor Plasticity of Virtual Body-Ownership." Thesis, Högskolan i Skövde, Institutionen för biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-10501.
Chung, Tak-yin Jason, and 鍾德賢. "The virtual multisensory room: supplementary effect on students with severe mentally handicap in a special school." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29624101.
Taffou, Marine. "Inducing feelings of fear with virtual reality : the influence of multisensory stimulation on negative emotional experience." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066622/document.
In a natural environment, affective events often convey emotional cues through multiple sensory modalities. Yet the effect of multisensory affective events on the conscious emotional experience (feelings) they induce remains relatively unexplored. The present research exploited the unique advantages of virtual reality techniques to examine the negative emotional experience induced by auditory-visual aversive events embedded in a natural context. In natural contexts, the spatial distance between the perceiver and the affective stimuli is an important factor. Consequently, this research investigated the relationship between affect, multisensory presentation and space. A first study using virtual reality tested the influence of auditory-visual aversive stimuli on negative emotional experience. A second study explored the effect of excessive fear on the representation of close space. A third study examined the effect of auditory-visual stimuli on negative emotional experience as a function of their location at close or far distances from the perceiver. Overall, it was found that negative emotional experience is modulated by the sensory and spatial characteristics of aversive events. Multisensory aversive events amplify negative feelings only when they are located at close distances from the perceiver. Moreover, excessive fear related to an event extends the space wherein the event is represented as close. Taken together, the present research provides new information about affective processing and establishes virtual reality as a relevant tool for the study of human affect.
Nierula, Birgit. "Multisensory processing and agency in VR embodiment: Interactions through BCI and their therapeutic applications." Doctoral thesis, Universitat de Barcelona, 2017. http://hdl.handle.net/10803/461771.
Cooper, N. "The role of multisensory feedback in the objective and subjective evaluations of fidelity in virtual reality environments." Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3007774/.
Morati, Nicolas. "Système de détection ultra-sensible et sélectif pour le suivi de la qualité de l'air intérieur et extérieur." Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0200.
Today the air is polluted by many chemicals, present as a complex mixture that is difficult to identify. Among the marker gases of this pollution are carbon monoxide (CO), ozone (O3) and nitrogen dioxide (NO2). It has therefore become imperative to design detection systems that are inexpensive yet highly sensitive and selective, in order to monitor air quality in real time. Metal oxide (MOX) gas sensors can meet these requirements. They are used in portable and low-cost gas detection devices. Very sensitive, stable and long-lived, MOX sensors nevertheless suffer from an inherent lack of selectivity, which can be overcome by integrating artificial intelligence. This thesis is concerned with the implementation of gas identification methods based on the analysis of experimental data. The objective is to discriminate three pollution marker gases: CO, O3 and NO2, with a single sensor, under real conditions of use, i.e. in the permanent presence of a concentration of these gases in humid ambient air. For this, we use a tungsten oxide (WO3) gas sensor patented by the IM2NP laboratory and operated under a worldwide license by the company NANOZ. A complete experimental database was created from a protocol based on temperature modulation of the sensitive layer. From this database, we implemented two different feature-extraction methods: the computation of temporal attributes and the wavelet transform. These two methods were evaluated on their gas-discrimination capacity using several families of classification algorithms, such as support vector machines (SVM), decision trees, k-nearest neighbours and neural networks.
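By way of illustration, a minimal sketch of the kind of pipeline this abstract describes: wavelet-based feature extraction on a sensor-response signal, followed by SVM classification. The synthetic data, wavelet choice and parameters below are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch: wavelet-energy features + SVM for three-gas discrimination.
# Placeholder data; wavelet, level and kernel are assumed choices.
import numpy as np
import pywt  # PyWavelets
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_features(signal, wavelet="db4", level=3):
    """Energy of each wavelet sub-band of one sensor-response cycle."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Placeholder database: one temperature-modulation cycle per row,
# labels 0/1/2 standing for CO, O3, NO2.
rng = np.random.default_rng(seed=0)
X_raw = rng.normal(size=(300, 512))
y = rng.integers(0, 3, size=300)

X = np.array([wavelet_features(row) for row in X_raw])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

On real data, each row would be one measured response under the temperature-modulation protocol; with random placeholders the accuracy hovers near chance, as expected.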
Boumenir, Yasmine. "Spatial navigation in real and virtual urban environments: performance and multisensory processing of spatial information in sighted, visually impaired, late and congenitally blind individuals." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2011. http://tel.archives-ouvertes.fr/tel-00632703.
Christou, Maria. "Enaction, interaction multisensorielle : théorie, technologie et expériences pour les arts numériques." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENS019/document.
The author did not provide an English abstract.
Cunio, Rachel J. "Spatialized Auditory and Vibrotactile Cueing for Dynamic Three-Dimensional Visual Search." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright155912105678525.
Bressolette, Benjamin. "Manipulations gestuelles d'un objet virtuel sonifié pour le contrôle d'une interface en situation de conduite." Thesis, Ecole centrale de Marseille, 2018. http://www.theses.fr/2018ECDM0009/document.
Car manufacturers offer a wide range of secondary driving controls, such as GPS, music or ventilation, often grouped on a central touch-sensitive screen. However, operating them while driving proves to be unsafe: engaging the sense of sight for interface interaction can reduce vigilance towards the driving task, which can lead to high-risk situations. In this PhD thesis, part of a collaborative research project involving the PSA Group and the PRISM laboratory, we aim to provide a gesture-sound association as an alternative to visual solicitation. The goal is to enable blind interface interactions, allowing the driver to keep their eyes on the road. When interface manipulations and the driving task are performed jointly, a multisensory solicitation can lower the driver's cognitive load compared with a visual unimodal situation. To make the gesture-sound association feel more natural, a virtual object that can be handled with gestures is introduced. This object is the support for sonification strategies, constructed by analogy with sounds from our environment that are the consequence of an action on an object. The virtual object also makes it possible to structure different gestures around the same metaphor, or to redefine the interface's menu. The first part of this thesis deals with the development of sonification strategies intended to inform users about the virtual object's dynamics; a toy example of this idea follows the abstract. Two perceptual experiments were set up, which led to the discrimination of two valuable sonification strategies. In a second part, the automotive application was addressed by designing new sound stimuli and the interface, and by studying multisensory integration. Sounds were proposed for each of the two sonification strategies in order to progress towards in-vehicle integration. The evocations brought by the gesture-sound association were the subject of a third, blinded perceptual experiment: the concepts around the virtual object were unknown to the subjects and discovered gradually. The mental images conveyed by the sonification strategies can help users familiarize themselves with the interface. A fourth perceptual experiment focused on first-time users handling the virtual object, studying the integration of audio-visual stimuli in the context of an interface manipulation. The experimental conditions were similar to a driver first discovering the interface in a parked vehicle through audio-visual stimuli, and then operating it through sonification strategies only. The results of this experiment led to the design of a gestural interface, which was compared with a touchscreen interface in a final perceptual experiment carried out in a driving simulator. Although the results show better performance for the tactile interface, the combination of gestures and sounds proved effective from the cognitive-load point of view. The gestural interface can therefore offer a promising alternative or complement to tactile interfaces for safe use while driving.
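As a toy illustration of sonifying an object's dynamics (and only that; the thesis's actual strategies were derived from its own perceptual experiments), one can map the virtual object's velocity to the pitch of a short sound burst, by analogy with everyday action-sounds. All parameter values are assumptions for demonstration.

```python
# Toy sonification: the faster the virtual object moves, the higher the
# pitch of a short sine burst (an assumed mapping, for illustration only).
import numpy as np

SAMPLE_RATE = 44100  # Hz

def sonify_velocity(velocity, duration=0.1, f_min=200.0, f_max=1000.0, v_max=2.0):
    """Return one audio burst whose pitch rises with the object's speed."""
    speed = min(abs(velocity), v_max)
    freq = f_min + (f_max - f_min) * speed / v_max
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return np.sin(2.0 * np.pi * freq * t)

# A decelerating object yields bursts of falling pitch.
bursts = [sonify_velocity(v) for v in (2.0, 1.2, 0.5, 0.1)]
print(len(bursts), "bursts of", bursts[0].size, "samples each")
```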
Stratulat, Anca. "Etude des interactions multi-sensorielle pour la perception des mouvements du véhicule en simulateur dynamique : contribution de l'illusion somatogravique à l'immersion en environnement virtuel." Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX22139.
Driving simulators allow the exploration of research questions that are difficult to reach in normal conditions, like the integration of different sensory inputs (visual, vestibular and somesthetic) for the perception of self-motion. In spite of their complexity, driving simulators do not produce a realistic sensation of driving, especially for braking and turning. This is due to their mechanical limitations. As a consequence, driving simulators' motion algorithms rely on the tilt-coordination technique, which tilts the cabin so that the driver's gravity vector is oriented in the same way as the gravito-inertial acceleration (GIA) during a linear acceleration. This technique exploits the tilt-translation ambiguity of the vestibular system and is used on dynamic driving simulators in combination with linear translations, in a so-called washout algorithm, to produce a sensation of linear acceleration. The aim of the present research is to understand how humans use multiple sensory signals (vestibular, visual and somatosensory) during the perception of linear acceleration on a driving simulator. The conducted experiments show that the perception of motion depends on the manner in which tilt and translation are used together to provide a unified percept of linear acceleration. Further, our results show that there is an important difference in how humans perceive accelerations and decelerations. For braking, the most realistic tilt/translation ratio depends on the level of deceleration. For acceleration, the motion is generally overestimated and depends on the level of acceleration, but not on the variation of the tilt/translation ratio. The results suggest that visual, vestibular and proprioceptive cues are integrated in an optimal Bayesian fashion. In conclusion, it is not advisable to use a washout algorithm without taking into account the non-linearity of human perception. We propose an empirically derived, data-driven fitting model that describes the relationship between tilt, translation and the desired level of acceleration or deceleration. This model is intended as a supplement to motion-cueing algorithms that should improve the realism of driving simulations.
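The "optimal Bayesian fashion" mentioned here is commonly formalized as maximum-likelihood cue combination, in which each sensory estimate is weighted by its inverse variance. A minimal sketch, with made-up numbers purely for illustration:

```python
# Sketch: maximum-likelihood (inverse-variance) fusion of Gaussian cues,
# the standard formalization of "optimal Bayesian" cue integration.
import numpy as np

def fuse_cues(means, variances):
    """Fuse independent Gaussian estimates of the same quantity."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Made-up acceleration estimates (m/s^2) from visual, vestibular and
# somatosensory channels; the most reliable cue dominates the percept.
mean, var = fuse_cues([1.0, 1.4, 0.8], [0.05, 0.20, 0.40])
print(f"fused estimate: {mean:.2f} m/s^2 (variance {var:.3f})")
```

Note that the fused variance is always smaller than the best single cue's, which is why multisensory percepts are more reliable than unimodal ones.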
Edmunds, Timothy. "Improving interactive multisensory simulation and rendering through focus on perceptual processes." 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000050507.
Mishra, A., A. Shukla, Nripendra P. Rana, and Y. K. Dwivedi. "From ‘touch’ to a ‘multisensory’ experience: The impact of technology interface and product type on consumer responses." 2020. http://hdl.handle.net/10454/18169.
Online retailers are increasingly using augmented reality (AR) and virtual reality (VR) technologies to solve mental and physical intangibility issues in product evaluation. Moreover, the technologies are easily available and accessible to consumers via their smartphones. The authors conducted three experiments to examine consumer responses to technology interfaces (AR/VR and mobile apps) for hedonic and utilitarian products. The results show that AR is easier to use (vs. an app), and users find AR more responsive when buying a hedonic (vs. utilitarian) product. Touch-interface users are likely to have a more satisfying experience and greater recommendation intentions, compared with AR, when buying utilitarian products. In contrast, a multisensory environment (AR) results in a better user experience for purchasing a hedonic product. Moreover, multisensory technologies lead to higher visual appeal, emotional appeal, and purchase intentions. The research contributes to the literature on computer-mediated interactions in a multisensory environment and proposes actionable recommendations to online marketers.
The full text of this article will be released for public view at the end of the publisher embargo on 04 Dec 2022.
Villar, Joana Andrade Dias Posser. "How multisensory experiences in virtual environments affect intention to return: the role of cognitive flexibility, sense of power and personality traits." Master's thesis, 2019. http://hdl.handle.net/10071/19439.
The technological advances achieved every day are creating opportunities for companies. Virtual Reality has been the focus of several studies. However, virtual reality has been criticized for the absence of certain senses, such as touch, smell and taste. In this study, we compare two sensory experiences, one engaging the senses of sight and hearing and the other the senses of sight, hearing and smell. The experiences take place in a virtual café, and we measure their impact on intention to return. The choice of the different sensory experiences was based on the concepts of proximal and distal senses. For the purpose of this thesis, the concepts of sense of power and cognitive flexibility are also analysed, and personality traits are introduced as a moderator of this relationship. This study concludes that multisensory experiences in a virtual environment have no impact on intention to return. However, the study also concludes that cognitive flexibility has a positive impact on sense of power, and that personality traits play a moderating role in the relationship between the two. Furthermore, this study suggests that the idea that the senses can be psychologically closer or more distant, based on the physical distance normally required for a stimulus to be sensed, also holds in virtual reality environments.
Manghisi, Vito Modesto. "Operator 4.0: Industrial Augmented Reality, Interfaces and Ergonomics." Doctoral thesis, 2019. http://hdl.handle.net/11589/161097.
The German program Industry 4.0 and the corresponding international initiatives will continue to transform the industrial workforce and their work environment through 2025. In parallel with the evolution of the industry, the history of operators' interaction with various industrial and digital production technologies can be summarized as a generational evolution towards the Operator 4.0 generation. This work aims at applying the enabling technologies of Industry 4.0 to design and develop methods and applications supporting the Operator 4.0 with respect to three of his/her eight facets: the Augmented Operator, the Virtual Operator, and the Healthy Operator. In Chapter 1, we introduce the research carried out in the IAR field. We describe Augmented Reality (AR) technology and its application in the field of Industrial Augmented Reality (IAR). In Chapter 2, we present a Spatial Augmented Reality (SAR) workbench prototype designed in the early stage of this research, and we describe the experiments carried out to validate its efficiency as a support for the Operator 4.0. In Chapter 3, we describe the experiments carried out to optimize the legibility of text shown in AR interfaces for optical see-through displays. In this research, we propose novel indices extracted from background images displayed on an LCD screen, and we compare them with those proposed in the literature by designing a specific user test. In Chapter 4, we present an AR framework for handheld devices that supports users in the comprehension of plant information traditionally conveyed through printed Piping and Instrumentation Diagrams (P&ID). In Chapter 5, we describe the research carried out in the field of HMI related to the use of Natural User Interfaces in Virtual Reality. We designed and developed a gesture interface for the navigation of virtual tours made up of spherical images, and we compared it with a classical mouse-controlled interface to evaluate its effectiveness in terms of user acceptance and user engagement. In Chapter 6, we describe a general framework to design a mid-air gesture vocabulary for the navigation of technical instructions in digital manuals for maintenance operations. A validation procedure is also proposed and used to compare gesture vocabularies in terms of fatigue and cognitive load. In Chapter 7, we address the facet of the Healthy Operator. We describe the design and development of a semi-automatic software tool able to monitor operator ergonomics on the shop floor by assessing Rapid Upper Limb Assessment (RULA) metrics. We describe the design and development of our software prototype, K2RULA, based on a low-cost sensor, the Microsoft Kinect v2 depth camera. Subsequently, we validate our tool with two experiments. In the first one, we compare the K2RULA grand-scores with those obtained with a reference optical motion-capture system. In the second experiment, we evaluate the agreement of the grand-scores returned by the proposed application with those obtained by a RULA expert rater. Finally, we draw our conclusions regarding the work carried out and map out a path for the future development of our research in these fields.
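As an illustration of the kind of computation such a tool performs, the sketch below scores the RULA upper-arm posture from three 3-D joint positions, simplified to flexion only (the full RULA method also handles extension, abduction and shoulder raising). The joint coordinates and function design are assumptions for demonstration, not K2RULA's actual code.

```python
# Simplified RULA step 1: score upper-arm flexion from three joint
# positions, ignoring the extension/abduction corrections of the full method.
import numpy as np

def angle_deg(u, v):
    """Angle between two 3-D vectors, in degrees."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def rula_upper_arm_score(shoulder, elbow, hip):
    """Band the upper-arm flexion angle into the RULA 1-4 score."""
    trunk_down = np.asarray(hip) - np.asarray(shoulder)
    upper_arm = np.asarray(elbow) - np.asarray(shoulder)
    flexion = angle_deg(trunk_down, upper_arm)
    if flexion < 20:
        return 1
    if flexion < 45:
        return 2
    if flexion < 90:
        return 3
    return 4

# Made-up joint coordinates (metres), e.g. from a depth-camera skeleton:
print(rula_upper_arm_score(shoulder=[0.0, 1.40, 0.0],
                           elbow=[0.10, 1.15, 0.20],
                           hip=[0.0, 0.90, 0.0]))  # -> 2
```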