To see the other types of publications on this topic, follow the link: Virtual multisensor.

Dissertations on the topic "Virtual multisensor"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Get acquainted with the top 17 dissertations for research on the topic "Virtual multisensor".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Browse dissertations from a variety of disciplines and compile your bibliography correctly.

1

Pasika, Hugh Joseph Christopher. "Neural network sensor fusion: creation of a virtual sensor for cloud-base height estimation." *McMaster only, 1999.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Morie, Jacquelyn Ford. "Meaning and emplacement in expressive immersive virtual environments". Thesis, University of East London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.532661.

Full text of the source
Annotation:
From my beginnings as an artist, my work has always been created with the goal of evoking strong emotional responses from those who experience it. I wanted to wrap my work around the viewers, to have it encompass them completely. When virtual reality came along, I knew I had found my true medium. I could design the space, bring people inside and see what they did there. I was always excited to see what the work would mean to them, what they brought to it, what I added, and what they took away.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Fasthén, Patrick. "The Virtual Self: Sensory-Motor Plasticity of Virtual Body-Ownership". Thesis, Högskolan i Skövde, Institutionen för biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-10501.

Full text of the source
Annotation:
The distinction between the sense of body-ownership and the sense of agency has attracted considerable empirical and theoretical interest lately. However, the respective contributions of multisensory and sensorimotor integration to these two varieties of body experience are still the subject of ongoing research. In this study, I examine the various methodological problems encountered in the empirical study of body-ownership and agency with the use of novel immersive virtual environment technology to investigate the interplay between sensory and motor information. More specifically, the focus is on testing the relative contributions and possible interactions of visual-tactile and visual-motor contingencies implemented under the same experimental protocol. The effect of this is supported by physiological measurements obtained from skin conductance responses and heart rate. The findings outline a relatively simple method for identifying the necessary and sufficient conditions for the experience of body-ownership and agency, as studied with immersive virtual environment technology.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Chung, Tak-yin Jason, and 鍾德賢. "The virtual multisensory room: supplementary effect on students with severe mentally handicap in a special school". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29624101.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Taffou, Marine. "Inducing feelings of fear with virtual reality: the influence of multisensory stimulation on negative emotional experience". Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066622/document.

Full text of the source
Annotation:
In a natural environment, affective events often convey emotional cues through multiple sensory modalities. Yet, the effect of multisensory affective events on the conscious emotional experience (feelings) they induce remains relatively unexplored. The present research exploited the unique advantages of virtual reality techniques to examine the negative emotional experience induced by auditory-visual aversive events embedded in a natural context. In natural contexts, the spatial distance between the perceiver and the affective stimuli is an important factor. Consequently, this research investigated the relationship between affect, multisensory presentation, and space. A first study using virtual reality tested the influence of auditory-visual aversive stimuli on negative emotional experience. A second study explored the effect of excessive fear on the representation of close space. A third study examined the effect of auditory-visual stimuli on negative emotional experience as a function of their location at close or far distances from the perceiver. Overall, it was found that negative emotional experience is modulated by the sensory and spatial characteristics of aversive events. Multisensory aversive events amplify negative feelings only when they are located at close distances from the perceiver. Moreover, excessive fear related to an event extends the space wherein the event is represented as close. Taken together, the present research provides new information about affective processing and exposes virtual reality as a relevant tool for the study of human affect.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Nierula, Birgit. "Multisensory processing and agency in VR embodiment: Interactions through BCI and their therapeutic applications". Doctoral thesis, Universitat de Barcelona, 2017. http://hdl.handle.net/10803/461771.

Full text of the source
Annotation:
Body ownership refers to the experience that this body is my body and is closely linked to consciousness. Multisensory integration processes play an important role in body ownership, as shown in the rubber hand illusion, which induces the illusory experience that a rubber hand is part of one's own body. Illusions of body ownership can also be experienced in immersive virtual reality (VR), which was used in all three experiments of this thesis. The first experiment aimed at investigating some of the underlying mechanisms of body ownership. Specifically, we were interested in whether the body ownership illusion fluctuates over time and, if so, whether these fluctuations are related to spontaneous brain activity. The second experiment aimed at investigating the relation between body ownership illusions and pain perception. Looking at one's own body has been demonstrated to have analgesic properties. This well-known effect on people's real hand has been studied in illusorily owned hands with contradictory results: it has been replicated in VR embodiment, but there are controversial findings in the rubber hand illusion. One crucial difference between the rubber hand illusion and VR embodiment is that in VR the real and virtual hands can be colocated, while this is not possible in the rubber hand illusion. We were interested in whether the distance between the real and surrogate hand can explain the controversial findings in the literature. When people experience high levels of body ownership over a virtual body, they can also feel agency over the actions of that virtual body. Agency has been described as the result of a match between the predicted and actual sensory feedback of a planned motor action, a process involving motor areas. However, situations in which strong body ownership gives us the illusion of agency raise the question of the involvement of motor areas in the sense of agency.
In the third experiment of this thesis we explored this question in the context of brain-computer interfaces (BCI). Altogether, these experiments investigated the underlying processes of body ownership and its influence on pain perception and agency. The findings have implications for pain management and neurological rehabilitation.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Cooper, N. "The role of multisensory feedback in the objective and subjective evaluations of fidelity in virtual reality environments". Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3007774/.

Full text of the source
Annotation:
The use of virtual reality in academic and industrial research has been expanding rapidly in recent years; evaluations of the quality and effectiveness of virtual environments are therefore required. The assessment process is usually done through user evaluation measured while the user engages with the system. The limitations of this method, in terms of its variability and user bias pre- and post-experience, have been recognised in the research literature. There is therefore a need to design more objective measures of system effectiveness that could complement subjective measures and provide a conceptual framework for fidelity assessment in VR. Many technological and perceptual factors can influence the overall experience in virtual environments. The focus of this thesis was to investigate how multisensory feedback, provided during VR exposure, can modulate a user's qualitative and quantitative experience in the virtual environment. In a series of experimental studies, the role of visual, audio, haptic and motion cues in objective and subjective evaluations of fidelity in VR was investigated. In all studies, objective measures of performance were collected and compared to subjective measures of user perception. The results showed that the explicit evaluation of environmental and perceptual factors available within VR environments modulated user experience. In particular, the results showed that a user's postural responses can be used as a basis for an objective measure of fidelity. Additionally, the role of augmented sensory cues was investigated during a manual assembly task. By recording and analysing the objective and subjective measures, it was shown that augmented multisensory feedback modulated the user's acceptability of the virtual environment in a positive manner and increased overall task performance.
Furthermore, the presence of augmented cues mitigated the negative effects of inaccurate motion tracking and simulation sickness. In a follow-up study, the beneficial effects of virtual training with augmented sensory cues were observed in the transfer of learning when the same task was performed in a real environment. Similarly, when the effects of six-degrees-of-freedom motion cuing on user experience were investigated in a high-fidelity flight simulator, consistent findings between objective and subjective data were recorded. By measuring the pilot's accuracy in following the desired path during a slalom manoeuvre while perceived task demand was increased, it was shown that motion cuing is related to effective task performance and modulates the levels of workload, sickness and presence. The overall findings revealed that multisensory feedback plays an important role in the overall perception and fidelity evaluation of VR systems, and as such user experience needs to be included when investigating the effectiveness of sensory feedback signals. Throughout this thesis it was consistently shown that subjective measures of user perception in VR are directly comparable to objective measures of performance, and therefore both should be used in order to obtain robust results when investigating the effectiveness of VR systems. This conceptual framework can provide an effective method to study human perception, which can in turn provide a deeper understanding of the environmental and cognitive factors that influence the overall user experience, in terms of fidelity requirements, in virtual reality environments.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Morati, Nicolas. "Système de détection ultra-sensible et sélectif pour le suivi de la qualité de l'air intérieur et extérieur". Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0200.

Full text of the source
Annotation:
Today the air is polluted by many chemicals, which occur as a complex mixture that is difficult to identify. Marker gases for this pollution include carbon monoxide (CO), ozone (O3) and nitrogen dioxide (NO2). It has therefore become imperative to design detection systems that are inexpensive, yet highly sensitive and selective, in order to monitor air quality in real time. Metal-oxide gas sensors (MOX) can meet these requirements. They are used in portable and low-cost gas detection devices. Very sensitive, stable and long-lived, MOX sensors nevertheless suffer from an inherent lack of selectivity, which can be overcome by integrating artificial intelligence. This thesis is concerned with the implementation of gas identification methods based on the analysis of experimental data. The objective is to discriminate three pollution marker gases, CO, O3, and NO2, with a single sensor, under real conditions of use, i.e. in the permanent presence of a concentration of these gases in humid ambient air. For this, we use a tungsten oxide (WO3) gas sensor patented by the IM2NP laboratory and operated under a worldwide license by the company NANOZ. A complete experimental database was created from a protocol based on temperature modulation of the sensitive layer. From this database, we implemented two different feature-extraction methods: the computation of temporal attributes and the wavelet transform. These two methods were evaluated on their gas discrimination capacity using several families of classification algorithms, such as support vector machines (SVM), decision trees, K nearest neighbours, and neural networks.
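As an aside, the kind of pipeline this abstract describes (temporal attributes extracted from sensor-response curves, then scored with SVM and KNN classifiers) can be sketched minimally. This is not the thesis's actual data or code: the response curves, amplitudes, and time constants below are synthetic stand-ins chosen only for illustration.

```python
# Illustrative sketch, not the cited thesis's pipeline: classify three
# synthetic 'gas' response curves from simple temporal attributes.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def temporal_features(curve):
    """Toy temporal attributes: peak value, mean value, time to half maximum."""
    half_rise = int(np.argmax(curve > 0.5 * curve.max()))
    return [curve.max(), curve.mean(), half_rise]

# Synthetic saturating response curves; amplitude and time constant per class
# are hypothetical, chosen only to make the three classes separable.
X, y = [], []
t = np.arange(100.0)
for label, (amp, tau) in enumerate([(1.0, 10.0), (2.0, 30.0), (0.5, 5.0)]):
    for _ in range(40):
        curve = amp * (1 - np.exp(-t / tau)) + rng.normal(0, 0.05, t.size)
        X.append(temporal_features(curve))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for clf in (make_pipeline(StandardScaler(), SVC()),
            make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))):
    clf.fit(X_train, y_train)
    print(clf.steps[-1][0], clf.score(X_test, y_test))
```

Standardizing the features before the classifier matters here, since the time-to-half-maximum attribute spans a much larger numeric range than the amplitude attributes.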
APA, Harvard, Vancouver, ISO, and other citation styles
9

Boumenir, Yasmine. "Spatial navigation in real and virtual urban environments: performance and multisensory processing of spatial information in sighted, visually impaired, late and congenitally blind individuals". PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2011. http://tel.archives-ouvertes.fr/tel-00632703.

Full text of the source
Annotation:
Previous studies investigating how humans build reliable spatial knowledge representations, allowing them to find their way from one point to another in complex environments, have focused on comparing the relative importance of the two-dimensional visual geometry of routes and intersections, multi-dimensional data from direct exposure to the real world, and verbal symbols and/or instructions. This thesis sheds further light on the multi-dimensional and multi-sensory aspects by investigating how the cognitive processing of spatial information derived from different sources of sensory and higher-order input influences the performance of human observers who have to find their way from memory through complex and non-familiar real-world environments. Three experiments in large-scale urban environments of the real world, and in computer-generated representations of the latter (Google Street View), were run to investigate the influence of prior exposure to 2D visual or tactile maps of an itinerary, compared with a single direct experience or verbal instructions, on the navigation performance of sighted and/or visually deficient individuals, and of individuals temporarily deprived of vision. Performance was analyzed in terms of time from departure to destination, number of stops, number of wrong turns, and success rates. Potential strategies employed by individuals during navigation, and mental mapping abilities, were screened on the basis of questionnaires and drawing tests. Subjective levels of psychological stress (experiment 2) were measured to bring to the fore possible differences between men and women in this respect. The results of these experiments show that 2D visual maps, briefly explored prior to navigation, generate better navigation performance than poorly scaled virtual representations of a complex real-world environment (experiment 1), the best performance being produced by a single prior exposure to the real-world itinerary.
However, brief familiarization with a reliably scaled virtual representation of a non-familiar real-world environment (Google Street View) not only generates optimal navigation in computer-generated testing (virtual reality), but also produces better navigation performance when tested in the real-world environment, compared with prior exposure to 2D visual maps (experiment 2). Congenitally blind observers (experiment 3) who have to find their way from memory through a complex non-familiar urban environment perform swiftly and with considerable accuracy after exposure to a 2D tactile map of their itinerary. They are also able to draw a visual image of their itinerary on the basis of the 2D tactile map exposure. Other visually deficient or sighted but blindfolded individuals seem to have greater difficulty in finding their way again than congenitally blind people, regardless of the type of prior exposure to their test itinerary. The findings of this work are discussed in the light of current hypotheses regarding the presumed intrinsic nature of human spatial representations, replaced herein within a context of working memory models. It is suggested that multi-dimensional temporary storage systems, capable of processing a multitude of sensory inputs in parallel and with a much larger general capacity than previously considered in terms of working memory limits, need to be taken into account for future research.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Christou, Maria. "Enaction, interaction multisensorielle : théorie, technologie et expériences pour les arts numériques". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENS019/document.

Full text of the source
Annotation:
This interdisciplinary research lies at the intersection of cognitive science, computer science, and the arts. It addresses questions of how an artistic experience is perceived and understood in the context of digital technologies. We regard the computer as a powerful creative tool and ask how its role can be functionally introduced into the digital arts. One key to answering this question lies in the notion of embodiment. This is an aspect of human perception and cognition that cannot be approached directly, since it is an emergent process constructed through action. In this thesis, four criteria emerged for characterizing, and then attempting to evaluate, embodiment processes in creative situations, whether of reception alone or of reception and action. These criteria are: the coherence of the sensory feedback offered by the technological system, for example the coherence between sound and image, or between sound, gesture, and image; the nature of the action as perceived or performed; the participants' sensation of cognitive immersion; and the evocative potential of the sensorimotor situation offered to perception and/or action. We implemented a qualitative method for the analysis of multisensory and interactive experiences. Open interviews yielded a corpus of data in the form of audiovisual recordings and transcribed texts. One aim of these interviews was to encourage subjects to express how they experienced the situation before, or indeed beyond, any aesthetic judgment. This method was applied in two types of situation.
In the first situation, we interviewed spectators who had attended a concert held during the Journées d'Informatique Musicale in Grenoble. For this, we selected seven audiovisual pieces by different authors, either works performed on stage or recorded works. The second case involved interviews with participants in an interactive audio-visual-haptic work entitled "Geste réel sur matière simulée" ("Real gesture on simulated matter"). This installation was designed within the Créativité Instrumentale project to study how digital interactive simulation technologies transform the creative process. It comprises three multisensory simulation scenes, realized through physical modelling, that allow instrumental interaction. The interviews took place during and after the experience. Analysis of the collected discourse allowed us to highlight the relationship between the technological tool and the human. In this thesis, we propose a theoretical framework composed of four elements, Coherence, Immersion, Action, and Evocation, with which we analyzed the discourse of subjects confronted with active digital multisensory situations and thereby characterized embodiment in such situations. Using these four elements in the discourse analysis revealed a multitude of links between them, varying with the parameters of the virtual scenes. Different mechanisms for understanding these scenes emerge depending on how the senses are stimulated, and our analyses allowed us to qualify how the visual, auditory, and haptic modalities, taken separately or together, give access to different dimensions of the scene in its complexity.
The author did not provide a summary in English.
APA, Harvard, Vancouver, ISO, and other citation styles
11

Cunio, Rachel J. "Spatialized Auditory and Vibrotactile Cueing for Dynamic Three-Dimensional Visual Search". Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright155912105678525.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
12

Bressolette, Benjamin. "Manipulations gestuelles d'un objet virtuel sonifié pour le contrôle d'une interface en situation de conduite". Thesis, Ecole centrale de Marseille, 2018. http://www.theses.fr/2018ECDM0009/document.

Full text of the source
Annotation:
Car manufacturers offer a wide range of secondary driving controls, such as GPS, music, or ventilation, often localized on a central touch-sensitive screen. However, operating them while driving can be unsafe: engaging the sense of sight for interface interaction can reduce vigilance towards the driving task, which can lead to high-risk situations. In this PhD thesis, which is part of a collaborative research project involving the PSA Group and the PRISM laboratory, we aim to provide a gesture-and-sound association as an alternative to visual solicitation. The goal is to enable blind interface interactions, allowing the driver to keep their eyes on the road. When interface manipulation and the driving task are performed jointly, a multisensory solicitation can lower the driver's cognitive load compared with a visual unimodal situation. To make the gesture-sound association feel more natural, a virtual object that can be handled with gestures is introduced. This object is the support for sonification strategies, constructed by analogy with sounds from our environment, which are the consequence of an action on an object. The virtual object also allows different gestures to be structured around the same metaphor, and the interface's menu to be redefined. The first part of this thesis deals with the development of sonification strategies intended to inform users about the virtual object's dynamics. Two perceptual experiments were set up, which led to the discrimination of two valuable sonification strategies. In a second part, the automotive application was addressed by designing new sound stimuli and the interface, and by studying multisensory integration. Sounds were proposed for each of the two sonification strategies, to progress towards in-vehicle integration. The evocations brought by the gesture-and-sound association were the subject of a third, blinded perceptual experiment.
The concepts around the virtual object were unknown and gradually discovered by the subjects. The mental images conveyed by the sonification strategies can help users familiarize themselves with the interface. A fourth perceptual experiment focused on first-time handling of the virtual object, in which the integration of audio-visual stimuli was studied in the context of an interface manipulation. The experimental conditions were similar to a driver first discovering the interface in a parked vehicle with audio-visual stimuli, and then operating it through sonification strategies only. The results of this experiment led to the design of a gestural interface, which was compared with a touchscreen interface in a final perceptual experiment carried out in a driving simulator. Although the results show better performance for the tactile interface, the combination of gestures and sounds proved effective from the cognitive-load point of view. The gestural interface can therefore offer a promising alternative or complement to tactile interfaces for safe simultaneous use while driving.
APA, Harvard, Vancouver, ISO and other citation styles
13

Stratulat, Anca. „Etude des interactions multi-sensorielle pour la perception des mouvements du véhicule en simulateur dynamique : contribution de l'illusion somatogravique à l'immersion en environnement virtuel“. Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX22139.

The full content of the source
Annotation:
Driving simulators allow the exploration of research areas that are difficult to reach in real conditions, such as the integration of different sensory inputs (visual, vestibular and somesthetic) for the perception of self-motion. In spite of their complexity, driving simulators do not always produce a realistic sensation of driving, especially for braking and turning, because of their mechanical limitations. As a consequence, simulator motion algorithms rely on the tilt-coordination technique: the car is tilted so that the driver's gravity vector is aligned with the gravito-inertial acceleration (GIA) that would result from a linear acceleration. This technique exploits the tilt-translation ambiguity of the vestibular system and is used on dynamic driving simulators, in combination with linear translations, in the so-called washout algorithm to produce a sensation of linear acceleration. The aim of the present research is to understand how humans combine multiple sensory signals (vestibular, visual and somatosensory) when perceiving linear acceleration on a driving simulator. The experiments show that the perception of motion depends on the manner in which tilt and translation are combined to provide a unified percept of linear acceleration. Further, our results show an important difference in how humans perceive accelerations and decelerations. For braking, the most realistic tilt/translation ratio depends on the level of deceleration. For acceleration, the motion is generally overestimated and depends on the level of acceleration, but not on the tilt/translation ratio. The results suggest that visual, vestibular and proprioceptive cues are integrated in an optimal Bayesian fashion. In conclusion, it is not advisable to use a washout algorithm without taking into account the non-linearity of human perception.
We propose an empirically derived, data-driven fitting model that describes the relationship between tilt, translation and the desired level of acceleration or deceleration. This model is intended as a supplement to motion-cueing algorithms and should improve the realism of driving simulations.
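The tilt-coordination technique described above can be made concrete: tilting the cabin by arctan(a/g) aligns gravity with the gravito-inertial vector of a sustained acceleration a, provided the rotation stays below the vestibular detection threshold. A minimal sketch, assuming a commonly cited threshold of about 3 deg/s (the function names and the threshold value are illustrative assumptions, not the thesis's motion law):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tilt_coordination_angle(a_long: float) -> float:
    """Pitch angle (rad) at which the gravity component felt by the
    driver matches a sustained longitudinal acceleration a_long."""
    # The gravito-inertial vector of an acceleration a tilts by
    # atan(a / g); tilting the cabin by the same angle reproduces
    # the somatogravic illusion of linear acceleration.
    return math.atan2(a_long, G)

def limited_tilt(target: float, current: float, dt: float,
                 max_rate: float = math.radians(3.0)) -> float:
    """Move the cabin tilt towards `target`, limiting the tilt rate so
    the rotation stays below the assumed vestibular detection
    threshold of ~3 deg/s and is not perceived as rotation."""
    step = max(-max_rate * dt, min(max_rate * dt, target - current))
    return current + step
```

The washout algorithm combines this slow tilt with short linear translations of the platform; the thesis's finding is precisely that the best tilt/translation ratio is not constant but depends on the level of (de)acceleration.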
APA, Harvard, Vancouver, ISO and other citation styles
14

Edmunds, Timothy. „Improving interactive multisensory simulation and rendering through focus on perceptual processes“. 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000050507.

The full content of the source
APA, Harvard, Vancouver, ISO and other citation styles
15

Mishra, A., A. Shukla, Nripendra P. Rana und Y. K. Dwivedi. „From ‘touch’ to a ‘multisensory’ experience: The impact of technology interface and product type on consumer responses“. 2020. http://hdl.handle.net/10454/18169.

The full content of the source
Annotation:
Online retailers are increasingly using augmented reality (AR) and virtual reality (VR) technologies to solve mental and physical intangibility issues in a product evaluation. Moreover, the technologies are easily available and accessible to consumers via their smartphones. The authors conducted three experiments to examine consumer responses to technology interfaces (AR/VR and mobile apps) for hedonic and utilitarian products. The results show that AR is easier to use (vs. app), and users find AR more responsive when buying a hedonic (vs. utilitarian) product. Touch interface users are likely to have a more satisfying experience and greater recommendation intentions, as compared to AR, for buying utilitarian products. In contrast, a multisensory environment (AR) results in a better user experience for purchasing a hedonic product. Moreover, multisensory technologies lead to higher visual appeal, emotional appeal, and purchase intentions. The research contributes to the literature on computer-mediated interactions in a multisensory environment and proposes actionable recommendations to online marketers.
The full text of this article will be released for public view at the end of the publisher embargo on 04 Dec 2022.
APA, Harvard, Vancouver, ISO and other citation styles
16

Villar, Joana Andrade Dias Posser. „How multisensory experiences in virtual environments affect intention to return: the role of cognitive flexibility, sense of power and personality traits“. Master's thesis, 2019. http://hdl.handle.net/10071/19439.

The full content of the source
Annotation:
Technological advances made every day are creating business opportunities. Virtual Reality has been the focus of several studies; however, it has been noted to lack some senses, such as touch, smell and taste. In this study, we compare two sensory experiences, one engaging sight and sound, the other sight, sound and smell. The experience takes place in a virtual café and measures the impact on the intention to return. The choice of the sensory experiences was based on the concepts of proximal and distal senses. For the purposes of the study, this dissertation further analyses the concepts of sense of power and cognitive flexibility, and personality traits are introduced as a moderator in this relationship. The type of sensory experience is used as a moderator of the relationship between sense of power and intention to return (behavior). The study concludes that multisensory experiences in a virtual environment have no impact on return intentions, but that cognitive flexibility has a positive impact on sense of power and that personality traits moderate the relationship between the two variables. Furthermore, the findings suggest that the notion that senses can be psychologically more proximal or distal, based on the maximum physical distance typically required for a stimulus to be sensed, also applies to virtual reality environments.
APA, Harvard, Vancouver, ISO and other citation styles
17

Manghisi, Vito Modesto. „Operator 4.0: Industrial Augmented Reality, Interfaces and Ergonomics“. Doctoral thesis, 2019. http://hdl.handle.net/11589/161097.

The full content of the source
Annotation:
The German program Industry 4.0 and the corresponding international initiatives will continue to transform the industrial workforce and its work environment through 2025. In parallel with the evolution of the industry, the history of operators' interaction with various industrial and digital production technologies can be summarized as a generational evolution towards the Operator 4.0 generation. This work aims at applying the enabling technologies of Industry 4.0 to design and develop methods and applications supporting the Operator 4.0 with respect to three of its eight facets: the Augmented Operator, the Virtual Operator, and the Healthy Operator. In Chapter 1, we introduce the research carried out in the Industrial Augmented Reality (IAR) field. We describe Augmented Reality (AR) technology and its application in IAR. In Chapter 2, we present a Spatial Augmented Reality (SAR) workbench prototype designed in the early stage of this research, and we describe the experiments carried out to validate its efficiency as support for the Operator 4.0. In Chapter 3, we describe the experiments carried out to optimize the legibility of text shown in AR interfaces for optical see-through displays. In this research, we propose novel indices extracted from the background images displayed on an LCD screen, and we compare them with those proposed in the literature through a specific user test. In Chapter 4, we present an AR framework for handheld devices that supports users in comprehending plant information traditionally conveyed through printed Piping and Instrumentation Diagrams (P&ID). In Chapter 5, we describe the research carried out in the field of Human-Machine Interfaces related to the use of Natural User Interfaces in Virtual Reality. We designed and developed a gesture interface for the navigation of virtual tours made up of spherical images.
We compared the developed interface with a classical mouse-controlled one to evaluate its effectiveness in terms of user acceptance and engagement. In Chapter 6, we describe a general framework for designing a mid-air gesture vocabulary for the navigation of technical instructions in digital manuals for maintenance operations. A validation procedure is also proposed and used to compare gesture vocabularies in terms of fatigue and cognitive load. In Chapter 7, we address the facet of the Healthy Operator. We describe the design and development of a semi-automatic software tool able to monitor operator ergonomics on the shop floor by assessing the Rapid Upper Limb Assessment (RULA) metric. We describe the design and development of our software prototype, the K2RULA, based on a low-cost sensor, the Microsoft Kinect v2 depth camera. Subsequently, we validate our tool with two experiments. In the first, we compared the K2RULA grand scores with those obtained with a reference optical motion-capture system. In the second, we evaluated the agreement of the grand scores returned by the proposed application with those obtained by a RULA expert rater. Finally, we draw our conclusions regarding the work carried out and try to map out a path for the future development of our research in these fields.
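A tool like the K2RULA prototype derives joint angles from Kinect skeleton data and maps them to RULA scores. A minimal, hypothetical sketch of that mapping (the helper names are assumptions; the thresholds follow the published RULA upper-arm table, before adjustments for shoulder raise, abduction or arm support):

```python
import math

def angle_between_deg(u, v):
    """Angle in degrees between two 3D vectors, e.g. the torso and
    upper-arm segments built from Kinect joint positions."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def rula_upper_arm_score(flexion_deg: float) -> int:
    """Base RULA upper-arm score from shoulder flexion (positive) or
    extension (negative), before posture adjustments."""
    if -20.0 <= flexion_deg <= 20.0:
        return 1          # close to neutral
    if flexion_deg < -20.0 or flexion_deg <= 45.0:
        return 2          # marked extension, or moderate flexion
    if flexion_deg <= 90.0:
        return 3
    return 4              # arm raised above shoulder level
```

A full RULA assessment combines such per-segment scores (upper arm, lower arm, wrist, neck, trunk, legs) through the RULA lookup tables into the grand score that the thesis compares against an expert rater.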
APA, Harvard, Vancouver, ISO and other citation styles
