A selection of scholarly literature on the topic "Spatial sound perception"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Spatial sound perception."

Next to every work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, wherever these details are available in the record's metadata.

Journal articles on the topic "Spatial sound perception"

1

Elvemo, Johan-Magnus. "Spatial perception and diegesis in multi-channel surround cinema." New Soundtrack 3, no. 1 (March 2013): 31–44. http://dx.doi.org/10.3366/sound.2013.0034.

2

Li, Jian, Luigi Maffei, Aniello Pascale, and Massimiliano Masullo. "Effects of spatialized water-sound sequences for traffic noise masking on brain activities." Journal of the Acoustical Society of America 152, no. 1 (July 2022): 172–83. http://dx.doi.org/10.1121/10.0012222.

Abstract:
Informational masking of water sounds has been proven effective in mitigating traffic noise perception with different sound levels and signal-to-noise ratios, but less is known about the effects of the spatial distribution of water sounds on the perception of the surrounding environment and corresponding psychophysical responses. Three different spatial settings of water-sound sequences with a traffic noise condition were used to investigate the role of spatialization of water-sound sequences on traffic noise perception. The neural responses of 20 participants were recorded by a portable electroencephalogram (EEG) device during the spatial sound playback time. The mental effects and attention process related to informational masking were assessed by the analysis of the EEG spectral power distribution and sensor-level functional connectivity along with subjective assessments. The results showed higher relative power of the alpha band and greater alpha-beta ratio among water-sound sequence conditions compared to traffic noise conditions, which confirmed the increased relaxation on the mental state induced by the introduction of water sounds. Moreover, different spatial settings of water-sound sequences evoked different cognitive network responses. The setting of two-position switching water brought more attentional network activations than other water sequences related to the information masking process along with more positive subjective feelings.
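To make the band-power measures named in this abstract (relative alpha power and the alpha/beta ratio) concrete, the sketch below computes them from a single channel with a standard Welch spectral estimate. It is only an editorial illustration, not the authors' analysis pipeline; the sampling rate, band edges, total-power range, and the synthetic signal are all assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate (Hz)
BANDS = {"alpha": (8.0, 13.0), "beta": (13.0, 30.0)}  # conventional band edges

def band_powers(x, fs=FS, total_range=(1.0, 40.0)):
    """Absolute band power (integrated PSD) and power relative to total_range."""
    freqs, psd = welch(x, fs=fs, nperseg=4 * fs)
    df = freqs[1] - freqs[0]
    total = psd[(freqs >= total_range[0]) & (freqs <= total_range[1])].sum() * df
    result = {}
    for name, (lo, hi) in BANDS.items():
        absolute = psd[(freqs >= lo) & (freqs <= hi)].sum() * df
        result[name] = {"absolute": absolute, "relative": absolute / total}
    return result

# synthetic one-minute channel: alpha (10 Hz) and beta (20 Hz) components buried in noise
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)
p = band_powers(eeg)
print("relative alpha power:", round(p["alpha"]["relative"], 3))
print("alpha/beta ratio:", round(p["alpha"]["absolute"] / p["beta"]["absolute"], 3))
```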
3

Yamasaki, Daiki, Kiyofumi Miyoshi, Christian F. Altmann, and Hiroshi Ashida. "Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object." Perception 47, no. 7 (May 21, 2018): 751–71. http://dx.doi.org/10.1177/0301006618777708.

Abstract:
In spite of accumulating evidence for the spatial rule governing cross-modal interaction according to the spatial consistency of stimuli, it is still unclear whether 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sound) presented from the front and rear space of the body impact the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size adjustment task (Experiment 3) of visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the spatially consistent looming visual stimulus in size, but not of the spatially inconsistent and the receding visual stimulus. The receding sound had no significant effect on vision. Our results revealed that looming sound alters dynamic visual size perception depending on the consistency in the approaching quality and the front–rear spatial location of audiovisual stimuli, suggesting that the human brain differently processes audiovisual inputs based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
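Looming and receding auditory stimuli of the kind described here are, at their simplest, a carrier whose level rises or falls over the stimulus duration. The sketch below generates such stimuli for illustration only; the carrier frequency, duration, and dB range are assumptions, not the authors' exact stimuli.

```python
import numpy as np

FS = 44100  # assumed audio sampling rate (Hz)

def intensity_ramp_tone(duration_s=1.0, start_db=-30.0, end_db=0.0, freq_hz=1000.0):
    """Tone whose level ramps linearly in dB between start_db and end_db.
    A rising level approximates a looming (approaching) source, a falling level a receding one."""
    t = np.arange(int(duration_s * FS)) / FS
    gain = 10.0 ** (np.linspace(start_db, end_db, t.size) / 20.0)
    return gain * np.sin(2 * np.pi * freq_hz * t)

looming = intensity_ramp_tone(start_db=-30.0, end_db=0.0)   # intensity increases over time
receding = intensity_ramp_tone(start_db=0.0, end_db=-30.0)  # intensity decreases over time
print(looming.shape, receding.shape)
```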
4

Zhang, Maosheng, Ruimin Hu, Shihong Chen, Xiaochen Wang, Lin Jiang, and Heng Wang. "Spatial perception reproduction of sound event based on sound properties." Wuhan University Journal of Natural Sciences 20, no. 1 (January 10, 2015): 34–38. http://dx.doi.org/10.1007/s11859-015-1055-3.

5

Langkjær, Birger. "Spatial Perception and Technologies of Cinema Sound." Convergence: The International Journal of Research into New Media Technologies 3, no. 4 (December 1997): 92–107. http://dx.doi.org/10.1177/135485659700300408.

6

Melick, Joshua B., V. Ralph Algazi, and Richard O. Duda. "Spatial perception of motion‐tracked binaural sound." Journal of the Acoustical Society of America 117, no. 4 (April 2005): 2485. http://dx.doi.org/10.1121/1.4787772.

7

Jääskeläinen, Anni. "Mimetic schemas and shared perception through imitatives." Nordic Journal of Linguistics 39, no. 2 (September 27, 2016): 159–83. http://dx.doi.org/10.1017/s0332586516000147.

Abstract:
This article examines the interplay between certain depictions of sound and certain mimetic schemas (intersubjectively shared, body-based image schemas that concern basic processes and activities). The research contributes to the study of ideophones and also demonstrates that it is beneficial to study these types of words in written everyday interaction, as well as in spoken everyday interaction. Two Finnish sound words (ideophones, imitatives), naps ‘snap, pop’ and humps (the sound of relatively soft falling) are examined and their different meanings are analysed. Some research questions of this analysis are: What causes the sound described by either naps or humps? What kind of movement is described and to what mimetic schema is the sound linked? And also: What concrete, spatial processes might motivate the words’ more abstract uses? The examination indicates that naps and humps are used as concrete depictions of sounds and movements, but also more abstractly, as depictions of cognitive and emotional processes without any spatial movement or audible sound. The motivations for these more abstract uses are studied: It is argued that the basic uses of naps and humps are tied to certain bodily processes as their sounds or impressions, and that the more abstract uses of naps and humps reflect metaphorical mappings that map the mimetic schemas of these basic, bodily experiences to more abstract experiences. Grounds for this kind of use is the unique construal of imitatives: they present an imagistic, iconic depiction of a sensation and thus evoke imagery that is shared on a direct bodily level. Thus they aid in identifying with others and their experiences on a level that is directly accessible.
8

Möttönen, Riikka, Kaisa Tiippana, Mikko Sams, and Hanna Puharinen. "Sound Location Can Influence Audiovisual Speech Perception When Spatial Attention Is Manipulated." Seeing and Perceiving 24, no. 1 (2011): 67–90. http://dx.doi.org/10.1163/187847511x557308.

Abstract:
Audiovisual speech perception has been considered to operate independent of sound location, since the McGurk effect (altered auditory speech perception caused by conflicting visual speech) has been shown to be unaffected by whether speech sounds are presented in the same or different location as a talking face. Here we show that sound location effects arise with manipulation of spatial attention. Sounds were presented from loudspeakers in five locations: the centre (location of the talking face) and 45°/90° to the left/right. Auditory spatial attention was focused on a location by presenting the majority (90%) of sounds from this location. In Experiment 1, the majority of sounds emanated from the centre, and the McGurk effect was enhanced there. In Experiment 2, the major location was 90° to the left, causing the McGurk effect to be stronger on the left and centre than on the right. Under control conditions, when sounds were presented with equal probability from all locations, the McGurk effect tended to be stronger for sounds emanating from the centre, but this tendency was not reliable. Additionally, reaction times were the shortest for a congruent audiovisual stimulus, and this was the case independent of location. Our main finding is that sound location can modulate audiovisual speech perception, and that spatial attention plays a role in this modulation.
9

Liu, Jiang, Jian Kang, Holger Behm, and Tao Luo. "LANDSCAPE SPATIAL PATTERN INDICES AND SOUNDSCAPE PERCEPTION IN A MULTI-FUNCTIONAL URBAN AREA, GERMANY." JOURNAL OF ENVIRONMENTAL ENGINEERING AND LANDSCAPE MANAGEMENT 22, no. 3 (September 22, 2014): 208–18. http://dx.doi.org/10.3846/16486897.2014.911181.

Abstract:
Soundscape research could provide more information about urban acoustic environment, which should be integrated into urban management. The aim of this study is to test how landscape spatial pattern could affect soundscape perception. Soundscape data on specifically defined spatial and temporal scales were observed and evaluated in a multi-functional urban area in Rostock, Germany. The results show that urban soundscapes were characterised by artificial sounds (human, mechanical and traffic sounds) overwhelming the natural ones (biological and geophysical sounds). Major sound categories were normally mutual exclusive and dynamic on temporal scale, and have different spatial distribution on spatial scale. However, biological and traffic sounds seem to be co-existing on both temporal and spatial scales. Significant relationships were found existing between perception of major sound categories and a number of landscape spatial pattern indices, among which vegetation density (NDVI), landscape shape index (LSI) and largest patch index (LPI) showed the most effective indicating ability. The research indicated that soundscape concepts could be applied into landscape and urban planning process through the quantitative landscape indices to achieve a better urban acoustic environment.
10

Szczepański, Grzegorz, Leszek Morzyński, Dariusz Pleban, and Rafał Młyński. "CIOP-PIB test stand for studies on spatial sound perception using ambisonics." Occupational Safety – Science and Practice 565, no. 10 (October 22, 2018): 24–27. http://dx.doi.org/10.5604/01.3001.0012.6477.

Abstract:
Acoustic signals can be a source of information affecting workers’ safety in the working environment. Sound perception, directional hearing and spatial orientation of people in the working environment depend on a number of factors, such as acoustic properties of the work room, noise and its parameters, the use of hearing protection, hearing loss or the use of hearing aids. Learning about the impact of these factors on perception, directional hearing and orientation requires using spatial sound and is essential for creating safe working conditions. This article presents basic information about ambisonics, a technique of spatial sound processing, and a test stand developed at the Central Institute for Labor Protection – National Research Institute for research on sound perception, directional hearing and spatial orientation of people using ambisonics.
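For readers unfamiliar with the technique referred to above, the core of first-order Ambisonics is encoding a mono signal into the four B-format components W, X, Y, Z from its azimuth and elevation, and then decoding those components to a loudspeaker layout. The sketch below shows that encoding step and a deliberately simple projection decode to a horizontal square of loudspeakers; the layout, the test tone, and the basic decoder are assumptions for illustration only — a research test stand like the one described would use a higher order and a properly designed, calibrated decoder.

```python
import numpy as np

def encode_first_order(signal, azimuth_deg, elevation_deg=0.0):
    """Encode a mono signal into traditional first-order B-format (W, X, Y, Z)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = signal / np.sqrt(2.0)             # omnidirectional component (traditional -3 dB weighting)
    x = signal * np.cos(az) * np.cos(el)  # front-back figure-of-eight
    y = signal * np.sin(az) * np.cos(el)  # left-right figure-of-eight
    z = signal * np.sin(el)               # up-down figure-of-eight
    return np.stack([w, x, y, z])

def decode_basic(bformat, speaker_azimuths_deg):
    """Very simple projection decode to a horizontal ring of loudspeakers."""
    w, x, y, _ = bformat
    az = np.radians(np.asarray(speaker_azimuths_deg))[:, None]
    return 0.5 * (np.sqrt(2.0) * w + np.cos(az) * x + np.sin(az) * y)

fs = 48000
t = np.arange(0, 1.0, 1 / fs)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)                       # assumed mono test signal
b_format = encode_first_order(tone, azimuth_deg=45.0)          # source at 45° to the front-left
feeds = decode_basic(b_format, [45.0, 135.0, -135.0, -45.0])   # square loudspeaker layout
print(feeds.shape)  # (4, 48000): one feed per loudspeaker
```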

Dissertations on the topic "Spatial sound perception"

1

Riera, Robusté Joan. "Spatial hearing and sound perception in musical composition." Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/13269.

Abstract:
Doctorate in Music
This thesis explores the possibilities of spatial hearing in relation to sound perception, and presents three acousmatic compositions based on a musical aesthetic that emphasizes this relation in musical discourse. The first important characteristic of these compositions is the exclusive use of sine waves and other time invariant sound signals. Even though these types of sound signals present no variations in time, it is possible to perceive pitch, loudness, and tone color variations as soon as they move in space due to acoustic processes involved in spatial hearing. To emphasize the perception of such variations, this thesis proposes to divide a tone in multiple sound units and spread them in space using several loudspeakers arranged around the listener. In addition to the perception of sound attribute variations, it is also possible to create rhythm and texture variations that depend on how sound units are arranged in space. This strategy permits to overcome the so called "sound surrogacy" implicit in acousmatic music, as it is possible to establish cause-effect relations between sound movement and the perception of sound attribute, rhythm, and texture variations. Another important consequence of using sound fragmentation together with sound spatialization is the possibility to produce diffuse sound fields independently from the levels of reverberation of the room, and to create sound spaces with a certain spatial depth without using any kind of artificial sound delay or reverberation.
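The fragmentation strategy described in this abstract — cutting a steady tone into short units and spreading the units over a ring of loudspeakers around the listener — can be sketched in a few lines. The version below is only an illustration of the idea, not the composer's actual procedure: the channel count, unit length, short fades, and round-robin channel assignment are all assumptions.

```python
import numpy as np

FS = 48000  # assumed sampling rate (Hz)

def fragment_tone(freq_hz, duration_s, unit_ms, n_channels, fade_ms=5.0):
    """Cut a steady sine tone into short units and assign them to loudspeaker
    channels in round-robin order; returns an array of shape (n_channels, n_samples)."""
    n = int(duration_s * FS)
    tone = 0.3 * np.sin(2 * np.pi * freq_hz * np.arange(n) / FS)
    unit = int(unit_ms * 1e-3 * FS)
    fade = int(fade_ms * 1e-3 * FS)
    env = np.ones(unit)
    env[:fade] = np.linspace(0.0, 1.0, fade)   # short fades avoid clicks at unit borders
    env[-fade:] = np.linspace(1.0, 0.0, fade)
    out = np.zeros((n_channels, n))
    for i, start in enumerate(range(0, n - unit, unit)):
        out[i % n_channels, start:start + unit] = tone[start:start + unit] * env
    return out

channels = fragment_tone(freq_hz=440.0, duration_s=4.0, unit_ms=80.0, n_channels=8)
print(channels.shape)  # (8, 192000): one row per loudspeaker around the listener
```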
2

Best, Virginia Ann. "Spatial Hearing with Simultaneous Sound Sources: A Psychophysical Investigation." University of Sydney. Medicine, 2004. http://hdl.handle.net/2123/576.

Abstract:
This thesis provides an overview of work conducted to investigate human spatial hearing in situations involving multiple concurrent sound sources. Much is known about spatial hearing with single sound sources, including the acoustic cues to source location and the accuracy of localisation under different conditions. However, more recently interest has grown in the behaviour of listeners in more complex environments. Concurrent sound sources pose a particularly difficult problem for the auditory system, as their identities and locations must be extracted from a common set of sensory receptors and shared computational machinery. It is clear that humans have a rich perception of their auditory world, but just how concurrent sounds are processed, and how accurately, are issues that are poorly understood. This work attempts to fill a gap in our understanding by systematically examining spatial resolution with multiple sound sources. A series of psychophysical experiments was conducted on listeners with normal hearing to measure performance in spatial localisation and discrimination tasks involving more than one source. The general approach was to present sources that overlapped in both frequency and time in order to observe performance in the most challenging of situations. Furthermore, the role of two primary sets of location cues in concurrent source listening was probed by examining performance in different spatial dimensions. The binaural cues arise due to the separation of the two ears, and provide information about the lateral position of sound sources. The spectral cues result from location-dependent filtering by the head and pinnae, and allow vertical and front-rear auditory discrimination. Two sets of experiments are described that employed relatively simple broadband noise stimuli. In the first of these, two-point discrimination thresholds were measured using simultaneous noise bursts. It was found that the pair could be resolved only if a binaural difference was present; spectral cues did not appear to be sufficient. In the second set of experiments, the two stimuli were made distinguishable on the basis of their temporal envelopes, and the localisation of a designated target source was directly examined. Remarkably robust localisation was observed, despite the simultaneous masker, and both binaural and spectral cues appeared to be of use in this case. Small but persistent errors were observed, which in the lateral dimension represented a systematic shift away from the location of the masker. The errors can be explained by interference in the processing of the different location cues. Overall these experiments demonstrated that the spatial perception of concurrent sound sources is highly dependent on stimulus characteristics and configurations. This suggests that the underlying spatial representations are limited by the accuracy with which acoustic spatial cues can be extracted from a mixed signal. Three sets of experiments are then described that examined spatial performance with speech, a complex natural sound. The first measured how well speech is localised in isolation. This work demonstrated that speech contains high-frequency energy that is essential for accurate three-dimensional localisation. In the second set of experiments, spatial resolution for concurrent monosyllabic words was examined using similar approaches to those used for the concurrent noise experiments. 
It was found that resolution for concurrent speech stimuli was similar to resolution for concurrent noise stimuli. Importantly, listeners were limited in their ability to concurrently process the location-dependent spectral cues associated with two brief speech sources. In the final set of experiments, the role of spatial hearing was examined in a more relevant setting containing concurrent streams of sentence speech. It has long been known that binaural differences can aid segregation and enhance selective attention in such situations. The results presented here confirmed this finding and extended it to show that the spectral cues associated with different locations can also contribute. As a whole, this work provides an in-depth examination of spatial performance in concurrent source situations and delineates some of the limitations of this process. In general, spatial accuracy with concurrent sources is poorer than with single sound sources, as both binaural and spectral cues are subject to interference. Nonetheless, binaural cues are quite robust for representing concurrent source locations, and spectral cues can enhance spatial listening in many situations. The findings also highlight the intricate relationship that exists between spatial hearing, auditory object processing, and the allocation of attention in complex environments.
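Of the two cue families examined in this thesis, the binaural cues are the easiest to illustrate in code: the interaural time difference (ITD) can be estimated as the lag of the peak of the cross-correlation between the left- and right-ear signals. The sketch below does this on synthetic signals with a known delay; the sampling rate and the 0.5 ms delay are assumptions, and real HRTF-filtered binaural recordings would of course be more complex than a pure delay.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds as the cross-correlation peak lag.
    Positive values mean the signal reaches the left ear first (the right ear lags)."""
    n = len(left)
    corr = np.correlate(right, left, mode="full")  # lags run from -(n - 1) to (n - 1)
    lags = np.arange(-(n - 1), n)
    return lags[np.argmax(corr)] / fs

fs = 48000
rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, fs // 2)              # half a second of broadband noise
delay = int(round(0.0005 * fs))                  # assumed 0.5 ms interaural delay
left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])
print(f"estimated ITD: {estimate_itd(left, right, fs) * 1e3:.2f} ms")  # prints ~0.50 ms
```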
3

Jin, Craig T. "Spectral analysis and resolving spatial ambiguities in human sound localization." PhD thesis, University of Sydney, 2001. http://hdl.handle.net/2123/1342.

Abstract:
Thesis (Ph. D.)--School of Electrical and Information Engineering, Faculty of Engineering, University of Sydney, 2001.
Title from title screen (viewed 13 January 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Electrical and Information Engineering, Faculty of Engineering. Includes bibliographical references. Also available in print form.
4

Berg, Jan. "Systematic evaluation of perceived spatial quality in surround sound systems." Luleå: School of Music, Division of Sound Recording, 2002. http://epubl.luth.se/1402-1544/2002/17/index.html.

5

Stanley, Raymond M. "Toward adapting spatial audio displays for use with bone conduction: the cancellation of bone-conducted and air-conducted sound waves." Thesis, Available online, Georgia Institute of Technology, 2006. http://etd.gatech.edu/theses/available/etd-11022006-103809/.

6

Widman, Ludvig. "Binaural versus Stereo Audio in Navigation in a 3D Game: Differences in Perception and Localization of Sound." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik, konst och samhälle, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85512.

Abstract:
Recent advancements in audio technology for computer games has made possible for implementations with binaural audio. Compared to regular stereo sound, binaural audio offers possibilities for a player to experience spatial sound, including sounds along the vertical plane, using their own headphones. A computer game prototype called “Crystal Gatherer” was created for this study to explore the possibilities of binaural audio implementation regarding localization and perception of objects that make sound in a 3D game. The game featured two similar game levels, with the difference that one used binaural sound, and the other stereo sound. The levels consisted of a dark space that the player could navigate freely with the objective to find objects that make sound, called “crystals”, as fast as they could. An experiment was conducted with 14 test subjects that played the game, qualitative and quantitative data was collected, including the time the players took to complete the game levels, respectively, and answers about how they experienced the levels. A majority of test subjects reported that they perceived a difference between the levels. No significant difference was found between the levels in terms of efficacy of finding the objects that made sound. Some test subjects stated that they found localization was better in the binaural level of the game, others found the stereo level to be better in this respect. The study shows that there can exist possibilities for binaural audio to change the perception of audio in computer games.
7

Gandemer, Lennie. "Son et posture : le rôle de la perception auditive spatiale dans le maintien de l'équilibre postural." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4749/document.

Abstract:
Postural control is known to be the result of the integration by the central nervous system of several sensory modalities. In the literature, visual, proprioceptive, plantar touch and vestibular inputs are generally mentioned, and the role of audition is often neglected, even though sound is a rich and broad source of information on the whole surrounding 3D space. In the frame of this PhD, we focused on the specific role of sound on posture. The first part of this work is related to the design, the set-up and the perceptual evaluation of a fifth order ambisonics sound spatialization system. This system makes it possible to generate and move sound sources in the 3D space surrounding the listener and also to synthesize immersive and realistic sound environments. Then, this sound spatialization system was used as a tool to generate sound stimuli used in five different postural tests. In these tests, we studied the static upright stance of young and healthy subjects. The results of these studies show that the spatial auditory information can be integrated in the postural control system, allowing the subjects to reach a better stability. Two complementary trails are proposed to explain these stabilizing effects. Firstly, the spatial acoustic cues can contribute to the building of a mental representation of the surrounding environment; given this representation, the subjects could improve their stability. Secondly, we introduce multisensory integration phenomena: the auditory component could facilitate the integration of the other modalities implied in the postural control system.
8

Warden, James. "Senses, Perception, and Video Gaming: Design of a College for Video Game Design and Production." Cincinnati, Ohio : University of Cincinnati, 2005. http://www.ohiolink.edu/etd/view.cgi?acc%5Fnum=ucin1116113863.

9

Moreira, Julian. "Évaluer l'apport du binaural dans une application mobile audiovisuelle." Thesis, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1243/document.

Abstract:
In recent years, smartphone and tablet global performances have been increased significantly (CPU, screen resolution, webcams, etc.). This can be particularly observed with video quality of mobile media services, such as video streaming applications, or interactive applications (e.g., video games). However, these evolutions barely go with the integration of high quality sound restitution systems. Beside these evolutions though, new technologies related to spatialized sound on headphones have been developed, namely the binaural restitution model, using HRTF (Head Related Transfer Functions) filters.In this thesis, we assess the potential contribution of the binaural technology to enhance the quality of experience of an audiovisual mobile application. A part of our work has been dedicated to define what is an “audiovisual mobile application”, what kind of application could be fruitfully experienced with a binaural sound, and among those applications which one could lead to a comparative experiment with and without binaural.In a first place, the coupling of a binaural sound with a mobile-rendered visual tackles a question related to perception: how to spatially arrange a virtual scene whose sound can be spread all around the user, while its visual is limited to a very small space? We propose an experiment in these conditions to study how far a sound and a visual can be moved apart without breaking their perceptual fusion. The results reveal a strong tolerance of subjects to spatial discrepancies between the two modalities. Notably, the absence or presence of individualization for the HRTF filters, and a large separation in elevation between sound and visual don’t seem to affect the perception. Besides, subjects consider the virtual scene as if they were projected inside, at the camera’s position, no matter what distance to the phone they sit. All these results suggest that an association between a binaural sound and a visual on a smartphone could be used by the general public.In the second part, we address the main question of the thesis, i.e., the contribution of binaural, and we conduct an experiment in a realistic context of use. Thirty subjects play an Infinite Runner video game in their daily lives. The game was developed for the occasion in two versions, a monophonic one and a binaural one. The experiment lasts five weeks, at a rate of two sessions per day, which relates to a protocol known as the “Experience Sampling Method”. We collect at each session notes of immersion, memorization and performance, and compare the notes between the monophonic sessions and the binaural ones. Results indicate a significantly better immersion in the binaural sessions. No effect of sound rendering was found for memorization and performance. Beyond the contribution of the binaural, we discuss about the protocol, the validity of the collected data, and oppose theoretical considerations to practical feasibility
10

Bergqvist, Emil. "Auditory displays : A study in effectiveness between binaural and stereo audio to support interface navigation." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-10072.

Abstract:
This thesis analyses if the change of auditory feedback can improve the effectiveness of performance in the interaction with a non-visual system, or with a system used by individuals with visual impairment. Two prototypes were developed, one with binaural audio and the other with stereo audio. The interaction was evaluated in an experiment where 22 participants, divided into two groups, performed a number of interaction tasks. A post-interview were conducted together with the experiment. The result of the experiment displayed that there were no great difference between binaural audio and stereo regarding the speed and accuracy of the interaction. The post-interviews displayed interesting differences in the way participants visualized the virtual environment that affected the interaction. This opened up interesting questions for future studies.

Books on the topic "Spatial sound perception"

1

Blauert, Jens. Spatial hearing: The psychophysics of human sound localization. Cambridge, Mass.: MIT Press, 1997.

2

Principles And Applications Of Spatial Hearing. World Scientific Publishing Company, 2011.

3

Suzuki, Yoiti, Douglas Brungart, and Kazuhiro Iida. Principles and Applications of Spatial Hearing. World Scientific Publishing Co Pte Ltd, 2011.

4

McLachlan, Neil M. Timbre, Pitch, and Music. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199935345.013.44.

Abstract:
The perception of a sound’s timbre and pitch may be related to the more basic auditory function of sound recognition. Timbre may be related to the sensory experience (or memory) by which we recognize the source or meaning of a sound, while pitch may involve the recognition and mapping of timbres along a cognitive spatial dimension. Musical dissonance may then result from failure of sound recognition mechanisms, resulting in poor integration of pitch information and heightened arousal in musicians. Neurobiological models of auditory processing that include cortico-ponto-cerebellar and limbic pathways provide an account of the neural plasticity that underpins sound recognition and more complex human musical behaviors.
5

Schäfer, Armin, and Julia Kursell. Microsound and Macrocosm. Edited by Yael Kaduri. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199841547.013.18.

Abstract:
This chapter investigates concepts of space in French composer Gérard Grisey’s music. From the 1970s onward, he used sound spectrograms, introducing the compositional technique of “spectralism,” which can be rooted in Arnold Schoenberg’s concept of Klangfarbe. The cycle Les Espaces acoustiques (1974–1985) uses this technique to create a sequence of musical forms that grow from the acoustic seed of a single tone. The cycle can be traced back to a new role for acoustic space, which emerged in early atonal composition. Grisey confronts the natural order of acoustic space with the human order of producing and perceiving sounds. The dis-symmetry between these two orders of magnitude is further explored in Grisey’s Le Noir de l’Étoile (1990) for six percussionists, magnetic tape, and real-time astrophysical signals. This piece unfolds a triadic constellation of spatial orders where human perception and performance are staged between musical micro-space and cosmic macro-space.
6

Murphet, Julian. Affect and spatial dynamics in Flags in the Dust and The Sound and the Fury. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190664244.003.0003.

Abstract:
This chapter analyzes the critical break in Faulkner’s career, between the relatively conventional and long-winded draft of Flags in the Dust (1927) and the extraordinary literary achievement The Sound and the Fury (1929)—both of which tackle the same basic material. It speculates that one determining factor is the diminution, in absolute terms, of Southern descriptive prose from one book to the next, and argues that Faulkner motivates this eclipse of one of the perdurable romance techniques via an astute attention to the changes wrought to the “distribution of the sensible” by the increase in automobile use in the late 1920s. The “chronotopes” of romance are modified from within by the extent to which automobile and electric streetcar transport overtakes the now anachronistic horse-and-buggy traps and mule-drawn carts of an earlier epoch. Faulkner proved perceptive as regards these modifications, and rendered them in enduring aesthetic terms in his early masterpiece.

Book chapters on the topic "Spatial sound perception"

1

Wenzel, Elizabeth M., Durand R. Begault, and Martine Godfroy-Cooper. "Perception of Spatial Sound." In Immersive Sound, 5–39. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315707525-2.

2

Murakami, Koki, Keiichi Watanuki, and Kazunori Kaede. "Spatial Perception Under Visual Restriction by Moving a Sound Source Using 3D Audio." In Advances in Industrial Design, 757–64. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80829-7_93.

3

PULKKI, V., M.-V. LAITINEN, J. VILKAMO, J. AHONEN, T. LOKKI, and T. PIHLAJAMÄKI. "PERCEPTION-BASED REPRODUCTION OF SPATIAL SOUND WITH DIRECTIONAL AUDIO CODING." In Principles and Applications of Spatial Hearing, 324–36. WORLD SCIENTIFIC, 2011. http://dx.doi.org/10.1142/9789814299312_0026.

4

ZAHORIK, P., E. BRANDEWIE, and V. P. SIVONEN. "AUDITORY PERCEPTION IN REVERBERANT SOUND FIELDS AND EFFECTS OF PRIOR LISTENING EXPOSURE." In Principles and Applications of Spatial Hearing, 24–34. WORLD SCIENTIFIC, 2011. http://dx.doi.org/10.1142/9789814299312_0003.

5

Saqib, Usama, and Robin Kerstens. "Perceiving the World With Sound." In Design and Control Advances in Robotics, 30–59. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-5381-0.ch003.

Abstract:
Robot perception is the ability of a robotic platform to perceive its environment by the means of sensor inputs, e.g., laser, IMU, motor encoders, and so on. Much like humans, robots are not limited to perceiving their environment through vision-based sensors, e.g., cameras. Robot perception, through the scope of this chapter, encompasses acoustic signal processing techniques to locate the presence of a sound source, e.g., human speaker, within an environment for human-robot interaction (HRI), that has gained great interest within scientific community. This chapter will serve as an introduction to acoustic signal processing within robotics, starting with passive acoustic localization and building up to contemporary active sensing methods, such as the usage of neural networks and spatial map generation. The origins of active acoustic localization, which finds its roots in biomimetics, are also discussed.
6

Ben-Hur, Zamir, David Alon, Or Berebi, Ravish Mehra, and Boaz Rafaely. "Binaural Reproduction Based on Bilateral Ambisonics." In Advances in Fundamental and Applied Research on Spatial Audio [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.100402.

Abstract:
Binaural reproduction of high-quality spatial sound has gained considerable interest with the recent technology developments in virtual and augmented reality. The reproduction of binaural signals in the Spherical-Harmonics (SH) domain using Ambisonics is now a well-established methodology, with flexible binaural processing realized using SH representations of the sound-field and the Head-Related Transfer Function (HRTF). However, in most practical cases, the binaural reproduction is order-limited, which introduces truncation errors that have a detrimental effect on the perception of the reproduced signals, mainly due to the truncation of the HRTF. Recently, it has been shown that manipulating the HRTF phase component, by ear-alignment, significantly reduces its effective SH order while preserving its phase information, which may be beneficial for alleviating the above detrimental effect. Incorporating the ear-aligned HRTF into the binaural reproduction process has been suggested by using Bilateral Ambisonics, which is an Ambisonics representation of the sound-field formulated at the two ears. While this method imposes challenges on acquiring the sound-field, and specifically, on applying head-rotations, it leads to a significant reduction in errors caused by the limited-order reproduction, which yields a substantial improvement in the perceived binaural reproduction quality even with first order SH.
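The "order-limited" reproduction discussed in this chapter amounts to discarding spherical-harmonic (SH) coefficients above some order N; an order-N representation has (N + 1)² channels. A minimal sketch of that truncation step is shown below, assuming ACN channel ordering with one column per frequency bin; the placeholder coefficient array is an assumption, and the ear-alignment preprocessing the chapter describes is not reproduced here.

```python
import numpy as np

def truncate_sh(coeffs, order):
    """Keep only spherical-harmonic channels up to `order`.
    `coeffs` is assumed to have shape ((N_max + 1)**2, n_bins) in ACN ordering,
    where the channel index is acn = n * (n + 1) + m for degree n and mode m."""
    n_keep = (order + 1) ** 2
    if n_keep > coeffs.shape[0]:
        raise ValueError("requested order exceeds the order of the input data")
    return coeffs[:n_keep, :]

# e.g. an order-10 HRTF set (121 SH channels, 257 frequency bins) truncated to first order
hrtf_sh = np.random.default_rng(2).normal(size=(121, 257)).astype(complex)  # placeholder data
first_order = truncate_sh(hrtf_sh, order=1)
print(first_order.shape)  # (4, 257)
```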
7

Engel, Isaac, and Lorenzo Picinali. "Reverberation and its Binaural Reproduction: The Trade-off between Computational Efficiency and Perceived Quality." In Advances in Fundamental and Applied Research on Spatial Audio [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.101940.

Abstract:
Accurately rendering reverberation is critical to produce realistic binaural audio, particularly in augmented reality applications where virtual objects must blend in seamlessly with real ones. However, rigorously simulating sound waves interacting with the auralised space can be computationally costly, sometimes to the point of being unfeasible in real time applications on resource-limited mobile platforms. Luckily, knowledge of auditory perception can be leveraged to make computational savings without compromising quality. This chapter reviews different approaches and methods for rendering binaural reverberation efficiently, focusing specifically on Ambisonics-based techniques aimed at reducing the spatial resolution of late reverberation components. Potential future research directions in this area are also discussed.
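One common way to cut the cost the authors describe is to lower the spatial resolution of the late reverberation, in the limit sharing a single statistically generated tail across output channels. The sketch below is a generic illustration of that kind of shortcut, not the chapter's method: it builds a tail of exponentially decaying noise for an assumed RT60 and applies it with FFT convolution; the decay time and the impulse used as a dry signal are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

FS = 48000  # assumed sampling rate (Hz)

def late_tail(rt60_s=1.2, length_s=1.5, seed=3):
    """Exponentially decaying noise as a crude, direction-independent late-reverb tail."""
    n = int(length_s * FS)
    t = np.arange(n) / FS
    decay = 10.0 ** (-3.0 * t / rt60_s)  # amplitude reaches -60 dB after rt60_s seconds
    return np.random.default_rng(seed).normal(0.0, 1.0, n) * decay

dry = np.zeros(FS)
dry[0] = 1.0                              # unit impulse standing in for a dry source signal
tail = late_tail()
wet = fftconvolve(dry, tail)[: dry.size]  # the same shared tail could feed every output channel
print(wet.shape)  # (48000,)
```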
8

Pesic, Peter. "Helmholtz and the Sirens." In Music and the Making of Modern Science. The MIT Press, 2014. http://dx.doi.org/10.7551/mitpress/9780262027274.003.0015.

Abstract:
Hermann von Helmholtz’s investigations of physiological optics and acoustics reflected his profound interest in music. After devising instruments to measure the space and time parameters of visual and auditory response, Helmholtz produced “color curves” characterizing the complex response of the eye to the appropriate “dimensions” of hue, saturation, intensity. In so doing, he critiqued Newton’s attempt to impose the musical scale on vision. Through experiments on sirens, Helmholtz generalized auditory perception from vibrating bodies to air puffs. He gradually formed the view that recognition of musical intervals was closely analogous to spatial resemblance or recurrence. His unfolding conception of the “manifolds” or “spaces” of sensory experience radically reconfigured and extended Newton’s connection between the musical scale and visual perception via Thomas Young’s theory of color vision. In the process, Helmholtz’s studies of hearing and seeing led him to compare them as differently structured geometric manifolds. Throughout the book where various sound examples are referenced, please see http://mitpress.mit.edu/musicandmodernscience (please note that the sound examples should be viewed in Chrome or Safari Web browsers).
9

Casati, Roberto, and Jérôme Dokic. "Some Varieties of Spatial Hearing." In Sounds and Perception, 97–110. Oxford University Press, 2009. http://dx.doi.org/10.1093/acprof:oso/9780199282968.003.0005.

10

Adkins, Imogen. "Soundworld Spatiality and the Unheroic Self-Giving of Jesus Christ." In Theology, Music, and Modernity, 84–106. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198846550.003.0005.

Abstract:
This chapter argues that the spatial metaphor of resonant ‘edgeless difference’, which arises from our perception of musical sound, makes more conceptualizable a vision of kenotic freedom which the New Testament understands to be embodied in Jesus Christ and made accessible through the Spirit, to the glory of the Father. This proposal is briefly explored in conversation with the Christology and theological aesthetics of Rowan Williams (1950–), who works within a kenotic idiom. We discover that a conceptuality borne of musical phenomenology can liberate Christological grammar from modernist strongholds and can direct our attention to the multiple, interconnected freedoms of the New Creation (i.e., the cosmic, ecclesiological, and individual). In so doing, it supports the idea that a Christ-centred kenotic theory of self-emptying provides a radical alternative to certain modernist views of freedom.

Conference papers on the topic "Spatial sound perception"

1

Maosheng Zhang, Ruimin Hu, Shihong Chen, Xiaochen Wang, Dengshi Li, and Lin Jiang. "Spatial perception reproduction of sound events based on sound property coincidences." In 2015 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2015. http://dx.doi.org/10.1109/icme.2015.7177412.

2

Troschka, Stefan, Melina Stephan, Benjamin Yatfung Wong, and Thomas Gorne. "Listening experiment on the perception of spatial sound configurations." In 2021 Immersive and 3D Audio: from Architecture to Automotive (I3DA). IEEE, 2021. http://dx.doi.org/10.1109/i3da48870.2021.9610880.

3

Sasaki, Yoko, Ryo Tanabe, and Hiroshi Takemura. "Online Spatial Sound Perception Using Microphone Array on Mobile Robot." In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8593777.

4

Cowan, Brent, David Rojas, Bill Kapralos, Karen Collins, and Adam Dubrowski. "Spatial sound and its effect on visual quality perception and task performance within a virtual environment." In ICA 2013 Montreal. ASA, 2013. http://dx.doi.org/10.1121/1.4798377.

5

Ortega-González, Vladimir, Samir Garbaya, and Frédéric Merienne. "Using 3D Sound for Providing 3D Interaction in Virtual Environment." In ASME 2010 World Conference on Innovative Virtual Reality. ASMEDC, 2010. http://dx.doi.org/10.1115/winvr2010-3750.

Abstract:
In this paper we describe a proposal based on the use of 3D sound metaphors for providing precise spatial cueing in virtual environment. A 3D sound metaphor is a combination of the audio spatialization and audio cueing techniques. The 3D sound metaphors are supposed to improve the user performance and perception. The interest of this kind of stimulation mechanism is that it could allow providing efficient 3D interaction for interactive tasks such as selection, manipulation and navigation among others. We describe the main related concepts, the most relevant related work, the current theoretical and technical problems, the description of our approach, our scientific objectives, our methodology and our research perspectives.
6

Husung, Stephan, Antje Siegel, and Christian Weber. "Acoustical Investigations in Virtual Environments for a Car Passing Application." In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34936.

Abstract:
Product development is dominated by reducing time and costs, which is often contradictory to the required high quality of product properties. Therefore, the demand for efficient tools, which support the product development process, is rising. Virtual Reality (VR) can be used as such a tool. The interactive presentation of simulation results using (extended) VR technologies is very helpful — especially if both the simulation tools and the VR presentation are multimodal. Due to the increasing importance of acoustics and the expectation of an improving presence in VR environments the sense of perception should be extended. For this a special audio-visual VR-system and audio-visual models are necessary. For the current investigations a spatial, interactive auralisation-system is used. The main focus of the paper lies on the state dependent reproduction of the acoustical behavior using a real-time capable, network-based sound-server. The developed methods are explained in the paper by an automotive example.
7

Peters, Rob, Koen Smit, and Johan Versendaal. "Responsible AI and Power: Investigating the System Level Bureaucrat in the Legal Planning Process." In Digital Support from Crisis to Progressive Change. University of Maribor Press, 2021. http://dx.doi.org/10.18690/978-961-286-485-9.43.

Abstract:
Numerous statements and pamphlets indicate that governments should increase the transparency of ICT implementations and algorithms in eGovernment services and should encourage democratic control. This paper presents research among civil servants, suppliers and experts who play a role in the automation of spatial policymaking and planning (e.g. environment, building, sound and CO2 regulation, mobility). The case is a major digitalisation programme of that spatial planning in the Netherlands. In this digital transition, the research assumption is that public and political values such as transparency, legitimacy and (perceived) fairness are difficult to validate in the practice of the design process; policy makers tend to lose sight of the algorithms and decision trees designed during the ICT implementation of eGovernment services. This situation would implicate a power shift towards the system level bureaucrat, i.e., the digitized execution of laws and regulations, thereby threatening democratic control. This also sets the stage for anxiety towards ICT projects and digital bureaucracies. We have investigated perceptions about ‘validation dark spots’ in the design process of the national planning platform that create unintended shifts in decision power in the context of the legal planning process. To identify these validation dark spots, 22 stakeholders were interviewed. The results partially confirm the assumption. Based on the collected data, nine validation dark spots are identified that require more attention and research.
8

Nijholt, Anton. "Augmented Reality: Beyond Interaction." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002058.

Abstract:
In 1997 Ronald T. Azuma introduced the following definition of Augmented Reality (AR): “Some researchers define AR in a way that requires the use of Head-Mounted Displays (HMDs). To avoid limiting AR to specific technologies, this survey defines AR as systems that have the following three characteristics: 1) Combines real and virtual, 2) Interactive in real-time, 3) Registered in 3-D.” Azuma also mentions that “AR might apply to all senses, not just sight.” [1] This definition has been leading in AR research until now. AR researchers focused on the various ways technology, in particular digital technology (computer-generated imagery, computer vision and world modelling, interaction technology, and AR display technology), could be developed to realize this AR view. The emphasis has been on addressing the sight sense when generating and aligning virtual content, our most dominant sense, although we can not survive without the others. Azuma and others mention the other senses and assume that this definition also covers other than computer-generated imagery, perhaps even other than computer generated and (spatial-temporal) generated and controlled virtual content. Nevertheless, the definition has some constituents that can be given various interpretations. This makes it workable, but it is useful to discuss how we should distinguish between real and virtual content, what is it that distinguishes real from virtual, or how virtual content can trigger changes in the real world (and the other way around), take into account that AR becomes part of ubiquitous computing. That is, rather than looking at AR from the point of view of particular professional, educational, or entertaining applications, we should look at AR from the point of view that it is ever-present, and embedded in ubiquitous computing (Ubicomp), and having its AR devices’ sensors and actuators communicate with the smart environments in which it is embedded. The focus in this paper is on ‘optical see-through’ (OSR) AR and ever-present AR. Ever-present AR will become possible with non-obtrusive AR glasses [2] or contact lenses [3,4]. Usually, interaction is looked upon from the point of view of what we see and hear. But we certainly are aware of touch experiences and exploring objects with active touch. We can also experience scents and flavors, passively but also actively, that is, consciously explore scents or tastes, become aware of them, and ask the environment, not necessarily explicitly since our preferences are known and our intentions can be predicted, to respond in an appropriate way to evoke or continue an interaction. Interaction in AR and with AR technology requires a new look at interaction. Are we interacting with the AR device, with the environment, or with the environment through the AR device? Part of what we perceive is real, part of what we perceive is superimposed on reality, and part of what we perceive is the interaction between real and virtual reality. How to interact with this mix of realities? Additionally, our HMD AR provides us with view changes because of position and head orientation or gaze changes. We interact with the device with, for example, speech and hand gestures, we interact with the environment with, for example, pose changes, and we interact with the virtual content with interaction modalities that are appropriate for that content: push a virtual block, open a virtual door, or have a conversation with a virtual human that inhabits the AR world.
In addition, we can think of interactions that become possible because technology allows us to get access and act upon sensor information that cannot be perceived with our natural perception receptors. In a ubiquitous computing environment, our AR device can provide us with a 360 degrees view of our environment, drones can feed us with information from above, infrared sensors know about people and events in the dark, our car receives visual information about not yet visible vehicles approaching an intersection [5], sound frequencies beyond the human ear can be made accessible, smell sensors can enhance the human smell sense, et cetera. In this paper, we investigate the characteristics of interactions in AR and relate them to the regular human-computer interaction characteristics (interacting with tools) [6], interaction with multimedia [7], interaction through behavior [8], implicit interaction [9], embodied interaction [10], fake interaction [11], and interaction based on Gibson’s visual perception theory [12]. This will be done from the point of view of ever-present AR [13] with optical see-through wearable devices. References could not be included because of space limitations.