Journal articles on the topic 'Spatial sound perception'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Spatial sound perception.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Elvemo, Johan-Magnus. "Spatial perception and diegesis in multi-channel surround cinema." New Soundtrack 3, no. 1 (March 2013): 31–44. http://dx.doi.org/10.3366/sound.2013.0034.

2

Li, Jian, Luigi Maffei, Aniello Pascale, and Massimiliano Masullo. "Effects of spatialized water-sound sequences for traffic noise masking on brain activities." Journal of the Acoustical Society of America 152, no. 1 (July 2022): 172–83. http://dx.doi.org/10.1121/10.0012222.

Abstract:
Informational masking by water sounds has been proven effective in mitigating traffic noise perception at different sound levels and signal-to-noise ratios, but less is known about the effects of the spatial distribution of water sounds on the perception of the surrounding environment and the corresponding psychophysical responses. Three different spatial settings of water-sound sequences combined with a traffic noise condition were used to investigate the role of spatialization of water-sound sequences in traffic noise perception. The neural responses of 20 participants were recorded by a portable electroencephalogram (EEG) device during spatial sound playback. The mental effects and attention processes related to informational masking were assessed by analysis of the EEG spectral power distribution and sensor-level functional connectivity, along with subjective assessments. The results showed higher relative power of the alpha band and a greater alpha-beta ratio in the water-sound sequence conditions compared to the traffic noise conditions, which confirmed the increased mental relaxation induced by the introduction of water sounds. Moreover, different spatial settings of water-sound sequences evoked different cognitive network responses. The setting with two-position switching water brought more attentional network activations related to the informational masking process than the other water sequences, along with more positive subjective feelings.
3

Yamasaki, Daiki, Kiyofumi Miyoshi, Christian F. Altmann, and Hiroshi Ashida. "Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object." Perception 47, no. 7 (May 21, 2018): 751–71. http://dx.doi.org/10.1177/0301006618777708.

Abstract:
In spite of accumulating evidence for the spatial rule governing cross-modal interaction according to the spatial consistency of stimuli, it is still unclear whether 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sound) presented from the front and rear space of the body impact the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size adjustment task (Experiment 3) of visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the spatially consistent looming visual stimulus in size, but not of the spatially inconsistent and the receding visual stimulus. The receding sound had no significant effect on vision. Our results revealed that looming sound alters dynamic visual size perception depending on the consistency in the approaching quality and the front–rear spatial location of audiovisual stimuli, suggesting that the human brain differently processes audiovisual inputs based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
4

Zhang, Maosheng, Ruimin Hu, Shihong Chen, Xiaochen Wang, Lin Jiang, and Heng Wang. "Spatial perception reproduction of sound event based on sound properties." Wuhan University Journal of Natural Sciences 20, no. 1 (January 10, 2015): 34–38. http://dx.doi.org/10.1007/s11859-015-1055-3.

5

Langkjær, Birger. "Spatial Perception and Technologies of Cinema Sound." Convergence: The International Journal of Research into New Media Technologies 3, no. 4 (December 1997): 92–107. http://dx.doi.org/10.1177/135485659700300408.

6

Melick, Joshua B., V. Ralph Algazi, and Richard O. Duda. "Spatial perception of motion‐tracked binaural sound." Journal of the Acoustical Society of America 117, no. 4 (April 2005): 2485. http://dx.doi.org/10.1121/1.4787772.

7

Jääskeläinen, Anni. "Mimetic schemas and shared perception through imitatives." Nordic Journal of Linguistics 39, no. 2 (September 27, 2016): 159–83. http://dx.doi.org/10.1017/s0332586516000147.

Abstract:
This article examines the interplay between certain depictions of sound and certain mimetic schemas (intersubjectively shared, body-based image schemas that concern basic processes and activities). The research contributes to the study of ideophones and also demonstrates that it is beneficial to study these types of words in written everyday interaction, as well as in spoken everyday interaction. Two Finnish sound words (ideophones, imitatives), naps ‘snap, pop’ and humps (the sound of relatively soft falling), are examined and their different meanings are analysed. Some research questions of this analysis are: What causes the sound described by either naps or humps? What kind of movement is described, and to what mimetic schema is the sound linked? And also: What concrete, spatial processes might motivate the words’ more abstract uses? The examination indicates that naps and humps are used as concrete depictions of sounds and movements, but also more abstractly, as depictions of cognitive and emotional processes without any spatial movement or audible sound. The motivations for these more abstract uses are studied: It is argued that the basic uses of naps and humps are tied to certain bodily processes as their sounds or impressions, and that the more abstract uses of naps and humps reflect metaphorical mappings that map the mimetic schemas of these basic, bodily experiences onto more abstract experiences. The grounds for this kind of use lie in the unique construal of imitatives: they present an imagistic, iconic depiction of a sensation and thus evoke imagery that is shared on a direct bodily level. Thus they aid in identifying with others and their experiences on a level that is directly accessible.
8

Möttönen, Riikka, Kaisa Tiippana, Mikko Sams, and Hanna Puharinen. "Sound Location Can Influence Audiovisual Speech Perception When Spatial Attention Is Manipulated." Seeing and Perceiving 24, no. 1 (2011): 67–90. http://dx.doi.org/10.1163/187847511x557308.

Abstract:
Audiovisual speech perception has been considered to operate independently of sound location, since the McGurk effect (altered auditory speech perception caused by conflicting visual speech) has been shown to be unaffected by whether speech sounds are presented in the same or a different location as a talking face. Here we show that sound location effects arise with manipulation of spatial attention. Sounds were presented from loudspeakers in five locations: the centre (the location of the talking face) and 45°/90° to the left/right. Auditory spatial attention was focused on a location by presenting the majority (90%) of sounds from this location. In Experiment 1, the majority of sounds emanated from the centre, and the McGurk effect was enhanced there. In Experiment 2, the major location was 90° to the left, causing the McGurk effect to be stronger on the left and centre than on the right. Under control conditions, when sounds were presented with equal probability from all locations, the McGurk effect tended to be stronger for sounds emanating from the centre, but this tendency was not reliable. Additionally, reaction times were shortest for a congruent audiovisual stimulus, independent of location. Our main finding is that sound location can modulate audiovisual speech perception, and that spatial attention plays a role in this modulation.
9

Liu, Jiang, Jian Kang, Holger Behm, and Tao Luo. "LANDSCAPE SPATIAL PATTERN INDICES AND SOUNDSCAPE PERCEPTION IN A MULTI-FUNCTIONAL URBAN AREA, GERMANY." JOURNAL OF ENVIRONMENTAL ENGINEERING AND LANDSCAPE MANAGEMENT 22, no. 3 (September 22, 2014): 208–18. http://dx.doi.org/10.3846/16486897.2014.911181.

Abstract:
Soundscape research could provide more information about the urban acoustic environment, which should be integrated into urban management. The aim of this study is to test how landscape spatial pattern could affect soundscape perception. Soundscape data on specifically defined spatial and temporal scales were observed and evaluated in a multi-functional urban area in Rostock, Germany. The results show that urban soundscapes were characterised by artificial sounds (human, mechanical and traffic sounds) overwhelming the natural ones (biological and geophysical sounds). Major sound categories were normally mutually exclusive and dynamic on the temporal scale, and had different distributions on the spatial scale. However, biological and traffic sounds seem to co-exist on both temporal and spatial scales. Significant relationships were found between the perception of major sound categories and a number of landscape spatial pattern indices, among which vegetation density (NDVI), landscape shape index (LSI) and largest patch index (LPI) showed the most effective indicating ability. The research indicated that soundscape concepts could be applied to landscape and urban planning processes through quantitative landscape indices to achieve a better urban acoustic environment.
10

Szczepański, Grzegorz, Leszek Morzyński, Dariusz Pleban, and Rafał Młyński. "CIOP-PIB test stand for studies on spatial sound perception using ambisonics." Occupational Safety – Science and Practice 565, no. 10 (October 22, 2018): 24–27. http://dx.doi.org/10.5604/01.3001.0012.6477.

Abstract:
Acoustic signals can be a source of information affecting workers’ safety in the working environment. Sound perception, directional hearing and spatial orientation of people in the working environment depend on a number of factors, such as acoustic properties of the work room, noise and its parameters, the use of hearing protection, hearing loss or the use of hearing aids. Learning about the impact of these factors on perception, directional hearing and orientation requires using spatial sound and is essential for creating safe working conditions. This article presents basic information about ambisonics, a technique of spatial sound processing, and a test stand developed at the Central Institute for Labor Protection – National Research Institute for research on sound perception, directional hearing and spatial orientation of people using ambisonics.
11

Robusté, Joan Riera. "Filling Sound with Space: An elemental approach to sound spatialisation." Organised Sound 23, no. 3 (December 2018): 296–307. http://dx.doi.org/10.1017/s1355771818000213.

Abstract:
This research introduces different compositional techniques involving the use of sound spatialisation. These permit the incorporation of sound distortions produced by the real space, the body and the auditory system into low-, middle- and large-scale musical structures, allowing sound spatialisation to become a fundamental parameter of the three compositions presented here. An important characteristic of these pieces is the exclusive use of sine waves and other time-invariant sound signals. Even though these types of signals present no alterations in time, it is possible to perceive pitch, loudness and tone-colour variations when they move in space, due to the psychoacoustic processes involved in spatial hearing. To emphasise the perception of such differences, this research proposes dividing a tone into multiple sound units and spreading these in space using several loudspeakers arranged around the listener. In addition to the perception of sound attribute variations, it is also possible to create dynamic rhythms and textures that depend almost exclusively on how sound units are arranged in space. Such compositional procedures help to overcome to some degree the unnaturalness implicit when using synthetic-generated sounds; through them, it is possible to establish cause–effect relationships between sound movement, on the one hand, and the perception of sound attribute, rhythm and texture variations on the other. Another important consequence is the possibility of producing diffuse sound fields independently of the levels of reverberation in the room, and to create sound spaces of a particular spatial depth without using artificial delay or reverb.
12

Yao, Justin D., Peter Bremen, and John C. Middlebrooks. "Transformation of spatial sensitivity along the ascending auditory pathway." Journal of Neurophysiology 113, no. 9 (May 2015): 3098–111. http://dx.doi.org/10.1152/jn.01029.2014.

Abstract:
Locations of sounds are computed in the central auditory pathway based primarily on differences in sound level and timing at the two ears. In rats, the results of that computation appear in the primary auditory cortex (A1) as exclusively contralateral hemifield spatial sensitivity, with strong responses to sounds contralateral to the recording site, sharp cutoffs across the midline, and weak, sound-level-tolerant responses to ipsilateral sounds. We surveyed the auditory pathway in anesthetized rats to identify the brain level(s) at which level-tolerant spatial sensitivity arises. Noise-burst stimuli were varied in horizontal sound location and in sound level. Neurons in the central nucleus of the inferior colliculus (ICc) displayed contralateral tuning at low sound levels, but tuning was degraded at successively higher sound levels. In contrast, neurons in the nucleus of the brachium of the inferior colliculus (BIN) showed sharp, level-tolerant spatial sensitivity. The ventral division of the medial geniculate body (MGBv) contained two discrete neural populations, one showing broad sensitivity like the ICc and one showing sharp sensitivity like A1. Dorsal, medial, and shell regions of the MGB showed fairly sharp spatial sensitivity, likely reflecting inputs from A1 and/or the BIN. The results demonstrate two parallel brainstem pathways for spatial hearing. The tectal pathway, in which sharp, level-tolerant spatial sensitivity arises between ICc and BIN, projects to the superior colliculus and could support reflexive orientation to sounds. The lemniscal pathway, in which such sensitivity arises between ICc and the MGBv, projects to the forebrain to support perception of sound location.
13

JAEKL, PHILIP M., and LAURENCE R. HARRIS. "Sounds can affect visual perception mediated primarily by the parvocellular pathway." Visual Neuroscience 26, no. 5-6 (November 2009): 477–86. http://dx.doi.org/10.1017/s0952523809990289.

Abstract:
We investigated the effect of auditory–visual sensory integration on visual tasks that were predominantly dependent on parvocellular processing. These tasks were (i) detecting metacontrast-masked targets and (ii) discriminating orientation differences between high spatial frequency Gabor patch stimuli. Sounds that contained no information relevant to either task were presented before, synchronized with, or after the visual targets, and the results were compared to conditions with no sound. Both tasks used a two-alternative forced choice technique. For detecting metacontrast-masked targets, one interval contained the visual target and both (or neither) intervals contained a sound. Sound–target synchrony within 50 ms lowered luminance thresholds for detecting the presence of a target compared to when no sound occurred or when sound onset preceded target onset. Threshold angles for discriminating the orientation of a Gabor patch consistently increased in the presence of a sound. These results are compatible with sound-induced activity in the parvocellular visual pathway increasing the visibility of flashed targets and hindering orientation discrimination.
14

Amelia, Ria R., and Dhany Arifianto. "Spatial cues on normal hearing and cochlear implant simulation with different coding strategies." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A90. http://dx.doi.org/10.1121/10.0015647.

Abstract:
Cochlear implant users are known to have limited access to spatial cues. This study investigated the perception of spatial cues in normal-hearing listeners and cochlear implant simulation users. Perception of spatial cues is assessed for performance in determining the direction of the sound and understanding speech. The results show that cochlear implant simulation users still have access to spatial cues, just like normal-hearing listeners. Normal-hearing listeners and cochlear implant simulation users can perceive spatial cues in ILD and ITD. Both can accurately identify the direction of the sound (slope ≈ 1.00 and offset ≈ 0.00°). Cochlear implant simulation users can understand sentences as well as normal-hearing listeners (PCW = 113.64 rau) by using the SPEAK coding strategy in all channels or CIS with more than 8 channels. Perception of spatial cues in normal-hearing listeners and cochlear implant users can be improved by listening with two ears and by spatially separating the target-masker positions. The largest improvement in spatial cue perception was obtained from the head shadow effect (normal-hearing (NH) = 12.96, cochlear implant simulation users (CI) = 59.02), followed by binaural summation (NH = 5.72, CI = 19.86) and binaural squelch (NH = 3.76, CI = 7.66).
15

MARRY, Solène. "Ordinary sonic public space. Sound perception parameters in urban public spaces and sonic representations associated with urban forms." SoundEffects - An Interdisciplinary Journal of Sound and Sound Experience 2, no. 1 (April 13, 2012): 171–96. http://dx.doi.org/10.7146/se.v2i1.5231.

Abstract:
The research referred to in the article concerns the factors influencing the perception of ordinary sonic public space and everyday sounds. Sound perception parameters, such as vegetation or sound sources, are analysed in urban public spaces. This research, which is based on my PhD project, tries to understand how urban people perceive their sonic environment and to contribute to knowledge of sonic ambiance. The research is based on a qualitative investigation conducted among 29 people. It is based, on the one hand, on questionnaires and focus groups in situ and, on the other hand, on individual interviews (in-depth interviews, sonic mind maps), and it illustrates different parameters (temporal, spatial, sensitive and individual) that influence a person’s assessment of the sound environment. This qualitative investigation is correlated with acoustic measures in two seasons. The results show, among other things, the impact of vegetation and urban fittings on sonic perception, and they underline the influence of city planning and urban fittings on sound perception in public urban spaces.
16

Zimmer, Ulrike, Jörg Lewald, and Hans-Otto Karnath. "Disturbed Sound Lateralization in Patients with Spatial Neglect." Journal of Cognitive Neuroscience 15, no. 5 (July 2003): 694–703. http://dx.doi.org/10.1162/jocn.2003.15.5.694.

Abstract:
Previous studies on auditory space perception in patients with neglect have investigated localization of free-field sound stimuli or lateralization of dichotic stimuli that are perceived intracranially. Since those studies in part revealed contradictory results, reporting either systematic errors to the left or systematic errors to the right, we reassessed the ability of auditory lateralization in patients with right hemispheric lesions with and without neglect. Unexpectedly, about half of the patients with neglect showed erratic judgments on sound position, that is, they were completely unable to lateralize sounds. The remaining neglect patients only showed a small deviation of the auditory median plane to the left side, indicating that they perceived the sounds as slightly shifted to the right side. The comparison between both groups revealed higher severity of neglect in the group of neglect patients who were unable to perform the task, suggesting that the inability of sound lateralization was associated with the strength of clinical neglect. However, we also observed 1 out of 9 patients with left brain damage who was not able to lateralize spatial sounds. This patient did not show any symptoms of spatial neglect. Thus, it may be that a spatial auditory deficit, such as that observed here in right-brain-damaged patients, only co-occurs with spatial neglect if the right superior temporal cortex is lesioned.
17

Cheng, Yuan, Yifan Zhang, Fang Wang, Guoqiang Jia, Jie Zhou, Ye Shan, Xinde Sun, et al. "Reversal of Age-Related Changes in Cortical Sound-Azimuth Selectivity with Training." Cerebral Cortex 30, no. 3 (September 3, 2019): 1768–78. http://dx.doi.org/10.1093/cercor/bhz201.

Abstract:
The compromised abilities to understand speech and localize sounds are two hallmark deficits in aged individuals. Earlier studies have shown that age-related deficits in cortical neural timing, which is clearly associated with speech perception, can be partially reversed with auditory training. However, whether training can reverse age-related cortical changes in the domain of spatial processing has never been studied. In this study, we examined cortical spatial processing in ~21-month-old rats that were trained on a sound-azimuth discrimination task. We found that animals that experienced 1 month of training displayed sharper cortical sound-azimuth tuning when compared to the age-matched untrained controls. This training-induced remodeling in spatial tuning was paralleled by increases of cortical parvalbumin-labeled inhibitory interneurons. However, no measurable changes in cortical spatial processing were recorded in age-matched animals that were passively exposed to training sounds with no task demands. These results, which demonstrate the effects of training on cortical spatial domain processing in the rodent model, further support the notion that age-related changes in central neural processes are, due to their plastic nature, reversible. Moreover, the results offer the encouraging possibility that behavioral training might be used to attenuate declines in auditory perception, which are commonly observed in older individuals.
18

Carello, Claudia, Krista L. Anderson, and Andrew J. Kunkler-Peck. "Perception of Object Length by Sound." Psychological Science 9, no. 3 (May 1998): 211–14. http://dx.doi.org/10.1111/1467-9280.00040.

Abstract:
Although hearing is classically considered a temporal sense, everyday listening suggests that subtle spatial properties constitute an important part of what people know about the world through sound. Typically neglected in psychoacoustics research, the ability to perceive the precise sizes of objects on the basis of sound was investigated during the routine event of dropping wooden dowels of different lengths onto a hard surface. In two experiments, the ordinal and metrical success of naive listeners was related to length but not to the simple acoustic variables (duration, amplitude, frequency) likely to be related to it. Additional analysis suggests the potential relevance of an object's inertia tensor in constraining perception of that object's length, analogous to the case that has been made for perceiving length by effortful touch.
19

Gyoba, Jiro, and Yuika Suzuki. "Effects of Sound on the Tactile Perception of Roughness in Peri-Head Space." Seeing and Perceiving 24, no. 5 (2011): 471–83. http://dx.doi.org/10.1163/187847511x588728.

Abstract:
The aim of this study is to investigate whether or not spatial congruency between tactile and auditory stimuli would influence the tactile roughness discrimination of stimuli presented to the fingers or cheeks. In the experiment, when abrasive films were passively presented to the participants, white noise bursts were simultaneously presented from the same or different side, either near or far from the head. The results showed that when white noise was presented from the same side as the tactile stimuli, especially from near the head, the discrimination sensitivity on the cheeks was higher than when sound was absent or presented from a different side. A similar pattern was observed in discrimination by the fingers but it was not significant. The roughness discrimination by the fingers was also influenced by the presentation of sound close to the head, but significant differences between conditions with and without sounds were observed at the decisional level. Thus, the spatial congruency between tactile and auditory information selectively modulated the roughness sensitivity of the skin on the cheek, especially when the sound source was close to the head.
20

Guo, Yanlong, Ke Wang, Han Zhang, and Zuoqing Jiang. "Soundscape Perception Preference in an Urban Forest Park: Evidence from Moon Island Forest Park in Lu’an City." Sustainability 14, no. 23 (December 2, 2022): 16132. http://dx.doi.org/10.3390/su142316132.

Abstract:
Urban forest parks improve the environment by reducing noise, which can promote physical and mental health. This study aimed to investigate the soundscape preferences of visitors in different spaces and to provide practical suggestions for the study of urban green-space soundscapes. The study took Moon Island Forest Park in Lu’an City as an example, based on a questionnaire field survey that acquired public soundscape perception data. SPSS 26.0 was used to analyze soundscape perception preferences in five different spaces of Moon Island Forest Park, starting from the subjective evaluation of users’ soundscape perception and based on user preference for different spatial sound source types. A one-way analysis of variance (ANOVA) was used, and a separate analysis of soundscape preferences in each space was undertaken; the mean (SD) was also used to reveal the respondents’ preference for each perceived sound source. The study found that the five dimensions of the different spaces were significantly correlated with sound perception preferences. First, the same sound source had different perceptual characteristics in different functional areas. Second, different spatial features were influenced differently by typical sound sources. Third, in each functional area, water sound was the main source of positive impact and mechanical sound was the main source of negative impact; mechanical sound had the greatest negative impact on the overall area. Overall, natural sounds made the most significant positive contribution to soundscape preference, followed by the human voice, while mechanical sound produced a negative effect. The results were analyzed from the perspective of soundscape characteristics in different spaces, providing a more quantitative basis for urban forest park soundscape design.
21

Green, Evan, and Eckhard Kahle. "Dynamic Spatial Responsiveness in Concert Halls." Acoustics 1, no. 3 (July 22, 2019): 549–60. http://dx.doi.org/10.3390/acoustics1030031.

Abstract:
In musical perception, a proportion of the reflected sound energy arriving at the ear is not consciously perceived. Investigations by Wettschurek in the 1970s showed the detectability to be dependent on the overall loudness and direction of arrival of reflected sound. The relationship Wettschurek found between reflection detectability, listening level, and direction of arrival correlates well with the subjective progression of spatial response during a musical crescendo: from frontal at pianissimo, through increasing apparent source width, to a fully present room acoustic at forte. “Dynamic spatial responsiveness” was mentioned in some of the earliest psychoacoustics research and recent work indicates that it is a key factor in acoustical preference. This article describes measurements of perception thresholds made using a binaural virtual acoustics system—these show good agreement with Wettschurek’s results. The perception measurements indicate that the subjective effect of reflections varies with overall listening level, even when the reflection level, delay, and direction relative to the direct sound are maintained. Reflections which are perceptually fused with the source may at louder overall listening levels become allocated to the room presence. An algorithm has been developed to visualize dynamic spatial responsiveness—i.e., which aspects of a three-dimensional (3D) Room Impulse Response would be detectable at different dynamic levels—and has been applied to measured concert hall impulse responses.
22

Pulkki, Ville, Henri Pöntynen, and Olli Santala. "Spatial Perception of Sound Source Distribution in the Median Plane." Journal of the Audio Engineering Society 67, no. 11 (November 22, 2019): 855–70. http://dx.doi.org/10.17743/jaes.2019.0033.

23

Wendt, Florian, Gerriet K. Sharma, Matthias Frank, Franz Zotter, and Robert Höldrich. "Perception of Spatial Sound Phenomena Created by the Icosahedral Loudspeaker." Computer Music Journal 41, no. 1 (March 2017): 76–88. http://dx.doi.org/10.1162/comj_a_00396.

Abstract:
The icosahedral loudspeaker (IKO) is able to project strongly focused sound beams into arbitrary directions. Incorporating artistic experience and psychoacoustic research, this article presents three listening experiments that provide evidence for a common, intersubjective perception of spatial sonic phenomena created by the IKO. The experiments are designed on the basis of a hierarchical model of spatiosonic phenomena that exhibit increasing complexity, ranging from a single static sonic object to combinations of multiple, partly moving objects. The results are promising and explore new compositional perspectives in spatial computer music.
24

Jasińska, Alicja, and Maurycy Kin. "The Influence of Room Acoustic Parameters on the Perception of Room Characteristics." Akustika 37 (December 15, 2020): 28. http://dx.doi.org/10.36336/akustika20203728.

Abstract:
The article presents the possibility of identifying rooms on the basis of binaural perception. Results of a subjective evaluation were compared with values of the sound strength parameter, G. A new term was introduced: the strength of spatial impression, defined as the inverse of the standard deviation of the obtained results. It turned out that the sound strength parameter correlates with the subjective evaluation of spatial impression, i.e., the perceived size of the room. This can be helpful in the process of room identification, probably owing to the reverberation impression in the room. The authors plan to continue the study with more rooms and different types of sound sources.
25

Avni, Amir, Jens Ahrens, Matthias Geier, Sascha Spors, Hagen Wierstorf, and Boaz Rafaely. "Spatial perception of sound fields recorded by spherical microphone arrays with varying spatial resolution." Journal of the Acoustical Society of America 133, no. 5 (May 2013): 2711–21. http://dx.doi.org/10.1121/1.4795780.

26

Chen, Zhu, Tian-Yuan Zhu, Jiang Liu, and Xin-Chen Hong. "Before Becoming a World Heritage: Spatiotemporal Dynamics and Spatial Dependency of the Soundscapes in Kulangsu Scenic Area, China." Forests 13, no. 9 (September 19, 2022): 1526. http://dx.doi.org/10.3390/f13091526.

Abstract:
Kulangsu is a famous scenic area in China and a World Heritage Site. It is important to obtain knowledge of the status of soundscape and landscape resources and their interrelationships in Kulangsu before it became a World Heritage Site. The objective of this study was to explore the spatial dependency of the soundscapes in Kulangsu, based on the spatiotemporal dynamics of soundscape and landscape perceptions, including perceived sound sources, soundscape quality, and landscape satisfaction degree, and on spatial landscape characteristics, including the distance to green spaces, the normalized difference vegetation index, and landscape spatial patterns. The results showed significant spatiotemporal dynamics in both soundscape and landscape perception; the dominance of biological sounds in all sampling periods and of human sounds in the evening indicated that the Kulangsu scenic area had a good natural environment and a developed night-time economy, respectively. The green spaces and commercial lands may contribute to both soundscape pleasantness and eventfulness. Moreover, the soundscape quality was dependent on the sound dominance degree and landscape satisfaction degree but not on the landscape characteristics. The GWR model had better goodness of fit than the OLS model, and possible non-linear relationships were found between soundscape pleasantness and the variables of perceived sound sources and landscape satisfaction degree. The GWR models with spatial stationarity were found to be more effective in understanding the spatial dependence of soundscapes. In particular, the applied data should ideally include a complete temporal dimension to obtain a relatively high fitting accuracy of the model. These findings can provide useful data support and references for future planning and design practices and management strategies for the soundscape resources in scenic areas and World Heritage Sites.
27

Shinn‐Cunningham, Barbara, and K. Anthony Hoover. "How does spatial auditory perception impact how we enjoy surround sound." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 2980. http://dx.doi.org/10.1121/1.2932495.

28

Collignon, Olivier, Marco Davare, Anne G. De Volder, Colline Poirier, Etienne Olivier, and Claude Veraart. "Time-course of Posterior Parietal and Occipital Cortex Contribution to Sound Localization." Journal of Cognitive Neuroscience 20, no. 8 (August 2008): 1454–63. http://dx.doi.org/10.1162/jocn.2008.20102.

Abstract:
It has been suggested that both the posterior parietal cortex (PPC) and the extrastriate occipital cortex (OC) participate in the spatial processing of sounds. However, the precise time-course of their contribution remains unknown, which is of particular interest, considering that it could give new insights into the mechanisms underlying auditory space perception. To address this issue, we have used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right PPC or right OC at different delays in subjects performing a sound lateralization task. Our results confirmed that these two areas participate in the spatial processing of sounds. More precisely, we found that TMS applied over the right OC 50 msec after the stimulus onset significantly impaired the localization of sounds presented either to the right or to the left side. Moreover, right PPC virtual lesions induced 100 and 150 msec after sound presentation led to a rightward bias for stimuli delivered on the center and on the left side, reproducing transiently the deficits commonly observed in hemineglect patients. The finding that the right OC is involved in sound processing before the right PPC suggests that the OC exerts a feedforward influence on the PPC during auditory spatial processing.
29

Brøvig-Hanssen, Ragnhild, and Anne Danielsen. "The Naturalised and the Surreal: changes in the perception of popular music sound." Organised Sound 18, no. 1 (March 26, 2013): 71–80. http://dx.doi.org/10.1017/s1355771812000258.

Abstract:
In a musical context, the word ‘sound’ implies a set of sonic characteristics. Within popular music, this notion of sound sometimes supplies the very identity of a tune, a band or a musician. Sound is often conceptualised as a virtual space and in turn compared to actual spatial environments, such as a stage or an enclosed room. One possible consequence of this tendency is that this virtual space can become utterly surreal, displaying sonic features that could never occur in actual physical environments. This article concerns the ways in which the increased possibilities for creating a spatially surreal sound, thanks to new technological tools, have been explored within the field of popular music over the past few decades. We also look at the ways in which the effect of such features may have changed over time as a consequence of what we call processes of naturalisation. As a particularly interesting example of the complexity of such processes, we explore ‘the music sound stage’. In addition, we analyse three songs by Prince, Suede and Portishead to reveal the possibly surreal aspects of these productions.
30

Mackenzie, Roderick, Farideh Zarei, Vincent Le Men, and Joonhee Lee. "Spatial uniformity tolerances for sound masking systems in open-plan offices." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 1 (August 1, 2021): 5643–49. http://dx.doi.org/10.3397/in-2021-3199.

Abstract:
Electronic sound masking systems raise the ambient sound level in offices to a controlled minimum level in order to increase speech privacy and reduce distractions. Sound masking systems are calibrated to provide the most uniform sound field achievable, as a spatially non-uniform masking sound field could be noticed by occupants and result in uneven speech privacy conditions. Tolerances for acceptable spatial uniformity vary between specifiers and may be based on different evaluation methods using only a few discrete measurement points to represent an entire office space. However, the actual uniformity of a masking sound field across an office, and the parameters influencing it, have not been widely investigated. Thus, this study investigates masking sound uniformity in a typical open-plan office space using fine-grid measurements conforming to the measurement method of ASTM E1573-18. Percentages of measured locations where the sound pressure levels were within specified tolerances (in increments of 0.5 dB) were calculated from the measured 1/3-octave-band levels. The research also utilized geometric acoustical simulations to investigate how physical office parameters (number of loudspeakers, partition heights, ceiling absorption, and diffusion characteristics) affect the sound field uniformity of the sound masking system.
31

Kirsch, Christoph, Josef Poppitz, Torben Wendt, Steven van de Par, and Stephan D. Ewert. "Spatial Resolution of Late Reverberation in Virtual Acoustic Environments." Trends in Hearing 25 (January 2021): 233121652110549. http://dx.doi.org/10.1177/23312165211054924.

Abstract:
Late reverberation involves the superposition of many sound reflections, approaching the properties of a diffuse sound field. Since the spatially resolved perception of individual late reflections is impossible, simplifications can potentially be made for modelling late reverberation in room acoustics simulations with reduced spatial resolution. Such simplifications are desired for interactive, real-time virtual acoustic environments with applications in hearing research and for the evaluation of hearing supportive devices. In this context, the number and spatial arrangement of loudspeakers used for playback additionally affect spatial resolution. The current study assessed the minimum number of spatially evenly distributed virtual late reverberation sources required to perceptually approximate spatially highly resolved isotropic and anisotropic late reverberation and to technically approximate a spherically isotropic sound field. The spatial resolution of the rendering was systematically reduced by using subsets of the loudspeakers of an 86-channel spherical loudspeaker array in an anechoic chamber, onto which virtual reverberation sources were mapped using vector base amplitude panning. It was tested whether listeners can distinguish lower spatial resolutions of reproduction of late reverberation from the highest achievable spatial resolution in different simulated rooms. The rendering of early reflections remained unchanged. The coherence of the sound field across a pair of microphones at ear and behind-the-ear hearing device distance was assessed to separate the effects of number of virtual sources and loudspeaker array geometry. Results show that between 12 and 24 reverberation sources are required for the rendering of late reverberation in virtual acoustic environments.
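The study maps virtual reverberation sources onto loudspeaker subsets with vector base amplitude panning. The core of VBAP can be illustrated with a minimal 2-D sketch (the study uses the 3-D variant; the function name and speaker layout below are illustrative, not taken from the paper):

```python
import numpy as np

def vbap_2d_gains(source_az_deg, speaker_az_deg):
    """Pairwise 2-D VBAP (after Pulkki): express the unit vector pointing at
    the virtual source as a weighted sum of the two loudspeaker unit vectors,
    then normalize the gains to constant power."""
    az = np.radians(source_az_deg)
    p = np.array([np.cos(az), np.sin(az)])            # direction of virtual source
    L = np.array([[np.cos(a), np.sin(a)]              # rows: loudspeaker directions
                  for a in np.radians(speaker_az_deg)])
    g = np.linalg.solve(L.T, p)                       # solve p = g1*l1 + g2*l2
    return g / np.linalg.norm(g)                      # enforce g1^2 + g2^2 = 1

# A source midway between speakers at ±45° receives equal gains (~0.707 each).
gains = vbap_2d_gains(0.0, (-45.0, 45.0))
```

Extending this to 3-D, as in the paper, replaces the loudspeaker pair with a triplet and the 2x2 matrix with a 3x3 one.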
32

Afonso-Jaco, Amandine, and Brian F. G. Katz. "Spatial Knowledge via Auditory Information for Blind Individuals: Spatial Cognition Studies and the Use of Audio-VR." Sensors 22, no. 13 (June 24, 2022): 4794. http://dx.doi.org/10.3390/s22134794.

Abstract:
Spatial cognition is a daily-life ability, developed in order to understand and interact with our environment. Even though all the senses are involved in elaborating a mental representation of space, the lack of vision makes this more difficult, especially because of the importance of peripheral information in updating the relative positions of surrounding landmarks when one is moving. Spatial audio technology has long been used for studies of human perception, particularly in the area of auditory source localisation. The ability to reproduce individual sounds at desired positions, or complex spatial audio scenes, without the need to manipulate physical devices has provided researchers with many benefits. We present a review of several studies employing the power of spatial audio virtual reality for research in spatial cognition with blind individuals. These include studies investigating simple spatial configurations, architectural navigation, reaching to sounds, and sound design for improved acceptability. Prospects for future research, including work currently underway, are also discussed.
33

Siedenburg, Kai, Jackson Graves, and Daniel Pressnitzer. "A unitary model of auditory frequency change perception." PLOS Computational Biology 19, no. 1 (January 12, 2023): e1010307. http://dx.doi.org/10.1371/journal.pcbi.1010307.

Abstract:
Changes in the frequency content of sounds over time are arguably the most basic form of information about the behavior of sound-emitting objects. In perceptual studies, such changes have mostly been investigated separately, as aspects of either pitch or timbre. Here, we propose a unitary account of “up” and “down” subjective judgments of frequency change, based on a model combining auditory correlates of acoustic cues in a sound-specific and listener-specific manner. To do so, we introduce a generalized version of so-called Shepard tones, allowing symmetric manipulations of spectral information on a fine scale, usually associated with pitch (spectral fine structure, SFS), and on a coarse scale, usually associated with timbre (spectral envelope, SE). In a series of behavioral experiments, listeners reported “up” or “down” shifts across pairs of generalized Shepard tones that differed in SFS, in SE, or in both. We observed the classic properties of Shepard tones for either SFS or SE shifts: subjective judgements followed the smallest log-frequency change direction, with cases of ambiguity and circularity. Interestingly, when both SFS and SE changes were applied concurrently (synergistically or antagonistically), we observed a trade-off between cues. Listeners were encouraged to report when they perceived “both” directions of change concurrently, but this rarely happened, suggesting a unitary percept. A computational model could accurately fit the behavioral data by combining different cues reflecting frequency changes after auditory filtering. The model revealed that cue weighting depended on the nature of the sound. When presented with harmonic sounds, listeners put more weight on SFS-related cues, whereas inharmonic sounds led to more weight on SE-related cues. Moreover, these stimulus-based factors were modulated by inter-individual differences, revealing variability across listeners in the detailed recipe for “up” and “down” judgments. We argue that frequency changes are tracked perceptually via the adaptive combination of a diverse set of cues, in a manner that is in fact similar to the derivation of other basic auditory dimensions such as spatial location.
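The generalized Shepard tones described in this abstract, octave-spaced partials (the SFS) weighted by an independently movable spectral envelope (the SE), can be sketched as follows; the parameter values and Gaussian envelope shape are illustrative assumptions, not those of the study:

```python
import numpy as np

def shepard_tone(base_hz, env_center_hz, fs=16000, dur=0.5, sigma_oct=1.0):
    """Sum octave-spaced partials (spectral fine structure, SFS) weighted by a
    Gaussian spectral envelope (SE) on a log-frequency axis. Shifting base_hz
    moves the SFS; shifting env_center_hz moves the SE independently."""
    t = np.arange(int(fs * dur)) / fs
    tone = np.zeros_like(t)
    partials = base_hz * 2.0 ** np.arange(0, 8)   # 8 partials, one per octave
    for f in partials:
        # envelope weight, Gaussian in octaves around env_center_hz
        w = np.exp(-0.5 * (np.log2(f / env_center_hz) / sigma_oct) ** 2)
        tone += w * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))            # normalize peak to 1

tone = shepard_tone(55.0, 440.0)
```

Because the envelope and the partial grid move separately, SFS and SE shifts can be applied synergistically or antagonistically, which is the manipulation at the heart of the experiments.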
34

Hassager, Henrik Gert, Tobias May, Alan Wiinberg, and Torsten Dau. "Preserving spatial perception in rooms using direct-sound driven dynamic range compression." Journal of the Acoustical Society of America 141, no. 6 (June 2017): 4556–66. http://dx.doi.org/10.1121/1.4984040.

35

Fischetti, A., J. Jouhaneau, and Y. Hemim. "Differences between headphones and loudspeakers listening in spatial properties of sound perception." Applied Acoustics 39, no. 4 (1993): 291–305. http://dx.doi.org/10.1016/0003-682x(93)90012-u.

36

de Vos, Jeroen. "Listen to the Absent: Binaural Inquiry in Auditive Spatial Perception." Leonardo Music Journal 26 (December 2016): 7–9. http://dx.doi.org/10.1162/lmj_a_00958.

Abstract:
If space can be perceived through sound, then recording and playback techniques allow capturing a unique spatial moment for later retrieval. The notion of spectrality as the invisible visible can be used to explain the embodiment of an auditive momentum in a space that is ultimately absent. This empirical study presents the results of five structured interviews in which interviewees are confronted with three binaural spatial recordings to reflect on the perception of dwelling in a spectral space: a space that is not there.
37

Courtois, Gilles, Vincent Grimaldi, Hervé Lissek, Philippe Estoppey, and Eleftheria Georganti. "Perception of Auditory Distance in Normal-Hearing and Moderate-to-Profound Hearing-Impaired Listeners." Trends in Hearing 23 (January 2019): 233121651988761. http://dx.doi.org/10.1177/2331216519887615.

Abstract:
The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to render auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was however more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.
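One of the distance cues this experiment controls is level: the stimuli are "equalized in level" so listeners cannot rely on it. In the free field that cue follows the inverse-square law, roughly 6 dB of attenuation per doubling of distance, which can be written as a one-line sketch (function name is illustrative):

```python
import math

def level_at_distance(level_ref_db, r, r_ref=1.0):
    """Free-field level distance cue: SPL falls by 20*log10(r / r_ref) dB
    relative to the level at reference distance r_ref, i.e., about 6 dB
    per doubling of source distance."""
    return level_ref_db - 20.0 * math.log10(r / r_ref)

# 70 dB SPL at 1 m drops to roughly 64 dB SPL at 2 m.
level_2m = level_at_distance(70.0, 2.0)
```

Removing this cue forces listeners to rely on the remaining ones, such as the direct-to-reverberant energy ratio, which is why level equalization is a common control in auditory distance studies.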
38

GRIFFITHS, T. D. "Human complex sound analysis*." Clinical Science 96, no. 3 (March 1, 1999): 231–34. http://dx.doi.org/10.1042/cs0960231.

Abstract:
The analysis of complex sound features is important for the perception of environmental sounds, speech and music, and may be abnormal in disorders such as specific language impairment in children, and in common adult lesions including stroke and multiple sclerosis. This work addresses the problem of how the human auditory system detects features in complex sound, and uses those features to perceive the auditory world. The work has been carried out using two independent means of testing the same hypotheses; detailed psychophysical studies of neurological patients with central lesions, and functional imaging using positron emission tomography and functional magnetic resonance imaging of normal subjects. The psychophysical and imaging studies have both examined which brain areas are concerned with the analysis of auditory space, and which are concerned with the analysis of timing information in the auditory system. This differs from many previous human auditory studies, which have concentrated on the analysis of sound frequency. The combined lesion and functional imaging approach has demonstrated analysis of the spatial property of sound movement within the right parietal lobe. The timing work has confirmed that the primary auditory cortex is active as a function of the time structure of sound, and therefore not only concerned with frequency representation of sounds.
39

Walker, Bruce N., Raymond M. Stanley, Nandini Iyer, Brian D. Simpson, and Douglas S. Brungart. "Evaluation of Bone-Conduction Headsets for Use in Multitalker Communication Environments." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 17 (September 2005): 1615–19. http://dx.doi.org/10.1177/154193120504901725.

Abstract:
Standard audio headphones are useful in many applications, but they cover the ears of the listener and thus may impair the perception of ambient sounds. Bone-conduction headphones offer a possible alternative, but traditionally their use has been limited to monaural applications due to the high propagation speed of sound in the human skull. Here we show that stereo bone-conduction headsets can be used to provide a limited amount of interaural isolation in a dichotic speech perception task. The results suggest that reliable spatial separation is possible with bone-conduction headsets, but that they probably cannot be used to lateralize signals to extreme left or right apparent locations.
40

Roberson, Gwendolyn E., Mark T. Wallace, and James A. Schirillo. "The sensorimotor contingency of multisensory localization correlates with the conscious percept of spatial unity." Behavioral and Brain Sciences 24, no. 5 (October 2001): 1001–2. http://dx.doi.org/10.1017/s0140525x0154011x.

Abstract:
Two cross-modal experiments provide partial support for O'Regan & Noë's (O&N's) claim that sensorimotor contingencies mediate perception. Differences in locating a target sound accompanied by a spatially disparate neutral light correlate with whether the two stimuli were perceived as spatially unified. This correlation suggests that internal representations are necessary for conscious perception, which may also mediate sensorimotor contingencies.
41

Tajadura-Jiménez, Ana, Aleksander Väljamäe, Iwaki Toshima, Toshitaka Kimura, Manos Tsakiris, and Norimichi Kitagawa. "Action sounds recalibrate perceived tactile distance." Seeing and Perceiving 25 (2012): 217. http://dx.doi.org/10.1163/187847612x648431.

Abstract:
Almost every bodily movement, from the most complex to the most mundane, such as walking, can generate impact sounds that contain spatial information of high temporal resolution. Despite the conclusive evidence about the role that the integration of vision, touch and proprioception plays in updating body-representations, hardly any study has looked at the contribution of audition. We show that the representation of a key property of one’s body, like its length, is affected by the sound of one’s actions. Participants tapped on a surface while progressively extending their right arm sideways, and in synchrony with each tap participants listened to a tapping sound. In the critical condition, the sound originated at double the distance at which participants actually tapped. After exposure to this condition, tactile distances on the test right arm, as compared to distances on the reference left arm, felt bigger than those before the exposure. No evidence of changes in tactile distance reports was found at the quadruple tapping sound distance or the asynchronous auditory feedback conditions. Our results suggest that tactile perception is referenced to an implicit body-representation which is informed by auditory feedback. This is the first evidence of the contribution of self-produced sounds to body-representation, addressing the auditory-dependent plasticity of body-representation and its spatial boundaries.
42

Li, Rong, Xiangyang Zeng, Haitao Wang, and Na Li. "Study on Auditory Perception Variability Caused by Distinct Structural Parameters of the Enclosure Space." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 36, no. 4 (August 2018): 671–78. http://dx.doi.org/10.1051/jnwpu/20183640671.

Abstract:
Enclosure spaces are widespread in real life. However, studies that systematically examine, from the perspective of spatial hearing, how auditory perception varies with changed structural parameters (such as volume and shape) are very rare. In this paper, based on auditory scene simulation, binaural auralization and sound field measurement techniques are used to obtain a large number of binaural signals in diverse enclosure spaces with different structural parameters. Subsequently, with the aid of acoustic evaluation, a quantitative evaluation of the change in spatial auditory perception and its degree was carried out on the binaural signals. The major findings are as follows: (1) changes in the volume, shape, and absorption coefficient of the indoor walls of an enclosure space can induce variations in the auditory perception of the sound field; sorted by degree of change (from most to least obvious), these are the absorption coefficient of the indoor walls, the shape, and the volume; (2) changing the receiving location can affect the variation in auditory perception caused by the volume and shape of an enclosure space.
43

Schumacher, Federico, Vicente Espinoza, Francisca Mardones, Rodrigo Vergara, Alberto Aránguiz, and Valentina Aguilera. "Perceptual Recognition of Sound Trajectories in Space." Computer Music Journal 45, no. 1 (2021): 39–54. http://dx.doi.org/10.1162/comj_a_00593.

Abstract:
Sound spatialization is a technique used in various musical genres as well as in soundtrack production for films and video games. In this context, specialized software has been developed for the design of sound trajectories we have classified as (1) basic movements, or image schemas of spatial movement; and (2) archetypal geometric figures. Our contribution is to reach an understanding of how we perceive the movement of sound in space as a result of the interaction between an agent's or listener's sensory-motor characteristics and the morphological characteristics of the stimuli and the acoustic space where such interaction occurs. An experiment was designed involving listening to auditory stimuli and associating them with the aforementioned spatial movement categories. The results suggest that in most cases, the ability to recognize moving sound is hindered when there are no visual stimuli present. Moreover, they indicate that archetypal geometric figures are rarely perceived as such and that the perception of sound movements in space can be organized into three spatial dimensions—height, depth, and width—which the literature on sound localization also confirms.
44

Chen, Jing, and Hui Ma. "An impact study of acoustic environment on users in large interior spaces." Building Acoustics 26, no. 2 (May 22, 2019): 139–53. http://dx.doi.org/10.1177/1351010x19848119.

Abstract:
As many large buildings have been built worldwide recently, it is necessary to study how the acoustic environment in those buildings affects people, in order to improve their acoustical comfort. The aim of this study is to explore the influence of the acoustic environment on people in eight large-scale spaces, divided into three categories according to function, through grounded theory and questionnaires. The results showed that “loud background noise,” “large number of sound sources,” “emotional change,” “mixed sounds,” and “sensible sound with certain spectrum component” were people's main evaluations of the acoustic environment in large-scale spaces. Based on respondents' perception, the influence of the acoustic environment in large-scale spaces could be classified into three aspects: emotional effect, influence on attention, and influence on thinking ability and behavior. Although the evaluation of the acoustic environment varied widely with differences in spatial function, the same perception dimensions could be summarized.
45

Bălan, Oana, Alin Moldoveanu, Florica Moldoveanu, Hunor Nagy, György Wersényi, and Rúnar Unnórsson. "Improving the Audio Game–Playing Performances of People with Visual Impairments through Multimodal Training." Journal of Visual Impairment & Blindness 111, no. 2 (March 2017): 148–64. http://dx.doi.org/10.1177/0145482x1711100206.

Abstract:
Introduction: As the number of people with visual impairments (that is, those who are blind or have low vision) is continuously increasing, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that would offer assistance and guidance to these people for performing navigational tasks. Auditory and haptic cues have been shown to be an effective approach towards creating a rich spatial representation of the environment, so they are considered for inclusion in the development of assistive tools that would enable people with visual impairments to acquire knowledge of the surrounding space in a way close to the visually based perception of sighted individuals. However, achieving efficiency through a sensory substitution device requires extensive training for visually impaired users to learn how to process the artificial auditory cues and convert them into spatial information. Methods: Considering all the potential advantages game-based learning can provide, we propose a new method for training sound localization and virtual navigational skills of visually impaired people in a 3D audio game with hierarchical levels of difficulty. The training procedure is focused on a multimodal (auditory and haptic) learning approach in which the subjects have been asked to listen to 3D sounds while simultaneously perceiving a series of vibrations on a haptic headband that corresponds to the direction of the sound source in space. Results: The results we obtained in a sound-localization experiment with 10 visually impaired people showed that the proposed training strategy resulted in significant improvements in auditory performance and navigation skills of the subjects, thus ensuring behavioral gains in the spatial perception of the environment.
46

Spors, Sascha, Hagen Wierstorf, Alexander Raake, Frank Melchior, Matthias Frank, and Franz Zotter. "Spatial Sound With Loudspeakers and Its Perception: A Review of the Current State." Proceedings of the IEEE 101, no. 9 (September 2013): 1920–38. http://dx.doi.org/10.1109/jproc.2013.2264784.

47

Hassager, Henrik G., Tobias May, Alan Wiinberg, and Torsten Dau. "Preserving spatial perception in reverberant environments using direct-sound driven dynamic range compression." Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3637. http://dx.doi.org/10.1121/1.4987840.

48

Barrett, Natasha. "Interactive Spatial Sonification of Multidimensional Data for Composition and Auditory Display." Computer Music Journal 40, no. 2 (June 2016): 47–69. http://dx.doi.org/10.1162/comj_a_00358.

Abstract:
This article presents a new approach to interactive spatial sonification of multidimensional data as a tool for spatial sound synthesis, for composing temporal–spatial musical materials, and as an auditory display for scientists to analyze multidimensional data sets in time and space. The approach applies parameter-mapping sonification and is currently implemented in an application called Cheddar, which was programmed in Max/MSP. Cheddar sonifies data in real time, where the user can modify a wide variety of temporal, spatial, and sonic parameters during the listening process, and thus more easily uncover patterns and processes in the data than when applying non-real-time, noninteractive techniques. The design draws on existing literature concerning perception and acoustics, and it applies the author's practical experience in acousmatic composition, spectromorphology, and sound semantics, while addressing accuracy, flexibility, and ease of use. Although previous sonification applications have addressed some degree of real-time control and spatialization, this approach integrates space and sound in an interactive framework. Spatial information is sonified in high-order 3-D ambisonics, where the user can interactively move the virtual listening position to reveal details easily missed from fixed or noninteractive spatial views. Sounds used as input to the sonification take advantage of the rich spectra and extramusical attributes of acoustic sources, which, although previously theorized, are investigated here in a practical context thoroughly tested alongside acoustic and psychoacoustic considerations. Furthermore, when using Cheddar, no specialized knowledge of programming, acoustics, or psychoacoustics is required. These approaches position Cheddar at the junction between science and art. With one application serving both disciplines, the patterns and processes of science are more fluently appropriated into music or sound art, and vice versa for scientific research, science public outreach, and education.
APA, Harvard, Vancouver, ISO, and other styles
49

Bălan, Oana, Alin Moldoveanu, and Florica Moldoveanu. "Multimodal Perceptual Training for Improving Spatial Auditory Performance in Blind and Sighted Listeners." Archives of Acoustics 40, no. 4 (December 1, 2015): 491–502. http://dx.doi.org/10.1515/aoa-2015-0049.

Full text
Abstract:
The use of individualised Head Related Transfer Functions (HRTF) is a fundamental prerequisite for obtaining an accurate rendering of 3D spatialised sounds in virtual auditory environments. The HRTFs are transfer functions that define the acoustical basis of auditory perception of a sound source in space and are frequently used in virtual auditory displays to simulate free-field listening conditions. However, they depend on the anatomical characteristics of the human body and significantly vary among individuals, so that the use of the same dataset of HRTFs for all the users of a designed system will not offer the same level of auditory performance. This paper presents an alternative approach to the use of non-individualised HRTFs that is based on procedural learning, training, and adaptation to altered auditory cues. We tested the sound localisation performance of nine sighted and visually impaired people, before and after a series of perceptual (auditory, visual, and haptic) feedback-based training sessions. The results demonstrated that our subjects significantly improved their spatial hearing under altered listening conditions (such as the presentation of 3D binaural sounds synthesised from non-individualised HRTFs), the improvement being reflected in higher localisation accuracy and a lower rate of front-back confusion errors.
APA, Harvard, Vancouver, ISO, and other styles
50

Chapman, David. "Context-based Sound and the Ecological Theory of Perception." Organised Sound 22, no. 1 (March 7, 2017): 42–50. http://dx.doi.org/10.1017/s1355771816000327.

Full text
Abstract:
This article aims to investigate the ways in which context-based sonic art is capable of furthering a knowledge and understanding of place based on the initial perceptual encounter. How might this perceptual encounter operate in terms of a sound work’s affective dimension? To explore these issues I draw upon James J. Gibson’s ecological theory of perception and Gernot Böhme’s concept of an ‘aesthetic of atmospheres’. Within the ecological model of perception, an individual can be regarded as a ‘perceptual system’: a mobile organism that seeks information from a coherent environment. I relate this concept to notions of the spatial address of environmental sound work in order to explore (a) how the human perceptual apparatus relates to the sonic environment in its mediated form and (b) how this impacts on individuals’ ability to experience such work as complex sonic ‘environments’. Can the ecological theory of perception aid the understanding of how the listener engages with context-based work? In proposing answers to this question, this article advances a coherent analytical framework that may lead us to a more systematic grasp of the ways in which individuals engage aesthetically with sonic space and environment. I illustrate this methodology through an examination of some of the recorded work of sound artist Chris Watson.
APA, Harvard, Vancouver, ISO, and other styles
