Journal articles on the topic "Perception spatiale auditive"

To see the other types of publications on this topic, follow the link: Perception spatiale auditive.

Cite your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 33 journal articles for your research on the topic "Perception spatiale auditive."

Next to every article in the list of references, there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen article in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the online abstract, if these details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

de Vos, Jeroen. "Listen to the Absent: Binaural Inquiry in Auditive Spatial Perception." Leonardo Music Journal 26 (December 2016): 7–9. http://dx.doi.org/10.1162/lmj_a_00958.

Abstract:
If space can be perceived through sound, then recording and playback techniques allow capturing a unique spatial moment for later retrieval. The notion of spectrality as the invisible visible can be used to explain the embodiment of an auditive momentum in a space that is ultimately absent. This empirical study presents the results of five structured interviews in which interviewees are confronted with three binaural spatial recordings to reflect on the perception of dwelling in a spectral space: a space that is not there.
2

Recanzone, Gregg H. "Auditory Influences on Visual Temporal Rate Perception." Journal of Neurophysiology 89, no. 2 (February 1, 2003): 1078–93. http://dx.doi.org/10.1152/jn.00706.2002.

Abstract:
Visual stimuli are known to influence the perception of auditory stimuli in spatial tasks, giving rise to the ventriloquism effect. These influences can persist in the absence of visual input following a period of exposure to spatially disparate auditory and visual stimuli, a phenomenon termed the ventriloquism aftereffect. It has been speculated that the visual dominance over audition in spatial tasks is due to the superior spatial acuity of vision compared with audition. If that is the case, then the auditory system should dominate visual perception in a manner analogous to the ventriloquism effect and aftereffect if one uses a task in which the auditory system has superior acuity. To test this prediction, the interactions of visual and auditory stimuli were measured in a temporally based task in normal human subjects. The results show that the auditory system has a pronounced influence on visual temporal rate perception. This influence was independent of the spatial location, spectral bandwidth, and intensity of the auditory stimulus. The influence was, however, strongly dependent on the disparity in temporal rate between the two stimulus modalities. Further, aftereffects were observed following approximately 20 min of exposure to temporally disparate auditory and visual stimuli. These results show that the auditory system can strongly influence visual perception and are consistent with the idea that bimodal sensory conflicts are dominated by the sensory system with the greater acuity for the stimulus parameter being discriminated.
3

Keough, Megan, Donald Derrick, and Bryan Gick. "Cross-Modal Effects in Speech Perception." Annual Review of Linguistics 5, no. 1 (January 14, 2019): 49–66. http://dx.doi.org/10.1146/annurev-linguistics-011718-012353.

Abstract:
Speech research during recent years has moved progressively away from its traditional focus on audition toward a more multisensory approach. In addition to audition and vision, many somatosenses including proprioception, pressure, vibration, and aerotactile sensation are all highly relevant modalities for experiencing and/or conveying speech. In this article, we review both long-standing cross-modal effects stemming from decades of audiovisual speech research and new findings related to somatosensory effects. Cross-modal effects in speech perception to date have been found to be constrained by temporal congruence and signal relevance, but appear to be unconstrained by spatial congruence. The literature reveals that, far from taking place in a one-, two-, or even three-dimensional space, speech occupies a highly multidimensional sensory space. We argue that future research in cross-modal effects should expand to consider each of these modalities both separately and in combination with other modalities in speech.
4

Bertonati, Giorgia, Maria Bianca Amadeo, Claudio Campus, and Monica Gori. "Auditory speed processing in sighted and blind individuals." PLOS ONE 16, no. 9 (September 22, 2021): e0257676. http://dx.doi.org/10.1371/journal.pone.0257676.

Abstract:
Multisensory experience is crucial for developing a coherent perception of the world. In this context, vision and audition are essential tools to scaffold spatial and temporal representations, respectively. Since speed encompasses both space and time, investigating this dimension in blindness allows deepening the relationship between sensory modalities and the two representation domains. In the present study, we hypothesized that visual deprivation influences the use of spatial and temporal cues underlying acoustic speed perception. To this end, ten early blind and ten blindfolded sighted participants performed a speed discrimination task in which spatial, temporal, or both cues were available to infer moving sounds’ velocity. The results indicated that both sighted and early blind participants preferentially relied on temporal cues to determine stimuli speed, by following an assumption that identified as faster those sounds with a shorter duration. However, in some cases, this temporal assumption produces a misperception of the stimulus speed that negatively affected participants’ performance. Interestingly, early blind participants were more influenced by this misleading temporal assumption than sighted controls, resulting in a stronger impairment in the speed discrimination performance. These findings demonstrate that the absence of visual experience in early life increases the auditory system’s preference for the time domain and, consequentially, affects the perception of speed through audition.
5

Cui, Qi N., Babak Razavi, William E. O'Neill, and Gary D. Paige. "Perception of Auditory, Visual, and Egocentric Spatial Alignment Adapts Differently to Changes in Eye Position." Journal of Neurophysiology 103, no. 2 (February 2010): 1020–35. http://dx.doi.org/10.1152/jn.00500.2009.

Abstract:
Vision and audition represent the outside world in spatial synergy that is crucial for guiding natural activities. Input conveying eye-in-head position is needed to maintain spatial congruence because the eyes move in the head while the ears remain head-fixed. Recently, we reported that the human perception of auditory space shifts with changes in eye position. In this study, we examined whether this phenomenon is 1) dependent on a visual fixation reference, 2) selective for frequency bands (high-pass and low-pass noise) related to specific auditory spatial channels, 3) matched by a shift in the perceived straight-ahead (PSA), and 4) accompanied by a spatial shift for visual and/or bimodal (visual and auditory) targets. Subjects were tested in a dark echo-attenuated chamber with their heads fixed facing a cylindrical screen, behind which a mobile speaker/LED presented targets across the frontal field. Subjects fixated alternating reference spots (0, ±20°) horizontally or vertically while either localizing targets or indicating PSA using a laser pointer. Results showed that the spatial shift induced by ocular eccentricity is 1) preserved for auditory targets without a visual fixation reference, 2) generalized for all frequency bands, and thus all auditory spatial channels, 3) paralleled by a shift in PSA, and 4) restricted to auditory space. Findings are consistent with a set-point control strategy by which eye position governs multimodal spatial alignment. The phenomenon is robust for auditory space and egocentric perception, and highlights the importance of controlling for eye position in the examination of spatial perception and behavior.
6

Radić-Šestić, Marina, Mia Šešum, and Ljubica Isaković. "The phenomenon of signed music in Deaf culture." Specijalna edukacija i rehabilitacija 20, no. 4 (2021): 259–71. http://dx.doi.org/10.5937/specedreh20-34296.

Abstract:
Introduction. Music in the Deaf community is a socio-cultural phenomenon that depicts a specific identity and way of experiencing the world, which is just as diverse, rich and meaningful as that of members of any other culture. Objective. The aim of this paper was to point out the historical and socio-cultural frameworks, complexity, richness, specific elements, types and forms of musical expression of members of the Deaf community. Methods. The applied methods included comparative analysis, evaluation, and deduction and induction system. Results. Due to limitations or a lack of the auditive component, the members of Deaf culture use different communication tools, such as speech, pantomime, facial expressions and sign language. Signed music, as a phenomenon, is an artistic form that does not have a long history. However, since the 1990s and with technological development, it has been gaining greater interest and acknowledgement within the Deaf community and among the hearing audience. Signed music uses specific visuo-spatial-kinaesthetic and auditive elements in expression, such as rhythm, dynamism, rhyme, expressiveness, iconicity, intensity of the musical perception and the combination of the role of the performer. Conclusion. Signed music as a phenomenon is an art form that incorporates sign poetic characteristics (lyrical contents), visual musical elements and dance.
7

Tian, Yapeng. "Towards Unified, Explainable, and Robust Multisensory Perception." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15456. http://dx.doi.org/10.1609/aaai.v37i13.26823.

Abstract:
Humans perceive surrounding scenes through multiple senses with multisensory integration. For example, hearing helps capture the spatial location of a racing car behind us; seeing peoples' talking faces can strengthen our perception of their speech. However, today's state-of-the-art scene understanding systems are usually designed to rely on a single audio or visual modality. Ignoring multisensory cooperation has become one of the key bottlenecks in creating intelligent systems with human-level perception capability, which impedes the real-world applications of existing scene understanding models. To address this limitation, my research has pioneered marrying computer vision with computer audition to create multimodal systems that can learn to understand audio and visual data. In particular, my current research focuses on asking and solving fundamental problems in a fresh research area: audio-visual scene understanding and strives to develop unified, explainable, and robust multisensory perception machines. The three themes are distinct yet interconnected, and all of them are essential for designing powerful and trustworthy perception systems. In my talk, I will give a brief overview about this new research area and then introduce my works in the three research thrusts.
8

Plaza, Paula, Isabel Cuevas, Cécile Grandin, Anne G. De Volder, and Laurent Renier. "Looking into Task-Specific Activation Using a Prosthesis Substituting Vision with Audition." ISRN Rehabilitation 2012 (February 6, 2012): 1–15. http://dx.doi.org/10.5402/2012/490950.

Abstract:
A visual-to-auditory sensory substitution device initially developed for the blind is known to allow visual-like perception through sequential exploratory strategies. Here we used functional magnetic resonance imaging (fMRI) to test whether processing the location versus the orientation of simple (elementary) “visual” stimuli encoded into sounds using the device modulates the brain activity within the dorsal visual stream in the absence of sequential exploration of these stimuli. Location and orientation detection with the device induced a similar recruitment of frontoparietal brain areas in blindfolded sighted subjects as the corresponding tasks using the same stimuli in the same subjects in vision. We observed a similar preference of the right superior parietal lobule for spatial localization over orientation processing in both sensory modalities. This provides evidence that the parietal cortex activation during the use of the prosthesis is task related and further indicates the multisensory recruitment of the dorsal visual pathway in spatial processing.
9

Bueti, Domenica, and Vincent Walsh. "The parietal cortex and the representation of time, space, number and other magnitudes." Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1525 (July 12, 2009): 1831–40. http://dx.doi.org/10.1098/rstb.2009.0028.

Abstract:
The development of sub-disciplines within cognitive neuroscience follows common sense categories such as language, audition, action, memory, emotion and perception among others. There are also well-established research programmes into temporal perception, spatial perception and mathematical cognition that also reflect the subjective impression of how experience is constructed. There is of course no reason why the brain should respect these common sense, text book divisions and, here, we discuss the contention that generalized magnitude processing is a more accurate conceptual description of how the brain deals with information about time, space, number and other dimensions. The roots of the case for linking magnitudes are based on the use to which magnitude information is put (action), the way in which we learn about magnitudes (ontogeny), shared properties and locations of magnitude processing neurons, the effects of brain lesions and behavioural interference studies. Here, we assess this idea in the context of a theory of magnitude, which proposed common processing mechanisms of time, space, number and other dimensions.
10

Hong, Fangfang, Stephanie Badde, and Michael S. Landy. "Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception." PLOS Computational Biology 17, no. 11 (November 15, 2021): e1008877. http://dx.doi.org/10.1371/journal.pcbi.1008877.

Abstract:
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
11

Virsu, V., and M. Aura. "Implicit Learning of Temporal Discriminations in Perception." Perception 26, no. 1_suppl (August 1997): 30. http://dx.doi.org/10.1068/v970226.

Abstract:
We measured the temporal accuracy of signal transfer in the brain by means of periodic pulse stimuli in various sensory modalities using an adaptive threshold algorithm. Trains of supra-threshold signal pairs in 0° or 180° phase shifts appeared, and the subject indicated whether the signals of each pair in a train were simultaneous or not at various nominal frequencies of the pairs. The signals were spatially separate flashes of light, clicks, tactile pulses, or combinations thereof in intermodal comparisons. Temporal discrimination thresholds involving one signal in central photopic vision and the other in audition or tactile presentation to the finger tips did not exceed frequencies from 3 to 8 Hz, and in vision alone the average synchronism thresholds were about 10 Hz (SOA 0.05 s with 0.008 s pulses). The practice derived from 9 experimental sessions during 6 weeks improved temporal accuracy by factors ranging from 1.2 to 2.0 on the average in a sample of 33 naive subjects (university students), although no explicit feedback was given. The practice effect was lasting, for the average performance decrement was only 9% in 7 months. Thus, a considerable temporal modifiability must exist in the brain because a large learning effect was found in the simple temporal synchronism discrimination tasks.
12

Jacques, Guillaume, and Aimée Lahaussois. "The auditory demonstrative in Khaling." Studies in Language 38, no. 2 (August 8, 2014): 393–404. http://dx.doi.org/10.1075/sl.38.2.05jac.

Abstract:
This paper shows the existence of an auditory demonstrative in Khaling. The use of the demonstrative is illustrated via examples taken from narrative discourse. It is described here within the context of the spatial demonstrative system, in order to demonstrate how it is specifically used to highlight that perception of the referent is attained using the sense of audition, regardless of the visibility of the object in question. Khaling appears to be unique in having a true auditory demonstrative and it is hoped that this description will prompt field linguists to refine the description of the contrasts found within the demonstrative systems of languages around the world.
13

Berger, Christopher C., and H. Henrik Ehrsson. "Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception." Psychological Science 29, no. 6 (April 10, 2018): 926–35. http://dx.doi.org/10.1177/0956797617748959.

Abstract:
Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect—a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli—is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.
14

Shayman, Corey S., Robert J. Peterka, Frederick J. Gallun, Yonghee Oh, Nai-Yuan N. Chang, and Timothy E. Hullar. "Frequency-dependent integration of auditory and vestibular cues for self-motion perception." Journal of Neurophysiology 123, no. 3 (March 1, 2020): 936–44. http://dx.doi.org/10.1152/jn.00307.2019.

Abstract:
Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1–1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s to 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s to 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depended on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depended on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings, indicating its significance in tasks requiring self-orientation. NEW & NOTEWORTHY Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood. 
Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
15

Spence, Charles. "Attending to the Chemical Senses." Multisensory Research 32, no. 7 (2019): 635–64. http://dx.doi.org/10.1163/22134808-20191468.

Abstract:
Theorizing around the topic of attention and its role in human information processing largely emerged out of research on the so-called spatial senses: vision, audition, and to a lesser extent, touch. Thus far, the chemical senses have received far less research interest (or should that be attention) from those experimental psychologists and cognitive neuroscientists interested in the topic. Nevertheless, this review highlights the key role that attentional selection also plays in chemosensory information processing and awareness. Indeed, many of the same theoretical approaches/experimental paradigms that were originally developed in the context of the spatial senses, can be (and in some cases already have been) extended to provide a useful framework for thinking about the perception of taste/flavour. Furthermore, a number of those creative individuals interested in modifying the perception of taste/flavour by manipulating product-extrinsic cues (such as, for example, music in the case of sonic seasoning) are increasingly looking to attentional accounts in order to help explain the empirical phenomena that they are starting to uncover. However, separate from its role in explaining sonic seasoning, gaining a better understanding of the role of attentional distraction in modulating our eating/drinking behaviours really ought to be a topic of growing societal concern. This is because distracted diners (e.g., those who eat while watching TV, fiddling with a mobile device or smartphone, or even while driving) consume significantly more than those who mindfully pay attention to the sensations associated with eating and drinking.
16

Wilbiks, Jonathan M. P., and Benjamin Dyson. "Effects of within-modal congruency, cross-modal congruency and temporal asynchrony on the perception of perceived audio–visual distance." Seeing and Perceiving 25 (2012): 178. http://dx.doi.org/10.1163/187847612x648080.

Abstract:
The factors we use to determine whether information from separate modalities should be assigned to the same source include task demands, the spatial and temporal coincidence of the composite signals, and, whether the signals are congruent with one another. In a series of experiments, we examined how temporal asynchrony and congruency interact in a competitive binding situation. Across a series of experiments, participants assigned a temporally roving auditory stimulus to competing primary or secondary visual anchors (VAV), or, a temporally roving visual stimulus to competing primary or secondary auditory anchors (AVA), based on causality. Congruency was defined in terms of simulated distance both within- and between-modalities (visual: small, auditory: quiet = far; visual: large, auditory: loud = near). Strong temporal effects were revealed, with differences between VAV and AVA conditions reflecting natural auditory lag tolerance for binding. During VAV conditions, binding was influenced only by visual congruency. During AVA conditions, binding was influenced by audio–visual congruency. These differences did not seem to be due to the relative discriminability between visual and auditory magnitude. The data reiterate the dominance of audition in the time domain (showing stronger temporal effects), the dominance of vision in the spatial domain (showing stronger congruency effects), and, the assistance of domain-inappropriate modalities by domain-appropriate modalities. A special case of congruency in terms of visual looming will also be discussed, along with the potential alerting properties of high magnitude stimuli.
17

Martolini, Chiara, Giulia Cappagli, Claudio Campus, and Monica Gori. "Shape Recognition With Sounds: Improvement in Sighted Individuals After Audio–Motor Training." Multisensory Research 33, no. 4-5 (March 17, 2020): 417–31. http://dx.doi.org/10.1163/22134808-20191460.

Abstract:
Recent studies have demonstrated that audition used to complement or substitute visual feedback is effective in conveying spatial information, e.g., sighted individuals can understand the curvature of a shape when solely auditory input is provided. Recently we also demonstrated that, in the absence of vision, auditory feedback of body movements can enhance spatial perception in visually impaired adults and children. In the present study, we assessed whether sighted adults can also improve their spatial abilities related to shape recognition with an audio-motor training based on the idea that the coupling of auditory and motor information can further refine the representation of space when vision is missing. Auditory shape recognition was assessed in 22 blindfolded sighted adults with an auditory task requiring participants to identify four shapes by means of the sound conveyed through a set of consecutive loudspeakers embedded on a fixed two-dimensional vertical array. We divided participants into two groups of 11 adults each, performing a training session in two different modalities: active audio-motor training (experimental group) and passive auditory training (control group). The audio-motor training consisted in the reproduction of specific movements with the arm by relying on the sound produced by an auditory source positioned on the wrist of participants. Results showed that sighted individuals improved the recognition of auditory shapes only after active training, suggesting that audio-motor feedback can be an effective tool to enhance spatial representation when visual information is lacking.
18

Winter, Bodo, and Benjamin Bergen. "Language comprehenders represent object distance both visually and auditorily." Language and Cognition 4, no. 1 (March 2012): 1–16. http://dx.doi.org/10.1515/langcog-2012-0001.

Abstract:
When they process sentences, language comprehenders activate perceptual and motor representations of described scenes. On the "immersed experiencer" account, comprehenders engage motor and perceptual systems to create experiences that someone participating in the described scene would have. We tested two predictions of this view. First, the distance of mentioned objects from the protagonist of a described scene should produce perceptual correlates in mental simulations. And second, mental simulation of perceptual features should be multimodal, like actual perception of such features. In Experiment 1, we found that language about objects at different distances modulated the size of visually simulated objects. In Experiment 2, we found a similar effect for volume in the auditory modality. These experiments lend support to the view that language-driven mental simulation encodes experiencer-specific spatial details. The fact that we obtained similar simulation effects for two different modalities—audition and vision—confirms the multimodal nature of mental simulations during language understanding.
19

Opoku-Baah, Collins, Adriana M. Schoenhaut, Sarah G. Vassall, David A. Tovar, Ramnarayan Ramachandran, and Mark T. Wallace. "Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review." Journal of the Association for Research in Otolaryngology 22, no. 4 (May 20, 2021): 365–86. http://dx.doi.org/10.1007/s10162-021-00789-0.

Abstract:
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions derived from this combination of information and that shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the state of our understanding at this point in time regarding this topic. Following a general introduction, the review is divided into 5 sections. In the first section, we review the psychophysical evidence in humans regarding vision’s influence in audition, making the distinction between vision’s ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision’s ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built based on available psychophysical data and that seek to provide greater mechanistic insights into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures.
It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception—scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
20

Viaud-Delmon, Isabelle, Jane Mason, Karim Haddad, Markus Noisternig, Frédéric Bevilacqua, and Olivier Warusfel. "A Sounding Body in a Sounding Space: the Building of Space in Choreography – Focus on Auditory-motor Interactions." Dance Research 29, supplement (November 2011): 433–49. http://dx.doi.org/10.3366/drs.2011.0027.

Abstract:
In the last 4 years, we have developed a partnership between dance and neuroscience to study the relationships between body space in dance and the surrounding space, and the link between movement and audition as experienced by the dancer. The opportunity to work with a dancer/choreographer, an expert in movement, gives neuroscientists better access to the significance of the auditory-motor loop and its role in perception of the surrounding space. Given that a dancer has a very strong sense of body ownership (probably through a very accurate dynamic body schema) ( Walsh et al. 2011 ), she is an ideal subject to investigate the feeling of controlling one's own body movements, and, through them, events in the external environment ( Moore et al. 2009 , Jola et al in press). We conducted several work sessions, which brought together a choreographer/dancer, a neuroscientist, a composer, and two researchers in acoustics and audio signal processing. These sessions were held at IRCAM (Institute for Research and Coordination Acoustic/Music, Paris) in a variable-acoustics concert hall equipped with a Wave Field Synthesis (WFS) sound reproduction system and infrared cameras for motion capture. During these work sessions, we concentrated on two specific questions: 1) is it possible to extend the body space of the dancer through auditory feedback ( Maravita and Iriki 2004 )? and 2) can we alter the dancer's perception of space by altering perceptions associated with movements? We used an interactive setup in which a collection of pre-composed sound events (individual sounds or musical sentences) could be transformed and rendered in real time according to the movements and the position of the dancer, that were sensed by markers on her body and detected by a motion tracking system. The transformations applied to the different sound components through the dancer's movement and position concerned not only musical parameters such as intensity, timbre, etc. 
but also the spatial parameters of the sounds. The technology we used allowed us to control their trajectory in space, apparent distance and the sound reverberation ambiance. We elaborated a catalogue of interaction modes with auditory settings that changed according to the dancer's movements. An interaction mode is defined by different mappings of position, posture or gesture of the dancer to musical and spatial parameters. For instance, a sound event may be triggered if the dancer is within a certain region or if she performs a predefined gesture. More elaborated modes involved the modulation of musical parameters by continuous movements of the dancer. The pertinence at a perceptive and cognitive level of the catalogue of interactions has been tested throughout the sessions. We observed that the detachable markers could be used to create a perception of extended body space, and that the performer perceived the stage space differently according to the auditory feedback of her action. The dancer reported that each experience with the technology shed light on her need for greater awareness and exploration of her relationships with space. Real-time interactivity with sound heightened her physical awareness – as though the stage itself took on a role and became another character.
21

Tajadura-Jiménez, Ana, Aleksander Väljamäe, Iwaki Toshima, Toshitaka Kimura, Manos Tsakiris, and Norimichi Kitagawa. "Action sounds recalibrate perceived tactile distance." Seeing and Perceiving 25 (2012): 217. http://dx.doi.org/10.1163/187847612x648431.

Abstract:
Almost every bodily movement, from the most complex to the most mundane, such as walking, can generate impact sounds that contain spatial information of high temporal resolution. Despite the conclusive evidence about the role that the integration of vision, touch and proprioception plays in updating body-representations, hardly any study has looked at the contribution of audition. We show that the representation of a key property of one’s body, like its length, is affected by the sound of one’s actions. Participants tapped on a surface while progressively extending their right arm sideways, and in synchrony with each tap participants listened to a tapping sound. In the critical condition, the sound originated at double the distance at which participants actually tapped. After exposure to this condition, tactile distances on the test right arm, as compared to distances on the reference left arm, felt bigger than those before the exposure. No evidence of changes in tactile distance reports was found at the quadruple tapping sound distance or the asynchronous auditory feedback conditions. Our results suggest that tactile perception is referenced to an implicit body-representation which is informed by auditory feedback. This is the first evidence of the contribution of self-produced sounds to body-representation, addressing the auditory-dependent plasticity of body-representation and its spatial boundaries.
22

Cohen, Michael. "Exclude and Include for Audio Sources and Sinks: Analogs of Mute & Solo Are Deafen & Attend." Presence: Teleoperators and Virtual Environments 9, no. 1 (February 2000): 84–96. http://dx.doi.org/10.1162/105474600566637.

Abstract:
Non-immersive perspectives in virtual environments enable flexible paradigms of perception, especially in the context of frames of reference for conferencing and musical audition. Traditional mixing idioms for enabling and disabling various audio sources employ mute and solo functions, that, along with cue, selectively disable or focus on respective channels. Exocentric interfaces which explicitly model not only sources but also sinks, motivate the generalization of mute and solo (or cue) to exclude and include, manifested for sinks as deafen and attend (confide and harken). Such functions, which narrow stimuli by explicitly blocking out and/or concentrating on selected entities, can be applied not only to other users' sinks for privacy, but also to one's own sinks for selective attendance or presence. Multiple sinks are useful in groupware, where a common environment implies social inhibitions to rearranging shared sources like musical voices or conferees, as well as individual sessions in which spatial arrangement of sources, like the configuration of a concert orchestra, has mnemonic value. A taxonomy of modal narrowcasting functions is proposed, and an audibility protocol is described, comprising revoke, renounce, grant, and claim methods, invocable by these narrowcasting commands to control superposition of soundscapes.
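The paper's generalization of mute/solo for sources into deafen/attend for sinks can be thought of as a pair of predicates over per-entity exclude/include attributes. Below is a minimal, hypothetical sketch of that logic (the `Entity` class and `audible` helper are illustrative simplifications, not Cohen's formalism or audibility protocol):

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """A source or sink in a shared soundscape (hypothetical model)."""
    name: str
    excluded: bool = False  # mute (source) / deafen (sink)
    included: bool = False  # solo or cue (source) / attend (sink)

def audible(entity: Entity, peers: list) -> bool:
    """An entity is active iff it is not excluded and, whenever any
    entity in the group is explicitly included, it is itself included."""
    if entity.excluded:
        return False
    if any(p.included for p in peers):
        return entity.included
    return True

# Example: soloing one source implicitly suppresses the others.
a, b, c = Entity("violin"), Entity("cello"), Entity("flute")
b.included = True
group = [a, b, c]
print([audible(e, group) for e in group])  # [False, True, False]
```

The same predicate applies symmetrically to sinks, which is what lets multiple sinks in groupware be selectively attended without rearranging shared sources.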
23

Yamasaki, Daiki, and Hiroshi Ashida. "Size–Distance Scaling With Absolute and Relative Auditory Distance Information." Multisensory Research 33, no. 1 (July 1, 2020): 109–26. http://dx.doi.org/10.1163/22134808-20191467.

Abstract:
In the dynamic 3D space, it is critical for survival to perceive size of an object and rescale it with distance from an observer. Humans can perceive distance via not only vision but also audition, which plays an important role in the localization of objects, especially in visually ambiguous environments. However, whether and how auditory distance information contributes to visual size perception is not well understood. To address this issue, we investigated the efficiency of size–distance scaling by using auditory distance information that was conveyed by binaurally recorded auditory stimuli. We examined the effects of absolute distance information of a single sound sequence (Experiment 1) and relative distance information between two sound sequences (Experiment 2) on visual size estimation performances in darkened and well-lit environments. We demonstrated that humans could perform size–distance disambiguation by using auditory distance information even in darkness. Curiously, relative distance information was more efficient in size–distance scaling than absolute distance information, suggesting a high reliance on relative auditory distance information in our visual spatial experiences. The results highlight a benefit of audiovisual interaction for size–distance processing and calibration of external events under visually degraded situations.
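The size–distance scaling probed here follows the standard geometric relation S = 2d·tan(θ/2) between physical size S, viewing distance d, and visual angle θ. A short sketch under that textbook assumption (the specific numbers are illustrative, not the experiment's stimuli):

```python
import math

def scaled_size(visual_angle_deg: float, distance_m: float) -> float:
    """Physical size implied by a given visual angle at a given
    distance: S = 2 * d * tan(theta / 2)."""
    theta = math.radians(visual_angle_deg)
    return 2.0 * distance_m * math.tan(theta / 2.0)

# The same 2-degree retinal image implies a larger object when
# auditory cues place it farther away.
near = scaled_size(2.0, 1.0)  # ~0.035 m at 1 m
far = scaled_size(2.0, 4.0)   # ~0.140 m at 4 m
print(round(far / near, 1))   # 4.0 — estimates rescale linearly with distance
```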
24

Solecka, Iga, Dietmar Bothmer, and Arkadiusz Głogowski. "Recognizing Landscapes for the Purpose of Sustainable Development—Experiences from Poland." Sustainability 11, no. 12 (June 21, 2019): 3429. http://dx.doi.org/10.3390/su11123429.

Abstract:
Landscape identification forms a base for landscape management and sustainable land use policy. According to the European Landscape Convention, each Member State needs to recognize the landscapes as an essential component of people’s surroundings. Poland developed a method for landscape auditing that will be conducted for landscapes in the whole country. The identification of landscape units is based on landscape type characteristics and spatial data that is layered and analyzed in order to identify landscape units. In this paper, we aim to test the possibilities of automatic landscape identification. We take the assumptions designed for landscape identification for the needs of the audit. Based on the “Typology of Poland’s current landscapes”, we design a process to identify landscape units with the use of the aggregation of land cover data and multivariable analysis. We use tools in an ArcGIS environment to design a process that will support human perception. Our approach is compared with the approach presented in the method designed for a landscape audit in order to be used for landscape unit identification at the municipal level. The case study area is the municipality of Siechnice within the suburban area of the city of Wrocław, an example of a changing landscape under suburbanization pressure. We conclude that both approaches can support each other in the landscape identification process.
25

Cahyadi, Tio Natalia Sari, Edi Purwanto, and Widjayanti Widjayanti. "IMAJINASI PETA MENTAL PENYANDANG DISABILITAS NETRA TERHADAP KAWASAN SIMPANG LIMA SEMARANG." Jurnal Arsitektur ARCADE 7, no. 2 (July 1, 2023): 273. http://dx.doi.org/10.31848/arcade.v7i2.3141.

Abstract:
Abstract: The Simpang Lima area is the center and icon of Semarang City, in the form of a public space that has various legitimacy and imaginability in the minds of each observer. In this study, the visually impaired person were selected as a sample of observers, highlighting how visually impaired person are able to recognize elements of urban identity in their own way. Using a qualitative descriptive approach method to find out how these elements of urban identity can be understood from the area structure by exploration, as well as how they feel after they experience a process of interaction in that environment. The data used in this study were obtained from interviews with several sources, including persons with visual disabilities as the main informants and administrators of the disability community and the Semarang city government as supporting informants. In addition, this research is also supported by data from various other literature related to visual disabilities and the development of the Simpang Lima area and is supported by the image of the city theory, mental maps or spatial cognition and neuroscience theory.The research results have proven, that the elements of identity in the image of a city that stand out and can be recognized by Visually Impaired Person are Landmarks and Path elements. That the image of the city's environment, ease of access, availability of facilities, involvement in the planning process, and adequate navigational features are very important in increasing the ability of Visually Impaired Person to be orientated. In addition, psychological factors such as feelings of security and comfort also play an important role in influencing the breadth of the exploration area for Visually Impaired Person. 
On the other hand, a neuroscience-theory approach shows that the brain's nervous system of Visually Impaired Persons has the ability to process multisensory information from the built environment, including tactile and auditory information, which then produces emotional and meaningful responses to that environment. There is also a relationship between past memories and the ability of Visually Impaired Persons to recognize elements of urban identity, and the concept of attachment to a place can also influence their perception. This research enriches insights for planning a more inclusive and sustainable urban environment. The research area in this study is limited to the area along the Simpang Lima Semarang roundabout to a 100-meter radius on the five road branches.
Keywords: Visually Impaired Person, Mental Mapping, Simpang Lima Semarang Area
26

Netzer, Ophir, Benedetta Heimler, Amir Shur, Tomer Behor, and Amir Amedi. "Backward spatial perception can be augmented through a novel visual-to-auditory sensory substitution algorithm." Scientific Reports 11, no. 1 (June 7, 2021). http://dx.doi.org/10.1038/s41598-021-88595-9.

Abstract:
Can humans extend and augment their natural perceptions during adulthood? Here, we address this fascinating question by investigating the extent to which it is possible to successfully augment visual spatial perception to include the backward spatial field (a region where humans are naturally blind) via other sensory modalities (i.e., audition). We thus developed a sensory-substitution algorithm, the “Topo-Speech” which conveys identity of objects through language, and their exact locations via vocal-sound manipulations, namely two key features of visual spatial perception. Using two different groups of blindfolded sighted participants, we tested the efficacy of this algorithm to successfully convey location of objects in the forward or backward spatial fields following ~ 10 min of training. Results showed that blindfolded sighted adults successfully used the Topo-Speech to locate objects on a 3 × 3 grid either positioned in front of them (forward condition), or behind their back (backward condition). Crucially, performances in the two conditions were entirely comparable. This suggests that novel spatial sensory information conveyed via our existing sensory systems can be successfully encoded to extend/augment human perceptions. The implications of these results are discussed in relation to spatial perception, sensory augmentation and sensory rehabilitation.
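The Topo-Speech principle, object identity via a spoken word and location via manipulation of that word's sound, can be sketched for a 3 × 3 grid like the one used in the task. The specific mapping below (column to stereo pan, row to a pitch shift) is an illustrative assumption, not the algorithm's published parameters:

```python
def sonify_location(row: int, col: int) -> tuple:
    """Map a 3x3 grid cell to (stereo pan, pitch multiplier) with which
    the spoken object name would be rendered. row 0 is the top row,
    col 0 the leftmost column; the mapping itself is hypothetical."""
    assert row in (0, 1, 2) and col in (0, 1, 2)
    pan = float(col - 1)            # -1 = left, 0 = center, +1 = right
    pitch = 2.0 ** ((1 - row) / 2)  # top row a half-octave above center
    return pan, pitch

# A spoken word such as "cup" would then be played with these
# spatial rendering parameters.
print(sonify_location(0, 2))  # top-right cell: pan 1.0, pitch ~1.414
```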
27

Grigorii, Roman V., J. Edward Colgate, and Roberta Klatzky. "The spatial profile of skin indentation shapes tactile perception across stimulus frequencies." Scientific Reports 12, no. 1 (August 1, 2022). http://dx.doi.org/10.1038/s41598-022-17324-7.

Abstract:
Multiple human sensory systems exhibit sensitivity to spatial and temporal variations of physical stimuli. Vision has evolved to offer high spatial acuity with limited temporal sensitivity, while audition has developed complementary characteristics. Neural coding in touch has been believed to transition from a spatial to a temporal domain in relation to surface scale, such that coarse features (e.g., a braille cell or corduroy texture) are coded as spatially distributed signals, while fine textures (e.g., fine-grit sandpaper) are encoded by temporal variation. However, the interplay between the two domains is not well understood. We studied tactile encoding with a custom-designed pin array apparatus capable of deforming the fingerpad at 5 to 80 Hz in each of 14 individual locations spaced 2.5 mm apart. Spatial variation of skin indentation was controlled by moving each of the pins at the same frequency and amplitude, but with phase delays distributed across the array. Results indicate that such stimuli enable rendering of shape features at actuation frequencies up to 20 Hz. Even at frequencies > 20 Hz, however, spatial variation of skin indentation continues to play a vital role. In particular, perceived roughness is affected by spatial variation within the fingerpad even at 80 Hz. We provide evidence that perceived roughness is encoded via a summary measure of skin displacement. Relative displacements in neighboring pins of less than 10 µm generate skin stretch, which regulates the roughness percept.
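The stimulus construction described — every pin driven at the same frequency and amplitude, with phase delays distributed along the array — amounts to rendering a travelling wave x_i(t) = A·sin(2πft + φ_i) across the fingerpad. A sketch with illustrative amplitude and phase-step values (only the 14-pin count and the 5–80 Hz range come from the abstract):

```python
import math

def pin_displacements(t, n_pins=14, freq_hz=20.0, amp_um=50.0,
                      phase_step=math.pi / 4):
    """Displacement (in micrometers) of each pin at time t: all pins
    share one frequency and amplitude, but the phase delay grows
    linearly along the array, so the indentation profile travels
    across the skin. amp_um and phase_step are illustrative values."""
    return [amp_um * math.sin(2 * math.pi * freq_hz * t + i * phase_step)
            for i in range(n_pins)]

x = pin_displacements(t=0.0)
print(round(x[0], 1), round(x[2], 1))  # 0.0 50.0 — pin 2 leads pin 0 by pi/2
```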
28

Hildebrandt, Alexandra, Eric Grießbach, and Rouwen Cañal-Bruland. "Auditory perception dominates in motor rhythm reproduction." Perception, April 19, 2022, 030100662210936. http://dx.doi.org/10.1177/03010066221093604.

Abstract:
It is commonly agreed that vision is more sensitive to spatial information, while audition is more sensitive to temporal information. When both visual and auditory information are available simultaneously, the modality appropriateness hypothesis predicts that, depending on the task, the most appropriate (i.e., reliable) modality dominates perception. While previous research mainly focused on discrepant information from different sensory inputs to scrutinize the modality appropriateness hypothesis, the current study aimed at investigating the modality appropriateness hypothesis when multimodal information was provided in a nondiscrepant and simultaneous manner. To this end, participants performed a temporal rhythm reproduction task for which the auditory modality is known to be the most appropriate. The experiment comprised an auditory (i.e., beeps), a visual (i.e., flashing dots), and an audiovisual condition (i.e., beeps and dots simultaneously). Moreover, constant as well as variable interstimulus intervals were implemented. Results revealed higher accuracy and lower variability in the auditory condition for both interstimulus interval types when compared to the visual condition. More importantly, there were no differences between the auditory and the audiovisual condition across both interstimulus interval types. This indicates that the auditory modality dominated multimodal perception in the task, whereas the visual modality was disregarded and hence did not add to reproduction performance.
29

Jonasson, Kristin A., Amanda M. Adams, Alyson F. Brokaw, Michael D. Whitby, M. Teague O'Mara, and Winifred F. Frick. "A multisensory approach to understanding bat responses to wind energy developments." Mammal Review, January 11, 2024. http://dx.doi.org/10.1111/mam.12340.

Abstract:
Millions of bats are killed at wind energy facilities worldwide, yet the behavioural mechanisms underlying why bats are vulnerable to wind turbines remain unclear. Anthropogenic stimuli that alter perceptions of the environment, known as sensory pollution, could create ecological traps and cause bat mortality at wind farms. We review the sensory abilities of bats to evaluate potential stimuli associated with wind farms and examine the role of spatial scale on the perceptual mechanisms of sensory pollutants associated with wind energy facilities. Audition, vision, somatosensation and olfaction are sensory modalities that bats use to perceive their environment, including wind farms and turbine structures, but they will not all be useful at the same spatial scales. Bats most likely use vision to perceive wind farms on the landscape, and obstruction lighting may be the first sensory cue to attract bats to wind farms from kilometres away. Research that assesses the risks posed by specific sensory pollutants, when conducted at the appropriate scale, can help identify solutions to reduce bat mortality, such as determining the attractiveness of obstruction lighting to bats at a landscape scale.
30

Hüg, Mercedes X., Fernando Bermejo, Fabián C. Tommasini, and Ezequiel A. Di Paolo. "Effects of guided exploration on reaching measures of auditory peripersonal space." Frontiers in Psychology 13 (October 20, 2022). http://dx.doi.org/10.3389/fpsyg.2022.983189.

Abstract:
Despite the recognized importance of bodily movements in spatial audition, few studies have integrated action-based protocols with spatial hearing in the peripersonal space. Recent work shows that tactile feedback and active exploration allow participants to improve performance in auditory distance perception tasks. However, the role of the different aspects involved in the learning phase, such as voluntary control of movement, proprioceptive cues, and the possibility of self-correcting errors, is still unclear. We study the effect of guided reaching exploration on perceptual learning of auditory distance in peripersonal space. We implemented a pretest-posttest experimental design in which blindfolded participants must reach for a sound source located in this region. They were divided into three groups that were differentiated by the intermediate training phase: Guided, an experimenter guides the participant’s arm to contact the sound source; Active, the participant freely explores the space until contacting the source; and Control, without tactile feedback. The effects of exploration feedback on auditory distance perception in the peripersonal space are heterogeneous. Both the Guided and Active groups change their performance. However, participants in the Guided group tended to overestimate distances more than those in the Active group. The response error of the Guided group corresponds to a generalized calibration criterion over the entire range of reachable distances, whereas the Active group made different adjustments for proximal and distal positions. The results suggest that guided exploration can induce changes on the boundary of the auditory reachable space. 
We postulate that aspects of agency such as initiation, control, and monitoring of movement, assume different degrees of involvement in both guided and active tasks, reinforcing a non-binary approach to the question of activity-passivity in perceptual learning and supporting a complex view of the phenomena involved in action-based learning.
31

Badde, Stephanie, Pia Ley, Siddhart S. Rajendran, Idris Shareef, Ramesh Kekunnaya, and Brigitte Röder. "Sensory experience during early sensitive periods shapes cross-modal temporal biases." eLife 9 (August 25, 2020). http://dx.doi.org/10.7554/elife.61238.

Abstract:
Typical human perception features stable biases such as perceiving visual events as later than synchronous auditory events. The origin of such perceptual biases is unknown. To investigate the role of early sensory experience, we tested whether a congenital, transient loss of pattern vision, caused by bilateral dense cataracts, has sustained effects on audio-visual and tactile-visual temporal biases and resolution. Participants judged the temporal order of successively presented, spatially separated events within and across modalities. Individuals with reversed congenital cataracts showed a bias towards perceiving visual stimuli as occurring earlier than auditory (Expt. 1) and tactile (Expt. 2) stimuli. This finding stood in stark contrast to normally sighted controls and sight-recovery individuals who had developed cataracts later in childhood: both groups exhibited the typical bias of perceiving vision as delayed compared to audition. These findings provide strong evidence that cross-modal temporal biases depend on sensory experience during an early sensitive period.
32

Shvadron, Shira, Adi Snir, Amber Maimon, Or Yizhar, Sapir Harel, Keinan Poradosu, and Amir Amedi. "Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device." Frontiers in Human Neuroscience 17 (March 2, 2023). http://dx.doi.org/10.3389/fnhum.2023.1058617.

Abstract:
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic for the blind areas of sighted individuals. We use it in this initial proof-of-concept study to test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants in this study were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes’ identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for the successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants were successful at a highly above chance level after a brief 1-h-long session of online training and one on-site training session of an average of 20 min. They could even draw a 2D representation of this image in some cases. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.
33

Candau, Joel. "Altricialité." Anthropen, 2018. http://dx.doi.org/10.17184/eac.anthropen.087.

Abstract:
Two facts reveal the deep nature of the human being: (i) a highly plastic brain and (ii) the imperious power of culture, which manifests itself not only in the diversity and intensity of its expression but also in the strong retroactive influence it exerts on the development of our cerebral architecture, which made culture possible in the first place. This developmental plasticity, summed up in the idea that "we inherit our brains; we acquire our minds" (Goldschmidt 2000), belongs to a more general process that ethologists call "altriciality." The term derives from the English altricial, itself from the Latin altrix: "she who nourishes," "wet nurse" (Gaffiot 1934). In its primary sense, altriciality means that a species is not immediately competent at birth, unlike so-called precocial species. This is the case, for example, of most passerine birds, which are born with their eyes closed and whose survival depends entirely on the help of those around them. The same applies to our species. In human newborns, however, a secondary altriciality is added to the primary one: our brain becomes fully competent (cognitively, emotionally, sensorially, and motorically) only late in development. The strength and duration of postnatal brain growth characterize this secondary altriciality. In terms of strength, the chimpanzee Pan troglodytes, the animal species phylogenetically closest to us, has a brain growth coefficient of 2.5 between birth and adulthood, versus 3.3 in humans (DeSilva and Lesnik 2008).
In terms of duration, it was long believed that human brain maturity coincided with puberty, but we now know that the period of overproduction and elimination of dendritic spines on the pyramidal neurons of the prefrontal cortex extends into the thirties (Petanjek et al. 2011). Beyond obstetric constraints, this prolonged maturation is probably due to the high metabolic costs of brain development (Goyal et al. 2014), a co-evolutionary process having favored spreading the energy expenditure over time (Kuzawa et al. 2014). This strong cerebral altriciality is specific to human beings: the genetic control exerted over the somatotopic organization of our cortex, over cerebral connectivity, and over the association areas is weaker than in the common chimpanzee. For example, two chimpanzee brothers will have more similar cerebral sulci than two human brothers, because the brains of the former are less receptive to environmental influences than those of members of our species (Gómez-Robles et al. 2015). This specificity of the human brain is just as important as its encephalization quotient (6.9 times higher than that of another mammal of the same weight, and 2.6 times that of a chimpanzee), its high number of neurons (86 billion versus 28 billion in the chimpanzee), the complexity of its connectivity (about 10^14 synapses), the neotenic changes in gene expression (Somel et al. 2009), and its complex architecture. In the human newborn, neurogenesis is complete, except in the subventricular zone, connected to the olfactory bulbs, and the subgranular zone, which extends from the dentate gyrus of the hippocampus (Eriksson et al. 1998). Yet although all the neurons are already present, the neonatal brain is less than 30% of its adult size.
Immediately after birth, brain growth continues at the same rate as in the fetal stage, reaching 50% of adult size around age 1 and 95% around age 10. This growth mainly involves the connections between neurons (synaptogenesis, but also the pruning of this interconnectivity, or synaptosis) and neocortical myelination. Every minute of a baby's life, Jean-Pierre Changeux (2002) reminds us, "more than two million synapses are put in place!" In total, 50% of these connections form after birth (Changeux 2003). This specificity of Homo sapiens has major anthropological significance. It exposes human beings so strongly to the influences of their environment that they naturally become hyper-social and hyper-cultural beings, something Malinowski (1922: 79-80) anticipated when he argued that our "mental states are shaped in a certain way" by the "institutions within which they develop." The brain's prolonged development allows a progressive "imprinting" of cerebral tissue by the physical and social environment (Changeux 1983), particularly during the phases of primary and secondary socialization. Human beings thus have "epigenetic dispositions toward cultural imprinting" (Changeux 2002). The social effects and evolutionary implications (Kuzawa and Bragg 2012) of such an aptitude are immense. Those around them must not only help newborns but also accompany children until their full development, the immaturity of adolescent brains being the source of their often impulsive character. This accompaniment of the child translates into changes in social structure, within the family and society as a whole, notably in the form of institutions for social and cultural learning.
Human beings are thus compelled to cooperate, first within their family and membership group, then in more open forms (see Cooperation). Born of evolutionary processes at least 200,000 years old (Neubauer et al. 2018), secondary altriciality gives us an adaptive advantage: unlike those of other species, our behaviors are not "set on rails" at birth, which makes them flexible in the face of changing environments, thereby favoring phenotypic and cultural diversity. This cerebral plasticity can produce the best. For example, just 15 months of musical training before the age of 7 can strengthen the connections between the two cerebral hemispheres (Schlaug et al. 1995) and induce other structural changes in regions serving motor, auditory, and visuospatial functions (Hyde et al. 2009). Early musical training also prevents hearing loss (White-Schwoch et al. 2013) and improves speech perception (Du and Zatorre 2017). However, as is often the case in evolution, there is a price to pay for the considerable advantage of secondary altriciality. Its counterpart is our brain's voracious appetite for energy (Pontzer et al. 2016). It makes us more vulnerable, not only until adolescence but throughout life, during which, it is assumed, anomalies in neuronal reconfiguration contribute to the development of certain neurological pathologies (Greenhill et al. 2015). Finally, one risk associated with the "cultural recycling of cortical maps" (Dehaene and Cohen 2007) is rarely noted: if this recycling can produce the best, it can also produce the worst, depending on the nature of the cultural matrix in which individuals are caught (Candau 2017).
For example, the social and cultural choice to develop polluting industries can cause neurodegenerative diseases and various mental disorders (Underwood 2017), notably in children (Bennett et al. 2016), a phenomenon that is accentuated when combined with early social adversity (Stein et al. 2016). Still in the economic register, the implementation of policies that impoverish populations can affect children's intellectual development (Luby et al. 2013), a key message of the World Development Report 2015 being that poverty is a "cognitive tax." A final example: Voigtländer and Voth (2015) have shown that Germans born in the 1920s and 1930s display a degree of antisemitism two to three times higher than that of their compatriots born before or after that period. Far more often than other Germans, they picture Jews as "a population that has too much influence in the world" or "that is responsible for its own persecution." This is the consequence of the Nazi indoctrination they underwent throughout their childhood, notably at school, in the midst of the period of secondary altriciality. In sum, secondary altriciality underlies (i) our brain's natural aptitude to become a representation of the world and (ii) a cultural focusing of that representation, under the influence of diverse cultural matrices, for better and for worse. This hyperplasticity of the brain during the altricial period gives way to a more moderate plasticity in adulthood and then declines with the approach of old age, but it never disappears completely. Consequently, far from seeing neurobiological data as constraints whose sole characteristic would be to set the limits of cultural variability, a limitation that is indisputable, we must also regard them as the very possibility of that variability.
