Journal articles on the topic 'Auditory binding'

Below are the top 50 journal articles for research on the topic 'Auditory binding.'

1

Burr, David, Ottavia Silva, Guido Marco Cicchini, Martin S. Banks, and Maria Concetta Morrone. "Temporal mechanisms of multimodal binding." Proceedings of the Royal Society B: Biological Sciences 276, no. 1663 (February 25, 2009): 1761–69. http://dx.doi.org/10.1098/rspb.2008.1899.

Abstract:
The simultaneity of signals from different senses—such as vision and audition—is a useful cue for determining whether those signals arose from one environmental source or from more than one. To understand better the sensory mechanisms for assessing simultaneity, we measured the discrimination thresholds for time intervals marked by auditory, visual or auditory–visual stimuli, as a function of the base interval. For all conditions, both unimodal and cross-modal, the thresholds followed a characteristic ‘dipper function’ in which the lowest thresholds occurred when discriminating against a non-zero interval. The base interval yielding the lowest threshold was roughly equal to the threshold for discriminating asynchronous from synchronous presentations. Those lowest thresholds occurred at approximately 5, 15 and 75 ms for auditory, visual and auditory–visual stimuli, respectively. Thus, the mechanisms mediating performance with cross-modal stimuli are considerably slower than the mechanisms mediating performance within a particular sense. We developed a simple model with temporal filters of different time constants and showed that the model produces discrimination functions similar to the ones we observed in humans. Both for processing within a single sense, and for processing across senses, temporal perception is affected by the properties of temporal filters, the outputs of which are used to estimate time offsets, correlations between signals, and more.
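The 'dipper function' described in the abstract above can be illustrated numerically. The sketch below uses a generic expansive-then-compressive transducer nonlinearity rather than the authors' temporal-filter model; the function `transducer`, its constants, and the unit criterion are illustrative assumptions only.

```python
# Toy illustration of how a 'dipper function' can arise in interval
# discrimination. Assumption: a transducer that is expansive for short
# intervals and compressive for long ones (not the paper's filter model).

def transducer(t_ms: float) -> float:
    """Internal response to an interval of t_ms milliseconds."""
    return 100.0 * t_ms**2 / (t_ms**2 + 50.0**2)

def threshold(base_ms: float, criterion: float = 1.0) -> float:
    """Smallest increment d with f(base + d) - f(base) = criterion,
    found by bisection (criterion = one unit of internal noise)."""
    lo, hi = 0.0, 1000.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if transducer(base_ms + mid) - transducer(base_ms) < criterion:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for base in (0, 20, 200):
    print(f"base {base:3d} ms -> threshold {threshold(base):6.2f} ms")
# Thresholds first fall, then rise with the base interval: a dipper.
```

The dip appears because discrimination is easiest where the transducer is steepest, i.e., against a small non-zero base interval, matching the qualitative shape reported in the abstract.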
2

Sanders, Mark C., Nai-Yuan N. Chang, Meghan M. Hiss, Rosalie M. Uchanski, and Timothy E. Hullar. "Temporal binding of auditory and rotational stimuli." Experimental Brain Research 210, no. 3-4 (February 2, 2011): 539–47. http://dx.doi.org/10.1007/s00221-011-2554-x.

3

Chernyshev, Boris V., Dmitri V. Bryzgalov, Ivan E. Lazarev, and Elena G. Chernysheva. "Distributed feature binding in the auditory modality." NeuroReport 27, no. 11 (August 2016): 837–42. http://dx.doi.org/10.1097/wnr.0000000000000623.

4

Bell, Raoul, Jan P. Röer, and Axel Buchner. "Irrelevant Speech Disrupts Item-Context Binding." Experimental Psychology 60, no. 5 (June 1, 2013): 376–84. http://dx.doi.org/10.1027/1618-3169/a000212.

Abstract:
The present study examines the effects of irrelevant speech on immediate memory. Previous research led to the suggestion that auditory distractors particularly impair memory for serial order. These findings were explained by assuming that irrelevant speech disrupts the formation and maintenance of links between adjacent items in a to-be-remembered sequence, resulting in a loss of order information. Here we propose a more general explanation of these findings by claiming that the capacity to form and maintain item-context bindings is generally impaired by the presence of auditory distractors. The results of Experiment 1 show that memory for the association between an item and its background color is drastically impaired by irrelevant speech, just as memory for the association between an item and its serial position. In Experiment 2 it was examined whether the disrupting effects of irrelevant sound are limited to memory for item-context associations or whether item memory is also affected by the auditory distractors. The results revealed that irrelevant speech disrupts both item memory and item-context binding. The results suggest that the effects of irrelevant sound on immediate memory are more general than previously assumed, which has important theoretical and applied implications.
5

Bidelman, Gavin M., and Shelley T. Heath. "Enhanced temporal binding of audiovisual information in the bilingual brain." Bilingualism: Language and Cognition 22, no. 4 (July 5, 2018): 752–62. http://dx.doi.org/10.1017/s1366728918000408.

Abstract:
We asked whether bilinguals’ benefits reach beyond the auditory modality to benefit multisensory processing. We measured audiovisual integration of auditory and visual cues in monolinguals and bilinguals via the double-flash illusion where the presentation of multiple auditory stimuli concurrent with a single visual flash induces an illusory perception of multiple flashes. We varied stimulus onset asynchrony (SOA) between auditory and visual cues to measure the “temporal binding window” where listeners fuse a single percept. Bilinguals showed faster responses and were less susceptible to the double-flash illusion than monolinguals. Moreover, monolinguals showed poorer sensitivity in AV processing compared to bilinguals. The width of bilinguals’ AV temporal integration window was narrower than monolinguals’ for both leading and lagging SOAs (Biling.: −65 to 112 ms; Mono.: −193 to 112 ms). Our results suggest the plasticity afforded by speaking multiple languages enhances multisensory integration and audiovisual binding in the bilingual brain.
6

Wilbiks, Jonathan M. P., and Benjamin Dyson. "Effects of within-modal congruency, cross-modal congruency and temporal asynchrony on the perception of perceived audio–visual distance." Seeing and Perceiving 25 (2012): 178. http://dx.doi.org/10.1163/187847612x648080.

Abstract:
The factors we use to determine whether information from separate modalities should be assigned to the same source include task demands, the spatial and temporal coincidence of the composite signals, and, whether the signals are congruent with one another. In a series of experiments, we examined how temporal asynchrony and congruency interact in a competitive binding situation. Across a series of experiments, participants assigned a temporally roving auditory stimulus to competing primary or secondary visual anchors (VAV), or, a temporally roving visual stimulus to competing primary or secondary auditory anchors (AVA), based on causality. Congruency was defined in terms of simulated distance both within- and between-modalities (visual: small, auditory: quiet = far; visual: large, auditory: loud = near). Strong temporal effects were revealed, with differences between VAV and AVA conditions reflecting natural auditory lag tolerance for binding. During VAV conditions, binding was influenced only by visual congruency. During AVA conditions, binding was influenced by audio–visual congruency. These differences did not seem to be due to the relative discriminability between visual and auditory magnitude. The data reiterate the dominance of audition in the time domain (showing stronger temporal effects), the dominance of vision in the spatial domain (showing stronger congruency effects), and, the assistance of domain-inappropriate modalities by domain-appropriate modalities. A special case of congruency in terms of visual looming will also be discussed, along with the potential alerting properties of high magnitude stimuli.
7

Winkler, István, István Czigler, Elyse Sussman, János Horváth, and László Balázs. "Preattentive Binding of Auditory and Visual Stimulus Features." Journal of Cognitive Neuroscience 17, no. 2 (February 2005): 320–39. http://dx.doi.org/10.1162/0898929053124866.

Abstract:
We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli) and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding, however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.
8

Shisler, Rebecca. "Aphasia and auditory extinction: Preliminary evidence of binding." Aphasiology 19, no. 7 (July 2005): 633–50. http://dx.doi.org/10.1080/02687030444000930.

9

Terrence, Peter I., Justin F. Morgan, and Richard D. Gilson. "Dynamic Frequencies and Perceptual Binding in a Combined Auditory-Tactile Task." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 19 (October 2007): 1336–40. http://dx.doi.org/10.1177/154193120705101913.

Abstract:
Two experiments examined the potential effect of perceptual binding in the auditory and tactile modalities for one stimulus parameter: dynamic frequency sweeps versus static frequencies. Experiment 1 established baseline performance for identifying a single stimulus presentation. Experiment 2 examined the effects of presenting simultaneous auditory and tactile signals while attempting to focus on a single sensory channel. Experiment 1 demonstrates that identifying the frequency sweep or static signal is relatively easy in both modalities. Experiment 2 shows the unidirectional domination of auditory signals over tactile, irrespective of sensory focus modality. Overall findings and the implications for design and directions for future research are discussed.
10

Wilbiks, Jonathan M. P., and Benjamin J. Dyson. "The Influence of Previous Environmental History on Audio-Visual Binding Occurs during Visual-Weighted but not Auditory-Weighted Environments." Multisensory Research 26, no. 6 (2013): 561–68. http://dx.doi.org/10.1163/22134808-00002432.

Abstract:
Although there is substantial evidence for the adjustment of audio-visual binding as a function of the distribution of audio-visual lag, it is not currently clear whether adjustment can take place as a function of task demands. To address this, participants took part in competitive binding paradigms whereby a temporally roving auditory stimulus was assigned to one of two visual anchors (visual-weighted; VAV), or, a temporally roving visual stimulus was assigned to one of two auditory anchors (auditory-weighted; AVA). Using a blocked design it was possible to assess the malleability of audio-visual binding as a function of both the repetition and change of paradigm. VAV performance showed sensitivity to preceding contexts, echoing previous ‘repulsive’ effects shown in recalibration literature. AVA performance showed no sensitivity to preceding contexts. Despite the use of identical equi-probable temporal distributions in both paradigms, data support the contention that visual contexts may be more sensitive than auditory contexts in being influenced by previous environmental history of temporal events.
11

Atilgan, Huriye, Stephen M. Town, Katherine C. Wood, Gareth P. Jones, Ross K. Maddox, Adrian K. C. Lee, and Jennifer K. Bizley. "Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding." Neuron 97, no. 3 (February 2018): 640–55. http://dx.doi.org/10.1016/j.neuron.2017.12.034.

12

Wadhwa, Surbhi, Ekta Masand Bhavnani, and Shashi Wadhwa. "Expression of calcium-binding proteins in the chick auditory nuclei following prenatal auditory stimulation." Journal of the Anatomical Society of India 62, no. 1 (June 2013): 1–5. http://dx.doi.org/10.1016/s0003-2778(13)80003-2.

13

Gupta, Saumya, and Mark A. Bee. "Treefrogs exploit temporal coherence to form perceptual objects of communication signals." Biology Letters 16, no. 9 (September 2020): 20200573. http://dx.doi.org/10.1098/rsbl.2020.0573.

Abstract:
For many animals, navigating their environment requires an ability to organize continuous streams of sensory input into discrete ‘perceptual objects’ that correspond to physical entities in visual and auditory scenes. The human visual and auditory systems follow several Gestalt laws of perceptual organization to bind constituent features into coherent perceptual objects. A largely unexplored question is whether nonhuman animals follow similar Gestalt laws in perceiving behaviourally relevant stimuli, such as communication signals. We used females of Cope's grey treefrog (Hyla chrysoscelis) to test the hypothesis that temporal coherence—a powerful Gestalt principle in human auditory scene analysis—promotes perceptual binding in forming auditory objects of species-typical vocalizations. According to the principle of temporal coherence, sound elements that start and stop at the same time or that modulate coherently over time are likely to become bound together into the same auditory object. We found that the natural temporal coherence between two spectral components of advertisement calls promotes their perceptual binding into auditory objects of advertisement calls. Our findings confirm the broad ecological validity of temporal coherence as a Gestalt law of auditory perceptual organization guiding the formation of biologically relevant perceptual objects in animal behaviour.
14

Golden, Erin J., Ana Benito-Gonzalez, and Angelika Doetzlhofer. "The RNA-binding protein LIN28B regulates developmental timing in the mammalian cochlea." Proceedings of the National Academy of Sciences 112, no. 29 (July 2, 2015): E3864–E3873. http://dx.doi.org/10.1073/pnas.1501077112.

Abstract:
Proper tissue development requires strict coordination of proliferation, growth, and differentiation. Strict coordination is particularly important for the auditory sensory epithelium, where deviations from the normal spatial and temporal pattern of auditory progenitor cell (prosensory cell) proliferation and differentiation result in abnormal cellular organization and, thus, auditory dysfunction. The molecular mechanisms involved in the timing and coordination of auditory prosensory proliferation and differentiation are poorly understood. Here we identify the RNA-binding protein LIN28B as a critical regulator of developmental timing in the murine cochlea. We show that Lin28b and its opposing let-7 miRNAs are differentially expressed in the auditory sensory lineage, with Lin28b being highly expressed in undifferentiated prosensory cells and let-7 miRNAs being highly expressed in their progeny—hair cells (HCs) and supporting cells (SCs). Using recently developed transgenic mouse models for LIN28B and let-7g, we demonstrate that prolonged LIN28B expression delays prosensory cell cycle withdrawal and differentiation, resulting in HC and SC patterning and maturation defects. Surprisingly, let-7g overexpression, although capable of inducing premature prosensory cell cycle exit, failed to induce premature HC differentiation, suggesting that LIN28B’s functional role in the timing of differentiation uses let-7 independent mechanisms. Finally, we demonstrate that overexpression of LIN28B or let-7g can significantly alter the postnatal production of HCs in response to Notch inhibition; LIN28B has a positive effect on HC production, whereas let-7 antagonizes this process. Together, these results implicate a key role for the LIN28B/let-7 axis in regulating postnatal SC plasticity.
15

Bharadwaj, Hari, Fahimeh Mamashli, Sheraz Khan, Ravinderjit Singh, Robert M. Joseph, Ainsley Losh, Stephanie Pawlyszyn, et al. "Cortical signatures of auditory object binding in children with autism spectrum disorder are anomalous in concordance with behavior and diagnosis." PLOS Biology 20, no. 2 (February 15, 2022): e3001541. http://dx.doi.org/10.1371/journal.pbio.3001541.

Abstract:
Organizing sensory information into coherent perceptual objects is fundamental to everyday perception and communication. In the visual domain, indirect evidence from cortical responses suggests that children with autism spectrum disorder (ASD) have anomalous figure–ground segregation. While auditory processing abnormalities are common in ASD, especially in environments with multiple sound sources, to date, the question of scene segregation in ASD has not been directly investigated in audition. Using magnetoencephalography, we measured cortical responses to unattended (passively experienced) auditory stimuli while parametrically manipulating the degree of temporal coherence that facilitates auditory figure–ground segregation. Results from 21 children with ASD (aged 7–17 years) and 26 age- and IQ-matched typically developing children provide evidence that children with ASD show anomalous growth of cortical neural responses with increasing temporal coherence of the auditory figure. The documented neurophysiological abnormalities did not depend on age, and were reflected both in the response evoked by changes in temporal coherence of the auditory scene and in the associated induced gamma rhythms. Furthermore, the individual neural measures were predictive of diagnosis (83% accuracy) and also correlated with behavioral measures of ASD severity and auditory processing abnormalities. These findings offer new insight into the neural mechanisms underlying auditory perceptual deficits and sensory overload in ASD, and suggest that temporal-coherence-based auditory scene analysis and suprathreshold processing of coherent auditory objects may be atypical in ASD.
16

Oh, Yonghee, Kayla Borges, Kelli Meyers, Altemis Lopez, Shelbey Spratlin, and Elizabeth Fisch. "Temporal binding window between three different sensory modalities: Auditory, visual, and tactile." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A221. http://dx.doi.org/10.1121/10.0011119.

Abstract:
Multisensory integration is crucial to perceptual understanding, but cues from multiple modalities do not necessarily arrive at the brain simultaneously. The window of time in which sensory inputs can be interpreted as simultaneous is termed the “temporal binding window” (TBW). Evidence suggests unimodal temporal discrimination is altered in a series of neurodevelopmental disorders but its relationship to perceptual timing of multisensory events (auditory, visual, and tactile) is not known, even in healthy populations. The purpose of this study was to characterize temporal integration of sensory modalities within a young healthy population. Twenty-five adults participated in TBW measurements with three different modalities. 100-ms stimuli (auditory: beep; visual: flash; tactile: vibration) were presented with various onset asynchrony (-500 to 500 ms) between two stimuli (AV: auditory-visual; AT: auditory-tactile; VT: visual-tactile). Participants were instructed to select “simultaneous” or “asynchronous” responses. Results show that the TBW for the AV condition ranged from 222 to 689 ms (mean ± SD = 418 ± 103), the AT condition ranged from 164 to 666 ms (376 ± 130), and the VT condition ranged from 70 to 751 ms (279 ± 131). These findings suggest that deficits in multisensory temporal function may factor into perceptual and cognitive weaknesses that characterize clinical disorders, raising the possibility that it may be a contributing pathologic factor in these conditions.
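A temporal binding window like those reported above is typically derived from the simultaneity-judgment curve. The sketch below shows one common criterion-based estimate: the width of the SOA range where "simultaneous" responses exceed 50%. The response proportions, the `tbw_width` helper, and the 50% criterion are illustrative assumptions, not the study's data or method.

```python
# Estimate a temporal binding window (TBW) as the width of the SOA region
# where the proportion of "simultaneous" responses stays at or above a
# criterion, interpolating the crossing points linearly on each side.
# The data below are made up for illustration only.

def tbw_width(soas, p_simult, criterion=0.5):
    """Width (ms) of the SOA region where p >= criterion."""
    def crossing(i, j):
        # SOA at which p == criterion, interpolated between samples i and j
        p0, p1 = p_simult[i], p_simult[j]
        t0, t1 = soas[i], soas[j]
        return t0 + (criterion - p0) * (t1 - t0) / (p1 - p0)

    above = [k for k, p in enumerate(p_simult) if p >= criterion]
    left = crossing(above[0] - 1, above[0]) if above[0] > 0 else soas[0]
    last = above[-1]
    right = crossing(last, last + 1) if last < len(soas) - 1 else soas[-1]
    return right - left

soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]  # ms; audio-lead < 0
p_sim = [0.05, 0.10, 0.40, 0.80, 0.95, 0.85, 0.55, 0.20, 0.10]
print(f"estimated TBW = {tbw_width(soas, p_sim):.0f} ms")
```

With these hypothetical proportions the estimate lands near 390 ms, in the same general range as the AV windows reported in the abstract; asymmetric windows (different audio-lead and audio-lag crossings) fall out of the same computation.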
17

Stecker, G. Christopher. "Temporal binding of auditory spatial information across dynamic binaural events." Attention, Perception, & Psychophysics 80, no. 1 (October 30, 2017): 14–20. http://dx.doi.org/10.3758/s13414-017-1436-0.

18

Maybery, Murray T., Peter J. Clissa, Fabrice B. R. Parmentier, Doris Leung, Grefin Harsa, Allison M. Fox, and Dylan M. Jones. "Binding of verbal and spatial features in auditory working memory." Journal of Memory and Language 61, no. 1 (July 2009): 112–33. http://dx.doi.org/10.1016/j.jml.2009.03.001.

19

Thompson, Glenn C., Ann M. Thompson, Kennon M. Garrett, and B. Hill Britton. "Serotonin and Serotonin Receptors in the Central Auditory System." Otolaryngology–Head and Neck Surgery 110, no. 1 (January 1994): 93–102. http://dx.doi.org/10.1177/019459989411000111.

Abstract:
Immunohistochemical and ligand-binding techniques were used to visualize the neurotransmitter serotonin and one of its receptors, the 5-HT1A subtype, in auditory nuclei of the brainstem. Serotonergic fibers and terminal endings were found in all auditory nuclei extending from the cochlear nucleus to the inferior colliculus, including the superior olivary complex and the nuclei of the lateral lemniscus. The density of the innervation varied between and within each nucleus. All serotonergic cell bodies were located outside the auditory nuclei. The 5-HT1A receptor subtype was found in the cochlear nucleus as well as in the inferior colliculus. With no serotonergic cell bodies present in the auditory nuclei, the present neuroanatomic and neurochemical findings support behavioral and neurophysiologic findings that the serotonergic system may modulate central auditory processing.
20

Horikawa, Junsei, Atsushi Tanahashi, and Nobuo Suga. "After-discharges in the auditory cortex of the mustached bat: No oscillatory discharges for binding auditory information." Hearing Research 76, no. 1-2 (June 1994): 45–52. http://dx.doi.org/10.1016/0378-5955(94)90085-x.

21

Seibold, Julia C., Sophie Nolden, Josefa Oberem, Janina Fels, and Iring Koch. "The binding of an auditory target location to a judgement: A two-component switching approach." Quarterly Journal of Experimental Psychology 72, no. 8 (February 15, 2019): 2056–67. http://dx.doi.org/10.1177/1747021819829422.

Abstract:
In a two-component switching paradigm, in which participants switched between two auditory attention selection criteria (attention component: left vs. right ear) and two judgements (judgement component: number vs. letter judgement), we found high judgement switch costs in attention criterion repetitions, but low costs in attention criterion switches. This finding showed an interaction of components. Previous two-component switching studies observed differently emphasised interaction patterns. In the present study, we explored whether the strength of the interaction pattern reflects the strength of the binding of target location and judgement. Specifically, we investigated whether exogenous target location cueing led to weaker binding than endogenous cueing, and whether preparation for ear selection could influence the binding. Attention switches with auditory exogenous target location cues did not affect the component interaction pattern, whereas a prolonged preparation interval led to a more emphasised pattern. Binding between target location and judgement may therefore be rather automatic and may not necessarily require concurrent component processing. Sufficient time for target location switches with long preparation time may activate the previous trial’s episode or facilitate switches of the subsequent judgement.
22

Du, Haibo, Rui Ren, Panpan Chen, Zhigang Xu, and Yanfei Wang. "Identification of Binding Partners of Deafness-Related Protein PDZD7." Neural Plasticity 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/2062346.

Abstract:
PDZD7 is an important deafness gene, whose mutations are associated with syndromic and nonsyndromic hearing loss. PDZD7 contains multiple PDZ domains that are essential for organizing various proteins into protein complex. Several PDZD7-binding proteins have been identified, including usherin, ADGRV1, whirlin, harmonin, SANS, and MYO7A, all belonging to USH proteins. Here, we report the identification of novel PDZD7-binding partners through yeast two-hybrid screening using the first two PDZ domains of PDZD7 as bait. Eleven proteins were identified, most of which have not been reported as PDZD7-binding partners before. Among the identified proteins, ADGRV1, gelsolin, and β-catenin have been shown to play important roles in hearing, whereas the functions of other proteins in the inner ear remain elusive. We confirmed the expression of one candidate PDZD7-binding protein, CADM1, in the mouse inner ear and evaluated the auditory function of Cadm1 knockout mice by performing auditory brainstem response (ABR) measurement. Unexpectedly, Cadm1 knockout mice show normal hearing threshold, which might be explained by the possible compensation by its homologs that are also expressed in the inner ear. Taken together, our work identified several novel PDZD7-binding proteins, which will help us to further understand the role of PDZD7 in hearing transduction.
23

Berding, Georg, and Thomas Lenarz. "Imaging in hearing using radiotracers." Current Directions in Biomedical Engineering 3, no. 2 (September 7, 2017): 187–90. http://dx.doi.org/10.1515/cdbme-2017-0039.

Abstract:
Radiotracers offer unique options for brain imaging of functional and molecular processes related to hearing. Such imaging can be applied in a broad spectrum of situations from preclinical research to clinical patient care. Functional imaging to assess activation in brain regions and networks involved in auditory processing uses markers of blood flow or energy metabolism in well-defined conditions with and without auditory stimulation. Molecular markers can be used in hearing research, for example, to study changes in inhibitory neurotransmission systems related to hearing loss. For imaging, either positron emission tomography (PET) or single-photon emission computed tomography (SPECT) is employed. Data analysis can encompass voxel-wise statistical analysis of activation and calculation of quantitative parameters like receptor binding-potentials based on bio-kinetic modeling. Functional imaging has been frequently used in the context of auditory implantation. Before implantation it aims to assess intactness of the central auditory pathway and prognosis. After implantation it is used to improve understanding of the outcome with respect to auditory function and finally speech understanding, e.g. by measuring correlates of central auditory processing and neuroplasticity.
24

Antusch, S., R. Custers, H. Marien, and H. Aarts. "Studying the sense of agency in the absence of motor movement: an investigation into temporal binding of tactile sensations and auditory effects." Experimental Brain Research 239, no. 6 (April 7, 2021): 1795–806. http://dx.doi.org/10.1007/s00221-021-06087-8.

Abstract:
People form coherent representations of goal-directed actions. Such agency experiences of intentional action are reflected by a shift in temporal perception: self-generated motor movements and subsequent sensory effects are perceived to occur closer together in time—a phenomenon termed intentional binding. Building on recent research suggesting that temporal binding occurs without intentionally performing actions, we further examined whether such perceptual compression occurs when motor action is fully absent. In three experiments, we used a novel sensory-based adaptation of the Libet clock paradigm to assess how a brief tactile sensation on the index finger and a resulting auditory stimulus perceptually bind together in time. Findings revealed robust temporal repulsion (instead of binding) between tactile sensation and auditory effect. Temporal repulsion was attenuated when participants could anticipate the identity and temporal onset (two crucial components of intentional action) of the tactile sensation. These findings are briefly discussed in the context of differences between intentional movement and anticipated bodily sensations in shaping action coherence and agentic experiences.
25

Hara, Yusuke, and Yoshiki Kashimori. "Neural mechanism of binding of elementary sound components in auditory cortex." Neuroscience Research 71 (September 2011): e353. http://dx.doi.org/10.1016/j.neures.2011.07.1548.

26

Kulesza, R. J. "Characterization of human auditory brainstem circuits by calcium-binding protein immunohistochemistry." Neuroscience 258 (January 2014): 318–31. http://dx.doi.org/10.1016/j.neuroscience.2013.11.035.

27

Molinari, M., M. E. Dell'Anna, E. Rausell, M. G. Leggio, T. Hashikawa, and E. G. Jones. "Auditory thalamocortical pathways defined in monkeys by calcium-binding protein immunoreactivity." Journal of Comparative Neurology 362, no. 2 (November 13, 1995): 171–94. http://dx.doi.org/10.1002/cne.903620203.

28

Yan, Kai, Ye-Zhong Tang, and Catherine E. Carr. "Calcium-binding protein immunoreactivity characterizes the auditory system of Gekko gecko." Journal of Comparative Neurology 518, no. 17 (May 20, 2010): 3409–26. http://dx.doi.org/10.1002/cne.22428.

29

Yan, Kai, Ye-Zhong Tang, and Catherine E. Carr. "Calcium-binding protein immunoreactivity characterizes the auditory system of Gekko gecko." Journal of Comparative Neurology 518, no. 17 (June 30, 2010): spc1. http://dx.doi.org/10.1002/cne.22454.

30

Lindborg, Alma, and Tobias S. Andersen. "Bayesian binding and fusion models explain illusion and enhancement effects in audiovisual speech perception." PLOS ONE 16, no. 2 (February 19, 2021): e0246986. http://dx.doi.org/10.1371/journal.pone.0246986.

Abstract:
Speech is perceived with both the ears and the eyes. Adding congruent visual speech improves the perception of a faint auditory speech stimulus, whereas adding incongruent visual speech can alter the perception of the utterance. The latter phenomenon is the case of the McGurk illusion, where an auditory stimulus such as e.g. “ba” dubbed onto a visual stimulus such as “ga” produces the illusion of hearing “da”. Bayesian models of multisensory perception suggest that both the enhancement and the illusion case can be described as a two-step process of binding (informed by prior knowledge) and fusion (informed by the information reliability of each sensory cue). However, there is to date no study which has accounted for how they each contribute to audiovisual speech perception. In this study, we expose subjects to both congruent and incongruent audiovisual speech, manipulating the binding and the fusion stages simultaneously. This is done by varying both temporal offset (binding) and auditory and visual signal-to-noise ratio (fusion). We fit two Bayesian models to the behavioural data and show that they can both account for the enhancement effect in congruent audiovisual speech, as well as the McGurk illusion. This modelling approach allows us to disentangle the effects of binding and fusion on behavioural responses. Moreover, we find that these models have greater predictive power than a forced fusion model. This study provides a systematic and quantitative approach to measuring audiovisual integration in the perception of the McGurk illusion as well as congruent audiovisual speech, which we hope will inform future work on audiovisual speech perception.
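The two-step scheme this abstract describes—binding governed by a causal prior, then fusion weighted by cue reliability—can be sketched numerically. The following is a minimal illustration of that general Bayesian scheme, not the authors' fitted model; the function name, the model-averaging decision rule, and all parameter values are assumptions for demonstration only:

```python
def fuse_av(mu_a, var_a, mu_v, var_v, p_common):
    """Illustrative binding-then-fusion estimate for one audiovisual trial.

    mu_a, var_a : auditory cue estimate and its variance (inverse reliability)
    mu_v, var_v : visual cue estimate and its variance
    p_common    : prior probability that the two cues share a common cause
    """
    # Fusion step: reliability-weighted average, optimal if the cues
    # genuinely share a single cause
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    fused = w_a * mu_a + (1.0 - w_a) * mu_v
    # Binding step: mix the fused and unisensory auditory estimates by the
    # causal prior (model averaging; other decision rules are possible)
    return p_common * fused + (1.0 - p_common) * mu_a

# A reliable visual cue pulls the auditory estimate when binding is likely
est = fuse_av(mu_a=0.0, var_a=4.0, mu_v=10.0, var_v=1.0, p_common=0.8)
```

With a strong causal prior the more reliable visual cue dominates the percept; setting `p_common` to zero recovers the unisensory auditory estimate, which is how such models capture unbinding of incongruent streams.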
31

Li, Jun, Ting Zhang, Aarthi Ramakrishnan, Bernd Fritzsch, Jinshu Xu, Elaine Y. M. Wong, Yong-Hwee Eddie Loh, Jianqiang Ding, Li Shen, and Pin-Xian Xu. "Dynamic changes in cis-regulatory occupancy by Six1 and its cooperative interactions with distinct cofactors drive lineage-specific gene expression programs during progressive differentiation of the auditory sensory epithelium." Nucleic Acids Research 48, no. 6 (January 20, 2020): 2880–96. http://dx.doi.org/10.1093/nar/gkaa012.

Abstract:
The transcription factor Six1 is essential for induction of sensory cell fate and formation of auditory sensory epithelium, but how it activates gene expression programs to generate distinct cell-types remains unknown. Here, we perform genome-wide characterization of Six1 binding at different stages of auditory sensory epithelium development and find that Six1-binding to cis-regulatory elements changes dramatically at cell-state transitions. Intriguingly, Six1 pre-occupies enhancers of cell-type-specific regulators and effectors before their expression. We demonstrate in-vivo cell-type-specific activity of Six1-bound novel enhancers of Pbx1, Fgf8, Dusp6, Vangl2, the hair-cell master regulator Atoh1 and a cascade of Atoh1’s downstream factors, including Pou4f3 and Gfi1. A subset of Six1-bound sites carry consensus-sequences for its downstream factors, including Atoh1, Gfi1, Pou4f3, Gata3 and Pbx1, all of which physically interact with Six1. Motif analysis identifies RFX/X-box as one of the most significantly enriched motifs in Six1-bound sites, and we demonstrate that Six1-RFX proteins cooperatively regulate gene expression through binding to SIX:RFX-motifs. Six1 targets a wide range of hair-bundle regulators and late Six1 deletion disrupts hair-bundle polarity. This study provides a mechanistic understanding of how Six1 cooperates with distinct cofactors in feedforward loops to control lineage-specific gene expression programs during progressive differentiation of the auditory sensory epithelium.
32

Tong, Jonathan, Lux Li, Patrick Bruns, and Brigitte Röder. "Crossmodal associations modulate multisensory spatial integration." Attention, Perception, & Psychophysics 82, no. 7 (July 5, 2020): 3490–506. http://dx.doi.org/10.3758/s13414-020-02083-2.

Abstract:
According to the Bayesian framework of multisensory integration, audiovisual stimuli associated with a stronger prior belief that they share a common cause (i.e., causal prior) are predicted to result in a greater degree of perceptual binding and therefore greater audiovisual integration. In the present psychophysical study, we systematically manipulated the causal prior while keeping sensory evidence constant. We paired auditory and visual stimuli during an association phase to be spatiotemporally either congruent or incongruent, with the goal of driving the causal prior in opposite directions for different audiovisual pairs. Following this association phase, every pairwise combination of the auditory and visual stimuli was tested in a typical ventriloquism-effect (VE) paradigm. The size of the VE (i.e., the shift of auditory localization towards the spatially discrepant visual stimulus) indicated the degree of multisensory integration. Results showed that exposure to an audiovisual pairing as spatiotemporally congruent compared to incongruent resulted in a larger subsequent VE (Experiment 1). This effect was further confirmed in a second VE paradigm, where the congruent and the incongruent visual stimuli flanked the auditory stimulus, and a VE in the direction of the congruent visual stimulus was shown (Experiment 2). Since the unisensory reliabilities for the auditory or visual components did not change after the association phase, the observed effects are likely due to changes in multisensory binding by association learning. As suggested by Bayesian theories of multisensory processing, our findings support the existence of crossmodal causal priors that are flexibly shaped by experience in a changing world.
33

Hanna-Pladdy, Brenda, Hyun Choi, Brian Herman, and Spenser Haffey. "Audiovisual Lexical Retrieval Deficits Following Left Hemisphere Stroke." Brain Sciences 8, no. 12 (November 28, 2018): 206. http://dx.doi.org/10.3390/brainsci8120206.

Abstract:
Binding sensory features of multiple modalities of what we hear and see allows formation of a coherent percept to access semantics. Previous work on object naming has focused on visual confrontation naming with limited research in nonverbal auditory or multisensory processing. To investigate neural substrates and sensory effects of lexical retrieval, we evaluated healthy adults (n = 118) and left hemisphere stroke patients (LHD, n = 42) in naming manipulable objects across auditory (sound), visual (picture), and multisensory (audiovisual) conditions. LHD patients were divided into cortical, cortical–subcortical, or subcortical lesions (CO, CO–SC, SC), and specific lesion location investigated in a predictive model. Subjects produced lower accuracy in auditory naming relative to other conditions. Controls demonstrated greater naming accuracy and faster reaction times across all conditions compared to LHD patients. Naming across conditions was most severely impaired in CO patients. Both auditory and visual naming accuracy were impacted by temporal lobe involvement, although auditory naming was sensitive to lesions extending subcortically. Only controls demonstrated significant improvement over visual naming with the addition of auditory cues (i.e., multisensory condition). Results support overlapping neural networks for visual and auditory modalities related to semantic integration in lexical retrieval and temporal lobe involvement, while multisensory integration was impacted by both occipital and temporal lobe lesion involvement. The findings support modality specificity in naming and suggest that auditory naming is mediated by a distributed cortical–subcortical network overlapping with networks mediating spatiotemporal aspects of skilled movements producing sound.
34

Tschacher, Wolfgang, Fabian Ramseyer, and Claudia Bergomi. "The Subjective Present and Its Modulation in Clinical Contexts." Timing & Time Perception 1, no. 2 (2013): 239–59. http://dx.doi.org/10.1163/22134468-00002013.

Abstract:
Time is a basic dimension in psychology, underlying behavior and experience. Timing and time perception constitute implicit processes that are often inaccessible to the individual person. Research in this field has shown that timing is involved in many areas of clinical significance. In the projects presented here, we combine timing with seemingly different fields of research, such as psychopathology, perceptual grouping, and embodied cognition. Focusing on the time scale of the subjective present, we report findings from three different clinical studies: (1) We studied perceived causality in schizophrenia patients, finding that perceptual grouping (‘binding’, ‘Gestalt formation’), which leads to visual causality perceptions, did not distinguish between patients and healthy controls. Patients however did integrate context (provided by the temporal distribution of auditory context stimuli) less into perceptions, in significant contrast to controls. This is consistent with reports of higher inaccuracy in schizophrenia patients’ temporal processing. (2) In a project on auditory Gestalt perception we investigated auditory perceptual grouping in schizophrenia patients. The mean dwell time was positively related to how much patients were prone to auditory hallucinations. Dwell times of auditory Gestalts may be regarded as operationalizations of the subjective present; findings thus suggested that patients with hallucinations had a shorter present. (3) The movement correlations of interacting individuals were used to study the non-verbal synchrony between therapist and patient in psychotherapy sessions. We operationalized the duration of an embodied ‘social present’ by the statistical significance of such associations, finding a window of roughly 5.7 seconds in conversing dyads. We discuss that temporal scales of nowness may be modifiable, e.g., by mindfulness. This yields promising goals for future research on timing in the clinical context: psychotherapeutic techniques may alter binding processes, hence the subjective present of individuals, and may affect the social present in therapeutic interactions.
35

Bharadwaj, Hari M., Sheraz Khan, Matti Hämäläinen, and Tal Kenet. "Electrophysiological correlates of auditory object binding with application to autism spectrum disorders." Journal of the Acoustical Society of America 140, no. 4 (October 2016): 3045. http://dx.doi.org/10.1121/1.4969457.

36

Desgent, Sébastien, Denis Boire, and Maurice Ptito. "Distribution of calcium binding proteins in visual and auditory cortices of hamsters." Experimental Brain Research 163, no. 2 (January 26, 2005): 159–72. http://dx.doi.org/10.1007/s00221-004-2151-3.

37

Nahorna, Olha, Frédéric Berthommier, and Jean-Luc Schwartz. "Binding and unbinding the auditory and visual streams in the McGurk effect." Journal of the Acoustical Society of America 132, no. 2 (August 2012): 1061–77. http://dx.doi.org/10.1121/1.4728187.

38

Jones, E. G., M. E. Dell'Anna, M. Molinari, E. Rausell, and T. Hashikawa. "Subdivisions of macaque monkey auditory cortex revealed by calcium-binding protein immunoreactivity." Journal of Comparative Neurology 362, no. 2 (November 13, 1995): 153–70. http://dx.doi.org/10.1002/cne.903620202.

39

Caicedo, Alejandro, Christine d'Aldin, Michel Eybalin, and Jean-Luc Puel. "Temporary sensory deprivation changes calcium-binding proteins levels in the auditory brainstem." Journal of Comparative Neurology 378, no. 1 (February 3, 1997): 1–15. http://dx.doi.org/10.1002/(sici)1096-9861(19970203)378:1<1::aid-cne1>3.0.co;2-8.

40

Cusack, Rhodri. "The Intraparietal Sulcus and Perceptual Organization." Journal of Cognitive Neuroscience 17, no. 4 (April 2005): 641–51. http://dx.doi.org/10.1162/0898929053467541.

Abstract:
The structuring of the sensory scene (perceptual organization) profoundly affects what we perceive, and is of increasing clinical interest. In both vision and audition, many cues have been identified that influence perceptual organization, but only a little is known about its neural basis. Previous studies have suggested that auditory cortex may play a role in auditory perceptual organization (also called auditory stream segregation). However, these studies were limited in that they just examined auditory cortex and that the stimuli they used to generate different organizations had different physical characteristics, which per se may have led to the differences in neural response. In the current study, functional magnetic resonance imaging was used to test for an effect of perceptual organization across the whole brain. To avoid confounding physical changes to the stimuli with differences in perceptual organization, we exploited an ambiguous auditory figure that is sometimes perceived as a single auditory stream and sometimes as two streams. We found that regions in the intraparietal sulcus (IPS) showed greater activity when 2 streams were perceived rather than 1. The specific involvement of this region in perceptual organization is exciting, as there is a growing literature that suggests a role for the IPS in binding in vision, touch, and cross-modally. This evidence is discussed, and a general role proposed for regions of the IPS in structuring sensory input.
41

Kondo, Hirohito M., Anouk M. van Loon, Jun-Ichiro Kawahara, and Brian C. J. Moore. "Auditory and visual scene analysis: an overview." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160099. http://dx.doi.org/10.1098/rstb.2016.0099.

Abstract:
We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.
42

Picher, Maria Magdalena, Anna Gehrt, Sandra Meese, Aleksandra Ivanovic, Friederike Predoehl, SangYong Jung, Isabelle Schrauwen, et al. "Ca2+-binding protein 2 inhibits Ca2+-channel inactivation in mouse inner hair cells." Proceedings of the National Academy of Sciences 114, no. 9 (February 9, 2017): E1717—E1726. http://dx.doi.org/10.1073/pnas.1617533114.

Abstract:
Ca2+-binding protein 2 (CaBP2) inhibits the inactivation of heterologously expressed voltage-gated Ca2+ channels of type 1.3 (CaV1.3) and is defective in human autosomal-recessive deafness 93 (DFNB93). Here, we report a newly identified mutation in CABP2 that causes a moderate hearing impairment likely via nonsense-mediated decay of CABP2-mRNA. To study the mechanism of hearing impairment resulting from CABP2 loss of function, we disrupted Cabp2 in mice (Cabp2LacZ/LacZ). CaBP2 was expressed by cochlear hair cells, preferentially in inner hair cells (IHCs), and was lacking from the postsynaptic spiral ganglion neurons (SGNs). Cabp2LacZ/LacZ mice displayed intact cochlear amplification but impaired auditory brainstem responses. Patch-clamp recordings from Cabp2LacZ/LacZ IHCs revealed enhanced Ca2+-channel inactivation. The voltage dependence of activation and the number of Ca2+ channels appeared normal in Cabp2LacZ/LacZ mice, as were ribbon synapse counts. Recordings from single SGNs showed reduced spontaneous and sound-evoked firing rates. We propose that CaBP2 inhibits CaV1.3 Ca2+-channel inactivation, and thus sustains the availability of CaV1.3 Ca2+ channels for synaptic sound encoding. Therefore, we conclude that human deafness DFNB93 is an auditory synaptopathy.
43

Stiles, Noelle R. B., Armand R. Tanguay, and Shinsuke Shimojo. "The Dynamic Double Flash Illusion: Auditory Triggered Replay of Illusory Visual Expansion." Multisensory Research 33, no. 1 (July 1, 2020): 87–108. http://dx.doi.org/10.1163/22134808-20191392.

Abstract:
In the original double flash illusion, a visual flash (e.g., a sharp-edged disk, or uniformly filled circle) presented with two short auditory tones (beeps) is often followed by an illusory flash. The illusory flash has been previously shown to be triggered by the second auditory beep. The current study extends the double flash illusion by showing that this paradigm can not only create the illusory repeat of an on-off flash, but also trigger an illusory expansion (and in some cases a subsequent contraction) that is induced by the flash of a circular brightness gradient (gradient disk) to replay as well. The perception of the dynamic double flash illusion further supports the interpretation of the illusory flash (in the double flash illusion) as similar in its spatial and temporal properties to the perception of the real visual flash, likely by replicating the neural processes underlying the illusory expansion of the real flash. We show further that if a gradient disk (generating an illusory expansion) and a sharp-edged disk are presented simultaneously side by side with two sequential beeps, often only one visual stimulus or the other will be perceived to double flash. This indicates selectivity in auditory–visual binding, suggesting the usefulness of this paradigm as a psychophysical tool for investigating crossmodal binding phenomena.
44

Cai, Rui, Bopanna I. Kalappa, Thomas J. Brozoski, Lynne L. Ling, and Donald M. Caspary. "Is GABA neurotransmission enhanced in auditory thalamus relative to inferior colliculus?" Journal of Neurophysiology 111, no. 2 (January 15, 2014): 229–38. http://dx.doi.org/10.1152/jn.00556.2013.

Abstract:
Gamma-aminobutyric acid (GABA) is the major inhibitory neurotransmitter in the central auditory system. Sensory thalamic structures show high levels of non-desensitizing extrasynaptic GABAA receptors (GABAARs) and a reduction in the redundancy of coded information. The present study compared the inhibitory potency of GABA acting at GABAARs between the inferior colliculus (IC) and the medial geniculate body (MGB) using quantitative in vivo, in vitro, and ex vivo experimental approaches. In vivo single unit studies compared the ability of half maximal inhibitory concentrations of GABA to inhibit sound-evoked temporal responses, and found that GABA was two to three times ( P < 0.01) more potent at suppressing MGB single unit responses than IC unit responses. In vitro whole cell patch-clamp slice recordings were used to demonstrate that gaboxadol, a δ-subunit selective GABAAR agonist, was significantly more potent at evoking tonic inhibitory currents from MGB neurons than IC neurons ( P < 0.01). These electrophysiological findings were supported by an in vitro receptor binding assay which used the picrotoxin analog [3H]TBOB to assess binding in the GABAAR chloride channel. MGB GABAARs had significantly greater total open chloride channel capacity relative to GABAARs in IC ( P < 0.05) as shown by increased total [3H]TBOB binding. Finally, a comparative ex vivo measurement compared endogenous GABA levels and suggested a trend towards higher GABA concentrations in MGB than in IC. Collectively, these studies suggest that, per unit GABA, high affinity extrasynaptic and synaptic GABAARs confer a significant inhibitory GABAAR advantage to MGB neurons relative to IC neurons. This increased GABA sensitivity likely underpins the vital filtering role of auditory thalamus.
45

Gu, Shenyan, Daniel Knowland, Jose A. Matta, Min L. O’Carroll, Weston B. Davini, Madhurima Dhara, Hae-Jin Kweon, and David S. Bredt. "Hair cell α9α10 nicotinic acetylcholine receptor functional expression regulated by ligand binding and deafness gene products." Proceedings of the National Academy of Sciences 117, no. 39 (September 14, 2020): 24534–44. http://dx.doi.org/10.1073/pnas.2013762117.

Abstract:
Auditory hair cells receive olivocochlear efferent innervation, which refines tonotopic mapping, improves sound discrimination, and mitigates acoustic trauma. The olivocochlear synapse involves α9α10 nicotinic acetylcholine receptors (nAChRs), which assemble in hair cells only coincident with cholinergic innervation and do not express in recombinant mammalian cell lines. Here, genome-wide screening determined that assembly and surface expression of α9α10 require ligand binding. Ion channel function additionally demands an auxiliary subunit, which can be transmembrane inner ear (TMIE) or TMEM132e. Both of these single-pass transmembrane proteins are enriched in hair cells and underlie nonsyndromic human deafness. Inner hair cells from TMIE mutant mice show altered postsynaptic α9α10 function and retain α9α10-mediated transmission beyond the second postnatal week associated with abnormally persistent cholinergic innervation. Collectively, this study provides a mechanism to link cholinergic input with α9α10 assembly, identifies unexpected functions for human deafness genes TMIE/TMEM132e, and enables drug discovery for this elusive nAChR implicated in prevalent auditory disorders.
46

Lange, Kathrin, and Brigitte Röder. "Orienting Attention to Points in Time Improves Stimulus Processing Both within and across Modalities." Journal of Cognitive Neuroscience 18, no. 5 (May 1, 2006): 715–29. http://dx.doi.org/10.1162/jocn.2006.18.5.715.

Abstract:
Spatial attention affects the processing of stimuli of both a task-relevant and a task-irrelevant modality. The present study investigated if similar cross-modal effects exist when attention is oriented to a point in time. Short (600 msec) and long (1200 msec) empty intervals, marked by a tactile onset and an auditory or a tactile offset marker, were presented. In each block, the participants had to attend one interval and one modality. Event-related potentials (ERPs) to auditory and tactile offset markers of attended as compared to unattended intervals were characterized by an enhancement of early negative deflections of the auditory and somatosensory ERPs (audition, 100–140 msec; touch, 130–180 msec) when audition or touch was task relevant, respectively. Similar effects were found for auditory stimuli when touch was task relevant. An additional reaction time experiment revealed faster responses to both auditory and tactile stimuli at the attended as compared to the unattended point in time, irrespective of which modality was primary. Both behavioral and ERP data show that attention can be focused on a point in time, which results in a more efficient processing of auditory and tactile stimuli. The ERP data further suggest that a relative enhancement at perceptual processing stages contributes to the processing advantage for temporally attended stimuli. The existence of cross-modal effects of temporal attention underlines the importance of time as a feature for binding input across different modalities.
47

Pogarell, Oliver, Walter Koch, Nadine Schaaff, Gabriele Pöpperl, Christoph Mulert, Georg Juckel, Hans-Jürgen Möller, Ulrich Hegerl, and Klaus Tatsch. "[123I] ADAM brainstem binding correlates with the loudness dependence of auditory evoked potentials." European Archives of Psychiatry and Clinical Neuroscience 258, no. 1 (November 7, 2007): 40–47. http://dx.doi.org/10.1007/s00406-007-0760-0.

48

Ramkumar, V., R. Ravi, M. C. Wilson, T. W. Gettys, C. Whitworth, and L. P. Rybak. "Identification of A1 adenosine receptors in rat cochlea coupled to inhibition of adenylyl cyclase." American Journal of Physiology-Cell Physiology 267, no. 3 (September 1, 1994): C731—C737. http://dx.doi.org/10.1152/ajpcell.1994.267.3.c731.

Abstract:
A1 adenosine receptors (A1ARs) are found in a number of tissues in the body where their physiological roles have been identified. In the cochlea, neither the existence of these receptors nor a physiological role of adenosine has been described previously. Membranes prepared from rat cochlea demonstrated high affinity and saturable binding to N6-2-(4-amino-3-[125I]iodophenyl)ethyladenosine ([125I]APNEA), an A1AR agonist, with maximum binding capacity and dissociation constant values being 40.5 +/- 0.5 fmol/mg protein and 1.28 +/- 0.03 nM, respectively. Adenosine analogues competed for [125I]APNEA binding sites with a rank order of potency characteristic of these sites being the A1AR. The [125I]APNEA binding was significantly reduced by pertussis toxin, indicating coupling of these receptors with the Gi and/or Go proteins in cochlear membranes. Photoaffinity labeling of the receptor protein with the A1AR agonist N6-2-(4-azido-3[125I]iodophenyl)ethyladenosine showed specific labeling of a 36-kDa receptor protein. Activation of the A1AR with R-phenylisopropyladenosine (R-PIA) led to inhibition of forskolin-stimulated adenylyl cyclase activity. Amplification of reverse-transcribed RNA derived from cochlear tissue by polymerase chain reaction (using primers for the bovine A1AR) yielded a 770-bp product that hybridized to an A1AR cDNA probe on Southern blots. These data indicate the presence of an inhibitory receptor in the peripheral auditory system, which may play an important role in modulating auditory functions.
49

Malone, Alex K., Nai-Yuan N. Chang, and Timothy E. Hullar. "Age-related changes in temporal processing of vestibular stimuli." Seeing and Perceiving 25 (2012): 153. http://dx.doi.org/10.1163/187847612x647847.

Abstract:
Falls are one of the leading causes of disability in the elderly. Previous research has shown that falls may be related to changes in the temporal integration of multisensory stimuli. This study compared the temporal integration and processing of a vestibular and auditory stimulus in younger and older subjects. The vestibular stimulus consisted of a continuous sinusoidal rotational velocity delivered using a rotational chair and the auditory stimulus consisted of 5 ms of white noise presented dichotically through headphones (both at 0.5 Hz). Simultaneity was defined as perceiving the chair being at its furthest rightward or leftward trajectory at the same moment as the auditory stimulus was perceived in the contralateral ear. The temporal offset of the auditory stimulus was adjusted using a method of constant stimuli so that the auditory stimulus either led or lagged true simultaneity. 15 younger (ages 21–27) and 12 older (ages 63–89) healthy subjects were tested using a two alternative forced choice task to determine at what times they perceived the two stimuli as simultaneous. Younger subjects had a mean temporal binding window of 334 ± 37 ms (mean ± SEM) and a mean point of subjective simultaneity of 83 ± 15 ms. Older subjects had a mean TBW of 556 ± 36 ms and a mean point of subjective simultaneity of 158 ± 27. Both differences were significant indicating that older subjects have a wider temporal range over which they integrate vestibular and auditory stimuli than younger subjects. These findings were consistent upon retesting and were not due to differences in vestibular perception thresholds.
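The two quantities this abstract reports—a point of subjective simultaneity (PSS) and a temporal binding window (TBW)—are derived from method-of-constant-stimuli data. The following is a minimal sketch of how such estimates can be extracted, assuming a Gaussian-shaped simultaneity curve fit by grid-search least squares with a full-width-at-half-maximum definition of the window; the fitting method and the synthetic data are illustrative assumptions, not the study's actual analysis:

```python
import numpy as np

def fit_simultaneity_window(offsets, p_simultaneous):
    """Fit a Gaussian-shaped curve to proportion-'simultaneous' responses.

    offsets        : auditory lead/lag times in ms (negative = auditory leads)
    p_simultaneous : proportion of trials judged simultaneous at each offset
    Returns (pss, tbw): the curve peak (point of subjective simultaneity)
    and the full width at half maximum, taken here as the binding window.
    """
    offsets = np.asarray(offsets, dtype=float)
    p = np.asarray(p_simultaneous, dtype=float)
    best = None
    for mu in np.linspace(offsets.min(), offsets.max(), 201):
        for sigma in np.linspace(10.0, 500.0, 100):
            pred = np.exp(-0.5 * ((offsets - mu) / sigma) ** 2)
            err = float(np.sum((p - pred) ** 2))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    _, mu, sigma = best
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma  # Gaussian FWHM
    return mu, fwhm

# Synthetic, noiseless data peaking near +80 ms (auditory lag), loosely
# echoing the younger group's numbers above
offs = np.arange(-400, 401, 100)
probs = np.exp(-0.5 * ((offs - 80) / 140.0) ** 2)
pss, tbw = fit_simultaneity_window(offs, probs)
```

In practice such curves are fit with maximum-likelihood psychometric routines and the window criterion varies between labs, which is one reason reported TBW values differ across studies.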
50

Ross, B., A. T. Herdman, and C. Pantev. "Stimulus Induced Desynchronization of Human Auditory 40-Hz Steady-State Responses." Journal of Neurophysiology 94, no. 6 (December 2005): 4082–93. http://dx.doi.org/10.1152/jn.00469.2005.

Abstract:
The hypothesis that gamma-band oscillations are related to the representation of an environmental scene in the cerebral cortex after binding of corresponding perceptual elements is currently under discussion. One question is how the sensory system reacts to a fast change in the scene if perceptual elements are rigidly bound together. A reset of the gamma-band oscillation forced by a change in sensory input may dissolve the binding, which then would be re-established for the new sensation. We studied the reset of gamma-band oscillations on the 40-Hz auditory steady-state responses (ASSR) by means of whole-head magnetoencephalography (MEG). The rhythm of 40-Hz AM of a 500-Hz tone evoked the ASSR, and a short noise burst served as a concurrent stimulus. Possible direct interactions of the auditory stimuli were excluded by presenting the noise impulse in a different frequency channel (2,000–3,000 Hz) to the contralateral ear. The concurrent stimulus induced a considerable decrement in the amplitude of ASSR, which was localized in primary auditory cortices. This decrement lasted 250 ms and was significantly longer than the duration of the transient gamma-band response evoked by the noise burst. Thus it could not be explained by any linear superimposition of the responses. The time courses of ASSR amplitude and phase during recovery from the decrement resembled those after stimulus onset, indicating that a new ASSR was built up after the resetting stimulus. The results are discussed as reset of oscillations in human thalamocortical networks.