Journal articles on the topic "Auditory Acoustic Features"

To see other types of publications on this topic, follow the link: Auditory Acoustic Features.

Cite your sources in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Auditory Acoustic Features".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Futamura, Ryohei. "Differences in acoustic characteristics of hitting sounds in baseball games." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 3 (February 1, 2023): 4550–56. http://dx.doi.org/10.3397/in_2022_0654.

Abstract:
In sports, athletes use visual and auditory information to perform full-body movements. Some studies have reported that auditory information is an essential cue for athletes: they use it to predict ball behavior and determine body movements. However, because athletes use situation-related sounds instinctively, there is no systematic methodology for improving auditory-based competitive ability. Few studies have attempted to approach the use of sound in games from the perspective of acoustics, and the functional acoustical features have not been quantitatively revealed. Therefore, the objective of this study is to clarify the acoustical characteristics of auditory information so as to maximize its utilization in baseball games. In particular, to analyze the acoustical features of batted-ball sounds that enhance defensive skills, we conducted acoustic measurements of batted-ball sounds in realistic situations. The results showed that the peak gain values of fly and liner batted balls were greater than those of grounders, and that the frequency components of the hitting sounds also differed among them.
2

Rupp, Kyle, Jasmine L. Hect, Madison Remick, Avniel Ghuman, Bharath Chandrasekaran, Lori L. Holt, and Taylor J. Abel. "Neural responses in human superior temporal cortex support coding of voice representations." PLOS Biology 20, no. 7 (July 28, 2022): e3001675. http://dx.doi.org/10.1371/journal.pbio.3001675.

Abstract:
The ability to recognize abstract features of voice during auditory perception is an intricate feat of human audition. For the listener, this occurs in near-automatic fashion to seamlessly extract complex cues from a highly variable auditory signal. Voice perception depends on specialized regions of auditory cortex, including superior temporal gyrus (STG) and superior temporal sulcus (STS). However, the nature of voice encoding at the cortical level remains poorly understood. We leverage intracerebral recordings across human auditory cortex during presentation of voice and nonvoice acoustic stimuli to examine voice encoding at the cortical level in 8 patient-participants undergoing epilepsy surgery evaluation. We show that voice selectivity increases along the auditory hierarchy from supratemporal plane (STP) to the STG and STS. Results show accurate decoding of vocalizations from human auditory cortical activity even in the complete absence of linguistic content. These findings show an early, less-selective temporal window of neural activity in the STG and STS followed by a sustained, strongly voice-selective window. Encoding models demonstrate divergence in the encoding of acoustic features along the auditory hierarchy, wherein STG/STS responses are best explained by voice category and acoustics, as opposed to acoustic features of voice stimuli alone. This is in contrast to neural activity recorded from STP, in which responses were accounted for by acoustic features. These findings support a model of voice perception that engages categorical encoding mechanisms within STG and STS to facilitate feature extraction.
3

Bendor, Daniel, and Xiaoqin Wang. "Neural Coding of Periodicity in Marmoset Auditory Cortex." Journal of Neurophysiology 103, no. 4 (April 2010): 1809–22. http://dx.doi.org/10.1152/jn.00281.2009.

Abstract:
Pitch, our perception of how high or low a sound is on a musical scale, crucially depends on a sound's periodicity. If an acoustic signal is temporally jittered so that it becomes aperiodic, the pitch will no longer be perceivable even though other acoustical features that normally covary with pitch are unchanged. Previous electrophysiological studies investigating pitch have typically used only periodic acoustic stimuli, and as such these studies cannot distinguish between a neural representation of pitch and an acoustical feature that only correlates with pitch. In this report, we examine in the auditory cortex of awake marmoset monkeys (Callithrix jacchus) the neural coding of a periodicity's repetition rate, an acoustic feature that covaries with pitch. We first examine if individual neurons show similar repetition rate tuning for different periodic acoustic signals. We next measure how sensitive these neural representations are to the temporal regularity of the acoustic signal. We find that neurons throughout auditory cortex covary their firing rate with the repetition rate of an acoustic signal. However, similar repetition rate tuning across acoustic stimuli and sensitivity to temporal regularity were generally only observed in a small group of neurons found near the anterolateral border of primary auditory cortex, the location of a previously identified putative pitch processing center. These results suggest that although the encoding of repetition rate is a general component of auditory cortical processing, the neural correlate of periodicity is confined to a special class of pitch-selective neurons within the putative pitch processing center of auditory cortex.
4

Merritt, Brandon. "Speech beyond the binary: Some acoustic-phonetic and auditory-perceptual characteristics of non-binary speakers." JASA Express Letters 3, no. 3 (February 2023): 035206. http://dx.doi.org/10.1121/10.0017642.

Abstract:
Speech acoustics research typically assumes speakers are men or women with speech characteristics associated with these two gender categories. Less work has assessed acoustic-phonetic characteristics of non-binary speakers. This study examined acoustic-phonetic features across adult cisgender (15 men and 15 women) and subgroups of transgender (15 non-binary, 7 transgender men, and 7 transgender women) speakers and relations among these features and perceptual ratings of gender identity and masculinity/femininity. Differing acoustic-phonetic features were predictive of confidence in speaker gender and masculinity/femininity across cisgender and transgender speakers. Non-binary speakers were perceptually rated within an intermediate range of cisgender women and all other groups.
5

Fox, Robert Allen, and Jean Booth. "Research Note on Perceptual Features and Auditory Representations." Perceptual and Motor Skills 65, no. 3 (December 1987): 837–38. http://dx.doi.org/10.2466/pms.1987.65.3.837.

Abstract:
It has been argued that bark-scale transformed formant frequency values more accurately reflect auditory representations of vowels in the perceptual system than do the absolute physical values (in Hertz). In the present study the perceptual features of 15 monophthongal and diphthongal vowels (obtained using multidimensional scaling) were compared with both absolute and bark-scale transformed acoustic vowel measures. Analyses suggest that bark-transformation of the acoustic data does not necessarily produce better predictions of the vowels' perceptual space.
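
For orientation (not from the study itself): one widely used Hz-to-Bark mapping is Traunmüller's formula, z = 26.81·f/(1960 + f) − 0.53; the paper does not state which variant of the bark transform it applied. A minimal sketch:

```python
# Hz-to-Bark conversion (Traunmueller, 1990): one common bark transform;
# the study does not state which variant of the transform it used.
def hz_to_bark(f_hz: float) -> float:
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# Example: F1/F2 of a hypothetical /i/-like vowel
f1, f2 = 280.0, 2250.0
print(hz_to_bark(f1), hz_to_bark(f2))  # roughly 2.8 and 13.8 bark
```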
6

Donnelly, Martin J., Carmel A. Daly, and Robert J. S. Briggs. "MR imaging features of an intracochlear acoustic schwannoma." Journal of Laryngology & Otology 108, no. 12 (December 1994): 1111–14. http://dx.doi.org/10.1017/s0022215100129056.

Abstract:
We present a very unusual case of an acoustic neuroma involving the left cochlea and internal auditory canal of a 24-year-old man. Clinical suspicion was aroused when the patient presented with a left total sensorineural hearing loss and continuing vertigo. The diagnosis was made pre-operatively with MRI after initial CT scanning was normal. The tumour was removed via a transotic approach. This case report demonstrates the MRI features of an intracochlear schwannoma and emphasizes the importance of MRI in patients with significant auditory and clinical abnormalities but normal CT scans of the relevant region.
7

Buckley, Daniel P., Manuel Diaz Cadiz, Tanya L. Eadie, and Cara E. Stepp. "Acoustic Model of Perceived Overall Severity of Dysphonia in Adductor-Type Laryngeal Dystonia." Journal of Speech, Language, and Hearing Research 63, no. 8 (August 10, 2020): 2713–22. http://dx.doi.org/10.1044/2020_jslhr-19-00354.

Abstract:
Purpose: This study is a secondary analysis of existing data. The goal of the study was to construct an acoustic model of perceived overall severity of dysphonia in adductor laryngeal dystonia (AdLD). We predicted that acoustic measures (a) related to voice and pitch breaks and (b) related to vocal effort would form the primary elements of a model corresponding to auditory-perceptual ratings of overall severity of dysphonia. Method: Twenty inexperienced listeners evaluated the overall severity of dysphonia of speech stimuli from 19 individuals with AdLD. Acoustic features related to primary signs of AdLD (hyperadduction resulting in pitch and voice breaks) and to a potential secondary symptom of AdLD (vocal effort, measures of relative fundamental frequency) were computed from the speech stimuli. Multiple linear regression analysis was applied to construct an acoustic model of the overall severity of dysphonia. Results: The acoustic model included an acoustic feature related to pitch and voice breaks and three acoustic measures derived from relative fundamental frequency; it explained 84.9% of the variance in the auditory-perceptual ratings of overall severity of dysphonia in the speech samples. Conclusions: Auditory-perceptual ratings of overall severity of dysphonia in AdLD were related to acoustic features of primary signs (pitch and voice breaks, hyperadduction associated with laryngeal spasms) and also to acoustic features of vocal effort. This suggests that compensatory vocal effort may be a secondary symptom in AdLD. Future work to generalize this acoustic model to a larger, independent data set is necessary before clinical translation is warranted.
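
A sketch of the modeling step described above (multiple linear regression from acoustic measures to perceptual severity ratings); the data and the four feature columns are synthetic placeholders, not the authors' dataset:

```python
# Minimal sketch of a multiple-linear-regression acoustic model of
# perceived severity; the four feature columns (one break-related
# measure plus three RFF-like measures) are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_speakers = 19
X = rng.normal(size=(n_speakers, 4))            # acoustic features
true_w = np.array([2.0, 1.0, -0.5, 0.8])        # arbitrary ground truth
y = X @ true_w + rng.normal(scale=0.5, size=n_speakers)  # severity ratings

model = LinearRegression().fit(X, y)
print(model.score(X, y))  # R^2, analogous to the 84.9% variance explained
```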
8

Zong, Nannan, and Meihong Wu. "A Computational Model for Evaluating Transient Auditory Storage of Acoustic Features in Normal Listeners." Sensors 22, no. 13 (July 4, 2022): 5033. http://dx.doi.org/10.3390/s22135033.

Abstract:
Humans are able to detect an instantaneous change in correlation, demonstrating an ability to temporally process extremely rapid changes in interaural configurations. This temporal dynamic is correlated with human listeners’ ability to store acoustic features in a transient auditory manner. The present study investigated whether the ability of transient auditory storage of acoustic features was affected by the interaural delay, which was assessed by measuring the sensitivity for detecting the instantaneous change in correlation for both wideband and narrowband correlated noise with various interaural delays. Furthermore, whether an instantaneous change in correlation between correlated interaural narrowband or wideband noise was detectable when introducing the longest interaural delay was investigated. Then, an auditory computational description model was applied to explore the relationship between wideband and narrowband simulation noise with various center frequencies in the auditory processes of lower-level transient memory of acoustic features. The computing results indicate that low-frequency information dominated perception and was more distinguishable in length than the high-frequency components, and the longest interaural delay for narrowband noise signals was highly correlated with that for wideband noise signals in the dynamic process of auditory perception.
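
A common way to synthesize such stimuli (a sketch under standard assumptions, not the authors' stimulus code) is to mix independent noises so the ears receive a target correlation, switch the mix mid-signal for the instantaneous change, and delay one channel:

```python
# Sketch: a dichotic noise pair with interaural correlation rho, an
# instantaneous drop to rho = 0 at the midpoint, and an interaural delay.
# This is the textbook construction, not the authors' stimulus code.
import numpy as np

fs = 48000
rho, delay_samp = 1.0, 24          # 24 samples = 0.5 ms interaural delay
x = np.random.randn(fs)            # 1 s of wideband noise
y = np.random.randn(fs)            # independent noise

left = x
right = rho * x + np.sqrt(1.0 - rho**2) * y  # correlation rho with left
half = fs // 2
right[half:] = y[half:]            # instantaneous change to rho = 0
right = np.roll(right, delay_samp) # impose the delay (circular, for brevity)
```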
9

Boşnak, Mehmet, and Ayhan Eralp. "Electrophysiological, Histological and Neurochemical Features of Cochlear Nucleus." European Journal of Therapeutics 13, no. 2 (May 1, 2007): 42–49. http://dx.doi.org/10.58600/eurjther.2007-13-2-1383-arch.

Abstract:
The cochlear nucleus (CN) is the first brain centre in the auditory system and is responsible for sorting the neural signals received from the cochlea into parallel processing streams for transmission to the assorted higher auditory nuclei. A commissural connection formed between the cochlear nuclei through direct projections provides the first site in the central auditory system at which binaural information can influence the ascending auditory signal. This restricted review investigates the nature of the commissural projections and the impact of their input upon neurons of the CN through intracellular and extracellular electrophysiological recordings, together with both acoustic and electrical stimulation of the contralateral CN. It also investigates the electrophysiological, histological and neurochemical features of the CN and its commissural projections.
10

Yang, Honghui, Junhao Li, Sheng Shen, and Guanghui Xu. "A Deep Convolutional Neural Network Inspired by Auditory Perception for Underwater Acoustic Target Recognition." Sensors 19, no. 5 (March 4, 2019): 1104. http://dx.doi.org/10.3390/s19051104.

Abstract:
Underwater acoustic target recognition (UATR) using ship-radiated noise faces big challenges due to the complex marine environment. In this paper, inspired by the neural mechanisms of auditory perception, a new end-to-end deep neural network named the auditory perception inspired Deep Convolutional Neural Network (ADCNN) is proposed for UATR. In the ADCNN model, inspired by the frequency-component perception neural mechanism, a bank of multi-scale deep convolution filters is designed to decompose the raw time-domain signal into signals with different frequency components. Inspired by the plasticity neural mechanism, the parameters of the deep convolution filters are initialized randomly and then learned and optimized for UATR. Next, max-pooling layers and fully connected layers extract features from each decomposed signal. Finally, in fusion layers, the features from each decomposed signal are merged and deep feature representations are extracted to classify underwater acoustic targets. The ADCNN model simulates the deep acoustic information processing structure of the auditory system. Experimental results show that the proposed model can decompose, model and classify ship-radiated noise signals efficiently. It achieves a classification accuracy of 81.96%, the highest in the comparison experiments. The experimental results show that the auditory perception inspired deep learning method has encouraging potential to improve the classification performance of UATR.
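
A rough PyTorch sketch of the kind of multi-scale convolutional front end the abstract describes: randomly initialized 1-D filter banks at several kernel sizes, learned end to end. Filter counts and kernel sizes here are illustrative guesses, not the published ADCNN configuration:

```python
# Multi-scale 1-D convolutional front end; parameters are illustrative.
import torch
import torch.nn as nn

class MultiScaleFrontEnd(nn.Module):
    def __init__(self, n_filters=32, kernel_sizes=(16, 64, 256)):
        super().__init__()
        self.banks = nn.ModuleList([
            nn.Conv1d(1, n_filters, k, stride=k // 4, padding=k // 2)
            for k in kernel_sizes   # short kernels resolve fine time detail
        ])

    def forward(self, wav):         # wav: (batch, 1, samples)
        return [torch.relu(bank(wav)) for bank in self.banks]

feats = MultiScaleFrontEnd()(torch.randn(8, 1, 16000))
print([f.shape for f in feats])     # one feature map per scale
```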
11

Xiong, Feifei, Stefan Goetze, Birger Kollmeier, and Bernd T. Meyer. "Exploring Auditory-Inspired Acoustic Features for Room Acoustic Parameter Estimation From Monaural Speech." IEEE/ACM Transactions on Audio, Speech, and Language Processing 26, no. 10 (October 2018): 1809–20. http://dx.doi.org/10.1109/taslp.2018.2843537.

12

Kislyuk, Daniel S., Riikka Möttönen, and Mikko Sams. "Visual Processing Affects the Neural Basis of Auditory Discrimination." Journal of Cognitive Neuroscience 20, no. 12 (December 2008): 2175–84. http://dx.doi.org/10.1162/jocn.2008.20152.

Abstract:
The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the “McGurk effect”: The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746–748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100–250 msec by any above-threshold change in a sequence of repetitive sounds. An “odd-ball” sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.
13

Brown, David H., and Richard L. Hyson. "Intrinsic physiological properties underlie auditory response diversity in the avian cochlear nucleus." Journal of Neurophysiology 121, no. 3 (March 1, 2019): 908–27. http://dx.doi.org/10.1152/jn.00459.2018.

Abstract:
Sensory systems exploit parallel processing of stimulus features to enable rapid, simultaneous extraction of information. Mechanisms that facilitate this differential extraction of stimulus features can be intrinsic or synaptic in origin. A subdivision of the avian cochlear nucleus, nucleus angularis (NA), extracts sound intensity information from the auditory nerve and contains neurons that exhibit diverse responses to sound and current injection. NA neurons project to multiple regions ascending the auditory brain stem including the superior olivary nucleus, lateral lemniscus, and avian inferior colliculus, with functional implications for inhibitory gain control and sound localization. Here we investigated whether the diversity of auditory response patterns in NA can be accounted for by variation in intrinsic physiological features. Modeled sound-evoked auditory nerve input was applied to NA neurons with dynamic clamp during in vitro whole cell recording at room temperature. Temporal responses to auditory nerve input depended on variation in intrinsic properties, and the low-threshold K+ current was implicated as a major contributor to temporal response diversity and neuronal input-output functions. An auditory nerve model of acoustic amplitude modulation produced synchrony coding of modulation frequency that depended on the intrinsic physiology of the individual neuron. In Primary-Like neurons, varying low-threshold K+ conductance with dynamic clamp altered temporal modulation tuning bidirectionally. Taken together, these data suggest that intrinsic physiological properties play a key role in shaping auditory response diversity to both simple and more naturalistic auditory stimuli in the avian cochlear nucleus. NEW & NOTEWORTHY This article addresses the question of how the nervous system extracts different information in sounds. Neurons in the cochlear nucleus show diverse responses to acoustic stimuli that may allow for parallel processing of acoustic features. The present studies suggest that diversity in intrinsic physiological features of individual neurons, including levels of a low voltage-activated K+ current, play a major role in regulating the diversity of auditory responses.
14

Pannese, Alessia, Didier Grandjean, and Sascha Frühholz. "Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions." Cortex 85 (December 2016): 116–25. http://dx.doi.org/10.1016/j.cortex.2016.10.013.

15

Smotherman, M. S., and P. M. Narins. "Hair cells, hearing and hopping: a field guide to hair cell physiology in the frog." Journal of Experimental Biology 203, no. 15 (August 1, 2000): 2237–46. http://dx.doi.org/10.1242/jeb.203.15.2237.

Abstract:
For more than four decades, hearing in frogs has been an important source of information for those interested in auditory neuroscience, neuroethology and the evolution of hearing. Individual features of the frog auditory system can be found represented in one or many of the other vertebrate classes, but collectively the frog inner ear represents a cornucopia of evolutionary experiments in acoustic signal processing. The mechano-sensitive hair cell, as the focal point of transduction, figures critically in the encoding of acoustic information in the afferent auditory nerve. In this review, we provide a short description of how auditory signals are encoded by the specialized anatomy and physiology of the frog inner ear and examine the role of hair cell physiology and its influence on the encoding of sound in the frog auditory nerve. We hope to demonstrate that acoustic signal processing in frogs may offer insights into the evolution and biology of hearing not only in amphibians but also in reptiles, birds and mammals, including man.
16

Ma, Yanxin, Yifan Zhang, Jiahua Zhu, Ke Xu, and Yujin Cai. "A Fast Instantaneous Frequency Estimation for Underwater Acoustic Target Feature Extraction." Journal of Physics: Conference Series 2031, no. 1 (September 1, 2021): 012018. http://dx.doi.org/10.1088/1742-6596/2031/1/012018.

Abstract:
Traditional auditory features merely represent the amplitude characteristics of target signals in the frequency domain. Such features are susceptible to environmental noise, resulting in significant degradation of recognition stability. Inspired by the instantaneous information applied in the speech signal processing field, this paper proposes a feature extraction method using subband-based instantaneous frequency. A fast instantaneous-frequency extraction algorithm is proposed based on normalized Gammatone filterbanks. Experiments confirm that the proposed feature extraction method can effectively maintain recognition accuracy under low-SNR conditions while reducing the computation cost.
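
A minimal sketch of subband instantaneous-frequency extraction via the analytic signal; a Butterworth bandpass stands in for the paper's normalized Gammatone filterbank, and the band edges are illustrative assumptions:

```python
# Subband instantaneous frequency from the analytic signal (Hilbert).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def subband_inst_freq(x, fs, bands=((100, 400), (400, 1600), (1600, 6400))):
    feats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        analytic = hilbert(sosfiltfilt(sos, x))       # analytic signal
        phase = np.unwrap(np.angle(analytic))
        inst_f = np.diff(phase) * fs / (2 * np.pi)    # Hz, sample by sample
        feats.append(inst_f.mean())
    return np.array(feats)

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
print(subband_inst_freq(tone, fs))  # middle band reads close to 1000 Hz
```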
17

Winkler, István, and Nelson Cowan. "From Sensory to Long-Term Memory." Experimental Psychology 52, no. 1 (January 2005): 3–20. http://dx.doi.org/10.1027/1618-3169.52.1.3.

Abstract:
Everyday experience tells us that some types of auditory sensory information are retained for long periods of time. For example, we are able to recognize friends by their voice alone or identify the source of familiar noises even years after we last heard the sounds. It is thus somewhat surprising that the results of most studies of auditory sensory memory show that acoustic details, such as the pitch of a tone, fade from memory in ca. 10-15 s. One should, therefore, ask (1) what types of acoustic information can be retained for a longer term, (2) what circumstances allow or help the formation of durable memory records for acoustic details, and (3) how such memory records can be accessed. The present review discusses the results of experiments that used a model of auditory recognition, the auditory memory reactivation paradigm. Results obtained with this paradigm suggest that the brain stores features of individual sounds embedded within representations of acoustic regularities that have been detected for the sound patterns and sequences in which the sounds appeared. Thus, sounds closely linked with their auditory context are more likely to be remembered. The representations of acoustic regularities are automatically activated by matching sounds, enabling object recognition.
18

Al Mahmud, Nahyan, and Shahfida Amjad Munni. "Qualitative Analysis of PLP in LSTM for Bangla Speech Recognition." International journal of Multimedia & Its Applications 12, no. 5 (October 30, 2020): 1–8. http://dx.doi.org/10.5121/ijma.2020.12501.

Abstract:
The performance of various acoustic feature extraction methods is compared in this work using a Long Short-Term Memory (LSTM) neural network in a Bangla speech recognition system. Acoustic features are a series of vectors that represent the speech signal; they can be classified into either words or subword units such as phonemes. In this work, linear predictive coding (LPC) is first used as the acoustic vector extraction technique, chosen for its widespread popularity. Other vector extraction techniques, Mel-frequency cepstral coefficients (MFCC) and perceptual linear prediction (PLP), which closely model the human auditory system, are then also used. These feature vectors are trained with the LSTM neural network, and the obtained models of different phonemes are compared using statistical tools, namely the Bhattacharyya distance and the Mahalanobis distance, to investigate the nature of those acoustic features.
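
A small sketch of two building blocks from the pipeline above: MFCC extraction and a Mahalanobis distance between feature clusters. The audio is synthetic and the frame grouping is a hypothetical stand-in for trained phoneme models:

```python
# MFCC features plus Mahalanobis distance; synthetic stand-in data.
import numpy as np
import librosa
from scipy.spatial.distance import mahalanobis

sr = 16000
y = np.random.randn(3 * sr).astype(np.float32)        # stand-in waveform
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # frames x 13

a, b = mfcc[:40], mfcc[40:80]       # pretend: frames from two phonemes
vi = np.linalg.inv(np.cov(mfcc.T))  # inverse covariance of the MFCC space
print(mahalanobis(a.mean(axis=0), b.mean(axis=0), vi))
```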
19

Cheng, Miao, and Ah Chung Tsoi. "Fractal dimension pattern-based multiresolution analysis for rough estimator of speaker-dependent audio emotion recognition." International Journal of Wavelets, Multiresolution and Information Processing 15, no. 05 (August 28, 2017): 1750042. http://dx.doi.org/10.1142/s0219691317500424.

Abstract:
As a general means of expression, audio has attracted much attention, and audio analysis and recognition have wide applications in the real world. Audio emotion recognition (AER) attempts to understand the emotional state of a human from given utterance signals and has been studied broadly for the further development of friendly human–machine interfaces. Though several state-of-the-art auditory methods have been devised for audio recognition, most of them focus on the discriminative use of acoustic features, while the efficiency with which recognition results are fed back is ignored. This limits the practical application of AER, where rapid learning of emotion patterns is desired. To make prediction of audio emotion feasible, the speaker-dependent patterns of audio emotions are learned with multiresolution analysis, and fractal dimension (FD) features are calculated for acoustic feature extraction. The method is able to efficiently learn the intrinsic characteristics of auditory emotions, with utterance features learned from the FDs of each sub-band. Experimental results show the proposed method provides competitive performance for AER.
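
A sketch of fractal-dimension features over a multiresolution (wavelet) decomposition. The abstract does not name its FD estimator or wavelet; Katz's estimator and a db4 wavelet are assumed here for illustration:

```python
# Katz fractal dimension per wavelet sub-band; estimator choice assumed.
import numpy as np
import pywt

def katz_fd(x):
    steps = np.abs(np.diff(x))           # step lengths along the curve
    L, n = steps.sum(), len(steps)
    d = np.abs(x - x[0]).max()           # maximal excursion from the start
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

frame = np.random.randn(4096)            # stand-in utterance frame
subbands = pywt.wavedec(frame, "db4", level=4)
print([round(katz_fd(b), 3) for b in subbands])  # one FD per sub-band
```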
20

Cohen, Yale E., Frédéric Theunissen, Brian E. Russ, and Patrick Gill. "Acoustic Features of Rhesus Vocalizations and Their Representation in the Ventrolateral Prefrontal Cortex." Journal of Neurophysiology 97, no. 2 (February 2007): 1470–84. http://dx.doi.org/10.1152/jn.00769.2006.

Abstract:
Communication is one of the fundamental components of both human and nonhuman animal behavior. Auditory communication signals (i.e., vocalizations) are especially important in the socioecology of several species of nonhuman primates such as rhesus monkeys. In rhesus, the ventrolateral prefrontal cortex (vPFC) is thought to be part of a circuit involved in representing vocalizations and other auditory objects. To further our understanding of the role of the vPFC in processing vocalizations, we characterized the spectrotemporal features of rhesus vocalizations, compared these features with other classes of natural stimuli, and then related the rhesus-vocalization acoustic features to neural activity. We found that the range of these spectrotemporal features was similar to that found in other ensembles of natural stimuli, including human speech, and identified the subspace of these features that would be particularly informative to discriminate between different vocalizations. In a first neural study, however, we found that the tuning properties of vPFC neurons did not emphasize these particularly informative spectrotemporal features. In a second neural study, we found that a first-order linear model (the spectrotemporal receptive field) is not a good predictor of vPFC activity. The results of these two neural studies are consistent with the hypothesis that the vPFC is not involved in coding the first-order acoustic properties of a stimulus but is involved in processing the higher-order information needed to form representations of auditory objects.
21

Carrasco, Andres, and Stephen G. Lomber. "Neuronal activation times to simple, complex, and natural sounds in cat primary and nonprimary auditory cortex." Journal of Neurophysiology 106, no. 3 (September 2011): 1166–78. http://dx.doi.org/10.1152/jn.00940.2010.

Abstract:
Interactions between living organisms and the environment are commonly regulated by accurate and timely processing of sensory signals. Hence, behavioral response engagement by an organism is typically constrained by the arrival time of sensory information to the brain. While psychophysical response latencies to acoustic information have been investigated, little is known about how variations in neuronal response time relate to sensory signal characteristics. Consequently, the primary objective of the present investigation was to determine the pattern of neuronal activation induced by simple (pure tones), complex (noise bursts and frequency modulated sweeps), and natural (conspecific vocalizations) acoustic signals of different durations in cat auditory cortex. Our analysis revealed three major cortical response characteristics. First, latency measures systematically increase in an antero-dorsal to postero-ventral direction among regions of auditory cortex. Second, complex acoustic stimuli reliably provoke faster neuronal response engagement than simple stimuli. Third, variations in neuronal response time induced by changes in stimulus duration are dependent on acoustic spectral features. Collectively, these results demonstrate that acoustic signals, regardless of complexity, induce a directional pattern of activation in auditory cortex.
22

Zhang, Ke, Yu Su, Jingyu Wang, Sanyu Wang, and Yanhua Zhang. "Environment Sound Classification System Based on Hybrid Feature and Convolutional Neural Network." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 38, no. 1 (February 2020): 162–69. http://dx.doi.org/10.1051/jnwpu/20203810162.

Abstract:
At present, environment sound recognition systems mainly identify environment sounds with deep neural networks and a wide variety of auditory features. It is therefore necessary to analyze which auditory features are more suitable for deep neural network based environment sound classification and recognition (ESCR) systems. In this paper, we chose three sound features based on two widely used filter banks: the Mel and Gammatone filter banks. Subsequently, the hybrid feature MGCC is presented. Finally, a deep convolutional neural network is proposed to verify which features are more suitable for environment sound classification and recognition tasks. The experimental results show that signal-processing features outperform spectrogram features in a deep neural network based environment sound recognition system. Among all the acoustic features, the MGCC feature achieves the best performance. Finally, the MGCC-CNN model proposed in this paper is compared with state-of-the-art environmental sound classification models on the UrbanSound8K dataset. The results show that the proposed model has the best classification accuracy.
23

Ogg, Mattson, Thomas A. Carlson, and L. Robert Slevc. "The Rapid Emergence of Auditory Object Representations in Cortex Reflect Central Acoustic Attributes." Journal of Cognitive Neuroscience 32, no. 1 (January 2020): 111–23. http://dx.doi.org/10.1162/jocn_a_01472.

Abstract:
Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
24

Nealen, Paul M., and Marc F. Schmidt. "Distributed and Selective Auditory Representation of Song Repertoires in the Avian Song System." Journal of Neurophysiology 96, no. 6 (December 2006): 3433–47. http://dx.doi.org/10.1152/jn.01130.2005.

Abstract:
For many songbirds, the vocal repertoire constitutes acoustically distinct songs that are flexibly used in various behavioral contexts. To investigate how these different vocalizations are represented in the song neural system, we presented multiple song stimuli while performing extracellular recording in nucleus HVC in adult male song sparrows Melospiza melodia, a species known for its complex vocal repertoire and territorial use of song. We observed robust auditory responses to natural song stimuli in both awake and anesthetized animals. Auditory responses were selective for multiple songs of the bird's own repertoire (BOR) over acoustically modified versions of these stimuli. Selectivity was evident in both awake and anesthetized HVC, in contrast to auditory selectivity in zebra finch HVC, which is apparent only under anesthesia. Presentation of multiple song stimuli at different recording locations demonstrated that stimulus acoustic features and local neuronal tuning both contribute to auditory responsiveness. HVC auditory responsiveness was broadly distributed and nontopographic. Variance in auditory responsiveness was greater among than within HVC recording locations in both anesthetized and awake birds, in contrast to the global nature of auditory representation within zebra finch HVC. To assess the spatial consistency of auditory representation within HVC, we measured the repeatability with which ensembles of BOR songs were represented across the nucleus. Auditory response ranks to different songs were more consistent across recording locations in awake than in anesthetized animals. This spatial reliability of auditory responsiveness suggests that sound stimulus acoustic features contribute relatively more to auditory responsiveness in awake than in anesthetized animals.
25

Mobley, Frank. "Classification of SUAS propellers with auditory feature extraction methods." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 266, no. 2 (May 25, 2023): 102–13. http://dx.doi.org/10.3397/nc_2023_0014.

Abstract:
A measurement of one stock and three custom-designed propellers was conducted with the United States Air Force Academy. The measurement consisted of a constant-radius arc and a radial array to examine acoustic attributes as a function of distance and angle. During the measurement activity, the experimenters observed that each propeller possessed different audio attributes that assisted in distinguishing the stock propeller from any of the custom propellers. To adequately explore attributes beyond the propeller's A-weighted level as a function of thrust, timbre and sound quality analyses were conducted. These auditory feature extraction methods were combined with a fractional-octave analysis into a database for machine learning classification analysis. The new baseline propeller is distinguished by acoustic roughness alone, but the other blade designs require additional timbre features to be segregated from the stock propeller.
26

Shen, Sheng, Honghui Yang, Xiaohui Yao, Junhao Li, Guanghui Xu, and Meiping Sheng. "Ship Type Classification by Convolutional Neural Networks with Auditory-Like Mechanisms." Sensors 20, no. 1 (January 1, 2020): 253. http://dx.doi.org/10.3390/s20010253.

Abstract:
Ship type classification from radiated noise helps monitor shipping noise around the hydrophone deployment site. This paper introduces a convolutional neural network with several auditory-like mechanisms for ship type classification. The proposed model mainly includes a cochlea model and an auditory center model. In the cochlea model, acoustic signal decomposition at the basilar membrane is implemented by a time convolutional layer with auditory filters and dilated convolutions, and the transformation of neural patterns at the hair cells is modeled by a time-frequency conversion layer to extract auditory features. In the auditory center model, auditory features are first selectively emphasized in a supervised manner; then, spectro-temporal patterns are extracted by a deep architecture with multistage auditory mechanisms. The whole model is optimized with an objective function for ship type classification to model the plasticity of the auditory system. The contributions, compared with an auditory-inspired convolutional neural network, include improvements in dilated convolutions, deep architecture and the target layer. The proposed model can extract auditory features from a raw hydrophone signal and identify types of ships under different working conditions. The model achieved a classification accuracy of 87.2% on four ship types and ocean background noise.
27

Ding, Nai, and Jonathan Z. Simon. "Neural coding of continuous speech in auditory cortex during monaural and dichotic listening." Journal of Neurophysiology 107, no. 1 (January 2012): 78–89. http://dx.doi.org/10.1152/jn.00297.2011.

Abstract:
The cortical representation of the acoustic features of continuous speech is the foundation of speech perception. In this study, noninvasive magnetoencephalography (MEG) recordings are obtained from human subjects actively listening to spoken narratives, in both simple and cocktail party-like auditory scenes. By modeling how acoustic features of speech are encoded in ongoing MEG activity as a spectrotemporal response function, we demonstrate that the slow temporal modulations of speech in a broad spectral region are represented bilaterally in auditory cortex by a phase-locked temporal code. For speech presented monaurally to either ear, this phase-locked response is always more faithful in the right hemisphere, but with a shorter latency in the hemisphere contralateral to the stimulated ear. When different spoken narratives are presented to each ear simultaneously (dichotic listening), the resulting cortical neural activity precisely encodes the acoustic features of both of the spoken narratives, but slightly weakened and delayed compared with the monaural response. Critically, the early sensory response to the attended speech is considerably stronger than that to the unattended speech, demonstrating top-down attentional gain control. This attentional gain is substantial even during the subjects' very first exposure to the speech mixture and therefore largely independent of knowledge of the speech content. Together, these findings characterize how the spectrotemporal features of speech are encoded in human auditory cortex and establish a single-trial-based paradigm to study the neural basis underlying the cocktail party phenomenon.
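
The spectrotemporal response function mentioned above is, in its simplest temporal form, a regularized regression from time-lagged stimulus features to the neural signal. A minimal single-feature sketch with synthetic data (illustrative lag count and regularization, not the authors' pipeline):

```python
# Temporal response function via ridge regression; synthetic data.
import numpy as np

def trf_ridge(stim, resp, n_lags=32, lam=1.0):
    T = len(stim)
    X = np.zeros((T, n_lags))
    for k in range(n_lags):
        X[k:, k] = stim[:T - k]          # column k holds stim delayed by k
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)
    return w                             # one weight per time lag

stim = np.random.randn(5000)                   # e.g., a speech envelope
kernel = np.exp(-np.arange(32) / 8.0)          # hypothetical true response
resp = np.convolve(stim, kernel)[:5000] + 0.1 * np.random.randn(5000)
print(np.round(trf_ridge(stim, resp)[:5], 2))  # recovers the decaying kernel
```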
28

Kumar, Sukhbinder, Heidi M. Bonnici, Sundeep Teki, Trevor R. Agus, Daniel Pressnitzer, Eleanor A. Maguire, and Timothy D. Griffiths. "Representations of specific acoustic patterns in the auditory cortex and hippocampus." Proceedings of the Royal Society B: Biological Sciences 281, no. 1791 (September 22, 2014): 20141000. http://dx.doi.org/10.1098/rspb.2014.1000.

Abstract:
Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be ‘decoded’ from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.
29

Dehaene-Lambertz, G. "Cerebral Specialization for Speech and Non-Speech Stimuli in Infants." Journal of Cognitive Neuroscience 12, no. 3 (May 2000): 449–60. http://dx.doi.org/10.1162/089892900562264.

Abstract:
Early cerebral specialization and lateralization for auditory processing in 4-month-old infants was studied by recording high-density evoked potentials to acoustical and phonetic changes in a series of repeated stimuli (either tones or syllables). Mismatch responses to these stimuli exhibit a distinct topography suggesting that different neural networks within the temporal lobe are involved in the perception and representation of the different features of an auditory stimulus. These data confirm that specialized modules are present within the auditory cortex very early in development. However, both for syllables and continuous tones, higher voltages were recorded over the left hemisphere than over the right with no significant interaction of hemisphere by type of stimuli. This suggests that there is no greater left hemisphere involvement in phonetic processing than in acoustic processing during the first months of life.
30

Shidlovskaya, Tetiana A., Tamara V. Shidlovskaya, Kateryna Yu Kureneva, Nikolay S. Kozak, and Tetiana V. Shevtsova. "Peculiarities of the acoustic reflex registration thresholds in relation to the parameters of the thresholds tone audiometry in patients with acute combat trauma." OTORHINOLARYNGOLOGY, No6(5) 2022 (January 30, 2023): 39–43. http://dx.doi.org/10.37219/2528-8253-2022-6-39.

Abstract:
Aim: to study the relationships between acoustic impedance measures and threshold tone audiometry in military personnel who have taken part in hostilities. Materials and methods: the results of audiometric and impedancemetric examinations of 43 servicemen aged 18 to 42 years who sustained acute combat trauma between March and August 2022, and of 20 patients with severe sensorineural hearing loss (stages 2, 3 and 4 according to the international classification) of non-acoustic-trauma origin as a comparison group. Results and discussion: in the 43 examined soldiers it was found that, despite significant bilateral sensorineural hearing loss (stages 3 and 4 according to the international classification), a full-fledged acoustic reflex was registered during both ipsilateral and contralateral stimulation. In the impedancemetric examination of the comparison group, by contrast, the acoustic reflex was not registered at all in most patients. Conclusions: 1. In acute combat acoustic trauma, the acoustic middle-ear-muscle reflex can be registered despite severe hearing loss; this may be caused by damage to the receptor part of the auditory analyzer accompanied by the phenomenon of accelerated loudness growth, by a central mechanism, or by a combination of central and peripheral mechanisms. 2. Dissociation between acoustic middle-ear-muscle reflex thresholds and tone audiometry thresholds is a characteristic feature of acoustic-trauma damage to the auditory analyzer. 3. These features, namely registration of the acoustic middle-ear-muscle reflex in severe hearing loss and the dissociation of acoustic reflex and tone audiometry thresholds, may be useful for further study of the pathogenesis of acute combat trauma, which in turn will contribute to improving the quality of diagnosis and to finding ways to correct auditory system disorders in individuals injured as a result of hostilities.
31

Itatani, Naoya, and Georg M. Klump. "Animal models for auditory streaming." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160112. http://dx.doi.org/10.1098/rstb.2016.0112.

Abstract:
Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.
32

Chiu, Yi-Fang, Amy Neel, and Travis Loux. "Exploring the Acoustic Perceptual Relationship of Speech in Parkinson's Disease." Journal of Speech, Language, and Hearing Research 64, no. 5 (May 11, 2021): 1560–70. http://dx.doi.org/10.1044/2021_jslhr-20-00610.

Abstract:
Purpose Auditory perceptual judgments are commonly used to diagnose dysarthria and assess treatment progress. The purpose of the study was to examine the acoustic underpinnings of perceptual speech abnormalities in individuals with Parkinson's disease (PD). Method Auditory perceptual judgments were obtained from sentences produced by 13 speakers with PD and five healthy older adults. Twenty young listeners rated overall ease of understanding, articulatory precision, voice quality, and prosodic adequacy on a visual analog scale. Acoustic measures associated with the speech subsystems of articulation, phonation, and prosody were obtained, including second formant transitions, articulation rate, cepstral and spectral measures of voice, and pitch variations. Regression analyses were performed to assess the relationships between perceptual judgments and acoustic variables. Results Perceptual impressions of Parkinsonian speech were related to combinations of several acoustic variables. Approximately 36%–49% of the variance in the perceptual ratings were explained by the acoustic measures indicating a modest acoustic perceptual relationship. Conclusions The relationships between perceptual ratings and acoustic signals in Parkinsonian speech are multifactorial and involve a variety of acoustic features simultaneously. The modest acoustic perceptual relationships, however, suggest that future work is needed to further examine the acoustic bases of perceptual judgments in dysarthria.
33

Schöneich, Stefan, Konstantinos Kostarakos, and Berthold Hedwig. "An auditory feature detection circuit for sound pattern recognition." Science Advances 1, no. 8 (September 2015): e1500325. http://dx.doi.org/10.1126/sciadv.1500325.

Abstract:
From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
34

Ishikawa, Norihiko, Atsushi Komatsuzaki, and Hisashi Tokano. "Meningioma of the internal auditory canal with extension into the vestibule." Journal of Laryngology & Otology 113, no. 12 (December 1999): 1101–3. http://dx.doi.org/10.1017/s0022215100158001.

Abstract:
Meningiomas account for approximately 18 to 19 per cent of all brain tumours. Although they can arise in numerous locations, meningiomas of the internal auditory canal (IAC) are rare. Most tumours that originate in the IAC are schwannomas of the VIIIth cranial nerve (acoustic neuromas). We report a case of a meningioma which appears to originate from the IAC and extends into the vestibule. The clinical findings and the radiographical features of meningiomas of the IAC are similar to those of acoustic neuromas. Pre-operative differentiation between acoustic neuromas and meningiomas of the IAC may be difficult.
35

Wang, Xingmei, Jiaxiang Meng, Yangtao Liu, Ge Zhan, and Zhaonan Tian. "Self-supervised acoustic representation learning via acoustic-embedding memory unit modified space autoencoder for underwater target recognition." Journal of the Acoustical Society of America 152, no. 5 (November 2022): 2905–15. http://dx.doi.org/10.1121/10.0015138.

Abstract:
Given the expense of annotating high-quality signals obtained from passive sonars and the weak generalization ability of any single feature in the ocean environment, this paper proposes self-supervised acoustic representation learning with an acoustic-embedding memory unit modified space autoencoder (ASAE) and applies it to the underwater target recognition task. Mimicking the animal auditory system, the first step is to design a self-supervised representation learning method called the space autoencoder (SAE), which merges the Mel filter-bank (FBank), with its acoustic discrimination, and the gammatone filter-bank (GBank), with its anti-noise robustness, into the SAE spectrogram (SAE Spec). Meanwhile, because SAE Spec carries poor high-level semantic information, an acoustic-embedding memory unit (AEMU) is introduced as an adversarial-enhancement strategy. During the auxiliary task, more negative samples are added to an improved contrastive loss function to obtain adversarially enhanced features called the ASAE spectrogram (ASAE Spec). Ultimately, comprehensive contrast and ablation experiments on two underwater datasets show that ASAE Spec surpasses other mainstream acoustic features by more than 0.96% in accuracy, convergence rate, and anti-noise robustness. The results prove the potential value of ASAE in practical applications.
36

Tran Ba Huy, Patrice, Jean Michel Hassan, Michel Wassef, Jacqueline Mikol, and Claude Thurel. "Acoustic Schwannoma Presenting as a Tumor of the External Auditory Canal." Annals of Otology, Rhinology & Laryngology 96, no. 4 (July 1987): 415–18. http://dx.doi.org/10.1177/000348948709600413.

Abstract:
An acoustic neurinoma involving the internal auditory canal, the vestibule, the cochlea, the middle ear, and extending into the cerebellopontine angle and the external auditory canal, is described in a 56-year-old woman. An initial episode of vertigo was followed by a 27-year history of progressive unilateral hearing loss leading to complete deafness and areflexia with central compensation. The tumor was removed by a two-step surgical procedure, and the histologic features were those of a schwannoma.
37

Woods, David L., G. Christopher Stecker, Teemu Rinne, Timothy J. Herron, Anthony D. Cate, E. William Yund, Isaac Liao, and Xiaojian Kang. "Functional Maps of Human Auditory Cortex: Effects of Acoustic Features and Attention." PLoS ONE 4, no. 4 (April 13, 2009): e5183. http://dx.doi.org/10.1371/journal.pone.0005183.

38

Schlesinger, Joseph J., Sarah H. Baum Miller, Katherine Nash, Marissa Bruce, Daniel Ashmead, Matthew S. Shotwell, Judy R. Edworthy, Mark T. Wallace, and Matthew B. Weinger. "Acoustic features of auditory medical alarms—An experimental study of alarm volume." Journal of the Acoustical Society of America 143, no. 6 (June 2018): 3688–97. http://dx.doi.org/10.1121/1.5043396.

39

Gold, Rinat, Pamela Butler, Nadine Revheim, David I. Leitman, John A. Hansen, Ruben C. Gur, Joshua T. Kantrowitz, et al. "Auditory Emotion Recognition Impairments in Schizophrenia: Relationship to Acoustic Features and Cognition." American Journal of Psychiatry 169, no. 4 (April 2012): 424–32. http://dx.doi.org/10.1176/appi.ajp.2011.11081230.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Kurtcan, S., A. Alkan, R. Kilicarslan, A. A. Bakan, H. Toprak, A. Aralasmak, F. Aksoy, and A. Kocer. "Auditory Pathway Features Determined by DTI in Subjects with Unilateral Acoustic Neuroma." Clinical Neuroradiology 26, no. 4 (March 27, 2015): 439–44. http://dx.doi.org/10.1007/s00062-015-0385-z.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Cotana, Franco, Francesco Asdrubali, Giulio Arcangeli, Sergio Luzzi, Giampietro Ricci, Lucia Busa, Michele Goretti, et al. "Extra-Auditory Effects from Noise Exposure in Schools: Results of Nine Italian Case Studies." Acoustics 5, no. 1 (February 24, 2023): 216–41. http://dx.doi.org/10.3390/acoustics5010013.

Full text source
Abstract:
Noise exposure may cause auditory and extra-auditory effects. School teachers and students are exposed to high noise levels, which affect perceptual-cognitive and neurobehavioral functioning and, in turn, teaching conditions and student performance. A protocol was defined, and the parameters to be investigated were identified, for the acoustic characterization of unoccupied and occupied school environments, the assessment of users by means of questionnaires completed by teachers and students, and the evaluation of vocal effort. Classrooms, laboratories, auditoriums, gymnasiums, common areas, canteens, and outdoor areas were analysed in terms of acoustic features and identification of the origin of noise. The protocol was tested in three kindergartens, three primary schools, and three secondary schools located in Rome, Florence, and Perugia. Results of the nine case studies are presented, including comparisons of objective and subjective investigations. In general, the acoustic performance of the spaces under investigation does not meet the requirements of current Italian legislation. In particular, student activity generates high noise levels in laboratories, gymnasiums, and canteens. Students report that noise mainly causes loss of concentration, fatigue, boredom, and headache. The outcomes of this research will be the starting point for defining strategies and solutions for noise control and mitigation in schools and for drafting guidelines for acoustical school design.
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Sauvé, Sarah A., Jeremy Marozeau, and Benjamin Rich Zendel. "The effects of aging and musicianship on the use of auditory streaming cues." PLOS ONE 17, no. 9 (September 22, 2022): e0274631. http://dx.doi.org/10.1371/journal.pone.0274631.

Full text source
Abstract:
Auditory stream segregation, or separating sounds into their respective sources and tracking them over time, is a fundamental auditory ability. Previous research has separately explored the impacts of aging and musicianship on the ability to separate and follow auditory streams. The current study evaluated the simultaneous effects of age and musicianship on auditory streaming induced by three physical features: intensity, spectral envelope, and temporal envelope. In the first study, older and younger musicians and non-musicians with normal hearing identified deviants in a four-note melody interleaved with distractors that were more or less similar to the melody in terms of intensity, spectral envelope, and temporal envelope. In the second study, older and younger musicians and non-musicians participated in a dissimilarity rating paradigm with pairs of melodies that differed along the same three features. Results suggested that auditory streaming skills are maintained in older adults, but that older adults rely on intensity more than younger adults do, whereas musicianship is associated with increased sensitivity to spectral and temporal envelope, acoustic features that are typically less effective for stream segregation, particularly in older adults.
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Näätänen, Risto. "The role of attention in auditory information processing as revealed by event-related potentials and other brain measures of cognitive function." Behavioral and Brain Sciences 13, no. 2 (June 1990): 201–33. http://dx.doi.org/10.1017/s0140525x00078407.

Full text source
Abstract:
This article examines the role of attention and automaticity in auditory processing as revealed by event-related potential (ERP) research. An ERP component called the mismatch negativity, generated by the brain's automatic response to changes in repetitive auditory input, reveals that physical features of auditory stimuli are fully processed whether or not they are attended. It also suggests that there exist precise neuronal representations of the physical features of recent auditory stimuli, perhaps the traces underlying acoustic sensory (“echoic”) memory. A mechanism of passive attention switching in response to changes in repetitive input is also implicated. Conscious perception of discrete acoustic stimuli might be mediated by some of the mechanisms underlying another ERP component (N1), one sensitive to stimulus onset and offset. Frequent passive attentional shifts might account for the effect cognitive psychologists describe as “the breakthrough of the unattended” (Broadbent 1982), that is, that even unattended stimuli may be semantically processed, without assuming automatic semantic processing or late selection in selective attention. The processing negativity supports the early-selection theory and may arise from a mechanism for selectively attending to stimuli defined by certain features. This stimulus selection occurs in the form of a matching process in which each input is compared with the “attentional trace,” a voluntarily maintained representation of the task-relevant features of the stimulus to be attended. The attentional mechanism described might underlie the stimulus-set mode of attention proposed by Broadbent. Finally, a model of automatic and attentional processing in audition is proposed that is based mainly on the aforementioned ERP components and some other physiological measures.
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Bhaya-Grossman, Ilina, and Edward F. Chang. "Speech Computations of the Human Superior Temporal Gyrus." Annual Review of Psychology 73, no. 1 (January 4, 2022): 79–102. http://dx.doi.org/10.1146/annurev-psych-022321-035256.

Full text source
Abstract:
Human speech perception results from neural computations that transform external acoustic speech signals into internal representations of words. The superior temporal gyrus (STG) contains the nonprimary auditory cortex and is a critical locus for phonological processing. Here, we describe how speech sound representation in the STG relies on fundamentally nonlinear and dynamical processes, such as categorization, normalization, contextual restoration, and the extraction of temporal structure. A spatial mosaic of local cortical sites on the STG exhibits complex auditory encoding for distinct acoustic-phonetic and prosodic features. We propose that as a population ensemble, these distributed patterns of neural activity give rise to abstract, higher-order phonemic and syllabic representations that support speech perception. This review presents a multi-scale, recurrent model of phonological processing in the STG, highlighting the critical interface between auditory and language systems.
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Wu, Yacen, Feng Lin, Huahua Li, and Zhongli Jiang. "Effects of Visuoauditory Stimuli on the Acoustic Features of Swallowing in the Elderly." Journal of Medical Imaging and Health Informatics 10, no. 10 (October 1, 2020): 2324–29. http://dx.doi.org/10.1166/jmihi.2020.2989.

Full text source
Abstract:
Objective: To investigate the effect of audiovisual stimulation on swallowing sounds in the elderly. Method: Mirror therapy (MT) videos were prepared and divided into AMs, LMs, AFs, and LFs. Sixty videos were randomly selected from the AMs, LMs, AFs, and LFs, and the selected videos were divided into two sections (10 min per section). The control videos were extracted from the film "Le Peuple Migrateur." Finally, the TD (ms), TE (dB), DHE (ms), DHE/TD (%), PI (dB), DPI (ms), FPI (Hz), and PF (Hz) were analyzed. Result: TD under combined visual and auditory stimuli (VAS) was significantly shorter than that under auditory stimuli (AS) alone. Lower TE and PI were observed in AS compared to VAS. DHE/TD and DPI were longer in AS relative to VAS. In addition, a lower FPI was observed in AS than in VAS. Conclusion: VAS can significantly improve swallowing frequencies, speed up swallowing movements, and increase swallowing functional reserve in the elderly. In addition, the decreased swallowing efficacy under auditory stimuli could be reversed by visual stimuli.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Pang, Huadong, Shibo Wang, Xijie Dou, Houguang Liu, Xu Chen, Shanguo Yang, Teng Wang, and Siyang Wang. "A Feature Extraction Method Using Auditory Nerve Response for Collapsing Coal-Gangue Recognition." Applied Sciences 10, no. 21 (October 23, 2020): 7471. http://dx.doi.org/10.3390/app10217471.

Full text source
Abstract:
To make the top-coal caving process intelligent, many data-driven coal-gangue recognition techniques have been proposed recently. However, practical applications of these techniques are hindered by the high background noise and complex environment of underground coal mines. Considering that workers distinguish coal and gangue by listening to the impact sounds on the hydraulic support, we proposed a novel feature extraction method based on an auditory nerve (AN) response model that simulates the human auditory system. First, vibration signals were measured by an acceleration sensor mounted on the back of the hydraulic support's tail beam and converted into acoustic pressure signals. Second, an AN response model with different characteristic frequencies was applied to process these signals, and its output constituted the auditory spectrum used for feature extraction. Meanwhile, a variance-based feature selection method was used to reduce redundant information in the original features. Finally, a support vector machine was employed as the classifier. The proposed method was tested and evaluated on experimental datasets collected from the Tashan Coal Mine in China, and its recognition accuracy was compared with that of other coal-gangue recognition methods based on commonly used features. The results show that our proposed method reaches a superior recognition accuracy of 99.23% and exhibits better generalization ability.
Styles: APA, Harvard, Vancouver, ISO, etc.
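The pipeline in this abstract (auditory-spectrum features, variance-based removal of redundant dimensions, then an SVM classifier) maps naturally onto scikit-learn. The sketch below assumes the AN-response auditory spectrum has already been computed and stands it in with random vectors; the variance threshold and the labels are placeholders, not the paper's settings.

```python
# Sketch of the coal-gangue recognition pipeline under the assumptions above.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # stand-in auditory-spectrum feature vectors
y = rng.integers(0, 2, size=200)  # placeholder labels: 0 = coal, 1 = gangue

clf = make_pipeline(
    VarianceThreshold(threshold=1e-3),  # drop near-constant, redundant features
    StandardScaler(),
    SVC(kernel="rbf"),                  # SVM classifier, as in the abstract
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```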
47

Carrasco, Andres, Trecia A. Brown, and Stephen G. Lomber. "Spectral and Temporal Acoustic Features Modulate Response Irregularities within Primary Auditory Cortex Columns." PLoS ONE 9, no. 12 (December 10, 2014): e114550. http://dx.doi.org/10.1371/journal.pone.0114550.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Mangiamele, L. A., and S. S. Burmeister. "Auditory selectivity for acoustic features that confer species recognition in the tungara frog." Journal of Experimental Biology 214, no. 17 (August 10, 2011): 2911–18. http://dx.doi.org/10.1242/jeb.058362.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Magdziarz, Daniel D., Richard J. Wiet, Elizabeth A. Dinces, and Lois C. Adamiec. "Normal audiologic presentations in patients with acoustic neuroma: An evaluation using strict audiologic parameters." Otolaryngology–Head and Neck Surgery 122, no. 2 (February 2000): 157–62. http://dx.doi.org/10.1016/s0194-5998(00)70232-4.

Full text source
Abstract:
Although several studies have previously reported on patients presenting with “normal” audiologic parameters in acoustic neuroma, the present study is, to our knowledge, the first to exclusively examine in detail cases meeting exceptionally stringent objective audiometric criteria. Of 369 patients with acoustic neuroma operated on between April 1980 and April 1997 by our group, 10 had strictly normal hearing, defined as follows: (1) pure-tone average < 20 dB; (2) speech discrimination score > 90%; and (3) interaural differences < 10 dB at every frequency tested. A high level of audiologic functioning was found to significantly lower the sensitivity of the auditory brainstem response in the detection of acoustic neuroma. Magnetic resonance imaging was the only preoperative test exhibiting 100% sensitivity in this setting. Thus, a high level of clinical suspicion appears warranted in any case involving unexplained unilateral audio-vestibular symptoms, including those instances in which strictly normal hearing parameters exist and are associated with negative auditory brainstem response findings.
Styles: APA, Harvard, Vancouver, ISO, etc.
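The three audiometric criteria quoted in this abstract are concrete enough to express as a small screening check. The function below merely restates those thresholds for illustration; the argument names and the per-frequency list representation of interaural differences are assumptions, not a clinical tool.

```python
# Screen for the abstract's "strictly normal hearing" definition (illustrative only).
def strictly_normal_hearing(pta_db, discrimination_pct, interaural_diffs_db):
    """pta_db: pure-tone average (dB); discrimination_pct: speech
    discrimination score (%); interaural_diffs_db: interaural threshold
    differences (dB) at each tested frequency."""
    return (
        pta_db < 20                                  # criterion 1
        and discrimination_pct > 90                  # criterion 2
        and all(d < 10 for d in interaural_diffs_db) # criterion 3, every frequency
    )

print(strictly_normal_hearing(12, 96, [4, 6, 8, 2]))   # True
print(strictly_normal_hearing(12, 96, [4, 12, 8, 2]))  # False: one 12 dB interaural gap
```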
50

Montes-Lourido, Pilar, Manaswini Kar, Stephen V. David, and Srivatsun Sadagopan. "Neuronal selectivity to complex vocalization features emerges in the superficial layers of primary auditory cortex." PLOS Biology 19, no. 6 (June 16, 2021): e3001299. http://dx.doi.org/10.1371/journal.pbio.3001299.

Full text source
Abstract:
Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages: the thalamus (ventral medial geniculate body, vMGB) and the thalamorecipient (L4) and superficial (L2/3) layers of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons responded only to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that the observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
Styles: APA, Harvard, Vancouver, ISO, etc.
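The per-spike information measure mentioned in this abstract can be illustrated with a plug-in estimate: mutual information between call identity and a neuron's trial spike count, divided by the mean spike count. The sketch below uses simulated Poisson responses and scikit-learn's discrete MI estimator; real analyses require bias correction, so this only shows the quantity being reported, not the paper's method.

```python
# Toy bits-per-spike estimate for a simulated call-feature-selective neuron.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
calls = rng.integers(0, 4, size=500)    # 4 call types, one per trial
rates = np.array([1.0, 1.5, 6.0, 1.2])  # mean spike counts: cell prefers call 2
spikes = rng.poisson(rates[calls])      # spike count per trial

info_bits = mutual_info_score(calls, spikes) / np.log(2)  # nats -> bits
print("bits per trial:", info_bits)
print("bits per spike:", info_bits / spikes.mean())
```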