
Journal articles on the topic "Auditory Acoustic Features"


Consult the 50 best journal articles on the topic "Auditory Acoustic Features".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read the work's abstract online, if the relevant details are available in the metadata.

Browse journal articles from many different disciplines and put together accurate bibliographies.

1

Futamura, Ryohei. "Differences in acoustic characteristics of hitting sounds in baseball games". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 3 (February 1, 2023): 4550–56. http://dx.doi.org/10.3397/in_2022_0654.

Abstract:
In sports, athletes use visual and auditory information to perform full-body movements. Some studies have reported that auditory information is an essential cue for athletes: they use it to predict ball behavior and to determine body movements. However, because athletes use situation-related sounds instinctively, there is no systematic methodology for improving auditory-based competitive ability. Few studies have approached the use of sound in games from the perspective of acoustics, and the functionally relevant acoustic features have not been quantitatively identified. The objective of this study is therefore to clarify the acoustic characteristics of auditory information so that its use in baseball games can be maximized. In particular, to analyze the acoustic features of batted-ball sounds that enhance defensive skills, we conducted acoustic measurements of batted-ball sounds in realistic situations. The results showed that the peak gain values of fly and liner batted balls were greater than those of grounders, and that the frequency components of the hitting sounds also differed among them.
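
The peak-gain and spectral comparisons this abstract reports can be illustrated with a short analysis script. This is a hypothetical sketch, not the paper's procedure: the file names, the 2 kHz band split, and the Welch settings are all assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def peak_level_db(x, ref=1.0):
    """Peak sample magnitude in dB re `ref`."""
    return 20.0 * np.log10(np.max(np.abs(x)) / ref)

def high_band_ratio(x, fs, split_hz=2000.0):
    """Fraction of spectral power above `split_hz`, from a Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    return pxx[f >= split_hz].sum() / pxx.sum()

# Hypothetical recordings of two batted-ball types.
fs, fly = wavfile.read("fly_ball_hit.wav")
_, grounder = wavfile.read("grounder_hit.wav")
for name, x in [("fly", fly), ("grounder", grounder)]:
    x = x.astype(np.float64) / 32768.0  # 16-bit PCM to [-1, 1]
    print(name, peak_level_db(x), high_band_ratio(x, fs))
```
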
2

Rupp, Kyle, Jasmine L. Hect, Madison Remick, Avniel Ghuman, Bharath Chandrasekaran, Lori L. Holt and Taylor J. Abel. "Neural responses in human superior temporal cortex support coding of voice representations". PLOS Biology 20, no. 7 (July 28, 2022): e3001675. http://dx.doi.org/10.1371/journal.pbio.3001675.

Abstract:
The ability to recognize abstract features of voice during auditory perception is an intricate feat of human audition. For the listener, this occurs in near-automatic fashion to seamlessly extract complex cues from a highly variable auditory signal. Voice perception depends on specialized regions of auditory cortex, including superior temporal gyrus (STG) and superior temporal sulcus (STS). However, the nature of voice encoding at the cortical level remains poorly understood. We leverage intracerebral recordings across human auditory cortex during presentation of voice and nonvoice acoustic stimuli to examine voice encoding at the cortical level in 8 patient-participants undergoing epilepsy surgery evaluation. We show that voice selectivity increases along the auditory hierarchy from supratemporal plane (STP) to the STG and STS. Results show accurate decoding of vocalizations from human auditory cortical activity even in the complete absence of linguistic content. These findings show an early, less-selective temporal window of neural activity in the STG and STS followed by a sustained, strongly voice-selective window. Encoding models demonstrate divergence in the encoding of acoustic features along the auditory hierarchy, wherein STG/STS responses are best explained by voice category and acoustics, as opposed to acoustic features of voice stimuli alone. This is in contrast to neural activity recorded from STP, in which responses were accounted for by acoustic features. These findings support a model of voice perception that engages categorical encoding mechanisms within STG and STS to facilitate feature extraction.
3

Bendor, Daniel, and Xiaoqin Wang. "Neural Coding of Periodicity in Marmoset Auditory Cortex". Journal of Neurophysiology 103, no. 4 (April 2010): 1809–22. http://dx.doi.org/10.1152/jn.00281.2009.

Abstract:
Pitch, our perception of how high or low a sound is on a musical scale, crucially depends on a sound's periodicity. If an acoustic signal is temporally jittered so that it becomes aperiodic, the pitch will no longer be perceivable even though other acoustical features that normally covary with pitch are unchanged. Previous electrophysiological studies investigating pitch have typically used only periodic acoustic stimuli, and as such these studies cannot distinguish between a neural representation of pitch and an acoustical feature that only correlates with pitch. In this report, we examine in the auditory cortex of awake marmoset monkeys (Callithrix jacchus) the neural coding of a periodicity's repetition rate, an acoustic feature that covaries with pitch. We first examine if individual neurons show similar repetition rate tuning for different periodic acoustic signals. We next measure how sensitive these neural representations are to the temporal regularity of the acoustic signal. We find that neurons throughout auditory cortex covary their firing rate with the repetition rate of an acoustic signal. However, similar repetition rate tuning across acoustic stimuli and sensitivity to temporal regularity were generally only observed in a small group of neurons found near the anterolateral border of primary auditory cortex, the location of a previously identified putative pitch processing center. These results suggest that although the encoding of repetition rate is a general component of auditory cortical processing, the neural correlate of periodicity is confined to a special class of pitch-selective neurons within the putative pitch processing center of auditory cortex.
4

Merritt, Brandon. "Speech beyond the binary: Some acoustic-phonetic and auditory-perceptual characteristics of non-binary speakers". JASA Express Letters 3, no. 3 (February 2023): 035206. http://dx.doi.org/10.1121/10.0017642.

Abstract:
Speech acoustics research typically assumes speakers are men or women with speech characteristics associated with these two gender categories. Less work has assessed the acoustic-phonetic characteristics of non-binary speakers. This study examined acoustic-phonetic features across adult cisgender (15 men and 15 women) and subgroups of transgender (15 non-binary, 7 transgender men, and 7 transgender women) speakers, and the relations among these features and perceptual ratings of gender identity and masculinity/femininity. Differing acoustic-phonetic features were predictive of confidence in speaker gender and of masculinity/femininity across cisgender and transgender speakers. Non-binary speakers were perceptually rated in a range intermediate between cisgender women and all other groups.
5

Fox, Robert Allen, and Jean Booth. "Research Note on Perceptual Features and Auditory Representations". Perceptual and Motor Skills 65, no. 3 (December 1987): 837–38. http://dx.doi.org/10.2466/pms.1987.65.3.837.

Abstract:
It has been argued that bark-scale transformed formant frequency values more accurately reflect auditory representations of vowels in the perceptual system than do the absolute physical values (in Hertz). In the present study the perceptual features of 15 monophthongal and diphthongal vowels (obtained using multidimensional scaling) were compared with both absolute and bark-scale transformed acoustic vowel measures. Analyses suggest that bark-transformation of the acoustic data does not necessarily produce better predictions of the vowels' perceptual space.
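
For readers who want to reproduce the bark-scale transformation at the heart of this comparison, one common closed-form approximation is sketched below. Traunmüller's 1990 formula postdates this 1987 study, so treat it as an illustrative stand-in rather than the exact transform used; the formant values are made up.

```python
def hz_to_bark(f_hz: float) -> float:
    """Traunmüller's (1990) approximation of the bark scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# Example: absolute vs. bark-transformed formants of a hypothetical vowel.
f1, f2 = 500.0, 1500.0                 # Hz (illustrative values)
print(hz_to_bark(f1), hz_to_bark(f2))  # ~4.92 and ~11.09 bark
```
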
6

Donnelly, Martin J., Carmel A. Daly and Robert J. S. Briggs. "MR imaging features of an intracochlear acoustic schwannoma". Journal of Laryngology & Otology 108, no. 12 (December 1994): 1111–14. http://dx.doi.org/10.1017/s0022215100129056.

Abstract:
We present a very unusual case of an acoustic neuroma involving the left cochlea and internal auditory canal of a 24-year-old man. Clinical suspicion was aroused when the patient presented with a left total sensorineural hearing loss and continuing vertigo. The diagnosis was made pre-operatively with MRI after initial CT scanning was normal. The tumour was removed via a transotic approach. This case report demonstrates the MRI features of an intracochlear schwannoma and emphasizes the importance of MRI in patients with significant auditory and clinical abnormalities but normal CT scans of the relevant region.
7

Buckley, Daniel P., Manuel Diaz Cadiz, Tanya L. Eadie and Cara E. Stepp. "Acoustic Model of Perceived Overall Severity of Dysphonia in Adductor-Type Laryngeal Dystonia". Journal of Speech, Language, and Hearing Research 63, no. 8 (August 10, 2020): 2713–22. http://dx.doi.org/10.1044/2020_jslhr-19-00354.

Abstract:
Purpose: This study is a secondary analysis of existing data. The goal of the study was to construct an acoustic model of the perceived overall severity of dysphonia in adductor laryngeal dystonia (AdLD). We predicted that acoustic measures (a) related to voice and pitch breaks and (b) related to vocal effort would form the primary elements of a model corresponding to auditory-perceptual ratings of overall severity of dysphonia. Method: Twenty inexperienced listeners evaluated the overall severity of dysphonia of speech stimuli from 19 individuals with AdLD. Acoustic features related to primary signs of AdLD (hyperadduction resulting in pitch and voice breaks) and to a potential secondary symptom of AdLD (vocal effort, measures of relative fundamental frequency) were computed from the speech stimuli. Multiple linear regression analysis was applied to construct an acoustic model of the overall severity of dysphonia. Results: The acoustic model included an acoustic feature related to pitch and voice breaks and three acoustic measures derived from relative fundamental frequency; it explained 84.9% of the variance in the auditory-perceptual ratings of overall severity of dysphonia in the speech samples. Conclusions: Auditory-perceptual ratings of overall severity of dysphonia in AdLD were related to acoustic features of primary signs (pitch and voice breaks, hyperadduction associated with laryngeal spasms) and also to acoustic features of vocal effort. This suggests that compensatory vocal effort may be a secondary symptom in AdLD. Future work to generalize this acoustic model to a larger, independent data set is necessary before clinical translation is warranted.
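
The modeling step described above (multiple linear regression from a handful of acoustic measures to mean listener ratings) has the shape sketched below. The features and data are synthetic placeholders, assuming scikit-learn; only the overall structure mirrors the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# 19 speakers x 4 acoustic measures (synthetic stand-ins for the study's
# pitch/voice-break feature and three relative-fundamental-frequency measures).
X = rng.normal(size=(19, 4))
y = X @ np.array([0.8, 0.5, -0.3, 0.2]) + rng.normal(scale=0.2, size=19)

model = LinearRegression().fit(X, y)
print("variance explained (R^2):", model.score(X, y))  # cf. the 84.9% reported
```
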
8

Zong, Nannan, and Meihong Wu. "A Computational Model for Evaluating Transient Auditory Storage of Acoustic Features in Normal Listeners". Sensors 22, no. 13 (July 4, 2022): 5033. http://dx.doi.org/10.3390/s22135033.

Abstract:
Humans are able to detect an instantaneous change in correlation, demonstrating an ability to temporally process extremely rapid changes in interaural configurations. This temporal dynamic is correlated with human listeners’ ability to store acoustic features in a transient auditory manner. The present study investigated whether the ability of transient auditory storage of acoustic features was affected by the interaural delay, which was assessed by measuring the sensitivity for detecting the instantaneous change in correlation for both wideband and narrowband correlated noise with various interaural delays. Furthermore, whether an instantaneous change in correlation between correlated interaural narrowband or wideband noise was detectable when introducing the longest interaural delay was investigated. Then, an auditory computational description model was applied to explore the relationship between wideband and narrowband simulation noise with various center frequencies in the auditory processes of lower-level transient memory of acoustic features. The computing results indicate that low-frequency information dominated perception and was more distinguishable in length than the high-frequency components, and the longest interaural delay for narrowband noise signals was highly correlated with that for wideband noise signals in the dynamic process of auditory perception.
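
The detection task in this abstract hinges on tracking interaural correlation over time. A minimal way to visualize an "instantaneous change in correlation" is a short-time normalized cross-correlation between the two ear signals, sketched below with synthetic noise; the window and hop sizes are arbitrary choices, not the study's parameters.

```python
import numpy as np

def running_interaural_correlation(left, right, win=512, hop=256):
    """Normalized correlation between ear signals in sliding windows."""
    vals = []
    for start in range(0, len(left) - win + 1, hop):
        l = left[start:start + win] - left[start:start + win].mean()
        r = right[start:start + win] - right[start:start + win].mean()
        denom = np.sqrt((l @ l) * (r @ r))
        vals.append((l @ r) / denom if denom > 0 else 0.0)
    return np.asarray(vals)

# Correlated noise whose right channel decorrelates halfway through.
rng = np.random.default_rng(0)
n = 16384
left = rng.normal(size=n)
right = left.copy()
right[n // 2:] = rng.normal(size=n // 2)  # correlation drops from 1 to ~0
print(running_interaural_correlation(left, right).round(2))
```
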
9

Boşnak, Mehmet, and Ayhan Eralp. "Electrophysiological, Histological and Neurochemical Features of Cochlear Nucleus". European Journal of Therapeutics 13, no. 2 (May 1, 2007): 42–49. http://dx.doi.org/10.58600/eurjther.2007-13-2-1383-arch.

Abstract:
The cochlear nucleus (CN) is the first brain centre in the auditory system and is responsible for sorting the neural signals received from the cochlea into parallel processing streams for transmission to the assorted higher auditory nuclei. A commissural connection formed between the cochlear nuclei through direct projections provides the first site in the central auditory system at which binaural information can influence the ascending auditory signal. This restricted review investigates the nature of the commissural projections and the impact of their input upon neurons of the CN through intracellular and extracellular electrophysiological recordings together with both acoustic and electrical stimulation of the contralateral CN. It also covers electrophysiological, histological and neurochemical features of the CN and its commissural projections.
10

Yang, Honghui, Junhao Li, Sheng Shen and Guanghui Xu. "A Deep Convolutional Neural Network Inspired by Auditory Perception for Underwater Acoustic Target Recognition". Sensors 19, no. 5 (March 4, 2019): 1104. http://dx.doi.org/10.3390/s19051104.

Abstract:
Underwater acoustic target recognition (UATR) using ship-radiated noise faces big challenges due to the complex marine environment. In this paper, inspired by neural mechanisms of auditory perception, a new end-to-end deep neural network named the auditory perception inspired Deep Convolutional Neural Network (ADCNN) is proposed for UATR. In the ADCNN model, inspired by the frequency component perception neural mechanism, a bank of multi-scale deep convolution filters is designed to decompose the raw time domain signal into signals with different frequency components. Inspired by the plasticity neural mechanism, the parameters of the deep convolution filters are initialized randomly and then learned and optimized for UATR. Then, max-pooling layers and fully connected layers extract features from each decomposed signal. Finally, in fusion layers, features from each decomposed signal are merged and deep feature representations are extracted to classify underwater acoustic targets. The ADCNN model simulates the deep acoustic information processing structure of the auditory system. Experimental results show that the proposed model can decompose, model and classify ship-radiated noise signals efficiently. It achieves a classification accuracy of 81.96%, the highest in the comparison experiments. These results suggest that the auditory perception inspired deep learning method has encouraging potential to improve the classification performance of UATR.
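
A toy version of the front end this abstract describes (parallel learnable filter banks at several time scales applied to the raw waveform, followed by pooling and fusion) might look like the following PyTorch sketch. The kernel sizes, strides, and channel counts are assumptions, not the ADCNN's published hyperparameters.

```python
import torch
import torch.nn as nn

class MultiScaleFrontEnd(nn.Module):
    def __init__(self, scales=(16, 64, 256), channels=8):
        super().__init__()
        # One randomly initialized, learnable filter bank per time scale.
        self.banks = nn.ModuleList(
            nn.Conv1d(1, channels, kernel_size=k, stride=4, padding=k // 2)
            for k in scales
        )
        self.pool = nn.AdaptiveMaxPool1d(128)

    def forward(self, wav):                  # wav: (batch, 1, samples)
        streams = [self.pool(torch.relu(b(wav))) for b in self.banks]
        return torch.cat(streams, dim=1)     # fused multi-scale feature map

feats = MultiScaleFrontEnd()(torch.randn(2, 1, 16000))
print(feats.shape)                           # torch.Size([2, 24, 128])
```
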
11

Xiong, Feifei, Stefan Goetze, Birger Kollmeier and Bernd T. Meyer. "Exploring Auditory-Inspired Acoustic Features for Room Acoustic Parameter Estimation From Monaural Speech". IEEE/ACM Transactions on Audio, Speech, and Language Processing 26, no. 10 (October 2018): 1809–20. http://dx.doi.org/10.1109/taslp.2018.2843537.

12

Kislyuk, Daniel S., Riikka Möttönen and Mikko Sams. "Visual Processing Affects the Neural Basis of Auditory Discrimination". Journal of Cognitive Neuroscience 20, no. 12 (December 2008): 2175–84. http://dx.doi.org/10.1162/jocn.2008.20152.

Abstract:
The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the “McGurk effect”: The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746–748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100–250 msec by any above-threshold change in a sequence of repetitive sounds. An “odd-ball” sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.
13

Brown, David H., and Richard L. Hyson. "Intrinsic physiological properties underlie auditory response diversity in the avian cochlear nucleus". Journal of Neurophysiology 121, no. 3 (March 1, 2019): 908–27. http://dx.doi.org/10.1152/jn.00459.2018.

Abstract:
Sensory systems exploit parallel processing of stimulus features to enable rapid, simultaneous extraction of information. Mechanisms that facilitate this differential extraction of stimulus features can be intrinsic or synaptic in origin. A subdivision of the avian cochlear nucleus, nucleus angularis (NA), extracts sound intensity information from the auditory nerve and contains neurons that exhibit diverse responses to sound and current injection. NA neurons project to multiple regions ascending the auditory brain stem including the superior olivary nucleus, lateral lemniscus, and avian inferior colliculus, with functional implications for inhibitory gain control and sound localization. Here we investigated whether the diversity of auditory response patterns in NA can be accounted for by variation in intrinsic physiological features. Modeled sound-evoked auditory nerve input was applied to NA neurons with dynamic clamp during in vitro whole cell recording at room temperature. Temporal responses to auditory nerve input depended on variation in intrinsic properties, and the low-threshold K+ current was implicated as a major contributor to temporal response diversity and neuronal input-output functions. An auditory nerve model of acoustic amplitude modulation produced synchrony coding of modulation frequency that depended on the intrinsic physiology of the individual neuron. In Primary-Like neurons, varying low-threshold K+ conductance with dynamic clamp altered temporal modulation tuning bidirectionally. Taken together, these data suggest that intrinsic physiological properties play a key role in shaping auditory response diversity to both simple and more naturalistic auditory stimuli in the avian cochlear nucleus. NEW & NOTEWORTHY This article addresses the question of how the nervous system extracts different information in sounds. Neurons in the cochlear nucleus show diverse responses to acoustic stimuli that may allow for parallel processing of acoustic features. The present studies suggest that diversity in intrinsic physiological features of individual neurons, including levels of a low voltage-activated K+ current, play a major role in regulating the diversity of auditory responses.
14

Pannese, Alessia, Didier Grandjean and Sascha Frühholz. "Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions". Cortex 85 (December 2016): 116–25. http://dx.doi.org/10.1016/j.cortex.2016.10.013.

15

Smotherman, M. S., and P. M. Narins. "Hair cells, hearing and hopping: a field guide to hair cell physiology in the frog". Journal of Experimental Biology 203, no. 15 (August 1, 2000): 2237–46. http://dx.doi.org/10.1242/jeb.203.15.2237.

Abstract:
For more than four decades, hearing in frogs has been an important source of information for those interested in auditory neuroscience, neuroethology and the evolution of hearing. Individual features of the frog auditory system can be found represented in one or many of the other vertebrate classes, but collectively the frog inner ear represents a cornucopia of evolutionary experiments in acoustic signal processing. The mechano-sensitive hair cell, as the focal point of transduction, figures critically in the encoding of acoustic information in the afferent auditory nerve. In this review, we provide a short description of how auditory signals are encoded by the specialized anatomy and physiology of the frog inner ear and examine the role of hair cell physiology and its influence on the encoding of sound in the frog auditory nerve. We hope to demonstrate that acoustic signal processing in frogs may offer insights into the evolution and biology of hearing not only in amphibians but also in reptiles, birds and mammals, including man.
16

Ma, Yanxin, Yifan Zhang, Jiahua Zhu, Ke Xu and Yujin Cai. "A Fast Instantaneous Frequency Estimation for Underwater Acoustic Target Feature Extraction". Journal of Physics: Conference Series 2031, no. 1 (September 1, 2021): 012018. http://dx.doi.org/10.1088/1742-6596/2031/1/012018.

Abstract:
Traditional auditory features merely represent the amplitude characteristics of target signals in the frequency domain. Such features are susceptible to environmental noise, resulting in significant degradation of recognition stability. Inspired by the instantaneous information applied in the speech signal processing field, this paper proposes a feature extraction method using subband-based instantaneous frequency. A fast instantaneous frequency extraction algorithm is proposed based on normalized Gammatone filterbanks. Experiments confirm that the proposed feature extraction method can effectively maintain recognition accuracy under low-SNR conditions while reducing the computation cost.
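
Instantaneous frequency in a single auditory subband is commonly obtained from the analytic signal, as in the sketch below. This is a generic illustration, not the paper's fast algorithm: a Butterworth bandpass stands in for one normalized Gammatone channel, and the test tone is synthetic.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)                 # 1 kHz test tone

# Stand-in for one Gammatone channel centered near 1 kHz.
sos = butter(4, [800, 1200], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, x)

# Instantaneous frequency = derivative of the analytic signal's phase.
phase = np.unwrap(np.angle(hilbert(band)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)    # Hz, per sample
print(inst_freq[fs // 2])                        # ~1000.0 at mid-signal
```
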
17

Winkler, István, and Nelson Cowan. "From Sensory to Long-Term Memory". Experimental Psychology 52, no. 1 (January 2005): 3–20. http://dx.doi.org/10.1027/1618-3169.52.1.3.

Abstract:
Everyday experience tells us that some types of auditory sensory information are retained for long periods of time. For example, we are able to recognize friends by their voice alone or identify the source of familiar noises even years after we last heard the sounds. It is thus somewhat surprising that the results of most studies of auditory sensory memory show that acoustic details, such as the pitch of a tone, fade from memory in ca. 10-15 s. One should, therefore, ask (1) what types of acoustic information can be retained for a longer term, (2) what circumstances allow or help the formation of durable memory records for acoustic details, and (3) how such memory records can be accessed. The present review discusses the results of experiments that used a model of auditory recognition, the auditory memory reactivation paradigm. Results obtained with this paradigm suggest that the brain stores features of individual sounds embedded within representations of acoustic regularities that have been detected for the sound patterns and sequences in which the sounds appeared. Thus, sounds closely linked with their auditory context are more likely to be remembered. The representations of acoustic regularities are automatically activated by matching sounds, enabling object recognition.
18

Al Mahmud, Nahyan, and Shahfida Amjad Munni. "Qualitative Analysis of PLP in LSTM for Bangla Speech Recognition". International Journal of Multimedia & Its Applications 12, no. 5 (October 30, 2020): 1–8. http://dx.doi.org/10.5121/ijma.2020.12501.

Abstract:
The performance of various acoustic feature extraction methods has been compared in this work using a Long Short-Term Memory (LSTM) neural network in a Bangla speech recognition system. The acoustic features are a series of vectors that represent the speech signal; they can be classified into either words or subword units such as phonemes. In this work, linear predictive coding (LPC) is first used as the acoustic vector extraction technique, chosen for its widespread popularity. Other vector extraction techniques, Mel frequency cepstral coefficients (MFCC) and perceptual linear prediction (PLP), which closely resemble the human auditory system, are then used. These feature vectors are trained using the LSTM neural network, and the obtained models of different phonemes are compared with statistical tools, namely the Bhattacharyya distance and the Mahalanobis distance, to investigate the nature of those acoustic features.
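
The comparison pipeline in this abstract (extract frame-level features, model phoneme classes, measure divergence between the models) can be sketched as below. The file name and the frame grouping are hypothetical, librosa is assumed for MFCC extraction, and the Gaussian Bhattacharyya distance is written out from its standard formula; the paper additionally trains LSTMs before comparing models.

```python
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)        # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x 13

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(
        np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))
    )
    return term1 + term2

a, b = mfcc[:100], mfcc[100:200]                       # stand-ins for two phonemes
print(bhattacharyya(a.mean(0), np.cov(a.T), b.mean(0), np.cov(b.T)))
```
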
19

Cheng, Miao, and Ah Chung Tsoi. "Fractal dimension pattern-based multiresolution analysis for rough estimator of speaker-dependent audio emotion recognition". International Journal of Wavelets, Multiresolution and Information Processing 15, no. 05 (August 28, 2017): 1750042. http://dx.doi.org/10.1142/s0219691317500424.

Abstract:
As a general means of expression, audio analysis and recognition have attracted much attention for their wide applications in the real world. Audio emotion recognition (AER) attempts to understand the emotional state of a human from given utterance signals, and has been studied broadly for the development of friendly human-machine interfaces. Though several state-of-the-art auditory methods have been devised for audio recognition, most of them focus on the discriminative use of acoustic features, while the efficiency with which recognition demands are met is ignored. This limits the practical application of AER, where rapid learning of emotion patterns is desired. In order to make prediction of audio emotion practical, the speaker-dependent patterns of audio emotions are learned with multiresolution analysis, and fractal dimension (FD) features are calculated for acoustic feature extraction. This approach is able to efficiently learn the intrinsic characteristics of auditory emotions, while the utterance features are learned from the FDs of each sub-band. Experimental results show the proposed method provides competitive performance for AER.
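
The abstract does not define its fractal-dimension estimator; one widely used choice for 1-D signals is Higuchi's method, sketched below as a plausible stand-in for the per-sub-band FD feature (the paper's exact estimator and sub-band decomposition are not specified here).

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D signal."""
    n = len(x)
    log_lk, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Curve length at scale k and offset m, with Higuchi's
            # normalization (n - 1) / (num_intervals * k), then divided by k.
            dist = np.abs(np.diff(x[idx])).sum()
            lengths.append(dist * (n - 1) / ((len(idx) - 1) * k) / k)
        log_lk.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_lk, 1)   # FD = slope of log-log curve
    return slope

rng = np.random.default_rng(1)
print(higuchi_fd(rng.normal(size=4096)))                  # white noise: ~2
print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 4096))))  # smooth sine: ~1
```
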
20

Cohen, Yale E., Frédéric Theunissen, Brian E. Russ and Patrick Gill. "Acoustic Features of Rhesus Vocalizations and Their Representation in the Ventrolateral Prefrontal Cortex". Journal of Neurophysiology 97, no. 2 (February 2007): 1470–84. http://dx.doi.org/10.1152/jn.00769.2006.

Abstract:
Communication is one of the fundamental components of both human and nonhuman animal behavior. Auditory communication signals (i.e., vocalizations) are especially important in the socioecology of several species of nonhuman primates such as rhesus monkeys. In rhesus, the ventrolateral prefrontal cortex (vPFC) is thought to be part of a circuit involved in representing vocalizations and other auditory objects. To further our understanding of the role of the vPFC in processing vocalizations, we characterized the spectrotemporal features of rhesus vocalizations, compared these features with other classes of natural stimuli, and then related the rhesus-vocalization acoustic features to neural activity. We found that the range of these spectrotemporal features was similar to that found in other ensembles of natural stimuli, including human speech, and identified the subspace of these features that would be particularly informative to discriminate between different vocalizations. In a first neural study, however, we found that the tuning properties of vPFC neurons did not emphasize these particularly informative spectrotemporal features. In a second neural study, we found that a first-order linear model (the spectrotemporal receptive field) is not a good predictor of vPFC activity. The results of these two neural studies are consistent with the hypothesis that the vPFC is not involved in coding the first-order acoustic properties of a stimulus but is involved in processing the higher-order information needed to form representations of auditory objects.
21

Carrasco, Andres, and Stephen G. Lomber. "Neuronal activation times to simple, complex, and natural sounds in cat primary and nonprimary auditory cortex". Journal of Neurophysiology 106, no. 3 (September 2011): 1166–78. http://dx.doi.org/10.1152/jn.00940.2010.

Abstract:
Interactions between living organisms and the environment are commonly regulated by accurate and timely processing of sensory signals. Hence, behavioral response engagement by an organism is typically constrained by the arrival time of sensory information to the brain. While psychophysical response latencies to acoustic information have been investigated, little is known about how variations in neuronal response time relate to sensory signal characteristics. Consequently, the primary objective of the present investigation was to determine the pattern of neuronal activation induced by simple (pure tones), complex (noise bursts and frequency modulated sweeps), and natural (conspecific vocalizations) acoustic signals of different durations in cat auditory cortex. Our analysis revealed three major cortical response characteristics. First, latency measures systematically increase in an antero-dorsal to postero-ventral direction among regions of auditory cortex. Second, complex acoustic stimuli reliably provoke faster neuronal response engagement than simple stimuli. Third, variations in neuronal response time induced by changes in stimulus duration are dependent on acoustic spectral features. Collectively, these results demonstrate that acoustic signals, regardless of complexity, induce a directional pattern of activation in auditory cortex.
22

Zhang, Ke, Yu Su, Jingyu Wang, Sanyu Wang and Yanhua Zhang. "Environment Sound Classification System Based on Hybrid Feature and Convolutional Neural Network". Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 38, no. 1 (February 2020): 162–69. http://dx.doi.org/10.1051/jnwpu/20203810162.

Abstract:
At present, environment sound recognition systems mainly identify environment sounds with deep neural networks and a wide variety of auditory features. It is therefore necessary to analyze which auditory features are more suitable for deep neural network based environment sound classification and recognition (ESCR) systems. In this paper, we chose three sound features based on two widely used filter banks: the Mel and Gammatone filter banks. Subsequently, the hybrid feature MGCC is presented. Finally, a deep convolutional neural network is proposed to verify which features are more suitable for environment sound classification and recognition tasks. The experimental results show that the signal-processing features are better than the spectrogram features in the deep neural network based environmental sound recognition system. Among all the acoustic features, the MGCC feature achieves the best performance. Finally, the MGCC-CNN model proposed in this paper is compared with state-of-the-art environmental sound classification models on the UrbanSound8K dataset. The results show that the proposed model has the best classification accuracy.
23

Ogg, Mattson, Thomas A. Carlson and L. Robert Slevc. "The Rapid Emergence of Auditory Object Representations in Cortex Reflect Central Acoustic Attributes". Journal of Cognitive Neuroscience 32, no. 1 (January 2020): 111–23. http://dx.doi.org/10.1162/jocn_a_01472.

Abstract:
Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
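
Time-resolved decoding of the kind reported here (a classifier evaluated at each poststimulus time point over sensor patterns) follows a standard recipe. Below is a compact sketch with synthetic data and scikit-learn, assuming binary categories rather than the study's 36 tokens; the injected "effect" after sample 25 is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
trials, sensors, times = 120, 32, 50
X = rng.normal(size=(trials, sensors, times))
y = rng.integers(0, 2, size=trials)        # two sound categories
X[y == 1, :, 25:] += 0.5                   # inject a "post-onset" difference

# Cross-validated decoding accuracy at each time point.
acc = [
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(times)
]
print(int(np.argmax(acc)))                 # peaks after the injected onset
```
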
24

Nealen, Paul M., and Marc F. Schmidt. "Distributed and Selective Auditory Representation of Song Repertoires in the Avian Song System". Journal of Neurophysiology 96, no. 6 (December 2006): 3433–47. http://dx.doi.org/10.1152/jn.01130.2005.

Abstract:
For many songbirds, the vocal repertoire constitutes acoustically distinct songs that are flexibly used in various behavioral contexts. To investigate how these different vocalizations are represented in the song neural system, we presented multiple song stimuli while performing extracellular recording in nucleus HVC in adult male song sparrows Melospiza melodia, a species known for its complex vocal repertoire and territorial use of song. We observed robust auditory responses to natural song stimuli in both awake and anesthetized animals. Auditory responses were selective for multiple songs of the bird's own repertoire (BOR) over acoustically modified versions of these stimuli. Selectivity was evident in both awake and anesthetized HVC, in contrast to auditory selectivity in zebra finch HVC, which is apparent only under anesthesia. Presentation of multiple song stimuli at different recording locations demonstrated that stimulus acoustic features and local neuronal tuning both contribute to auditory responsiveness. HVC auditory responsiveness was broadly distributed and nontopographic. Variance in auditory responsiveness was greater among than within HVC recording locations in both anesthetized and awake birds, in contrast to the global nature of auditory representation within zebra finch HVC. To assess the spatial consistency of auditory representation within HVC, we measured the repeatability with which ensembles of BOR songs were represented across the nucleus. Auditory response ranks to different songs were more consistent across recording locations in awake than in anesthetized animals. This spatial reliability of auditory responsiveness suggests that sound stimulus acoustic features contribute relatively more to auditory responsiveness in awake than in anesthetized animals.
25

Mobley, Frank. "Classification of SUAS propellers with auditory feature extraction methods". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 266, no. 2 (May 25, 2023): 102–13. http://dx.doi.org/10.3397/nc_2023_0014.

Abstract:
A measurement of one stock and three custom-designed propellers was conducted with the United States Air Force Academy. The measurement consisted of a constant-radius arc and a radial array to examine acoustic attributes as a function of distance and angle. During the measurement activity, the experimenters observed that each propeller possessed different audio attributes that assisted in distinguishing the stock propeller from any of the custom propellers. To adequately explore attributes beyond the propeller's A-weighted level as a function of thrust, timbre and sound quality analyses were conducted. These auditory feature extraction methods were combined with a fractional-octave analysis into a database for machine learning classification analysis. The new baseline propeller is distinguished by acoustic roughness alone, but the other blade designs require additional timbre features to be segregated from the stock propeller.
26

Shen, Sheng, Honghui Yang, Xiaohui Yao, Junhao Li, Guanghui Xu and Meiping Sheng. "Ship Type Classification by Convolutional Neural Networks with Auditory-Like Mechanisms". Sensors 20, no. 1 (January 1, 2020): 253. http://dx.doi.org/10.3390/s20010253.

Abstract:
Ship type classification with radiated noise helps monitor the noise of shipping around the hydrophone deployment site. This paper introduces a convolutional neural network with several auditory-like mechanisms for ship type classification. The proposed model mainly includes a cochlea model and an auditory center model. In the cochlea model, acoustic signal decomposition at the basilar membrane is implemented by a time convolutional layer with auditory filters and dilated convolutions, and the transformation of neural patterns at the hair cells is modeled by a time-frequency conversion layer to extract auditory features. In the auditory center model, auditory features are first selectively emphasized in a supervised manner; spectro-temporal patterns are then extracted by a deep architecture with multistage auditory mechanisms. The whole model is optimized with a ship type classification objective to mimic the plasticity of the auditory system. Compared with an earlier auditory-inspired convolutional neural network, the contributions include improvements in the dilated convolutions, the deep architecture and the target layer. The proposed model can extract auditory features from a raw hydrophone signal and identify ship types under different working conditions. The model achieved a classification accuracy of 87.2% on four ship types plus ocean background noise.
27

Ding, Nai, and Jonathan Z. Simon. "Neural coding of continuous speech in auditory cortex during monaural and dichotic listening". Journal of Neurophysiology 107, no. 1 (January 2012): 78–89. http://dx.doi.org/10.1152/jn.00297.2011.

Abstract:
The cortical representation of the acoustic features of continuous speech is the foundation of speech perception. In this study, noninvasive magnetoencephalography (MEG) recordings are obtained from human subjects actively listening to spoken narratives, in both simple and cocktail party-like auditory scenes. By modeling how acoustic features of speech are encoded in ongoing MEG activity as a spectrotemporal response function, we demonstrate that the slow temporal modulations of speech in a broad spectral region are represented bilaterally in auditory cortex by a phase-locked temporal code. For speech presented monaurally to either ear, this phase-locked response is always more faithful in the right hemisphere, but with a shorter latency in the hemisphere contralateral to the stimulated ear. When different spoken narratives are presented to each ear simultaneously (dichotic listening), the resulting cortical neural activity precisely encodes the acoustic features of both of the spoken narratives, but slightly weakened and delayed compared with the monaural response. Critically, the early sensory response to the attended speech is considerably stronger than that to the unattended speech, demonstrating top-down attentional gain control. This attentional gain is substantial even during the subjects' very first exposure to the speech mixture and therefore largely independent of knowledge of the speech content. Together, these findings characterize how the spectrotemporal features of speech are encoded in human auditory cortex and establish a single-trial-based paradigm to study the neural basis underlying the cocktail party phenomenon.
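
The spectrotemporal response function mentioned above is, in its simplest one-band form, a lagged linear filter estimated by regularized regression. The sketch below recovers a synthetic temporal response function with ridge regression; the sampling rate, lag count, and "true" kernel are all invented for illustration and are not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 100, 3000                        # 100 Hz MEG-like sampling, 30 s
env = rng.normal(size=n)                 # stand-in for the speech envelope
true_trf = np.exp(-np.arange(20) / 5.0)  # a decaying response kernel
meg = np.convolve(env, true_trf)[:n] + rng.normal(scale=0.5, size=n)

# Design matrix of lagged envelope copies; zero out wrap-around samples.
lags = 20
X = np.stack([np.roll(env, k) for k in range(lags)], axis=1)
X[:lags] = 0.0

ridge = 1.0
trf = np.linalg.solve(X.T @ X + ridge * np.eye(lags), X.T @ meg)
print(np.corrcoef(trf, true_trf)[0, 1])  # recovered kernel tracks the true one
```
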
28

Kumar, Sukhbinder, Heidi M. Bonnici, Sundeep Teki, Trevor R. Agus, Daniel Pressnitzer, Eleanor A. Maguire and Timothy D. Griffiths. "Representations of specific acoustic patterns in the auditory cortex and hippocampus". Proceedings of the Royal Society B: Biological Sciences 281, no. 1791 (September 22, 2014): 20141000. http://dx.doi.org/10.1098/rspb.2014.1000.

Abstract:
Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be ‘decoded’ from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.
29

Dehaene-Lambertz, G. "Cerebral Specialization for Speech and Non-Speech Stimuli in Infants". Journal of Cognitive Neuroscience 12, no. 3 (May 2000): 449–60. http://dx.doi.org/10.1162/089892900562264.

Abstract:
Early cerebral specialization and lateralization for auditory processing in 4-month-old infants was studied by recording high-density evoked potentials to acoustical and phonetic changes in a series of repeated stimuli (either tones or syllables). Mismatch responses to these stimuli exhibit a distinct topography suggesting that different neural networks within the temporal lobe are involved in the perception and representation of the different features of an auditory stimulus. These data confirm that specialized modules are present within the auditory cortex very early in development. However, both for syllables and continuous tones, higher voltages were recorded over the left hemisphere than over the right with no significant interaction of hemisphere by type of stimuli. This suggests that there is no greater left hemisphere involvement in phonetic processing than in acoustic processing during the first months of life.
30

Shidlovskaya, Tetiana A., Tamara V. Shidlovskaya, Kateryna Yu Kureneva, Nikolay S. Kozak and Tetiana V. Shevtsova. "Peculiarities of the acoustic reflex registration thresholds in relation to the parameters of the thresholds tone audiometry in patients with acute combat trauma". OTORHINOLARYNGOLOGY, No6(5) 2022 (January 30, 2023): 39–43. http://dx.doi.org/10.37219/2528-8253-2022-6-39.

Abstract:
Aim: to study the threshold relations between indicators of acoustic impedance measurement and threshold tone audiometry in military personnel who have taken part in hostilities. Materials and methods: the results of audiometric and impedancemetric examinations of 43 servicemen aged 18 to 42 years who received acute combat trauma in the period March-August 2022, and of 20 patients with severe sensorineural deafness (stages 2, 3 and 4 according to the international classification) of non-acoustic-trauma genesis as a comparison group. Results and discussion: in the examination of the 43 soldiers, it was found that even with significant sensorineural hearing loss on both sides (stages 3 and 4 according to the international classification), a full acoustic reflex was registered during ipsilateral and contralateral stimulation. In the impedancemetric examination of the patients in the comparison group, it was found that in most patients the acoustic reflex was not registered at all. Conclusions: 1. In acute combat acoustic trauma, it is possible to register the acoustic middle-ear-muscle reflex despite severe hearing loss, which may be caused by damage to the receptor part of the auditory analyzer with the accompanying phenomenon of accelerated loudness growth, by a central mechanism, or by a combination of central and peripheral mechanisms. 2. Dissociation of acoustic middle-ear-muscle reflex thresholds and threshold tone audiometry values is a characteristic feature of acoustic-trauma damage to the auditory analyzer. 3. The identified features, such as registration of the acoustic middle-ear-muscle reflex in severe hearing loss and dissociation of acoustic reflex and tone audiometry thresholds, may be useful for further study of the pathogenesis of acute combat trauma, which in turn will contribute to improving the quality of diagnosis and finding ways to correct auditory system disorders in individuals who have suffered as a result of hostilities.
31

Itatani, Naoya, and Georg M. Klump. "Animal models for auditory streaming". Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160112. http://dx.doi.org/10.1098/rstb.2016.0112.

Abstract:
Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.
32

Chiu, Yi-Fang, Amy Neel and Travis Loux. "Exploring the Acoustic Perceptual Relationship of Speech in Parkinson's Disease". Journal of Speech, Language, and Hearing Research 64, no. 5 (May 11, 2021): 1560–70. http://dx.doi.org/10.1044/2021_jslhr-20-00610.

Abstract:
Purpose: Auditory perceptual judgments are commonly used to diagnose dysarthria and assess treatment progress. The purpose of the study was to examine the acoustic underpinnings of perceptual speech abnormalities in individuals with Parkinson's disease (PD). Method: Auditory perceptual judgments were obtained from sentences produced by 13 speakers with PD and five healthy older adults. Twenty young listeners rated overall ease of understanding, articulatory precision, voice quality, and prosodic adequacy on a visual analog scale. Acoustic measures associated with the speech subsystems of articulation, phonation, and prosody were obtained, including second formant transitions, articulation rate, cepstral and spectral measures of voice, and pitch variations. Regression analyses were performed to assess the relationships between perceptual judgments and acoustic variables. Results: Perceptual impressions of Parkinsonian speech were related to combinations of several acoustic variables. Approximately 36%–49% of the variance in the perceptual ratings was explained by the acoustic measures, indicating a modest acoustic-perceptual relationship. Conclusions: The relationships between perceptual ratings and acoustic signals in Parkinsonian speech are multifactorial and involve a variety of acoustic features simultaneously. The modest acoustic-perceptual relationships, however, suggest that future work is needed to further examine the acoustic bases of perceptual judgments in dysarthria.
33

Schöneich, Stefan, Konstantinos Kostarakos and Berthold Hedwig. "An auditory feature detection circuit for sound pattern recognition". Science Advances 1, no. 8 (September 2015): e1500325. http://dx.doi.org/10.1126/sciadv.1500325.

Abstract:
From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
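
The coincidence-detector mechanism described above can be caricatured in a few lines: a detector responds when the direct response to a sound pulse falls within a short window of an internally delayed copy of an earlier response. The delay, window, and pulse trains below are invented values; note that this toy, unlike the full cricket circuit, also passes periods that evenly divide the delay.

```python
import numpy as np

def coincidence_count(pulse_times, delay, window=2.0):
    """Count delayed responses that coincide with a direct response (ms)."""
    direct = np.asarray(pulse_times)
    delayed = direct + delay
    return sum(np.any(np.abs(direct - d) < window) for d in delayed)

delay = 20.0                              # ms, the intrinsic neural delay
for period in (10.0, 20.0, 40.0):         # ms between sound pulses
    pulses = np.arange(0.0, 200.0, period)
    print(period, coincidence_count(pulses, delay))
# The 20 ms pattern matching the delay yields many coincidences;
# the 40 ms pattern yields none.
```
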
34

Ishikawa, Norihiko, Atsushi Komatsuzaki and Hisashi Tokano. "Meningioma of the internal auditory canal with extension into the vestibule". Journal of Laryngology & Otology 113, no. 12 (December 1999): 1101–3. http://dx.doi.org/10.1017/s0022215100158001.

Abstract:
Meningiomas account for approximately 18 to 19 per cent of all brain tumours. Although they can arise in numerous locations, meningiomas of the internal auditory canal (IAC) are rare. Most tumours that originate in the IAC are schwannomas of the VIIIth cranial nerve (acoustic neuromas). We report a case of a meningioma which appears to originate from the IAC and extends into the vestibule. The clinical findings and the radiographic features of meningiomas of the IAC are similar to those of acoustic neuromas. Pre-operative differentiation between acoustic neuromas and meningiomas of the IAC may be difficult.
35

Wang, Xingmei, Jiaxiang Meng, Yangtao Liu, Ge Zhan and Zhaonan Tian. "Self-supervised acoustic representation learning via acoustic-embedding memory unit modified space autoencoder for underwater target recognition". Journal of the Acoustical Society of America 152, no. 5 (November 2022): 2905–15. http://dx.doi.org/10.1121/10.0015138.

Abstract:
Given the expense of annotating high-quality signals obtained from passive sonars and the weak generalization ability of any single feature in the ocean, this paper proposes self-supervised acoustic representation learning under an acoustic-embedding memory unit modified space autoencoder (ASAE) and applies it to the underwater target recognition task. In the manner of an animal-like auditory system, the first step is to design a self-supervised representation learning method called the space autoencoder (SAE), which merges the Mel filter bank (FBank), with its acoustic discrimination, and the Gammatone filter bank (GBank), with its anti-noise robustness, into the SAE spectrogram (SAE Spec). Meanwhile, because SAE Spec carries poor high-level semantic information, an acoustic-embedding memory unit (AEMU) is introduced as an adversarial enhancement strategy. During the auxiliary task, more negative samples are added to an improved contrastive loss function to obtain adversarially enhanced features called the ASAE spectrogram (ASAE Spec). Ultimately, comprehensive contrast experiments and ablation experiments on two underwater datasets show that ASAE Spec improves accuracy, convergence rate, and anti-noise robustness by more than 0.96% over other mainstream acoustic features. The results prove the potential value of ASAE in practical applications.
36

Tran Ba Huy, Patrice, Jean Michel Hassan, Michel Wassef, Jacqueline Mikol and Claude Thurel. "Acoustic Schwannoma Presenting as a Tumor of the External Auditory Canal". Annals of Otology, Rhinology & Laryngology 96, no. 4 (July 1987): 415–18. http://dx.doi.org/10.1177/000348948709600413.

Abstract:
An acoustic neurinoma involving the internal auditory canal, the vestibule, the cochlea, the middle ear, and extending into the cerebellopontine angle and the external auditory canal, is described in a 56-year-old woman. An initial episode of vertigo was followed by a 27-year history of progressive unilateral hearing loss leading to complete deafness and areflexia with central compensation. The tumor was removed by a two-step surgical procedure, and the histologic features were those of a schwannoma.
37

Woods, David L., G. Christopher Stecker, Teemu Rinne, Timothy J. Herron, Anthony D. Cate, E. William Yund, Isaac Liao and Xiaojian Kang. "Functional Maps of Human Auditory Cortex: Effects of Acoustic Features and Attention". PLoS ONE 4, no. 4 (April 13, 2009): e5183. http://dx.doi.org/10.1371/journal.pone.0005183.

38

Schlesinger, Joseph J., Sarah H. Baum Miller, Katherine Nash, Marissa Bruce, Daniel Ashmead, Matthew S. Shotwell, Judy R. Edworthy, Mark T. Wallace and Matthew B. Weinger. "Acoustic features of auditory medical alarms—An experimental study of alarm volume". Journal of the Acoustical Society of America 143, no. 6 (June 2018): 3688–97. http://dx.doi.org/10.1121/1.5043396.

39

Gold, Rinat, Pamela Butler, Nadine Revheim, David I. Leitman, John A. Hansen, Ruben C. Gur, Joshua T. Kantrowitz et al. "Auditory Emotion Recognition Impairments in Schizophrenia: Relationship to Acoustic Features and Cognition". American Journal of Psychiatry 169, no. 4 (April 2012): 424–32. http://dx.doi.org/10.1176/appi.ajp.2011.11081230.

40

Kurtcan, S., A. Alkan, R. Kilicarslan, A. A. Bakan, H. Toprak, A. Aralasmak, F. Aksoy and A. Kocer. "Auditory Pathway Features Determined by DTI in Subjects with Unilateral Acoustic Neuroma". Clinical Neuroradiology 26, no. 4 (March 27, 2015): 439–44. http://dx.doi.org/10.1007/s00062-015-0385-z.

41

Cotana, Franco, Francesco Asdrubali, Giulio Arcangeli, Sergio Luzzi, Giampietro Ricci, Lucia Busa, Michele Goretti et al. "Extra-Auditory Effects from Noise Exposure in Schools: Results of Nine Italian Case Studies". Acoustics 5, no. 1 (24.02.2023): 216–41. http://dx.doi.org/10.3390/acoustics5010013.

Abstract:
Noise exposure may cause auditory and extra-auditory effects. School teachers and students are exposed to high noise levels, which affect perceptual-cognitive and neurobehavioral functioning and, in turn, teaching conditions and student school performance. A Protocol was defined, and parameters to be investigated were identified, for the acoustic characterization of unoccupied and occupied school environments, the assessment of users by means of questionnaires completed by teachers and students, and vocal effort evaluation. Classrooms, laboratories, auditoriums, gymnasiums, common areas, canteens and outdoor areas were analysed in terms of acoustic features and identification of the origin of noise. The Protocol was tested in three kindergartens, three primary schools and three secondary schools located in Rome, Florence and Perugia. Results of the nine case studies are presented, including comparisons of objective and subjective investigations. Generally, the acoustic performance of the spaces under investigation does not meet the requirements of current Italian legislation. In particular, student activity produces high noise levels in laboratories, gymnasiums, and canteens. Students report that noise mainly causes loss of concentration, fatigue, boredom, and headache. The outcomes of this research will be the starting point for defining strategies and solutions for noise control and mitigation in schools and for drafting guidelines for acoustical school design.
42

Sauvé, Sarah A., Jeremy Marozeau and Benjamin Rich Zendel. "The effects of aging and musicianship on the use of auditory streaming cues". PLOS ONE 17, no. 9 (22.09.2022): e0274631. http://dx.doi.org/10.1371/journal.pone.0274631.

Abstract:
Auditory stream segregation, or separating sounds into their respective sources and tracking them over time, is a fundamental auditory ability. Previous research has separately explored the impacts of aging and musicianship on the ability to separate and follow auditory streams. The current study evaluated the simultaneous effects of age and musicianship on auditory streaming induced by three physical features: intensity, spectral envelope and temporal envelope. In the first study, older and younger musicians and non-musicians with normal hearing identified deviants in a four-note melody interleaved with distractors that were more or less similar to the melody in terms of intensity, spectral envelope and temporal envelope. In the second study, older and younger musicians and non-musicians participated in a dissimilarity rating paradigm with pairs of melodies that differed along the same three features. Results suggested that auditory streaming skills are maintained in older adults, but that older adults rely on intensity more than younger adults do. Musicianship, in turn, was associated with increased sensitivity to spectral and temporal envelope, acoustic features that are typically less effective for stream segregation, particularly in older adults.
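For readers who want to quantify the three cues above, a minimal sketch follows, assuming numpy/scipy. The specific measures (overall RMS level, spectral centroid, Hilbert envelope) are illustrative stand-ins, since the abstract does not specify how the authors parameterized their stimuli.

    import numpy as np
    from scipy.signal import hilbert

    def intensity_db(x):
        # Overall RMS level in dB (re. full scale): a simple intensity cue.
        rms = np.sqrt(np.mean(x ** 2))
        return 20 * np.log10(rms + 1e-12)

    def spectral_centroid_hz(x, sr):
        # Center of mass of the magnitude spectrum: a crude one-number
        # summary of spectral envelope.
        mag = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
        return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

    def temporal_envelope(x):
        # Instantaneous amplitude envelope via the Hilbert transform.
        return np.abs(hilbert(x))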
43

Näätänen, Risto. "The role of attention in auditory information processing as revealed by event-related potentials and other brain measures of cognitive function". Behavioral and Brain Sciences 13, no. 2 (June 1990): 201–33. http://dx.doi.org/10.1017/s0140525x00078407.

Abstract:
This article examines the role of attention and automaticity in auditory processing as revealed by event-related potential (ERP) research. An ERP component called the mismatch negativity, generated by the brain's automatic response to changes in repetitive auditory input, reveals that physical features of auditory stimuli are fully processed whether or not they are attended. It also suggests that there exist precise neuronal representations of the physical features of recent auditory stimuli, perhaps the traces underlying acoustic sensory (“echoic”) memory. A mechanism of passive attention switching in response to changes in repetitive input is also implicated. Conscious perception of discrete acoustic stimuli might be mediated by some of the mechanisms underlying another ERP component (N1), one sensitive to stimulus onset and offset. Frequent passive attentional shifts might account for the effect cognitive psychologists describe as “the breakthrough of the unattended” (Broadbent 1982), that is, that even unattended stimuli may be semantically processed, without assuming automatic semantic processing or late selection in selective attention. The processing negativity supports the early-selection theory and may arise from a mechanism for selectively attending to stimuli defined by certain features. This stimulus selection occurs in the form of a matching process in which each input is compared with the “attentional trace,” a voluntarily maintained representation of the task-relevant features of the stimulus to be attended. The attentional mechanism described might underlie the stimulus-set mode of attention proposed by Broadbent. Finally, a model of automatic and attentional processing in audition is proposed that is based mainly on the aforementioned ERP components and some other physiological measures.
44

Bhaya-Grossman, Ilina, and Edward F. Chang. "Speech Computations of the Human Superior Temporal Gyrus". Annual Review of Psychology 73, no. 1 (4.01.2022): 79–102. http://dx.doi.org/10.1146/annurev-psych-022321-035256.

Abstract:
Human speech perception results from neural computations that transform external acoustic speech signals into internal representations of words. The superior temporal gyrus (STG) contains the nonprimary auditory cortex and is a critical locus for phonological processing. Here, we describe how speech sound representation in the STG relies on fundamentally nonlinear and dynamical processes, such as categorization, normalization, contextual restoration, and the extraction of temporal structure. A spatial mosaic of local cortical sites on the STG exhibits complex auditory encoding for distinct acoustic-phonetic and prosodic features. We propose that as a population ensemble, these distributed patterns of neural activity give rise to abstract, higher-order phonemic and syllabic representations that support speech perception. This review presents a multi-scale, recurrent model of phonological processing in the STG, highlighting the critical interface between auditory and language systems.
45

Wu, Yacen, Feng Lin, Huahua Li and Zhongli Jiang. "Effects of Visuoauditory Stimuli on the Acoustic Features of Swallowing in the Elderly". Journal of Medical Imaging and Health Informatics 10, no. 10 (1.10.2020): 2324–29. http://dx.doi.org/10.1166/jmihi.2020.2989.

Abstract:
Objective: To investigate the effect of audiovisual stimulation on swallowing sounds in the elderly. Method: Mirror therapy (MT) videos were prepared and divided into AMs, LMs, AFs, and LFs. Sixty videos were randomly selected from AMs, LMs, AFs, and LFs. The selected videos were divided into two sections (10 min per section). The control videos were extracted from the film "Le Peuple Migrateur." Finally, the TD (ms), TE (dB), DHE (ms), DHE/TD (%), PI (dB), DPI (ms), FPI (Hz), and PF (Hz) were analyzed. Results: TD under combined visual and auditory stimuli (VAS) was significantly shorter than that under auditory stimuli alone (AS). TE and PI were lower under AS than under VAS. DHE/TD and DPI were longer under AS relative to VAS. In addition, a lower FPI was observed under AS than under VAS. Conclusion: VAS can significantly improve swallowing frequencies, speed up swallowing movements and increase swallowing functional reserve in the elderly. Moreover, the decreased swallowing efficacy under auditory stimuli alone could be reversed by adding visual stimuli.
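The abstract does not spell out how its acoustic parameters are computed, so the sketch below is only an illustrative assumption of how two of them, a peak intensity (PI, dB) and a peak frequency (PF, Hz), might be measured from a recorded swallowing sound.

    import numpy as np

    def peak_intensity_db(x, frame=1024, hop=512):
        # Maximum short-time RMS level in dB over the recording
        # (frame/hop sizes are arbitrary illustrative choices).
        levels = [
            20 * np.log10(np.sqrt(np.mean(x[i:i + frame] ** 2)) + 1e-12)
            for i in range(0, len(x) - frame, hop)
        ]
        return max(levels)

    def peak_frequency_hz(x, sr):
        # Frequency of the largest-magnitude bin in the spectrum.
        mag = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
        return float(freqs[int(np.argmax(mag))])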
46

Pang, Huadong, Shibo Wang, Xijie Dou, Houguang Liu, Xu Chen, Shanguo Yang, Teng Wang and Siyang Wang. "A Feature Extraction Method Using Auditory Nerve Response for Collapsing Coal-Gangue Recognition". Applied Sciences 10, no. 21 (23.10.2020): 7471. http://dx.doi.org/10.3390/app10217471.

Abstract:
To make the top-coal caving process intelligent, many data-driven coal-gangue recognition techniques have been proposed recently. However, practical applications of these techniques are hindered by the high background noise and complex environment of underground coal mines. Considering that workers distinguish coal and gangue by listening to the impact sounds on the hydraulic support, we proposed a novel feature extraction method based on an auditory nerve (AN) response model simulating the human auditory system. First, vibration signals were measured by an acceleration sensor mounted on the back of the hydraulic support's tail beam and then converted into acoustic pressure signals. Second, an AN response model with different characteristic frequencies was applied to process these signals, and its output constituted the auditory spectrum used for feature extraction. Meanwhile, a variance-based feature selection method was used to reduce redundant information in the original features. Finally, a support vector machine was employed as the classifier. The proposed method was tested and evaluated on experimental datasets collected from the Tashan Coal Mine in China, and its recognition accuracy was compared with other coal-gangue recognition methods based on commonly used features. The results show that our proposed method reaches a superior recognition accuracy of 99.23% and exhibits better generalization ability.
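As a rough sketch of the classification stage described above (variance-based feature selection followed by a support vector machine), the snippet below uses scikit-learn. The placeholder feature matrix, variance threshold, and SVM settings are assumptions; the auditory-spectrum feature extraction from the AN response model is abstracted away as a precomputed matrix X.

    import numpy as np
    from sklearn.feature_selection import VarianceThreshold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: (n_samples, n_features) auditory-spectrum features; y: 0 = coal, 1 = gangue.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))          # placeholder data, not real measurements
    y = rng.integers(0, 2, size=200)

    clf = make_pipeline(
        VarianceThreshold(threshold=1e-3),  # drop near-constant (redundant) features
        StandardScaler(),
        SVC(kernel="rbf", C=1.0),
    )
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))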
47

Carrasco, Andres, Trecia A. Brown and Stephen G. Lomber. "Spectral and Temporal Acoustic Features Modulate Response Irregularities within Primary Auditory Cortex Columns". PLoS ONE 9, no. 12 (10.12.2014): e114550. http://dx.doi.org/10.1371/journal.pone.0114550.

48

Mangiamele, L. A., and S. S. Burmeister. "Auditory selectivity for acoustic features that confer species recognition in the tungara frog". Journal of Experimental Biology 214, no. 17 (10.08.2011): 2911–18. http://dx.doi.org/10.1242/jeb.058362.

49

Magdziarz, Daniel D., Richard J. Wiet, Elizabeth A. Dinces and Lois C. Adamiec. "Normal audiologic presentations in patients with acoustic neuroma: An evaluation using strict audiologic parameters". Otolaryngology–Head and Neck Surgery 122, no. 2 (February 2000): 157–62. http://dx.doi.org/10.1016/s0194-5998(00)70232-4.

Abstract:
Although several studies have previously reported on patients presenting with "normal" audiologic parameters in acoustic neuroma, the present study is, to our knowledge, the first to exclusively examine in detail cases involving exceptionally stringent objective audiometric features. Of 369 patients with acoustic neuroma who were operated on between April 1980 and April 1997 by our group, 10 had strictly normal hearing, defined as follows: (1) pure-tone average < 20 dB; (2) speech discrimination score > 90%; and (3) interaural differences < 10 dB at every frequency tested. A high level of audiologic functioning was found to significantly lower the sensitivity of the auditory brainstem response in the detection of acoustic neuroma. Magnetic resonance imaging was the only preoperative test exhibiting 100% sensitivity in this setting. Thus, a high level of clinical suspicion appears warranted in any case involving unexplained unilateral audio-vestibular symptoms, including those instances in which strictly normal hearing parameters exist and are associated with negative auditory brainstem response findings.
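The study's three-part definition of strictly normal hearing translates directly into a simple screening predicate. The sketch below is purely illustrative; the parameter names are assumptions.

    def strictly_normal_hearing(pta_db, speech_discrimination_pct, interaural_diffs_db):
        # pta_db: pure-tone average in dB; speech_discrimination_pct: speech
        # discrimination score in percent; interaural_diffs_db: iterable of
        # per-frequency interaural differences in dB.
        return (
            pta_db < 20
            and speech_discrimination_pct > 90
            and all(d < 10 for d in interaural_diffs_db)
        )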
50

Montes-Lourido, Pilar, Manaswini Kar, Stephen V. David and Srivatsun Sadagopan. "Neuronal selectivity to complex vocalization features emerges in the superficial layers of primary auditory cortex". PLOS Biology 19, no. 6 (16.06.2021): e3001299. http://dx.doi.org/10.1371/journal.pbio.3001299.

Abstract:
Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages: the thalamus (ventral medial geniculate body, vMGB) and the thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons responded only to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that the observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
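To make the "information per spike" measure concrete, the sketch below shows one plausible way to estimate the mutual information between call type and a neuron's spike count and normalize it by the mean spike count. This is an illustrative assumption (a plug-in estimator with no bias correction), not the authors' analysis.

    import numpy as np

    def bits_per_spike(call_ids, spike_counts):
        # call_ids: (n_trials,) integer call-type labels.
        # spike_counts: (n_trials,) spike counts in a response window.
        calls = np.unique(call_ids)
        counts = np.unique(spike_counts)
        joint = np.zeros((len(calls), len(counts)))
        for i, c in enumerate(calls):
            for j, k in enumerate(counts):
                joint[i, j] = np.mean((call_ids == c) & (spike_counts == k))
        p_call = joint.sum(axis=1, keepdims=True)    # P(call)
        p_count = joint.sum(axis=0, keepdims=True)   # P(count)
        nz = joint > 0
        mi = np.sum(joint[nz] * np.log2(joint[nz] / (p_call @ p_count)[nz]))
        return mi / (np.mean(spike_counts) + 1e-12)  # bits per spike

For small trial counts, a bias-corrected estimator (for example, shuffle subtraction) would be needed; the per-spike normalization is the point of this sketch.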