Academic literature on the topic 'Auditory Acoustic Features'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Auditory Acoustic Features.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Auditory Acoustic Features"

1

Futamura, Ryohei. "Differences in acoustic characteristics of hitting sounds in baseball games." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 3 (February 1, 2023): 4550–56. http://dx.doi.org/10.3397/in_2022_0654.

Full text
Abstract:
In sports, athletes use visual and auditory information to perform full-body exercises. Some studies have reported that auditory information is an essential cue for athletes: they utilize it to predict ball behavior and determine body movements. However, because athletes instinctively use situation-related sounds, there is no systematic methodology for improving auditory-based competitive ability. Few studies have attempted to approach the utilization of sound in games from the perspective of acoustics, and the functional acoustical features have not been quantitatively revealed. Therefore, the objective of this study is to clarify the acoustical characteristics of auditory information to maximize its utilization in baseball games. In particular, to analyze the acoustical features of batted-ball sounds that enhance defensive skills, we conducted acoustic measurements of batted-ball sounds in realistic situations. The results showed that the peak gain values of fly and liner batted balls were greater than those of grounders, and the frequency components included in the hitting sound also differed among them.
APA, Harvard, Vancouver, ISO, and other styles
2

Rupp, Kyle, Jasmine L. Hect, Madison Remick, Avniel Ghuman, Bharath Chandrasekaran, Lori L. Holt, and Taylor J. Abel. "Neural responses in human superior temporal cortex support coding of voice representations." PLOS Biology 20, no. 7 (July 28, 2022): e3001675. http://dx.doi.org/10.1371/journal.pbio.3001675.

Full text
Abstract:
The ability to recognize abstract features of voice during auditory perception is an intricate feat of human audition. For the listener, this occurs in near-automatic fashion to seamlessly extract complex cues from a highly variable auditory signal. Voice perception depends on specialized regions of auditory cortex, including superior temporal gyrus (STG) and superior temporal sulcus (STS). However, the nature of voice encoding at the cortical level remains poorly understood. We leverage intracerebral recordings across human auditory cortex during presentation of voice and nonvoice acoustic stimuli to examine voice encoding at the cortical level in 8 patient-participants undergoing epilepsy surgery evaluation. We show that voice selectivity increases along the auditory hierarchy from supratemporal plane (STP) to the STG and STS. Results show accurate decoding of vocalizations from human auditory cortical activity even in the complete absence of linguistic content. These findings show an early, less-selective temporal window of neural activity in the STG and STS followed by a sustained, strongly voice-selective window. Encoding models demonstrate divergence in the encoding of acoustic features along the auditory hierarchy, wherein STG/STS responses are best explained by voice category and acoustics, as opposed to acoustic features of voice stimuli alone. This is in contrast to neural activity recorded from STP, in which responses were accounted for by acoustic features. These findings support a model of voice perception that engages categorical encoding mechanisms within STG and STS to facilitate feature extraction.
3

Bendor, Daniel, and Xiaoqin Wang. "Neural Coding of Periodicity in Marmoset Auditory Cortex." Journal of Neurophysiology 103, no. 4 (April 2010): 1809–22. http://dx.doi.org/10.1152/jn.00281.2009.

Full text
Abstract:
Pitch, our perception of how high or low a sound is on a musical scale, crucially depends on a sound's periodicity. If an acoustic signal is temporally jittered so that it becomes aperiodic, the pitch will no longer be perceivable even though other acoustical features that normally covary with pitch are unchanged. Previous electrophysiological studies investigating pitch have typically used only periodic acoustic stimuli, and as such these studies cannot distinguish between a neural representation of pitch and an acoustical feature that only correlates with pitch. In this report, we examine in the auditory cortex of awake marmoset monkeys (Callithrix jacchus) the neural coding of a periodicity's repetition rate, an acoustic feature that covaries with pitch. We first examine if individual neurons show similar repetition rate tuning for different periodic acoustic signals. We next measure how sensitive these neural representations are to the temporal regularity of the acoustic signal. We find that neurons throughout auditory cortex covary their firing rate with the repetition rate of an acoustic signal. However, similar repetition rate tuning across acoustic stimuli and sensitivity to temporal regularity were generally only observed in a small group of neurons found near the anterolateral border of primary auditory cortex, the location of a previously identified putative pitch processing center. These results suggest that although the encoding of repetition rate is a general component of auditory cortical processing, the neural correlate of periodicity is confined to a special class of pitch-selective neurons within the putative pitch processing center of auditory cortex.
4

Merritt, Brandon. "Speech beyond the binary: Some acoustic-phonetic and auditory-perceptual characteristics of non-binary speakers." JASA Express Letters 3, no. 3 (February 2023): 035206. http://dx.doi.org/10.1121/10.0017642.

Full text
Abstract:
Speech acoustics research typically assumes speakers are men or women with speech characteristics associated with these two gender categories. Less work has assessed acoustic-phonetic characteristics of non-binary speakers. This study examined acoustic-phonetic features across adult cisgender (15 men and 15 women) and subgroups of transgender (15 non-binary, 7 transgender men, and 7 transgender women) speakers and relations among these features and perceptual ratings of gender identity and masculinity/femininity. Differing acoustic-phonetic features were predictive of confidence in speaker gender and masculinity/femininity across cisgender and transgender speakers. Non-binary speakers were perceptually rated within an intermediate range of cisgender women and all other groups.
5

Fox, Robert Allen, and Jean Booth. "Research Note on Perceptual Features and Auditory Representations." Perceptual and Motor Skills 65, no. 3 (December 1987): 837–38. http://dx.doi.org/10.2466/pms.1987.65.3.837.

Full text
Abstract:
It has been argued that bark-scale transformed formant frequency values more accurately reflect auditory representations of vowels in the perceptual system than do the absolute physical values (in Hertz). In the present study the perceptual features of 15 monophthongal and diphthongal vowels (obtained using multidimensional scaling) were compared with both absolute and bark-scale transformed acoustic vowel measures. Analyses suggest that bark-transformation of the acoustic data does not necessarily produce better predictions of the vowels' perceptual space.
6

Donnelly, Martin J., Carmel A. Daly, and Robert J. S. Briggs. "MR imaging features of an intracochlear acoustic schwannoma." Journal of Laryngology & Otology 108, no. 12 (December 1994): 1111–14. http://dx.doi.org/10.1017/s0022215100129056.

Full text
Abstract:
We present a very unusual case of an acoustic neuroma involving the left cochlea and internal auditory canal of a 24-year-old man. Clinical suspicion was aroused when the patient presented with a left total sensorineural hearing loss and continuing vertigo. The diagnosis was made pre-operatively with MRI after initial CT scanning was normal. The tumour was removed via a transotic approach. This case report demonstrates the MRI features of an intracochlear schwannoma and emphasizes the importance of MRI in patients with significant auditory and clinical abnormalities with normal CT scans of the relevant region.
7

Buckley, Daniel P., Manuel Diaz Cadiz, Tanya L. Eadie, and Cara E. Stepp. "Acoustic Model of Perceived Overall Severity of Dysphonia in Adductor-Type Laryngeal Dystonia." Journal of Speech, Language, and Hearing Research 63, no. 8 (August 10, 2020): 2713–22. http://dx.doi.org/10.1044/2020_jslhr-19-00354.

Full text
Abstract:
Purpose: This study is a secondary analysis of existing data. The goal of the study was to construct an acoustic model of perceived overall severity of dysphonia in adductory laryngeal dystonia (AdLD). We predicted that acoustic measures (a) related to voice and pitch breaks and (b) related to vocal effort would form the primary elements of a model corresponding to auditory-perceptual ratings of overall severity of dysphonia. Method: Twenty inexperienced listeners evaluated the overall severity of dysphonia of speech stimuli from 19 individuals with AdLD. Acoustic features related to primary signs of AdLD (hyperadduction resulting in pitch and voice breaks) and to a potential secondary symptom of AdLD (vocal effort, measures of relative fundamental frequency) were computed from the speech stimuli. Multiple linear regression analysis was applied to construct an acoustic model of the overall severity of dysphonia. Results: The acoustic model included an acoustic feature related to pitch and voice breaks and three acoustic measures derived from relative fundamental frequency; it explained 84.9% of the variance in the auditory-perceptual ratings of overall severity of dysphonia in the speech samples. Conclusions: Auditory-perceptual ratings of overall severity of dysphonia in AdLD were related to acoustic features of primary signs (pitch and voice breaks, hyperadduction associated with laryngeal spasms) and were also related to acoustic features of vocal effort. This suggests that compensatory vocal effort may be a secondary symptom in AdLD. Future work to generalize this acoustic model to a larger, independent data set is necessary before clinical translation is warranted.
8

Zong, Nannan, and Meihong Wu. "A Computational Model for Evaluating Transient Auditory Storage of Acoustic Features in Normal Listeners." Sensors 22, no. 13 (July 4, 2022): 5033. http://dx.doi.org/10.3390/s22135033.

Full text
Abstract:
Humans are able to detect an instantaneous change in correlation, demonstrating an ability to temporally process extremely rapid changes in interaural configurations. This temporal dynamic is correlated with human listeners’ ability to store acoustic features in a transient auditory manner. The present study investigated whether the ability of transient auditory storage of acoustic features was affected by the interaural delay, which was assessed by measuring the sensitivity for detecting the instantaneous change in correlation for both wideband and narrowband correlated noise with various interaural delays. Furthermore, whether an instantaneous change in correlation between correlated interaural narrowband or wideband noise was detectable when introducing the longest interaural delay was investigated. Then, an auditory computational description model was applied to explore the relationship between wideband and narrowband simulation noise with various center frequencies in the auditory processes of lower-level transient memory of acoustic features. The computing results indicate that low-frequency information dominated perception and was more distinguishable in length than the high-frequency components, and the longest interaural delay for narrowband noise signals was highly correlated with that for wideband noise signals in the dynamic process of auditory perception.
9

Boşnak, Mehmet, and Ayhan Eralp. "Electrophysiological, Histological and Neurochemical Features of Cochlear Nucleus." European Journal of Therapeutics 13, no. 2 (May 1, 2007): 42–49. http://dx.doi.org/10.58600/eurjther.2007-13-2-1383-arch.

Full text
Abstract:
The cochlear nucleus (CN), the first brain centre in the auditory system, is responsible for sorting the neural signals received from the cochlea into parallel processing streams for transmission to the assorted higher auditory nuclei. A commissural connection is formed between the cochlear nuclei through direct projections, thereby providing the first site in the central auditory system at which binaural information is able to influence the ascending auditory signal. This restricted review investigates the nature of commissural projections and the impact of their input upon neurons of the CN through intracellular and extracellular electrophysiological recordings together with both acoustic and electrical stimulation of the contralateral CN. It also investigates electrophysiological, histological, and neurochemical features of the CN and commissural projections.
10

Yang, Honghui, Junhao Li, Sheng Shen, and Guanghui Xu. "A Deep Convolutional Neural Network Inspired by Auditory Perception for Underwater Acoustic Target Recognition." Sensors 19, no. 5 (March 4, 2019): 1104. http://dx.doi.org/10.3390/s19051104.

Full text
Abstract:
Underwater acoustic target recognition (UATR) using ship-radiated noise faces big challenges due to the complex marine environment. In this paper, inspired by neural mechanisms of auditory perception, a new end-to-end deep neural network named the auditory perception inspired Deep Convolutional Neural Network (ADCNN) is proposed for UATR. In the ADCNN model, inspired by the frequency component perception neural mechanism, a bank of multi-scale deep convolution filters is designed to decompose the raw time-domain signal into signals with different frequency components. Inspired by the plasticity neural mechanism, the parameters of the deep convolution filters are initialized randomly and then learned and optimized for UATR. Then, max-pooling layers and fully connected layers extract features from each decomposed signal. Finally, in fusion layers, features from each decomposed signal are merged and deep feature representations are extracted to classify underwater acoustic targets. The ADCNN model simulates the deep acoustic information processing structure of the auditory system. Experimental results show that the proposed model can decompose, model, and classify ship-radiated noise signals efficiently. It achieves a classification accuracy of 81.96%, the highest in the contrast experiments. The experimental results show that the auditory perception inspired deep learning method has encouraging potential to improve the classification performance of UATR.

Dissertations / Theses on the topic "Auditory Acoustic Features"

1

Anderson, Jill M. "Lateralization Effects of Brainstem Responses and Middle Latency Responses to a Complex Tone and Speech Syllable." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1313687765.

Full text
2

Wang, Yuxuan. "Supervised Speech Separation Using Deep Neural Networks." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1426366690.

Full text
3

Chen, Jitong. "On Generalization of Supervised Speech Separation." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492038295603502.

Full text

Books on the topic "Auditory Acoustic Features"

1

Santoro, T. S. Effect of digital recording parameters on discrimination features of acoustic signals in noise. Groton, CT: Naval Submarine Medical Research Laboratory, 1996.

Find full text
2

McAdams, Stephen, and Bruno L. Giordano. The perception of musical timbre. Edited by Susan Hallam, Ian Cross, and Michael Thaut. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199298457.013.0007.

Full text
Abstract:
This article discusses musical-timbre perception. Musical timbre is a combination of continuous perceptual dimensions and discrete features to which listeners are differentially sensitive. The continuous dimensions often have quantifiable acoustic correlates. The timbre-space representation is a powerful psychological model that allows predictions to be made about timbre perception in situations beyond those used to derive the model in the first place. Timbre can play a role in larger-scale movements of tension and relaxation and thus contribute to the expression inherent in musical form. Under conditions of high blend among instruments composing a vertical sonority, timbral roughness is a major component of musical tension. However, it strongly depends on the way auditory grouping processes have parsed the incoming acoustic information into events and streams.
3

Soteriou, Matthew. Sound and Illusion. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198722304.003.0002.

Full text
Abstract:
A variety of different proposals have been made about the nature of sounds. Although these proposals differ in a number of significant respects, some common assumptions appear to be made by their advocates: (1) the assumption that sounds possess audible, acoustic features, such as timbre, pitch, and loudness (and so the assumption that a sound is not a property that is identical to any one of those audible features); and (2) the assumption that sounds are one kind of thing. The second assumption is rarely defended in debates about sound and auditory perception. This chapter explores ways in which such debates are affected if the relevant assumption is rejected.
4

Mansell, James G. National Acoustics. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252040672.003.0005.

Full text
Abstract:
This chapter takes the case study of the Second World War to trace the progress of the various “ways of hearing” outlined so far in the book. The chapter focusses on national sounds and national hearing as features of sonic modernity, tracing the war’s influence on attempts to shape the auditory space of the nation. It shows how the noise abatement movement dealt with the war, taking civil defence workers out of the city for quiet rest breaks in the countryside, and considers the meaning of different wartime sounds, such as bomb noise and church bells, to the wartime nation. The chapter argues that wartime citizens were situated as hearers and directed towards “healthy” ways to hear the war by different auditory experts.

Book chapters on the topic "Auditory Acoustic Features"

1

Xue, Dingming, Daisuke Shinma, Yuki Harazono, Hirotake Ishii, and Hiroshi Shimoda. "Experimental Evaluation of Auditory Human Interface for Radiation Awareness Based on Different Acoustic Features." In Human Interface and the Management of Information. Information Presentation and Visualization, 88–100. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78321-1_8.

Full text
2

Honda, Tatsuya, Tetsuaki Baba, and Makoto Okamoto. "Ontenna: Design and Social Implementation of Auditory Information Transmission Devices Using Tactile and Visual Senses." In Lecture Notes in Computer Science, 130–38. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08645-8_16.

Full text
Abstract:
Ontenna is a device that can be worn on the hair, earlobe, collar, or sleeve, and it transmits sound characteristics to the human body using vibrations and light. It can serve as an auxiliary acoustic sensory device for the Deaf and Hard of Hearing (DHH), whereas for others, it can serve as a novel acoustic perception device. A condenser microphone mounted on the main body of Ontenna acquires sound pressure data and drives the vibration motor and light-emitting diode in real-time according to the input signals. This allows the user to perceive various sonic features such as the rhythm, pattern, and strength of sound. Furthermore, by simultaneously controlling several Ontenna devices using a controller, rhythms can be transmitted to each user. In this paper, we present the design of Ontenna for DHH and its fabrication process, which was improved through digital fabrication methods. Additionally, we present case studies regarding the usage of Ontenna in a hearing-impaired school and case studies on the application of Ontenna in the entertainment field for hearing-impaired people and others. Furthermore, we discuss the effects of programming education using Ontenna.
3

Frisina, Robert D., Jian Wang, Jonathan D. Byrd, Kenneth J. Karcich, and Richard J. Salvi. "Enhanced Processing of Temporal Features of Sounds in Background Noise by Cochlear Nucleus Single Neurons." In Acoustical Signal Processing in the Central Auditory System, 109–25. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4419-8712-9_11.

Full text
4

Maempel, Hans-Joachim, and Michael Horn. "The Influences of Hearing and Vision on Egocentric Distance and Room Size Perception under Rich-Cue Conditions." In Advances in Fundamental and Applied Research on Spatial Audio [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.102810.

Full text
Abstract:
Artistic renditions are mediated by the performance rooms in which they are staged. The perceived egocentric distance to the artists and the perceived room size are relevant features in this regard. The influences of both the presence and the properties of acoustic and visual environments on these features were investigated. Recordings of music and a speech performance were integrated into direct renderings of six rooms by applying dynamic binaural synthesis and chroma-key compositing. By the use of a linearized extraaural headset and a semi-panoramic stereoscopic projection, the auralized, visualized, and auralized-visualized spatial scenes were presented to test participants who were asked to estimate the egocentric distance and the room size. The mean estimates differed between the acoustic and the visual as well as between the acoustic-visual and the combined single-domain conditions. Geometric estimations in performance rooms relied nine-tenths on the visual and one-tenth on the acoustic properties of the virtualized spatial scenes, but negligibly on their interaction. Structural and material properties of rooms may also influence auditory-visual distance perception.
5

Jepson, Kathleen, and Thomas Ennever. "Lexical stress." In The Oxford Guide to Australian Languages, 145–58. Oxford: Oxford University Press, 2023. http://dx.doi.org/10.1093/oso/9780198824978.003.0014.

Full text
Abstract:
Australian languages have been drawn upon extensively to exemplify aspects of the design space of metrical structure, including rhythmical patterns, foot structure, and other stress-related phonological features. While some auditory properties have emerged in descriptions (e.g. duration, loudness, pitch, vowel quality), we are only beginning to examine what the acoustic correlates are that underlie these rhythmic structures. In this chapter, we proffer a broad overview of stress in Australian languages, highlighting some of the under-reviewed phonological and phonetic aspects of stress, along with a summary of the ranges of metrical structures found across the continent. Looking to the future, we see great value in the re-examination of primary data (where possible) and the acoustic properties underlying reported stress patterns. We also view ongoing work in the relationship between morphological and prosodic structure and the unpicking of the relationship between lexical stress and prosody above the word as areas which promise to yield many exciting insights.
6

Juslin, Patrik N. "Jumping at Shadows." In Musical Emotions Explained, 265–74. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198753421.003.0018.

Full text
Abstract:
This chapter introduces a psychological mechanism that involves a close link between perception and motor behaviour. It focuses on a mechanism called the brain stem reflex, which refers to a process whereby an emotion is aroused in a listener because an acoustic feature — such as sound intensity or roughness of timbre — exceeds a certain cut-off value for which the auditory system has been designed by natural selection to quickly alert the brain. It is a kind of ‘override’ system, which is activated when an event seems to require first-priority attention. Brain stem reflexes are said to be ‘hard-wired’: they are quick, automatic, and unlearned.
7

Leydon, Rebecca. "Scelsi’s Pfhat." In The Oxford Handbook of Spectral Music. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780190633547.013.16.

Full text
Abstract:
Timbre is the aspect of sound that enables listeners to link it with material objects and physical processes. Dennis Smalley’s term “source bonding” sums up this tendency to link timbral features directly to their material origins. Spectral features of harmonicity, attack, and spectral flux encode the physical qualities of sources—real or imagined—and of actions such as striking, flexing, and splintering. In this essay I imagine the shimmering orchestral surface of Giacinto Scelsi’s Pfhat as a window onto a virtual materiality—in particular, the monumental ruins of the Forum Romanum. The work’s many composite timbres are categorized by their degrees of durability: “sturdy” timbres, such as those produced by struck metallophones and the steady-state quality of the organ’s pitches, contrast with pliable and ephemeral timbres such as the choral voices and modified brass instruments. The former are mapped to enduring weight-bearing structures, such as the surviving columns of the temple of Castor and Pollux, visible from Scelsi’s studio, while the latter are imagined as the ghostly afterimages of architectural features that once existed but which have collapsed or weathered away over time. Drawing on texts of Freud, Pierre Jouve, and recent writings on acoustics and ecological hearing, I approach this music as an auditory hallucination of monumental architecture as it exists across time and in various stages of construction and decay.
8

Grossberg, Stephen. "Overview." In Conscious Mind, Resonant Brain, 1–49. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190070557.003.0001.

Full text
Abstract:
An overview is provided of multiple book themes. A critical one is explaining how and where conscious states of seeing, hearing, feeling, and knowing arise in our minds, why they are needed to choose effective actions, yet how unconscious states also critically influence behavior. Other themes include learning, expectation, attention, imagination, and creativity; differences between illusion and reality, and between conscious seeing and recognizing, as embodied within surface-shroud resonances and feature-category resonances, respectively; roles of visual boundaries and surfaces in understanding visual art, movies, and TV; different legacies of Helmholtz and Kanizsa towards understanding vision; how stable opaque percepts and bistable transparent percepts are explained by the same laws; how solving the stability-plasticity dilemma enables brains to learn quickly without catastrophically forgetting previously learned but still useful knowledge; how we correct errors, explore novel experiences, and develop individual selves and cumulative cultural accomplishments; how expected vs. unexpected events are regulated by interacting top-down and bottom-up processes, leading to either adaptive resonances that support fast and stable new learning, or hypothesis testing whereby to learn about novel experiences; how variations of the same cooperative and competitive processes shape intelligence in species, cellular tissues, economic markets, and political systems; how short-term memory, medium-term memory, and long-term memory regulate adaptation to changing environments on different time scales; how processes whereby we learn what events are causal also support irrational, superstitious, obsessional, self-punitive, and antisocial behaviors; how relaxation responses arise; and how future acoustic contexts can disambiguate conscious percepts of past auditory and speech sequences that are occluded by noise or multiple speakers.

Conference papers on the topic "Auditory Acoustic Features"

1

Suniya, V. S., and Dominic Mathew. "Acoustic modeling using auditory model features and Convolutional neural Network." In 2015 International Conference on Power, Instrumentation, Control and Computing (PICC). IEEE, 2015. http://dx.doi.org/10.1109/picc.2015.7455805.

Full text
2

Kato, Keizo, and Akinori Ito. "Acoustic Features and Auditory Impressions of Death Growl and Screaming Voice." In 2013 Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP). IEEE, 2013. http://dx.doi.org/10.1109/iih-msp.2013.120.

Full text
3

Liao, Kun. "Combining Evidence from Auditory, Instantaneous Frequency and Random Forest for Anti-Noise Speech Recognition." In 7th International Conference on Computer Science and Information Technology (CSTY 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112207.

Full text
Abstract:
Due to the shortcomings of acoustic feature parameters in speech signals, and the limitations of existing acoustic features in characterizing the integrity of the speech information, this paper proposes a speech recognition method combining cochlear features and a random forest. Environmental noise can pose a threat to the stable operation of current speech recognition systems, so it is essential to develop robust systems that are able to identify speech under low signal-to-noise ratios. In this paper, we propose a method of speech recognition combining spectral subtraction with auditory and energy feature extraction. This method first extracts novel auditory features based on cochlear filter cepstral coefficients (CFCC) and instantaneous frequency (IF), i.e., CFCCIF. Spectral subtraction is then introduced into the front end of feature extraction, and the extracted features are called enhanced auditory features (EAF). An energy feature, the Teager energy operator (TEO), is also extracted; the combination of the two is known as a fusion feature. Linear discriminant analysis (LDA) is then applied to feature selection and optimization of the fusion feature. Finally, a random forest (RF) is used as the classifier in a speaker-independent, isolated-word, small-vocabulary speech recognition system. On the Korean isolated-words database, the proposed features (i.e., EAF), after fusion with the Teager energy feature, showed strong robustness in noisy conditions. Our experiments show that the optimized feature achieves a high recognition rate and excellent anti-noise performance in a speech recognition task.
4

Ambrazaitis, Gilbert, and David House. "Acoustic features of multimodal prominences: Do visual beat gestures affect verbal pitch accent realization?" In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-17.

5

Biswas, Astik, P. K. Sahu, Anirban Bhowmick, and Mahesh Chandra. "VidTIMIT audio visual phoneme recognition using AAM visual features and human auditory motivated acoustic wavelet features." In 2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS). IEEE, 2015. http://dx.doi.org/10.1109/retis.2015.7232917.

6

Coop, Allan D. "Sonification, Musification, and Synthesis of Absolute Program Music." In The 22nd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2016. http://dx.doi.org/10.21785/icad2016.030.

Abstract:
When understood as a communication system, a musical work can be interpreted as data existing within three domains. In this interpretation, an absolute domain is interposed as a communication channel between two programmatic domains that act respectively as source and receiver. As a source, a programmatic domain creates, evolves, organizes, and represents a musical work. When acting as a receiver, it re-constitutes acoustic signals into a unique auditory experience. The absolute domain transmits physical vibrations ranging from the stochastic structures of noise to the periodic waveforms of organized sound. Analysis of acoustic signals suggests that recognition as a musical work requires signal periodicity to exceed some minimum. A methodological framework that satisfies recent definitions of sonification is outlined. This framework is proposed to extend to musification through the incorporation of data features that represent more traditional elements of a musical work, such as melody, harmony, and rhythm.
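The abstract's claim that recognition as a musical work requires signal periodicity to exceed some minimum can be made concrete with a simple autocorrelation-based periodicity score. This is an illustrative measure of our own, not the paper's method:

```python
import numpy as np

def periodicity(x):
    """Normalized autocorrelation peak (excluding lag 0):
    near 1.0 for periodic signals, near 0.0 for white noise."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    if ac[0] == 0:
        return 0.0
    ac = ac / ac[0]  # ac[0] is the signal energy
    return float(np.max(ac[1:]))
```

A pure tone scores close to 1.0 while broadband noise scores much lower, so a threshold on such a score is one way to operationalize the "minimum periodicity" the abstract alludes to.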
7

Hyder, Rakib, Shabnam Ghaffarzadegan, Zhe Feng, John H. L. Hansen, and Taufiq Hasan. "Acoustic Scene Classification Using a CNN-SuperVector System Trained with Auditory and Spectrogram Image Features." In Interspeech 2017. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-431.

8

Tolba, Hesham, Sid-Ahmed Selouani, and Douglas O'Shaughnessy. "Auditory-based acoustic distinctive features and spectral cues for automatic speech recognition using a multi-stream paradigm." In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-02). IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.1005870.

9

Tolba, Hesham, Sid-Ahmed Selouani, and Douglas O'Shaughnessy. "Auditory-based acoustic distinctive features and spectral cues for automatic speech recognition using a multi-stream paradigm." In Proceedings of ICASSP '02. IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.5743869.

10

Selouani, Sid-Ahmed, Hesham Tolba, and Douglas O'Shaughnessy. "Auditory-based acoustic distinctive features and spectral cues for robust automatic speech recognition in Low-SNR car environments." In the 2003 Conference of the North American Chapter of the Association for Computational Linguistics. Morristown, NJ, USA: Association for Computational Linguistics, 2003. http://dx.doi.org/10.3115/1073483.1073514.
