Journal articles on the topic 'Psychoacoustic experiments'

Below are the top 50 journal articles on the topic 'Psychoacoustic experiments.'

1

Fritz, Claudia, Ian Cross, Brian C. J. Moore, and Jim Woodhouse. "Psychoacoustic experiments with virtual violins." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3364. http://dx.doi.org/10.1121/1.4781517.

2

Roberts, Armisha, and Kyla McMullen. "The Effects of Laboratory Environment Type on Intermittent Sound Localization." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 66, no. 1 (September 2022): 1235–39. http://dx.doi.org/10.1177/1071181322661494.

Abstract:
Virtual reality (VR) and augmented reality (AR) are gaining commercial popularity. 3D sound guidelines for AR and VR are derived from psychoacoustic experiments performed in contrived, sterile laboratory settings. Often, these settings are expensive, inaccessible, and unattainable for researchers. The feasibility of conducting psychoacoustic experiments outside the laboratory remains unclear. To investigate, we explore 3D sound localization experiments in-lab (IL) and out-of-the-lab (OL). The IL study condition was conducted as a traditional psychoacoustic experiment in a soundproof booth. The OL condition occurred in a quiet environment of the participants' choosing, using commercial-grade headphones. Localization performance did not vary significantly for OL participants compared to the IL participants, with larger variation observed in the IL condition. Participants needed significantly more time to complete the experiment IL than OL. The results suggest that conducting headphone-based psychoacoustic experiments outside the laboratory is feasible if completion time is negligible.
3

Lapsley Miller, Judi A. "New techniques to reduce observer inconsistency in psychoacoustic experiments." Journal of the Acoustical Society of America 107, no. 5 (May 2000): 2914. http://dx.doi.org/10.1121/1.428850.

4

Luo, Qi. "The Improving Effect of Intelligent Speech Recognition System on English Learning." Advances in Multimedia 2022 (March 10, 2022): 1–13. http://dx.doi.org/10.1155/2022/2910859.

Abstract:
To improve the effect of English learning in the context of smart education, this study combines speech coding to improve the intelligent speech recognition algorithm, builds an intelligent English learning system, and, drawing on the characteristics of the human ear, studies a coding strategy based on a psychoacoustic masking model. Moreover, this study analyzes in detail the basic principles and implementation process of the psychoacoustic model coding strategy based on the characteristics of the human ear and completes the channel selection by calculating the masking threshold. In addition, this study verifies the effectiveness of the algorithm through simulation experiments. Finally, this study builds a smart speech recognition system based on this model and uses simulation experiments to verify the effect of smart speech recognition on English learning. To improve the recognition performance, band-pass filtering and envelope detection adopt the gammatone filter bank and the Meddis inner hair cell model from the mathematical model of the cochlear system; at the same time, the psychoacoustic masking effect model is introduced in the channel selection stage, which improves noise robustness and the recognition performance of smart speech. The analysis shows that the intelligent speech recognition system proposed in this study can effectively improve the effect of English learning. In particular, it has a great effect on improving the effect of oral learning.
5

Williams, H. E. F., G. L. Gibian, E. N. Harnden, and A. Evans. "Using MIDI, the standard Musical Instrument Digital Interface, for psychoacoustic experiments." Journal of the Acoustical Society of America 83, S1 (May 1988): S16. http://dx.doi.org/10.1121/1.2025224.

6

Cumming, W. T., and J. G. Wells. "Flexible PC-based workstation for auditory evoked potential and psychoacoustic experiments." Medical & Biological Engineering & Computing 30, no. 3 (May 1992): 373–76. http://dx.doi.org/10.1007/bf02446978.

7

Trautmann, Florian, Björn Knöfel, Welf-Guntram Drossel, Jan Troge, Markus Freund, Lars Penter, and Damian Anders. "Subjective hearing sensation of process variations at a milling machine. How reliable will chatter marks be detected?" INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 2 (August 1, 2021): 4088–99. http://dx.doi.org/10.3397/in-2021-2599.

Abstract:
Intuition enables experienced machine operators to detect production errors and to identify their specific sources. A prominent example in machining is chatter marks caused by machining vibrations. The operator's assessment of whether the process runs stably or not is not exclusively based on technical parameters such as rotation frequency, tool diameter, or the number of teeth. Because the human ear is a powerful feature extraction and classification device, this study investigates to what degree the hearing sensation influences the operator's decision making. A steel machining process with a design of experiments (DOE)-based variation of process parameters was conducted on a milling machine. Microphone and acceleration sensors recorded machining vibrations, and machine operators documented their hearing sensation via survey sheet. In order to obtain the optimal dataset for calculating various psychoacoustic characteristics, a principal component analysis was conducted. The subsequent correlation analysis of all sensor data and the operator information suggests that psychoacoustic characteristics such as tonality and loudness are very good indicators of the process quality perceived by the operator. The results support the application of psychoacoustic technology for machine and process monitoring.
8

Ewert, Stephan D., and Torsten Dau. "Reproducible psychoacoustic experiments and computational perception models in a modular software framework." Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3630. http://dx.doi.org/10.1121/1.4987809.

9

Kane, Prasad V., and Atul B. Andhare. "End of the Assembly Line Gearbox Fault Inspection Using Artificial Neural Network and Support Vector Machines." International Journal of Acoustics and Vibration 24, no. 1 (March 2019): 68–84. http://dx.doi.org/10.20855/ijav.2019.24.11258.

Abstract:
Gear fault diagnosis is important not only during the routine maintenance of machinery, but also during the inspection of newly manufactured gearboxes at the end of the assembly line. This paper discusses the application of an artificial neural network (ANN) and a support vector machine (SVM) for identifying faults in the gearbox, using the psychoacoustic and conventional statistical features extracted from acoustics and vibration signals. It is observed that at the end of the assembly line, the gearbox is tested by mounting it on a test bench and driving it by an electric motor. Based on the sound emitted while running on the test bench, the operator decides on the acceptance of the gearbox for further assembly on a vehicle or machine. This method of acceptance or rejection of the gearbox involves subjectivity and is not reliable. Hence, it is important to have a reliable and objective fault detection and diagnosis method. To eliminate subjectivity, psychoacoustic features, which are derived from the science of listening in human beings, are proposed to be used as features, along with ANN and SVMs as classifiers. To ascertain the ability of the psychoacoustic features to classify faults, laboratory experiments are carried out on a test setup by simulating faults like a gear shaft misalignment, a profile error of a gear tooth, a crack at the root of the tooth, and a broken tooth. ANN and SVM are trained with the psychoacoustic features extracted from the acoustic signal and other statistical features from the acoustics and vibration signals. The trained SVM and ANN are tested for fault classification for these features and their accuracy is compared. Fault classification accuracy is found to be 95.65% for ANN and 93.44% for SVM with psychoacoustic features and is found to be better than pure statistical features obtained from the vibration and acoustic signals. With the optimised ANN and SVM architecture, SVM is found to be performing better than ANN.
It is concluded that the psychoacoustic features, along with the ANN and SVM method, could be adopted at the end of assembly line inspection to make the inspection process more objective.
10

Wrzosek, Małgorzata, Justyna Maculewicz, Honorata Hafke-Dys, Agnieszka Nowik, Anna Preis, and Grzegorz Kroliczak. "Pitch Processing of Speech: Comparison of Psychoacoustic and Electrophysiological Data." Archives of Acoustics 38, no. 3 (September 1, 2013): 375–81. http://dx.doi.org/10.2478/aoa-2013-0044.

Abstract:
The present study consisted of two experiments. The goal of the first experiment was to establish the just noticeable difference for the fundamental frequency of the vowel /u/ by using the 2AFC method. We obtained a threshold value of 27 cents. This value is larger than the motor reaction values which had been observed in previous experiments (e.g. 9 or 19 cents). The second experiment was intended to provide neurophysiological confirmation of the detection of shifts in frequency, using event-related potentials (ERPs). We concentrated on the mismatch negativity (MMN) - the component elicited by a change in the pattern of stimuli. Its occurrence is correlated with the discrimination threshold. In our study, MMN was observed for changes greater than 27 cents - shifts of ±50 and 100 cents (effect size - Cohen's d = 2.259). MMN did not appear for changes of ±10 and 20 cents. The results showed that the values for which motor responses can be observed are indeed lower than those for perceptual thresholds.
11

Steffman, Jeremy, and Sun-Ah Jun. "Listeners integrate pitch and durational cues to prosodic structure in word categorization." Proceedings of the Linguistic Society of America 4, no. 1 (March 15, 2019): 49. http://dx.doi.org/10.3765/plsa.v4i1.4536.

Abstract:
In this study we investigate how listeners perceive vowel duration as a cue to voicing based on changes in pitch height, using a 2AFC task in which they categorized a target word from a vowel duration continuum as “coat” or “code”. We consider this issue in light of (1) psychoacoustic perceptual interactions between pitch and duration and (2) compensatory effects for prosodically driven patterning of pitch and duration in the accentual/prominence-marking system of English. In two experiments we found that listeners’ interpretation of pitch as a psychoacoustic, or prosodic event is dependent on continuum step size and range. In Experiment 1 listeners exemplified the expected psychoacoustic pattern in categorization. In Experiment 2, we altered the duration continuum in an attempt to highlight pitch as a language-specific prosodic property and found that listeners do indeed compensate for prosodically driven patterning of pitch and duration. The results thus highlight flexibility in listeners’ interpretation of these acoustic dimensions. We argue that, in the right circumstances, prosodic patterns influence listeners’ interpretation of pitch and expectations about vowel duration in the perception of isolated words. Results are discussed in terms of more general implications for listeners’ perception of prosodic and segmental cues, and possibilities for cross-linguistic extension.
12

Wendt, Florian, Gerriet K. Sharma, Matthias Frank, Franz Zotter, and Robert Höldrich. "Perception of Spatial Sound Phenomena Created by the Icosahedral Loudspeaker." Computer Music Journal 41, no. 1 (March 2017): 76–88. http://dx.doi.org/10.1162/comj_a_00396.

Abstract:
The icosahedral loudspeaker (IKO) is able to project strongly focused sound beams into arbitrary directions. Incorporating artistic experience and psychoacoustic research, this article presents three listening experiments that provide evidence for a common, intersubjective perception of spatial sonic phenomena created by the IKO. The experiments are designed on the basis of a hierarchical model of spatiosonic phenomena that exhibit increasing complexity, ranging from a single static sonic object to combinations of multiple, partly moving objects. The results are promising and explore new compositional perspectives in spatial computer music.
13

Miner, Nadine E., Timothy E. Goldsmith, and Thomas P. Caudell. "Perceptual Validation Experiments for Evaluating the Quality of Wavelet-Synthesized Sounds." Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 508–24. http://dx.doi.org/10.1162/105474602320935847.

Abstract:
This paper describes three psychoacoustic experiments that evaluated the perceptual quality of sounds generated from a new wavelet-based synthesis technique. The synthesis technique provides a method for modeling and synthesizing perceptually compelling sound. The experiments define a methodology for evaluating the effectiveness of any synthesized sound. An identification task and a context-based rating task evaluated the perceptual quality of individual sounds. These experiments confirmed that the wavelet technique synthesizes a wide variety of compelling sounds from a small model set. The third experiment obtained sound similarity ratings. Psychological scaling methods were applied to the similarity ratings to generate both spatial and network models of the perceptual relations among the synthesized sounds. These analysis techniques helped to refine and extend the sound models. Overall, the studies provided a framework to validate synthesized sounds for a variety of applications including virtual reality and data sonification systems.
14

Fishman, Yonatan I., and Mitchell Steinschneider. "Spectral Resolution of Monkey Primary Auditory Cortex (A1) Revealed With Two-Noise Masking." Journal of Neurophysiology 96, no. 3 (September 2006): 1105–15. http://dx.doi.org/10.1152/jn.00124.2006.

Abstract:
An important function of the auditory nervous system is to analyze the frequency content of environmental sounds. The neural structures involved in determining psychophysical frequency resolution remain unclear. Using a two-noise masking paradigm, the present study investigates the spectral resolution of neural populations in primary auditory cortex (A1) of awake macaques and the degree to which it matches psychophysical frequency resolution. Neural ensemble responses (auditory evoked potentials, multiunit activity, and current source density) evoked by a pulsed 60-dB SPL pure-tone signal fixed at the best frequency (BF) of the recorded neural populations were examined as a function of the frequency separation (ΔF) between the tone and two symmetrically flanking continuous 80-dB SPL, 50-Hz-wide bands of noise. ΔFs ranged from 0 to 50% of the BF, encompassing the range typically examined in psychoacoustic experiments. Responses to the signal were minimal for ΔF = 0% and progressively increased with ΔF, reaching a maximum at ΔF = 50%. Rounded exponential functions, used to model auditory filter shapes in psychoacoustic studies of frequency resolution, provided excellent fits to neural masking functions. Goodness-of-fit was greatest for response components in lamina 4 and lower lamina 3 and least for components recorded in more superficial cortical laminae. Physiological equivalent rectangular bandwidths (ERBs) increased with BF, measuring nearly 15% of the BF. These findings parallel results of psychoacoustic studies in both monkeys and humans, and thus indicate that a representation of perceptual frequency resolution is available at the level of A1.
15

Schönwiesner, Marc, and Ole Bialas. "s(ound)lab: An easy to learn Python package for designing and running psychoacoustic experiments." Journal of Open Source Software 6, no. 62 (June 25, 2021): 3284. http://dx.doi.org/10.21105/joss.03284.

16

Wei, Xian Min. "Research of MP3 Audio Digital Watermark Algorithm Based on Hash Values." Advanced Materials Research 179-180 (January 2011): 830–35. http://dx.doi.org/10.4028/www.scientific.net/amr.179-180.830.

Abstract:
This paper puts forward a blind audio watermarking algorithm based on the MDCT and the compression principles of MP3. Using a hash function, a psychoacoustic model, and MP3 encoding/decoding, the watermark is extracted in real time, and its imperceptibility, capacity, and real-time performance are validated through experiment and analysis. Experiments show that the scheme achieves a high embedding capacity with little change to the original audio signal, has good stealth and watermark integrity, keeps the extraction time short, and allows extraction to be completed synchronously during audio playback.
17

König, Ronja, André Gerlach, Henry Schmidt, and Eike Stumpf. "Experimental investigation on acoustics and efficiency of rotor configurations for electric aerial vehicles." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 6 (August 1, 2021): 323–34. http://dx.doi.org/10.3397/in-2021-1435.

Abstract:
Aerial vehicles based on distributed electric propulsion systems have gained great interest. Their rotors, however, create loud and annoying sound, which obstructs market success. Variations in rotor configuration can be observed in emerging concepts, whereby the main varied parameters are blade radius, number of blades, and blade distribution. The focus of this paper is to identify how these parameters can be chosen to optimize efficiency and acoustics, including psychoacoustic metrics and sound quality, of single rotors while hovering. Results from experimental investigations conducted on a hover test bench are presented. Rectangular, symmetric blades are used. Experiments are done varying blade radius (61 mm to 126 mm), number of blades (2 to 8), and blade distribution (equal and unequal angles). Acoustic measurements are analyzed regarding microphone position, sound pressure level, spectral characteristics, psychoacoustic metrics, and selected sound quality models. Results show that variations in blade radius, number of blades, and blade distribution can improve efficiency and acoustics. The influence of these parameters on the acoustic signature at constant rotational speed and at constant thrust is discussed. Conclusions for optimized rotor design of aerial vehicles are derived and supplemented by resulting boundary conditions such as building space and weight.
18

Kwon, Bomjun J. "AUX: A scripting language for auditory signal processing and software packages for psychoacoustic experiments and education." Behavior Research Methods 44, no. 2 (November 20, 2011): 361–73. http://dx.doi.org/10.3758/s13428-011-0161-1.

19

Eilers, Rebecca, D. K. Oller, Richard Urbano, and Debra Moroff. "Conflicting and Cooperating Cues." Journal of Speech, Language, and Hearing Research 32, no. 2 (June 1989): 307–16. http://dx.doi.org/10.1044/jshr.3202.307.

Abstract:
Three experiments were conducted to ascertain the relative salience of two cues for final consonant voicing in infants and adults. Experiment 1 was designed to investigate infant perception of periodicity of burst, vowel duration, and the two cues combined in a cooperating pattern. Experiment 2 was designed to examine infant perception of these same cues but in a conflicting pattern, that is, with extended duration associated with the voiceless final plosive. Experiment 3 examined perception of the stimuli from Experiments 1 and 2 with adult subjects. Results indicate that in both adults and infants combined cues facilitate discrimination of the phonemic contrast regardless of whether the cues cooperate or conflict. The three experiments taken together do not support a phonetic interpretation of conflicting/cooperating cues for the perception of final stop consonant voicing. Potential psychoacoustic explanations are discussed.
20

Thompson, William Forde, and Richard Parncutt. "Perceptual Judgments of Triads and Dyads: Assessment of a Psychoacoustic Model." Music Perception 14, no. 3 (1997): 263–80. http://dx.doi.org/10.2307/40285721.

Abstract:
In two experiments, goodness-of-fit ratings of pairs of musical elements (triads, dyads, and octave-complex tones) were examined in view of a psychoacoustic model. The model, referred to as the pitch commonality model, evaluates the sharing of fundamental frequencies, overtones, and subharmonic tone sensations between sequential elements and also considers the effects of auditory masking within each element. Two other models were also assessed: a reduced model that considers the sharing of fundamental frequencies alone and the cycle-of-fifths model of key and chord relatedness. In Experiment 1, listeners rated the goodness of fit of 12 octave-complex tones following a major triad, major-third dyad, and perfect-fifth dyad. Multiple regression revealed that pitch commonality provided predictive power beyond that of the reduced model. A regression model based on pitch commonality and the cycle of fifths had a multiple R of .92. In Experiment 2, listeners rated how well a triad or dyad followed another triad or dyad. All pairings of the major triad, major-third dyad, and perfect-fifth dyad (pair types) were presented at various transpositions with respect to one another. Multiple regression revealed that pitch commonality again provided predictive power beyond that of the reduced model. A regression model based on pitch commonality, the cycle of fifths, and a preference for trials ending with a triad had a multiple R of .84. We discuss the role of psychoacoustic factors and knowledge of chord and key relationships in shaping the perception of harmonic material.
21

He, Yuebo, Hui Gao, Hai Liu, and Guoxi Jing. "Identification of prominent noise components of an electric powertrain using a psychoacoustic model." Noise Control Engineering Journal 70, no. 2 (March 1, 2022): 103–14. http://dx.doi.org/10.3397/1/37709.

Abstract:
Because the electric power transmission system has no sound masking effect compared with the traditional internal combustion power transmission system, electric powertrain noise has become the prominent noise of electric vehicles, adversely affecting the sound quality of the vehicle interior. Because of the strong coupling of motor and transmission noise, it is difficult to separate and identify the compositions of the electric powertrain noise by experiments. A psychoacoustic model is used to separate and identify the noise sources of the electric powertrain of a vehicle, considering the masking effect of the human ear. The electric powertrain noise is tested in a semi-anechoic chamber and recorded by a high-precision noise sensor. The noise source compositions of the electric powertrain are analyzed by computational auditory scene analysis and robust independent component analysis. Five independent noise sources are obtained, i.e., the fundamental frequency of the first gear mesh noise, the fundamental frequency of the second gear mesh noise, the double frequency of the second gear mesh noise, radial electromagnetic force noise, and stator slot harmonic noise. The results provide a guide for the optimization of the sound quality of the electric powertrain and for the improvement of the sound quality of the vehicle interior.
22

Taghipour, Armin, and Eduardo Pelizzari. "Effects of Background Sounds on Annoyance Reaction to Foreground Sounds in Psychoacoustic Experiments in the Laboratory: Limits and Consequences." Applied Sciences 9, no. 9 (May 7, 2019): 1872. http://dx.doi.org/10.3390/app9091872.

Abstract:
In a variety of applications, e.g., psychoacoustic experiments, virtual sound propagation demonstration, or synthesized noise production, noise samples are played back in laboratories. To simulate realistic scenes or to mask unwanted background sounds, it is sometimes preferable to add background ambient sounds to the noise. However, this can influence noise perception. It should be ensured that either background sounds do not affect, e.g., annoyance from foreground noise or that possible effects can be quantified. Two laboratory experiments are reported, in which effects of mixing background sounds to foreground helicopter samples were investigated. By means of partially balanced incomplete block designs, possible effects of three independent variables, i.e., helicopter’s sound exposure level, background type, and background sound pressure level were tested on the dependent variable annoyance, rated on the ICBEN 11-point numerical scale. The main predictor of annoyance was helicopter’s sound exposure level. Stimuli with eventful background sounds were found to be more annoying than those with less eventful background sounds. Furthermore, background type and level interacted significantly. For the major part of the background sound level range, increasing the background level was associated with increased or decreased annoyance for stimuli with eventful and less eventful background sounds, respectively.
23

Aiba, Eriko, Koji Kazai, Takayuki Shimotomai, Toshie Matsui, Minoru Tsuzaki, and Noriko Nagata. "Accuracy of Synchrony Judgment and its Relation to the Auditory Brainstem Response: the Difference Between Pianists and Non-Pianists." Journal of Advanced Computational Intelligence and Intelligent Informatics 15, no. 8 (October 20, 2011): 962–71. http://dx.doi.org/10.20965/jaciii.2011.p0962.

Abstract:
Synchrony judgment is one of the most important abilities for musicians. Only a few milliseconds of onset asynchrony result in a significant difference in musical expression. Using behavioural responses and Auditory Brainstem Responses (ABR), this study investigates whether synchrony judgment accuracy improves with training and, if so, whether physiological responses are also changed through training. Psychoacoustic experiments showed that the accuracy of synchrony judgment of pianists was higher than that of non-pianists, implying that pianists' ability to perceive tones increased through training. ABR measurements also showed differences between pianists and non-pianists. However, cochlear delay, an asymmetric aspect of temporal processing in the human auditory system, did not change with training. It is possible that training improved ability related to temporal tone perception and that training may increase synchrony in auditory nerve firing.
24

Liu, Rong. "AUDITORY DISPLAY WITH SENSORY SUBSTITUTION FOR INTERNET-BASED TELEOPERATION: A FEASIBILITY STUDY." Biomedical Engineering: Applications, Basis and Communications 21, no. 02 (April 2009): 131–37. http://dx.doi.org/10.4015/s1016237209001155.

Abstract:
A critical challenge in telerobotic systems is data communication over networks without performance guarantees. This paper proposes a novel way of using auditory feedback as the sensory feedback to ensure that the teleoperated robotic system still functions in a real-time fashion under unfavorable communication conditions, such as image losses, visual failures, and low-bandwidth communication links. The newly proposed method is tested through psychoacoustic experiments with 10 subjects conducting real-time robotic navigation tasks. The performance is analyzed from an objective point of view (time to finish the task, distance to the target), as well as through subjective workload assessments for different sensory feedbacks. Moreover, the bandwidth consumed when auditory information is applied is considerably lower compared with visual information. Preliminary results demonstrate the feasibility of auditory display as a complement or substitute to visual display for remote robotic navigation.
25

Wang, Yuxia, Xiaohu Yang, Hui Zhang, Lilong Xu, Can Xu, and Chang Liu. "Aging Effect on Categorical Perception of Mandarin Tones 2 and 3 and Thresholds of Pitch Contour Discrimination." American Journal of Audiology 26, no. 1 (March 2017): 18–26. http://dx.doi.org/10.1044/2016_aja-16-0020.

Abstract:
Purpose: The purpose of the study was to examine the aging effect on the categorical perception of Mandarin Chinese Tone 2 (rising F0 pitch contour) and Tone 3 (falling-then-rising F0 pitch contour) as well as on the thresholds of pitch contour discrimination.
Method: Three experiments of Mandarin tone perception were conducted for younger and older listeners with Mandarin Chinese as the native language. The first 2 experiments were in the categorical perception paradigm: tone identification and tone discrimination for a series of stimuli, the F0 contour of which systematically varied from Tone 2 to Tone 3. In the third experiment, the just-noticeable differences of pitch contour discrimination were measured for both groups.
Results: In the measures of categorical perception, older listeners showed significantly shallower slopes in the tone identification function and significantly smaller peakedness in the tone discrimination function compared with younger listeners. Moreover, the thresholds of pitch contour discrimination were significantly higher for older listeners than for younger listeners.
Conclusion: These results suggest that aging reduced the categoricality of Mandarin tone perception and worsened the psychoacoustic capacity to discriminate pitch contour changes, thereby possibly leading to older listeners' difficulty in identifying Tones 2 and 3.
26

Hall, Donald E. "Musical Dynamic Levels of Pipe Organ Sounds." Music Perception 10, no. 4 (1993): 417–34. http://dx.doi.org/10.2307/40285581.

Abstract:
The pipe organ offers the opportunity to conduct psychoacoustic experiments in which the sound of a natural instrument can be perfectly steady and reproducible. This study takes advantage of the pipe organ to concentrate on that aspect of musical dynamics determined by the physical parameters of steady sounds, leaving aside the admittedly important effects of other variables such as context and articulation. Juries of musicians and music students provided judgments of musical dynamic levels produced by steady sounding of various stops and combinations on two pipe organs. The physical strength of each of these sounds was measured, and they were analyzed in 1/3-octave band spectra. Correlations between the physical parameters and the musical judgments were examined. Results of this study provide some support for the hypothesis that loudness calculated by a procedure such as Zwicker's will be a good predictor of the steady aspect of musical dynamic strength, whereas a simple unweighted sound level in decibels is rather poor.
APA, Harvard, Vancouver, ISO, and other styles
27

Burke, Elisa, and Johannes Hensel. "Sound Source System for Investigating the Auditory Perception of Infrasound Accompanied by Audio Sound." Acta Acustica united with Acustica 105, no. 5 (July 1, 2019): 869–74. http://dx.doi.org/10.3813/aaa.919366.

Full text
Abstract:
To gather more basic knowledge about both infrasound-perception mechanisms and the annoyance caused by infrasound, it is important to investigate the influence of the interaction between infrasound and sound at frequencies inside the common audio frequency range (audio sound) on the auditory perception. This paper gives a detailed description of a newly developed sound source system allowing simultaneous monaural stimulation of listeners with infrasound and audio-sound stimuli in psychoacoustic experiments. The sound source system covers a frequency range between 4 Hz and 6000 Hz. It can generate infrasound stimuli and audio-sound stimuli up to at least 123 dB SPL and 80 dB SPL, respectively, with inaudible harmonic distortions. Likewise, during simultaneous generation of high-level infrasound and audio sound, residual unwanted modulation frequencies remain imperceptible, owing to special design features. It can be concluded that the sound source system is suitable for investigating the auditory perception of infrasound accompanied by audio sound.
APA, Harvard, Vancouver, ISO, and other styles
28

Baumgartner, Robert, and Piotr Majdak. "Decision making in auditory externalization perception: model predictions for static conditions." Acta Acustica 5 (2021): 59. http://dx.doi.org/10.1051/aacus/2021053.

Full text
Abstract:
Under natural conditions, listeners perceptually attribute sounds to external objects in their environment. This core function of perceptual inference is often distorted when sounds are produced via hearing devices such as headphones or hearing aids, resulting in sources being perceived unrealistically close or even inside the head. Psychoacoustic studies suggest a mixed role of various monaural and interaural cues contributing to the externalization process. We developed a model framework for perceptual externalization able to probe the contribution of cue-specific expectation errors and to contrast dynamic versus static strategies for combining those errors within static listening environments. Effects of reverberation and visual information were not considered. The model was applied to various acoustic distortions as tested under various spatially static conditions in five previous experiments. Most accurate predictions were obtained for the combination of monaural and interaural spectral cues with a fixed relative weighting (approximately 60% of monaural and 40% of interaural). That model version was able to reproduce the externalization rating of the five experiments with an average error of 12% (relative to the full rating scale). Further, our results suggest that auditory externalization in spatially static listening situations underlies a fixed weighting of monaural and interaural spectral cues, rather than a dynamic selection of those auditory cues.
APA, Harvard, Vancouver, ISO, and other styles
29

Batlle-Roca, Roser, Perfecto Herrera-Boyer, Blai Meléndez, Emilio Molina, and Xavier Serra. "Towards a Characterization of Background Music Audibility in Broadcasted TV." International Journal of Environmental Research and Public Health 20, no. 1 (December 22, 2022): 123. http://dx.doi.org/10.3390/ijerph20010123.

Full text
Abstract:
In audiovisual contexts, different conventions determine the level at which background music is mixed into the final program, and sometimes, the mix renders the music to be practically or totally inaudible. From a perceptual point of view, the audibility of music is subject to auditory masking by other aural stimuli such as voice or additional sounds (e.g., applause, laughter, horns), and is also influenced by the visual content that accompanies the soundtrack, and by attentional and motivational factors. This situation is relevant to the music industry because, according to some copyright regulations, the non-audible background music must not generate any distribution rights, and the marginally audible background music must generate half of the standard value of audible music. In this study, we conduct two psychoacoustic experiments to identify several factors that influence background music perception, and their contribution to its variable audibility. Our experiments are based on auditory detection and chronometric tasks involving keyboard interactions with original TV content. From the collected data, we estimated a sound-to-music ratio range to define the audibility threshold limits of the barely audible class. In addition, results show that perception is affected by loudness level, listening condition, music sensitivity, and type of television content.
APA, Harvard, Vancouver, ISO, and other styles
30

Pleban, Dariusz. "Definition and Measure of the Sound Quality of the Machine." Archives of Acoustics 39, no. 1 (March 1, 2015): 17–23. http://dx.doi.org/10.2478/aoa-2014-0003.

Full text
Abstract:
The analysis of the available literature indicates that tests of product sound quality that do not involve groups of listeners evaluating the sounds emitted by these products are carried out neither in Poland nor elsewhere in the world. As a result, product sound quality is determined on the basis of psychoacoustic information and comprises both objective and subjective factors of sound perception. With reference to those factors and to the different life cycles of the machine, an original definition of the “sound quality of the machine” has been developed and presented in this article. The global index of the acoustic quality of the machine, accounting for the relations between the noise level at the workstation and selected parameters characterising both the machine's sound activity and the working environment, was adopted as the measure of the sound quality of the machine. The experiments that followed confirmed the appropriateness of the assessment made with the use of the global index of acoustic quality.
APA, Harvard, Vancouver, ISO, and other styles
31

Cucis, P. A., C. Berger-Vachon, R. Hermann, H. Thaï-Van, S. Gallego, and E. Truy. "Cochlear Implant: Effect of the Number of Channel and Frequency Selectivity on Speech Understanding in Noise Preliminary Results in Simulation with Normal-Hearing Subjects." Modelling, Measurement and Control C 81, no. 1-4 (December 31, 2020): 17–23. http://dx.doi.org/10.18280/mmc_c.811-404.

Full text
Abstract:
The cochlear implant is the most successful implantable device for the rehabilitation of profound deafness. However, in some cases, the electrical stimulation delivered by the electrode can spread inside the cochlea, creating overlap and interaction between frequency channels. By using channel-selection algorithms such as the “nofm” coding strategy, channel interaction can be reduced. This paper describes the preliminary results of experiments conducted with normal-hearing subjects (n = 9). Using a vocoder, the present study simulated hearing through a cochlear implant. Speech understanding in noise was measured by varying the number of selected channels (“nofm”: 4, 8, 12 and 16 of 20) and the degree of simulated channel interaction (“Low”, “Medium”, “High”). Also, with the vocoder, we evaluated the impact of simulated channel interaction on frequency selectivity by measuring psychoacoustic tuning curves. The results showed a significant average effect of the signal-to-noise ratio (p < 0.0001), the degree of channel interaction (p < 0.0001) and the number of selected channels (p = 0.029). The highest degree of channel interaction significantly decreases intelligibility as well as frequency selectivity. These results underline the importance of measuring channel interaction for cochlear-implanted patients, both to have a prognostic test and to adjust fitting methods accordingly. The next step of this project will be to transpose these experiments to implant users, to support our results.
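The "nofm" strategy referred to above stimulates, in each analysis cycle, only the n channels (out of m) with the largest envelope amplitudes, which reduces channel interaction. A minimal illustration of that selection step (not the clinical implementation):

```python
def n_of_m_select(envelopes, n):
    """Keep the n largest channel envelopes for stimulation; zero the rest."""
    keep = set(sorted(range(len(envelopes)),
                      key=lambda i: envelopes[i], reverse=True)[:n])
    return [e if i in keep else 0.0 for i, e in enumerate(envelopes)]

# 2-of-4 example: only the two strongest channels survive the cycle
selected = n_of_m_select([0.1, 0.9, 0.4, 0.7], n=2)
```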
APA, Harvard, Vancouver, ISO, and other styles
32

Paté, Arthur, Nicolas Côté, Charles Croënne, Jérôme Vasseur, and Anne-Christine Hladky-Hennion. "Perception of loudness changes induced by a phononic crystal in specific frequency bands." Acta Acustica 6 (2022): 42. http://dx.doi.org/10.1051/aacus/2022037.

Full text
Abstract:
To study the influence of classical phononic crystal (PC) structures on the acoustical characteristics of a sound source, a combined acoustics/perceptual analysis is conducted on a PC specially designed to exhibit several spectral and wave vector properties in different audible frequency ranges. The properties, confirmed by both numerical calculations and experiments, consist in both partial and absolute band gaps, as well as a negative refraction band. A psychoacoustic feature, namely the loudness in third-octave bands, is estimated from numerical simulations of the acoustic field behind the crystal. Additional perceptual tests are conducted to evaluate the efficiency of the PC slab. In the frequency range of the band gaps, sound stimuli filtered by the PC’s impulse response are perceived as softer than stimuli resulting from free-field propagation (FF); they are also perceived as equally (or nearly equally) loud as sounds attenuated by a free-standing rigid wall (FS). In the frequency range of the focalization (negative refraction), PC sound stimuli sound louder than both FS and FF sound stimuli. The possibility of designing an efficient sound barrier based on the considered PC is finally discussed.
APA, Harvard, Vancouver, ISO, and other styles
33

Ramos, Oscar Alberto, and Fabián Carlos Tommasini. "Magnitude Modelling of HRTF Using Principal Component Analysis Applied to Complex Values." Archives of Acoustics 39, no. 4 (March 1, 2015): 477–82. http://dx.doi.org/10.2478/aoa-2014-0051.

Full text
Abstract:
Principal components analysis (PCA) is frequently used for modelling the magnitude of head-related transfer functions (HRTFs). Assuming that the HRTFs are minimum-phase systems, the phase is obtained from the Hilbert transform of the log-magnitude. In recent years, PCA applied to HRTFs has also been used to model individual HRTFs by relating the PCA weights to anthropometric measurements of the head, torso and pinnae. The HRTF log-magnitude is the most commonly used format of input data to the PCA, but it has been shown that if the input data is HRTF linear magnitude, the cumulative variance converges faster and the mean square error (MSE) is smaller. This study demonstrates that PCA applied directly to HRTF complex values is even better than the two formats mentioned above; that is, the MSE is the smallest and the cumulative variance converges faster after the 8th principal component. Different objective experiments across the median plane reveal differences that, although small, seem to be perceptually detectable. To elucidate this point, psychoacoustic discrimination tests are done between measured and reconstructed HRTFs from the three types of input data mentioned, in the median plane between -45° and +9°.
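The complex-valued variant compared in this study is straightforward to prototype, since NumPy's SVD handles complex matrices directly: centering the complex HRTF matrix and truncating its SVD yields principal components and weights whose rank-k reconstruction can then be compared against the log-magnitude or linear-magnitude variants. A sketch with random stand-in data (not measured HRTFs):

```python
import numpy as np

def pca_complex(X, n_components):
    """PCA on the complex-valued rows of X via SVD of the centered data.
    Returns (mean, components, weights) with X ~ mean + weights @ components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vh = np.linalg.svd(Xc, full_matrices=False)   # SVD works on complex input
    components = Vh[:n_components]
    weights = Xc @ components.conj().T
    return mean, components, weights

# Toy stand-in: 20 directions x 64 complex frequency bins
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 64)) + 1j * rng.standard_normal((20, 64))
mean, comps, w = pca_complex(X, 8)
mse = np.mean(np.abs(X - (mean + w @ comps)) ** 2)
```

Keeping all components reconstructs the data exactly; truncating to 8 components leaves the residual MSE that the paper uses to compare the three input formats.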
APA, Harvard, Vancouver, ISO, and other styles
34

YU, Lei, and Jian KANG. "USING ANN TO STUDY SOUND PREFERENCE EVALUATION IN URBAN OPEN SPACES." JOURNAL OF ENVIRONMENTAL ENGINEERING AND LANDSCAPE MANAGEMENT 23, no. 3 (September 29, 2015): 163–71. http://dx.doi.org/10.3846/16486897.2015.1050399.

Full text
Abstract:
In soundscape research, subjective preference evaluation of a sound is crucial. Based on a series of field studies and laboratory experiments, the influence of sound category and psychoacoustic parameters on sound preference evaluation is examined. It has been found that sound category, loudness, and sharpness are important. Consistent with a previous study, age and education level also influence sound preference evaluation. In order to understand users' preferences in terms of sound at the design stage, prediction of sound preference evaluation is essential. As sound preference evaluation is complicated and influenced by various factors both linearly and non-linearly, artificial neural networks (ANNs) have been explored to predict sound preference evaluation. A number of developed ANN models have been demonstrated, and it has been found that the models including the input factors of sound category, loudness and sharpness produce better predictions than others. The best prediction model is the one based on an individual case-study site. Based on the best prediction model, a mapping tool for sound preference evaluation has been developed, and its usefulness for aiding landscape architects and urban designers has been demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
35

Johnson-Laird, Phil N., Olivia E. Kang, and Yuan Chang Leong. "On Musical Dissonance." Music Perception 30, no. 1 (September 1, 2012): 19–35. http://dx.doi.org/10.1525/mp.2012.30.1.19.

Full text
Abstract:
Psychoacoustic theories of dissonance often follow Helmholtz and attribute it to partials (fundamental frequencies or overtones) near enough in frequency to affect the same region of the basilar membrane and therefore to cause roughness, i.e., rapid beating. In contrast, tonal theories attribute dissonance to violations of harmonic principles embodied in Western music. We propose a dual-process theory that embeds roughness within tonal principles. The theory predicts the robust increasing trend in the dissonance of triads: major < minor < diminished < augmented. Previous experiments used too few chords for a comprehensive test of the theory, and so Experiment 1 examined the rated dissonance of all 55 possible three-note chords, and Experiment 2 examined a representative sample of 48 of the possible four-note chords. The participants' ratings concurred reliably and corroborated the dual-process theory. Experiment 3 showed that, as the theory predicts, consonant chords are rated as less dissonant when they occur in a tonal sequence (the cycle of fifths) than in a random sequence, whereas this manipulation has no reliable effect on dissonant chords outside common musical practice.
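The roughness mechanism invoked here can be stated quantitatively: two partials at f1 and f2 close enough to share a critical band produce an amplitude envelope that beats at |f1 − f2| Hz, and once that rate reaches tens of beats per second the fluctuation is heard as roughness rather than as distinct beats. A small illustration:

```python
import math

def beat_rate_hz(f1, f2):
    """Beat rate of two close partials: the envelope fluctuates at |f1 - f2| Hz."""
    return abs(f1 - f2)

def envelope(t, f1, f2):
    """Envelope of the sum of two equal-amplitude unit sinusoids at time t (s):
    cos(2*pi*f1*t) + cos(2*pi*f2*t) = 2*cos(pi*(f1-f2)*t) * cos(pi*(f1+f2)*t)."""
    return abs(2.0 * math.cos(math.pi * (f1 - f2) * t))
```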
APA, Harvard, Vancouver, ISO, and other styles
36

Manohare, Manish, Bhavya Garg, E. Rajasekar, and Manoranjan Parida. "Evaluation of change in heart rate variability due to different soundscapes." Noise Mapping 9, no. 1 (January 1, 2022): 234–48. http://dx.doi.org/10.1515/noise-2022-0158.

Full text
Abstract:
Soundscapes affect the health and quality of life of humans. Noisy soundscapes have a negative impact on humans, causing annoyance, sleep disturbance and cardiovascular issues. This paper analyses the change in heart rate variability (HRV) due to exposure to different soundscape stimuli. A total of 40 soundscape stimuli were collected from New Delhi, India, and grouped into three clusters, ‘Loud’, ‘Active’ and ‘Silent’, based on psychoacoustic indicators. Listening experiments were conducted with 25 healthy participants, during which electrocardiography responses were collected as the response variable. HRV analysis was performed to analyse the change in time-domain (heart rate, SDNN, NN50, pNN50) and frequency-domain (VLF, LF, HF, LF/HF ratio) parameters. A significant change in heart rate is observed with an increase in loudness of stimuli. The change in HRV is analysed by considering the noise sensitivity level of participants. A significant decrease in SDNN is noted for participants with high noise sensitivity. Frequency-domain parameters of HRV did not exhibit a significant change due to noise exposure. A significant decrease in SDNN suggests imbalanced autonomic nervous system activation, which increases the risk of cardiovascular diseases, particularly for people with high noise sensitivity.
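The time-domain parameters listed (heart rate, SDNN, NN50, pNN50) are all simple statistics of the beat-to-beat (RR) interval series extracted from the ECG recordings. A stdlib-only sketch with made-up intervals:

```python
import math

def hrv_time_domain(rr_ms):
    """Time-domain HRV from RR intervals in ms: (mean HR, SDNN, NN50, pNN50)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    heart_rate = 60000.0 / mean_rr                       # beats per minute
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [abs(b - a) for a, b in zip(rr_ms, rr_ms[1:])]
    nn50 = sum(1 for d in diffs if d > 50.0)             # successive diffs > 50 ms
    pnn50 = 100.0 * nn50 / len(diffs)
    return heart_rate, sdnn, nn50, pnn50

hr, sdnn, nn50, pnn50 = hrv_time_domain([800, 810, 790, 860, 805, 795])
```

A lower SDNN under loud stimuli is the pattern the paper reports for noise-sensitive participants.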
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Song, Roman Schlieper, Aly Tobbala, and Jürgen Peissig. "The Influence of Binaural Room Impulse Responses on Externalization in Virtual Reality Scenarios." Applied Sciences 11, no. 21 (October 30, 2021): 10198. http://dx.doi.org/10.3390/app112110198.

Full text
Abstract:
A headphone-based virtual sound image cannot be perceived as perfectly externalized if the acoustics of the synthesized room do not match those of the real listening environment. This effect has been well explored and is known as the room divergence effect (RDE). The RDE is important for perceived externalization of virtual sounds if listeners are aware of the room-related auditory information provided by the listening environment. In the case of virtual reality (VR) applications, users get a visual impression of the virtual room but may not be aware of its auditory information. It is unknown whether the acoustic congruence between the synthesized (binaurally rendered) room and the visual-only virtual listening environment is important for externalization. VR-based psychoacoustic experiments were performed, and the results reveal that perceived externalization of virtual sounds depends on listeners’ expectations of the acoustics of the visual-only virtual room. The virtual sound images can be perceived as externalized, although there is an acoustic divergence between the binaurally synthesized room and the visual-only virtual listening environment. However, the “correct” room information in binaural sounds may lead to degraded externalization if the acoustic properties of the room do not match listeners’ expectations.
APA, Harvard, Vancouver, ISO, and other styles
38

Tsujimura, Sohei, Motoki Yairi, Takayoshi Okita, and Mayu Nidaira. "Study on psychological evaluation model of a good conversation in knowledge creative activity by multiple people." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 6 (August 1, 2021): 442–49. http://dx.doi.org/10.3397/in-2021-1478.

Full text
Abstract:
In recent years, Japanese companies have been focusing on enhancing the knowledge creative activities of office workers, and the way of working in the office is shifting from conventional divisional routine work to collaborative and creative work. At the same time, office spaces are becoming quiet, and the number of extremely quiet spaces with noise levels below 40 dB is increasing. Previous studies have reported that a sound environment that is too quiet gives workers the impression that it is difficult to have a conversation; further accumulation of research results is therefore desired for the construction of a sound environment that enhances knowledge creative activities. In this study, focusing on the relationship between the sound environment and intellectual productivity, we investigated a sound environment suitable for knowledge creative activities by multiple people. Psychoacoustic experiments were conducted to examine the effects of sound pressure level (signal-to-noise ratio), type of sound, and reverberation time of the meeting room on the impression of a "good conversation". Furthermore, using the psychological evaluation data of the experimental participants, a causal model of the psychological evaluation of a "good conversation" was examined by multiple regression analysis, and the psychological factors that contribute to this impression were clarified.
APA, Harvard, Vancouver, ISO, and other styles
39

Yin, Shanbin, Zhengqi Gu, Yiqi Zong, Ledian Zheng, Zhendong Yang, and Taiming Huang. "Sound quality evaluation of automobile side-window buffeting noise based on large-eddy simulation." Journal of Low Frequency Noise, Vibration and Active Control 38, no. 2 (December 11, 2018): 207–23. http://dx.doi.org/10.1177/1461348418816268.

Full text
Abstract:
Large-eddy simulation (LES) and detached-eddy simulation (DES) were each applied to a simple cavity model to calculate wind buffeting noise, and the results were verified by wind tunnel experiments. They show that LES is more suitable for wind buffeting noise calculation. The LES method was then employed to calculate automobile side-window buffeting noise, and the correctness of the results was validated by a road test. In this paper, the acoustically calculated sound pressure level (SPL) spectral curve is used as the initial signal of the acoustic post-processing. Four objective psychoacoustic parameters, namely loudness, sharpness, roughness and fluctuation, were obtained by using Matlab R2016a to compile the calculation process. Sound quality evaluation (SQE) of the vehicle is performed via the most frequently used SPL and the four calculated comfort indices. It can be concluded that with the increase of driving velocity, SPL and loudness show an increasing trend, while roughness, sharpness and fluctuation present a decreasing trend. It can also be summarised that with the increase of the window opening degree, SPL and loudness show an increasing trend, sharpness presents a decreasing trend, and roughness and fluctuation display a trend of ups and downs. The main original contribution of this paper is the accurate calculation of wind buffeting noise and the summary of how SPL, loudness, roughness, sharpness and fluctuation change with velocity and window opening degree.
APA, Harvard, Vancouver, ISO, and other styles
40

Tsujimura, Sohei, Motoki Yairi, and Takayoshi Okita. "A Psychological Evaluation Model of a Good Conversation in Knowledge Creative Activities by Multiple People." Applied Sciences 12, no. 5 (February 22, 2022): 2265. http://dx.doi.org/10.3390/app12052265.

Full text
Abstract:
Japanese companies have been focusing on enhancing the knowledge creative activities of older office workers in recent years. In addition, the way of working in the office has been shifting from traditional divisional routine work to collaborative or creative work, and office spaces are becoming quieter, with an increasing number of extremely quiet spaces (noise level < 40 dB). A sound environment that is too quiet gives workers the impression that it is difficult to converse with others, because they are worried about what people around them may think. The appearance of the knowledge creative society in recent years has led to a desire for changes in the workplace environment to improve the productivity of intellectual activities. To realize a sound environment that encourages knowledge creative activities, study outcomes need to be accumulated. Therefore, to clarify what kind of sound environment would be appropriate for knowledge creative activities by multiple people, we conducted psychoacoustic experiments to examine the effects of sound pressure level (signal-to-noise ratio), type of sound, and reverberation time in conference rooms on the impression of a “good conversation”. In addition, we considered a causal model for the psychological evaluation of a “good conversation” by conducting a multiple regression analysis of psychological evaluations of the experimental participants. The results indicated that a sound environment considered too quiet for multiple people to have discussions about knowledge creative activities lowers the impression of a “good conversation”, whereas high levels of relaxation lead to the impression of a “good conversation”.
APA, Harvard, Vancouver, ISO, and other styles
41

Frühholz, Sascha, Joris Dietziker, Matthias Staib, and Wiebke Trost. "Neurocognitive processing efficiency for discriminating human non-alarm rather than alarm scream calls." PLOS Biology 19, no. 4 (April 13, 2021): e3000751. http://dx.doi.org/10.1371/journal.pbio.3000751.

Full text
Abstract:
Across many species, scream calls signal the affective significance of events to other agents. Scream calls were often thought to be of generic alarming and fearful nature, to signal potential threats, with instantaneous, involuntary, and accurate recognition by perceivers. However, scream calls are more diverse in their affective signaling nature than being limited to fearfully alarming a threat, and thus the broader sociobiological relevance of various scream types is unclear. Here we used 4 different psychoacoustic, perceptual decision-making, and neuroimaging experiments in humans to demonstrate the existence of at least 6 psychoacoustically distinctive types of scream calls of both alarming and non-alarming nature, rather than there being only screams caused by fear or aggression. Second, based on perceptual and processing sensitivity measures for decision-making during scream recognition, we found that alarm screams (with some exceptions) were overall discriminated the worst, were responded to the slowest, and were associated with a lower perceptual sensitivity for their recognition compared with non-alarm screams. Third, the neural processing of alarm compared with non-alarm screams during an implicit processing task elicited only minimal neural signal and connectivity in perceivers, contrary to the frequent assumption of a threat processing bias of the primate neural system. These findings show that scream calls are more diverse in their signaling and communicative nature in humans than previously assumed, and, in contrast to a commonly observed threat processing bias in perceptual discriminations and neural processes, we found that especially non-alarm screams, and positive screams in particular, seem to have higher efficiency in speeded discriminations and the implicit neural processing of various scream types in humans.
APA, Harvard, Vancouver, ISO, and other styles
42

Kludt, Eugen, Waldo Nogueira, Thomas Lenarz, and Andreas Buechner. "A sound coding strategy based on a temporal masking model for cochlear implants." PLOS ONE 16, no. 1 (January 8, 2021): e0244433. http://dx.doi.org/10.1371/journal.pone.0244433.

Full text
Abstract:
Auditory masking occurs when one sound is perceptually altered by the presence of another sound. Auditory masking in the frequency domain is known as simultaneous masking, and in the time domain as temporal masking or non-simultaneous masking. This work presents a sound coding strategy that incorporates a temporal masking model to select the most relevant channels for stimulation in a cochlear implant (CI). A previous version of the strategy, termed psychoacoustic advanced combination encoder (PACE), only used a simultaneous masking model for the same purpose; for this reason, the new strategy has been termed temporal PACE (TPACE). We hypothesized that a sound coding strategy that focuses on stimulating the auditory nerve with pulses that are as masked as possible can improve speech intelligibility for CI users. The temporal masking model used within TPACE attenuates the simultaneous masking thresholds estimated by PACE over time. The attenuation is designed to fall exponentially with a strength determined by a single parameter, the temporal masking half-life T½. This parameter gives the time interval at which the simultaneous masking threshold is halved. The study group consisted of 24 postlingually deaf subjects with a minimum of six months of experience after CI activation. A crossover design was used to compare four variants of the new temporal masking strategy TPACE (T½ ranging between 0.4 and 1.1 ms) with the clinical MP3000 strategy, a commercial implementation of the PACE strategy, in two prospective, within-subject, repeated-measures experiments. The outcome measure was speech intelligibility in noise at 15 to 5 dB SNR. In two consecutive experiments, TPACE with a T½ of 0.5 ms obtained speech performance increases of 11% and 10%, respectively, with respect to the MP3000 (T½ = 0 ms). The improved speech test scores correlated with the clinical performance of the subjects: CI users with above-average outcomes in their routine speech tests showed a higher benefit with TPACE. It seems that the consideration of short-acting temporal masking can improve speech intelligibility in CI users. The half-life with the highest average speech perception benefit (0.5 ms) corresponds to time scales that are typical of neuronal refractory behavior.
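The abstract fixes only the shape of the temporal masking model: the simultaneous threshold estimated by PACE decays exponentially after the masker, halving every T½. A sketch of that decay in linear threshold units (the authors' actual units and implementation are not given in the abstract):

```python
def temporal_masking_threshold(m0, dt_ms, t_half_ms):
    """Masking threshold dt_ms after the masker, starting from the
    simultaneous threshold m0 and halving every t_half_ms milliseconds."""
    return m0 * 0.5 ** (dt_ms / t_half_ms)

# With the best-performing half-life of 0.5 ms, the threshold is
# halved 0.5 ms after the masker and quartered after 1.0 ms.
```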
APA, Harvard, Vancouver, ISO, and other styles
44

Bell, Eamonn. "Cybernetics, Listening, and Sound-Studio Phenomenotechnique in Abraham Moles’s Théorie de l’information et perception esthétique (1958)." Resonance 2, no. 4 (2021): 523–58. http://dx.doi.org/10.1525/res.2021.2.4.523.

Full text
Abstract:
In his Théorie de l’information et perception esthétique (1958), the sociologist of culture Abraham Moles (1920–92) set out to demonstrate the applicability of information theory—a mathematical linchpin of cybernetics—to the arts more generally. Moles drew on classical psychophysics, Gestalt psychology, more modern behavioral psychology, and contemporary psychoacoustic research to advocate a cybernetic model of the perception and creation of art. Moles repeatedly returned to musical examples therein to make his case, leveraging his dual expertise in philosophy and electroacoustics, drawing on formative experiences with Pierre Schaeffer in Paris and Hermann Scherchen at his Gravesano studio. Moles’s interdisciplinary text found many attentive readers across Europe and, following an English translation by the precocious Joel E. Cohen (1966), the Anglophone academic world, but it was valued more as an inspiration for the burgeoning area of “information aesthetics” than as a source of hard scientific evidence. Drawing lightly on positions in the history and philosophy of science articulated by Gaston Bachelard (who supervised Moles’s second PhD, in philosophy) and Hans-Jörg Rheinberger suggests a change of emphasis away from its apparent scientific infelicities and toward Moles’s use of sound-studio technique, which is described with reference to the technologies available to Moles in the years leading up to the publication of the Théorie. Moles manipulated and processed sound recordings—filtering, clipping, and reversing them—in his attempts to empirically estimate the relative proportions of semantic and aesthetic information in speech and music. 
Moles’s text, when understood in tandem with the traces of his practical experiments in the sound studio, appears as an influential and occasionally prescient exposition of the many possible applications of the principles of information theory to the production, perception, and consumption of sound culture that makes ready use of the latest technical innovations in the media environment of its time.
APA, Harvard, Vancouver, ISO, and other styles
45

Vranić‐Sowers, S., H. Versnel, and S. A. Shamma. "Phase sensitivity in psychoacoustical and physiological experiments." Journal of the Acoustical Society of America 93, no. 4 (April 1993): 2410. http://dx.doi.org/10.1121/1.405938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bolton, Matthew L., Xi Zheng, Meng Li, Judy Reed Edworthy, and Andrew D. Boyd. "An Experimental Validation of Masking in IEC 60601-1-8:2006-Compliant Alarm Sounds." Human Factors: The Journal of the Human Factors and Ergonomics Society 62, no. 6 (August 14, 2019): 954–72. http://dx.doi.org/10.1177/0018720819862911.

Full text
Abstract:
Objective This research investigated whether the psychoacoustics of simultaneous masking, which are integral to a model-checking-based method, previously developed for detecting perceivability problems in alarm configurations, could predict when IEC 60601-1-8-compliant medical alarm sounds are audible. Background The tonal nature of sounds prescribed by IEC 60601-1-8 makes them potentially susceptible to simultaneous masking: where concurrent sounds render one or more inaudible due to human sensory limitations. No work has experimentally assessed whether the psychoacoustics of simultaneous masking accurately predict IEC 60601-1-8 alarm perceivability. Method In two signal detection experiments, 28 nursing students judged whether alarm sounds were present in collections of concurrently sounding standard-compliant tones. The first experiment used alarm sounds with single-frequency (primary harmonic) tones. The second experiment’s sounds included the additional, standard-required frequencies (often called subharmonics). T tests compared miss, false alarm, sensitivity, and bias measures between masking and nonmasking conditions and between the two experiments. Results Miss rates were significantly higher and sensitivity was significantly lower for the masking condition than for the nonmasking one. There were no significant differences between the measures of the two experiments. Conclusion These results validate the predictions of the psychoacoustics of simultaneous masking for medical alarms and the masking detection capabilities of our method that relies on them. The results also show that masking of an alarm’s primary harmonic is sufficient to make an alarm sound indistinguishable. Application Findings have profound implications for medical alarm design, the international standard, and masking detection methods.
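The miss, false-alarm, sensitivity, and bias measures compared in this study are the standard quantities of signal detection theory. The paper's exact analysis is not reproduced here; the sketch below only shows the textbook formulas for sensitivity (d') and response bias (c) from hit and false-alarm rates.

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), where z is the
    inverse of the standard normal CDF. Higher d' means the listener
    distinguishes alarm-present from alarm-absent trials more reliably."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response bias c = -(z(hit rate) + z(false-alarm rate)) / 2.
    Positive c indicates a conservative bias toward responding 'absent'."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

# A listener with 90% hits and 10% false alarms:
print(round(dprime(0.9, 0.1), 2))   # 2.56
print(round(criterion(0.9, 0.1), 2))  # 0.0 (no bias)
```

Under simultaneous masking, the study's pattern (higher misses, lower sensitivity) corresponds to a drop in d' rather than a shift in c.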
APA, Harvard, Vancouver, ISO, and other styles
47

Shore, Aimee, Anthony J. Tropiano, and William M. Hartmann. "Matched transaural synthesis with probe microphones for psychoacoustical experiments." Journal of the Acoustical Society of America 145, no. 3 (March 2019): 1313–30. http://dx.doi.org/10.1121/1.5092203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Kremer, Martina, and Detlef Krahe. "A learning program in psychoacoustics with interactive computer‐based listening experiments." Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1214. http://dx.doi.org/10.1121/1.425855.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Nagel, Frederik, Reinhard Kopiez, Oliver Grewe, and Eckart Altenmüller. "Psychoacoustical correlates of musically induced chills." Musicae Scientiae 12, no. 1 (March 2008): 101–13. http://dx.doi.org/10.1177/102986490801200106.

Full text
Abstract:
Music listening is often accompanied by the experience of emotions, sometimes even by so-called “strong experiences of music” (SEMs). SEMs can include such pleasurable reactions as shivers down the spine or goose pimples, which are referred to as “chills”. In the present study, the role of psychoacoustical features was investigated with respect to the experience of chills. Psychoacoustical parameters of short musical segments (total duration: 20 s), characterized as chill-inducing, were analyzed and compared with musical excerpts which did not induce chill responses. A significant increase of loudness in the frequency range between 8 and 18 Bark (920–4400 Hz) was found in those excerpts for which chills were reported. Frequency-dependent changes of loudness seem to play an important role in the induction of chills.
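The 8–18 Bark range reported above maps onto 920–4400 Hz via the Bark critical-band scale. The study's own conversion method is not stated; the sketch below uses Traunmüller's (1990) closed-form approximation, a common stand-in for Zwicker's tabulated scale, to check that correspondence.

```python
def hz_to_bark(f_hz):
    """Traunmüller's (1990) approximation of the Bark critical-band scale:
    z = 26.81 * f / (1960 + f) - 0.53, valid over roughly 200-6700 Hz."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# The band edges quoted in the abstract:
for f in (920, 4400):
    print(f, "Hz ->", round(hz_to_bark(f), 1), "Bark")  # 8.0 and 18.0
```

The formula recovers the abstract's pairing of 920 Hz with 8 Bark and 4400 Hz with 18 Bark to within rounding.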
APA, Harvard, Vancouver, ISO, and other styles
50

Klink, Karin B., Garnet Bendig, and Georg M. Klump. "Operant methods for mouse psychoacoustics." Behavior Research Methods 38, no. 1 (February 2006): 1–7. http://dx.doi.org/10.3758/bf03192744.

Full text
APA, Harvard, Vancouver, ISO, and other styles