Academic literature on the topic 'Simultaneous Sound Sources'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Simultaneous Sound Sources.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Simultaneous Sound Sources"

1

Xiang, Ning, and Christopher Landschoot. "Bayesian Inference for Acoustic Direction of Arrival Analysis Using Spherical Harmonics." Entropy 21, no. 6 (June 10, 2019): 579. http://dx.doi.org/10.3390/e21060579.

Abstract:
This work applies two levels of inference within a Bayesian framework to estimate the directions of arrival (DoAs) of sound sources. The sensing modality is a spherical microphone array based on spherical harmonics beamforming. When estimating the DoA, the acoustic signals may contain one or multiple simultaneous sources. Using two levels of Bayesian inference, this work begins by estimating the correct number of sources via the higher level of inference, Bayesian model selection. It then estimates the directional information of each source via the lower level of inference, Bayesian parameter estimation. The work formulates signal models using spherical harmonic beamforming that encode prior information on the sensor array in the form of analytical models with an unknown number of sound sources and their locations. Available information on differences between the model and the sound signals, as well as prior information on directions of arrival, is incorporated based on the principle of maximum entropy. Two and three simultaneous sound sources have been tested experimentally without prior information on the number of sources. Bayesian inference provides unambiguous estimation of the correct number of sources, followed by DoA estimation for each individual source. The paper presents the Bayesian formulation and analysis results to demonstrate the potential usefulness of model-based Bayesian inference for complex acoustic environments with potentially multiple simultaneous sources.
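The two-level scheme described in the abstract can be illustrated with a toy sketch. This is not the authors' spherical-harmonics formulation: as a stand-in, a one-dimensional "beamformer response" is simulated with Gaussian lobes, model selection over the number of sources is approximated with the Bayesian information criterion (BIC), and the fit is a simple greedy peak subtraction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic directional response: two sources at 60 and 150 degrees plus noise.
angles = np.linspace(0.0, 360.0, 361)

def lobe(center, width=12.0):
    d = np.minimum(np.abs(angles - center), 360.0 - np.abs(angles - center))
    return np.exp(-0.5 * (d / width) ** 2)

observed = 1.0 * lobe(60.0) + 0.8 * lobe(150.0) + 0.05 * rng.standard_normal(angles.size)

def fit_k_sources(y, k):
    """Greedy fit: place k lobes at the largest residual peaks; return (doas, rss)."""
    residual = y.copy()
    doas = []
    for _ in range(k):
        c = angles[np.argmax(residual)]
        amp = residual.max()
        residual = residual - amp * lobe(c)
        doas.append(float(c))
    return doas, float(np.sum(residual ** 2))

def bic(rss, n, k):
    # Each source contributes two parameters here (direction and amplitude).
    return n * np.log(rss / n) + 2 * k * np.log(n)

# "Model selection": pick the source count with the lowest BIC,
# then "parameter estimation": read off the fitted directions.
n = angles.size
scores = {k: bic(fit_k_sources(observed, k)[1], n, k) for k in (1, 2, 3, 4)}
best_k = min(scores, key=scores.get)
doas, _ = fit_k_sources(observed, best_k)
print(best_k, sorted(doas))
```

BIC is only a rough surrogate for the Bayesian evidence the paper computes, but the structure (count first, then directions) is the same.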
2

Frank, Matthias. "Source Width of Frontal Phantom Sources: Perception, Measurement, and Modeling." Archives of Acoustics 38, no. 3 (September 1, 2013): 311–19. http://dx.doi.org/10.2478/aoa-2013-0038.

Abstract:
Phantom sources are known to be perceived similarly to real sound sources, but with some differences. One of these differences is an increase in the perceived source width. This article discusses the perception, measurement, and modeling of source width for frontal phantom sources with different symmetrical arrangements of up to three active loudspeakers. The perceived source width is evaluated on the basis of a listening test. The test results are compared to technical measures applied in room acoustics: the inter-aural cross-correlation coefficient (IACC) and the lateral energy fraction (LF). Adapting the latter measure makes it possible to predict the results by considering simultaneous sound incidence. Finally, a simple model is presented for predicting the perceived source width that requires no acoustic measurements, as it is based solely on the loudspeaker directions and gains.
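The IACC mentioned in the abstract is conventionally defined as the peak of the normalized interaural cross-correlation within a ±1 ms lag window. A minimal sketch on synthetic binaural noise (simplified: circular shifts, no ear-canal filtering, invented delay and gain values):

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(1)

# Synthetic binaural pair: the right ear gets a delayed, attenuated copy of the
# left-ear signal (0.3 ms interaural delay) plus a little independent noise.
delay = int(0.0003 * fs)
left = rng.standard_normal(fs // 10)
right = 0.9 * np.roll(left, delay) + 0.1 * rng.standard_normal(left.size)

def iacc(l, r, fs, max_lag_ms=1.0):
    """Peak of the normalized interaural cross-correlation within +/- 1 ms."""
    max_lag = int(max_lag_ms * 1e-3 * fs)
    norm = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        c = np.sum(l * np.roll(r, lag)) / norm  # circular shift as an approximation
        best = max(best, abs(c))
    return best

print(round(iacc(left, right, fs), 3))
```

A highly coherent pair like this yields an IACC near 1; decorrelated ear signals (the cue associated with larger perceived source width) drive it toward 0.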
3

Vannier, Michaël, and Etienne Parizet. "Loudness of a multi-tonal sound field, consisting of either one two-component complex sound source or two simultaneous spatially distributed sound sources." Journal of the Acoustical Society of America 136, no. 4 (October 2014): 2309. http://dx.doi.org/10.1121/1.4900356.

4

Chen, Xiaohui, Hao Sun, and Heng Zhang. "A New Method of Simultaneous Localization and Mapping for Mobile Robots Using Acoustic Landmarks." Applied Sciences 9, no. 7 (March 30, 2019): 1352. http://dx.doi.org/10.3390/app9071352.

Abstract:
The simultaneous localization and mapping (SLAM) problem for mobile robots has long been a hotspot in robotics. SLAM using visual sensors and laser radar is easily affected by the field of view and ground conditions. Given these limitations of traditional sensors, this paper presents a novel method for performing SLAM using acoustic signals. The method enables robots equipped with sound sources, moving within a working environment and interacting with microphones of interest, to localize themselves and map objects simultaneously. A method of microphone localization based on a sound-source array is proposed and applied as a pre-processing step to the SLAM procedure. A microphone capable of receiving sound signals can be used directly as a feature landmark in the robot's observation model, without feature extraction. Meanwhile, to eliminate the random error caused by the hardware, a sound source placed midway between two microphones was used as a calibration source to determine the value of that error. Simulations and realistic experimental results demonstrate the feasibility and effectiveness of the proposed method.
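The microphone-localization pre-processing step can be illustrated with a generic multilateration sketch (hypothetical geometry and numbers, not the paper's setup): given ranges from known source positions to an unknown microphone, subtracting one range equation from the others cancels the quadratic term and leaves a linear least-squares problem.

```python
import numpy as np

# Known sound-source positions (metres) and the unknown microphone position.
sources = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
mic_true = np.array([0.7, 1.1])

c = 343.0  # speed of sound, m/s
rng = np.random.default_rng(5)

# Measured times of flight with a little timing jitter, converted to ranges.
tof = np.linalg.norm(sources - mic_true, axis=1) / c + rng.normal(0.0, 2e-6, 4)
ranges = tof * c

# ||p - s_i||^2 = r_i^2; subtracting the i = 0 equation removes ||p||^2 and
# leaves the linear system 2 (s_i - s_0) . p = ||s_i||^2 - ||s_0||^2 - r_i^2 + r_0^2.
A = 2 * (sources[1:] - sources[0])
b = (np.sum(sources[1:] ** 2, axis=1) - np.sum(sources[0] ** 2)
     - ranges[1:] ** 2 + ranges[0] ** 2)
mic_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(mic_est, 2))
```

With four sources and two unknowns the system is overdetermined, so timing jitter is averaged out by the least-squares solve.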
5

Suzuki, Takuya, Hiroaki Otsuka, Wataru Akahori, Yoshiaki Bando, and Hiroshi G. Okuno. "Influence of Different Impulse Response Measurement Signals on MUSIC-Based Sound Source Localization." Journal of Robotics and Mechatronics 29, no. 1 (February 20, 2017): 72–82. http://dx.doi.org/10.20965/jrm.2017.p0072.

Abstract:
Two major functions provided by the robot-audition open-source software HARK, sound source localization and sound source separation, exploit the acoustic transfer functions of a microphone array to improve performance. The acoustic transfer functions are calculated from the measured acoustic impulse response. In the measurement, special signals such as the Time Stretched Pulse (TSP) are used to improve the signal-to-noise ratio of the measurement. Recent studies have identified the importance of selecting a measurement signal according to the application. In this paper, we investigate how six measurement signals (up-TSP, down-TSP, M-Series, Log-SS, NW-SS, and MN-SS) influence the performance of the MUSIC-based sound source localization provided by HARK. Experiments with simulated sounds, with up to three simultaneous sound sources, demonstrate no significant difference among the six measurement signals in MUSIC-based sound source localization.
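The impulse-response measurement underlying those transfer functions can be sketched generically. The toy example below (not HARK's tooling) excites a simulated system with an exponential (log) sine sweep and recovers the impulse response by regularized FFT deconvolution; the "room" is just a delayed, attenuated path with assumed parameters.

```python
import numpy as np

fs = 8000
dur = 1.0
t = np.linspace(0.0, dur, int(fs * dur), endpoint=False)

# Exponential sine sweep (log-SS) from 50 Hz to 3 kHz.
f0, f1 = 50.0, 3000.0
k = np.log(f1 / f0)
sweep = np.sin(2 * np.pi * f0 * dur / k * (np.exp(t / dur * k) - 1.0))

# Simulated system: a single path with 0.5 gain and 100 samples of delay.
n = 2 * sweep.size
true_delay, true_gain = 100, 0.5
recorded = np.zeros(n)
recorded[true_delay:true_delay + sweep.size] = true_gain * sweep

# Deconvolve: divide spectra with a small regularizer to avoid division by ~0.
S = np.fft.rfft(sweep, n)
R = np.fft.rfft(recorded, n)
ir = np.fft.irfft(R * np.conj(S) / (np.abs(S) ** 2 + 1e-8), n)

peak = int(np.argmax(np.abs(ir)))
print(peak, round(float(ir[peak]), 2))
```

The recovered impulse response peaks at the true delay with roughly the true gain; in a real measurement the regularizer also suppresses out-of-band noise amplification.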
6

MARKOU, Dimitris. "Exploring spatial patterns of environmental noise and perceived sound source dominance in urban areas. Case study: the city of Athens, Greece." European Journal of Geography 13, no. 4 (April 12, 2022): 60–78. http://dx.doi.org/10.48088/ejg.d.mar.13.2.060.078.

Abstract:
The aim of the present study is to map spatial patterns related to noise pollution and, in a broader context, the acoustic environment in the urban area of Athens, Greece. The primary goal is to present a comprehensive approach that combines elements of two basic methodologies in acoustic-environment studies: a) noise mapping and b) the soundscape approach. The main inputs are environmental noise measurements and perceptual sound source-related observations. The results feature three noise pollution maps (LAeq,30 sec, L10, and L90 indices) and three sound source maps reflecting the way the human ear perceives the presence of sounds. Additionally, the question of whether the spatial distribution of sound source dominance can be explained by the dispersion of environmental noise levels was examined using geographically weighted regression (GWR). The GWR models showed that sound source-related observations are explained to a significant extent by all three indicators. Four important findings emerge from the analysis. Firstly, areas with high levels of noise pollution are characterized by a high to moderate presence of technological sounds and an absence of anthropic and natural sounds. Secondly, regions where all sound sources are simultaneously present are characterized by moderate to low noise levels. Thirdly, the absence of technological sounds is observed in quiet areas. Finally, areas featuring a moderate presence of technological and natural sounds are mostly urban green spaces built in proximity to the main road network.
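Geographically weighted regression, which the study uses to relate noise indicators to source-dominance observations, fits a separate weighted least-squares model at every location, with weights decaying with spatial distance. A minimal sketch on invented synthetic data (not the Athens dataset), where the true noise-to-dominance slope drifts across the study area:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic survey: a noise level x predicts a dominance score y, with a
# regression slope that varies smoothly from west to east - the situation GWR targets.
coords = rng.uniform(0.0, 10.0, size=(200, 2))
x = rng.uniform(40.0, 80.0, 200)            # LAeq-like values, dB
true_slope = 0.5 + 0.05 * coords[:, 0]
y = true_slope * x + rng.normal(0.0, 1.0, 200)

def gwr_coeffs(at, bandwidth=2.0):
    """Weighted least squares centred at 'at' with a Gaussian spatial kernel."""
    d = np.linalg.norm(coords - at, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta  # [local intercept, local slope]

west = float(gwr_coeffs(np.array([1.0, 5.0]))[1])
east = float(gwr_coeffs(np.array([9.0, 5.0]))[1])
print(round(west, 2), round(east, 2))
```

Unlike a single global regression, the local slopes recover the west-to-east drift, which is exactly the kind of spatially varying relationship the study reports.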
7

Yaitskov, Ivan. "On the issue of formation the air noise component at workplaces of the diesel locomotives crews." MATEC Web of Conferences 224 (2018): 02024. http://dx.doi.org/10.1051/matecconf/201822402024.

Abstract:
The article is devoted to the overall acoustic system of diesel locomotives: a combination of diverse noise and vibration sources that create elevated sound pressure levels at the workplaces of drivers and locomotive crews. The sound field at the calculated points is created by the simultaneous action of airborne and structure-borne noise sources. The airborne component includes emitters inside the body structure, which radiate sound energy into enclosed air volumes, as well as external sources, in particular the wheel-rail subsystem. It can be assumed that, among the internal sources, the maximum sound power is radiated by the power plant, namely the internal combustion engine. The placement of the engine relative to the crew workplaces differs significantly between diesel locomotive types. This article therefore considers four computational schemes for the diesel locomotive, obtains analytical dependences for the sound pressure levels, and reduces them to a form convenient for engineering calculations at the design stage. Reducing the noise of the internal combustion engine at its source is practically impossible under machine-building process conditions; the main practicable approaches are the selection of sound-absorbing materials and the design of sound insulation to meet existing sanitary noise standards.
8

Folland, Nicole A., Blake E. Butler, Jennifer E. Payne, and Laurel J. Trainor. "Cortical Representations Sensitive to the Number of Perceived Auditory Objects Emerge between 2 and 4 Months of Age: Electrophysiological Evidence." Journal of Cognitive Neuroscience 27, no. 5 (May 2015): 1060–67. http://dx.doi.org/10.1162/jocn_a_00764.

Abstract:
Sound waves emitted by two or more simultaneous sources reach the ear as one complex waveform. Auditory scene analysis involves parsing a complex waveform into separate perceptual representations of the sound sources [Bregman, A. S. Auditory scene analysis: The perceptual organization of sounds. London: MIT Press, 1990]. Harmonicity provides an important cue for auditory scene analysis. Normally, harmonics at integer multiples of a fundamental frequency are perceived as one sound with a pitch corresponding to the fundamental frequency. However, when one harmonic in such a complex, pitch-evoking sound is sufficiently mistuned, that harmonic emerges from the complex tone and is perceived as a separate auditory object. Previous work has shown that the percept of two objects is indexed in both children and adults by the object-related negativity component of the ERP derived from EEG recordings [Alain, C., Arnott, S. T., & Picton, T. W. Bottom-up and top-down influences on auditory scene analysis: Evidence from event-related brain potentials. Journal of Experimental Psychology: Human Perception and Performance, 27, 1072–1089, 2001]. Here we examine the emergence of object-related responses to an 8% harmonic mistuning in infants between 2 and 12 months of age. Two-month-old infants showed no significant object-related response. However, in 4- to 12-month-old infants, a significant frontally positive component was present, and by 8–12 months, a significant frontocentral object-related negativity was present, similar to that seen in older children and adults. This is in accordance with previous research demonstrating that infants younger than 4 months of age do not integrate harmonic information to perceive pitch when the fundamental is missing [He, C., Hotson, L., & Trainor, L. J. Maturation of cortical mismatch responses to occasional pitch change in early infancy: Effects of presentation rate and magnitude of change. Neuropsychologia, 47, 218–229, 2009]. The results indicate that the ability to use harmonic information to segregate simultaneous sounds emerges at the cortical level between 2 and 4 months of age.
9

Hu, Jwu-Sheng, Chen-Yu Chan, Cheng-Kang Wang, Ming-Tang Lee, and Ching-Yi Kuo. "Simultaneous Localization of a Mobile Robot and Multiple Sound Sources Using a Microphone Array." Advanced Robotics 25, no. 1-2 (January 2011): 135–52. http://dx.doi.org/10.1163/016918610x538525.

10

Valin, Jean-Marc, François Michaud, and Jean Rouat. "Robust localization and tracking of simultaneous moving sound sources using beamforming and particle filtering." Robotics and Autonomous Systems 55, no. 3 (March 2007): 216–28. http://dx.doi.org/10.1016/j.robot.2006.08.004.


Dissertations / Theses on the topic "Simultaneous Sound Sources"

1

Best, Virginia Ann. "Spatial Hearing with Simultaneous Sound Sources: A Psychophysical Investigation." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/576.

Abstract:
This thesis provides an overview of work conducted to investigate human spatial hearing in situations involving multiple concurrent sound sources. Much is known about spatial hearing with single sound sources, including the acoustic cues to source location and the accuracy of localisation under different conditions. However, more recently interest has grown in the behaviour of listeners in more complex environments. Concurrent sound sources pose a particularly difficult problem for the auditory system, as their identities and locations must be extracted from a common set of sensory receptors and shared computational machinery. It is clear that humans have a rich perception of their auditory world, but just how concurrent sounds are processed, and how accurately, are issues that are poorly understood. This work attempts to fill a gap in our understanding by systematically examining spatial resolution with multiple sound sources. A series of psychophysical experiments was conducted on listeners with normal hearing to measure performance in spatial localisation and discrimination tasks involving more than one source. The general approach was to present sources that overlapped in both frequency and time in order to observe performance in the most challenging of situations. Furthermore, the role of two primary sets of location cues in concurrent source listening was probed by examining performance in different spatial dimensions. The binaural cues arise due to the separation of the two ears, and provide information about the lateral position of sound sources. The spectral cues result from location-dependent filtering by the head and pinnae, and allow vertical and front-rear auditory discrimination. Two sets of experiments are described that employed relatively simple broadband noise stimuli. In the first of these, two-point discrimination thresholds were measured using simultaneous noise bursts. 
It was found that the pair could be resolved only if a binaural difference was present; spectral cues did not appear to be sufficient. In the second set of experiments, the two stimuli were made distinguishable on the basis of their temporal envelopes, and the localisation of a designated target source was directly examined. Remarkably robust localisation was observed, despite the simultaneous masker, and both binaural and spectral cues appeared to be of use in this case. Small but persistent errors were observed, which in the lateral dimension represented a systematic shift away from the location of the masker. The errors can be explained by interference in the processing of the different location cues. Overall these experiments demonstrated that the spatial perception of concurrent sound sources is highly dependent on stimulus characteristics and configurations. This suggests that the underlying spatial representations are limited by the accuracy with which acoustic spatial cues can be extracted from a mixed signal. Three sets of experiments are then described that examined spatial performance with speech, a complex natural sound. The first measured how well speech is localised in isolation. This work demonstrated that speech contains high-frequency energy that is essential for accurate three-dimensional localisation. In the second set of experiments, spatial resolution for concurrent monosyllabic words was examined using similar approaches to those used for the concurrent noise experiments. It was found that resolution for concurrent speech stimuli was similar to resolution for concurrent noise stimuli. Importantly, listeners were limited in their ability to concurrently process the location-dependent spectral cues associated with two brief speech sources. In the final set of experiments, the role of spatial hearing was examined in a more relevant setting containing concurrent streams of sentence speech. 
It has long been known that binaural differences can aid segregation and enhance selective attention in such situations. The results presented here confirmed this finding and extended it to show that the spectral cues associated with different locations can also contribute. As a whole, this work provides an in-depth examination of spatial performance in concurrent source situations and delineates some of the limitations of this process. In general, spatial accuracy with concurrent sources is poorer than with single sound sources, as both binaural and spectral cues are subject to interference. Nonetheless, binaural cues are quite robust for representing concurrent source locations, and spectral cues can enhance spatial listening in many situations. The findings also highlight the intricate relationship that exists between spatial hearing, auditory object processing, and the allocation of attention in complex environments.
2

Best, Virginia Ann. "Spatial Hearing with Simultaneous Sound Sources: A Psychophysical Investigation." University of Sydney. Medicine, 2004. http://hdl.handle.net/2123/576.

3

Minotto, Vicente Peruffo. "Audiovisual voice activity detection and localization of simultaneous speech sources." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/77231.

Abstract:
Given the tendency to create interfaces between humans and machines that increasingly allow simple ways of interaction, it is only natural that research effort is put into techniques that seek to simulate the most conventional means of communication humans use: speech. In the human auditory system, voice is automatically processed by the brain in an effortless and effective way, commonly aided by visual cues such as mouth movement and the location of the speakers. This processing done by the brain includes two important components that speech-based communication requires: Voice Activity Detection (VAD) and Sound Source Localization (SSL). Consequently, VAD and SSL also serve as mandatory preprocessing tools for high-end Human Computer Interface (HCI) applications in a computing environment, as in the case of automatic speech recognition and speaker identification. However, VAD and SSL are still challenging problems when dealing with realistic acoustic scenarios, particularly in the presence of noise, reverberation and multiple simultaneous speakers. In this work we propose approaches for tackling these problems using audiovisual information, both for the single-source and the competing-sources scenario, exploiting distinct ways of fusing the audio and video modalities. Our work also employs a microphone array for the audio processing, which allows the spatial information of the acoustic signals to be explored through the state-of-the-art Steered Response Power (SRP) method. As an additional consequence, a very fast GPU version of the SRP is developed, so that real-time processing is achieved. Our experiments show an average accuracy of 95% when performing VAD of up to three simultaneous speakers and an average error of 10 cm when locating such speakers.
4

Chan, Chen-Yu (詹鎮宇). "Simultaneous Localization of Mobile Robot and Unknown Number of Multiple Sound Sources." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/11290653893234555455.

Abstract:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
ROC year 97 (2008)
This work proposes a method that simultaneously localizes a mobile robot and an unknown number of sound sources in the environment. The rationale for using sound sources as landmarks in a SLAM algorithm is presented. Several DOA estimation methods are described, and a combined method is used for real-time application. Once the DOA information is known, a bearings-only SLAM (simultaneous localization and mapping) algorithm is introduced in detail, built on the theoretical structure of the Bayes filter. The estimated DOAs serve as the bearing information in the algorithm. As the source signals are not persistent and the signal content is not identified, data association is unknown; this is resolved using a particle filter. Modifications of the algorithm are made for real-time application. Experimental results verify the effectiveness of the proposed approaches.
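The DOA estimates that feed such a bearings-only SLAM can be illustrated with the simplest possible case (not the thesis's combined estimator): two microphones, a far-field source, and a cross-correlation search for the time difference of arrival, which maps to a bearing via the arcsine of the normalized delay.

```python
import numpy as np

fs = 16000
c = 343.0          # speed of sound, m/s
d = 0.2            # microphone spacing, m
rng = np.random.default_rng(2)

# Ground truth: a broadband source 40 degrees off broadside (far-field model).
src_angle = np.deg2rad(40.0)
tdoa_true = d * np.sin(src_angle) / c
shift = int(round(tdoa_true * fs))

s = rng.standard_normal(fs // 4)
mic1 = s + 0.05 * rng.standard_normal(s.size)
mic2 = np.roll(s, shift) + 0.05 * rng.standard_normal(s.size)

# Cross-correlate over physically plausible lags only (|tdoa| <= d / c).
max_lag = int(np.ceil(d / c * fs))
lags = np.arange(-max_lag, max_lag + 1)
xc = [np.sum(mic1 * np.roll(mic2, -lag)) for lag in lags]
tdoa_est = lags[int(np.argmax(xc))] / fs

# Bearing from the estimated TDOA (clipped so arcsin stays defined).
bearing = float(np.rad2deg(np.arcsin(np.clip(tdoa_est * c / d, -1.0, 1.0))))
print(round(bearing, 1))
```

A single pair gives only a bearing, not a range, which is exactly why the thesis needs a bearings-only SLAM filter to fuse estimates across robot motion.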

Books on the topic "Simultaneous Sound Sources"

1

Mistrorigo, Alessandro. Phonodia. Venice: Edizioni Ca' Foscari, 2018. http://dx.doi.org/10.30687/978-88-6969-236-9.

Abstract:
This essay focuses on the 'voice' as it sounds in a specific type of recording: a poet performing one of his or her own poems by reading it aloud. Nowadays such recordings are quite common on the Internet, whereas before the digital turn of the 1990s they could be found only in particular collections of poetry books that came with a music cassette or a CD. These cultural objects, like other, older analogue sources, were quite expensive to produce and acquire. All of them, however, contain the same type of recording, sharing the same characteristic: the author's voice reading one of his or her poems aloud. Bearing in mind this specific cultural object and its characteristics, this study analyses the «intermedial relation» that occurs between a poetic text and its recorded version in the author's voice. This relation emerges especially when the two elements (text and voice) are juxtaposed and experienced simultaneously. Indeed, some online archives dedicated to this type of recording adopt this configuration, requiring the user to receive both text and voice in the same space and at the same time. This configuration not only activates the intermedial relation but also hybridises the status of both the reader, who becomes a «reader-listener», and the author, who becomes an «author-reader». Using an interdisciplinary approach that combines philosophy, psychology, anthropology, linguistics and the cognitive sciences, the essay proposes a method for «critically listening» to the way some Spanish poets vocalise their poems. In addition, the book presents the Phonodia web archive, built at Ca' Foscari University of Venice, as a paradigmatic answer to editorial problems related to online multimedia archives dedicated to these specific recordings.
A substantial part of the book is dedicated to the twenty-eight interviews with the contemporary Spanish poets who became part of Phonodia and agreed to discuss their personal relation to 'voice' and how this element works in their creative practice.
2

Tenney, James. The Several Dimensions of Pitch. Edited by Larry Polansky, Lauren Pratt, Robert Wannamaker, and Michael Winter. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252038723.003.0017.

Full text
Abstract:
James Tenney explains the different mechanisms behind the simultaneous and consecutive relationships between pitches using ideas from evolution and neurocognition. He suggests that there are two different aspects of pitch perception and that one of those aspects can also be thought of as multidimensional. In considering such fundamental questions regarding the nature of auditory perception, Tenney refers to the evolution of hearing and considers two complementary if not contradictory abilities: to distinguish between or among sounds issuing from different sound sources, and to recognize when two or more sounds—though different—actually arise from a single sound source. The first mechanism is the basis for what Tenney calls the contour aspect of pitch perception. The other aspect of pitch perception has to do with the temporal ordering of the neural information. Tenney concludes by proposing a psychoacoustic explanation for contour formation based on the ear's temporal processing.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Simultaneous Sound Sources"

1

Firoozabadi, Ali Dehghan, Pablo Irarrazaval, Pablo Adasme, Hugo Durney, Miguel Sanhueza Olave, David Zabala-Blanco, and Cesar Azurdia-Meza. "Simultaneous Sound Source Localization by Proposed Cuboids Nested Microphone Array Based on Subband Generalized Eigenvalue Decomposition." In Advances in Intelligent Systems and Computing, 816–25. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58669-0_72.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Miholca, Amelia. "Between Zurich and Romania: A Dada Exchange." In Narratives Crossing Borders: The Dynamics of Cultural Interaction, 123–44. Stockholm University Press, 2021. http://dx.doi.org/10.16993/bbj.f.

Full text
Abstract:
In 1916, a group of ambitious artists set out to dismantle traditional art and its accompanying bourgeois culture. Living in Zurich, these artists—among them the Romanians Marcel Janco and Tristan Tzara, and the Germans Emmy Hennings and Hugo Ball—formulated the new Dada movement that would awaken new artistic and literary forms through a fusion of sound, theater, and abstract art. With absurd performances at Cabaret Voltaire, they mocked rationality, morality, and beauty. Within the Dada movement in Zurich, I would like to focus on the artists whose Romanian and Jewish heritage played a central role in Cabaret Voltaire and other Dada-related events. Art historical scholarship on Dada minimized this heritage in order to situate Dada within the Western avant-garde canon. However, I argue that the five young Romanians who were present on the first night of Cabaret Voltaire on February 5, 1916 brought with them from their home country certain Jewish and Romanian folk traditions, which helped form Dada’s acclaimed reputation. The five Romanians—Tristan Tzara, Marcel Janco and his brothers Georges Janco and Jules Janco, and Arthur Segal—moved to Zurich either to escape military conscription or to continue their college studies. By the start of the twentieth century, Romania’s intellectual scene was already a transcultural venture, with writers and artists studying and exhibiting in countries like France and Germany. Yet, Zurich’s international climate of émigrés from all over Europe allowed the young Romanians to fully expand beyond nationalistic confines and collaborate with other exiled intellectuals. Tom Sandqvist’s book Dada East from 2007 is the most recent and most comprehensive study of the Romanian aspect of Dada. Sandqvist traces Janco’s and Tzara’s prolific, pre-Dada time in Bucharest, along with the folk and Jewish sources that Sandqvist claims influenced their Dada performances.
For instance, Tzara’s simultaneous poems, which he performed at Cabaret Voltaire, may derive from nineteenth century Jewish theater in Romania and from Hasidic song rituals. Moreover, the Dada performances with grotesque masks created by Janco relate to the colinde festival in Romania’s peasant folk culture. In my paper, I aim to analyze Sandqvist’s claim and answer the following questions: to what extent did Janco and Tzara incorporate the colinde festival and Jewish theater and ritual? Was their Jewish identity more important to them than their Romanian identity? And, lastly, how did they carry Dada back to Romania after the war ended and the Dadaists in Zurich moved on to other cities?
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Simultaneous Sound Sources"

1

Keyrouz, Fakheredine. "Robotic Binaural Localization and Separation of Multiple Simultaneous Sound Sources." In 2017 IEEE 11th International Conference on Semantic Computing (ICSC). IEEE, 2017. http://dx.doi.org/10.1109/icsc.2017.18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Heli, Hedieh, and Hamid Reza Abutalebi. "Localization of multiple simultaneous sound sources in reverberant conditions using blind source separation methods." In 2011 International Symposium on Artificial Intelligence and Signal Processing (AISP). IEEE, 2011. http://dx.doi.org/10.1109/aisp.2011.5960978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jwu-Sheng Hu, Chen-Yu Chan, Cheng-Kang Wang, and Chieh-Chih Wang. "Simultaneous localization of mobile robot and multiple sound sources using microphone array." In 2009 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2009. http://dx.doi.org/10.1109/robot.2009.5152813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sekiguchi, Kouhei, Yoshiaki Bando, Keisuke Nakamura, Kazuhiro Nakadai, Katsutoshi Itoyama, and Kazuyoshi Yoshii. "Online simultaneous localization and mapping of multiple sound sources and asynchronous microphone arrays." In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016. http://dx.doi.org/10.1109/iros.2016.7759311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Niino, Yukihito, Toshihiko Shiraishi, and Shin Morishita. "Blind Source Separation Using a Neural Network." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-67305.

Full text
Abstract:
Humans are able to recognize mixtures of speech signals produced by two or more simultaneous speakers. This ability is known as the cocktail party effect. Applying the cocktail party effect to engineering, we can construct novel systems of blind source separation, such as current automatic speech recognition systems and active noise control systems operating under environmental noise. A variety of methods have been developed to improve the performance of blind source separation in the presence of background noise or interfering speech. Since blind source separation mirrors a characteristically human ability, artificial neural networks are well suited to it. In this paper, we propose a method of blind source separation using a neural network. The network can adaptively separate sound sources by training its internal parameters. The network was three-layered. Sound pressure was output from two sound sources, and the mixed sound was measured with two microphones. The time history of the microphone signals was input to the input layer of the neural network. The two outputs of the hidden layer corresponded to the two separated sound pressures. The two outputs of the output layer corresponded to the two microphone signals expected at the next time step; they were compared with the actual microphone signals at the next time step to train the neural network by the backpropagation method. In this procedure, the signal from each sound source was adaptively separated. Two sound-source conditions were used: sinusoidal signals of 440 and 1000 Hz. In order to assess the performance of the neural network numerically and experimentally, a basic independent component analysis (ICA) was conducted simultaneously. The results obtained are as follows. The separation performance of the neural network was higher than that of the basic ICA. In addition, the neural network successfully separated the sound sources regardless of their positions.
APA, Harvard, Vancouver, ISO, and other styles
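As a rough illustration (not the authors' code), the predictive separation scheme described in the abstract above can be sketched in a few lines of NumPy: the hidden layer is meant to carry the separated signals, and the network is trained by backpropagating the one-step microphone-prediction error. The mixing matrix, sampling rate, learning rate, and single-sample input window are assumptions made for the sketch; the paper uses a longer time history of microphone samples as input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: two sinusoidal sources (440 Hz and 1000 Hz, as in the
# paper) mixed instantaneously at two microphones.
fs = 8000.0
t = np.arange(2000) / fs
sources = np.vstack([np.sin(2 * np.pi * 440.0 * t),
                     np.sin(2 * np.pi * 1000.0 * t)])   # shape (2, N)
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])                              # assumed mixing matrix
mics = A @ sources                                      # shape (2, N)

# Three-layer network: 2 mic samples -> 2 hidden units (separated
# source estimates) -> 2 predicted next-step mic samples.
W1 = rng.normal(scale=0.3, size=(2, 2))
W2 = rng.normal(scale=0.3, size=(2, 2))
lr = 0.005

def epoch_error():
    """Total squared one-step prediction error over the whole record."""
    h = np.tanh(W1 @ mics[:, :-1])
    return float(np.sum((W2 @ h - mics[:, 1:]) ** 2))

err_before = epoch_error()
for _ in range(50):                      # online (sample-by-sample) training
    for n in range(mics.shape[1] - 1):
        x = mics[:, n]
        h = np.tanh(W1 @ x)              # hidden layer = separation estimates
        e = W2 @ h - mics[:, n + 1]      # one-step prediction error
        W2 -= lr * np.outer(e, h)        # backpropagation updates
        W1 -= lr * np.outer((W2.T @ e) * (1.0 - h ** 2), x)
err_after = epoch_error()

separated = np.tanh(W1 @ mics)           # network's separated-source estimates
```

Training on the prediction error alone does not guarantee a unique source ordering or scaling (the usual blind-separation ambiguities), but the prediction error should fall as the hidden units settle toward source-like signals.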
6

Hiramoto, Riho, Kuniaki Toyoda, and Hayato Mori. "Simultaneous Measurements of Velocity and Fluctuating Static-Pressure in a Circular Jet." In ASME/JSME 2003 4th Joint Fluids Summer Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/fedsm2003-45612.

Full text
Abstract:
Velocity and fluctuating static pressure were measured simultaneously in a circular jet in order to discuss the pressure transport of turbulent energy and the generation mechanism of aerodynamic sound in relation to vortical structure. A combined sensor, consisting of an X-type hot-wire and a pressure probe, was used for the simultaneous measurements. The air jet was issued from a sharp-edged circular orifice under excitation at the interaction mode, so that velocity and fluctuating static pressure could be determined in the space and time domains using a phase-average technique. The results suggest that simultaneous measurements with the phase-average technique provide useful information for revealing turbulent shear flow phenomena. In particular, the pressure transport is closely related to the vortical structure, and intense sound sources are caused by vortex merging and by the acceleration of vortices during vortex pairing.
APA, Harvard, Vancouver, ISO, and other styles
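As a loose illustration of the phase-average technique mentioned in the abstract above (not the authors' processing chain), samples can be binned by the phase of the periodic excitation and averaged within each bin; the sampling rate, excitation frequency, noise level, and bin count below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 10_000.0        # sampling rate in Hz (assumed)
f_exc = 100.0        # excitation frequency in Hz (assumed)
n = 50_000
t = np.arange(n) / fs

# Synthetic "measurement": a component phase-locked to the excitation
# plus turbulence-like random fluctuations.
coherent = np.sin(2.0 * np.pi * f_exc * t)
signal = coherent + 0.5 * rng.normal(size=n)

# Phase of each sample within the excitation cycle, mapped to [0, 1).
phase = (t * f_exc) % 1.0
n_bins = 32
bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)

# Phase average: mean of all samples falling in the same phase bin.
phase_avg = np.array([signal[bins == b].mean() for b in range(n_bins)])

# Averaging suppresses the incoherent fluctuations and recovers the
# phase-locked waveform sampled at the bin centres.
centres = (np.arange(n_bins) + 0.5) / n_bins
rms_error = float(np.sqrt(np.mean(
    (phase_avg - np.sin(2.0 * np.pi * centres)) ** 2)))
```

The same binning, applied per spatial measurement point, yields the phase-resolved velocity and pressure fields the abstract refers to.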
7

Hahn, Nara, and Sascha Spors. "Simultaneous Measurement of Spatial Room Impulse Responses from Multiple Sound Sources Using a Continuously Moving Microphone." In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Valin, J. M., F. Michaud, B. Hadjou, and J. Rouat. "Localization of simultaneous moving sound sources for mobile robot using a frequency- domain steered beamformer approach." In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA '04. 2004. IEEE, 2004. http://dx.doi.org/10.1109/robot.2004.1307286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Heracleous, Panikos, Takeshi Yamada, Satoshi Nakamura, and Kiyohiro Shikano. "Simultaneous recognition of multiple sound sources based on 3-d n-best search using microphone array." In 6th European Conference on Speech Communication and Technology (Eurospeech 1999). ISCA: ISCA, 1999. http://dx.doi.org/10.21437/eurospeech.1999-21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Albers, A., and M. Dickerhof. "Simultaneous Monitoring of Rolling-Element and Journal Bearings Using Analysis of Structure-Born Ultrasound Acoustic Emissions." In ASME 2010 International Mechanical Engineering Congress and Exposition. ASMEDC, 2010. http://dx.doi.org/10.1115/imece2010-39814.

Full text
Abstract:
The application of Acoustic Emission technology for monitoring rolling element or hydrodynamic plain bearings has been addressed by several authors in the past. Most of these investigations took place under idealized conditions that allow concentration on a single source of emission, typically recorded by means of a piezoelectric sensor. This can be achieved either by eliminating other sources in advance or by taking measures to shield them out (e.g. by placing the acoustic emission sensor very close to the source of interest), so that only one source of structure-borne sound is present in the signal. In practical applications this is often not possible. In point of fact, a multitude of potential emission sources can be worth considering, unfortunately superimposing one another. The investigations reported in this paper are therefore focused on the simultaneous monitoring of both bearing types mentioned above. Only one piezoelectric acoustic emission sensor is utilized, placed rather far away from the monitored bearings. By deriving characteristic values from the sensor signal, different simulated defects can be detected reliably: seeded defects in the inner and outer races of rolling element bearings, as well as the occurrence of mixed friction in the sliding surface bearing due to interrupted lubricant inflow.
APA, Harvard, Vancouver, ISO, and other styles
