A ready-made bibliography on the topic "Acoustic Scene Analysis"

Create accurate citations in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Acoustic Scene Analysis".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the source's metadata.

Journal articles on the topic "Acoustic Scene Analysis"

1

Terez, Dmitry. "Acoustic scene analysis using microphone arrays." Journal of the Acoustical Society of America 128, no. 4 (October 2010): 2442. http://dx.doi.org/10.1121/1.3508731.

2

Itatani, Naoya, and Georg M. Klump. "Animal models for auditory streaming." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160112. http://dx.doi.org/10.1098/rstb.2016.0112.

Abstract:
Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.
3

Park, Sangwook, Woohyun Choi, and Hanseok Ko. "Acoustic scene classification using recurrence quantification analysis." Journal of the Acoustical Society of Korea 35, no. 1 (January 31, 2016): 42–48. http://dx.doi.org/10.7776/ask.2016.35.1.042.

4

Imoto, Keisuke. "Introduction to acoustic event and scene analysis." Acoustical Science and Technology 39, no. 3 (May 1, 2018): 182–88. http://dx.doi.org/10.1250/ast.39.182.

5

Weisser, Adam, Jörg M. Buchholz, Chris Oreinos, Javier Badajoz-Davila, James Galloway, Timothy Beechey, and Gitte Keidser. "The Ambisonic Recordings of Typical Environments (ARTE) Database." Acta Acustica united with Acustica 105, no. 4 (July 1, 2019): 695–713. http://dx.doi.org/10.3813/aaa.919349.

Abstract:
Everyday listening environments are characterized by far more complex spatial, spectral and temporal sound field distributions than the acoustic stimuli that are typically employed in controlled laboratory settings. As such, the reproduction of acoustic listening environments has become important for several research avenues related to sound perception, such as hearing loss rehabilitation, soundscapes, speech communication, auditory scene analysis, automatic scene classification, and room acoustics. However, the recordings of acoustic environments that are used as test material in these research areas are usually designed specifically for one study, or are provided in custom databases that cannot be universally adapted beyond their original application. In this work we present the Ambisonic Recordings of Typical Environments (ARTE) database, which addresses several research needs simultaneously: realistic audio recordings that can be reproduced in 3D, 2D, or binaurally, with known acoustic properties, including absolute level and room impulse response. Multichannel higher-order ambisonic recordings of 13 realistic typical environments (e.g., office, café, dinner party, train station) were processed, acoustically analyzed, and subjectively evaluated to determine their perceived identity. The recordings are delivered in a generic format that may be reproduced with different hardware setups, and may also be used in binaural or single-channel setups. Room impulse responses, as well as detailed acoustic analyses of all environments, supplement the recordings. The database is made open to the research community with the explicit intention to expand it in the future and include more scenes.
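Because the recordings are delivered in a generic ambisonic format, they can be steered to arbitrary virtual microphones at playback. As a minimal sketch of that idea (not code from the ARTE authors), the snippet below decodes first-order B-format channels to a virtual first-order microphone in the horizontal plane; the FuMa convention (W carrying a 3 dB attenuation) and the placeholder signals are assumptions.

```python
import numpy as np

def virtual_mic(W, X, Y, azimuth_rad, p=0.5):
    """Steer first-order B-format to a virtual microphone at azimuth_rad.
    p=1.0 -> omnidirectional, p=0.5 -> cardioid, p=0.0 -> figure-of-eight.
    Assumes the FuMa channel convention, where W carries a 1/sqrt(2) gain."""
    return (p * np.sqrt(2.0) * W
            + (1.0 - p) * (np.cos(azimuth_rad) * X + np.sin(azimuth_rad) * Y))

# Example: a simple stereo downmix from two virtual cardioids at +/-45 degrees.
W = X = Y = np.zeros(48000)  # placeholders for one second of audio at 48 kHz
left = virtual_mic(W, X, Y, np.deg2rad(45.0))
right = virtual_mic(W, X, Y, np.deg2rad(-45.0))
```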
6

Hou, Yuanbo, and Dick Botteldooren. "Artificial intelligence-based collaborative acoustic scene and event classification to support urban soundscape analysis and classification." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 1 (February 1, 2023): 6466–73. http://dx.doi.org/10.3397/in_2022_0974.

Abstract:
A human listener embedded in a sonic environment will rely on the meaning given to sound events as well as on general acoustic features to analyse and appraise its soundscape. However, currently used measurable indicators for soundscape mainly focus on the latter, and meaning is only included indirectly. Yet, today's artificial intelligence (AI) techniques make it possible to recognise a variety of sounds and thus assign meaning to them. Hence, we propose to combine a model for acoustic event classification, trained on the large-scale environmental sound database AudioSet, with a scene classification algorithm that couples direct identification of acoustic features with these recognised sounds for scene recognition. The combined model is trained on TUT2018, a database containing ten everyday scenes. Applying the resulting AI model to the Soundscapes of the World database without further training shows that the obtained classification correlates with perceived calmness and liveliness evaluated by a test panel. It also makes it possible to unravel why an acoustic environment sounds like a lively square or a calm park, by analysing the types of sounds and their occurrence pattern over time. Moreover, disturbance of the acoustic environment that would be expected based on visual cues, e.g. by traffic, can easily be recognised.
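The paper's exact architecture is not reproduced here; purely as a schematic of the idea — frame-wise event posteriors from an AudioSet-trained tagger pooled into features for a scene classifier — the sketch below may help. The tagger output shape, the pooling statistics, and the logistic-regression head are illustrative assumptions, not the authors' model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def scene_features(event_posteriors):
    """Pool frame-wise event posteriors (n_frames x n_event_classes)
    into one clip-level feature vector for scene classification."""
    mean = event_posteriors.mean(axis=0)  # how much of each event occurs
    peak = event_posteriors.max(axis=0)   # whether it ever occurs strongly
    return np.concatenate([mean, peak])

# Placeholders: posteriors over 527 AudioSet classes for 32 clips,
# with scene labels drawn from ten everyday scene classes as in TUT2018.
rng = np.random.default_rng(0)
posteriors = [rng.random((100, 527)) for _ in range(32)]
scenes = rng.integers(0, 10, size=32)

X = np.stack([scene_features(p) for p in posteriors])
scene_clf = LogisticRegression(max_iter=1000).fit(X, scenes)
```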
7

Tang, Zhenyu, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, and Dinesh Manocha. "Scene-Aware Audio Rendering via Deep Acoustic Analysis." IEEE Transactions on Visualization and Computer Graphics 26, no. 5 (May 2020): 1991–2001. http://dx.doi.org/10.1109/tvcg.2020.2973058.

8

Ellison, William T., Adam S. Frankel, David Zeddies, Kathleen J. Vigness Raposa, and Cheryl Schroeder. "Underwater acoustic scene analysis: Exploration of appropriate metrics." Journal of the Acoustical Society of America 124, no. 4 (October 2008): 2433. http://dx.doi.org/10.1121/1.4782511.

9

Makino, S. "Special Section on Acoustic Scene Analysis and Reproduction." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E91-A, no. 6 (June 1, 2008): 1301–2. http://dx.doi.org/10.1093/ietfec/e91-a.6.1301.

10

Wang, Mou, Xiao-Lei Zhang, and Susanto Rahardja. "An Unsupervised Deep Learning System for Acoustic Scene Analysis." Applied Sciences 10, no. 6 (March 19, 2020): 2076. http://dx.doi.org/10.3390/app10062076.

Abstract:
Acoustic scene analysis has attracted a lot of attention recently. Existing methods are mostly supervised, which requires well-predefined acoustic scene categories and accurate labels. In practice, there exists a large amount of unlabeled audio data, but labeling large-scale data is not only costly but also time-consuming. Unsupervised acoustic scene analysis on the other hand does not require manual labeling but is known to have significantly lower performance and therefore has not been well explored. In this paper, a new unsupervised method based on deep auto-encoder networks and spectral clustering is proposed. It first extracts a bottleneck feature from the original acoustic feature of audio clips by an auto-encoder network, and then employs spectral clustering to further reduce the noise and unrelated information in the bottleneck feature. Finally, it conducts hierarchical clustering on the low-dimensional output of the spectral clustering. To fully utilize the spatial information of stereo audio, we further apply the binaural representation and conduct joint clustering on that. To the best of our knowledge, this is the first time that a binaural representation is being used in unsupervised learning. Experimental results show that the proposed method outperforms the state-of-the-art competing methods.
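Read as a pipeline, the abstract describes three stages: an auto-encoder bottleneck, a spectral (low-dimensional) embedding to suppress noise, and hierarchical clustering on the result. The sketch below is a minimal re-creation of that pipeline under assumed feature and layer sizes, using a plain PyTorch auto-encoder and scikit-learn for the clustering stages; it is not the authors' implementation and omits the binaural representation.

```python
import numpy as np
import torch, torch.nn as nn
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import AgglomerativeClustering

# X: clip-level acoustic features, e.g. time-averaged log-mel vectors
# (n_clips x n_dims); the values here are placeholders.
X = torch.randn(200, 128)

# 1) Auto-encoder with a narrow bottleneck.
enc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 16))
dec = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 128))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(dec(enc(X)), X)  # reconstruction loss
    loss.backward()
    opt.step()

# 2) Spectral embedding of the bottleneck features, reducing noise
#    and unrelated variation.
Z = enc(X).detach().numpy()
Z_spec = SpectralEmbedding(n_components=8).fit_transform(Z)

# 3) Hierarchical clustering on the low-dimensional embedding.
labels = AgglomerativeClustering(n_clusters=10).fit_predict(Z_spec)
```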

Doctoral dissertations on the topic "Acoustic Scene Analysis"

1

Kudo, Hiroaki, Jinji Chen, and Noboru Ohnishi. "Scene Analysis by Clues from the Acoustic Signals." INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2004. http://hdl.handle.net/2237/10426.

2

Ford, Logan H. "Large-scale acoustic scene analysis with deep residual networks". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123026.

Abstract:
Many of the recent advances in audio event detection, particularly on the AudioSet dataset, have focused on improving performance using the released embeddings produced by a pre-trained model. In this work, we instead study the task of training a multi-label event classifier directly from the audio recordings of AudioSet. Using the audio recordings, not only are we able to reproduce results from prior work, we have also confirmed improvements of other proposed additions, such as an attention module. Moreover, by training the embedding network jointly with the additions, we achieve a mean Average Precision (mAP) of 0.392 and an area under ROC curve (AUC) of 0.971, surpassing the state-of-the-art without transfer learning from a large dataset. We also analyze the output activations of the network and find that the models are able to localize audio events when a finer time resolution is needed. In addition, we use this model in exploring multimodal learning, transfer learning, and realtime sound event detection tasks.
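The thesis does not print its code here; the sketch below shows one common form of the attention module it mentions — per-frame class probabilities pooled with per-frame attention weights, which is also what lets a model localize events in time. The embedding size, class count, and placeholder inputs are assumptions, not details from the thesis.

```python
import torch, torch.nn as nn

class AttentionPooling(nn.Module):
    """Weighted pooling over time: frames that matter for an event get
    higher weights (one attention score per frame and class)."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.att = nn.Linear(dim, n_classes)   # attention logits
        self.cla = nn.Linear(dim, n_classes)   # per-frame class logits
    def forward(self, h):                      # h: (batch, time, dim)
        w = torch.softmax(self.att(h), dim=1)  # normalize over time
        p = torch.sigmoid(self.cla(h))         # per-frame probabilities
        return (w * p).sum(dim=1)              # clip-level probabilities

# h would come from a residual CNN over log-mel spectrograms.
h = torch.randn(8, 1000, 512)                  # placeholder embeddings
clip_probs = AttentionPooling(512, 527)(h)     # 527 AudioSet classes
loss = nn.functional.binary_cross_entropy(     # multi-label BCE loss
    clip_probs, torch.rand(8, 527))
```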
3

Teutsch, Heinz. "Wavefield decomposition using microphone arrays and its application to acoustic scene analysis." [S.l.]: [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=97902806X.

4

McMullan, Amanda R. "Electroencephalographic measures of auditory perception in dynamic acoustic environments." Thesis, Lethbridge, Alta.: University of Lethbridge, Dept. of Neuroscience, 2013. http://hdl.handle.net/10133/3354.

Abstract:
We are capable of effortlessly parsing a complex scene presented to us. In order to do this, we must segregate objects from each other and from the background. While this process has been extensively studied in vision science, it remains relatively less understood in auditory science. This thesis sought to characterize the neuroelectric correlates of auditory scene analysis using electroencephalography. Chapter 2 determined components evoked by first-order energy boundaries and second-order pitch boundaries. Chapter 3 determined components evoked by first-order and second-order discontinuous motion boundaries. Both of these chapters focused on analysis of event-related potential (ERP) waveforms and time-frequency analysis. In addition, these chapters investigated the contralateral nature of a negative ERP component. These results extend the current knowledge of auditory scene analysis by providing a starting point for discussing and characterizing first-order and second-order boundaries in an auditory scene.
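For readers unfamiliar with the ERP analysis the abstract relies on, the sketch below shows the basic operation: cut boundary-locked epochs out of the continuous EEG, baseline-correct them, and average. The sampling rate, epoch window, and placeholder data are assumptions, not details from the thesis.

```python
import numpy as np

def erp(eeg, events, fs, tmin=-0.1, tmax=0.5):
    """Average event-locked EEG epochs into an ERP waveform.
    eeg: (n_channels, n_samples); events: sample indices of boundaries."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for ev in events:
        if ev - pre < 0 or ev + post > eeg.shape[1]:
            continue                  # skip epochs that run off the record
        ep = eeg[:, ev - pre:ev + post]
        ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct
        epochs.append(ep)
    return np.mean(epochs, axis=0)    # (n_channels, pre + post)

fs = 500.0                                 # assumed sampling rate (Hz)
eeg = np.random.randn(64, 60 * int(fs))    # placeholder 64-channel record
boundaries = np.arange(1000, 28000, 1500)  # hypothetical boundary onsets
erp_wave = erp(eeg, boundaries, fs)
```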
5

Narayanan, Arun. "Computational auditory scene analysis and robust automatic speech recognition". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1401460288.

6

Carlo, Diego Di. "Echo-aware signal processing for audio scene analysis". Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S075.

Abstract:
Most audio signal processing methods regard reverberation, and in particular acoustic echoes, as a nuisance. However, echoes convey important spatial and semantic information about sound sources and, based on this, echo-aware methods have recently been proposed. In this work we focus on two directions. First, we study how to estimate acoustic echoes blindly from microphone recordings. Two approaches are proposed, one leveraging the framework of continuous dictionaries, the other using recent deep learning techniques. Then, we focus on extending existing methods in audio scene analysis to their echo-aware forms. The multichannel NMF framework for audio source separation, the SRP-PHAT localization method, and the MVDR beamformer for speech enhancement are all extended to take echoes into account. These applications show how a simple echo model can lead to improved performance.
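Of the methods extended in the thesis, the MVDR beamformer has the most compact closed form, so it makes a convenient illustration. The sketch below computes standard (non-echo-aware) MVDR weights at a single frequency bin; in the echo-aware variants the steering vector would additionally model early echoes, which is the thesis' contribution and is not reproduced here.

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR beamformer: minimize output power subject to a distortionless
    response toward the source, w = R^{-1} d / (d^H R^{-1} d).
    R: (n_mics, n_mics) noise covariance at one frequency bin;
    d: (n_mics,) steering vector."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy example at a single frequency bin, four microphones.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T + 4 * np.eye(4)    # Hermitian, positive definite
d = np.exp(-1j * 2 * np.pi * rng.random(4))
w = mvdr_weights(R, d)
print(np.abs(w.conj() @ d))           # ~1: the distortionless constraint
```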
7

Deleforge, Antoine. "Acoustic Space Mapping: A Machine Learning Approach to Sound Source Separation and Localization." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM033/document.

Abstract:
In this thesis, we address the long-studied problem of binaural (two-microphone) sound source separation and localization through supervised learning. To achieve this, we develop a new paradigm referred to as acoustic space mapping, at the crossroads of binaural perception, robot hearing, audio signal processing, and machine learning. The proposed approach consists in learning a link between the auditory cues perceived by the system and the position of the emitting sound source in another modality of the system, such as the visual space or the motor space. We propose new experimental protocols to automatically gather large training sets that associate such data. The obtained datasets are then used to reveal some fundamental intrinsic properties of acoustic spaces and lead to the development of a general family of probabilistic models for locally linear high- to low-dimensional space mapping. We show that these models unify several existing regression and dimensionality reduction techniques, while encompassing a large number of new models that generalize previous ones. The properties and inference of these models are thoroughly detailed, and the prominent advantage of the proposed methods with respect to state-of-the-art techniques is established on different space mapping applications, beyond the scope of auditory scene analysis. We then show how the proposed methods can be probabilistically extended to tackle the well-known cocktail party problem, i.e., accurately localizing one or several sound sources emitting at the same time in a real-world environment, and separating the mixed signals. We show that the resulting techniques perform these tasks with unequaled accuracy. This demonstrates the important role of learning and puts forward the acoustic space mapping paradigm as a promising tool for robustly addressing the most challenging problems of computational binaural audition.
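The thesis' probabilistic locally linear mappings are beyond a short snippet, but the acoustic space mapping paradigm itself — learn a map from binaural cues to source position from automatically gathered examples — can be illustrated with a generic regressor. The ILD feature, the synthetic training pairs, and the k-NN model below are all stand-in assumptions, not the thesis' method.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def ild_features(left, right, n_bands=32):
    """Interaural level difference per frequency band (dB), a crude
    stand-in for a full binaural cue vector."""
    L = np.abs(np.fft.rfft(left)) ** 2
    R = np.abs(np.fft.rfft(right)) ** 2
    bands = np.array_split(np.arange(L.size), n_bands)
    return np.array([10 * np.log10(L[b].sum() / (R[b].sum() + 1e-12) + 1e-12)
                     for b in bands])

# Training set: binaural snippets of a noise source at known azimuths;
# the level-scaling below is a placeholder for real head-related filtering.
rng = np.random.default_rng(1)
azimuths = rng.uniform(-90, 90, size=500)
X = np.stack([ild_features(rng.standard_normal(1024) * (1 + a / 180),
                           rng.standard_normal(1024) * (1 - a / 180))
              for a in azimuths])
model = KNeighborsRegressor(n_neighbors=5).fit(X, azimuths)
estimated_azimuths = model.predict(X[:3])  # map new cues to positions
```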
8

Mouterde, Solveig. "Long-range discrimination of individual vocal signatures by a songbird: from propagation constraints to neural substrate." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4012/document.

Abstract:
In communication systems, one of the biggest challenges is that the information encoded by the emitter is always modified before reaching the receiver, who has to process this altered information in order to recover the intended message. In acoustic communication particularly, the transmission of sound through the environment is a major source of signal degradation, caused by attenuation, absorption, and reflections, all of which decrease the signal level relative to the background noise. How animals deal with the need to exchange information in spite of such constraining conditions has been the subject of many studies, focused either on the emitter or on the receiver. However, a more integrated approach to auditory scene analysis has seldom been taken, and is needed to address the complexity of this process. The goal of my research was to use a transversal approach to study how birds adapt to the constraints of long-distance communication by investigating information coding at the emitter's level, the propagation-induced degradation of the acoustic signal, and the discrimination of this degraded information by the receiver, at both the behavioral and neural levels. Taking into account the everyday issues faced by animals in their natural environment, and using stimuli and paradigms that reflect the behavioral relevance of these challenges, has been the cornerstone of my approach. Focusing on the information about individual identity in the distance calls of zebra finches (Taeniopygia guttata), I investigated how the individual vocal signature is encoded, degraded, and finally discriminated, from the emitter to the receiver. This study shows that the individual signature of zebra finches is very resistant to propagation-induced degradation, and that the most individualized acoustic parameters vary depending on distance. Testing female birds in operant conditioning experiments, I showed that they are experts at discriminating between the degraded vocal signatures of two males, and that they can improve this ability substantially when they can train over increasing distances. Finally, I showed that this impressive discrimination ability also exists at the neural level: we found a population of neurons in the avian auditory forebrain that discriminate individual voices at various degrees of propagation-induced degradation, without prior familiarization or training. The finding of such high-level auditory processing, in the primary auditory cortex, opens a new range of investigations at the interface of neural processing and behavior.
9

Teki, S. "Cognitive analysis of complex acoustic scenes". Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1413017/.

Abstract:
Natural auditory scenes consist of a rich variety of temporally overlapping sounds that originate from multiple sources and locations and are characterized by distinct acoustic features. It is an important biological task to analyze such complex scenes and extract sounds of interest. The thesis addresses this question, also known as the “cocktail party problem”, by developing an approach based on the analysis of a novel stochastic signal, in contrast to the deterministic narrowband signals used in previous work. This low-level signal, known as the Stochastic Figure-Ground (SFG) stimulus, captures the spectrotemporal complexity of natural sound scenes and enables parametric control of stimulus features. In a series of experiments based on this stimulus, I have investigated specific behavioural and neural correlates of human auditory figure-ground segregation. This thesis is presented in seven sections. Chapter 1 reviews key aspects of auditory processing and existing models of auditory segregation. Chapter 2 presents the principles of the techniques used, including psychophysics, modeling, functional Magnetic Resonance Imaging (fMRI) and Magnetoencephalography (MEG). Experimental work is presented in the following chapters and covers figure-ground segregation behaviour (Chapter 3), modeling of the SFG stimulus based on a temporal coherence model of auditory perceptual organization (Chapter 4), and analysis of brain activity related to detection of salient targets in the SFG stimulus using fMRI (Chapter 5) and MEG respectively (Chapter 6). Finally, Chapter 7 concludes with a general discussion of the results and future directions for research. Overall, this body of work emphasizes the use of stochastic signals for auditory scene analysis and demonstrates an automatic, highly robust segregation mechanism in the auditory system that is sensitive to temporal correlations across frequency channels.
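The Stochastic Figure-Ground stimulus is easy to specify programmatically, which is part of its appeal; a minimal generator is sketched below. The chord duration, frequency pool, and figure parameters follow the general description in the SFG literature, but the exact values here are assumptions.

```python
import numpy as np

def sfg_stimulus(fs=16000, n_chords=40, chord_dur=0.05,
                 n_background=10, n_figure=4, coherence_len=8):
    """Stochastic Figure-Ground stimulus: a sequence of short chords of
    random pure tones, with a 'figure' of n_figure fixed frequencies
    repeated across coherence_len consecutive chords."""
    freqs = np.logspace(np.log10(180), np.log10(7000), 120)  # tone pool
    rng = np.random.default_rng(2)
    n = int(fs * chord_dur)
    t = np.arange(n) / fs
    figure = rng.choice(freqs, n_figure, replace=False)
    start = rng.integers(0, n_chords - coherence_len)
    out = []
    for i in range(n_chords):
        chord = list(rng.choice(freqs, n_background, replace=False))
        if start <= i < start + coherence_len:
            chord += list(figure)          # coherent components = figure
        out.append(sum(np.sin(2 * np.pi * f * t) for f in chord))
    sig = np.concatenate(out)
    return sig / np.max(np.abs(sig))       # peak-normalized waveform
```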
10

Wang, Yuxuan. "Supervised Speech Separation Using Deep Neural Networks". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1426366690.


Books on the topic "Acoustic Scene Analysis"

1

Pikrakis, Aggelos, ed. Introduction to Audio Analysis: A MATLAB Approach. Kidlington, Oxford: Academic Press, 2014.

2

Zur Nieden, Gesa. Symmetries in Spaces, Symmetries in Listening. Edited by Christian Thorau and Hansjakob Ziemer. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190466961.013.16.

Abstract:
Based on the importance of the concept of symmetry in French sociological aesthetics circa 1900, this chapter analyzes the convergence of theaters, musical form, and musical understanding. The analysis focuses on architectural shape, audience response, and the musical repertoire in the new theaters built in Barcelona (1847), Paris (1862), and Rome (1880). While these theaters were fashioned after the baroque form of the “teatro all’italiana” that prevailed in Italy, France, and Spain during the late nineteenth century, they provided huge spaces accommodating a socially mixed audience within an architecturally symmetrical form. Music critics often aligned acoustic sound waves with actual visibility in the auditorium, and semicircular structures in the scenography on stage may have affected the reception of the musical performance. The newly built theaters arrived at a time when the “classical” music scene and a certain canon was developed, opposing the more “intellectual” audiences and repertories of contemporary music.
3

Pitozzi, Enrico. Body Soundscape. Edited by Yael Kaduri. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199841547.013.43.

Abstract:
Starting from an interdisciplinary perspective of methodological integration of the concepts of body and sound in the contemporary dance scene, this chapter addresses the general aesthetic notion of the sonorous body. Through a survey of some key practices and pieces by Wayne McGregor, Ginette Laurin, Angelin Preljocaj, Cindy Van Acker and others, the author analyzes the audiovisual dimension of these works, developed with digital technologies and in a collaboration of choreographers with electronic musicians and sound artists such as Scanner, Kasper T. Toeplitz, Granular Synthesis, and Mika Vainio. This audiovisual tension, defined as the sonorous body, can be read through two interpretations. In the first, the sound is a body, which means the electronic sound of the scene is an acoustic material. In the second, the body is a sound, which means the body of the dancers produces the soundscape of a scene.
4

Kytö, Meri. Soundscapes of Istanbul in Turkish Film Soundtracks. Edited by John Richardson, Claudia Gorbman, and Carol Vernallis. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199733866.013.0028.

Abstract:
This article appears in the Oxford Handbook of New Audiovisual Aesthetics, edited by John Richardson, Claudia Gorbman, and Carol Vernallis. This chapter examines the changing relationships of sounds, places, and their cultural meanings in Turkish films located in Istanbul. Starting with a brief review of the historical context of Turkish film sound and sonic representations of Istanbul, the chapter then analyzes two recent films set in middle-class apartment homes, 11’e 10 kala and Uzak, which represent the auteur vein of new Turkish cinema. Both feature subtle and delicate sound design and evidence a form of heightened realism that contrasts with traditional approaches, shifting the focus of Istanbul’s soundscapes from public to private. Although the locations and characters of both these films are remarkably similar, their soundtracks differ in rendering the experience of urbanity and strategies of acoustic privacy by the transcoding of soundmarks and the use of transphonia in scenes.

Book chapters on the topic "Acoustic Scene Analysis"

1

de Cheveigné, Alain. "The Cancellation Principle in Acoustic Scene Analysis." In Speech Separation by Humans and Machines, 245–59. Boston, MA: Springer US, 2005. http://dx.doi.org/10.1007/0-387-22794-6_16.

2

Gold, Erica, and Dan McIntyre. "Chapter 4. What the /fʌk/? An acoustic-pragmatic analysis of implicated meaning in a scene from The Wire." In Linguistic Approaches to Literature, 74–91. Amsterdam: John Benjamins Publishing Company, 2019. http://dx.doi.org/10.1075/lal.35.04gol.

3

Serizel, Romain, Victor Bisot, Slim Essid, and Gaël Richard. "Acoustic Features for Environmental Sound Analysis." In Computational Analysis of Sound Scenes and Events, 71–101. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_4.

4

Lemaitre, Guillaume, Nicolas Grimault, and Clara Suied. "Acoustics and Psychoacoustics of Sound Scenes and Events." In Computational Analysis of Sound Scenes and Events, 41–67. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_3.

5

Pham, Lam, Hieu Tang, Anahid Jalali, Alexander Schindler, Ross King, and Ian McLoughlin. "A Low-Complexity Deep Learning Framework For Acoustic Scene Classification." In Data Science – Analytics and Applications, 26–32. Wiesbaden: Springer Fachmedien Wiesbaden, 2022. http://dx.doi.org/10.1007/978-3-658-36295-9_4.

6

Huron, David. "Sources and Images". W Voice Leading. The MIT Press, 2016. http://dx.doi.org/10.7551/mitpress/9780262034852.003.0003.

Abstract:
An introduction to the perception of sound is given, with special emphasis on topics useful for understanding the organization of music. The chapter covers essential concepts in acoustics and auditory perception, including basic auditory anatomy and physiology. Core concepts are defined such as vibrational mode, pure tone, complex tone, partial, harmonic, cochlea, basilar membrane, resolved partial, auditory image, auditory stream, acoustic scene, auditory scene, and auditory scene analysis.
7

"An Auditory Scene Analysis Approach to Monaural Speech Segregation". W Topics in Acoustic Echo and Noise Control, 485–515. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-33213-8_12.

8

Huron, David. "The Cultural Connection". W Voice Leading. The MIT Press, 2016. http://dx.doi.org/10.7551/mitpress/9780262034852.003.0015.

Abstract:
The disposition to parse auditory scenes is probably an evolved innate behavior. However, the means by which this is achieved likely involves a mix of innate and learned mechanisms. This chapter reviews research showing how the sonic environment plays a formative role in various aspects of auditory processing. What we commonly hear shapes how we hear sounds. For example, research shows that how musicians hear pitch is affected by what instrument they play. Even the language you speak has an impact on how you hear. It is wrong to assume that everyone parses an acoustic scene in the same way. In general, the research suggests that cultural background and individual experience may be directly relevant to our understanding of auditory scene analysis, and hence to voice leading.
9

Epstein, Hugh. "An Audible World." In Hardy, Conrad and the Senses, 139–92. Edinburgh University Press, 2019. http://dx.doi.org/10.3366/edinburgh/9781474449861.003.0005.

Abstract:
The chapter opens by contrasting the human capacities for audition as opposed to vision, and the qualities conveyed by sound as opposed to those by light. Despite the general acceptance of wave theory, from the nineteenth century through to today issues of auditory location, transmission and reception remain contested. The ‘auditory scene analysis’ conducted by the novels in this study sees/hears them as participating in this ontological and epistemological uncertainty. Both The Return of the Native and ‘Heart of Darkness’ powerfully evoke densely enveloping closed systems that are examined in terms of their circulating sounds, ‘acoustic pictures’ raised upon the air by sighs in Hardy and whispers in Conrad. Whilst the discussion of ‘Heart of Darkness’ shows that it is an individual voice, and particularly its ‘cry’, which provides a guiding thread for Marlow, when the chapter moves on to sound in Nostromo it is the ambient noise of a historically evolving modernity that carries the theme of the reach of ‘material interests’. Sounds, conceived as units of shock, provide the agitated fabric of this novel of jolts and collisions.
10

Pisano, Giusy. "In Praise of the Sound Dissolve: Evanescences, Uncertainties, Fusions, Resonances." In Indefinite Visions, translated by Elise Harris and Martine Beugnet. Edinburgh University Press, 2017. http://dx.doi.org/10.3366/edinburgh/9781474407120.003.0007.

Abstract:
Blurred voices, undefined and acousmatic; noise transformed into music; the backdrop of a world of imperceptible yet present sound; gimmicks brought about as quickly as they disappear; the fading of words, noise, and music; reverberations . . . These are the common practices of the mise en scène of sound, both in so-called auteur cinema and in more popular cinema. Yet such practices are more generally attributed to images, while sounds are relegated to the more concrete, enclosed and well-defined. This chapter interrogates the causes of a misunderstanding: the confusion of the principles of reproduction, fidelity and real sounds and the inability of theoretical categorisations to analyse the mise en scène of sound.

Conference papers on the topic "Acoustic Scene Analysis"

1

Imoto, Keisuke, Yasunori Ohishi, Hisashi Uematsu, and Hitoshi Ohmuro. "Acoustic scene analysis based on latent acoustic topic and event allocation." In 2013 IEEE International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2013. http://dx.doi.org/10.1109/mlsp.2013.6661957.

2

Imoto, Keisuke, and Nobutaka Ono. "Acoustic scene analysis from acoustic event sequence with intermittent missing event." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7177951.

3

Wang, Weimin, Weiran Wang, Ming Sun, and Chao Wang. "Acoustic Scene Analysis with Multi-Head Attention Networks." In Interspeech 2020. ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-1342.

4

Imoto, Keisuke, and Nobutaka Ono. "Online acoustic scene analysis based on nonparametric Bayesian model." In 2016 24th European Signal Processing Conference (EUSIPCO). IEEE, 2016. http://dx.doi.org/10.1109/eusipco.2016.7760396.

5

Kwon, Homin, Harish Krishnamoorthi, Visar Berisha, and Andreas Spanias. "A sensor network for real-time acoustic scene analysis." In 2009 IEEE International Symposium on Circuits and Systems - ISCAS 2009. IEEE, 2009. http://dx.doi.org/10.1109/iscas.2009.5117712.

6

Basbug, Ahmet Melih, and Mustafa Sert. "Analysis of Deep Neural Network Models for Acoustic Scene Classification." In 2019 27th Signal Processing and Communications Applications Conference (SIU). IEEE, 2019. http://dx.doi.org/10.1109/siu.2019.8806301.

7

Ford, Logan, Hao Tang, François Grondin, and James Glass. "A Deep Residual Network for Large-Scale Acoustic Scene Analysis." In Interspeech 2019. ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-2731.

8

Sharma, Pulkit, Vinayak Abrol, and Anshul Thakur. "ASe: Acoustic Scene Embedding Using Deep Archetypal Analysis and GMM." In Interspeech 2018. ISCA, 2018. http://dx.doi.org/10.21437/interspeech.2018-1481.

9

Imoto, Keisuke, and Nobutaka Ono. "Spatial-feature-based acoustic scene analysis using distributed microphone array." In 2015 23rd European Signal Processing Conference (EUSIPCO). IEEE, 2015. http://dx.doi.org/10.1109/eusipco.2015.7362480.

10

Imoto, Keisuke. "Acoustic Scene Analysis Using Partially Connected Microphones Based on Graph Cepstrum". W 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553385.
