A selection of scholarly literature on the topic "Sound events"
Create a reference in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Sound events".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and a bibliographic reference to the selected work will be generated automatically in the citation style of your choice (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read an online annotation of the work, whenever such data is available in the item's metadata.
Journal articles on the topic "Sound events"
Elizalde, Benjamin. "Categorization of sound events for automatic sound event classification." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A364. http://dx.doi.org/10.1121/10.0019175.
Aura, Karine, Guillaume Lemaitre, and Patrick Susini. "Verbal imitations of sound events enable recognition of the imitated sound events." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3414. http://dx.doi.org/10.1121/1.2934144.
Nishida, Tsuruyo, Kazuhiko Kakehi, and Takamasa Kyutoku. "Motion perception of the target sound event under the discriminated two sound events." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3080. http://dx.doi.org/10.1121/1.4787419.
Nakayama, Tsumugi, Taisuke Naito, Shunsuke Kouda, and Takatoshi Yokota. "Determining disturbance sounds in aircraft sound events using a CNN-based method." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 7 (November 30, 2023): 1320–28. http://dx.doi.org/10.3397/in_2023_0196.
Hara, Sunao, and Masanobu Abe. "Predictions for sound events and soundscape impressions from environmental sound using deep neural networks." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 3 (November 30, 2023): 5239–50. http://dx.doi.org/10.3397/in_2023_0739.
Maruyama, Hironori, Kosuke Okada, and Isamu Motoyoshi. "A two-stage spectral model for sound texture perception: Synthesis and psychophysics." i-Perception 14, no. 1 (January 2023): 204166952311573. http://dx.doi.org/10.1177/20416695231157349.
Domazetovska, Simona, Viktor Gavriloski, Maja Anachkova, and Zlatko Petreski. "Urban sound recognition using different feature extraction techniques." Facta Universitatis, Series: Automatic Control and Robotics 20, no. 3 (December 18, 2021): 155. http://dx.doi.org/10.22190/fuacr211015012d.
Martinek, Jozef, P. Klco, M. Vrabec, T. Zatko, M. Tatar, and M. Javorka. "Cough Sound Analysis." Acta Medica Martiniana 13, Supplement-1 (March 1, 2013): 15–20. http://dx.doi.org/10.2478/acm-2013-0002.
Heck, Jonas, Josep Llorca-Bofí, Christian Dreier, and Michael Vorlaender. "Validation of auralized impulse responses considering masking, loudness and background noise." Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A178. http://dx.doi.org/10.1121/10.0027231.
Kim, Yunbin, Jaewon Sa, Yongwha Chung, Daihee Park, and Sungju Lee. "Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data." Sensors 18, no. 11 (November 18, 2018): 4019. http://dx.doi.org/10.3390/s18114019.
Dissertations and theses on the topic "Sound events"
Hay, Timothy Deane. "MAX-DOAS measurements of bromine explosion events in McMurdo Sound, Antarctica." Thesis, University of Canterbury, Physics and Astronomy, 2010. http://hdl.handle.net/10092/5394.
Giannoulis, Dimitrios. "Recognition of sound sources and acoustic events in music and environmental audio." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/9130.
Papetti, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools." Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.
The work presented in this thesis ranges over a variety of research topics, spanning from human-computer interaction to physical modeling. What unites such broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena to provide human-computer interfaces with sound feedback that is consistent with the user interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the name of -- to cite a few -- auditory display, sonification, and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, it takes up the physical modeling of everyday sounds, that is, sounds not belonging to the families of speech and musical sounds.
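Contact-event models of the kind this abstract describes are often approximated in practice by modal synthesis, in which an impact excites a bank of exponentially decaying sinusoids. A minimal illustrative sketch follows; the mode frequencies, decay rates, and amplitudes are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def modal_impact(freqs, decays, amps, dur=0.5, sr=44100):
    """Sum of exponentially damped sinusoids: a toy model of an impact sound."""
    t = np.arange(int(dur * sr)) / sr
    sound = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
                for f, d, a in zip(freqs, decays, amps))
    return sound / np.max(np.abs(sound))  # normalize to [-1, 1]

# Hypothetical modes loosely evoking a struck metal object
click = modal_impact(freqs=[523.0, 1413.0, 2421.0],
                     decays=[8.0, 14.0, 25.0],
                     amps=[1.0, 0.6, 0.3])
```

Interactive sonification engines derive such modes from physical parameters (material, geometry, impact force) rather than hard-coding them as done here.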
Olvera Zambrano, Mauricio Michel. "Robust sound event detection." Electronic thesis or dissertation, Université de Lorraine, 2022. http://www.theses.fr/2022LORR0324.
From industry to general-interest applications, computational analysis of sound scenes and events allows us to interpret the continuous flow of everyday sounds. One of the main degradations encountered when moving from lab conditions to the real world is due to the fact that sound scenes are not composed of isolated events but of multiple simultaneous events. Differences between training and test conditions also often arise due to extrinsic factors, such as the choice of recording hardware and microphone positions, as well as intrinsic factors of sound events, such as their frequency of occurrence, duration, and variability. In this thesis, we investigate problems of practical interest for audio analysis tasks to achieve robustness in real scenarios.
Firstly, we explore the separation of ambient sounds in a practical scenario in which multiple short-duration sound events with fast-varying spectral characteristics (i.e., foreground sounds) occur simultaneously with stationary background sounds. We introduce the foreground-background ambient sound separation task and investigate whether a deep neural network with auxiliary information about the statistics of the background sound can differentiate between rapidly and slowly varying spectro-temporal characteristics. Moreover, we explore the use of per-channel energy normalization (PCEN) as a suitable pre-processing step and the ability of the separation model to generalize to unseen sound classes. Results on mixtures of isolated sounds from the DESED and AudioSet datasets demonstrate the generalization capability of the proposed separation system, which is mainly due to PCEN.
Secondly, we investigate how to improve the robustness of audio analysis systems under mismatched training and test conditions. We explore two distinct tasks: acoustic scene classification (ASC) with mismatched recording devices, and training of sound event detection (SED) systems with synthetic and real data.
In the context of ASC, without assuming the availability of recordings captured simultaneously by mismatched training and test recording devices, we assess the impact of moment normalization and matching strategies and their integration with unsupervised adversarial domain adaptation. Our results show the benefits and limitations of these adaptation strategies applied at different stages of the classification pipeline. The best strategy matches source-domain performance in the target domain.
In the context of SED, we propose a PCEN-based acoustic front-end with learned parameters. Then, we study the joint training of SED with auxiliary classification branches that categorize sounds as foreground or background according to their spectral properties. We also assess the impact of aligning the distributions of synthetic and real data at the frame or segment level based on optimal transport. Finally, we integrate an active learning strategy into the adaptation procedure. Results on the DESED dataset indicate that these methods are beneficial for the SED task and that their combination further improves performance on real sound scenes.
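Per-channel energy normalization, the pre-processing step credited in the abstract above for the system's generalization, replaces static log compression with an adaptive gain driven by a temporally smoothed energy estimate. A minimal NumPy sketch, using commonly cited default parameters rather than values from the thesis:

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a (freq, time) spectrogram E.

    M is a first-order IIR smoother over time; the gain term E / (eps + M)^alpha
    then feeds a root compression (x + delta)^r - delta^r.
    """
    M = np.empty_like(E)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

spec = np.abs(np.random.default_rng(0).normal(size=(64, 100))) ** 2
out = pcen(spec)
```

For practical use, librosa ships a tested implementation as `librosa.pcen`.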
Beeferman, Leah. "Journeys into the unknown: A series of science architecture tasks and events, space-bound explorations and far-travels, discoveries and misses (near and far), imaginative space-gazing and related investigations, observations, orbits, and other repetitious monitoring tasks." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2164.
Vesperini, Fabio. "Deep Learning for Sound Event Detection and Classification." Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263536.
Recent progress in acoustic signal processing and machine learning techniques has enabled the development of innovative technologies for automatic analysis of sound events. In particular, one of the most popular approaches to this problem nowadays relies on the exploitation of deep learning techniques; on several occasions, neural architectures originally designed for other multimedia domains have been successfully applied to the audio signal. Indeed, although these tasks were long addressed with statistical modeling algorithms such as Gaussian Mixture Models, Hidden Markov Models, or Support Vector Machines, the breakthrough of machine learning for audio processing has led to encouraging results. Hence, this thesis reports an up-to-date state of the art and proposes several reliable DNN-based methods for Sound Event Detection (SED) and Sound Event Classification (SEC), with an overview of the Deep Neural Network (DNN) architectures used for this purpose and of the evaluation procedures and metrics used in this research field. In line with the recent trend toward extensive use of Convolutional Neural Networks (CNNs) for both SED and SEC tasks, this work also reports rather new approaches based on the Siamese DNN architecture or the novel Capsule computational units. Most of the reported systems were designed for international challenges. This allowed access to public datasets and made it possible to compare systems proposed by the most competitive research teams on a common basis. The case studies reported in this dissertation refer to applications in a variety of scenarios, ranging from unobtrusive health monitoring and audio-based surveillance to bio-acoustic monitoring and classification of road surface conditions. These tasks face numerous challenges, particularly related to their application in real-life environments.
Among these issues are dataset imbalance, different acquisition setups, acoustic disturbances (i.e., background noise, reverberation, and cross-talk), and polyphony. In particular, since multiple events are very likely to overlap in real-life audio, two algorithms for polyphonic SED are reported in this thesis. A polyphonic SED algorithm can be considered a system able to perform simultaneous detection (determining the onset and offset times of the sound events) and classification (assigning a label to each of the events occurring in the audio stream).
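The detection half of this formulation, turning frame-wise class probabilities into labelled onset/offset events, can be sketched with simple per-class thresholding. The threshold and frame hop below are illustrative assumptions; real systems typically also smooth the activity curves, e.g. with a median filter.

```python
import numpy as np

def frames_to_events(probs, labels, thr=0.5, hop=0.02):
    """Convert a (frames, classes) probability matrix into
    (onset_sec, offset_sec, label) tuples by thresholding each class track."""
    events = []
    for c, label in enumerate(labels):
        active = probs[:, c] >= thr
        # pad with zeros so every active run has a rising and a falling edge
        edges = np.flatnonzero(np.diff(np.r_[0, active.astype(int), 0]))
        for on, off in zip(edges[::2], edges[1::2]):
            events.append((on * hop, off * hop, label))
    return events

p = np.zeros((10, 2))
p[2:5, 0] = 0.9   # 'dog_bark' active on frames 2-4
p[0:3, 1] = 0.8   # 'siren' active on frames 0-2
events = frames_to_events(p, ["dog_bark", "siren"])
```

Because each class track is processed independently, overlapping (polyphonic) events are recovered naturally: the toy input above yields one "dog_bark" and one "siren" event that overlap in time.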
Jackson, Asti Joy. „Structure of Sound“. Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73778.
Master of Architecture
Fonseca, Eduardo. "Training sound event classifiers using different types of supervision." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/673067.
Interest in automatic sound event recognition has grown in recent years, motivated by new applications in fields such as healthcare, smart homes, and urbanism. At the start of this thesis, research on sound event classification focused mainly on supervised learning with small datasets, often carefully annotated with vocabularies limited to specific domains (such as urban or domestic). However, such datasets do not allow training classifiers able to recognize the hundreds of sound events that occur around us, such as kettle whistles, bird sounds, passing cars, or different alarms. At the same time, websites such as Freesound or YouTube host large amounts of environmental sound data, which can be useful for training classifiers with a larger vocabulary, particularly using deep learning methods that require large amounts of data. To advance the state of the art in sound event classification, this thesis investigates several aspects of dataset creation, as well as supervised and unsupervised learning, to train sound event classifiers with a large vocabulary, using different types of supervision in novel and alternative ways. Specifically, we focus on supervised learning with clean and noisy labels, as well as on self-supervised representation learning from unlabeled data. The first part of this thesis focuses on the creation of FSD50K, a dataset with over 100 hours of audio manually labeled with 200 sound event classes. We present a detailed description of the creation process and an exhaustive characterization of the dataset.
In addition, we explore architectural modifications to increase shift invariance in CNNs, improving robustness to time/frequency shifts in the input spectrograms. In the second part, we focus on training sound event classifiers with noisy labels. First, we propose a dataset that enables research on real label noise. Then, we explore network-agnostic methods to mitigate the effect of label noise during training, including regularization techniques, noise-robust loss functions, and strategies to reject noisily labeled examples. In addition, we develop a teacher-student method to address the problem of missing labels in sound event datasets. In the third part, we propose algorithms to learn audio representations from unlabeled data. In particular, we develop self-supervised contrastive learning methods in which representations are learned by comparing pairs of examples computed via data augmentation and automatic sound separation. Finally, we report on the organization of two DCASE Challenge Tasks on automatic audio tagging from noisy labels. By proposing datasets as well as state-of-the-art methods and audio representations, this thesis contributes to the advancement of open research on sound events and to the transition from traditional supervised learning with clean labels to other learning strategies less dependent on costly annotation efforts.
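As a concrete example of the family of noise-robust loss functions mentioned above, the generalized cross-entropy of Zhang and Sabuncu interpolates between cross-entropy (as q approaches 0) and the more noise-tolerant mean absolute error (q = 1). The sketch below uses the common default q = 0.7, which is an assumption here, not necessarily the setting explored in the thesis.

```python
import numpy as np

def generalized_cross_entropy(probs, targets, q=0.7):
    """L_q loss: mean of (1 - p_y^q) / q over examples.

    p_y is the predicted probability of the labeled class. The loss
    down-weights the gradient of confidently mislabeled examples
    relative to plain cross-entropy.
    """
    p_y = probs[np.arange(len(targets)), targets]
    return np.mean((1.0 - p_y ** q) / q)

probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
loss = generalized_cross_entropy(probs, np.array([0, 1]))
```

At q = 1 the expression reduces exactly to the mean absolute error 1 - p_y, which is why the single parameter q trades off convergence speed against label-noise tolerance.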
Pahar, Madhurananda. "A novel sound reconstruction technique based on a spike code (event) representation." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/23025.
Labbé, Etienne. "Description automatique des événements sonores par des méthodes d'apprentissage profond" [Automatic description of sound events using deep learning methods]. Electronic thesis or dissertation, Université de Toulouse, 2024. http://www.theses.fr/2024TLSES054.
In the audio research field, the majority of machine learning systems focus on recognizing a limited number of sound events. However, when a machine interacts with real data, it must be able to handle much more varied and complex situations. To tackle this problem, annotators use natural language, which allows any sound information to be summarized. Automated Audio Captioning (AAC) was introduced recently to develop systems capable of automatically producing a description of any type of sound in text form. This task concerns all kinds of sound events, such as environmental, urban, and domestic sounds, sound effects, music, and speech. Such a system could be used by people who are deaf or hard of hearing, and could improve the indexing of large audio databases. In the first part of this thesis, we present the state of the art of the AAC task through a global description of public datasets, learning methods, architectures, and evaluation metrics. Using this knowledge, we then present the architecture of our first AAC system, which obtains encouraging scores on the main AAC metric, named SPIDEr: 24.7% on the Clotho corpus and 40.1% on the AudioCaps corpus. In the second part, we explore many aspects of AAC systems. We first focus on evaluation methods through a study of SPIDEr. For this, we propose a variant called SPIDEr-max, which considers several candidates for each audio file and shows that the SPIDEr metric is very sensitive to the predicted words. Then, we improve our reference system by exploring different architectures and numerous hyperparameters to exceed the state of the art on AudioCaps (SPIDEr of 49.5%). Next, we explore a multi-task learning method aimed at improving the semantics of the sentences generated by our system. Finally, we build a general and unbiased AAC system called CONETTE, which can generate different types of descriptions that approximate those of the target datasets.
In the third and last part, we propose to study the capabilities of an AAC system to automatically search for audio content in a database. Our approach obtains scores competitive with systems dedicated to this task, while using fewer parameters. We also introduce semi-supervised methods to improve our system using new unlabeled audio data, and we show how pseudo-label generation can impact an AAC model. Finally, we study AAC systems in languages other than English: French, Spanish, and German. In addition, we propose a system capable of producing all four languages at the same time, and we compare it with systems specialized in each language.
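In the captioning literature, SPIDEr is defined as the mean of the SPICE and CIDEr scores of a candidate caption, and SPIDEr-max, as described in the abstract above, considers several candidate captions per audio file. A natural reading, sketched below as an assumption rather than the thesis's exact procedure, is to keep the best-scoring candidate; the component SPICE/CIDEr values are placeholders.

```python
def spider(spice, cider):
    """SPIDEr score of one candidate: mean of its SPICE and CIDEr scores."""
    return 0.5 * (spice + cider)

def spider_max(candidates):
    """SPIDEr-max style aggregation: best SPIDEr over several candidates
    for one audio file.

    `candidates` is a list of (spice, cider) pairs, e.g. one per beam.
    """
    return max(spider(sp, ci) for sp, ci in candidates)

# Hypothetical per-candidate scores for a single audio file
best = spider_max([(0.10, 0.35), (0.14, 0.45), (0.08, 0.30)])
```

Comparing `best` with the SPIDEr of a single greedy output makes the metric's sensitivity to word choice visible: nearby candidates can score very differently.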
Books on the topic "Sound events"
Virtanen, Tuomas, Mark D. Plumbley, and Dan Ellis, eds. Computational Analysis of Sound Scenes and Events. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63450-0.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2004.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2005.
Thomas, Jeremy. Taking leave. London: Timewell, 2006.
Radio New Zealand. Catalogue of Radio New Zealand recordings of Maori events, 1938-1950: RNZ 1-60. Auckland: Archive of Maori and Pacific Music, Anthropology Dept., University of Auckland, 1991.
British Broadcasting Corporation. Equestrian events. Princeton, N.J.: Films for the Humanities & Sciences, 1991.
Taylor, Fred. What, and Give Up Showbiz?: Six Decades in the Music Business. Blue Ridge Summit: Backbeat, 2020.
Cai, Wenyi. Yi tian 10 fen zhong, ying zhan xin wen Ying wen: Yue du, ting li, yu hui neng li yi ci yang cheng! 8th ed. Taibei Shi: Kai xin qi ye guan li gu wen you xian gong si, 2015.
Marchetta, Vittorio. Passaggi di sound design: Riflessioni, competenze, oggetti-eventi. Milano: F. Angeli, 2010.
Basile, Giuseppe. '80, new sound, new wave: Vita, musica ed eventi nella provincia italiana degli anni '80. Taranto: Geophonìe, 2007.
Book chapters on the topic "Sound events"
Toole, Floyd E. "Above the Transition Frequency: Acoustical Events and Perceptions." In Sound Reproduction, 3rd ed., 157–214. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-7.
Toole, Floyd E. "Below the Transition Frequency: Acoustical Events and Perceptions." In Sound Reproduction, 3rd ed., 215–62. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-8.
Guastavino, Catherine. "Everyday Sound Categorization." In Computational Analysis of Sound Scenes and Events, 183–213. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_7.
Font, Frederic, Gerard Roma, and Xavier Serra. "Sound Sharing and Retrieval." In Computational Analysis of Sound Scenes and Events, 279–301. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_10.
Stein, Peter J. "Observation of the Sound Radiated by Individual Ice Fracturing Events." In Sea Surface Sound, 533–44. Dordrecht: Springer Netherlands, 1988. http://dx.doi.org/10.1007/978-94-009-3017-9_38.
Bello, Juan Pablo, Charlie Mydlarz, and Justin Salamon. "Sound Analysis in Smart Cities." In Computational Analysis of Sound Scenes and Events, 373–97. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_13.
Loar, Josh. "Conventions and Other Multi-room Live Events." In The Sound System Design Primer, 417–21. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781315196817-44.
Serizel, Romain, Victor Bisot, Slim Essid, and Gaël Richard. "Acoustic Features for Environmental Sound Analysis." In Computational Analysis of Sound Scenes and Events, 71–101. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_4.
Benetos, Emmanouil, Dan Stowell, and Mark D. Plumbley. "Approaches to Complex Sound Scene Analysis." In Computational Analysis of Sound Scenes and Events, 215–42. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_8.
Theodorou, Theodoros, Iosif Mporas, and Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events." In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Sound events"
Hill, A. J., J. Mulder, J. Burton, M. Kok, and M. Lawrence. "A critical analysis of sound level monitoring methods at live events." In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14142.
Hourani, C., and A. J. Hill. "Towards a subjective quantification of noise annoyance due to outdoor events." In Reproduced Sound 2023. Institute of Acoustics, 2023. http://dx.doi.org/10.25144/16927.
Miyazaki, Koichi, Tomoki Hayashi, Tomoki Toda, and Kazuya Takeda. "Connectionist Temporal Classification-based Sound Event Encoder for Converting Sound Events into Onomatopoeic Representations." In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553374.
Wheeler, P., D. Sharp, and S. Taherzadeh. "An evaluation of UK and international guidance for the control of noise at outdoor events." In Reproduced Sound 2020. Institute of Acoustics, 2020. http://dx.doi.org/10.25144/13383.
Burton, J., and A. J. Hill. "Using cognitive psychology and neuroscience to better inform sound system design at large musical events." In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14148.
Imoto, Keisuke, Noriyuki Tonami, Yuma Koizumi, Masahiro Yasuda, Ryosuke Yamanishi, and Yoichi Yamashita. "Sound Event Detection by Multitask Learning of Sound Events and Scenes with Soft Scene Labels." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053912.
Zhang, Maosheng, Ruimin Hu, Shihong Chen, Xiaochen Wang, Dengshi Li, and Lin Jiang. "Spatial perception reproduction of sound events based on sound property coincidences." In 2015 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2015. http://dx.doi.org/10.1109/icme.2015.7177412.
Stanzial, Domenico, Giorgio Sacchi, and Giuliano Schiffrer. "Active playback of acoustic quadraphonic sound events." In 155th Meeting Acoustical Society of America. ASA, 2008. http://dx.doi.org/10.1121/1.2992204.
Kumar, Anurag, Ankit Shah, Alexander Hauptmann, and Bhiksha Raj. "Learning Sound Events from Webly Labeled Data." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/384.
Reports by organizations on the topic "Sound events"
Wilson, D. K., V. A. Nguyen, Nassy Srour, and John Noble. Sound Exposure Calculations for Transient Events and Other Improvements to an Acoustical Tactical Decision Aid. Fort Belvoir, VA: Defense Technical Information Center, August 2002. http://dx.doi.org/10.21236/ada406703.
Wilson, D., Vladimir Ostashev, Michael Shaw, Michael Muhlestein, John Weatherly, Michelle Swearingen, and Sarah McComas. Infrasound Propagation in the Arctic. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42683.
Albright, Jeff, Kim Struthers, Lisa Baril, and Mark Brunson. Natural Resource Conditions at Valles Caldera National Preserve: Findings & Management Considerations for Selected Resources. National Park Service, June 2022. http://dx.doi.org/10.36967/nrr-2293731.
Yatsymirska, Mariya. Modern Media Text: Political Narratives, Meanings and Senses, Emotional Markers. Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11411.
Masiero, Bruno, Marcio Henrique de Avelar, and William D'Andrea Fonseca. International Year of Sound 2020 & 2021: Ano Internacional do Som prorrogado até 2021. William D'Andrea Fonseca, July 2020. http://dx.doi.org/10.55753/aev.v35e52.15.
Michalski, Ranny L. X. N., Bruno Masiero, William D'Andrea Fonseca, and Márcio Avelar. Fim do Ano Internacional do Som: Fechamento do Ano Internacional do Som 2020 & 2021. Revista Acústica e Vibrações, December 2021. http://dx.doi.org/10.55753/aev.v36e53.60.
Law, Edward, Samuel Gan-Mor, Hazel Wetzstein, and Dan Eisikowitch. Electrostatic Processes Underlying Natural and Mechanized Transfer of Pollen. United States Department of Agriculture, May 1998. http://dx.doi.org/10.32747/1998.7613035.bard.
Michelmore, Richard, Eviatar Nevo, Abraham Korol, and Tzion Fahima. Genetic Diversity at Resistance Gene Clusters in Wild Populations of Lactuca. United States Department of Agriculture, February 2000. http://dx.doi.org/10.32747/2000.7573075.bard.
Hayes. L51633 Investigation of WIC Test Variables. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 1991. http://dx.doi.org/10.55274/r0010107.
Kanninen, M. F. L51718 Development and Validation of a Ductile Fracture Analysis Model. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 1994. http://dx.doi.org/10.55274/r0010321.