Ready-made bibliography on the topic "Sound events"
Create an accurate reference in APA, MLA, Chicago, Harvard and many other styles
Table of contents
Browse lists of current articles, books, dissertations, abstracts and other scholarly sources on the topic "Sound events".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever the corresponding details are available in the source's metadata.
Journal articles on the topic "Sound events"
Elizalde, Benjamin. "Categorization of sound events for automatic sound event classification". Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A364. http://dx.doi.org/10.1121/10.0019175.
Aura, Karine, Guillaume Lemaitre and Patrick Susini. "Verbal imitations of sound events enable recognition of the imitated sound events". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3414. http://dx.doi.org/10.1121/1.2934144.
Nishida, Tsuruyo, Kazuhiko Kakehi and Takamasa Kyutoku. "Motion perception of the target sound event under the discriminated two sound events". Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3080. http://dx.doi.org/10.1121/1.4787419.
Nakayama, Tsumugi, Taisuke Naito, Shunsuke Kouda and Takatoshi Yokota. "Determining disturbance sounds in aircraft sound events using a CNN-based method". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 7 (November 30, 2023): 1320–28. http://dx.doi.org/10.3397/in_2023_0196.
Hara, Sunao, and Masanobu Abe. "Predictions for sound events and soundscape impressions from environmental sound using deep neural networks". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 3 (November 30, 2023): 5239–50. http://dx.doi.org/10.3397/in_2023_0739.
Maruyama, Hironori, Kosuke Okada and Isamu Motoyoshi. "A two-stage spectral model for sound texture perception: Synthesis and psychophysics". i-Perception 14, no. 1 (January 2023): 204166952311573. http://dx.doi.org/10.1177/20416695231157349.
Domazetovska, Simona, Viktor Gavriloski, Maja Anachkova and Zlatko Petreski. "URBAN SOUND RECOGNITION USING DIFFERENT FEATURE EXTRACTION TECHNIQUES". Facta Universitatis, Series: Automatic Control and Robotics 20, no. 3 (December 18, 2021): 155. http://dx.doi.org/10.22190/fuacr211015012d.
Martinek, Jozef, P. Klco, M. Vrabec, T. Zatko, M. Tatar and M. Javorka. "Cough Sound Analysis". Acta Medica Martiniana 13, Supplement-1 (March 1, 2013): 15–20. http://dx.doi.org/10.2478/acm-2013-0002.
Heck, Jonas, Josep Llorca-Bofí, Christian Dreier and Michael Vorlaender. "Validation of auralized impulse responses considering masking, loudness and background noise". Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A178. http://dx.doi.org/10.1121/10.0027231.
Kim, Yunbin, Jaewon Sa, Yongwha Chung, Daihee Park and Sungju Lee. "Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data". Sensors 18, no. 11 (November 18, 2018): 4019. http://dx.doi.org/10.3390/s18114019.
Doctoral dissertations on the topic "Sound events"
Hay, Timothy Deane. "MAX-DOAS measurements of bromine explosion events in McMurdo Sound, Antarctica". Thesis, University of Canterbury. Physics and Astronomy, 2010. http://hdl.handle.net/10092/5394.
Giannoulis, Dimitrios. "Recognition of sound sources and acoustic events in music and environmental audio". Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/9130.
PAPETTI, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools". Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.
The work presented in this thesis ranges over a variety of research topics, spanning from human-computer interaction to physical modeling. What connects such broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena in order to provide human-computer interfaces with sound feedback that is consistent with the user interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the name of -- to cite a few -- auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, the physical modeling of everyday sounds is taken into account, that is, sounds not belonging to the families of speech and musical sounds.
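The physically based simulation of contact events that this abstract describes can be illustrated with the simplest such model, modal synthesis: an impact excites a set of resonant modes, each rendered as an exponentially damped sinusoid. A minimal sketch follows; the modal frequencies, amplitudes and decay rates are illustrative values, not parameters taken from the thesis.

```python
import numpy as np

def impact_sound(freqs, amps, decays, sr=16000, dur=0.5):
    """Minimal modal synthesis of an impact: a sum of exponentially
    damped sinusoids, one mode per resonant frequency of the
    struck object."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, a, d in zip(freqs, amps, decays):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out

# A small metallic tap (hypothetical modal parameters):
y = impact_sound([440.0, 1130.0, 2210.0], [1.0, 0.5, 0.25], [8.0, 12.0, 20.0])
```

Stiffer or larger objects would be modeled simply by changing the mode frequencies and decay rates, which is what makes this family of models attractive for interactive sonification.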
Olvera, Zambrano Mauricio Michel. "Robust sound event detection". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0324.
From industry to general interest applications, computational analysis of sound scenes and events allows us to interpret the continuous flow of everyday sounds. One of the main degradations encountered when moving from lab conditions to the real world is due to the fact that sound scenes are not composed of isolated events but of multiple simultaneous events. Differences between training and test conditions also often arise due to extrinsic factors such as the choice of recording hardware and microphone positions, as well as intrinsic factors of sound events, such as their frequency of occurrence, duration and variability. In this thesis, we investigate problems of practical interest for audio analysis tasks to achieve robustness in real scenarios. Firstly, we explore the separation of ambient sounds in a practical scenario in which multiple short-duration sound events with fast-varying spectral characteristics (i.e., foreground sounds) occur simultaneously with stationary background sounds. We introduce the foreground-background ambient sound separation task and investigate whether a deep neural network with auxiliary information about the statistics of the background sound can differentiate between rapidly and slowly varying spectro-temporal characteristics. Moreover, we explore the use of per-channel energy normalization (PCEN) as a suitable pre-processing step and the ability of the separation model to generalize to unseen sound classes. Results on mixtures of isolated sounds from the DESED and AudioSet datasets demonstrate the generalization capability of the proposed separation system, which is mainly due to PCEN. Secondly, we investigate how to improve the robustness of audio analysis systems under mismatched training and test conditions.
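PCEN, which the abstract credits for much of the generalization ability, can be sketched in a few lines: a per-channel smoothed energy M acts as an automatic gain control before root compression. A minimal NumPy sketch with illustrative parameter values (not the exact settings used in the thesis):

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a spectrogram E (freq x time).

    A smoothed energy M is computed per channel with a first-order IIR
    filter, then used as an automatic gain control on E before
    root compression."""
    M = np.empty_like(E)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

# Example on a random mel-like energy spectrogram:
rng = np.random.default_rng(0)
E = rng.random((64, 100)) + 0.1
P = pcen(E)
```

Because the gain term tracks each channel's own recent energy, stationary backgrounds are flattened while transient foreground events are preserved, which is precisely the foreground/background distinction the separation task relies on.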
We explore two distinct tasks: acoustic scene classification (ASC) with mismatched recording devices and training of sound event detection (SED) systems with synthetic and real data. In the context of ASC, without assuming the availability of recordings captured simultaneously by mismatched training and test recording devices, we assess the impact of moment normalization and matching strategies and their integration with unsupervised adversarial domain adaptation. Our results show the benefits and limitations of these adaptation strategies applied at different stages of the classification pipeline. The best strategy matches source domain performance in the target domain. In the context of SED, we propose a PCEN-based acoustic front-end with learned parameters. Then, we study the joint training of SED with auxiliary classification branches that categorize sounds as foreground or background according to their spectral properties. We also assess the impact of aligning the distributions of synthetic and real data at the frame or segment level based on optimal transport. Finally, we integrate an active learning strategy into the adaptation procedure. Results on the DESED dataset indicate that these methods are beneficial for the SED task and that their combination further improves performance on real sound scenes.
Beeferman, Leah. "JOURNEYS INTO THE UNKNOWN: A SERIES OF SCIENCE ARCHITECTURE TASKS AND EVENTS, SPACE-BOUND EXPLORATIONS AND FAR-TRAVELS, DISCOVERIES AND MISSES (NEAR AND FAR), IMAGINATIVE SPACE-GAZING AND RELATED INVESTIGATIONS, OBSERVATIONS, ORBITS, AND OTHER REPETITIOUS MONITORING TASKS". VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2164.
VESPERINI, FABIO. "Deep Learning for Sound Event Detection and Classification". Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263536.
The recent progress in acoustic signal processing and machine learning techniques has enabled the development of innovative technologies for automatic analysis of sound events. In particular, one of the most popular approaches to this problem nowadays relies on the exploitation of deep learning techniques; as further proof, on several occasions neural architectures originally designed for other multimedia domains have been successfully applied to the audio signal. Indeed, although these tasks were long addressed by statistical modelling algorithms such as Gaussian Mixture Models, Hidden Markov Models or Support Vector Machines, the breakthrough of machine learning for audio processing has led to encouraging results on the addressed tasks. Hence, this thesis reports an up-to-date state of the art and proposes several reliable DNN-based methods for Sound Event Detection (SED) and Sound Event Classification (SEC), with an overview of the Deep Neural Network (DNN) architectures used for this purpose and of the evaluation procedures and metrics used in this research field. In line with the recent trend, which shows an extensive employment of Convolutional Neural Networks (CNNs) for both SED and SEC tasks, this work also reports rather new approaches based on the Siamese DNN architecture or the novel Capsule computational units. Most of the reported systems were designed for international challenges. This allowed access to public datasets and comparison with the systems proposed by the most competitive research teams on a common basis. The case studies reported in this dissertation refer to applications in a variety of scenarios, ranging from unobtrusive health monitoring and audio-based surveillance to bio-acoustic monitoring and classification of road surface conditions. These tasks face numerous challenges, particularly related to their application in real-life environments.
These issues include unbalanced datasets, different acquisition setups, acoustic disturbances (i.e., background noise, reverberation and cross-talk) and polyphony. In particular, since multiple events are very likely to overlap in real-life audio, two algorithms for polyphonic SED are reported in this thesis. A polyphonic SED algorithm can be considered a system able to perform simultaneous detection - determining the onset and offset times of the sound events - and classification - assigning a label to each of the events occurring in the audio stream.
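The definition above (onset/offset detection plus labeling) maps directly onto the usual post-processing of a polyphonic SED model: threshold each class's frame-level activations and read off runs of active frames. A minimal sketch, with made-up probabilities and frame hop:

```python
import numpy as np

def frames_to_events(probs, classes, threshold=0.5, hop=0.02):
    """Convert frame-level class probabilities (classes x frames) into
    (label, onset_s, offset_s) tuples by thresholding each class and
    locating runs of consecutive active frames."""
    events = []
    for c, label in enumerate(classes):
        active = probs[c] >= threshold
        padded = np.concatenate(([False], active, [False]))
        edges = np.flatnonzero(padded[1:] != padded[:-1])
        for start, stop in zip(edges[::2], edges[1::2]):
            events.append((label, start * hop, stop * hop))
    return events

# Hypothetical output of a 2-class detector over 5 frames:
probs = np.array([
    [0.1, 0.9, 0.95, 0.2, 0.1],   # "speech" active in frames 1-2
    [0.8, 0.85, 0.1, 0.1, 0.7],   # "dog" overlaps "speech" in frame 1
])
events = frames_to_events(probs, ["speech", "dog"], hop=0.02)
# Three events in total; "dog" and "speech" overlap, i.e. polyphony.
```

Real systems typically add median filtering or minimum-duration constraints before this step, but the detection-plus-classification structure is the same.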
Jackson, Asti Joy. "Structure of Sound". Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73778.
Master of Architecture
Fonseca, Eduardo. "Training sound event classifiers using different types of supervision". Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/673067.
Interest in automatic recognition of sound events has increased in recent years, motivated by new applications in fields such as healthcare, smart homes, or urbanism. At the start of this thesis, research on sound event classification focused mainly on supervised learning using small datasets, often carefully annotated with vocabularies limited to specific domains (such as urban or domestic). However, such datasets do not allow the training of classifiers able to recognize the hundreds of sound events occurring in our environment, such as kettle whistles, bird sounds, passing cars, or different alarms. At the same time, websites such as Freesound or YouTube host large amounts of environmental sound data, which can be useful for training classifiers with a larger vocabulary, particularly using deep learning methods that require large amounts of data. To advance the state of the art in sound event classification, this thesis investigates several aspects of dataset creation, as well as supervised and unsupervised learning, to train sound event classifiers with a large vocabulary, using different types of supervision in novel and alternative ways. Specifically, we focus on supervised learning using clean and noisy labels, as well as self-supervised representation learning from unlabeled data. The first part of this thesis focuses on the creation of FSD50K, a dataset with over 100 hours of audio manually labeled with 200 sound event classes. We present a detailed description of the creation process and an exhaustive characterization of the dataset.
In addition, we explore architectural modifications to increase shift invariance in CNNs, improving robustness to time/frequency shifts in the input spectrograms. In the second part, we focus on training sound event classifiers using noisy labels. First, we propose a dataset that enables research on real label noise. Then, we explore network-architecture-agnostic methods to mitigate the effect of label noise during training, including regularization techniques, noise-robust loss functions, and strategies to reject noisily labeled examples. In addition, we develop a teacher-student method to address the problem of missing labels in sound event datasets. In the third part, we propose algorithms to learn audio representations from unlabeled data. In particular, we develop self-supervised contrastive learning methods, where representations are learned by comparing pairs of examples computed through data augmentation and automatic sound separation methods. Finally, we report on the organization of two DCASE Challenge Tasks on automatic audio tagging from noisy labels. By proposing datasets, as well as state-of-the-art methods and audio representations, this thesis contributes to the advancement of open sound event research and to the transition from traditional supervised learning using clean labels to other learning strategies less dependent on costly annotation efforts.
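One widely used member of the family of noise-robust loss functions mentioned in this abstract is the generalized cross-entropy (Lq) loss; the choice here is an illustration of the family, not necessarily the loss adopted in the thesis. It interpolates between cross-entropy (q → 0) and the more noise-tolerant mean absolute error (q = 1):

```python
import numpy as np

def lq_loss(probs, targets, q=0.7):
    """Generalized cross-entropy (Lq) loss: mean of (1 - p_y**q) / q,
    where p_y is the probability assigned to the labeled class.
    Unlike cross-entropy it is bounded, so a few confidently
    mispredicted (possibly mislabeled) examples cannot dominate
    the average gradient."""
    p_y = probs[np.arange(len(targets)), targets]
    return float(np.mean((1.0 - p_y ** q) / q))

# Two examples, labels 0 and 1, with made-up predicted probabilities:
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
targets = np.array([0, 1])
loss = lq_loss(probs, targets)
```

Lowering q makes training behave more like standard cross-entropy; raising it toward 1 trades some fitting power for robustness to label noise.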
Pahar, Madhurananda. "A novel sound reconstruction technique based on a spike code (event) representation". Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/23025.
Labbé, Etienne. "Description automatique des événements sonores par des méthodes d'apprentissage profond". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES054.
In the audio research field, the majority of machine learning systems focus on recognizing a limited number of sound events. However, when a machine interacts with real data, it must be able to handle much more varied and complex situations. To tackle this problem, annotators use natural language, which allows any sound information to be summarized. Automated Audio Captioning (AAC) was introduced recently to develop systems capable of automatically producing a description of any type of sound in text form. This task concerns all kinds of sound events such as environmental, urban and domestic sounds, sound effects, music or speech. Such a system could be used by people who are deaf or hard of hearing, and could improve the indexing of large audio databases. In the first part of this thesis, we present the state of the art of the AAC task through a global description of public datasets, learning methods, architectures and evaluation metrics. Using this knowledge, we then present the architecture of our first AAC system, which obtains encouraging scores on the main AAC metric, named SPIDEr: 24.7% on the Clotho corpus and 40.1% on the AudioCaps corpus. In the second part, we explore many aspects of AAC systems. We first focus on evaluation methods through the study of SPIDEr. For this, we propose a variant called SPIDEr-max, which considers several candidates for each audio file and shows that the SPIDEr metric is very sensitive to the predicted words. Then, we improve our reference system by exploring different architectures and numerous hyper-parameters to exceed the state of the art on AudioCaps (SPIDEr of 49.5%). Next, we explore a multi-task learning method aimed at improving the semantics of the sentences generated by our system. Finally, we build a general and unbiased AAC system called CONETTE, which can generate different types of descriptions that approximate those of the target datasets.
In the third and last part, we propose to study the capability of an AAC system to automatically search for audio content in a database. Our approach obtains scores competitive with systems dedicated to this task, while using fewer parameters. We also introduce semi-supervised methods to improve our system using new unlabeled audio data, and we show how pseudo-label generation can impact an AAC model. Finally, we study AAC systems in languages other than English: French, Spanish and German. In addition, we propose a system capable of producing all four languages at the same time, and we compare it with systems specialized in each language.
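The SPIDEr-max variant described in this abstract reduces to a simple aggregation: score every candidate caption per audio file, keep the best, and average over files. A sketch of that aggregation only (the scores below are made up; computing SPIDEr itself requires the underlying CIDEr-D and SPICE metrics):

```python
def spider_max(per_file_candidate_scores):
    """SPIDEr-max over a corpus: for each audio file, take the best
    SPIDEr score among its candidate captions, then average the
    per-file maxima over all files."""
    best_per_file = [max(scores) for scores in per_file_candidate_scores]
    return sum(best_per_file) / len(best_per_file)

# Two files, with three and two candidate captions respectively:
scores = [[0.21, 0.35, 0.18], [0.40, 0.12]]
result = spider_max(scores)
```

Comparing this value against the single-candidate score reveals how much the metric penalizes an unlucky choice of words among otherwise similar candidates.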
Books on the topic "Sound events"
Virtanen, Tuomas, Mark D. Plumbley and Dan Ellis, eds. Computational Analysis of Sound Scenes and Events. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63450-0.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2004.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2005.
Thomas, Jeremy. Taking leave. London: Timewell, 2006.
Radio New Zealand. Catalogue of Radio New Zealand recordings of Maori events, 1938-1950: RNZ 1-60. Auckland: Archive of Maori and Pacific Music, Anthropology Dept., University of Auckland, 1991.
British Broadcasting Corporation. Equestrian events. Princeton, N.J: Films for the Humanities & Sciences, 1991.
Taylor, Fred. What, and Give Up Showbiz?: Six Decades in the Music Business. Blue Ridge Summit: Backbeat, 2020.
Cai, Wenyi. Yi tian 10 fen zhong, ying zhan xin wen Ying wen: Yue du, ting li, yu hui neng li yi ci yang cheng! 8th ed. Taibei Shi: Kai xin qi ye guan li gu wen you xian gong si, 2015.
Marchetta, Vittorio. Passaggi di sound design: Riflessioni, competenze, oggetti-eventi. Milano: F. Angeli, 2010.
Basile, Giuseppe. ' 80, new sound, new wave: Vita, musica ed eventi nella provincia italiana degli anni '80. Taranto: Geophonìe, 2007.
Book chapters on the topic "Sound events"
Toole, Floyd E. "Above the Transition Frequency: Acoustical Events and Perceptions". In Sound Reproduction, 157–214. 3rd ed. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-7.
Toole, Floyd E. "Below the Transition Frequency: Acoustical Events and Perceptions". In Sound Reproduction, 215–62. 3rd ed. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-8.
Guastavino, Catherine. "Everyday Sound Categorization". In Computational Analysis of Sound Scenes and Events, 183–213. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_7.
Font, Frederic, Gerard Roma and Xavier Serra. "Sound Sharing and Retrieval". In Computational Analysis of Sound Scenes and Events, 279–301. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_10.
Stein, Peter J. "Observation of the Sound Radiated by Individual Ice Fracturing Events". In Sea Surface Sound, 533–44. Dordrecht: Springer Netherlands, 1988. http://dx.doi.org/10.1007/978-94-009-3017-9_38.
Bello, Juan Pablo, Charlie Mydlarz and Justin Salamon. "Sound Analysis in Smart Cities". In Computational Analysis of Sound Scenes and Events, 373–97. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_13.
Loar, Josh. "Conventions and Other Multi-room Live Events". In The Sound System Design Primer, 417–21. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781315196817-44.
Serizel, Romain, Victor Bisot, Slim Essid and Gaël Richard. "Acoustic Features for Environmental Sound Analysis". In Computational Analysis of Sound Scenes and Events, 71–101. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_4.
Benetos, Emmanouil, Dan Stowell and Mark D. Plumbley. "Approaches to Complex Sound Scene Analysis". In Computational Analysis of Sound Scenes and Events, 215–42. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_8.
Theodorou, Theodoros, Iosif Mporas and Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events". In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.
Conference abstracts on the topic "Sound events"
HILL, AJ, J. MULDER, J. BURTON, M. KOK and M. LAWRENCE. "A CRITICAL ANALYSIS OF SOUND LEVEL MONITORING METHODS AT LIVE EVENTS". In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14142.
HOURANI, C., and AJ HILL. "TOWARDS A SUBJECTIVE QUANTIFICATION OF NOISE ANNOYANCE DUE TO OUTDOOR EVENTS". In Reproduced Sound 2023. Institute of Acoustics, 2023. http://dx.doi.org/10.25144/16927.
Miyazaki, Koichi, Tomoki Hayashi, Tomoki Toda and Kazuya Takeda. "Connectionist Temporal Classification-based Sound Event Encoder for Converting Sound Events into Onomatopoeic Representations". In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553374.
Wheeler, P., D. Sharp and S. Taherzadeh. "AN EVALUATION OF UK AND INTERNATIONAL GUIDANCE FOR THE CONTROL OF NOISE AT OUTDOOR EVENTS". In REPRODUCED SOUND 2020. Institute of Acoustics, 2020. http://dx.doi.org/10.25144/13383.
BURTON, J., and AJ HILL. "USING COGNITIVE PSYCHOLOGY AND NEUROSCIENCE TO BETTER INFORM SOUND SYSTEM DESIGN AT LARGE MUSICAL EVENTS". In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14148.
Imoto, Keisuke, Noriyuki Tonami, Yuma Koizumi, Masahiro Yasuda, Ryosuke Yamanishi and Yoichi Yamashita. "Sound Event Detection by Multitask Learning of Sound Events and Scenes with Soft Scene Labels". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053912.
Maosheng Zhang, Ruimin Hu, Shihong Chen, Xiaochen Wang, Dengshi Li and Lin Jiang. "Spatial perception reproduction of sound events based on sound property coincidences". In 2015 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2015. http://dx.doi.org/10.1109/icme.2015.7177412.
Stanzial, Domenico, Giorgio Sacchi and Giuliano Schiffrer. "Active playback of acoustic quadraphonic sound events". In 155th Meeting Acoustical Society of America. ASA, 2008. http://dx.doi.org/10.1121/1.2992204.
Kumar, Anurag, Ankit Shah, Alexander Hauptmann and Bhiksha Raj. "Learning Sound Events from Webly Labeled Data". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/384.
Organizational reports on the topic "Sound events"
Wilson, D. K., V. A. Nguyen, Nassy Srour and John Noble. Sound Exposure Calculations for Transient Events and Other Improvements to an Acoustical Tactical Decision Aid. Fort Belvoir, VA: Defense Technical Information Center, August 2002. http://dx.doi.org/10.21236/ada406703.
Wilson, D., Vladimir Ostashev, Michael Shaw, Michael Muhlestein, John Weatherly, Michelle Swearingen and Sarah McComas. Infrasound propagation in the Arctic. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42683.
Albright, Jeff, Kim Struthers, Lisa Baril and Mark Brunson. Natural resource conditions at Valles Caldera National Preserve: Findings & management considerations for selected resources. National Park Service, June 2022. http://dx.doi.org/10.36967/nrr-2293731.
Yatsymirska, Mariya. MODERN MEDIA TEXT: POLITICAL NARRATIVES, MEANINGS AND SENSES, EMOTIONAL MARKERS. Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11411.
Masiero, Bruno, Marcio Henrique de Avelar and William D'Andrea Fonseca. International Year of Sound 2020 & 2021: Ano Internacional do Som prorrogado até 2021. William D'Andrea Fonseca, July 2020. http://dx.doi.org/10.55753/aev.v35e52.15.
Michalski, Ranny L. X. N., Bruno Masiero, William D'Andrea Fonseca and Márcio Avelar. Fim do Ano Internacional do Som: Fechamento do Ano Internacional do Som 2020 & 2021. Revista Acústica e Vibrações, December 2021. http://dx.doi.org/10.55753/aev.v36e53.60.
Law, Edward, Samuel Gan-Mor, Hazel Wetzstein and Dan Eisikowitch. Electrostatic Processes Underlying Natural and Mechanized Transfer of Pollen. United States Department of Agriculture, May 1998. http://dx.doi.org/10.32747/1998.7613035.bard.
Michelmore, Richard, Eviatar Nevo, Abraham Korol and Tzion Fahima. Genetic Diversity at Resistance Gene Clusters in Wild Populations of Lactuca. United States Department of Agriculture, February 2000. http://dx.doi.org/10.32747/2000.7573075.bard.
Hayes. L51633 Investigation of WIC Test Variables. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 1991. http://dx.doi.org/10.55274/r0010107.
Kanninen, M. F. L51718 Development and Validation of a Ductile Fracture Analysis Model. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 1994. http://dx.doi.org/10.55274/r0010321.