Scientific literature on the topic "Sound events"
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Contents
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic "Sound events."
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is included in the metadata.
Journal articles on the topic "Sound events"
Elizalde, Benjamin. "Categorization of sound events for automatic sound event classification." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A364. http://dx.doi.org/10.1121/10.0019175.
Aura, Karine, Guillaume Lemaitre, and Patrick Susini. "Verbal imitations of sound events enable recognition of the imitated sound events." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3414. http://dx.doi.org/10.1121/1.2934144.
Nishida, Tsuruyo, Kazuhiko Kakehi, and Takamasa Kyutoku. "Motion perception of the target sound event under the discriminated two sound events." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3080. http://dx.doi.org/10.1121/1.4787419.
Nakayama, Tsumugi, Taisuke Naito, Shunsuke Kouda, and Takatoshi Yokota. "Determining disturbance sounds in aircraft sound events using a CNN-based method." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 7 (November 30, 2023): 1320–28. http://dx.doi.org/10.3397/in_2023_0196.
Hara, Sunao, and Masanobu Abe. "Predictions for sound events and soundscape impressions from environmental sound using deep neural networks." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 3 (November 30, 2023): 5239–50. http://dx.doi.org/10.3397/in_2023_0739.
Maruyama, Hironori, Kosuke Okada, and Isamu Motoyoshi. "A two-stage spectral model for sound texture perception: Synthesis and psychophysics." i-Perception 14, no. 1 (January 2023): 204166952311573. http://dx.doi.org/10.1177/20416695231157349.
Domazetovska, Simona, Viktor Gavriloski, Maja Anachkova, and Zlatko Petreski. "URBAN SOUND RECOGNITION USING DIFFERENT FEATURE EXTRACTION TECHNIQUES." Facta Universitatis, Series: Automatic Control and Robotics 20, no. 3 (December 18, 2021): 155. http://dx.doi.org/10.22190/fuacr211015012d.
Martinek, Jozef, P. Klco, M. Vrabec, T. Zatko, M. Tatar, and M. Javorka. "Cough Sound Analysis." Acta Medica Martiniana 13, Supplement-1 (March 1, 2013): 15–20. http://dx.doi.org/10.2478/acm-2013-0002.
Heck, Jonas, Josep Llorca-Bofí, Christian Dreier, and Michael Vorlaender. "Validation of auralized impulse responses considering masking, loudness and background noise." Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A178. http://dx.doi.org/10.1121/10.0027231.
Kim, Yunbin, Jaewon Sa, Yongwha Chung, Daihee Park, and Sungju Lee. "Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data." Sensors 18, no. 11 (November 18, 2018): 4019. http://dx.doi.org/10.3390/s18114019.
Theses on the topic "Sound events"
Hay, Timothy Deane. "MAX-DOAS measurements of bromine explosion events in McMurdo Sound, Antarctica." Thesis, University of Canterbury. Physics and Astronomy, 2010. http://hdl.handle.net/10092/5394.
Giannoulis, Dimitrios. "Recognition of sound sources and acoustic events in music and environmental audio." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/9130.
PAPETTI, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools." Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.
Texte intégralThe work presented in this thesis ranges over a variety of research topics, spacing from human-computer interaction to physical-modeling. What combines such broad areas of interest is the idea of using physically-based computer simulations of acoustic phenomena in order to provide human-computer interfaces with sound feedback which is consistent with the user interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the name of -- to cite a few -- auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, the physical modeling of everyday sounds is taken into account, that is sounds not belonging to the families of speech and musical sounds.
Olvera, Zambrano Mauricio Michel. « Robust sound event detection ». Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0324.
Texte intégralFrom industry to general interest applications, computational analysis of sound scenes and events allows us to interpret the continuous flow of everyday sounds. One of the main degradations encountered when moving from lab conditions to the real world is due to the fact that sound scenes are not composed of isolated events but of multiple simultaneous events. Differences between training and test conditions also often arise due to extrinsic factors such as the choice of recording hardware and microphone positions, as well as intrinsic factors of sound events, such as their frequency of occurrence, duration and variability. In this thesis, we investigate problems of practical interest for audio analysis tasks to achieve robustness in real scenarios.Firstly, we explore the separation of ambient sounds in a practical scenario in which multiple short duration sound events with fast varying spectral characteristics (i.e., foreground sounds) occur simultaneously with background stationary sounds. We introduce the foreground-background ambient sound separation task and investigate whether a deep neural network with auxiliary information about the statistics of the background sound can differentiate between rapidly- and slowly-varying spectro-temporal characteristics. Moreover, we explore the use of per-channel energy normalization (PCEN) as a suitable pre-processing and the ability of the separation model to generalize to unseen sound classes. Results on mixtures of isolated sounds from the DESED and Audioset datasets demonstrate the generalization capability of the proposed separation system, which is mainly due to PCEN.Secondly, we investigate how to improve the robustness of audio analysis systems under mismatched training and test conditions. 
We explore two distinct tasks: acoustic scene classification (ASC) with mismatched recording devices and training of sound event detection (SED) systems with synthetic and real data.In the context of ASC, without assuming the availability of recordings captured simultaneously by mismatched training and test recording devices, we assess the impact of moment normalization and matching strategies and their integration with unsupervised adversarial domain adaptation. Our results show the benefits and limitations of these adaptation strategies applied at different stages of the classification pipeline. The best strategy matches source domain performance in the target domain.In the context of SED, we propose a PCEN based acoustic front-end with learned parameters. Then, we study the joint training of SED with auxiliary classification branches that categorize sounds as foreground or background according to their spectral properties. We also assess the impact of aligning the distributions of synthetic and real data at the frame or segment level based on optimal transport. Finally, we integrate an active learning strategy in the adaptation procedure. Results on the DESED dataset indicate that these methods are beneficial for the SED task and that their combination further improves performance on real sound scenes
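The abstract above singles out per-channel energy normalization (PCEN) as the pre-processing step that drives generalization. As a rough illustration of the idea (the parameter values and exact formulation below are generic defaults from the literature, not necessarily those used in the thesis), PCEN divides each band's energy by a smoothed version of its own recent history, then applies root compression:

```python
def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a (bands x frames) energy
    spectrogram, given as a list of per-band frame sequences.

    m is an IIR-smoothed version of each band; dividing by m**alpha acts
    as per-channel automatic gain control, and the (x + delta)**r root
    compresses the dynamic range.
    """
    out = []
    for band in E:
        m = band[0]  # initialize the smoother with the first frame
        row = []
        for e in band:
            m = (1.0 - s) * m + s * e
            row.append((e / (eps + m) ** alpha + delta) ** r - delta ** r)
        out.append(row)
    return out

# Toy input: 2 mel bands, 3 frames of non-negative energies.
E = [[1.0, 4.0, 0.0], [0.25, 0.25, 9.0]]
P = pcen(E)
```

Because the smoother adapts per band, slowly varying (background) energy is largely normalized away while fast-varying (foreground) onsets stand out, which matches the foreground/background separation setting described above.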
Beeferman, Leah. "JOURNEYS INTO THE UNKNOWN: A SERIES OF SCIENCE ARCHITECTURE TASKS AND EVENTS, SPACE-BOUND EXPLORATIONS AND FAR-TRAVELS, DISCOVERIES AND MISSES (NEAR AND FAR), IMAGINATIVE SPACE-GAZING AND RELATED INVESTIGATIONS, OBSERVATIONS, ORBITS, AND OTHER REPETITIOUS MONITORING TASKS." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2164.
VESPERINI, FABIO. "Deep Learning for Sound Event Detection and Classification." Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263536.
Recent progress in acoustic signal processing and machine learning has enabled the development of innovative technologies for the automatic analysis of sound events. In particular, one of the most popular current approaches to this problem relies on the exploitation of deep learning techniques. As further proof, on several occasions neural architectures originally designed for other multimedia domains have been successfully applied to processing the audio signal. Indeed, although these tasks were long addressed by statistical modelling algorithms such as Gaussian Mixture Models, Hidden Markov Models or Support Vector Machines, the breakthrough of machine learning for audio processing has led to encouraging results on the addressed tasks. Hence, this thesis reports an up-to-date state of the art and proposes several reliable DNN-based methods for Sound Event Detection (SED) and Sound Event Classification (SEC), with an overview of the Deep Neural Network (DNN) architectures used for these purposes and of the evaluation procedures and metrics used in this research field. In line with the recent trend, which shows an extensive employment of Convolutional Neural Networks (CNNs) for both SED and SEC tasks, this work also reports rather new approaches based on the Siamese DNN architecture or the novel Capsule computational units. Most of the reported systems were designed for international challenges. This allowed access to public datasets and comparison with the systems proposed by the most competitive research teams on a common basis. The case studies reported in this dissertation refer to applications in a variety of scenarios, ranging from unobtrusive health monitoring and audio-based surveillance to bio-acoustic monitoring and classification of road surface conditions. These tasks face numerous challenges, particularly related to their application in real-life environments.
Among these issues are unbalanced datasets, different acquisition setups, acoustic disturbances (i.e., background noise, reverberation and cross-talk) and polyphony. In particular, since multiple events are very likely to overlap in real-life audio, two algorithms for polyphonic SED are reported in this thesis. A polyphonic SED algorithm can be considered a system able to simultaneously perform detection (determining the onset and offset times of the sound events) and classification (assigning a label to each of the events occurring in the audio stream).
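The closing definition of polyphonic SED (joint onset/offset detection and labeling, with overlapping events allowed) can be illustrated with a minimal post-processing sketch, assuming frame-wise class probabilities are already available. The threshold, hop size and class names below are illustrative assumptions, not values taken from the thesis:

```python
def frames_to_events(frame_probs, labels, threshold=0.5, hop_s=0.02):
    """Turn frame-wise class probabilities (n_frames x n_classes) into
    (label, onset_s, offset_s) events. Because each class is tracked
    independently, events of different classes may overlap in time,
    which is what makes the output polyphonic."""
    events = []
    n_frames, n_classes = len(frame_probs), len(frame_probs[0])
    for c in range(n_classes):
        active, onset = False, 0
        for t in range(n_frames):
            is_on = frame_probs[t][c] >= threshold
            if is_on and not active:
                active, onset = True, t  # event onset for class c
            elif not is_on and active:
                events.append((labels[c], onset * hop_s, t * hop_s))
                active = False
        if active:  # event still running at the end of the clip
            events.append((labels[c], onset * hop_s, n_frames * hop_s))
    return events

# 3 frames, 2 classes; "speech" active on frames 0-1, "dog" on frames 1-2.
probs = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.8]]
events = frames_to_events(probs, ["speech", "dog"], hop_s=0.5)
```

Real systems typically add median filtering or minimum-duration constraints before this step, but the onset/offset bookkeeping is the same.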
Jackson, Asti Joy. "Structure of Sound." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73778.
Master of Architecture
Fonseca, Eduardo. "Training sound event classifiers using different types of supervision." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/673067.
Interest in the automatic recognition of sound events has increased in recent years, motivated by new applications in fields such as healthcare, smart homes, or urban planning. At the beginning of this thesis, research on sound event classification focused mainly on supervised learning using small datasets, often carefully annotated with vocabularies limited to specific domains (such as urban or domestic sounds). However, such datasets do not allow the training of classifiers able to recognize the hundreds of sound events occurring in our environment, such as kettle whistles, bird sounds, passing cars, or different alarms. At the same time, websites such as Freesound or YouTube host large amounts of environmental sound data, which can be useful for training classifiers with a more extensive vocabulary, particularly using deep learning methods that require large amounts of data. To advance the state of the art in sound event classification, this thesis investigates several aspects of dataset creation, as well as supervised and unsupervised learning, to train sound event classifiers with an extensive vocabulary, using different types of supervision in novel and alternative ways. Specifically, we focus on supervised learning using clean and noisy labels, as well as on self-supervised representation learning from unlabeled data. The first part of this thesis focuses on the creation of FSD50K, a dataset with more than 100 hours of audio manually labeled with 200 sound event classes. We present a detailed description of the creation process and an exhaustive characterization of the dataset.
In addition, we explore architectural modifications to increase shift invariance in CNNs, improving robustness to time/frequency shifts in the input spectrograms. In the second part, we focus on training sound event classifiers with noisy labels. First, we propose a dataset that enables the investigation of real label noise. Then, we explore network-agnostic methods to mitigate the effect of label noise during training, including regularization techniques, noise-robust loss functions, and strategies to reject noisily labeled examples. In addition, we develop a teacher-student method to address the problem of missing labels in sound event datasets. In the third part, we propose algorithms to learn audio representations from unlabeled data. In particular, we develop self-supervised contrastive learning methods, where representations are learned by comparing pairs of examples computed through data augmentation and automatic sound separation. Finally, we report on the organization of two DCASE Challenge tasks on automatic audio tagging from noisy labels. By proposing datasets, as well as state-of-the-art methods and audio representations, this thesis contributes to the advancement of open research on sound events and to the transition from traditional supervised learning with clean labels to other learning strategies less dependent on costly annotation efforts.
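Among the label-noise mitigation strategies mentioned above are noise-robust loss functions. One well-known member of that family, given here purely as an illustration (the abstract does not specify which losses the thesis actually uses), is the generalized cross-entropy, which interpolates between cross-entropy and a bounded, MAE-style loss:

```python
def gce_loss(p, q=0.7):
    """Generalized cross-entropy L_q(p) = (1 - p**q) / q, where p is the
    model's predicted probability for the labeled class. As q -> 0 this
    tends to the usual cross-entropy -log(p); at q = 1 it becomes the
    bounded loss 1 - p, so confidently mislabeled examples cannot
    contribute an unbounded gradient and dominate training."""
    return (1.0 - p ** q) / q

low = gce_loss(0.9)    # well-fit example: small loss
high = gce_loss(0.05)  # likely mislabeled example: loss saturates near 1/q
```

The intermediate exponent q trades noise robustness (small, bounded losses) against the strong fitting signal of cross-entropy.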
Pahar, Madhurananda. "A novel sound reconstruction technique based on a spike code (event) representation." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/23025.
Labbé, Etienne. "Description automatique des événements sonores par des méthodes d'apprentissage profond." Electronic thesis or dissertation, Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES054.
In the audio research field, the majority of machine learning systems focus on recognizing a limited number of sound events. However, when a machine interacts with real data, it must be able to handle much more varied and complex situations. To tackle this problem, annotators use natural language, which allows any sound information to be summarized. Automated Audio Captioning (AAC) was introduced recently to develop systems capable of automatically producing a description of any type of sound in text form. This task concerns all kinds of sound events, such as environmental, urban and domestic sounds, sound effects, music or speech. Such a system could be used by people who are deaf or hard of hearing, and could improve the indexing of large audio databases. In the first part of this thesis, we present the state of the art of the AAC task through a global description of public datasets, learning methods, architectures and evaluation metrics. Using this knowledge, we then present the architecture of our first AAC system, which obtains encouraging scores on the main AAC metric, named SPIDEr: 24.7% on the Clotho corpus and 40.1% on the AudioCaps corpus. In the second part, we explore many aspects of AAC systems. We first focus on evaluation methods through the study of SPIDEr. For this, we propose a variant called SPIDEr-max, which considers several candidates for each audio file, and which shows that the SPIDEr metric is very sensitive to the predicted words. Then, we improve our reference system by exploring different architectures and numerous hyper-parameters to exceed the state of the art on AudioCaps (SPIDEr of 49.5%). Next, we explore a multi-task learning method aimed at improving the semantics of the sentences generated by our system. Finally, we build a general and unbiased AAC system called CONETTE, which can generate different types of descriptions that approximate those of the target datasets.
In the third and last part, we propose to study the capabilities of an AAC system to automatically search for audio content in a database. Our approach obtains scores competitive with systems dedicated to this task, while using fewer parameters. We also introduce semi-supervised methods to improve our system using new unlabeled audio data, and we show how pseudo-label generation can impact an AAC model. Finally, we study AAC systems in languages other than English: French, Spanish and German. In addition, we propose a system capable of producing all four languages at the same time, and we compare it with systems specialized in each language.
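The SPIDEr-max variant described above reduces to a simple aggregation once per-candidate SPIDEr scores exist: each clip is scored by its best candidate caption rather than a single prediction. A sketch of that aggregation step (the scores below are made-up numbers; SPIDEr itself, the mean of SPICE and CIDEr scores, must come from an external implementation):

```python
def spider_max(candidate_scores):
    """SPIDEr-max aggregation: score each audio clip by its
    best-scoring candidate caption, then average over the corpus.

    candidate_scores[i] holds precomputed SPIDEr scores, one per
    candidate caption, for clip i.
    """
    best = [max(scores) for scores in candidate_scores]
    return sum(best) / len(best)

# Two clips with three and two candidate captions respectively.
corpus_score = spider_max([[0.21, 0.35, 0.30], [0.50, 0.48]])
```

Comparing this corpus score against the standard single-candidate SPIDEr reveals how much the metric penalizes near-synonymous word choices, which is the sensitivity the thesis reports.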
Books on the topic "Sound events"
Virtanen, Tuomas, Mark D. Plumbley, and Dan Ellis, eds. Computational Analysis of Sound Scenes and Events. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63450-0.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2004.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2005.
Thomas, Jeremy. Taking leave. London: Timewell, 2006.
Zealand, Radio New. Catalogue of Radio New Zealand recordings of Maori events, 1938-1950: RNZ 1-60. Auckland: Archive of Maori and Pacific Music, Anthropology Dept., University of Auckland, 1991.
Corporation, British Broadcasting. Equestrian events. Princeton, N.J.: Films for the Humanities & Sciences, 1991.
Taylor, Fred. What, and Give Up Showbiz?: Six Decades in the Music Business. Blue Ridge Summit: Backbeat, 2020.
Cai, Wenyi. Yi tian 10 fen zhong, ying zhan xin wen Ying wen: Yue du, ting li, yu hui neng li yi ci yang cheng! 8th ed. Taibei Shi: Kai xin qi ye guan li gu wen you xian gong si, 2015.
Marchetta, Vittorio. Passaggi di sound design: Riflessioni, competenze, oggetti-eventi. Milano: F. Angeli, 2010.
Basile, Giuseppe. '80, new sound, new wave: Vita, musica ed eventi nella provincia italiana degli anni '80. Taranto: Geophonìe, 2007.
Book chapters on the topic "Sound events"
Toole, Floyd E. "Above the Transition Frequency: Acoustical Events and Perceptions." In Sound Reproduction, 157–214. Third edition. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-7.
Toole, Floyd E. "Below the Transition Frequency: Acoustical Events and Perceptions." In Sound Reproduction, 215–62. Third edition. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-8.
Guastavino, Catherine. "Everyday Sound Categorization." In Computational Analysis of Sound Scenes and Events, 183–213. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_7.
Font, Frederic, Gerard Roma, and Xavier Serra. "Sound Sharing and Retrieval." In Computational Analysis of Sound Scenes and Events, 279–301. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_10.
Stein, Peter J. "Observation of the Sound Radiated by Individual Ice Fracturing Events." In Sea Surface Sound, 533–44. Dordrecht: Springer Netherlands, 1988. http://dx.doi.org/10.1007/978-94-009-3017-9_38.
Bello, Juan Pablo, Charlie Mydlarz, and Justin Salamon. "Sound Analysis in Smart Cities." In Computational Analysis of Sound Scenes and Events, 373–97. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_13.
Loar, Josh. "Conventions and Other Multi-room Live Events." In The Sound System Design Primer, 417–21. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781315196817-44.
Serizel, Romain, Victor Bisot, Slim Essid, and Gaël Richard. "Acoustic Features for Environmental Sound Analysis." In Computational Analysis of Sound Scenes and Events, 71–101. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_4.
Benetos, Emmanouil, Dan Stowell, and Mark D. Plumbley. "Approaches to Complex Sound Scene Analysis." In Computational Analysis of Sound Scenes and Events, 215–42. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_8.
Theodorou, Theodoros, Iosif Mporas, and Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events." In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.
Conference papers on the topic "Sound events"
HILL, AJ, J. MULDER, J. BURTON, M. KOK, and M. LAWRENCE. "A CRITICAL ANALYSIS OF SOUND LEVEL MONITORING METHODS AT LIVE EVENTS." In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14142.
HOURANI, C., and AJ HILL. "TOWARDS A SUBJECTIVE QUANTIFICATION OF NOISE ANNOYANCE DUE TO OUTDOOR EVENTS." In Reproduced Sound 2023. Institute of Acoustics, 2023. http://dx.doi.org/10.25144/16927.
Miyazaki, Koichi, Tomoki Hayashi, Tomoki Toda, and Kazuya Takeda. "Connectionist Temporal Classification-based Sound Event Encoder for Converting Sound Events into Onomatopoeic Representations." In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553374.
Wheeler, P., D. Sharp, and S. Taherzadeh. "AN EVALUATION OF UK AND INTERNATIONAL GUIDANCE FOR THE CONTROL OF NOISE AT OUTDOOR EVENTS." In REPRODUCED SOUND 2020. Institute of Acoustics, 2020. http://dx.doi.org/10.25144/13383.
BURTON, J., and AJ HILL. "USING COGNITIVE PSYCHOLOGY AND NEUROSCIENCE TO BETTER INFORM SOUND SYSTEM DESIGN AT LARGE MUSICAL EVENTS." In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14148.
Imoto, Keisuke, Noriyuki Tonami, Yuma Koizumi, Masahiro Yasuda, Ryosuke Yamanishi, and Yoichi Yamashita. "Sound Event Detection by Multitask Learning of Sound Events and Scenes with Soft Scene Labels." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053912.
Zhang, Maosheng, Ruimin Hu, Shihong Chen, Xiaochen Wang, Dengshi Li, and Lin Jiang. "Spatial perception reproduction of sound events based on sound property coincidences." In 2015 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2015. http://dx.doi.org/10.1109/icme.2015.7177412.
Stanzial, Domenico, Giorgio Sacchi, and Giuliano Schiffrer. "Active playback of acoustic quadraphonic sound events." In 155th Meeting Acoustical Society of America. ASA, 2008. http://dx.doi.org/10.1121/1.2992204.
Kumar, Anurag, Ankit Shah, Alexander Hauptmann, and Bhiksha Raj. "Learning Sound Events from Webly Labeled Data." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/384.
Reports on the topic "Sound events"
Wilson, D. K., V. A. Nguyen, Nassy Srour, and John Noble. Sound Exposure Calculations for Transient Events and Other Improvements to an Acoustical Tactical Decision Aid. Fort Belvoir, VA: Defense Technical Information Center, August 2002. http://dx.doi.org/10.21236/ada406703.
Wilson, D., Vladimir Ostashev, Michael Shaw, Michael Muhlestein, John Weatherly, Michelle Swearingen, and Sarah McComas. Infrasound propagation in the Arctic. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42683.
Albright, Jeff, Kim Struthers, Lisa Baril, and Mark Brunson. Natural resource conditions at Valles Caldera National Preserve: Findings & management considerations for selected resources. National Park Service, June 2022. http://dx.doi.org/10.36967/nrr-2293731.
Yatsymirska, Mariya. MODERN MEDIA TEXT: POLITICAL NARRATIVES, MEANINGS AND SENSES, EMOTIONAL MARKERS. Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11411.
Masiero, Bruno, Marcio Henrique de Avelar, and William D'Andrea Fonseca. International Year of Sound 2020 & 2021: Ano Internacional do Som prorrogado até 2021. William D'Andrea Fonseca, July 2020. http://dx.doi.org/10.55753/aev.v35e52.15.
Michalski, Ranny L. X. N., Bruno Masiero, William D'Andrea Fonseca, and Márcio Avelar. Fim do Ano Internacional do Som: Fechamento do Ano Internacional do Som 2020 & 2021. Revista Acústica e Vibrações, December 2021. http://dx.doi.org/10.55753/aev.v36e53.60.
Law, Edward, Samuel Gan-Mor, Hazel Wetzstein, and Dan Eisikowitch. Electrostatic Processes Underlying Natural and Mechanized Transfer of Pollen. United States Department of Agriculture, May 1998. http://dx.doi.org/10.32747/1998.7613035.bard.
Michelmore, Richard, Eviatar Nevo, Abraham Korol, and Tzion Fahima. Genetic Diversity at Resistance Gene Clusters in Wild Populations of Lactuca. United States Department of Agriculture, February 2000. http://dx.doi.org/10.32747/2000.7573075.bard.
Hayes. L51633 Investigation of WIC Test Variables. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 1991. http://dx.doi.org/10.55274/r0010107.
Kanninen, M. F. L51718 Development and Validation of a Ductile Fracture Analysis Model. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 1994. http://dx.doi.org/10.55274/r0010321.