Academic literature on the topic 'Sound events'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles.
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sound events.'
Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Sound events"
Elizalde, Benjamin. "Categorization of sound events for automatic sound event classification." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A364. http://dx.doi.org/10.1121/10.0019175.
Aura, Karine, Guillaume Lemaitre, and Patrick Susini. "Verbal imitations of sound events enable recognition of the imitated sound events." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3414. http://dx.doi.org/10.1121/1.2934144.
Nishida, Tsuruyo, Kazuhiko Kakehi, and Takamasa Kyutoku. "Motion perception of the target sound event under the discriminated two sound events." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3080. http://dx.doi.org/10.1121/1.4787419.
Nakayama, Tsumugi, Taisuke Naito, Shunsuke Kouda, and Takatoshi Yokota. "Determining disturbance sounds in aircraft sound events using a CNN-based method." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 7 (November 30, 2023): 1320–28. http://dx.doi.org/10.3397/in_2023_0196.
Hara, Sunao, and Masanobu Abe. "Predictions for sound events and soundscape impressions from environmental sound using deep neural networks." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 3 (November 30, 2023): 5239–50. http://dx.doi.org/10.3397/in_2023_0739.
Maruyama, Hironori, Kosuke Okada, and Isamu Motoyoshi. "A two-stage spectral model for sound texture perception: Synthesis and psychophysics." i-Perception 14, no. 1 (January 2023): 204166952311573. http://dx.doi.org/10.1177/20416695231157349.
Domazetovska, Simona, Viktor Gavriloski, Maja Anachkova, and Zlatko Petreski. "Urban sound recognition using different feature extraction techniques." Facta Universitatis, Series: Automatic Control and Robotics 20, no. 3 (December 18, 2021): 155. http://dx.doi.org/10.22190/fuacr211015012d.
Martinek, Jozef, P. Klco, M. Vrabec, T. Zatko, M. Tatar, and M. Javorka. "Cough Sound Analysis." Acta Medica Martiniana 13, Supplement-1 (March 1, 2013): 15–20. http://dx.doi.org/10.2478/acm-2013-0002.
Heck, Jonas, Josep Llorca-Bofí, Christian Dreier, and Michael Vorlaender. "Validation of auralized impulse responses considering masking, loudness and background noise." Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A178. http://dx.doi.org/10.1121/10.0027231.
Kim, Yunbin, Jaewon Sa, Yongwha Chung, Daihee Park, and Sungju Lee. "Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data." Sensors 18, no. 11 (November 18, 2018): 4019. http://dx.doi.org/10.3390/s18114019.
Dissertations / Theses on the topic "Sound events"
Hay, Timothy Deane. "MAX-DOAS measurements of bromine explosion events in McMurdo Sound, Antarctica." Thesis, University of Canterbury. Physics and Astronomy, 2010. http://hdl.handle.net/10092/5394.
Giannoulis, Dimitrios. "Recognition of sound sources and acoustic events in music and environmental audio." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/9130.
Papetti, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools." Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.
Full textThe work presented in this thesis ranges over a variety of research topics, spacing from human-computer interaction to physical-modeling. What combines such broad areas of interest is the idea of using physically-based computer simulations of acoustic phenomena in order to provide human-computer interfaces with sound feedback which is consistent with the user interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the name of -- to cite a few -- auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, the physical modeling of everyday sounds is taken into account, that is sounds not belonging to the families of speech and musical sounds.
Olvera, Zambrano Mauricio Michel. "Robust sound event detection." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0324.
From industry to general interest applications, computational analysis of sound scenes and events allows us to interpret the continuous flow of everyday sounds. One of the main degradations encountered when moving from lab conditions to the real world is due to the fact that sound scenes are not composed of isolated events but of multiple simultaneous events. Differences between training and test conditions also often arise due to extrinsic factors such as the choice of recording hardware and microphone positions, as well as intrinsic factors of sound events, such as their frequency of occurrence, duration and variability. In this thesis, we investigate problems of practical interest for audio analysis tasks to achieve robustness in real scenarios. Firstly, we explore the separation of ambient sounds in a practical scenario in which multiple short-duration sound events with fast-varying spectral characteristics (i.e., foreground sounds) occur simultaneously with stationary background sounds. We introduce the foreground-background ambient sound separation task and investigate whether a deep neural network with auxiliary information about the statistics of the background sound can differentiate between rapidly and slowly varying spectro-temporal characteristics. Moreover, we explore the use of per-channel energy normalization (PCEN) as a suitable pre-processing step and the ability of the separation model to generalize to unseen sound classes. Results on mixtures of isolated sounds from the DESED and AudioSet datasets demonstrate the generalization capability of the proposed separation system, which is mainly due to PCEN. Secondly, we investigate how to improve the robustness of audio analysis systems under mismatched training and test conditions.
We explore two distinct tasks: acoustic scene classification (ASC) with mismatched recording devices and training of sound event detection (SED) systems with synthetic and real data. In the context of ASC, without assuming the availability of recordings captured simultaneously by mismatched training and test recording devices, we assess the impact of moment normalization and matching strategies and their integration with unsupervised adversarial domain adaptation. Our results show the benefits and limitations of these adaptation strategies applied at different stages of the classification pipeline. The best strategy matches source domain performance in the target domain. In the context of SED, we propose a PCEN-based acoustic front-end with learned parameters. Then, we study the joint training of SED with auxiliary classification branches that categorize sounds as foreground or background according to their spectral properties. We also assess the impact of aligning the distributions of synthetic and real data at the frame or segment level based on optimal transport. Finally, we integrate an active learning strategy into the adaptation procedure. Results on the DESED dataset indicate that these methods are beneficial for the SED task and that their combination further improves performance on real sound scenes.
Beeferman, Leah. "JOURNEYS INTO THE UNKNOWN: A SERIES OF SCIENCE ARCHITECTURE TASKS AND EVENTS, SPACE-BOUND EXPLORATIONS AND FAR-TRAVELS, DISCOVERIES AND MISSES (NEAR AND FAR), IMAGINATIVE SPACE-GAZING AND RELATED INVESTIGATIONS, OBSERVATIONS, ORBITS, AND OTHER REPETITIOUS MONITORING TASKS." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2164.
Vesperini, Fabio. "Deep Learning for Sound Event Detection and Classification." Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263536.
Recent progress in acoustic signal processing and machine learning techniques has enabled the development of innovative technologies for automatic analysis of sound events. In particular, one of the most popular approaches to this problem nowadays relies on the exploitation of deep learning techniques. As further proof, on several occasions neural architectures originally designed for other multimedia domains have been successfully applied to processing the audio signal. Indeed, although these tasks were long addressed by statistical modelling algorithms such as Gaussian Mixture Models, Hidden Markov Models or Support Vector Machines, the breakthrough of machine learning for audio processing has led to encouraging results in the addressed tasks. Hence, this thesis reports an up-to-date state of the art and proposes several reliable DNN-based methods for Sound Event Detection (SED) and Sound Event Classification (SEC), with an overview of the Deep Neural Network (DNN) architectures used for this purpose and of the evaluation procedures and metrics used in this research field. In line with the recent trend, which shows an extensive employment of Convolutional Neural Networks (CNNs) for both SED and SEC tasks, this work also reports rather new approaches based on the Siamese DNN architecture or the novel Capsule computational units. Most of the reported systems have been designed on the occasion of international challenges. This allowed access to public datasets and made it possible to compare systems proposed by the most competitive research teams on a common basis. The case studies reported in this dissertation refer to applications in a variety of scenarios, ranging from unobtrusive health monitoring and audio-based surveillance to bio-acoustic monitoring and classification of road surface conditions. These tasks face numerous challenges, particularly related to their application in real-life environments.
Among these issues are dataset imbalance, different acquisition setups, acoustic disturbances (i.e., background noise, reverberation and cross-talk) and polyphony. In particular, since multiple events are very likely to overlap in real-life audio, two algorithms for polyphonic SED are reported in this thesis. A polyphonic SED algorithm can be considered as a system able to simultaneously perform detection - determining the onset and offset times of the sound events - and classification - assigning a label to each of the events occurring in the audio stream.
Jackson, Asti Joy. "Structure of Sound." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73778.
Master of Architecture
Fonseca, Eduardo. "Training sound event classifiers using different types of supervision." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/673067.
Interest in automatic recognition of sound events has increased in recent years, motivated by new applications in fields such as healthcare, smart homes, or urban planning. At the start of this thesis, research on sound event classification focused mainly on supervised learning using small datasets, often carefully annotated with vocabularies limited to specific domains (such as urban or domestic). However, such datasets do not allow the training of classifiers capable of recognizing the hundreds of sound events that occur in our environment, such as kettle whistles, bird sounds, passing cars, or different alarms. At the same time, websites such as Freesound or YouTube host large amounts of environmental sound data, which can be useful for training classifiers with a more extensive vocabulary, particularly using deep learning methods that require large amounts of data. To advance the state of the art in sound event classification, this thesis investigates several aspects of dataset creation, as well as supervised and unsupervised learning, to train sound event classifiers with an extensive vocabulary, using different types of supervision in novel and alternative ways. Specifically, we focus on supervised learning using clean and noisy labels, as well as self-supervised representation learning from unlabeled data. The first part of this thesis focuses on the creation of FSD50K, a dataset with over 100 hours of audio manually labeled with 200 sound event classes. We present a detailed description of the creation process and an exhaustive characterization of the dataset.
In addition, we explore architectural modifications to increase shift invariance in CNNs, improving robustness to time/frequency shifts in input spectrograms. In the second part, we focus on training sound event classifiers with noisy labels. First, we propose a dataset that supports research on real label noise. Then, we explore network-architecture-agnostic methods to mitigate the effect of label noise during training, including regularization techniques, noise-robust loss functions, and strategies to reject noisily labeled examples. In addition, we develop a teacher-student method to address the problem of missing labels in sound event datasets. In the third part, we propose algorithms to learn audio representations from unlabeled data. In particular, we develop self-supervised contrastive learning methods, where representations are learned by comparing pairs of examples computed via data augmentation and automatic sound separation. Finally, we report on the organization of two DCASE Challenge Tasks on automatic audio tagging with noisy labels. By proposing datasets as well as state-of-the-art methods and audio representations, this thesis contributes to advancing open research on sound events and to the transition from traditional supervised learning with clean labels to other learning strategies less dependent on costly annotation efforts.
Pahar, Madhurananda. "A novel sound reconstruction technique based on a spike code (event) representation." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/23025.
Full textLabbé, Etienne. "Description automatique des événements sonores par des méthodes d'apprentissage profond." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES054.
In the audio research field, the majority of machine learning systems focus on recognizing a limited number of sound events. However, when a machine interacts with real data, it must be able to handle much more varied and complex situations. To tackle this problem, annotators use natural language, which allows any sound information to be summarized. Automated Audio Captioning (AAC) was introduced recently to develop systems capable of automatically producing a description of any type of sound in text form. This task concerns all kinds of sound events such as environmental, urban and domestic sounds, sound effects, music or speech. This type of system could be used by people who are deaf or hard of hearing, and could improve the indexing of large audio databases. In the first part of this thesis, we present the state of the art of the AAC task through a global description of public datasets, learning methods, architectures and evaluation metrics. Using this knowledge, we then present the architecture of our first AAC system, which obtains encouraging scores on the main AAC metric, named SPIDEr: 24.7% on the Clotho corpus and 40.1% on the AudioCaps corpus. We then explore many aspects of AAC systems in the second part. We first focus on evaluation methods through the study of SPIDEr. For this, we propose a variant called SPIDEr-max, which considers several candidates for each audio file, and which shows that the SPIDEr metric is very sensitive to the predicted words. Then, we improve our reference system by exploring different architectures and numerous hyper-parameters to exceed the state of the art on AudioCaps (SPIDEr of 49.5%). Next, we explore a multi-task learning method aimed at improving the semantics of the sentences generated by our system. Finally, we build a general and unbiased AAC system called CONETTE, which can generate different types of descriptions that approximate those of the target datasets.
In the third and last part, we study the capabilities of an AAC system to automatically search for audio content in a database. Our approach obtains scores competitive with systems dedicated to this task, while using fewer parameters. We also introduce semi-supervised methods to improve our system using new unlabeled audio data, and we show how pseudo-label generation can impact an AAC model. Finally, we study AAC systems in languages other than English: French, Spanish and German. In addition, we propose a system capable of producing all four languages at the same time, and we compare it with systems specialized in each language.
Books on the topic "Sound events"
Virtanen, Tuomas, Mark D. Plumbley, and Dan Ellis, eds. Computational Analysis of Sound Scenes and Events. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63450-0.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2004.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2005.
Thomas, Jeremy. Taking leave. London: Timewell, 2006.
Radio New Zealand. Catalogue of Radio New Zealand recordings of Maori events, 1938-1950: RNZ 1-60. Auckland: Archive of Maori and Pacific Music, Anthropology Dept., University of Auckland, 1991.
British Broadcasting Corporation. Equestrian events. Princeton, N.J.: Films for the Humanities & Sciences, 1991.
Taylor, Fred. What, and Give Up Showbiz?: Six Decades in the Music Business. Blue Ridge Summit: Backbeat, 2020.
Cai, Wenyi. Yi tian 10 fen zhong, ying zhan xin wen Ying wen: Yue du, ting li, yu hui neng li yi ci yang cheng! 8th ed. Taibei Shi: Kai xin qi ye guan li gu wen you xian gong si, 2015.
Marchetta, Vittorio. Passaggi di sound design: Riflessioni, competenze, oggetti-eventi. Milano: F. Angeli, 2010.
Basile, Giuseppe. '80, new sound, new wave: Vita, musica ed eventi nella provincia italiana degli anni '80. Taranto: Geophonìe, 2007.
Find full textBook chapters on the topic "Sound events"
Toole, Floyd E. "Above the Transition Frequency: Acoustical Events and Perceptions." In Sound Reproduction, 3rd ed., 157–214. New York: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-7.
Toole, Floyd E. "Below the Transition Frequency: Acoustical Events and Perceptions." In Sound Reproduction, 3rd ed., 215–62. New York: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-8.
Guastavino, Catherine. "Everyday Sound Categorization." In Computational Analysis of Sound Scenes and Events, 183–213. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_7.
Font, Frederic, Gerard Roma, and Xavier Serra. "Sound Sharing and Retrieval." In Computational Analysis of Sound Scenes and Events, 279–301. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_10.
Stein, Peter J. "Observation of the Sound Radiated by Individual Ice Fracturing Events." In Sea Surface Sound, 533–44. Dordrecht: Springer Netherlands, 1988. http://dx.doi.org/10.1007/978-94-009-3017-9_38.
Bello, Juan Pablo, Charlie Mydlarz, and Justin Salamon. "Sound Analysis in Smart Cities." In Computational Analysis of Sound Scenes and Events, 373–97. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_13.
Loar, Josh. "Conventions and Other Multi-room Live Events." In The Sound System Design Primer, 417–21. New York: Routledge, 2019. http://dx.doi.org/10.4324/9781315196817-44.
Serizel, Romain, Victor Bisot, Slim Essid, and Gaël Richard. "Acoustic Features for Environmental Sound Analysis." In Computational Analysis of Sound Scenes and Events, 71–101. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_4.
Benetos, Emmanouil, Dan Stowell, and Mark D. Plumbley. "Approaches to Complex Sound Scene Analysis." In Computational Analysis of Sound Scenes and Events, 215–42. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_8.
Theodorou, Theodoros, Iosif Mporas, and Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events." In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.
Full textConference papers on the topic "Sound events"
Hill, A. J., J. Mulder, J. Burton, M. Kok, and M. Lawrence. "A critical analysis of sound level monitoring methods at live events." In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14142.
Hourani, C., and A. J. Hill. "Towards a subjective quantification of noise annoyance due to outdoor events." In Reproduced Sound 2023. Institute of Acoustics, 2023. http://dx.doi.org/10.25144/16927.
Miyazaki, Koichi, Tomoki Hayashi, Tomoki Toda, and Kazuya Takeda. "Connectionist Temporal Classification-based Sound Event Encoder for Converting Sound Events into Onomatopoeic Representations." In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553374.
Wheeler, P., D. Sharp, and S. Taherzadeh. "An evaluation of UK and international guidance for the control of noise at outdoor events." In Reproduced Sound 2020. Institute of Acoustics, 2020. http://dx.doi.org/10.25144/13383.
Burton, J., and A. J. Hill. "Using cognitive psychology and neuroscience to better inform sound system design at large musical events." In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14148.
Imoto, Keisuke, Noriyuki Tonami, Yuma Koizumi, Masahiro Yasuda, Ryosuke Yamanishi, and Yoichi Yamashita. "Sound Event Detection by Multitask Learning of Sound Events and Scenes with Soft Scene Labels." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053912.
Zhang, Maosheng, Ruimin Hu, Shihong Chen, Xiaochen Wang, Dengshi Li, and Lin Jiang. "Spatial perception reproduction of sound events based on sound property coincidences." In 2015 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2015. http://dx.doi.org/10.1109/icme.2015.7177412.
Stanzial, Domenico, Giorgio Sacchi, and Giuliano Schiffrer. "Active playback of acoustic quadraphonic sound events." In 155th Meeting Acoustical Society of America. ASA, 2008. http://dx.doi.org/10.1121/1.2992204.
Kumar, Anurag, Ankit Shah, Alexander Hauptmann, and Bhiksha Raj. "Learning Sound Events from Webly Labeled Data." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/384.
Full textReports on the topic "Sound events"
Wilson, D. K., V. A. Nguyen, Nassy Srour, and John Noble. Sound Exposure Calculations for Transient Events and Other Improvements to an Acoustical Tactical Decision Aid. Fort Belvoir, VA: Defense Technical Information Center, August 2002. http://dx.doi.org/10.21236/ada406703.
Wilson, D., Vladimir Ostashev, Michael Shaw, Michael Muhlestein, John Weatherly, Michelle Swearingen, and Sarah McComas. Infrasound propagation in the Arctic. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42683.
Albright, Jeff, Kim Struthers, Lisa Baril, and Mark Brunson. Natural resource conditions at Valles Caldera National Preserve: Findings & management considerations for selected resources. National Park Service, June 2022. http://dx.doi.org/10.36967/nrr-2293731.
Yatsymirska, Mariya. Modern media text: Political narratives, meanings and senses, emotional markers. Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11411.
Masiero, Bruno, Marcio Henrique de Avelar, and William D'Andrea Fonseca. International Year of Sound 2020 & 2021: Ano Internacional do Som prorrogado até 2021. William D'Andrea Fonseca, July 2020. http://dx.doi.org/10.55753/aev.v35e52.15.
Michalski, Ranny L. X. N., Bruno Masiero, William D'Andrea Fonseca, and Márcio Avelar. Fim do Ano Internacional do Som: Fechamento do Ano Internacional do Som 2020 & 2021. Revista Acústica e Vibrações, December 2021. http://dx.doi.org/10.55753/aev.v36e53.60.
Law, Edward, Samuel Gan-Mor, Hazel Wetzstein, and Dan Eisikowitch. Electrostatic Processes Underlying Natural and Mechanized Transfer of Pollen. United States Department of Agriculture, May 1998. http://dx.doi.org/10.32747/1998.7613035.bard.
Michelmore, Richard, Eviatar Nevo, Abraham Korol, and Tzion Fahima. Genetic Diversity at Resistance Gene Clusters in Wild Populations of Lactuca. United States Department of Agriculture, February 2000. http://dx.doi.org/10.32747/2000.7573075.bard.
Hayes. L51633 Investigation of WIC Test Variables. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 1991. http://dx.doi.org/10.55274/r0010107.
Kanninen, M. F. L51718 Development and Validation of a Ductile Fracture Analysis Model. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 1994. http://dx.doi.org/10.55274/r0010321.