Academic literature on the topic "Sound events"
Create a spot-on citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Sound events".
Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a pdf and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Sound events"
Elizalde, Benjamin. "Categorization of sound events for automatic sound event classification". Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A364. http://dx.doi.org/10.1121/10.0019175.
Aura, Karine, Guillaume Lemaitre, and Patrick Susini. "Verbal imitations of sound events enable recognition of the imitated sound events". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3414. http://dx.doi.org/10.1121/1.2934144.
Nishida, Tsuruyo, Kazuhiko Kakehi, and Takamasa Kyutoku. "Motion perception of the target sound event under the discriminated two sound events". Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3080. http://dx.doi.org/10.1121/1.4787419.
Nakayama, Tsumugi, Taisuke Naito, Shunsuke Kouda, and Takatoshi Yokota. "Determining disturbance sounds in aircraft sound events using a CNN-based method". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 7 (November 30, 2023): 1320–28. http://dx.doi.org/10.3397/in_2023_0196.
Hara, Sunao, and Masanobu Abe. "Predictions for sound events and soundscape impressions from environmental sound using deep neural networks". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 3 (November 30, 2023): 5239–50. http://dx.doi.org/10.3397/in_2023_0739.
Maruyama, Hironori, Kosuke Okada, and Isamu Motoyoshi. "A two-stage spectral model for sound texture perception: Synthesis and psychophysics". i-Perception 14, no. 1 (January 2023): 204166952311573. http://dx.doi.org/10.1177/20416695231157349.
Domazetovska, Simona, Viktor Gavriloski, Maja Anachkova, and Zlatko Petreski. "URBAN SOUND RECOGNITION USING DIFFERENT FEATURE EXTRACTION TECHNIQUES". Facta Universitatis, Series: Automatic Control and Robotics 20, no. 3 (December 18, 2021): 155. http://dx.doi.org/10.22190/fuacr211015012d.
Martinek, Jozef, P. Klco, M. Vrabec, T. Zatko, M. Tatar, and M. Javorka. "Cough Sound Analysis". Acta Medica Martiniana 13, Supplement-1 (March 1, 2013): 15–20. http://dx.doi.org/10.2478/acm-2013-0002.
Heck, Jonas, Josep Llorca-Bofí, Christian Dreier, and Michael Vorlaender. "Validation of auralized impulse responses considering masking, loudness and background noise". Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A178. http://dx.doi.org/10.1121/10.0027231.
Kim, Yunbin, Jaewon Sa, Yongwha Chung, Daihee Park, and Sungju Lee. "Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data". Sensors 18, no. 11 (November 18, 2018): 4019. http://dx.doi.org/10.3390/s18114019.
Theses on the topic "Sound events"
Hay, Timothy Deane. "MAX-DOAS measurements of bromine explosion events in McMurdo Sound, Antarctica". Thesis, University of Canterbury. Physics and Astronomy, 2010. http://hdl.handle.net/10092/5394.
Giannoulis, Dimitrios. "Recognition of sound sources and acoustic events in music and environmental audio". Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/9130.
Texto completoPAPETTI, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools". Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.
Texto completoThe work presented in this thesis ranges over a variety of research topics, spacing from human-computer interaction to physical-modeling. What combines such broad areas of interest is the idea of using physically-based computer simulations of acoustic phenomena in order to provide human-computer interfaces with sound feedback which is consistent with the user interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the name of -- to cite a few -- auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, the physical modeling of everyday sounds is taken into account, that is sounds not belonging to the families of speech and musical sounds.
Olvera, Zambrano Mauricio Michel. "Robust sound event detection". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0324.
From industry to general-interest applications, computational analysis of sound scenes and events allows us to interpret the continuous flow of everyday sounds. One of the main degradations encountered when moving from lab conditions to the real world is due to the fact that sound scenes are not composed of isolated events but of multiple simultaneous events. Differences between training and test conditions also often arise due to extrinsic factors, such as the choice of recording hardware and microphone positions, as well as intrinsic factors of sound events, such as their frequency of occurrence, duration, and variability. In this thesis, we investigate problems of practical interest for audio analysis tasks to achieve robustness in real scenarios.
Firstly, we explore the separation of ambient sounds in a practical scenario in which multiple short-duration sound events with fast-varying spectral characteristics (i.e., foreground sounds) occur simultaneously with stationary background sounds. We introduce the foreground-background ambient sound separation task and investigate whether a deep neural network with auxiliary information about the statistics of the background sound can differentiate between rapidly and slowly varying spectro-temporal characteristics. Moreover, we explore the use of per-channel energy normalization (PCEN) as a suitable pre-processing step and the ability of the separation model to generalize to unseen sound classes. Results on mixtures of isolated sounds from the DESED and AudioSet datasets demonstrate the generalization capability of the proposed separation system, which is mainly due to PCEN.
Secondly, we investigate how to improve the robustness of audio analysis systems under mismatched training and test conditions. We explore two distinct tasks: acoustic scene classification (ASC) with mismatched recording devices and training of sound event detection (SED) systems with synthetic and real data.
In the context of ASC, without assuming the availability of recordings captured simultaneously by mismatched training and test recording devices, we assess the impact of moment normalization and matching strategies and their integration with unsupervised adversarial domain adaptation. Our results show the benefits and limitations of these adaptation strategies applied at different stages of the classification pipeline. The best strategy matches source-domain performance in the target domain.
In the context of SED, we propose a PCEN-based acoustic front-end with learned parameters. Then, we study the joint training of SED with auxiliary classification branches that categorize sounds as foreground or background according to their spectral properties. We also assess the impact of aligning the distributions of synthetic and real data at the frame or segment level based on optimal transport. Finally, we integrate an active learning strategy into the adaptation procedure. Results on the DESED dataset indicate that these methods are beneficial for the SED task and that their combination further improves performance on real sound scenes.
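The per-channel energy normalization (PCEN) front-end this abstract relies on can be sketched in a few lines. The recursion below follows the standard PCEN formulation (smoothed-energy tracking, adaptive gain control, root compression); the parameter values are illustrative assumptions, not necessarily those used in the thesis.

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a (bands, frames) energy matrix.

    A per-band smoothed energy M is tracked with a first-order IIR filter,
    then used as an adaptive gain before root compression. Stationary
    background energy is normalized away, while sudden onsets stand out.
    """
    M = np.empty_like(E, dtype=float)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1.0 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

# A step onset after 50 silent frames: PCEN emphasizes the onset frame
# far more than the later, stationary part of the same sound.
E = np.zeros((4, 150))
E[:, 50:] = 100.0
out = pcen(E)
```

This automatic gain control is why PCEN helps separate rapidly varying foreground events from slowly varying backgrounds: the output at the onset frame is an order of magnitude larger than at later frames of the same steady sound.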
Beeferman, Leah. "JOURNEYS INTO THE UNKNOWN: A SERIES OF SCIENCE ARCHITECTURE TASKS AND EVENTS, SPACE-BOUND EXPLORATIONS AND FAR-TRAVELS, DISCOVERIES AND MISSES (NEAR AND FAR), IMAGINATIVE SPACE-GAZING AND RELATED INVESTIGATIONS, OBSERVATIONS, ORBITS, AND OTHER REPETITIOUS MONITORING TASKS". VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2164.
VESPERINI, FABIO. "Deep Learning for Sound Event Detection and Classification". Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263536.
The recent progress in acoustic signal processing and machine learning techniques has enabled the development of innovative technologies for the automatic analysis of sound events. In particular, one of the most popular approaches to this problem nowadays relies on the exploitation of deep learning techniques. As further proof, on several occasions neural architectures originally designed for other multimedia domains have been successfully applied to the audio signal. Indeed, although these tasks were long addressed by statistical modelling algorithms such as Gaussian Mixture Models, Hidden Markov Models, or Support Vector Machines, the new breakthrough of machine learning for audio processing has led to encouraging results on the addressed tasks. Hence, this thesis reports an up-to-date state of the art and proposes several reliable DNN-based methods for Sound Event Detection (SED) and Sound Event Classification (SEC), with an overview of the Deep Neural Network (DNN) architectures used for this purpose and of the evaluation procedures and metrics used in this research field. In line with the recent trend, which shows an extensive employment of Convolutional Neural Networks (CNNs) for both SED and SEC tasks, this work also reports rather new approaches based on the Siamese DNN architecture and the novel Capsule computational units. Most of the reported systems were designed for international challenges. This allowed access to public datasets and a comparison, on a common basis, with the systems proposed by the most competitive research teams. The case studies reported in this dissertation refer to applications in a variety of scenarios, ranging from unobtrusive health monitoring and audio-based surveillance to bio-acoustic monitoring and classification of road surface conditions. These tasks face numerous challenges, particularly related to their application in real-life environments.
Among these issues are unbalanced datasets, different acquisition setups, acoustic disturbances (i.e., background noise, reverberation, and cross-talk), and polyphony. In particular, since multiple events are very likely to overlap in real-life audio, two algorithms for polyphonic SED are reported in this thesis. A polyphonic SED algorithm can be regarded as a system able to jointly perform detection (determining the onset and offset times of the sound events) and classification (assigning a label to each of the events occurring in the audio stream).
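The definition of polyphonic SED given above (joint onset/offset detection plus a class label per event) can be made concrete with a minimal decoding step: after thresholding, a SED network's frame-wise binary activity matrix is turned into event tuples, one per run of active frames and class. The class names and hop size below are illustrative, not from the thesis.

```python
def decode_events(activity, class_names, hop=0.02):
    """Convert a (classes, frames) binary activity matrix into
    (label, onset_seconds, offset_seconds) tuples. Runs that overlap in
    time across different rows yield simultaneous (polyphonic) events."""
    events = []
    for c, row in enumerate(activity):
        t, n = 0, len(row)
        while t < n:
            if row[t]:
                start = t
                while t < n and row[t]:
                    t += 1  # extend the event while the class stays active
                events.append((class_names[c], start * hop, t * hop))
            else:
                t += 1
    return events

# Two overlapping events in a toy 4-frame example (hop = 0.5 s).
acts = [[0, 1, 1, 0],   # "dog_bark" active in frames 1-2
        [1, 1, 0, 0]]   # "speech" active in frames 0-1
events = decode_events(acts, ["dog_bark", "speech"], hop=0.5)
```

In evaluation terms, each tuple is then matched against reference annotations with an onset/offset tolerance, which is how polyphonic SED metrics are typically computed.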
Jackson, Asti Joy. "Structure of Sound". Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73778.
Master of Architecture
Fonseca, Eduardo. "Training sound event classifiers using different types of supervision". Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/673067.
The interest in automatic sound event recognition has grown in recent years, motivated by new applications in fields such as healthcare, smart homes, or urban planning. At the start of this thesis, research on sound event classification focused mainly on supervised learning using small datasets, often carefully annotated with vocabularies limited to specific domains (such as urban or domestic). However, such datasets do not allow training classifiers able to recognize the hundreds of sound events occurring around us, such as kettle whistles, bird sounds, passing cars, or different alarms. At the same time, websites such as Freesound or YouTube host large amounts of environmental sound data, which can be useful for training classifiers with a larger vocabulary, particularly using deep learning methods that require large amounts of data. To advance the state of the art in sound event classification, this thesis investigates several aspects of dataset creation, as well as supervised and unsupervised learning, to train large-vocabulary sound event classifiers using different types of supervision in novel and alternative ways. Specifically, we focus on supervised learning with clean and noisy labels, as well as on self-supervised representation learning from unlabeled data. The first part of this thesis focuses on the creation of FSD50K, a dataset with over 100 h of audio manually labeled using 200 sound event classes. We present a detailed description of the creation process and an exhaustive characterization of the dataset.
In addition, we explore architectural modifications to increase shift invariance in CNNs, improving robustness to time/frequency shifts in the input spectrograms. In the second part, we focus on training sound event classifiers with noisy labels. First, we propose a dataset that allows the investigation of real label noise. Then, we explore network-agnostic approaches to mitigate the effect of label noise during training, including regularization techniques, noise-robust loss functions, and strategies to reject noisily labeled examples. Furthermore, we develop a teacher-student framework to address the problem of missing labels in sound event datasets. In the third part, we propose algorithms to learn audio representations from unlabeled data. In particular, we develop self-supervised contrastive learning methods, where representations are learned by comparing pairs of examples computed via data augmentation and automatic sound separation. Finally, we report on the organization of two DCASE Challenge Tasks on automatic audio tagging with noisy labels. By proposing datasets as well as state-of-the-art methods and audio representations, this thesis contributes to the advancement of open sound event research and to the transition from traditional supervised learning with clean labels to learning strategies less dependent on costly annotation efforts.
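One well-known member of the family of noise-robust loss functions this abstract mentions is the generalized cross-entropy (Lq) loss, which interpolates between categorical cross-entropy (q→0) and MAE (q=1). The sketch below is illustrative of the technique in general, not necessarily the exact variant evaluated in the thesis.

```python
import numpy as np

def lq_loss(probs, labels, q=0.7):
    """Generalized cross-entropy (Lq) loss over predicted class
    probabilities: mean of (1 - p_y**q) / q. Unlike plain cross-entropy,
    the loss on a confidently wrong example is bounded by 1/q, so
    mislabeled training examples cannot dominate the gradient."""
    p = probs[np.arange(len(labels)), labels]
    return float(np.mean((1.0 - p ** q) / q))

# A confident correct prediction costs ~0; a confident wrong one stays
# bounded (cross-entropy would blow up as p -> 0).
probs = np.array([[0.99, 0.01],
                  [0.01, 0.99]])
clean = lq_loss(probs, [0, 1])   # correct labels
noisy = lq_loss(probs, [1, 0])   # flipped (noisy) labels
```

The bounded penalty is the point: with label noise, a fraction of "wrong" examples are actually mislabeled, and capping their loss keeps them from dragging the model toward the noise.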
Pahar, Madhurananda. "A novel sound reconstruction technique based on a spike code (event) representation". Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/23025.
Labbé, Etienne. "Description automatique des événements sonores par des méthodes d'apprentissage profond". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES054.
In the audio research field, the majority of machine learning systems focus on recognizing a limited number of sound events. However, when a machine interacts with real data, it must be able to handle much more varied and complex situations. To tackle this problem, annotators use natural language, which allows any sound information to be summarized. Automated Audio Captioning (AAC) was introduced recently to develop systems capable of automatically producing a description of any type of sound in text form. This task concerns all kinds of sound events, such as environmental, urban, and domestic sounds, sound effects, music, or speech. This type of system could be used by people who are deaf or hard of hearing, and could improve the indexing of large audio databases. In the first part of this thesis, we present the state of the art of the AAC task through a global description of public datasets, learning methods, architectures, and evaluation metrics. Using this knowledge, we then present the architecture of our first AAC system, which obtains encouraging scores on the main AAC metric, named SPIDEr: 24.7% on the Clotho corpus and 40.1% on the AudioCaps corpus. In the second part, we explore many aspects of AAC systems. We first focus on evaluation methods through a study of SPIDEr. For this, we propose a variant called SPIDEr-max, which considers several candidates for each audio file and shows that the SPIDEr metric is very sensitive to the predicted words. Then, we improve our reference system by exploring different architectures and numerous hyper-parameters to exceed the state of the art on AudioCaps (SPIDEr of 49.5%). Next, we explore a multi-task learning method aimed at improving the semantics of the sentences generated by our system. Finally, we build a general and unbiased AAC system called CONETTE, which can generate different types of descriptions that approximate those of the target datasets.
In the third and last part, we propose to study the capability of an AAC system to automatically search for audio content in a database. Our approach obtains scores competitive with systems dedicated to this task, while using fewer parameters. We also introduce semi-supervised methods to improve our system using new unlabeled audio data, and we show how pseudo-label generation can impact an AAC model. Finally, we study AAC systems in languages other than English: French, Spanish, and German. In addition, we propose a system capable of producing all four languages at the same time, and we compare it with systems specialized in each language.
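Text-based audio retrieval of the kind described here typically ranks clips by the similarity between a text-query embedding and audio embeddings in a shared space. A minimal cosine-similarity ranking, with toy 2-D vectors standing in for real model outputs (an assumption for illustration only), could look like:

```python
import numpy as np

def rank_audio(query_emb, audio_embs, k=3):
    """Return indices of the k audio clips whose embeddings have the
    highest cosine similarity with the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    A = audio_embs / np.linalg.norm(audio_embs, axis=1, keepdims=True)
    return np.argsort(-(A @ q))[:k]  # descending similarity

# Toy example: the second clip points almost exactly along the query.
query = np.array([1.0, 0.0])
clips = np.array([[0.0, 1.0],
                  [1.0, 0.1],
                  [-1.0, 0.0]])
order = rank_audio(query, clips)
```

In a real system the embeddings would come from the captioning model's audio encoder and a text encoder trained to share its space; the ranking step itself stays this simple.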
Books on the topic "Sound events"
Virtanen, Tuomas, Mark D. Plumbley, and Dan Ellis, eds. Computational Analysis of Sound Scenes and Events. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63450-0.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2004.
McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2005.
Thomas, Jeremy. Taking leave. London: Timewell, 2006.
Radio New Zealand. Catalogue of Radio New Zealand recordings of Maori events, 1938-1950: RNZ 1-60. Auckland: Archive of Maori and Pacific Music, Anthropology Dept., University of Auckland, 1991.
British Broadcasting Corporation. Equestrian events. Princeton, N.J: Films for the Humanities & Sciences, 1991.
Taylor, Fred. What, and Give Up Showbiz?: Six Decades in the Music Business. Blue Ridge Summit: Backbeat, 2020.
Cai, Wenyi. Yi tian 10 fen zhong, ying zhan xin wen Ying wen: Yue du, ting li, yu hui neng li yi ci yang cheng! 8th ed. Taibei Shi: Kai xin qi ye guan li gu wen you xian gong si, 2015.
Marchetta, Vittorio. Passaggi di sound design: Riflessioni, competenze, oggetti-eventi. Milano: F. Angeli, 2010.
Basile, Giuseppe. ' 80, new sound, new wave: Vita, musica ed eventi nella provincia italiana degli anni '80. Taranto: Geophonìe, 2007.
Buscar texto completoCapítulos de libros sobre el tema "Sound events"
Toole, Floyd E. "Above the Transition Frequency: Acoustical Events and Perceptions". In Sound Reproduction, 3rd ed., 157–214. New York: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-7.
Toole, Floyd E. "Below the Transition Frequency: Acoustical Events and Perceptions". In Sound Reproduction, 3rd ed., 215–62. New York: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-8.
Guastavino, Catherine. "Everyday Sound Categorization". In Computational Analysis of Sound Scenes and Events, 183–213. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_7.
Font, Frederic, Gerard Roma, and Xavier Serra. "Sound Sharing and Retrieval". In Computational Analysis of Sound Scenes and Events, 279–301. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_10.
Stein, Peter J. "Observation of the Sound Radiated by Individual Ice Fracturing Events". In Sea Surface Sound, 533–44. Dordrecht: Springer Netherlands, 1988. http://dx.doi.org/10.1007/978-94-009-3017-9_38.
Bello, Juan Pablo, Charlie Mydlarz, and Justin Salamon. "Sound Analysis in Smart Cities". In Computational Analysis of Sound Scenes and Events, 373–97. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_13.
Loar, Josh. "Conventions and Other Multi-room Live Events". In The Sound System Design Primer, 417–21. New York: Routledge, 2019. http://dx.doi.org/10.4324/9781315196817-44.
Serizel, Romain, Victor Bisot, Slim Essid, and Gaël Richard. "Acoustic Features for Environmental Sound Analysis". In Computational Analysis of Sound Scenes and Events, 71–101. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_4.
Benetos, Emmanouil, Dan Stowell, and Mark D. Plumbley. "Approaches to Complex Sound Scene Analysis". In Computational Analysis of Sound Scenes and Events, 215–42. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_8.
Theodorou, Theodoros, Iosif Mporas, and Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events". In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.
Texto completoActas de conferencias sobre el tema "Sound events"
HILL, AJ, J. MULDER, J. BURTON, M. KOK, and M. LAWRENCE. "A CRITICAL ANALYSIS OF SOUND LEVEL MONITORING METHODS AT LIVE EVENTS". In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14142.
HOURANI, C., and AJ HILL. "TOWARDS A SUBJECTIVE QUANTIFICATION OF NOISE ANNOYANCE DUE TO OUTDOOR EVENTS". In Reproduced Sound 2023. Institute of Acoustics, 2023. http://dx.doi.org/10.25144/16927.
Miyazaki, Koichi, Tomoki Hayashi, Tomoki Toda, and Kazuya Takeda. "Connectionist Temporal Classification-based Sound Event Encoder for Converting Sound Events into Onomatopoeic Representations". In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553374.
Wheeler, P., D. Sharp, and S. Taherzadeh. "AN EVALUATION OF UK AND INTERNATIONAL GUIDANCE FOR THE CONTROL OF NOISE AT OUTDOOR EVENTS". In Reproduced Sound 2020. Institute of Acoustics, 2020. http://dx.doi.org/10.25144/13383.
BURTON, J., and AJ HILL. "USING COGNITIVE PSYCHOLOGY AND NEUROSCIENCE TO BETTER INFORM SOUND SYSTEM DESIGN AT LARGE MUSICAL EVENTS". In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14148.
Imoto, Keisuke, Noriyuki Tonami, Yuma Koizumi, Masahiro Yasuda, Ryosuke Yamanishi, and Yoichi Yamashita. "Sound Event Detection by Multitask Learning of Sound Events and Scenes with Soft Scene Labels". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053912.
Zhang, Maosheng, Ruimin Hu, Shihong Chen, Xiaochen Wang, Dengshi Li, and Lin Jiang. "Spatial perception reproduction of sound events based on sound property coincidences". In 2015 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2015. http://dx.doi.org/10.1109/icme.2015.7177412.
Stanzial, Domenico, Giorgio Sacchi, and Giuliano Schiffrer. "Active playback of acoustic quadraphonic sound events". In 155th Meeting Acoustical Society of America. ASA, 2008. http://dx.doi.org/10.1121/1.2992204.
Kumar, Anurag, Ankit Shah, Alexander Hauptmann, and Bhiksha Raj. "Learning Sound Events from Webly Labeled Data". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/384.
Reports on the topic "Sound events"
Wilson, D. K., V. A. Nguyen, Nassy Srour, and John Noble. Sound Exposure Calculations for Transient Events and Other Improvements to an Acoustical Tactical Decision Aid. Fort Belvoir, VA: Defense Technical Information Center, August 2002. http://dx.doi.org/10.21236/ada406703.
Wilson, D., Vladimir Ostashev, Michael Shaw, Michael Muhlestein, John Weatherly, Michelle Swearingen, and Sarah McComas. Infrasound propagation in the Arctic. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42683.
Albright, Jeff, Kim Struthers, Lisa Baril, and Mark Brunson. Natural resource conditions at Valles Caldera National Preserve: Findings & management considerations for selected resources. National Park Service, June 2022. http://dx.doi.org/10.36967/nrr-2293731.
Yatsymirska, Mariya. MODERN MEDIA TEXT: POLITICAL NARRATIVES, MEANINGS AND SENSES, EMOTIONAL MARKERS. Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11411.
Masiero, Bruno, Marcio Henrique de Avelar, and William D'Andrea Fonseca. International Year of Sound 2020 & 2021: Ano Internacional do Som prorrogado até 2021. William D’Andrea Fonseca, July 2020. http://dx.doi.org/10.55753/aev.v35e52.15.
Michalski, Ranny L. X. N., Bruno Masiero, William D’Andrea Fonseca, and Márcio Avelar. Fim do Ano Internacional do Som: Fechamento do Ano Internacional do Som 2020 & 2021. Revista Acústica e Vibrações, December 2021. http://dx.doi.org/10.55753/aev.v36e53.60.
Law, Edward, Samuel Gan-Mor, Hazel Wetzstein, and Dan Eisikowitch. Electrostatic Processes Underlying Natural and Mechanized Transfer of Pollen. United States Department of Agriculture, May 1998. http://dx.doi.org/10.32747/1998.7613035.bard.
Michelmore, Richard, Eviatar Nevo, Abraham Korol, and Tzion Fahima. Genetic Diversity at Resistance Gene Clusters in Wild Populations of Lactuca. United States Department of Agriculture, February 2000. http://dx.doi.org/10.32747/2000.7573075.bard.
Hayes. L51633 Investigation of WIC Test Variables. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 1991. http://dx.doi.org/10.55274/r0010107.
Kanninen, M. F. L51718 Development and Validation of a Ductile Fracture Analysis Model. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 1994. http://dx.doi.org/10.55274/r0010321.