Theses on the topic "Sound events"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Sound events".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses from a wide variety of disciplines and organize your bibliography correctly.
Hay, Timothy Deane. "MAX-DOAS measurements of bromine explosion events in McMurdo Sound, Antarctica". Thesis, University of Canterbury. Physics and Astronomy, 2010. http://hdl.handle.net/10092/5394.
Giannoulis, Dimitrios. "Recognition of sound sources and acoustic events in music and environmental audio". Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/9130.
PAPETTI, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools". Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.
The work presented in this thesis ranges over a variety of research topics, spanning from human-computer interaction to physical modeling. What unites such broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena in order to provide human-computer interfaces with sound feedback that is consistent with the user interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the name of (to cite a few) auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, the physical modeling of everyday sounds is taken into account, that is, sounds not belonging to the families of speech and musical sounds.
Olvera Zambrano, Mauricio Michel. "Robust sound event detection". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0324.
From industry to general-interest applications, computational analysis of sound scenes and events allows us to interpret the continuous flow of everyday sounds. One of the main degradations encountered when moving from lab conditions to the real world is due to the fact that sound scenes are not composed of isolated events but of multiple simultaneous events. Differences between training and test conditions also often arise due to extrinsic factors such as the choice of recording hardware and microphone positions, as well as intrinsic factors of sound events, such as their frequency of occurrence, duration and variability. In this thesis, we investigate problems of practical interest for audio analysis tasks to achieve robustness in real scenarios. Firstly, we explore the separation of ambient sounds in a practical scenario in which multiple short-duration sound events with fast-varying spectral characteristics (i.e., foreground sounds) occur simultaneously with stationary background sounds. We introduce the foreground-background ambient sound separation task and investigate whether a deep neural network with auxiliary information about the statistics of the background sound can differentiate between rapidly- and slowly-varying spectro-temporal characteristics. Moreover, we explore the use of per-channel energy normalization (PCEN) as a suitable pre-processing step and the ability of the separation model to generalize to unseen sound classes. Results on mixtures of isolated sounds from the DESED and Audioset datasets demonstrate the generalization capability of the proposed separation system, which is mainly due to PCEN. Secondly, we investigate how to improve the robustness of audio analysis systems under mismatched training and test conditions.
We explore two distinct tasks: acoustic scene classification (ASC) with mismatched recording devices and training of sound event detection (SED) systems with synthetic and real data. In the context of ASC, without assuming the availability of recordings captured simultaneously by mismatched training and test recording devices, we assess the impact of moment normalization and matching strategies and their integration with unsupervised adversarial domain adaptation. Our results show the benefits and limitations of these adaptation strategies applied at different stages of the classification pipeline. The best strategy matches source-domain performance in the target domain. In the context of SED, we propose a PCEN-based acoustic front-end with learned parameters. Then, we study the joint training of SED with auxiliary classification branches that categorize sounds as foreground or background according to their spectral properties. We also assess the impact of aligning the distributions of synthetic and real data at the frame or segment level based on optimal transport. Finally, we integrate an active learning strategy into the adaptation procedure. Results on the DESED dataset indicate that these methods are beneficial for the SED task and that their combination further improves performance on real sound scenes.
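The abstract above leans on per-channel energy normalization (PCEN). As a rough illustration of what PCEN does (not code from the thesis), the following sketch applies the standard PCEN recurrence to a single frequency channel; the smoothing and compression constants are common defaults, chosen here for illustration only.

```python
def pcen(energy, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of one channel's frame energies."""
    out, m = [], energy[0]          # seed the smoother with the first frame
    for e in energy:
        m = (1.0 - s) * m + s * e   # low-pass smoothed energy M(t)
        agc = e / (eps + m) ** alpha                 # adaptive gain control
        out.append((agc + delta) ** r - delta ** r)  # root compression
    return out

# A stationary background comes out at a similar level whether it is quiet
# or loud, which is what makes PCEN emphasize transient foreground events.
quiet = pcen([1.0] * 5)
loud = pcen([100.0] * 5)
```

Because the gain is driven by the smoothed energy `m`, steady backgrounds are normalized away, while sudden onsets that outrun the smoother are preserved.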
Beeferman, Leah. "JOURNEYS INTO THE UNKNOWN: A SERIES OF SCIENCE ARCHITECTURE TASKS AND EVENTS, SPACE-BOUND EXPLORATIONS AND FAR-TRAVELS, DISCOVERIES AND MISSES (NEAR AND FAR), IMAGINATIVE SPACE-GAZING AND RELATED INVESTIGATIONS, OBSERVATIONS, ORBITS, AND OTHER REPETITIOUS MONITORING TASKS". VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2164.
VESPERINI, FABIO. "Deep Learning for Sound Event Detection and Classification". Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263536.
Recent progress in acoustic signal processing and machine learning techniques has enabled the development of innovative technologies for automatic analysis of sound events. In particular, one of the most popular approaches to this problem nowadays lies in the exploitation of Deep Learning techniques. As further proof, on several occasions neural architectures originally designed for other multimedia domains have been successfully adapted to process the audio signal. Indeed, although these tasks were long addressed by statistical modelling algorithms such as Gaussian Mixture Models, Hidden Markov Models or Support Vector Machines, the breakthrough of machine learning for audio processing has led to encouraging results in the addressed tasks. Hence, this thesis reports an up-to-date state of the art and proposes several reliable DNN-based methods for Sound Event Detection (SED) and Sound Event Classification (SEC), with an overview of the Deep Neural Network (DNN) architectures used for this purpose and of the evaluation procedures and metrics used in this research field. In line with the recent trend, which shows an extensive employment of Convolutional Neural Networks (CNNs) for both SED and SEC tasks, this work also reports newer approaches based on the Siamese DNN architecture or the novel Capsule computational units. Most of the reported systems were designed for international challenges. This allowed access to public datasets and comparison, on a common basis, with systems proposed by the most competitive research teams. The case studies reported in this dissertation refer to applications in a variety of scenarios, ranging from unobtrusive health monitoring and audio-based surveillance to bio-acoustic monitoring and classification of road surface conditions. These tasks face numerous challenges, particularly related to their application in real-life environments.
Among these issues are dataset imbalance, different acquisition setups, acoustic disturbances (i.e., background noise, reverberation and cross-talk) and polyphony. In particular, since multiple events are very likely to overlap in real-life audio, two algorithms for polyphonic SED are reported in this thesis. A polyphonic SED algorithm can be regarded as a system able to jointly perform detection (determining the onset and offset times of the sound events) and classification (assigning a label to each of the events occurring in the audio stream).
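The definition above (joint detection of onsets and offsets plus labeling, with overlaps allowed) can be made concrete with a small sketch. This is a generic illustration, not an algorithm from the thesis: it thresholds a grid of frame-level class probabilities and emits one (label, onset, offset) tuple per contiguous active run, independently per class. The class names, hop size and 0.5 threshold are illustrative assumptions.

```python
def decode_events(probs, class_names, frame_hop_s=0.02, threshold=0.5):
    """Turn an [n_frames x n_classes] probability grid into
    (label, onset_s, offset_s) tuples; overlapping events are allowed."""
    events = []
    for c, label in enumerate(class_names):
        active = [frame[c] >= threshold for frame in probs]
        start = None
        for t, is_on in enumerate(active + [False]):  # sentinel closes open runs
            if is_on and start is None:
                start = t
            elif not is_on and start is not None:
                events.append((label, start * frame_hop_s, t * frame_hop_s))
                start = None
    return events

# Three 20 ms frames, two classes; the two events overlap in frame 1.
probs = [[0.9, 0.1],
         [0.8, 0.7],
         [0.2, 0.9]]
detected = decode_events(probs, ["speech", "dog_bark"])
```

Real systems typically add median filtering or minimum-duration constraints on the activity mask before extracting the runs.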
Jackson, Asti Joy. "Structure of Sound". Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73778.
Master of Architecture
Fonseca, Eduardo. "Training sound event classifiers using different types of supervision". Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/673067.
Interest in the automatic recognition of sound events has increased in recent years, motivated by new applications in fields such as healthcare, smart homes, or urbanism. At the start of this thesis, research on sound event classification focused mainly on supervised learning using small datasets, often carefully annotated with vocabularies limited to specific domains (such as urban or domestic). However, such datasets do not allow the training of classifiers able to recognize the hundreds of sound events occurring around us, such as kettle whistles, bird sounds, passing cars, or different alarms. At the same time, websites such as Freesound or YouTube host large amounts of environmental sound data, which can be useful for training classifiers with a more extensive vocabulary, particularly using deep learning methods that require large amounts of data. To advance the state of the art in sound event classification, this thesis investigates several aspects of dataset creation, as well as supervised and unsupervised learning, to train sound event classifiers with an extensive vocabulary, using different types of supervision in novel and alternative ways. Specifically, we focus on supervised learning using clean and noisy labels, as well as self-supervised representation learning from unlabeled data. The first part of this thesis focuses on the creation of FSD50K, a dataset with over 100 hours of audio manually labeled with 200 sound event classes. We present a detailed description of the creation process and an exhaustive characterization of the dataset.
We also explore architectural modifications to increase shift invariance in CNNs, improving robustness to time/frequency shifts in the input spectrograms. In the second part, we focus on training sound event classifiers with noisy labels. First, we propose a dataset that supports the investigation of real label noise. Then, we explore network-agnostic methods to mitigate the effect of label noise during training, including regularization techniques, noise-robust loss functions, and strategies to reject noisily labeled examples. In addition, we develop a teacher-student method to address the problem of missing labels in sound event datasets. In the third part, we propose algorithms to learn audio representations from unlabeled data. In particular, we develop self-supervised contrastive learning methods, where representations are learned by comparing pairs of examples computed via data augmentation and automatic sound separation. Finally, we report on the organization of two DCASE Challenge Tasks on automatic audio tagging with noisy labels. By proposing datasets as well as state-of-the-art methods and audio representations, this thesis contributes to the advancement of open sound event research and to the transition from traditional supervised learning using clean labels to other learning strategies less dependent on costly annotation efforts.
Pahar, Madhurananda. "A novel sound reconstruction technique based on a spike code (event) representation". Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/23025.
Labbé, Etienne. "Description automatique des événements sonores par des méthodes d'apprentissage profond". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES054.
In the audio research field, the majority of machine learning systems focus on recognizing a limited number of sound events. However, when a machine interacts with real data, it must be able to handle much more varied and complex situations. To tackle this problem, annotators use natural language, which allows any sound information to be summarized. Automated Audio Captioning (AAC) was introduced recently to develop systems capable of automatically producing a description of any type of sound in text form. This task concerns all kinds of sound events such as environmental, urban and domestic sounds, sound effects, music or speech. This type of system could be used by people who are deaf or hard of hearing, and could improve the indexing of large audio databases. In the first part of this thesis, we present the state of the art of the AAC task through a global description of public datasets, learning methods, architectures and evaluation metrics. Using this knowledge, we then present the architecture of our first AAC system, which obtains encouraging scores on the main AAC metric, named SPIDEr: 24.7% on the Clotho corpus and 40.1% on the AudioCaps corpus. In the second part, we explore many aspects of AAC systems. We first focus on evaluation methods through the study of SPIDEr. For this, we propose a variant called SPIDEr-max, which considers several candidates for each audio file and shows that the SPIDEr metric is very sensitive to the predicted words. Then, we improve our reference system by exploring different architectures and numerous hyper-parameters to exceed the state of the art on AudioCaps (SPIDEr of 49.5%). Next, we explore a multi-task learning method aimed at improving the semantics of the sentences generated by our system. Finally, we build a general and unbiased AAC system called CONETTE, which can generate different types of descriptions that approximate those of the target datasets.
In the third and last part, we propose to study the capabilities of an AAC system to automatically search for audio content in a database. Our approach obtains scores competitive with those of systems dedicated to this task, while using fewer parameters. We also introduce semi-supervised methods to improve our system using new unlabeled audio data, and we show how pseudo-label generation can impact an AAC model. Finally, we study AAC systems in languages other than English: French, Spanish and German. In addition, we propose a system capable of producing all four languages at the same time, and we compare it with systems specialized in each language.
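The SPIDEr-max idea mentioned in the abstract above can be summarized in a few lines: instead of scoring only the caption the decoder ranks first, score every candidate and keep the best one per file, then average over the corpus. The sketch below is an interpretation of that description, not code from the thesis; it assumes per-candidate SPIDEr scores are already computed, and the numbers are made up for illustration.

```python
def spider_max(per_file_candidate_scores):
    """Corpus-level SPIDEr-max: mean over files of the best candidate's score."""
    return (sum(max(scores) for scores in per_file_candidate_scores)
            / len(per_file_candidate_scores))

# Two audio files, each with several candidate captions already scored.
corpus = [[0.31, 0.44, 0.28],
          [0.52, 0.49]]
score = spider_max(corpus)  # mean of the per-file maxima, 0.44 and 0.52
```

Comparing this value with plain SPIDEr on the top-ranked candidate alone is what reveals how sensitive the metric is to the exact words the decoder happens to emit.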
Kahl, Stefan. "Identifying Birds by Sound: Large-scale Acoustic Event Recognition for Avian Activity Monitoring". Universitätsverlag Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A36986.
Automated monitoring of bird vocal activity and species diversity can be a revolutionary tool for ornithologists, conservationists and birdwatchers, helping with the long-term monitoring of critical environmental niches. Deep artificial neural networks have surpassed traditional classifiers in visual recognition and acoustic event classification. Nevertheless, deep neural networks require expert knowledge to design, train and test powerful models. With this limitation in mind, and considering the requirements of future applications, an extensive research platform for automated monitoring of bird activity was developed: BirdNET. The resulting benchmark system delivers state-of-the-art results in various acoustic domains and has been used to build expert tools and public demonstrators that can help advance the democratization of scientific progress and future conservation efforts.
Rosshagen, Erik. "Sync Event : The Ethnographic Allegory of Unsere Afrikareise". Thesis, Stockholms universitet, Institutionen för mediestudier, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-131291.
Widmann, Andreas, Thomas Gruber, Teija Kujala, Mari Tervaniemi and Erich Schröger. "Binding Symbols and Sounds: Evidence from Event-Related Oscillatory Gamma-Band Activity". Oxford University Press, 2007. https://ul.qucosa.de/id/qucosa%3A32714.
Rosshagen, Erik. "Sync Event : The Ethnographic Allegory of Unsere Afrikareise". Thesis, Stockholms universitet, Filmvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-183273.
Brooks, Julian. "Being sound : FLOSS, flow and event in the composition and ensemble performance of free open computer music". Thesis, University of Huddersfield, 2016. http://eprints.hud.ac.uk/id/eprint/31370/.
Maiste, Anita. "Human auditory event-related potentials to frequency changes in speech and non-speech sounds". Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5899.
Pieszek, Marika, Erich Schröger and Andreas Widmann. "Separate and concurrent symbolic predictions of sound features are processed differently". Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155242.
Barton, Antony James. "Signal processing techniques for data reduction and event recognition in cough counting". Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/signal-processing-techniques-for-data-reduction-and-event-recognition-in-cough-counting(dc73495a-35b0-4d17-a6f8-cc2f88008659).html.
Trowitzsch, Ivo [Verfasser], Klaus [Akademischer Betreuer] Obermayer, Klaus [Gutachter] Obermayer, Dorothea [Gutachter] Kolossa and Thomas [Gutachter] Sikora. "Robust sound event detection in binaural computational auditory scene analysis / Ivo Trowitzsch ; Gutachter: Klaus Obermayer, Dorothea Kolossa, Thomas Sikora ; Betreuer: Klaus Obermayer". Berlin : Technische Universität Berlin, 2020. http://d-nb.info/1210055120/34.
Natzén, Christopher. "The Coming of Sound Film in Sweden 1928-1932 : New and Old Technologies". Doctoral thesis, Stockholms universitet, Filmvetenskapliga institutionen, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-42168.
Kahl, Stefan [Verfasser], Maximilian [Akademischer Betreuer] Eibl, Maximilian [Gutachter] Eibl, Marc [Gutachter] Ritter and Holger [Akademischer Betreuer] Klinck. "Identifying Birds by Sound: Large-scale Acoustic Event Recognition for Avian Activity Monitoring / Stefan Kahl ; Gutachter: Maximilian Eibl, Marc Ritter ; Maximilian Eibl, Holger Klinck". Chemnitz : Universitätsverlag Chemnitz, 2020. http://d-nb.info/1219664502/34.
Mafe, Majena. "Soundage : a practice-led approach to Gertrude Stein, sound, and generative language". Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/63361/1/Majena_Mafe_Thesis.pdf.
Klobása, Jiří. "Systém pro záznam a opakování událostí pro zvukové systémy". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234948.
Rijo, Sara Marina Albino. "Técnicas de deep learning para detecção de eventos em áudio: treino de modelos acústicos a partir de sinais puros". Master's thesis, Universidade de Évora, 2018. http://hdl.handle.net/10174/22275.
Ferroudj, Meriem. "Detection of rain in acoustic recordings of the environment using machine learning techniques". Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/82848/1/Meriem_Ferroudj_Thesis.pdf.
Afyonoglu, Kirbas Yeliz. "Neuronal activity to environmental sounds when presented together with semantically related words : An MMN study on monolingual and bilingual processing of homophones". Thesis, Stockholms universitet, Avdelningen för allmän språkvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-170303.
Neuronal activity of monolinguals and bilinguals in response to environmental sounds and semantically related words was studied by means of the Mismatch Negativity (MMN) component of event-related potentials. The MMN was expected to reflect the language selection process in bilinguals based on semantics and phonology. To this end, the interlingual homophones 'car' and 'kar' (snow) were presented as lexical stimuli together with semantically related environmental sounds in a passive auditory oddball paradigm. The lexical stimuli were recorded by a native speaker of English. Three Turkish-English late bilinguals and one native speaker of English participated in the study. An early MMN was elicited in both groups, distributed over the fronto-central and central scalp areas, with an amplitude of -2.5 at a latency of 113 ms. This indicates that the participants were sensitive to the acoustic differences between the two types of stimuli. Further examination of the interplay between environmental sounds and semantics showed no conclusive result. Likewise, it remained inconclusive whether or not the bilingual participants used the subphonemic cues in the presented auditory lexical stimuli.
Baczkowski, Antoine. "L'expérience festive au sein de la vie : pour une sociologie de la place des raves, des free parties et des teknivals dans le parcours biographique". Thesis, Besançon, 2014. http://www.theses.fr/2014BESA1025.
Raves, free parties and teknivals are techno parties that, in France, have given rise to moral panic. Their social history and phenomena are those of state suppression. This thesis attempts to capture the essence of the experience of these parties through a temporal and biographical lens: the noteworthy event (Michèle Leclerc-Olive, 1997). First, the ravers and other volunteer members of sound systems talked about themselves in biographical interviews. Then, as a second step, the goal was to identify, with our help, what guided and decided their path. Was their life only a "blast"? An aesthetic pleasure? Would a techno "activist" be marked only by celebration? The life stories of Marc, Fabrice, Marie Sophie and Gino helped us bring a clear answer to these questions.
Pollatou, Efpraxia. "Sounds of satire, echoes of madness : performance and evaluation in Cefalonia, Greece". Thesis, University of St Andrews, 2009. http://hdl.handle.net/10023/1016.
Guerrasio, Francesca. "Les territoires sonores de Salvatore Sciarrino. L’écoute écologique, le théâtre musical, l’esthétique figurale". Thesis, Paris 4, 2012. http://www.theses.fr/2012PA040046.
The goal of this research is to shed light on the logical trail hidden behind Sciarrino's compositions for theater; it will help us reduce the distance between the artist and the recipient of his works. At first, we will consider the cultural circumstances that played an important and decisive part in the composer's artistic creations. Then, we will devote time to observing, analyzing and understanding his first writing on aesthetics: Le figure della musica da Beethoven a oggi. The analysis of the unique musical forms created by Sciarrino will prove useful as we: a. reveal the forma mentis of the composer and the innumerable aspects of his personality; b. trace his organic language back to the system of norms and laws from which it originates. Without involving ourselves in the category of musical semantics, we will nonetheless consider musical language and its levels of communication. In fact, Sciarrino's research appears as the special instance of a language that is being created for the artistic world in general and not exclusively for the musical world. In order to recount the most singular aspects of his artistic work, we will focus on one of the fundamental concepts of his teachings: that music derives entirely from the sonic reality which surrounds us and which therefore constitutes its essence. The sonic event, conceived as a living organism, refers to the idea of ecological listening: the faculty of transforming natural sounds within a musical language by listening in perspective. The dramatic and dramaturgic division put in place by Sciarrino offers the listener and the performer of his music a study on perception, of both text and music, and immerses them in a world characterized by uncertainty and imperceptibility.
Hence the originality of his musical research and dramaturgic project, whose goal is not to make the visual reality coincide with the sonic reality, but to evoke the former with imperceptible sounds, ones that border on silence.
Kempf, Alexandre. "Nonlinear encoding of sounds in the auditory cortex Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities Cortical recruitment determines learning dynamics and strategy Interactions between nonlinear features in the mouse auditory cortex Context-dependent signaling of coincident auditory and visual events in primary visual cortex". Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCB085.
Perceptual objects are the elementary units used by the brain to construct an inner representation of the environment from multiple physical sources, like light or sound waves. While the physical signals are first encoded by receptors in peripheral organs into neuroelectric signals, the emergence of perceptual objects requires extensive processing in the central nervous system which is not yet fully characterized. Interestingly, recent advances in deep learning show that implementing series of nonlinear and linear operations is a very efficient way to create models that categorize visual and auditory perceptual objects similarly to humans. In contrast, most of the current knowledge about the auditory system concentrates on linear transformations. In order to establish a clear example of the contribution of auditory system nonlinearities to perception, we studied the encoding of sounds with increasing intensity (up-ramps) and decreasing intensity (down-ramps) in the mouse auditory cortex. Two behavioral tasks showed evidence that these two sounds are perceived with unequal salience despite carrying the same physical energy and spectral content, a phenomenon incompatible with linear processing. Recording the activity of large cortical populations for up- and down-ramping sounds, we found that cortex encodes them into distinct sets of nonlinear features, and that asymmetric feature selection explained the perceptual asymmetry. To complement these results, we also showed that, in reinforcement learning models, the amount of neural activity triggered by a stimulus (e.g. a sound) impacts learning speed and strategy. Interestingly, very similar effects were observed in sound discrimination behavior and could be explained by the amount of cortical activity triggered by the discriminated sounds. Altogether, this establishes that auditory system nonlinearities have an impact on perception and behavior.
To identify more extensively the nonlinearities that influence sound encoding, we then recorded the activity of around 60,000 neurons sampling the entire horizontal extent of the auditory cortex. Beyond the fine-scale tonotopic organization uncovered with this dataset, we identified and quantified 7 nonlinearities. Interestingly, we found that different nonlinearities can interact with each other in a non-trivial manner. Knowledge of these interactions holds promise for refining auditory processing models. Finally, we wondered whether nonlinear processes are also important for multisensory integration. We measured how visual inputs and sounds combine in the visual and auditory cortex using calcium imaging in mice. We found no modulation of the supragranular auditory cortex in response to visual stimuli, as observed in other previous studies. We observed that auditory cortex inputs to the visual cortex affect visual responses concomitant with a sound. Interestingly, we found that auditory cortex projections to the visual cortex preferentially channel activity from neurons encoding a particular nonlinear feature: the loud onset of sudden sounds. As a result, visual cortex activity for an image combined with a loud sound is higher than for the image alone or combined with a quiet sound. Moreover, this boosting effect is highly nonlinear. This result suggests that loud sound onsets are behaviorally relevant in the visual system, possibly indicating the presence of a new perceptual object in the visual field, which could represent a potential threat. In conclusion, our results show that nonlinearities are ubiquitous in sound processing by the brain and also play a role in the integration of auditory information with visual information. In addition, accounting for these nonlinearities is crucial not only to understand how perceptual representations are formed but also to predict how these representations impact behavior.
House, Kayli. "Pilgrim carnival". Thesis, view full-text document. Access restricted to the University of North Texas campus, 2002. http://www.library.unt.edu/theses/open/20022/house%5Fkayli/index.htm.
A two-week event in four parts: invitation, installation, reception, and thank-you card. Installation for 2 hosts, 2 ushers, photographer, 4 posers, exerciser, sound persons, and blindfolded guests, with a mix of live and recorded sounds. Includes instructions for performance. Includes bibliographical references (p. 66-67).
Lafay, Grégoire. "Simulation de scènes sonores environnementales : Application à l’analyse sensorielle et l’analyse automatique". Thesis, Ecole centrale de Nantes, 2016. http://www.theses.fr/2016ECDN0007/document.
This thesis deals with environmental scene analysis, the auditory result of mixing separate but concurrent emitting sources. The sound environment is a complex object, which opens the field of possible research beyond the specific areas that are speech or music. For a person to make sense of their sonic environment, the involved process relies on both the perceived data and its context. For each experiment, one must be, as much as possible, in control of the evaluated stimuli, whether the field of investigation is perception or machine learning. Nevertheless, the sound environment needs to be studied in an ecological framework, using real recordings of sounds as stimuli rather than synthetic pure tones. We therefore propose a model of sound scenes allowing us to simulate complex sound environments from isolated sound recordings. The high-level structural properties of the simulated scenes (such as the type of sources, their sound levels or the event density) are set by the experimenter. Based on knowledge of the human auditory system, the model abstracts the sound environment as a composite object, a sum of sound sources. The usefulness of the proposed model is assessed in two areas of investigation. The first is related to soundscape perception, where the model is used to propose an innovative experimental protocol to study the perceived pleasantness of urban soundscapes. The second tackles the major issue of evaluation in machine listening, for which we consider simulated data in order to rigorously assess the generalization capacities of automatic sound event detection systems.
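The simulation model described above (isolated recordings placed into a scene with experimenter-controlled levels and timing) can be illustrated with a minimal additive mixer. This is a sketch of the general idea, not the thesis's actual software; the sample rate and signal values are illustrative assumptions.

```python
def simulate_scene(background, events, sample_rate=16000):
    """Sum isolated event recordings into a background track.
    events: list of (samples, onset_seconds, gain) tuples."""
    scene = list(background)
    for samples, onset_s, gain in events:
        start = int(onset_s * sample_rate)
        for i, x in enumerate(samples):
            if start + i < len(scene):      # clip events at the scene end
                scene[start + i] += gain * x
    return scene

# Place a two-sample event 0.000125 s (2 samples at 16 kHz) into a quiet background.
background = [0.01] * 8
mix = simulate_scene(background, [([1.0, 1.0], 0.000125, 0.5)])
```

Because the experimenter chooses the event list, properties such as event density or signal-to-background ratio become controlled variables rather than accidents of field recording.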
"Re-Sonification of Objects, Events, and Environments". Doctoral diss., 2013. http://hdl.handle.net/2286/R.I.17897.
Ph.D. Electrical Engineering 2013
Freitas, João Pedro Alves de. "Uma Experiência de Produção Musical - Relatório de Estágio no Jazz ao Centro Clube – Salão Brazil". Master's thesis, 2020. http://hdl.handle.net/10316/93717.
Uma Experiência de Produção Musical. Este relatório resume o trabalho que foi desenvolvido por mim entre 20 de outubro de 2019 e 1 de fevereiro de 2020 na Associação sem fins lucrativos Jazz ao Centro Clube (JACC). Este trabalho foi, maioritariamente, realizado no Salão Brazil, espaço explorado pelo JACC. O Estágio teve como Orientadores José Miguel Pereira, Presidente do JACC, e Adriana Ávila, Vice-Presidente do JACC. Este Relatório foi feito com a Orientação do Professor Doutor Paulo Estudante, do Departamento de História, Estudos Europeus, Arqueologia e Artes da Faculdade de Letras da Universidade de Coimbra. No começo deste trabalho, é feito um pequeno enquadramento teórico, principalmente sobre políticas culturais e onde e de que forma estas atuam, com o objetivo de se chegar ao ponto de vista do associativismo e, por consequência, do trabalho desenvolvido no estágio e dos seus moldes. Para tal, começa-se por procurar um conceito geral de Política Cultural. Com este estabelecido, segue-se o percurso destas políticas em Portugal e, por fim, no meio local, dos municípios. O passo seguinte resume a história do Jazz ao Centro Clube, desde a sua fundação. São também descritas as principais atividades da associação. De seguida, explicitam-se detalhadamente, num Diário de Bordo compilado ao longo da duração do estágio, as tarefas levadas a cabo pelo estagiário nos diferentes dias e os contactos feitos com a restante equipa, como momentos de aprendizagem ou trabalho em conjunto. Avança-se, posteriormente, para uma descrição detalhada do trabalho de produção executado e das diversas tarefas, tentando explicar, com um formato a assemelhar-se a um tutorial, que passos são necessários para executar essas tarefas e para que é que estas são precisas. Faz-se o mesmo sobre o trabalho técnico, utilizando um caso real e simulando a montagem do material para um concerto da banda “PAUS”, como feito para o concerto desta do dia 24 de janeiro de 2020 no Salão Brazil.
An Experience in Musical Production. This report summarizes the work I developed between October 20th, 2019 and February 1st, 2020 at the nonprofit association Jazz ao Centro Clube (JACC). This work was mostly carried out at Salão Brazil, the base of operations of JACC. The internship was supervised by José Miguel Pereira, President of JACC, and Adriana Ávila, Vice President of JACC. This report was made under the guidance of Professor Paulo Estudante, from the Departamento de História, Estudos Europeus, Arqueologia e Artes da Faculdade de Letras da Universidade de Coimbra. At the beginning of this paper, a short theoretical framework is laid out, mainly about cultural policies and where and how they act, with the aim of reaching the point of view of associations and, consequently, of the work developed in the internship. To do this, we begin by looking for a general concept of Cultural Policy. With this established, we follow the course of these policies in Portugal and, finally, in local environments such as municipalities. The next step details the history of Jazz ao Centro Clube, from its foundation. The main activities of the association are also described. Then, the tasks carried out by the trainee on the different days and the contacts made with the rest of the team, such as moments of learning or working together, are detailed in a journal compiled over the duration of the internship. After that, the report proceeds to a detailed description of the production work performed and the various tasks, trying to explain, in a format similar to a tutorial, what steps are needed to perform these tasks and why they are needed. The same is done for the technical work, using a real scenario and simulating the assembly of the material for a concert by the band “PAUS”, as done for their concert on January 24th, 2020 at Salão Brazil.
Lin, Chang Hong y 林昶宏. "A Study on Robust Sound Event Recognition". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/vg9tn5.
National Central University
Department of Computer Science and Information Engineering
102
In recent years, environmental sound recognition has become a new research topic in home automation. In home automation systems, the sound recognized by the system becomes the basis for performing certain tasks. However, various disturbances may cause the recognition system to fail in real-world applications. For example, a target source may be mixed with another sound due to simultaneous occurrence, or the sound received by the application may be exposed to background noise. To resolve these two issues, we propose three robust processing methods in this dissertation. We first propose a mixed sound verification method to deal with the simultaneous occurrence of sounds. For the problem of background noise, this dissertation adopts two approaches to reduce its impact on recognition. The first approach is sound enhancement, which suppresses the noise of the received sound before feature extraction. The second approach, called robust feature extraction, performs feature extraction and denoising simultaneously. To handle the problem of simultaneous occurrences of multiple sounds, this study proposes a framework consisting of sound separation and sound verification techniques based on a wireless sensor network (WSN). For the problem of reducing noise in the input audio, we propose a fast subspace-based sound enhancement method that filters background noise on the signal subspace. For robust feature extraction, we propose a novel feature extraction approach called the nonuniform scale-frequency map for environmental sound recognition. The experimental results demonstrate the robustness and feasibility of the three proposed systems, which are superior to the baseline systems.
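The dissertation's fast subspace method is not reproduced here, but the core idea of signal-subspace enhancement can be sketched with a plain truncated SVD (a simplified numpy illustration under the assumption of a low-rank signal; names are illustrative):

```python
import numpy as np

def subspace_enhance(frames, rank):
    # Stack noisy frames into a matrix, keep only the leading singular
    # subspace (assumed to carry the signal), discard the noise subspace.
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(32), rng.standard_normal(64))  # rank-1 signal
noisy = clean + 0.1 * rng.standard_normal((32, 64))
enhanced = subspace_enhance(noisy, rank=1)
err_noisy = np.linalg.norm(noisy - clean)
err_enh = np.linalg.norm(enhanced - clean)
print(err_enh < err_noisy)  # True
```

Keeping only the leading subspace removes most of the noise energy spread across the discarded directions, at the cost of whatever signal leaks outside the retained rank.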
Tseng, Yu-Hao y 曾郁豪. "Parallel Capsule Neural Networks for Sound Event Detection". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/33jxk4.
National Central University
Department of Communication Engineering
107
Research on artificial intelligence has continued for more than 60 years. With the rapid development of technology, we hope that computers can have the same learning ability as human beings. In recent years, more and more people have invested in the fields of machine learning and deep learning because of the success of AlphaGo. Many different network architectures have been developed to allow computers to assist humans in detecting and classifying data. We use the capsule neural network (CapsNet) from deep learning to propose a system for sound event detection. The extracted features are sent to the neural network for training through vectors. Beyond the capsule network's ability to identify overlapping events, we expand it into a parallel capsule network, so that each capsule can learn more features. Compared with the DCASE 2017 baseline, the error rate of our proposed method is reduced by about 41%. Compared with the first-place architecture in the DCASE 2017 challenge, the error rate also drops by about 26%.
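A parallel capsule network is too large to sketch here, but the squash nonlinearity at the heart of any CapsNet is compact (following the standard Sabour et al. formulation; this is an illustration, not the thesis's code):

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    # Capsule "squash" nonlinearity: preserves the vector's orientation
    # while mapping its length into [0, 1), so the length can encode the
    # probability that the entity represented by the capsule is present.
    n2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

v = np.array([3.0, 4.0])            # length 5
s = squash(v)
print(round(float(np.linalg.norm(s)), 4))  # 0.9615
```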
Galileu, João Pedro Duarte. "Urban Sound Event Classification for Audio-Based Surveillance Systems". Master's thesis, 2020. https://hdl.handle.net/10216/126838.
Liao, Jyun-Ci y 廖俊祺. "Environmental Sound Event Classification Based on Modulation Spectral Vectors". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/77th6k.
National Tsing Hua University
Department of Electrical Engineering
105
The Gaussian mixture model (GMM) has been applied successfully in both speech and sound recognition, but it does not perform well in environments with high background noise. This thesis proposes a method combining short-term and long-term features to overcome this issue. Here, the short-term features are Mel-frequency cepstral coefficients (MFCCs) and the long-term features are modulation spectral vectors (MSVs) calculated in the frequency domain. The MSVs capture the envelope information of the signal, which is a good feature against strong noise. For robustness against noise, this thesis proposes a method to include noisy data when training the GMMs. This method raises the recognition accuracy in low signal-to-noise ratio (SNR) cases. The method was evaluated on a database consisting of 8 different indoor sound event classes. It achieves > 80% accuracy at 0 dB SNR.
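The thesis's exact MSV computation is not given in the abstract; a minimal numpy sketch of one common formulation (the magnitude spectrum of each mel band's temporal envelope; the function name and parameters are assumptions):

```python
import numpy as np

def modulation_spectral_vector(log_mel, n_mod=8):
    # log_mel: (n_bands, n_frames) log mel-band energies over time.
    # The modulation spectrum is the magnitude spectrum of each band's
    # temporal envelope; the lowest modulation frequencies capture the
    # slow envelope structure that tends to survive additive noise.
    env = log_mel - log_mel.mean(axis=1, keepdims=True)  # remove per-band DC
    spec = np.abs(np.fft.rfft(env, axis=1))              # (n_bands, n_frames//2+1)
    return spec[:, :n_mod].flatten()                     # keep lowest modulation bins

rng = np.random.default_rng(0)
log_mel = rng.standard_normal((40, 100))   # 40 mel bands, 100 frames
msv = modulation_spectral_vector(log_mel)
print(msv.shape)  # (320,)
```

The resulting long-term vector would then be concatenated with, or modeled alongside, the short-term MFCCs.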
Chou, Szu-Yu y 周思瑜. "Attention-based Sound Event Recognition using Weakly Labeled Data". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cfexpb.
National Taiwan University
Graduate Institute of Networking and Multimedia
107
Understanding the surrounding environment and ongoing events through acoustic cues, the so-called "sound intelligence," is a critical piece of the Artificial Intelligence (AI) puzzle. Humans are able to recognize not only the sounds of speech utterances or musical pieces, but also animal sounds, natural sounds and common everyday environmental sounds. With sound intelligence, an AI can do much better in applications such as smart surveillance, smart cities, smart cars, and smart factories. As a result, recent years have witnessed great and rapid progress in recognizing various sound events in daily environments. Most current research proposes frameworks based on fully-supervised deep learning techniques using strongly labeled data. However, labeled data for sound event recognition generally lack detailed annotations in time due to the high cost of the labeling process. This dissertation makes the following four contributions to recognizing sound events using weakly labeled data. First, we propose an attention-based model that recognizes transient sound events relying on only weakly labeled data. This task is challenging because weakly labeled data only provide annotations at the clip level, but some sound events appear only for a short period of time in an audio clip. We address this lack of detailed annotations with a novel attentional supervision mechanism that we propose. The resulting model, dubbed the M&mnet, outperforms all other existing models on AudioSet, a collection of two million weakly-labeled audio clips released by Google in 2017. Second, we address the challenge of recognizing sound events with only a few training examples of each class. This problem is critical in that fully-supervised learning algorithms cannot learn well when the data is sparse. We propose a novel attentional similarity module to guide the learning model to pay attention to specific segments of a long audio clip for recognizing sound events.
We show that this module greatly improves the performance of few-shot sound recognition. Third, we propose FrameCNN, a novel weakly-supervised learning framework that improves the performance of convolutional neural networks (CNNs) for acoustic event detection by attending to details of each sound at various temporal levels. In the large-scale weakly supervised sound event detection task for smart cars, we obtained an F-score of 53.8% for sound event audio tagging, compared to the baseline of 19.8%, and an F-score of 32.8% for sound event detection, compared to the baseline of 11.4%. Lastly, we attempt to build a noise-robust sound event detection model for mobile or embedded applications. We want the model to be applicable in a real-world environment, with low memory usage and limited detection latency. By combining several state-of-the-art techniques in building deep learning models, we are able to implement a baby cry detector on the Raspberry Pi that can run in real time. We find that our model can effectively detect baby cries in various noisy conditions, whereas the baby cry detector available on the flagship smartphone of Samsung (as of late 2018) cannot.
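The attentional supervision mechanism of M&mnet is more involved, but the underlying attention-weighted pooling from frame level to clip level can be sketched (a simplified numpy illustration, not the author's model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pooling(frame_scores, frame_logits):
    # frame_scores: (T, C) per-frame class probabilities.
    # frame_logits: (T, C) per-frame attention logits.
    # The clip-level prediction is an attention-weighted average over
    # frames, so short transient events are not drowned out the way
    # they would be under plain mean pooling.
    w = softmax(frame_logits, axis=0)        # attention weights over time
    return (w * frame_scores).sum(axis=0)    # (C,) clip-level scores

T, C = 50, 10
rng = np.random.default_rng(1)
clip = attention_pooling(rng.random((T, C)), rng.standard_normal((T, C)))
print(clip.shape)  # (10,)
```

With only clip-level labels available, the loss is applied to the pooled output, and the attention weights are learned as a by-product.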
Liu, Chen-Hung y 劉振宏. "A Deep Neural Network for Sound Event Recognition Implemented in Microcontroller". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3zf6sx.
National Central University
Department of Computer Science and Information Engineering (in-service master's program)
107
Typical deep neural networks require considerable memory and high-speed floating-point arithmetic; hence, it is difficult to apply them to microcontroller-embedded platforms with limited hardware resources. Deep neural networks can be successfully applied to recognizing sound events. To facilitate the implementation of deep sound event recognition on microcontroller platforms, this study proposes a quantization strategy to compress deep neural networks and optimize recognition performance and hardware resource needs. This study adopted the depthwise separable convolutional neural network (DS-CNN) structure to establish the neural network model for sound event recognition. Mel-frequency cepstral coefficients (MFCCs) extracted from the sound were used as features to train the recognition models. Through the quantization process, the quantized weight parameters were loaded onto an ARM Cortex-M7 microcontroller for verification. The neural network model trained on a personal computer platform reached a recognition rate of 82%. After the model was quantized and transferred to the microcontroller unit, the recognition rate dropped to 60%, with the recognition speed remaining at 0.2 seconds. The result verified that the proposed method enables a deep neural network model trained on a personal computer to be transferred to microcontroller units while maintaining acceptable recognition performance and speed. The results can extend deep learning artificial intelligence technologies to numerous applications with low hardware-resource requirements.
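The study's particular quantization strategy is not reproduced here; as a hedged sketch, symmetric post-training int8 quantization, a common starting point for Cortex-M deployment, looks like this (names illustrative):

```python
import numpy as np

def quantize_int8(w):
    # Symmetric linear quantization of a weight tensor to int8.
    # The scale maps the largest magnitude onto 127; dequantization
    # multiplies back, so q * scale approximates w.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(q.astype(np.float32) * scale - w).max()
print(q.dtype, err < scale)  # int8 True
```

Storing int8 weights cuts memory by 4x versus float32 and allows integer arithmetic on the MCU, at the cost of the quantization error that explains the accuracy drop reported above.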
Kogeler, Konstantin Friedrich. "Business adaptive strategies in crises : the impact of the Corona Crisis on Crystal Sound an Event Technology Provider". Master's thesis, 2021. http://hdl.handle.net/10400.14/35171.
The COVID-19 pandemic has had a significant impact on the events sector. Large events have been completely banned in Germany since March 2020. Crystal Sound, an event technology provider, had to adjust its operations as well as its strategy to the changed external environment. To compensate for the drop in sales, Crystal Sound cut costs. In addition, Crystal Sound launched a legal entity with competitors to organize events that could be attended from cars. This vertical integration changed Crystal Sound from a pure service provider into an integrated organizer of sophisticated events. This strategic alliance achieved a regional monopoly, allowing higher prices and the generation of some profits. Furthermore, Crystal Sound shifted its focus to digital solutions such as event streaming. However, these adjustments were not enough to break even. Crystal Sound therefore decided to sell some of its non-current assets in order to recognize previously unrecorded assets and balance the balance sheet. In 2021, Crystal Sound will again generate a loss. Management must decide whether to offset the loss by injecting capital or by raising additional loans. This dissertation shows how a company is able to respond quickly to significant changes in the external environment. Adjustments to the product portfolio, vertical integration, collusion with competitors, the disclosure of hidden balance-sheet reserves and massive cost cutting allowed Crystal Sound to generate a profit in 2020, despite the pandemic and the massive reduction in revenue from traditional clients.
Hou, Shih Yen y 侯詩彥. "Sound Event Detection Using Different Feature Extraction and Multi-label Classification Methods". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/t4tvf9.
National Tsing Hua University
Department of Electrical Engineering
105
Multiple events can occur simultaneously in a scene. The human ability to detect these events and analyse such scenes by listening is called auditory scene analysis, and the study of giving computers this ability is called computational auditory scene analysis. Sound event detection is a topic within computational auditory scene analysis that focuses on converting an acoustic signal into concrete descriptions of its corresponding sound events. This technology can be used in many applications, such as home security, healthcare, and so on. Using methods developed in pattern recognition, an acoustic signal can first be turned into feature vectors; learning methods can then be applied to train models with these feature vectors and their corresponding event labels. Since the data used here were recorded in environments with multiple sound sources, polyphonic sound event detection is required so that the system can detect multiple events at the same time. Compared to monophonic sound event detection, polyphonic sound event detection is more complicated and can be viewed as multi-label classification. This research used the TUT Sound Events 2016 database, which was published by Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016). The baseline system of the database used Mel-frequency cepstral coefficients for feature extraction and Gaussian mixture models for multi-label classification. This thesis tries to improve the performance by introducing another feature extraction method based on a human auditory model and multi-label classification methods based on deep neural networks. After trying different combinations of feature extraction and multi-label classification methods, the proposed method reduces the error rate by 0.04 and increases the F-score by 6.6% compared to the baseline.
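Framing polyphonic detection as multi-label classification means one independent binary decision per class per frame; a minimal sketch of that decision step (illustrative, not the thesis's system):

```python
import numpy as np

def multilabel_decisions(frame_probs, threshold=0.5):
    # frame_probs: (T, C) per-frame sigmoid outputs of a classifier.
    # Polyphonic SED makes an independent binary decision per class and
    # per frame, so several events can be active at the same time.
    return frame_probs >= threshold

probs = np.array([[0.9, 0.2, 0.7],
                  [0.4, 0.8, 0.6]])
active = multilabel_decisions(probs)
print(active.astype(int))
# [[1 0 1]
#  [0 1 1]]
```

Note that frame 1 has two active classes at once, which a single-label (argmax) classifier could never produce.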
Huang, Jong-Yi y 黃仲逸. "Artificial intelligence (AI) sound event classification based on deep neural networks (DNNs)". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/asqgqh.
Wang, Chun-Hao y 王君豪. "Sound Event Detection Based on Partitioned Autoencoder and Convolutional Recurrent Neural Network". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/7rna2b.
National Tsing Hua University
Department of Electrical Engineering
107
In this thesis, a noise reduction process and a sound event detection (SED) system are applied to the DCASE2017 TUT Sound Events 2017 dataset [1], which contains six sound events in a total of 32 audio recordings (24 in the development set and 8 in the evaluation set). It is a polyphonic task, and the trained SED model has to detect the sound events with their onset and offset times. The purpose of the noise reduction is to observe whether it helps the training process of the sound event detection task. In this thesis, a partitioned autoencoder [2] is adopted for noise reduction. For the sound event detection part, a convolutional recurrent neural network (CRNN) [3][4], which won first prize in the "sound event detection in real life" task of DCASE2017, is adopted. The original log mel-band energies, the denoised log mel-band energies, and the augmented log mel-band energies combining both of the above are the input features of the CRNN. The training results reveal that the SED models trained with the denoised features perform better on some sound events, showing lower medians or better distributions of the testing error rates. Furthermore, the final results reveal that the models trained with the denoised features achieve a best testing error rate of 0.622 on the development set and 0.744 on the evaluation set. The testing error rate could be improved further by choosing the best model of each class across the 3 kinds of features.
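The error rates quoted follow the DCASE segment-based definition (substitutions, deletions and insertions counted per segment, normalized by the number of reference events); a compact numpy sketch of that metric (illustrative):

```python
import numpy as np

def segment_error_rate(ref, est):
    # ref, est: (n_segments, n_classes) binary activity matrices.
    # Segment-based error rate = (S + D + I) / N, where in each segment a
    # missed event paired with a false alarm counts as one substitution.
    S = D = I = 0
    for r, e in zip(ref, est):
        fn = int(np.logical_and(r == 1, e == 0).sum())  # missed events
        fp = int(np.logical_and(r == 0, e == 1).sum())  # extra events
        S += min(fn, fp)        # miss paired with false alarm
        D += max(0, fn - fp)    # remaining misses (deletions)
        I += max(0, fp - fn)    # remaining false alarms (insertions)
    N = int(ref.sum())          # total number of reference events
    return (S + D + I) / N

ref = np.array([[1, 0, 1],
                [0, 1, 0]])
est = np.array([[1, 1, 0],
                [0, 1, 0]])
print(segment_error_rate(ref, est))  # one substitution over 3 events -> 1/3
```

An error rate above 1.0 (as in many SED baselines) simply means the system makes more errors than there are reference events.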
Cervantes, Gamboa Laura. "Sounds like music: ritual speech events among the Bribri Indians of Costa Rica". 2003. http://wwwlib.umi.com/cr/utexas/fullcit?p3110754.
Engelbrecht, B. J. "Sound art in Johannesburg: a critical review 2005-2009". Thesis, 2010. http://hdl.handle.net/10539/8228.
Texto completoLiao, Wei-Chung y 廖唯鈞. "Joint Kernel Dictionary Learning via Collaborative Representation and Its Application to Sound Event Classification". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/4srjsy.
National Central University
Department of Computer Science and Information Engineering
102
Environmental sound classification is becoming more and more relevant in daily life, in areas such as security surveillance, environment monitoring, and health care. To accurately classify sounds from different events, the traditional SVM and GMM, as well as the popular sparse representation-based classifier (SRC), can obtain good classification results. In this paper, we present a joint kernel dictionary learning (JKDL) method based on sparse representation. Using the ℓ2-norm instead of the ℓ1-norm preserves the performance while massively reducing computation time. Adding a classification error term to the objective function to train a simple linear classifier enhances the relationship between the classifier and the dictionary. The kernel method plays an important role, efficiently strengthening the reconstructive and discriminative ability. The dictionary update step is performed iteratively by taking partial derivatives of the objective function in feature space. The online dictionary learning approach handles dynamic training data more efficiently than the batch approach. Experiments on a 17-class sound database indicate that the proposed method can achieve a high accuracy of about 80.56%. Also, the average execution time for a test sample is notably faster than SRC and CRC.
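The ℓ2-coded (collaborative) representation that JKDL builds on has a closed-form coding step, which is why it is so much faster than ℓ1-based SRC; a toy numpy sketch of the plain CRC classifier, not the proposed JKDL itself:

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-2):
    # Collaborative representation classifier (CRC): code y over the whole
    # dictionary D with an l2 penalty (ridge regression, closed form),
    # then assign the class whose atoms give the smallest residual.
    n = D.shape[1]
    x = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)  # l2 coding
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)

# Toy dictionary: two atoms per class, each class along its own direction.
D = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
labels = [0, 0, 1, 1]
print(crc_classify(D, labels, np.array([1.0, 0.05])))  # 0
```

Replacing the ℓ1 solver of SRC with this one linear solve is what removes the iterative optimization from the test phase.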
Cornell, Sonia [Verfasser]. "Mapping speech sound to mental representation : neurophonological evidence from event-related brain potentials / vorgelegt von Sonia Cornell". 2010. http://d-nb.info/1010932861/34.
Texto completoMonteiro, Bruno Daniel Gonçalves. "Eventos culturais como ferramenta de gestão para o turismo em Portugal : estudo de caso NOS Primavera Sound 2013 e 2014". Master's thesis, 2014. http://hdl.handle.net/11067/1977.
Master's dissertation carried out within the scope of the Master's in Management.
A celebração de eventos culturais pode desempenhar vários papéis importantes, que vão desde a atração de turistas, à animação que se prolonga no tempo, à dinamização de outras atividades, até a elemento de suporte à criação de uma imagem do destino turístico. Deste modo, a multiplicação de eventos culturais, nomeadamente festivais, promove um impacto no turismo nas empresas dos diferentes serviços da região destino, o que tem consequentemente reflexos ao nível do emprego e no consumo. Verificando a elevada concorrência entre o turismo de cidades e os eventos culturais, tornou-se importante determinar o impacto direto da gestão do evento, bem como na cidade onde este se realiza (como por exemplo, o impacto no alojamento, nas deslocações, nas compras/presentes, cultura/lazer e no recinto), o impacto induzido (considerando os gastos nas refeições) e os principais fatores de satisfação juntamente com a possibilidade de retorno. Nesta dissertação avaliam-se os principais determinantes dos dois tipos diferentes de comportamento das despesas dos turistas num festival de música: as despesas por dia na cidade do evento e as despesas no próprio recinto durante o evento. Ao nível da satisfação dos visitantes, além de se verificar que os visitantes mais satisfeitos gastaram mais no recinto do evento, verificou-se também que os principais fatores foram agrupados nos palcos de música, na duração do festival e horário dos concertos, nas condições do recinto, nos restaurantes e sanitários e no mercado Primavera. Ou seja, no caso em estudo, estes fatores revelaram-se decisivos para a escolha do evento. Provamos que os turistas que se deslocaram propositadamente para o evento e os estrangeiros gastam mais por dia no recinto, enquanto relativamente aos gastos na cidade, os estrangeiros gastam mais do que os cidadãos nacionais. Estes resultados indicam que a residência dos turistas tem um impacto positivo nos gastos.
Ficou ainda demonstrado que as despesas na cidade durante o festival são determinadas pela idade, estado civil, escolaridade, rendimento e nacionalidade do turista, enquanto as despesas no próprio recinto do evento são explicadas pelo género, idade, rendimento, nacionalidade, ano de realização, vinda propositada para o evento e tipo de bilhete adquirido. Este tipo de informações são uma importante ferramenta de gestão para o turismo local e para a decisão do desempenho e planeamento do evento. O turismo local pode não só concentrar em atrair turistas, mas também em fazer o evento se tornar memorável em cooperação com a empresa do evento.
Abstract: The celebration of cultural events can play numerous important roles, from attracting tourists, to sustaining attractions that are repeated, mostly on an annual basis, to amplifying several other side activities, to supporting the creation of a tourist destination's widespread image. In this way, the spread of such cultural events, like music festivals, and the legions of tourists they attract, have a considerable impact on the cities and regions where these festivals are held and on businesses from different sectors, with visible effects on local jobs and the economy. Given the high competition between cities' tourism offerings and their cultural events, it has become important to determine the direct impact of the event's management, as well as its impact on the host city (for example, the impact on accommodation, travel, shopping, culture and leisure, and the festival grounds themselves), the induced impact (considering what is spent on meals), and the main factors of satisfaction, together with the likelihood of visitors returning. In this dissertation, we evaluate the main determinants of two different types of tourist spending behaviour at a music festival: spending per day in the host city, and spending on the festival grounds during the event. Regarding visitor satisfaction, beyond finding that the most satisfied visitors spent more on the festival grounds, we also found that the main factors were grouped around the music stages, the duration of the festival and concert schedules, the conditions of the venue, the restaurants and sanitary facilities, and the "Primavera" market.
In the case under study, these factors proved decisive for the choice of the event. We show that tourists who travelled specifically for the event, and foreign visitors, spend more per day on the festival grounds, while at the city level, foreigners spend more than national citizens. These results indicate that tourists' place of residence has a positive impact on spending. Furthermore, it was shown that spending in the city during the festival is determined mostly by the tourist's age, marital status, education, income and nationality, while spending on the festival grounds is explained by gender, age, income, nationality, the year of the event, purposeful attendance of the event, and the type of ticket purchased. In conclusion, this type of information is an important management tool for local tourism, as well as for decisions regarding the performance and planning of events. Local tourism can focus not only on attracting tourists, but also on making the event memorable, in cooperation with the company that organizes it.