Selected scientific literature on the topic "Sound events"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Sound events".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is included in the metadata.

Journal articles on the topic "Sound events"

1

Elizalde, Benjamin. "Categorization of sound events for automatic sound event classification". Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A364. http://dx.doi.org/10.1121/10.0019175.

Abstract:
To train Machine Listening models that classify sounds we need to define recognizable names, attributes, relations, and interactions that produce acoustic phenomena. In this talk, we will review examples of different types of categorizations and how they drive Machine Listening. Categorization of sounds guides the annotation processes of audio datasets and the design of models, but at the same time can limit performance and quality of expression of acoustic phenomena. Examples of categories can be simply named after the sound source or inspired by Cognition (e.g., taxonomies), Psychoacoustics (e.g., adjectives), and Psychomechanics (e.g., materials). These types of classes are often defined by one or two words. Moreover, to acoustically identify sound events we may require instead a sentence providing a description. For example, “malfunctioning escalator” versus “a repeated low-frequency scraping and rubber band snapping.” In any case, we still have limited lexicalized terms in language to describe acoustic phenomena. Language determines a listener's perception and expressiveness of a perceived phenomenon. For example, the sound of water is one of the most distinguishable sounds, but how to describe it without using the word water? Despite limitations in language to describe acoustic phenomena, we should still be able to automatically recognize acoustic content in an audio signal at least as well as humans do.
2

Aura, Karine, Guillaume Lemaitre, and Patrick Susini. "Verbal imitations of sound events enable recognition of the imitated sound events". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3414. http://dx.doi.org/10.1121/1.2934144.

3

Nishida, Tsuruyo, Kazuhiko Kakehi, and Takamasa Kyutoku. "Motion perception of the target sound event under the discriminated two sound events". Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3080. http://dx.doi.org/10.1121/1.4787419.

4

Nakayama, Tsumugi, Taisuke Naito, Shunsuke Kouda, and Takatoshi Yokota. "Determining disturbance sounds in aircraft sound events using a CNN-based method". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 7 (November 30, 2023): 1320–28. http://dx.doi.org/10.3397/in_2023_0196.

Abstract:
In this paper, we propose a method to determine whether an aircraft sound event contains disturbance sounds through a combination of sound source recognition models developed using a convolutional neural network. First, considering road traffic noise as a disturbance sound, distinct recognition models for aircraft and road traffic noise were developed. Second, simulated signals in which aircraft noise and road traffic noise were superimposed with different signal-to-noise ratios were input into the two recognition models. We investigated the variations in the output of each recognition model with respect to the signal-to-noise ratio. Subsequently, we obtained the output of each model for the case in which disturbance sounds affected an aircraft sound event. Third, we measured the aircraft noise around an airport, and the aircraft sound events on which road traffic noise was superimposed were input into the recognition models to examine the proposed method. It was confirmed that the proposed method helps detect the disturbance noise when intermittent noise from passing vehicles is included in an aircraft sound event.
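The superimposition step in the abstract above (mixing aircraft noise with road-traffic noise at controlled signal-to-noise ratios) can be sketched as follows. This is a minimal illustration in plain NumPy; the function name and interface are assumptions for the sketch, not the authors' code.

```python
import numpy as np

def mix_at_snr(target: np.ndarray, disturbance: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `disturbance` so the mix has the requested target-to-disturbance SNR."""
    p_target = np.mean(target ** 2)
    p_dist = np.mean(disturbance ** 2)
    # SNR (dB) = 10 * log10(p_target / (scale**2 * p_dist))  ->  solve for scale
    scale = np.sqrt(p_target / (p_dist * 10 ** (snr_db / 10.0)))
    return target + scale * disturbance
```

Each simulated mix would then be fed to both recognition models to chart how their outputs vary with the signal-to-noise ratio.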
5

Hara, Sunao, and Masanobu Abe. "Predictions for sound events and soundscape impressions from environmental sound using deep neural networks". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 3 (November 30, 2023): 5239–50. http://dx.doi.org/10.3397/in_2023_0739.

Abstract:
In this study, we investigate methods for quantifying soundscape impressions, namely pleasantness and eventfulness, from environmental sounds. From the point of view of Machine Learning (ML) research, acoustic scene classification (ASC) and sound event classification (SEC) tasks are intensively studied, and their results are helpful for reasoning about soundscape impressions. While most ASC and SEC systems use only sound for classification, human beings perceive a soundscape impression not from sound alone but from sound together with a landscape. Therefore, automatic quantification of soundscape impressions should use additional information, such as the landscape, alongside sound. First, we tackle the prediction of the two soundscape impressions using sound data collected by a cloud-sensing method. For this purpose, we have proposed a prediction method that uses environmental sounds and aerial photographs. Second, we tackle environmental sound classification using a feature extractor trained with a Variational Autoencoder (VAE). The VAE feature extractor can be trained in an unsupervised manner, making it a promising approach for growing datasets such as those produced by our cloud-sensing data collection scheme. Finally, we discuss the integration of these methods.
6

Maruyama, Hironori, Kosuke Okada, and Isamu Motoyoshi. "A two-stage spectral model for sound texture perception: Synthesis and psychophysics". i-Perception 14, no. 1 (January 2023): 204166952311573. http://dx.doi.org/10.1177/20416695231157349.

Abstract:
The natural environment is filled with a variety of auditory events such as wind blowing, water flowing, and fire crackling. It has been suggested that the perception of such textural sounds is based on the statistics of the natural auditory events. Inspired by a recent spectral model for visual texture perception, we propose a model that describes the perceived sound texture using only the linear spectrum and the energy spectrum. We tested the validity of the model by using synthetic noise sounds that preserve the two-stage amplitude spectra of the original sound. A psychophysical experiment showed that our synthetic noises were perceived as similar to the original sounds for 120 real-world auditory events. The performance was comparable with the synthetic sounds produced by McDermott and Simoncelli's model, which considers various classes of auditory statistics. The results support the notion that the perception of natural sound textures is predictable from the two-stage spectral signals.
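The two-stage analysis described above can be sketched numerically: a first-stage (linear) amplitude spectrum from short-time Fourier magnitudes, and a second-stage (energy) spectrum taken over the temporal envelopes of those magnitudes. Frame length, hop size, and windowing below are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def two_stage_spectra(x: np.ndarray, frame: int = 256, hop: int = 128):
    """Return a (stage-1 linear spectrum, stage-2 energy/modulation spectrum) pair."""
    # Stage 1: short-time Fourier magnitudes, averaged over time
    frames = [x[i:i + frame] * np.hanning(frame)
              for i in range(0, len(x) - frame + 1, hop)]
    mag = np.abs(np.fft.rfft(frames, axis=1))      # shape: (n_frames, n_bins)
    linear_spectrum = mag.mean(axis=0)
    # Stage 2: spectrum of each band's temporal envelope (mean removed),
    # averaged over frequency bands
    env = mag - mag.mean(axis=0, keepdims=True)
    mod = np.abs(np.fft.rfft(env, axis=0))         # modulation spectrum per band
    energy_spectrum = mod.mean(axis=1)
    return linear_spectrum, energy_spectrum
```

A synthesis procedure in the spirit of the paper would then impose these two spectra on noise; the sketch covers only the analysis side.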
7

Domazetovska, Simona, Viktor Gavriloski, Maja Anachkova, and Zlatko Petreski. "URBAN SOUND RECOGNITION USING DIFFERENT FEATURE EXTRACTION TECHNIQUES". Facta Universitatis, Series: Automatic Control and Robotics 20, no. 3 (December 18, 2021): 155. http://dx.doi.org/10.22190/fuacr211015012d.

Abstract:
The application of advanced methods for noise analysis in urban areas through the development of systems for the classification of sound events significantly improves and simplifies the process of noise assessment. The main purpose of sound recognition and classification systems is to develop algorithms that can detect and classify sound events occurring in the chosen environment, giving an appropriate response to their users. In this research, a supervised system for the recognition and classification of sound events has been established through the development of feature extraction techniques based on digital signal processing of the audio signals, which are further used as input parameters in machine learning algorithms for classifying the sound events. Various audio parameters were extracted and processed in order to choose the set of parameters that yields the best recognition of the class to which the sounds belong. The resulting acoustic event detection and classification (AED/C) system could be further implemented in sound sensors for automatic control of environmental noise. Since the target noise source is explicitly identified, source classification reduces the amount of human validation required for sound level measurements.
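As a rough illustration of the kind of feature extraction the abstract describes, the sketch below computes three common frame-level descriptors (RMS energy, zero-crossing rate, spectral centroid) that could feed a downstream classifier. The specific feature set and parameters are assumptions for illustration, not the ones used in the paper.

```python
import numpy as np

def frame_features(x: np.ndarray, sr: int, frame: int = 1024, hop: int = 512) -> np.ndarray:
    """Return per-frame [RMS energy, zero-crossing rate, spectral centroid (Hz)]."""
    feats = []
    for i in range(0, len(x) - frame + 1, hop):
        w = x[i:i + frame]
        rms = np.sqrt(np.mean(w ** 2))                       # frame energy
        zcr = np.mean(np.abs(np.diff(np.sign(w)))) / 2.0     # sign changes per sample
        mag = np.abs(np.fft.rfft(w))
        freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
        feats.append([rms, zcr, centroid])
    return np.array(feats)
```

The resulting feature matrix (one row per frame) would then be passed to a classifier such as an SVM or k-NN, as is typical in this line of work.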
8

Martinek, Jozef, P. Klco, M. Vrabec, T. Zatko, M. Tatar, and M. Javorka. "Cough Sound Analysis". Acta Medica Martiniana 13, Supplement-1 (March 1, 2013): 15–20. http://dx.doi.org/10.2478/acm-2013-0002.

Abstract:
Cough is the most common symptom of many respiratory diseases. Currently, no standardized methods exist for objective monitoring of cough that are commercially available and clinically accepted. Our aim is to develop an algorithm capable, based on the analysis of sound events, of objective, ambulatory, and automated monitoring of cough frequency. Because speech is the most common sound in 24-hour recordings, the first step in developing this algorithm is to distinguish between cough sounds and speech. For this purpose we obtained recordings from 20 healthy volunteers. All subjects read a text continuously from a book, with voluntary coughs at indicated instants. The obtained sounds were analyzed using linear and non-linear analysis in the time and frequency domains. We used a classification tree to distinguish between cough sounds and speech. The median sensitivity was 100% and the median specificity was 95%. In the next step we enlarged the set of analyzed sound events. Apart from cough sounds and speech, the analyzed sounds included induced sneezing, voluntary throat and nasopharynx clearing, voluntary forced ventilation, laughing, voluntary snoring, eructation, nasal blowing, and loud swallowing. The sound events were obtained from 32 healthy volunteers, and for their analysis and classification we used the same algorithm as in the previous study. The median sensitivity was 86% and the median specificity was 91%. In the final step, we tested the effectiveness of the developed algorithm in distinguishing cough from non-cough sounds produced during normal daily activities in patients suffering from respiratory diseases. Our study group consisted of 9 patients with respiratory diseases, with a recording time of 5 hours. The number of coughs counted by our algorithm was compared with manual cough counts performed by two skilled co-workers.
We found that the cough counts from our algorithm and from manual counting differed substantially. For that reason we used other methods to distinguish cough sounds from non-cough sounds. We compared the classification tree and artificial neural networks. Median sensitivity increased from 28% (classification tree) to 82% (artificial neural network), while median specificity did not change significantly. We also enlarged our set of characteristic parameters with Mel frequency cepstral coefficients, the weighted Euclidean distance, and the first and second time derivatives. Modification of the classification algorithm is likewise of interest to us.
9

Heck, Jonas, Josep Llorca-Bofí, Christian Dreier, and Michael Vorlaender. "Validation of auralized impulse responses considering masking, loudness and background noise". Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A178. http://dx.doi.org/10.1121/10.0027231.

Abstract:
Outdoor virtual scenarios show great potential to facilitate reproducible sensory evaluation experiments in the laboratory. Auralizations allow for the integration of simulated or measured sound sources and transfer paths between the sources and receivers. Nonetheless, pure simulations can lack perfect plausibility. This contribution investigates the augmentation of auralized outdoor scenes based on simulated impulse responses (IRs) with ambient or background sounds. For this purpose, foreground events such as car pass-bys are created by simplified simulation of impulse responses. Due to their large number of events, however, ambient sounds are typically not simulated. Instead, spherical microphone array recordings can be used to capture the background sound. Using synthesized car sounds, we examine how much augmentation with background sound improves the auditory plausibility of simulated impulse responses in comparison with the equivalent measured ones.
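The augmentation described above (a simulated impulse response applied to the foreground event, plus a recorded ambience bed) can be sketched as a convolution followed by a mix. Function names, the single-channel simplification, and the gain parameter are illustrative assumptions.

```python
import numpy as np

def auralize(dry_source: np.ndarray, impulse_response: np.ndarray,
             ambience: np.ndarray, ambience_gain: float = 1.0) -> np.ndarray:
    """Convolve a dry source with a (simulated) IR and add a recorded ambience bed."""
    # Full convolution, trimmed to the source length for a simple fixed-size mix
    wet = np.convolve(dry_source, impulse_response)[: len(dry_source)]
    return wet + ambience_gain * ambience[: len(wet)]
```

In the study's setting, the IR would come from the simplified simulation and the ambience from spherical microphone array recordings; here both are just arrays.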
10

Kim, Yunbin, Jaewon Sa, Yongwha Chung, Daihee Park, and Sungju Lee. "Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data". Sensors 18, no. 11 (November 18, 2018): 4019. http://dx.doi.org/10.3390/s18114019.

Abstract:
The use of IoT (Internet of Things) technology for the management of pet dogs left alone at home is increasing. This includes tasks such as automatic feeding, operation of play equipment, and location detection. Classification of the vocalizations of pet dogs using information from a sound sensor is an important method to analyze the behavior or emotions of dogs that are left alone. These sounds should be acquired by attaching the IoT sound sensor to the dog, and then classifying the sound events (e.g., barking, growling, howling, and whining). However, sound sensors tend to transmit large amounts of data and consume considerable amounts of power, which presents issues in the case of resource-constrained IoT sensor devices. In this paper, we propose a way to classify pet dog sound events and improve resource efficiency without significant degradation of accuracy. To achieve this, we only acquire the intensity data of sounds by using a relatively resource-efficient noise sensor. This presents issues as well, since it is difficult to achieve sufficient classification accuracy using only intensity data due to the loss of information from the sound events. To address this problem and avoid significant degradation of classification accuracy, we apply long short-term memory-fully convolutional network (LSTM-FCN), which is a deep learning method, to analyze time-series data, and exploit bicubic interpolation. Based on experimental results, the proposed method based on noise sensors (i.e., Shapelet and LSTM-FCN for time-series) was found to improve energy efficiency by 10 times without significant degradation of accuracy compared to typical methods based on sound sensors (i.e., mel-frequency cepstrum coefficient (MFCC), spectrogram, and mel-spectrum for feature extraction, and support vector machine (SVM) and k-nearest neighbor (K-NN) for classification).
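The interpolation step mentioned above (upsampling the low-rate intensity series before feeding it to the LSTM-FCN) can be sketched as below, using NumPy's linear interpolation as a simple stand-in for the bicubic interpolation described in the paper; the function name is an assumption.

```python
import numpy as np

def upsample_intensity(series: np.ndarray, factor: int) -> np.ndarray:
    """Interpolate a 1-D sound-intensity series to `factor` times its length."""
    n = len(series)
    old_t = np.arange(n)
    new_t = np.linspace(0, n - 1, n * factor)
    return np.interp(new_t, old_t, series)  # linear stand-in for bicubic
```

The upsampled series would then be fed to the time-series classifier; the point of the paper is that such cheap intensity data, suitably interpolated, can approach the accuracy of full audio features at a fraction of the energy cost.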

Theses and dissertations on the topic "Sound events"

1

Hay, Timothy Deane. "MAX-DOAS measurements of bromine explosion events in McMurdo Sound, Antarctica". Thesis, University of Canterbury. Physics and Astronomy, 2010. http://hdl.handle.net/10092/5394.

Abstract:
Reactive halogen species (RHS) are responsible for ozone depletion and oxidation of gaseous elemental mercury and dimethyl sulphide in the polar boundary layer, but the sources and mechanisms controlling their catalytic reaction cycles are still not completely understood. To further investigate these processes, ground-based Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) observations of boundary layer BrO and IO were made from a portable instrument platform in McMurdo Sound during the Antarctic spring of 2006 and 2007. Measurements of surface ozone, temperature, pressure, humidity, and wind speed and direction were also made, along with fourteen tethersonde soundings and the collection of snow samples for mercury analysis. A spherical multiple scattering Monte Carlo radiative transfer model (RTM) was developed for the simulation of box-air-mass-factors (box-AMFs), which are used to determine the weighting functions and forward model differential slant column densities (DSCDs) required for optimal estimation. The RTM employed the backward adjoint simulation technique for the fast calculation of box-AMFs for specific solar zenith angles (SZA) and MAX-DOAS measurement geometries. Rayleigh and Henyey-Greenstein scattering, ground topography and reflection, refraction, and molecular absorption by multiple species were included. Radiance and box-AMF simulations for MAX-DOAS measurements were compared with nine other RTMs and showed good agreement. A maximum a posteriori (MAP) optimal estimation algorithm was developed to retrieve trace gas concentration profiles from the DSCDs derived from the DOAS analysis of the measured absorption spectra. The retrieval algorithm was validated by performing an inversion of artificial DSCDs, simulated from known NO₂ profiles. Profiles with a maximum concentration near the ground were generally well reproduced, but the retrieval of elevated layers was less accurate.
Retrieved partial vertical column densities (VCDs) were similar to the known values, and investigation of the averaging kernels indicated that these were the most reliable retrieval product. NO₂ profiles were also retrieved from measurements made at an NO₂ measurement and profiling intercomparison campaign in Cabauw, Netherlands in July 2009. Boundary layer BrO was observed on several days throughout both measurement periods in McMurdo Sound, with a maximum retrieved surface mixing ratio of 14.4±0.3 ppt. The median partial VCDs up to 3 km were 9.7±0.07 × 10¹² molec cm⁻² in 2007, with a maximum of 2.3±0.07 × 10¹³ molec cm⁻², and 7.4±0.06 × 10¹² molec cm⁻² in 2006, with a maximum of 1.05±0.07 × 10¹³ molec cm⁻². The median mixing ratio of 7.5±0.5 ppt for 2007 was significantly higher than the median of 5.2±0.5 ppt observed in 2006, which may be related to the more extensive first-year sea ice in 2007. These values are consistent with, though lower than, estimated boundary layer BrO concentrations at other polar coastal sites. Four out of five observed partial ozone depletion events (ODEs) occurred during strong winds and blowing snow, while BrO was present in the boundary layer in both stormy and calm conditions, consistent with the activation of RHS in these two weather extremes. Air mass back trajectories, modelled using the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model, indicated that the events were locally produced rather than transported from other sea ice zones. Boundary layer IO mixing ratios of 0.5–2.5±0.2 ppt were observed on several days. These values are low compared to measurements at Halley and Neumayer Stations, as well as mid-latitudes. Significantly higher total mercury concentrations observed in 2007 may be related to the higher boundary layer BrO concentrations, but further measurements are required to verify this.
2

Giannoulis, Dimitrios. "Recognition of sound sources and acoustic events in music and environmental audio". Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/9130.

Abstract:
Hearing, together with other senses, enables us to perceive the surrounding world through the sensory data we constantly receive. The information carried in this data allows us to classify the environment and the objects in it. In modern society, the loud and noisy acoustic environment that surrounds us makes the task of "listening" quite challenging, probably more so than ever before. A lot of information has to be filtered to separate the sounds we want to hear from unwanted noise and interference. And yet humans, like other living organisms, have a remarkable ability to identify and track the sounds they want, irrespective of their number, the degree of overlap, and the interference that surrounds them. To this day, the task of building systems that try to "listen" to the surrounding environment and identify sounds in it the way humans do remains a challenging one, and even though we have made steps towards reaching human performance, we are still a long way from building systems able to identify and track most, if not all, of the different sounds within an acoustic scene. In this thesis, we deal with the tasks of recognising sound sources or acoustic events in two distinct cases of audio: music and more generic environmental sounds. We reformulate the problem and redefine the task associated with each case. Music can be regarded as a multi-source environment in which the different sound sources (musical instruments) activate at different times, and the task of recognising the musical instruments is then a central part of the more generic process of automatic music transcription. The principal question we address is whether we could develop a system able to recognise musical instruments in a multi-instrument scenario where many different instruments are active at the same time, and for that we draw influence from human performance.
The proposed system is based on missing feature theory, and we find that the method is able to retain high performance even under the most adverse listening conditions (i.e., low signal-to-noise ratio). Finally, we propose a technique to fuse this system with one that performs automatic music transcription, in an attempt to inform and improve the overall performance. For a more generic environmental audio scene, things are less clear, and the amount of research conducted in the area is still scarce. The central issue here is to formulate the problem of sound recognition and define the subtasks and associated difficulties. We have set up and run a worldwide challenge and created datasets that are intended to enable researchers to perform better quality research in the field. We have also developed systems that could serve as baseline techniques for future research, and compared existing state-of-the-art algorithms to one another, as well as against human performance, in an effort to highlight the strengths and weaknesses of existing methodologies.
3

Papetti, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools". Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.

Abstract:
The work presented in this thesis ranges over a variety of research topics, spanning from human-computer interaction to physical modeling. What unites such broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena in order to provide human-computer interfaces with sound feedback that is consistent with the user's interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the names of, to cite a few, auditory display, sonification, and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, physical modeling of everyday sounds is taken into account, that is, sounds not belonging to the families of speech and musical sounds.
4

Olvera, Zambrano Mauricio Michel. "Robust sound event detection". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0324.

Abstract:
From industry to general interest applications, computational analysis of sound scenes and events allows us to interpret the continuous flow of everyday sounds. One of the main degradations encountered when moving from lab conditions to the real world is due to the fact that sound scenes are not composed of isolated events but of multiple simultaneous events. Differences between training and test conditions also often arise due to extrinsic factors such as the choice of recording hardware and microphone positions, as well as intrinsic factors of sound events, such as their frequency of occurrence, duration and variability. In this thesis, we investigate problems of practical interest for audio analysis tasks to achieve robustness in real scenarios. Firstly, we explore the separation of ambient sounds in a practical scenario in which multiple short duration sound events with fast varying spectral characteristics (i.e., foreground sounds) occur simultaneously with background stationary sounds. We introduce the foreground-background ambient sound separation task and investigate whether a deep neural network with auxiliary information about the statistics of the background sound can differentiate between rapidly- and slowly-varying spectro-temporal characteristics. Moreover, we explore the use of per-channel energy normalization (PCEN) as a suitable pre-processing step and the ability of the separation model to generalize to unseen sound classes. Results on mixtures of isolated sounds from the DESED and Audioset datasets demonstrate the generalization capability of the proposed separation system, which is mainly due to PCEN. Secondly, we investigate how to improve the robustness of audio analysis systems under mismatched training and test conditions.
We explore two distinct tasks: acoustic scene classification (ASC) with mismatched recording devices and training of sound event detection (SED) systems with synthetic and real data. In the context of ASC, without assuming the availability of recordings captured simultaneously by mismatched training and test recording devices, we assess the impact of moment normalization and matching strategies and their integration with unsupervised adversarial domain adaptation. Our results show the benefits and limitations of these adaptation strategies applied at different stages of the classification pipeline. The best strategy matches source domain performance in the target domain. In the context of SED, we propose a PCEN-based acoustic front-end with learned parameters. Then, we study the joint training of SED with auxiliary classification branches that categorize sounds as foreground or background according to their spectral properties. We also assess the impact of aligning the distributions of synthetic and real data at the frame or segment level based on optimal transport. Finally, we integrate an active learning strategy in the adaptation procedure. Results on the DESED dataset indicate that these methods are beneficial for the SED task and that their combination further improves performance on real sound scenes.
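PCEN, which this thesis uses both as a pre-processing step and as a learnable front-end, has a standard closed form: a smoothed energy M is tracked per frequency channel with a first-order IIR filter, and the energy E is gain-normalized by M and root-compressed. A minimal fixed-parameter sketch follows; the parameter values are common illustrative defaults, not the thesis's learned ones.

```python
import numpy as np

def pcen(E: np.ndarray, s: float = 0.025, alpha: float = 0.98,
         delta: float = 2.0, r: float = 0.5, eps: float = 1e-6) -> np.ndarray:
    """Per-channel energy normalization of a (time, freq) energy spectrogram E."""
    M = np.zeros_like(E)
    M[0] = E[0]
    for t in range(1, len(E)):
        # First-order IIR smoother tracking the slowly varying energy per channel
        M[t] = (1 - s) * M[t - 1] + s * E[t]
    # Adaptive gain control followed by root compression
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r
```

Making s, alpha, delta, and r trainable per channel, as in the thesis's learned front-end, turns this fixed transform into part of the network.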
5

Beeferman, Leah. "JOURNEYS INTO THE UNKNOWN: A SERIES OF SCIENCE ARCHITECTURE TASKS AND EVENTS, SPACE-BOUND EXPLORATIONS AND FAR-TRAVELS, DISCOVERIES AND MISSES (NEAR AND FAR), IMAGINATIVE SPACE-GAZING AND RELATED INVESTIGATIONS, OBSERVATIONS, ORBITS, AND OTHER REPETITIOUS MONITORING TASKS". VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2164.

Abstract:
This thesis expansively and inclusively puts forth the imaginings, research, processes and experiences behind my two thesis exhibitions, "Journeys into the unknown: a series of science architecture tasks and events, space-bound explorations and far-travels, discoveries and misses (near and far), imaginative space-gazing and related investigations, observations, orbits, and other repetitious monitoring tasks" and "Timed travel: asystematic accounts of regular and geometrical timekeeping, orbital flight, repetitive rotations and other journeys into actual time and slow space." It begins with an abstract interpretation of the dial: a tool not limited to scientific measurement but, instead, a gauge of an object’s overall position and general status. Equal parts scientific information, abstracted and fictionalized instruments and facts, and the personal experiences which provided these concrete informational elements with psychological and metaphorical meaning, this document is as much a record of time as it is an elucidation of my artistic practice and methodology.
6

VESPERINI, FABIO. "Deep Learning for Sound Event Detection and Classification". Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263536.

Abstract:
The recent progress in acoustic signal processing and machine learning techniques has enabled the development of innovative technologies for the automatic analysis of sound events. In particular, one of the most popular current approaches to this problem relies on the exploitation of Deep Learning techniques. As further proof, on several occasions neural architectures originally designed for other multimedia domains have been successfully applied to the audio signal. Indeed, although these tasks were long addressed with statistical modelling algorithms such as Gaussian Mixture Models, Hidden Markov Models or Support Vector Machines, the new breakthrough of machine learning for audio processing has led to encouraging results. Hence, this thesis reports an up-to-date state of the art and proposes several reliable DNN-based methods for Sound Event Detection (SED) and Sound Event Classification (SEC), with an overview of the Deep Neural Network (DNN) architectures used for this purpose and of the evaluation procedures and metrics used in this research field. In line with the recent trend, which shows an extensive employment of Convolutional Neural Networks (CNNs) for both SED and SEC tasks, this work also reports rather new approaches based on the Siamese DNN architecture or on the novel Capsule computational units. Most of the reported systems were designed on the occasion of international challenges. This allowed access to public datasets and the possibility of comparing, on a common basis, the systems proposed by the most competitive research teams. The case studies reported in this dissertation refer to applications in a variety of scenarios, ranging from unobtrusive health monitoring and audio-based surveillance to bio-acoustic monitoring and classification of road surface conditions. These tasks face numerous challenges, particularly related to their application in real-life environments.

Among these issues are unbalanced datasets, different acquisition setups, acoustic disturbances (i.e., background noise, reverberation and cross-talk) and polyphony. In particular, since multiple events are very likely to overlap in real-life audio, two algorithms for polyphonic SED are reported in this thesis. A polyphonic SED algorithm can be considered a system able to simultaneously perform detection - determining the onset and offset times of the sound events - and classification - assigning a label to each of the events occurring in the audio stream.
7

Jackson, Asti Joy. "Structure of Sound". Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73778.

Abstract:
This thesis creates a complementary relationship between timber and concrete as primary structural and accent materials. Key elements of this thesis include (1) the development of a wood latticing system, (2) stairs that possess a strong sculptural language, (3) the Lantern, a free-standing lobby/box office clad in wood and glass, and (4) circulation towers that accommodate balcony seating. Studies of these elements went through many iterations, resulting in over one hundred drawings. The progression of these drawings is directed toward the interpretation of building form and the interaction with the site. These concepts are then implemented in the design of a multifaceted music venue located on a hillside in the New River Valley. Minutes from the college town of Blacksburg, Virginia, this event complex caters to an array of musical functions. Spaces include the Lantern, which is a multipurpose lobby/lounge, the main auditorium, and an outdoor amphitheater.
Master of Architecture
8

Fonseca, Eduardo. "Training sound event classifiers using different types of supervision". Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/673067.

Abstract:
The automatic recognition of sound events has gained attention in the past few years, motivated by emerging applications in fields such as healthcare, smart homes, or urban planning. When the work for this thesis started, research on sound event classification was mainly focused on supervised learning using small datasets, often carefully annotated with vocabularies limited to specific domains (e.g., urban or domestic). However, such small datasets do not support training classifiers able to recognize hundreds of sound events occurring in our everyday environment, such as kettle whistles, bird tweets, cars passing by, or different types of alarms. At the same time, large amounts of environmental sound data are hosted in websites such as Freesound or YouTube, which can be convenient for training large-vocabulary classifiers, particularly using data-hungry deep learning approaches. To advance the state-of-the-art in sound event classification, this thesis investigates several strands of dataset creation as well as supervised and unsupervised learning to train large-vocabulary sound event classifiers, using different types of supervision in novel and alternative ways. Specifically, we focus on supervised learning using clean and noisy labels, as well as self-supervised representation learning from unlabeled data. The first part of this thesis focuses on the creation of FSD50K, a large-vocabulary dataset with over 100h of audio manually labeled using 200 classes of sound events. We provide a detailed description of the creation process and a comprehensive characterization of the dataset. In addition, we explore architectural modifications to increase shift invariance in CNNs, improving robustness to time/frequency shifts in input spectrograms. In the second part, we focus on training sound event classifiers using noisy labels. First, we propose a dataset that supports the investigation of real label noise. 
Then, we explore network-agnostic approaches to mitigate the effect of label noise during training, including regularization techniques, noise-robust loss functions, and strategies to reject noisy labeled examples. Further, we develop a teacher-student framework to address the problem of missing labels in sound event datasets. In the third part, we propose algorithms to learn audio representations from unlabeled data. In particular, we develop self-supervised contrastive learning frameworks, where representations are learned by comparing pairs of examples computed via data augmentation and automatic sound separation methods. Finally, we report on the organization of two DCASE Challenge Tasks on automatic audio tagging with noisy labels. By providing data resources as well as state-of-the-art approaches and audio representations, this thesis contributes to the advancement of open sound event research, and to the transition from traditional supervised learning using clean labels to other learning strategies less dependent on costly annotation efforts.
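Among the noise-robust loss functions this thesis mentions, one widely used example (shown here as an illustration, not necessarily the thesis's exact choice) is the generalized cross-entropy loss, which interpolates between cross-entropy and a bounded MAE-like loss:

```python
import numpy as np

def lq_loss(p_true, q=0.7):
    """Generalized cross-entropy L_q = (1 - p^q) / q, where p_true is the
    predicted probability of the (possibly noisy) label. As q -> 0 this
    approaches cross-entropy -log(p); at q = 1 it becomes the bounded 1 - p,
    so a confidently-wrong prediction on a mislabeled example contributes a
    bounded loss instead of an unbounded one."""
    p = np.asarray(p_true, dtype=float)
    return (1.0 - p ** q) / q

# A near-zero probability on the given label (typical for a wrong label):
ce = -np.log(0.01)    # standard cross-entropy, about 4.61
lq = lq_loss(0.01)    # bounded L_q loss, about 1.37
```

The bounded penalty is what limits the influence of mislabeled examples during training.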
9

Pahar, Madhurananda. "A novel sound reconstruction technique based on a spike code (event) representation". Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/23025.

Abstract:
This thesis focuses on the re-generation of sound from spike-based coding systems. Three different types of spike-based coding system have been analyzed. Two of them are biologically inspired, i.e., the spikes are generated in a way similar to how our auditory nerves generate spikes. They are called AN (Auditory Nerve) spikes and AN Onset (Amplitude Modulated Onset) spikes. Sounds have been re-generated from the spikes produced by both of these spike coding techniques. A related event-based coding technique was developed by Koickal; sounds have also been re-generated from spikes produced by Koickal's technique, and the results are compared. Our brain does not reconstruct sound from the spikes received from the auditory nerves; it interprets them. But by reconstructing sounds from these spike coding techniques, we can identify which spike-based technique is better and more efficient for coding different types of sound. Many issues and challenges arise in reconstructing sound from spikes, and they are discussed. The AN spike technique generates the most spikes of the techniques tested, followed by Koickal's technique (54.4% fewer) and the AN Onset technique (85.6% fewer). Both subjective and objective tests have been carried out to assess the quality of the sounds reconstructed from these three spike coding techniques. Four types of sound were used in the subjective test: string, percussion, male voice and female voice. In the objective test, these four types and many other types of sound were included. From the results, it has been established that AN spikes generate the best quality of decoded sound, but produce many more spikes than the others. AN Onset spikes generate better-quality decoded sounds than Koickal's technique for most sounds, except choir-type sounds and noise, while producing 68.5% fewer spikes than Koickal's spikes.

This provides evidence that AN Onset spikes can outperform Koickal's spikes for most sound types.
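As a toy illustration of event (spike) coding and reconstruction, far simpler than the AN, AN Onset or Koickal models studied in the thesis, here is a level-crossing encoder paired with a naive sample-and-hold decoder. The number of levels and the ramp test signal are invented for the sketch.

```python
import numpy as np

def encode_spikes(signal, n_levels=8):
    """Emit a spike (sample_index, level) whenever the signal crosses one of
    n_levels equally spaced amplitude thresholds upward. A level-crossing
    event code in the loose spirit of spike coding; far simpler than the AN
    or AN Onset models."""
    thresholds = np.linspace(signal.min(), signal.max(), n_levels + 2)[1:-1]
    spikes = []
    for i in range(1, len(signal)):
        for lv, th in enumerate(thresholds):
            if signal[i - 1] < th <= signal[i]:
                spikes.append((i, lv))
    return spikes, thresholds

def decode_spikes(spikes, thresholds, length):
    """Naive reconstruction: sample-and-hold the value of the last threshold
    crossed. Only upward crossings are coded, so decaying portions of a
    signal reconstruct poorly; this is the kind of quality loss the thesis
    measures with subjective and objective tests."""
    out = np.zeros(length)
    level = thresholds[0]
    j = 0
    for i in range(length):
        while j < len(spikes) and spikes[j][0] == i:
            level = thresholds[spikes[j][1]]
            j += 1
        out[i] = level
    return out

ramp = np.linspace(0.0, 1.0, 100)          # a rising test signal
spikes, th = encode_spikes(ramp)
recon = decode_spikes(spikes, th, len(ramp))
```

The trade-off the thesis quantifies is visible even here: fewer levels mean fewer spikes but a coarser reconstruction.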
10

Labbé, Etienne. "Description automatique des événements sonores par des méthodes d'apprentissage profond". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES054.

Abstract:
In the audio research field, the majority of machine learning systems focus on recognizing a limited number of sound events. However, when a machine interacts with real data, it must be able to handle much more varied and complex situations. To tackle this problem, annotators use natural language, which allows any sound information to be summarized. Automated Audio Captioning (AAC) was introduced recently to develop systems capable of automatically producing a description of any type of sound in text form. This task concerns all kinds of sound events, such as environmental, urban and domestic sounds, sound effects, music and speech. Such systems could be used by people who are deaf or hard of hearing, and could improve the indexing of large audio databases. In the first part of this thesis, we present the state of the art of the AAC task through a global description of public datasets, learning methods, architectures and evaluation metrics. Using this knowledge, we then present the architecture of our first AAC system, which obtains encouraging scores on the main AAC metric, named SPIDEr: 24.7% on the Clotho corpus and 40.1% on the AudioCaps corpus. In the second part, we explore many aspects of AAC systems. We first focus on evaluation methods through a study of SPIDEr. To this end, we propose a variant called SPIDEr-max, which considers several candidates for each audio file and shows that the SPIDEr metric is very sensitive to the predicted words. We then improve our reference system by exploring different architectures and numerous hyper-parameters, exceeding the state of the art on AudioCaps (SPIDEr of 49.5%). Next, we explore a multi-task learning method aimed at improving the semantics of the sentences generated by our system. Finally, we build a general and unbiased AAC system called CONETTE, which can generate different types of descriptions approximating those of the target datasets.

In the third and last part, we propose to study the capabilities of an AAC system to automatically search for audio content in a database. Our approach obtains scores competitive with systems dedicated to this task, while using fewer parameters. We also introduce semi-supervised methods to improve our system using new unlabeled audio data, and we show how pseudo-label generation can impact an AAC model. Finally, we study AAC systems in languages other than English: French, Spanish and German. In addition, we propose a system capable of producing all four languages at the same time, and we compare it with the systems specialized in each language.
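The SPIDEr-max idea above (scoring several candidate captions per audio file and keeping the best) can be sketched independently of the actual SPIDEr implementation, which requires the full SPICE/CIDEr evaluation toolkit. `toy_score` below is a hypothetical stand-in scorer, not the real metric.

```python
def spider_max(candidates_per_audio, references_per_audio, score_fn):
    """SPIDEr-max-style aggregation: for each audio file, score every candidate
    caption against the references, keep the best score, then average over
    files. score_fn stands in for SPIDEr (the mean of SPICE and CIDEr-D),
    which needs the full captioning-evaluation toolkit."""
    per_file = []
    for cands, refs in zip(candidates_per_audio, references_per_audio):
        per_file.append(max(score_fn(c, refs) for c in cands))
    return sum(per_file) / len(per_file)

def toy_score(cand, refs):
    """Hypothetical stand-in scorer: word-overlap Dice score against the
    first reference. NOT the real SPIDEr metric."""
    c, r = set(cand.split()), set(refs[0].split())
    return 2.0 * len(c & r) / (len(c) + len(r)) if (c or r) else 0.0

score = spider_max([["a dog barks", "rain falls"]],
                   [["a dog barks loudly"]],
                   toy_score)
```

Taking the max over candidates decouples the metric from the system's choice of a single output, which is what exposes SPIDEr's sensitivity to individual predicted words.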

Books on the topic "Sound events"

1

Virtanen, Tuomas, Mark D. Plumbley e Dan Ellis, eds. Computational Analysis of Sound Scenes and Events. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63450-0.

2

McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2004.

3

McCarthy, Jim. Voices of Latin rock: People and events that created this sound. Milwaukee, WI: Hal Leonard Corporation, 2005.

4

Thomas, Jeremy. Taking leave. London: Timewell, 2006.

5

Radio New Zealand. Catalogue of Radio New Zealand recordings of Maori events, 1938-1950: RNZ 1-60. Auckland: Archive of Maori and Pacific Music, Anthropology Dept., University of Auckland, 1991.

6

British Broadcasting Corporation. Equestrian events. Princeton, N.J: Films for the Humanities & Sciences, 1991.

7

Taylor, Fred. What, and Give Up Showbiz?: Six Decades in the Music Business. Blue Ridge Summit: Backbeat, 2020.

8

Cai, Wenyi. Yi tian 10 fen zhong, ying zhan xin wen Ying wen: Yue du, ting li, yu hui neng li yi ci yang cheng! 8a ed. Taibei Shi: Kai xin qi ye guan li gu wen you xian gong si, 2015.

9

Marchetta, Vittorio. Passaggi di sound design: Riflessioni, competenze, oggetti-eventi. Milano: F. Angeli, 2010.

10

Basile, Giuseppe. ' 80, new sound, new wave: Vita, musica ed eventi nella provincia italiana degli anni '80. Taranto: Geophonìe, 2007.


Book chapters on the topic "Sound events"

1

Toole, Floyd E. "Above the Transition Frequency: Acoustical Events and Perceptions". In Sound Reproduction, 157–214. 3rd ed. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-7.

2

Toole, Floyd E. "Below the Transition Frequency: Acoustical Events and Perceptions". In Sound Reproduction, 215–62. 3rd ed. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-8.

3

Guastavino, Catherine. "Everyday Sound Categorization". In Computational Analysis of Sound Scenes and Events, 183–213. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_7.

4

Font, Frederic, Gerard Roma e Xavier Serra. "Sound Sharing and Retrieval". In Computational Analysis of Sound Scenes and Events, 279–301. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_10.

5

Stein, Peter J. "Observation of the Sound Radiated by Individual Ice Fracturing Events". In Sea Surface Sound, 533–44. Dordrecht: Springer Netherlands, 1988. http://dx.doi.org/10.1007/978-94-009-3017-9_38.

6

Bello, Juan Pablo, Charlie Mydlarz e Justin Salamon. "Sound Analysis in Smart Cities". In Computational Analysis of Sound Scenes and Events, 373–97. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_13.

7

Loar, Josh. "Conventions and Other Multi-room Live Events". In The Sound System Design Primer, 417–21. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781315196817-44.

8

Serizel, Romain, Victor Bisot, Slim Essid e Gaël Richard. "Acoustic Features for Environmental Sound Analysis". In Computational Analysis of Sound Scenes and Events, 71–101. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_4.

9

Benetos, Emmanouil, Dan Stowell e Mark D. Plumbley. "Approaches to Complex Sound Scene Analysis". In Computational Analysis of Sound Scenes and Events, 215–42. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_8.

10

Theodorou, Theodoros, Iosif Mporas e Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events". In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.


Conference papers on the topic "Sound events"

1

HILL, AJ, J. MULDER, J. BURTON, M. KOK e M. LAWRENCE. "A CRITICAL ANALYSIS OF SOUND LEVEL MONITORING METHODS AT LIVE EVENTS". In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14142.

2

HOURANI, C., e AJ HILL. "TOWARDS A SUBJECTIVE QUANTIFICATION OF NOISE ANNOYANCE DUE TO OUTDOOR EVENTS". In Reproduced Sound 2023. Institute of Acoustics, 2023. http://dx.doi.org/10.25144/16927.

3

Miyazaki, Koichi, Tomoki Hayashi, Tomoki Toda e Kazuya Takeda. "Connectionist Temporal Classification-based Sound Event Encoder for Converting Sound Events into Onomatopoeic Representations". In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553374.

4

Wheeler, P., D. Sharp e S. Taherzadeh. "AN EVALUATION OF UK AND INTERNATIONAL GUIDANCE FOR THE CONTROL OF NOISE AT OUTDOOR EVENTS". In REPRODUCED SOUND 2020. Institute of Acoustics, 2020. http://dx.doi.org/10.25144/13383.

5

BURTON, J., e AJ HILL. "USING COGNITIVE PSYCHOLOGY AND NEUROSCIENCE TO BETTER INFORM SOUND SYSTEM DESIGN AT LARGE MUSICAL EVENTS". In Reproduced Sound 2022. Institute of Acoustics, 2022. http://dx.doi.org/10.25144/14148.

7

Imoto, Keisuke, Noriyuki Tonami, Yuma Koizumi, Masahiro Yasuda, Ryosuke Yamanishi e Yoichi Yamashita. "Sound Event Detection by Multitask Learning of Sound Events and Scenes with Soft Scene Labels". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053912.

8

Maosheng Zhang, Ruimin Hu, Shihong Chen, Xiaochen Wang, Dengshi Li e Lin Jiang. "Spatial perception reproduction of sound events based on sound property coincidences". In 2015 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2015. http://dx.doi.org/10.1109/icme.2015.7177412.

9

Stanzial, Domenico, Giorgio Sacchi e Giuliano Schiffrer. "Active playback of acoustic quadraphonic sound events". In 155th Meeting Acoustical Society of America. ASA, 2008. http://dx.doi.org/10.1121/1.2992204.

10

Kumar, Anurag, Ankit Shah, Alexander Hauptmann e Bhiksha Raj. "Learning Sound Events from Webly Labeled Data". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/384.

Abstract:
In the last couple of years, weakly labeled learning has turned out to be an exciting approach for audio event detection. In this work, we introduce webly labeled learning for sound events, which aims to remove human supervision altogether from the learning process. We first develop a method of obtaining labeled audio data from the web (albeit noisy), in which no manual labeling is involved. We then describe methods to learn efficiently from these webly labeled audio recordings. In our proposed system, WeblyNet, two deep neural networks co-teach each other to robustly learn from webly labeled data, leading to around a 17% relative improvement over the baseline method. The method also involves transfer learning to obtain efficient representations.
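The co-teaching idea described in this abstract can be illustrated with the classic small-loss selection rule. This is a generic sketch of co-teaching under the assumption that small-loss examples are more likely to be correctly labeled; it is not WeblyNet's exact training procedure.

```python
import numpy as np

def coteach_select(losses_a, losses_b, keep_ratio=0.8):
    """One small-loss co-teaching step on a mini-batch: each network ranks the
    batch by its own per-example loss and hands its small-loss subset to the
    peer network for the gradient update. Returns the indices each network
    should train on."""
    k = max(1, int(keep_ratio * len(losses_a)))
    idx_for_b = np.argsort(losses_a)[:k]   # A selects clean-looking examples for B
    idx_for_a = np.argsort(losses_b)[:k]   # B selects clean-looking examples for A
    return idx_for_a, idx_for_b

# Two examples (indices 1 and 2) look mislabeled to one of the two networks:
losses_a = np.array([0.1, 5.0, 0.2, 0.3, 4.0])
losses_b = np.array([0.2, 0.1, 6.0, 0.4, 0.3])
idx_a, idx_b = coteach_select(losses_a, losses_b, keep_ratio=0.6)
```

Because the two networks disagree on which examples look noisy, each filters a different part of the label noise for its peer, which is the intuition behind the robustness gain.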
ABNT, Harvard, Vancouver, APA, etc. styles

Reports by organizations on the topic "Sound events"

1

Wilson, D. K., V. A. Nguyen, Nassy Srour, and John Noble. Sound Exposure Calculations for Transient Events and Other Improvements to an Acoustical Tactical Decision Aid. Fort Belvoir, VA: Defense Technical Information Center, August 2002. http://dx.doi.org/10.21236/ada406703.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
2

Wilson, D., Vladimir Ostashev, Michael Shaw, Michael Muhlestein, John Weatherly, Michelle Swearingen, and Sarah McComas. Infrasound propagation in the Arctic. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42683.

Full text of the source
Abstract:
This report summarizes results of the basic research project “Infrasound Propagation in the Arctic.” The scientific objective of this project was to provide a baseline understanding of the characteristic horizontal propagation distances, frequency dependencies, and conditions leading to enhanced propagation of infrasound in the Arctic region. The approach emphasized theory and numerical modeling as an initial step toward improving understanding of the basic phenomenology, and thus lay the foundation for productive experiments in the future. The modeling approach combined mesoscale numerical weather forecasts from the Polar Weather Research and Forecasting model with advanced acoustic propagation calculations. The project produced significant advances with regard to parabolic equation modeling of sound propagation in a windy atmosphere. For the polar low, interesting interactions with the stratosphere were found, which could possibly be used to provide early warning of strong stratospheric warming events (i.e., the polar vortex). The katabatic wind resulted in a very strong low-level duct, which, when combined with a highly reflective icy ground surface, leads to efficient long-distance propagation. This information is useful in devising strategies for positioning sensors to monitor environmental phenomena and human activities.
ABNT, Harvard, Vancouver, APA, etc. styles
3

Albright, Jeff, Kim Struthers, Lisa Baril, and Mark Brunson. Natural resource conditions at Valles Caldera National Preserve: Findings & management considerations for selected resources. National Park Service, June 2022. http://dx.doi.org/10.36967/nrr-2293731.

Full text of the source
Abstract:
Valles Caldera National Preserve (VALL) encompasses 35,977 ha (88,900 ac) in the Jemez Mountains of north-central New Mexico and is surrounded by the Santa Fe National Forest, the Pueblo of Santa Clara, and Bandelier National Monument. VALL’s explosive volcanic origin, about 1.23 million years ago, formed the Valles Caldera—a broad, 19- to 24-km (12- to 15-mi) wide circular depression. It is one of the world’s best examples of a young caldera (in geologic time) and serves as the model for understanding caldera resurgence worldwide. A series of resurgent eruptions and magmatic intrusive events followed the original explosion, creating numerous volcanic domes in present day VALL—one of which is Redondo Peak at an elevation of 3,430 m (11,254 ft), which is the second highest peak in the Jemez Mountains. In fact, VALL in its entirety is a high-elevation preserve that hosts a rich assemblage of vegetation, wildlife, and volcanic resources. The National Park Service (NPS) Natural Resource Condition Assessment (NRCA) Program selected VALL to pilot its new NRCA project series. VALL managers and the NRCA Program selected seven focal study resources for condition evaluation. To help us understand what is causing change in resource conditions, we selected a subset of drivers and stressors known or suspected of influencing the preserve’s resources. What is causing change in resource conditions? Mean temperatures during the spring and summer months are increasing, but warming is slower at VALL than for neighboring areas (e.g., Bandelier National Monument). The proportion of precipitation received as snow has declined. From 2000 to 2018, forest pests damaged or killed 75% of the preserve’s forested areas. Only small, forested areas in VALL were affected by forest pests after the 2011 Las Conchas and the 2013 Thompson Ridge fires. 
The all-sky light pollution model and the sound pressure level model predict the lowest degree of impacts from light and sound to be in the western half of the preserve.
ABNT, Harvard, Vancouver, APA, etc. styles
4

Yatsymirska, Mariya. MODERN MEDIA TEXT: POLITICAL NARRATIVES, MEANINGS AND SENSES, EMOTIONAL MARKERS. Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11411.

Full text of the source
Abstract:
The article examines modern media texts in the field of political journalism; the role of information narratives and emotional markers in media doctrine is clarified; verbal expression of rational meanings in the articles of famous Ukrainian analysts is shown. Popular theories of emotions in the process of cognition are considered, and their relationship with the author's personality, reader psychology, and Gonzo journalism is shown. Since the media text, in contrast to the text, is a product of social communication, the main narrative is information with the intention of influencing public opinion. Media text implies the presence of the author as a creator of meanings. In addition, media texts have universal features: word, sound, visuality (stills, photos, videos). They are traditionally divided into radio, TV, newspaper and Internet texts. The concepts of multimedia and hypertext are related to online texts. Web combinations, especially in political journalism, have intensified the interactive branching of nonlinear texts that cannot be published in traditional media. The Internet as a medium has created the conditions for the exchange of ideas in the most emotional way. Hence the interest in Gonzo journalism, which expresses impressions of certain events in words and epithets, regardless of their stylistic affiliation. There are many such examples on social media in connection with the events surrounding the Wagnerians, the Poroshenko case, Russia's new aggression against Ukraine, and others. Thus, the study of new features of media text in the context of modern political narratives and emotional markers is important in media research. The article also reviews the etymology, origin, and features of the use of the lexemes "cмисл (meaning)" and "сенс (sense)" in the linguistic practice of Ukrainians, tracing the development of their meanings and functional-stylistic coloring.
Lexemes “cмисл (meaning)” and “сенс (sense)” are used as synonyms, but there are specific fields of meanings where they cannot be interchanged: lexeme “сенс (sense)” should be used when it comes to reasonable grounds for something, lexeme “cмисл (meaning)” should be used when it comes to notion, concept, understanding. Modern political texts are most prominent in genres such as interviews with politicians, political commentaries, analytical articles by media experts and journalists, political reviews, political portraits, political talk shows, and conversations about recent events, accompanied by effective emotional narratives. Etymologically, the concept of “narrative” is associated with the Latin adjective “gnarus” – expert. Speakers, philosophers, and literary critics considered narrative an “example of the human mind.” In modern media texts it is not only “story”, “explanation”, “message techniques”, “chronological reproduction of events”, but first of all the semantic load and what subjective meanings the author voices; it is a process of logical presentation of arguments (narration). The highly professional narrator uses narration as a “method of organizing discourse” around facts and impressions, impresses with his political erudition, extraordinary intelligence and creativity. Some of the above theses are reflected in the following illustrations from the Ukrainian media: “Culture outside politics” – a pro-Russian narrative…” (MP Gabibullayeva); “The next will be Russia – in the post-Soviet space is the Arab Spring…” (journalist Vitaly Portnikov); “In Russia, only the collapse of Ukraine will be perceived as success” (Pavel Klimkin); “Our army is fighting, hiding from the leadership” (Yuri Butusov).
ABNT, Harvard, Vancouver, APA, etc. styles
5

Masiero, Bruno, Marcio Henrique de Avelar, and William D'Andrea Fonseca. International Year of Sound 2020 & 2021: Ano Internacional do Som prorrogado até 2021. William D'Andrea Fonseca, July 2020. http://dx.doi.org/10.55753/aev.v35e52.15.

Full text of the source
Abstract:
The year 2020 began in celebration for the acoustics community with the opening of the "International Year of Sound" (IYS) in the Grand Amphitheater of Sorbonne University in Paris, France. In Brazil, the official opening of the International Year of Sound took place on March 6 and was marked by a concert by the Unicamp Symphony Orchestra. The orchestra took the audience on a journey through the fantastic universes of several German operas, including Hansel and Gretel, The Magic Flute, and The Flying Dutchman, among others. With the restrictions imposed worldwide to reduce the spread of Covid-19, however, the International Commission for Acoustics (ICA) decided at the end of March that the IYS would become a two-year celebration of sound. This extension allows postponed events, as well as new events and activities scheduled for 2021, to be included in the list of IYS events. In addition, the deadline for submitting proposals for IYS-supported student competitions on sound-related themes was extended to the end of 2020.
ABNT, Harvard, Vancouver, APA, etc. styles
6

Michalski, Ranny L. X. N., Bruno Masiero, William D'Andrea Fonseca, and Márcio Avelar. Fim do Ano Internacional do Som: Fechamento do Ano Internacional do Som 2020 & 2021. Revista Acústica e Vibrações, December 2021. http://dx.doi.org/10.55753/aev.v36e53.60.

Full text of the source
Abstract:
The International Year of Sound (IYS) began in 2020 and was extended for an additional year due to the COVID-19 pandemic. This article gathers information on IYS-related events and actions that took place in Brazil over its two-year duration.
ABNT, Harvard, Vancouver, APA, etc. styles
7

Law, Edward, Samuel Gan-Mor, Hazel Wetzstein, and Dan Eisikowitch. Electrostatic Processes Underlying Natural and Mechanized Transfer of Pollen. United States Department of Agriculture, May 1998. http://dx.doi.org/10.32747/1998.7613035.bard.

Full text of the source
Abstract:
The project objective was to more fully understand how the motion of pollen grains may be controlled by electrostatic forces, and to develop a reliable mechanized pollination system based upon sound electrostatic and aerodynamic principles. Theoretical and experimental analyses and computer simulation methods which investigated electrostatic aspects of natural pollen transfer by insects found that: a) actively flying honeybees accumulate ~ 23 pC average charge (93 pC max.) which elevates their bodies to ~ 47 V likely by triboelectrification, inducing ~ 10 fC of opposite charge onto nearby pollen grains, and overcoming their typically 0.3-3.9 nN detachment force resulting in non-contact electrostatic pollen transfer across a 5 mm or greater air gap from anther-to-bee, thus providing a theoretical basis for earlier experimental observations and "buzz pollination" events; b) charge-relaxation characteristics measured for flower structural components (viz., 3 ns and 25 ns time constants, respectively, for the stigma-style vs. waxy petal surfaces) ensure them to be electrically appropriate targets for electrodeposition of charged pollen grains but not differing sufficiently to facilitate electrodynamic focusing onto the stigma; c) conventional electrostatic focusing beneficially concentrates pollen-deposition electric fields onto the pistil tip by 3-fold as compared to that onto underlying flower structures; and d) pollen viability is adequately maintained following exposure to particulate charging/management fields exceeding 2 MV/m.
Laboratory- and field-scale processes/prototype machines for electrostatic application of pollen were successfully developed to dispense pollen in both a dry-powder phase and in a liquid-carried phase utilizing corona, triboelectric, and induction particulate-charging methods; pollen-charge levels attained (~ 1-10 mC/kg) provide pollen-deposition forces 10-, 77-, and 100-fold greater than gravity, respectively, for such charged pollen grains subjected to a 1 kV/cm electric field. Lab and field evaluations have documented charged vs. uncharged pollen deposition to be significantly (α = 0.01-0.05) increased by 3.9-5.6 times. Orchard trials showed initial fruit set on branches individually treated with electrostatically applied pollen to typically increase up to ~ 2-fold vs. uncharged pollen applications; however, whole-tree applications have not significantly shown similar levels of benefit and corrective measures continue. Project results thus contribute important basic knowledge and applied electrostatics technology which will provide agriculture with alternative/supplemental mechanized pollination systems as traditional pollen-transfer vectors are further endangered by natural and man-made factors.
ABNT, Harvard, Vancouver, APA, etc. styles
8

Michelmore, Richard, Eviatar Nevo, Abraham Korol, and Tzion Fahima. Genetic Diversity at Resistance Gene Clusters in Wild Populations of Lactuca. United States Department of Agriculture, February 2000. http://dx.doi.org/10.32747/2000.7573075.bard.

Full text of the source
Abstract:
Genetic resistance is often the least expensive, most effective, and ecologically-sound method of disease control. It is becoming apparent that plant genomes contain large numbers of disease resistance genes. However, the numbers of different resistance specificities within a genepool and the genetic mechanisms generating diversity are poorly understood. Our objectives were to characterize diversity in clusters of resistance genes in wild progenitors of cultivated lettuce in Israel and California in comparison to diversity within cultivated lettuce, and to determine the extent of gene flow, recombination, and genetic instability in generating variation within clusters of resistance genes. Genetic diversity of resistance genes was analyzed in wild and cultivated germplasm using molecular markers derived from lettuce resistance gene sequences of the NBS-LRR type that mapped to the major cluster of resistance genes in lettuce (Sicard et al. 1999). Three molecular markers, one microsatellite marker and two SCAR markers that amplified LRR-encoding regions, were developed from sequences of resistance gene homologs at the Dm3 cluster (RGC2s) in lettuce. Variation for these markers was assessed in germplasm including 74 genotypes of cultivated lettuce, L. sativa, and 71 accessions of the three wild Lactuca spp., L. serriola, L. saligna and L. virosa, that represent the major species in the sexually accessible genepool for lettuce. Diversity was also studied within and between natural populations of L. serriola from Israel and California. Large numbers of haplotypes were detected, indicating the presence of numerous resistance genes in wild species. We documented a variety of genetic events occurring at clusters of resistance genes for the second objective (Sicard et al., 1999; Woo et al., in prep.; Kuang et al., in prep. b).
The diversity of resistance genes in haplotypes provided evidence for gene duplication and unequal crossing over during the evolution of this cluster of resistance genes. Comparison of nine resistance genes in cv. Diana identified 22 gene conversions and five intergenic recombinations. We cloned and sequenced a 700 bp region from the middle of RGC2 genes from six genotypes, two each from L. sativa, L. serriola, and L. saligna. We have identified over 60 unique RGC2 sequences. Phylogenetic analysis surprisingly demonstrated much greater similarity between than within genotypes. This led to the realization that resistance genes are evolving much slower than had previously been assumed and to a new model of how resistance genes evolve (Michelmore and Meyers, 1998). The genetic structure of L. serriola was studied using 319 AFLP markers (Kuang et al., in prep. a). Forty-one populations from Turkey, Armenia, Israel, and California, as well as seven European countries, were examined. AFLP marker data showed that the Turkish and Armenian populations were the most polymorphic and the European populations the least. The Davis, CA population, a recent post-Columbian colonization, showed medium genetic diversity and was genetically close to the Turkish populations. Our results suggest that Turkey-Armenia may be the center of origin and diversity of L. serriola and may therefore have the greatest diversity of resistance genes. Our characterization of the diversity of resistance genes and the genetic mechanisms generating it will allow informed exploration, in situ and ex situ conservation, and utilization of germplasm resources for disease control. The results of this project provide the basis for our future research, which will lead to a detailed understanding of the evolution of resistance genes in plants.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Hayes. L51633 Investigation of WIC Test Variables. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 1991. http://dx.doi.org/10.55274/r0010107.

Full text of the source
Abstract:
The Welding Institute of Canada (WIC) has developed a test for pipelines which determines the threshold preheat levels necessary to avoid hydrogen cracking in the root region of circumferential welds. This test is simple to perform, reproducible, and reasonably inexpensive. Experience has shown, however, that preheat levels determined by the WIC test are typically higher than normally required to achieve sound, crack-free welds in the field. Consequently, there is a need to characterize the WIC test in more detail, even though significant work was performed by other researchers. Through further characterization, pipeline operators will be able to formulate safe and economical welding procedures with more accurate preheat levels. The following report describes the experimental approach, the processes used for analysis, the results, the conclusions drawn, and the recommendations for future work.
ABNT, Harvard, Vancouver, APA, etc. styles
10

Kanninen, M. F. L51718 Development and Validation of a Ductile Fracture Analysis Model. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 1994. http://dx.doi.org/10.55274/r0010321.

Full text of the source
Abstract:
In close cooperation with the Centro Sviluppo Materiali (CSM) and SNAM of Italy, and with several years of support from the PRCI NG-18 committee, the Southwest Research Institute (SwRI) has developed and validated a "first principles" predictive model for ductile fracture in a gas transmission pipeline. In particular, the coordinated SwRI and CSM projects for the PRC - supplemented by work contributed by SNAM - have established a theoretically valid methodology and an accompanying line pipe material characterization procedure for gas industry use. This progress provides a theoretically sound framework for designing and operating gas transmission pipelines to be without risk of a large-scale ductile fracture event. However, there remained two important aspects of this technology that needed to be addressed before practical use of the methodology could be made by gas transmission companies. First, because the preceding projects concentrated on pipes with natural gas, to cover the full range of gas transmission pipeline service, the approach needed to be extended to include the effects of gases rich in hydrocarbons. Second, as the number of full-scale pipe fracture experiments included in the developmental phase of the research was limited, other data for validation of the model needed to be identified and employed. These two aspects of the ductile fracture methodology development process were conducted concurrently and have now been completed. The progress is described in detail in this report. The work culminates in a relation through which the methodology can be applied by pipeline engineers to assess the possibility of ductile fracture propagation. This report describes the development of a predictive model for ductile fracture in a gas transmission pipeline, thus providing a theoretically sound framework for designing and operating gas pipelines to be without risk of a large-scale ductile fracture event.
The model represents an improvement on a number of empirical relations used in designing natural gas pipelines in that it has been generalized to consider a wide range of hydrocarbon contents and validated through both additional full-scale instrumented tests carried out by Centro Sviluppo Materiali of Italy and computer simulations conducted at Southwest Research Institute. Application of the model in pipeline design is based on determining the maximum driving force for fracture, as described in the report, and contrasting this value with the measured material resistance, which provides a basis for assessing the likelihood of ductile fracture occurring. For existing pipelines, the procedure can be used to obtain the maximum operating line pressure that will not put the pipeline at risk of ductile fracture.
ABNT, Harvard, Vancouver, APA, etc. styles