Journal articles on the topic "Sound events"

Consult the top 50 scholarly journal articles on the topic "Sound events".

1

Elizalde, Benjamin. "Categorization of sound events for automatic sound event classification." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A364. http://dx.doi.org/10.1121/10.0019175.

Abstract:
To train Machine Listening models that classify sounds we need to define recognizable names, attributes, relations, and interactions that produce acoustic phenomena. In this talk, we will review examples of different types of categorizations and how they drive Machine Listening. Categorization of sounds guides the annotation processes of audio datasets and the design of models, but at the same time can limit performance and quality of expression of acoustic phenomena. Examples of categories can be simply named after the sound source or inspired by Cognition (e.g., taxonomies), Psychoacoustics (e.g., adjectives), and Psychomechanics (e.g., materials). These types of classes are often defined by one or two words. Moreover, to acoustically identify sound events we may require instead a sentence providing a description. For example, “malfunctioning escalator” versus “a repeated low-frequency scraping and rubber band snapping.” In any case, we still have limited lexicalized terms in language to describe acoustic phenomena. Language determines a listener's perception and expressiveness of a perceived phenomenon. For example, the sound of water is one of the most distinguishable sounds, but how to describe it without using the word water? Despite limitations in language to describe acoustic phenomena, we should still be able to automatically recognize acoustic content in an audio signal at least as well as humans do.
2

Aura, Karine, Guillaume Lemaitre, and Patrick Susini. "Verbal imitations of sound events enable recognition of the imitated sound events." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3414. http://dx.doi.org/10.1121/1.2934144.

3

Nishida, Tsuruyo, Kazuhiko Kakehi, and Takamasa Kyutoku. "Motion perception of the target sound event under the discriminated two sound events." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3080. http://dx.doi.org/10.1121/1.4787419.

4

Nakayama, Tsumugi, Taisuke Naito, Shunsuke Kouda, and Takatoshi Yokota. "Determining disturbance sounds in aircraft sound events using a CNN-based method." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 7 (November 30, 2023): 1320–28. http://dx.doi.org/10.3397/in_2023_0196.

Abstract:
In this paper, we propose a method to determine whether an aircraft sound event contains disturbance sounds through a combination of sound source recognition models developed using a convolutional neural network. First, considering road traffic noise as a disturbance sound, distinct recognition models for aircraft and road traffic noise were developed. Second, simulated signals in which aircraft noise and road traffic noise were superimposed with different signal-to-noise ratios were input into the two recognition models. We investigated the variations in the output of each recognition model with respect to the signal-to-noise ratio. Subsequently, we obtained the output of each model for the case in which disturbance sounds affected an aircraft sound event. Third, we measured the aircraft noise around an airport, and the aircraft sound events on which road traffic noise was superimposed were input into the recognition models to examine the proposed method. It was confirmed that the proposed method helps detect the disturbance noise when intermittent noise from passing vehicles is included in an aircraft sound event.
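To make the superimposition step concrete, here is a minimal, hypothetical Python sketch of mixing an aircraft recording with road traffic noise at a prescribed signal-to-noise ratio before passing the result to the two recognition models; the function and the commented-out model calls are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Superimpose `noise` (e.g. road traffic) on `signal` (e.g. an aircraft
    sound event) so that the resulting signal-to-noise ratio equals `snr_db`."""
    noise = noise[: len(signal)]                    # align lengths
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise

# Hypothetical usage: probe two separately trained recognition models over a
# range of SNRs (`aircraft_model` and `traffic_model` are placeholders).
# for snr in (-10, -5, 0, 5, 10):
#     mixed = mix_at_snr(aircraft_clip, traffic_clip, snr)
#     print(snr, aircraft_model.predict(mixed), traffic_model.predict(mixed))
```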
5

Hara, Sunao, and Masanobu Abe. "Predictions for sound events and soundscape impressions from environmental sound using deep neural networks." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 3 (November 30, 2023): 5239–50. http://dx.doi.org/10.3397/in_2023_0739.

Abstract:
In this study, we investigate methods for quantifying soundscape impressions, namely pleasantness and eventfulness, from environmental sounds. Within Machine Learning (ML) research, acoustic scene classification (ASC) and sound event classification (SEC) tasks are intensively studied, and their results are helpful for estimating soundscape impressions. However, while most ASC and SEC systems use only sound for classification, human beings perceive a soundscape impression not from sound alone but from sound together with the surrounding landscape. Therefore, automatic quantification of soundscape impressions should use additional information, such as the landscape, in addition to sound. First, we predict the two soundscape impressions using sound data collected by a cloud-sensing method; for this purpose, we have proposed a prediction method for soundscape impressions that uses environmental sounds and aerial photographs. Second, we address environmental sound classification using a feature extractor trained with a Variational Autoencoder (VAE). The VAE feature extractor can be trained in an unsupervised manner, so it is a promising approach for a growing dataset such as the one produced by our cloud-sensing data collection scheme. Finally, we discuss how these methods can be integrated.
6

Maruyama, Hironori, Kosuke Okada, and Isamu Motoyoshi. "A two-stage spectral model for sound texture perception: Synthesis and psychophysics." i-Perception 14, no. 1 (January 2023): 204166952311573. http://dx.doi.org/10.1177/20416695231157349.

Abstract:
The natural environment is filled with a variety of auditory events such as wind blowing, water flowing, and fire crackling. It has been suggested that the perception of such textural sounds is based on the statistics of natural auditory events. Inspired by a recent spectral model for visual texture perception, we propose a model that describes the perceived sound texture using only the linear spectrum and the energy spectrum. We tested the validity of the model by using synthetic noise sounds that preserve the two-stage amplitude spectra of the original sound. A psychophysical experiment showed that our synthetic noises were perceived as similar to the original sounds for 120 real-world auditory events. The performance was comparable with that of synthetic sounds produced by McDermott and Simoncelli's model, which considers various classes of auditory statistics. The results support the notion that the perception of natural sound textures is predictable from the two-stage spectral signals.
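As a rough, hypothetical illustration of the two statistics named in this abstract (not the authors' synthesis procedure), the sketch below computes a linear amplitude spectrum from the waveform and an "energy" spectrum from its temporal envelope; using a Hilbert envelope here is a simplifying assumption.

```python
import numpy as np
from scipy.signal import hilbert

def two_stage_spectra(x: np.ndarray):
    """Return (1) the linear amplitude spectrum of the waveform and (2) the
    amplitude spectrum of its temporal envelope, a crude "energy spectrum"."""
    linear_spectrum = np.abs(np.fft.rfft(x))
    envelope = np.abs(hilbert(x))                 # temporal amplitude envelope
    energy_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    return linear_spectrum, energy_spectrum

# Example with a synthetic amplitude-modulated noise burst (1 s at 44.1 kHz).
rng = np.random.default_rng(0)
t = np.arange(44100) / 44100
x = rng.standard_normal(44100) * (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t))
lin_spec, energy_spec = two_stage_spectra(x)
```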
7

Domazetovska, Simona, Viktor Gavriloski, Maja Anachkova, and Zlatko Petreski. "URBAN SOUND RECOGNITION USING DIFFERENT FEATURE EXTRACTION TECHNIQUES." Facta Universitatis, Series: Automatic Control and Robotics 20, no. 3 (December 18, 2021): 155. http://dx.doi.org/10.22190/fuacr211015012d.

Abstract:
The application of the advanced methods for noise analysis in the urban areas through the development of systems for classification of sound events significantly improves and simplifies the process of noise assessment. The main purpose of sound recognition and classification systems is to develop algorithms that can detect and classify sound events that occur in the chosen environment, giving an appropriate response to their users. In this research, a supervised system for recognition and classification of sound events has been established through the development of feature extraction techniques based on digital signal processing of the audio signals that are further used as an input parameter in the machine learning algorithms for classification of the sound events. Various audio parameters were extracted and processed in order to choose the best set of parameters that result in better recognition of the class to which the sounds belong. The created acoustic event detection and classification (AED/C) system could be further implemented in sound sensors for automatic control of environmental noise using the source classification that leads to reduced amount of required human validation of the sound level measurements since the target noise source is evidently defined.
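A minimal sketch of such a feature-extraction-plus-classifier pipeline, assuming librosa and scikit-learn and hypothetical file lists; it is not the system described in the paper.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def urban_clip_features(path: str, sr: int = 22050) -> np.ndarray:
    """Describe one clip with a few common audio descriptors:
    time-averaged MFCCs, spectral centroid, and zero-crossing rate."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    return np.concatenate([mfcc, [centroid, zcr]])

# Hypothetical usage with labelled clips (`files` and `labels` are placeholders).
# X = np.vstack([urban_clip_features(f) for f in files])
# X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
# clf = SVC(kernel="rbf").fit(X_tr, y_tr)
# print("held-out accuracy:", clf.score(X_te, y_te))
```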
8

Martinek, Jozef, P. Klco, M. Vrabec, T. Zatko, M. Tatar, and M. Javorka. "Cough Sound Analysis." Acta Medica Martiniana 13, Supplement-1 (March 1, 2013): 15–20. http://dx.doi.org/10.2478/acm-2013-0002.

Abstract:
Cough is the most common symptom of many respiratory diseases. Currently, no standardized methods exist for objective monitoring of cough that are commercially available and clinically acceptable. Our aim is to develop an algorithm capable, based on the analysis of sound events, of objective, ambulatory, and automated monitoring of cough frequency. Because speech is the most common sound in 24-hour recordings, the first step in developing this algorithm is to distinguish between cough sounds and speech. For this purpose we obtained recordings from 20 healthy volunteers. All subjects read a text continuously from a book, producing voluntary coughs at indicated instants. The obtained sounds were analyzed using linear and non-linear methods in the time and frequency domains. We used a classification tree for the distinction between cough sounds and speech. The median sensitivity was 100% and the median specificity was 95%. In the next step we enlarged the set of analyzed sound events. Apart from cough sounds and speech, the analyzed sounds included induced sneezing, voluntary throat and nasopharynx clearing, voluntary forced ventilation, laughing, voluntary snoring, eructation, nose blowing, and loud swallowing. The sound events were obtained from 32 healthy volunteers, and for their analysis and classification we used the same algorithm as in the previous study. The median sensitivity was 86% and the median specificity was 91%. In the final step, we tested the effectiveness of the developed algorithm in distinguishing between cough and non-cough sounds produced during normal daily activities in patients suffering from respiratory diseases. Our study group consisted of 9 patients with respiratory diseases, and the recording time was 5 hours. The number of coughs counted by our algorithm was compared with manual cough counts done by two skilled co-workers. We found that the cough counts obtained by the algorithm and by manual counting differed considerably. For that reason we used other methods for distinguishing cough sounds from non-cough sounds, comparing a classification tree with artificial neural networks. Median sensitivity increased from 28% (classification tree) to 82% (artificial neural network), while median specificity did not change significantly. We have also extended the set of characteristic parameters with Mel frequency cepstral coefficients, the weighted Euclidean distance, and the first and second time derivatives. Modification of the classification algorithm is likewise of interest to us.
9

Heck, Jonas, Josep Llorca-Bofí, Christian Dreier, and Michael Vorlaender. "Validation of auralized impulse responses considering masking, loudness and background noise." Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A178. http://dx.doi.org/10.1121/10.0027231.

Abstract:
The use of outdoor virtual scenarios promises to have a lot of potential to facilitate reproducible sensory evaluation experiments in laboratory. Auralizations allow for the integration of simulated or measured sound sources and transfer paths between the sources and receivers. Nonetheless, pure simulations can lack perfect plausibility. This contribution investigates the augmentation of auralized outdoor scenes based on simulated impulse responses (IRs) by ambient or background sounds. For this purpose, foreground events such as car pass-bys are created by simplified simulation of impulse responses. Due to their large number of events, however, ambient sounds are typically not simulated. Instead, spherical microphone array recordings can be used to capture the background sound. Using synthesized car sounds, we examine how much the augmentation by background sound improves the auditory plausibility of simulated impulse responses in comparison with the equivalent measured ones.
10

Kim, Yunbin, Jaewon Sa, Yongwha Chung, Daihee Park, and Sungju Lee. "Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data." Sensors 18, no. 11 (November 18, 2018): 4019. http://dx.doi.org/10.3390/s18114019.

Abstract:
The use of IoT (Internet of Things) technology for the management of pet dogs left alone at home is increasing. This includes tasks such as automatic feeding, operation of play equipment, and location detection. Classification of the vocalizations of pet dogs using information from a sound sensor is an important method to analyze the behavior or emotions of dogs that are left alone. These sounds should be acquired by attaching the IoT sound sensor to the dog, and then classifying the sound events (e.g., barking, growling, howling, and whining). However, sound sensors tend to transmit large amounts of data and consume considerable amounts of power, which presents issues in the case of resource-constrained IoT sensor devices. In this paper, we propose a way to classify pet dog sound events and improve resource efficiency without significant degradation of accuracy. To achieve this, we only acquire the intensity data of sounds by using a relatively resource-efficient noise sensor. This presents issues as well, since it is difficult to achieve sufficient classification accuracy using only intensity data due to the loss of information from the sound events. To address this problem and avoid significant degradation of classification accuracy, we apply long short-term memory-fully convolutional network (LSTM-FCN), which is a deep learning method, to analyze time-series data, and exploit bicubic interpolation. Based on experimental results, the proposed method based on noise sensors (i.e., Shapelet and LSTM-FCN for time-series) was found to improve energy efficiency by 10 times without significant degradation of accuracy compared to typical methods based on sound sensors (i.e., mel-frequency cepstrum coefficient (MFCC), spectrogram, and mel-spectrum for feature extraction, and support vector machine (SVM) and k-nearest neighbor (K-NN) for classification).
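The interpolation step can be illustrated with a small, hypothetical sketch that upsamples coarse intensity sequences with order-3 (cubic) spline interpolation as a stand-in for the bicubic interpolation mentioned above; the array sizes are made up.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical batch of low-rate sound-intensity sequences (one row per sound
# event), such as a simple noise sensor might deliver.
rng = np.random.default_rng(1)
intensity = rng.random((32, 25))          # 32 events, 25 intensity samples each

# Upsample every sequence to a fixed, denser length with order-3 spline
# interpolation before handing the sequences to a time-series model such as
# LSTM-FCN.
upsampled = zoom(intensity, zoom=(1, 128 / 25), order=3)
print(upsampled.shape)                    # (32, 128)
```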
11

REDFERN, NICK. "Sound in Horror Film Trailers." Music, Sound, and the Moving Image 14, no. 1 (July 1, 2020): 47–71. http://dx.doi.org/10.3828/msmi.2020.4.

Abstract:
In this paper I analyse the soundtracks of fifty horror film trailers, combining formal analysis of the soundtracks with quantitative methods to describe and analyse how sound creates a dominant emotional tone for audiences through the use of different types of sounds (dialogue, music, and sound effects) and the different sound envelopes of affective events. The results show that horror trailers have a three-part structure that involves establishing the narrative, emotionally engaging the audience, and communicating marketing information. The soundtrack is organised in such a way that different functions are handled by different components in different segments of the soundtrack: dialogue bears responsibility for what we know and the sound for what we feel. Music is employed in a limited number of ways that are ironic, clichéd, and rarely contribute to the dominant emotional tone. Different types of sonic affective events fulfil different roles within horror trailers in relation to narrative, emotion, and marketing. I identify two features not previously discussed in relation to quantitative analysis of film soundtracks: an affective event based on the reactions of characters in horror trailers and the presence of nonlinear features in the sound design of affective events.
12

Morris, Robert. "Aspects of Performance Practice in Morton Feldman's Last Pieces." MusMat: Brazilian Journal of Music and Mathematics IV, no. 2 (December 28, 2020): 28–40. http://dx.doi.org/10.46926/musmat.2020v4n2.28-40.

Abstract:
Morton Feldman’s Last Pieces for piano solo of 1959 poses an interesting interpretive problem for the performer. As in many Feldman compositions of the 1950s and 60s, the first movement of the work is notated as a series of "sound events" to be played with the performer choosing the duration of each event. The only tempo indications are "Slow. Soft. Durations are free." This situation is complicated by Feldman’s remark about a similar work from 1960: "[I chose] intervals that seemed to erase or cancel out each sound as soon as we hear the next." I interpret this as an intention to keep the piece fresh and appealing from sound to sound. So how is the pianist supposed to play Last Pieces in order to satisfy the composer's desire for a sound to "cancel out" preceding sounds? To answer this question, I propose a way of assessing the salience of each sound event in the first movement of Last Pieces, using various means of associating each of its 43 sound events according to chord spacing, register, center pitch and bandwidth, pitch intervals, pitch-classes, set-class, and figured bass. From these data, one has an idea of how to perform the work so as to minimize similarity relations between adjacent pairs of sound events, so that they can have the cancelling effect the composer desired. As a secondary result of this analysis, many cohesive compositional relations come to light, even if the work was composed "intuitively".
13

Nwankwo, Mary, Qi Meng, Da Yang, and Fangfang Liu. "Effects of Forest on Birdsong and Human Acoustic Perception in Urban Parks: A Case Study in Nigeria." Forests 13, no. 7 (June 24, 2022): 994. http://dx.doi.org/10.3390/f13070994.

Abstract:
The quality of the natural sound environment is important for the well-being of humans and for urban sustainability. Therefore, it is important to study how the soundscape of the natural environment affects humans with respect to the different densities of vegetation, and how this affects the frequency of singing events and the sound pressure levels of common birds that generate natural sounds in a commonly visited urban park in Abuja, Nigeria. This study involves the recording of birdsongs, the measurement of sound pressure levels, and a questionnaire evaluation of sound perception and the degree of acoustic comfort in the park. Acoustic comfort, which affects humans, describes the fundamental feelings of users towards the acoustic environment. The results show that first, there is a significant difference between the frequency of singing events of birds for each category of vegetation density (low, medium, and high density) under cloudy and sunny weather conditions, but there is no significant difference during rainy weather. Secondly, the measured sound pressure levels of the birdsongs are affected by vegetation density. This study shows a significant difference between the sound pressure levels of birdsongs and the vegetation density under cloudy, sunny, and rainy weather conditions. In addition, the frequency of singing events of birds is affected by the sound pressure levels of birdsongs with respect to different vegetation densities under different weather conditions. Thirdly, the results from the respondents (N = 160) in this study indicated that the acoustic perception of the park was described as being pleasant, vibrant, eventful, calming, and not considered to be chaotic or annoying in any sense. It also shows that the human perception of birdsong in the park was moderately to strongly correlated with different densities of vegetation, and that demographics play an important role in how natural sounds are perceived in the environment under different weather conditions.
14

Kovalenko, Andriy, and Anton Poroshenko. "ANALYSIS OF THE SOUND EVENT DETECTION METHODS AND SYSTEMS." Advanced Information Systems 6, no. 1 (April 6, 2022): 65–69. http://dx.doi.org/10.20998/2522-9052.2022.1.11.

Abstract:
Detection and recognition of loud sounds and characteristic noises can significantly increase the level of safety and ensure a timely response to various emergency situations. Audio event detection is the first step in recognizing audio signals in a continuous audio input stream. This article presents a number of problems associated with the development of sound event detection systems, such as variation across environments and sound categories, overlapping audio events, unreliable training data, etc. Both methods for detecting monophonic impulsive audio events and polyphonic sound event detection methods used in state-of-the-art sound event detection systems are presented. Such systems are presented in the Detection and Classification of Acoustic Scenes and Events (DCASE) challenges and workshops, which take place every year. Besides the majority of works focusing on improving overall performance in terms of accuracy, many other aspects have also been studied. Several systems presented at DCASE 2021 Task 4 are considered, and based on their analysis, conclusions are drawn about the possible future of sound event detection systems. Current directions in the development of modern audio analytics systems are also presented, including the study and use of various neural network architectures and of several data augmentation techniques, such as universal sound separation.
15

McDermott, Joshua H., Vinayak Agarwal, Fernanda De La Torre, and James Traer. "Understanding auditory intuitive physics." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A364. http://dx.doi.org/10.1121/10.0019174.

Abstract:
Sound is often caused by physical interactions between objects. Humans have some ability to discern these interactions by listening. This talk will describe our lab’s efforts to understand these perceptual inferences. Our research involves three inter-related approaches. First, we make physical and acoustic measurements of everyday objects and surfaces to characterize the sound of real-world objects. Second, we develop methods to synthesize sounds from physical interactions (impacts, scrapes, and rolls). Third, we use these sounds to investigate the perceptual processes and representations underlying auditory intuitive physics. The results reveal regularities in object sounds that have been internalized by humans and that are used to infer object properties and physical events from sound.
16

Sanal-Hayes, Nilihan E. M., Lawrence D. Hayes, Peter Walker, Jacqueline L. Mair, and J. Gavin Bremner. "Adults Do Not Appropriately Consider Mass Cues of Object Brightness and Pitch Sound to Judge Outcomes of Collision Events." Applied Sciences 12, no. 17 (August 24, 2022): 8463. http://dx.doi.org/10.3390/app12178463.

Abstract:
Adults judge darker objects to be heavier in weight than brighter objects, and objects which make lower pitch sounds as heavier in weight than objects making higher pitch sounds. It is unknown whether adults would make similar pairings if they saw these object properties in collision events. Two experiments examined adults’ judgements of computer-generated collision events based on object brightness and collision pitch sound. These experiments were designed as a precursor for an infant study, to validate the phenomenon. Results from the first experiment revealed that adults rated the bright ball likely event (where the bright ball displaced a stationary object a short distance after colliding with it) higher than the bright ball unlikely event. Conversely, adults rated the dark ball unlikely event (where the dark ball displaced a stationary object a short distance after colliding with it) higher than the dark ball likely event. Results from the second experiment demonstrated that adults judged the low pitch unlikely event (where the ball displaced a stationary object a short distance with a low pitch sound) higher than the low pitch likely event. Moreover, adults judged the high pitch likely event (where the ball displaced a stationary object a short distance with a high pitch sound) higher than the high pitch unlikely event. Results of these experiments suggest adults do not appropriately consider object brightness and pitch sound in collision events.
17

Marcell, Michael, Maria Malatanos, Connie Leahy, and Cadie Comeaux. "Identifying, rating, and remembering environmental sound events." Behavior Research Methods 39, no. 3 (August 2007): 561–69. http://dx.doi.org/10.3758/bf03193026.

18

Goricke, Rudolf. "Device for stereophonic recording of sound events." Journal of the Acoustical Society of America 93, no. 3 (March 1993): 1682. http://dx.doi.org/10.1121/1.406732.

19

Stanzial, Domenico, Giorgio Sacchi, and Giuliano Schiffrer. "Active playback of acoustic quadraphonic sound events." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3093. http://dx.doi.org/10.1121/1.2932933.

20

Noesselt, Toemme, Daniel Bergmann, Maria Hake, Hans-Jochen Heinze, and Robert Fendrich. "Sound increases the saliency of visual events." Brain Research 1220 (July 2008): 157–63. http://dx.doi.org/10.1016/j.brainres.2007.12.060.

21

Giladi, Ran. "Real-time identification of aircraft sound events." Transportation Research Part D: Transport and Environment 87 (October 2020): 102527. http://dx.doi.org/10.1016/j.trd.2020.102527.

22

Loughrin, John, Stacy Antle, Karamat Sistani, and Nanh Lovanh. "In Situ Acoustic Treatment of Anaerobic Digesters to Improve Biogas Yields." Environments 7, no. 2 (February 8, 2020): 11. http://dx.doi.org/10.3390/environments7020011.

Abstract:
Sound has the potential to increase biogas yields and enhance wastewater degradation in anaerobic digesters. To assess this potential, two pilot-scale digestion systems were operated, with one exposed to sound at less than 10 kHz and with one acting as a control. Sounds used were sine waves, broadband noise, and orchestral compositions. Weekly biogas production from sound-treated digesters was 18,900 L, more than twice that of the control digester. The sound-treated digesters were primarily exposed to orchestral compositions, because this made cavitational events easier to identify and because harmonic and amplitude shifts in music seem to induce more cavitation. Background recordings from the sound-treated digester were louder and had more cavitational events than those of the control digester, which we ascribe to enhanced microbial growth and the resulting accelerated sludge breakdown. Acoustic cavitation, vibrational energy imparted to wastewater and sludge, and mixing due to a release of bubbles from the sludge may all act in concert to accelerate wastewater degradation and boost biogas production.
23

Dunn, David, and René van Peer. "Music, Language and Environment." Leonardo Music Journal 9 (December 1999): 63–67. http://dx.doi.org/10.1162/096112199750316820.

Abstract:
Interviewed by music journalist René van Peer, the composer and sound recordist David Dunn discusses the sound work he has done in natural environments, his motivations for doing this work, and the thoughts and theories he has developed from it. Most of these works are unique events created for a specific time and location or for specific circumstances. In these events, the sounds generated by the players set up interactions with their immediate surroundings. Soundscape recordings are another aspect of Dunn's work. His work in different natural and cultural environments has enabled him to research areas where music and language intersect.
24

Thoen, Bart, Stijn Wielandt, and Lieven De Strycker. "Improving AoA Localization Accuracy in Wireless Acoustic Sensor Networks with Angular Probability Density Functions." Sensors 19, no. 4 (February 21, 2019): 900. http://dx.doi.org/10.3390/s19040900.

Abstract:
Advances in energy efficient electronic components create new opportunities for wireless acoustic sensor networks. Such sensors can be deployed to localize unwanted and unexpected sound events in surveillance applications, home assisted living, etc. This research focused on a wireless acoustic sensor network with low-profile low-power linear MEMS microphone arrays, enabling the retrieval of angular information of sound events. The angular information was wirelessly transmitted to a central server, which estimated the location of the sound event. Common angle-of-arrival localization approaches use triangulation, however this article presents a way of using angular probability density functions combined with a matching algorithm to localize sound events. First, two computationally efficient delay-based angle-of-arrival calculation methods were investigated. The matching algorithm is described and compared to a common triangulation approach. The two localization algorithms were experimentally evaluated in a 4.25 m by 9.20 m room, localizing white noise and vocal sounds. The results demonstrate the superior accuracy of the proposed matching algorithm over a common triangulation approach. When localizing a white noise source, an accuracy improvement of up to 114% was achieved.
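A hypothetical sketch of the matching idea (not the authors' implementation): each node contributes a Gaussian angular likelihood around its measured angle of arrival, and the likelihoods are combined over a grid of candidate positions. The node positions, angle values, and the 5 degree spread are assumptions.

```python
import numpy as np

def locate_by_angular_pdfs(nodes, angles_deg, sigma_deg=5.0, grid_res=0.05):
    """Grid search over candidate positions: every sensor node contributes a
    Gaussian angular likelihood centred on its measured angle of arrival
    (degrees, relative to the +x axis), and the node likelihoods are summed
    in the log domain."""
    xs = np.arange(0.0, 4.25, grid_res)
    ys = np.arange(0.0, 9.20, grid_res)
    gx, gy = np.meshgrid(xs, ys)
    log_like = np.zeros_like(gx)
    for (nx, ny), ang in zip(nodes, angles_deg):
        expected = np.degrees(np.arctan2(gy - ny, gx - nx))
        err = (expected - ang + 180.0) % 360.0 - 180.0   # wrapped angle error
        log_like += -0.5 * (err / sigma_deg) ** 2
    iy, ix = np.unravel_index(np.argmax(log_like), log_like.shape)
    return xs[ix], ys[iy]

# Hypothetical corner nodes in a 4.25 m by 9.20 m room and angles consistent
# with a source near (2.0, 4.5).
nodes = [(0.0, 0.0), (4.25, 0.0), (4.25, 9.20), (0.0, 9.20)]
angles = [66.0, 116.6, -115.6, -67.0]
print(locate_by_angular_pdfs(nodes, angles))
```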
25

JENSEN, KRISTOFFER. "Atomic noise." Organised Sound 10, no. 1 (April 2005): 75–81. http://dx.doi.org/10.1017/s1355771805000695.

Abstract:
Stochastic, unvoiced sounds are abundant in music and musical sounds. Without irregularities, the music and sounds become dull and lifeless. This paper presents work on unvoiced sounds that is believed to be useful in noise music. Several methods for obtaining a gradual change towards static white noise are presented. The random values (Dice), random events (Geiger) and random frequencies (Cymbal) noise types are shown to produce many useful sounds. Atomic noise encompasses all three noise types, while adding much more subtle variations and more life to the noise. Methods for obtaining a harmonic sound from the noise are introduced. These methods take advantage of the stochastic nature of the model, facilitating a gradual change from the stochastic sound to the noisy harmonic sound. In addition, the frozen noise repetitions are shown to produce unexpected pitch jumps with a potentially useful musical structure.
26

WANG, YA-LUN, CHIA-YU LIN, SHIH-PIN HUANG, CHIA-YUN LEE, MAO-NING TUANMU, and TZI-YUAN WANG. "Chub movement is attracted by the collision sounds associated with spawning activities." Zootaxa 5189, no. 1 (September 23, 2022): 308–17. http://dx.doi.org/10.11646/zootaxa.5189.1.27.

Abstract:
Cyprinids (carps, chubs and minnows) possess well-developed hearing and high sensitivity to sound pressure. The sensitive hearing may assist cyprinids with searching for food, territory defense, and mating behavior. Many paired fishes violently shake in sand and gravel while spawning in rivers. However, no study has examined the ecological importance of the collision sound made by the behavior. This study examined whether cohabitated chubs (Opsariichthys evolans and Zacco platypus) use the collision sound as a signal to locate spawning events so they can be a male satellite or egg eater. Three types of sounds (i.e., collision sound, music noise and ambient noise) were played with or without jerkbaits at the midstream of the Keelung River, Taiwan during the spawning season in 2018 and 2019. Generalized linear mixed models were then built to examine the effects of the sound types and the presence of jerkbaits on the number of individuals that the two chubs attracted. Results showed significantly different levels of attractiveness among the three sound types, with the collision sound attracting most fishes, including both females and males, followed by music noise and ambient noise. The presence of jerkbaits increased the number of fishes attracted, but the effect was only statistically marginally significant. These results suggest that the collision sound as an acoustic signal may be more important than a visual signal for the chubs to locate spawning events of other mating pairs, probably because of the longer transmission distance of the former. The present study demonstrates the ecological meanings of the collision sounds made in association with spawning activities of the chubs and implies that the native chub's spawning activities may be affected by the introduced Z. platypus. More studies on the interactions between these cohabitated chubs will benefit the conservation of native chubs.
27

Kitazaki, Michiteru. "Human temporal coordination of visual and auditory events in virtual reality." Seeing and Perceiving 25 (2012): 31. http://dx.doi.org/10.1163/187847612x646532.

Abstract:
Since the speed of sound is much slower than that of light, we sometimes hear a sound later than an accompanying light event (e.g., thunder and lightning at a far distance). However, Sugita and Suzuki (2003) reported that our brain coordinates a sound and its accompanying light to be perceived simultaneously within a 20 m distance. Thus, a light accompanied by a physically delayed sound is perceived simultaneously with the sound in the near field. We aimed to test whether this sound–light coordination occurs in a virtual-reality environment and to investigate the effects of binocular disparity and motion parallax. Six naive participants observed visual stimuli on a 120-inch screen in a darkroom and heard auditory stimuli through headphones. A ball was presented in a textured corridor and its distance from the participant was varied from 3–20 m. The ball changed to red before or after a short (10 ms) white noise burst (time difference: −120, −60, −30, 0, +30, +60, +120 ms), and participants judged the temporal order of the color change and the sound. We varied visual depth cues (binocular disparity and motion parallax) in the virtual-reality environment, and measured the physical delay at which visual and auditory events were perceived simultaneously. We did not find sound–light coordination without binocular disparity or motion parallax, but found it with both cues. These results suggest that binocular disparity and motion parallax are effective for sound–light coordination in a virtual-reality environment, and that richness of depth cues is important for the coordination.
28

Henriques, J. Tomás, Sofia Cavaco, and Nuno Correia. "See-Through-Sound." Journal of Information Technology Research 7, no. 1 (January 2014): 59–77. http://dx.doi.org/10.4018/jitr.2014010105.

Abstract:
See-Through-Sound is a research project that is aimed at creating an innovative solution for mapping visual information into the auditory realm, enabling a spatial environment and its unique features to be described as organized sonic events. Of particular interest to us has been the creation of a tool for people with vision disabilities to help them perceive and recognize objects and features of their environment through sonic representations of light, color and shapes. Applications for sighted people have also been explored as sonification methods for monitoring changes in color within a broad range of scenarios, as well as advanced motion detection. The benefits and promise of this technology are far reaching; it goes beyond mere medical and scientific applications. Ultimately the main goal of this research project is the attempt to systematize and create a universal vocabulary of sonic events that map visual data into auditory data, both for man and machine use.
29

Chemistruck, Mike, Andrew Allen, John Snyder, and Nikunj Raghuvanshi. "Efficient acoustic perception for virtual AI agents." Proceedings of the ACM on Computer Graphics and Interactive Techniques 4, no. 3 (September 22, 2021): 1–13. http://dx.doi.org/10.1145/3480139.

Abstract:
We model acoustic perception in AI agents efficiently within complex scenes with many sound events. The key idea is to employ perceptual parameters that capture how each sound event propagates through the scene to the agent's location. This naturally conforms virtual perception to human. We propose a simplified auditory masking model that limits localization capability in the presence of distracting sounds. We show that anisotropic reflections as well as the initial sound serve as useful localization cues. Our system is simple, fast, and modular and obtains natural results in our tests, letting agents navigate through passageways and portals by sound alone, and anticipate or track occluded but audible targets. Source code is provided.
30

Baliram Singh, Rajesh, Hanqi Zhuang, and Jeet Kiran Pawani. "Data Collection, Modeling, and Classification for Gunshot and Gunshot-like Audio Events: A Case Study." Sensors 21, no. 21 (November 3, 2021): 7320. http://dx.doi.org/10.3390/s21217320.

Abstract:
Distinguishing between a dangerous audio event like a gun firing and other non-life-threatening events, such as a plastic bag bursting, can mean the difference between life and death and, therefore, the necessary and unnecessary deployment of public safety personnel. Sounds generated by plastic bag explosions are often confused with real gunshot sounds, by either humans or computer algorithms. As a case study, the research reported in this paper offers insight into sounds of plastic bag explosions and gunshots. An experimental study in this research reveals that a deep learning-based classification model trained with a popular urban sound dataset containing gunshot sounds cannot distinguish plastic bag pop sounds from gunshot sounds. This study further shows that the same deep learning model, if trained with a dataset containing plastic pop sounds, can effectively detect the non-life-threatening sounds. For this purpose, first, a collection of plastic bag-popping sounds was recorded in different environments with varying parameters, such as plastic bag size and distance from the recording microphones. The audio clips’ duration ranged from 400 ms to 600 ms. This collection of data was then used, together with a gunshot sound dataset, to train a classification model based on a convolutional neural network (CNN) to differentiate life-threatening gunshot events from non-life-threatening plastic bag explosion events. A comparison between two feature extraction methods, the Mel-frequency cepstral coefficients (MFCC) and Mel-spectrograms, was also done. Experimental studies conducted in this research show that once the plastic bag pop sounds are injected into model training, the CNN classification model performs well in distinguishing actual gunshot sounds from plastic bag sounds.
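A brief, hypothetical sketch of the two feature types compared in this study, extracted with librosa for one short clip; the file name and parameter choices are assumptions, not the authors' configuration.

```python
import numpy as np
import librosa

def clip_features(path: str, sr: int = 22050, duration: float = 0.6):
    """Return the two feature types compared above for one short clip:
    a dB-scaled Mel spectrogram and an MFCC matrix."""
    y, sr = librosa.load(path, sr=sr, duration=duration, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mel_db, mfcc

# Hypothetical usage; after padding the 400-600 ms clips to a common length,
# either representation can be fed to a small CNN classifier.
# mel_db, mfcc = clip_features("plastic_bag_pop_001.wav")
```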
31

Schneider, Sebastian, and Paul Wilhelm Dierkes. "Localize Animal Sound Events Reliably (LASER): A New Software for Sound Localization in Zoos." Journal of Zoological and Botanical Gardens 2, no. 2 (April 1, 2021): 146–63. http://dx.doi.org/10.3390/jzbg2020011.

Abstract:
Locating a vocalizing animal can be useful in many fields of bioacoustics and behavioral research, and is often done in the wild, covering large areas. In zoos, however, the application of this method becomes particularly difficult, because, on the one hand, the animals are in a relatively small area and, on the other hand, reverberant environments and background noise complicate the analysis. Nevertheless, by localizing and analyzing animal sounds, valuable information on physiological state, sex, subspecies, reproductive state, social status, and animal welfare can be gathered. Therefore, we developed a sound localization software that is able to estimate the position of a vocalizing animal precisely, making it possible to assign the vocalization to the corresponding individual, even under difficult conditions. In this study, the accuracy and reliability of the software is tested under various conditions. Different vocalizations were played back through a loudspeaker and recorded with several microphones to verify the accuracy. In addition, tests were carried out under real conditions using the example of the giant otter enclosure at Dortmund Zoo, Germany. The results show that the software can estimate the correct position of a sound source with a high accuracy (median of the deviation 0.234 m). Consequently, this software could make an important contribution to basic research via position determination and the associated differentiation of individuals, and could be relevant in a long-term application for monitoring animal welfare in zoos.
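Sound localization of this kind typically builds on time-difference-of-arrival estimates between microphone pairs. The sketch below shows a generic cross-correlation delay estimate (an assumption about the general approach, not the LASER implementation).

```python
import numpy as np
from scipy.signal import correlate

def tdoa_seconds(mic_a: np.ndarray, mic_b: np.ndarray, fs: int) -> float:
    """Estimate the time difference of arrival between two synchronized
    microphone channels from the peak of their cross-correlation."""
    corr = correlate(mic_a, mic_b, mode="full")
    lag = int(np.argmax(corr)) - (len(mic_b) - 1)
    return lag / fs

# Combining the delay with the speed of sound (~343 m/s) constrains the source
# to a hyperbola for each microphone pair; intersecting several such curves
# gives a position estimate, which can then be assigned to an individual animal.
```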
32

Ntalampiras, Stavros. "Audio Pattern Recognition of Baby Crying Sound Events." Journal of the Audio Engineering Society 63, no. 5 (May 22, 2015): 358–69. http://dx.doi.org/10.17743/jaes.2015.0025.

33

Morfi, Veronica, Robert F. Lachlan, and Dan Stowell. "Deep perceptual embeddings for unlabelled animal sound events." Journal of the Acoustical Society of America 150, no. 1 (July 2021): 2–11. http://dx.doi.org/10.1121/10.0005475.

34

Castelvecchi, Davide. "Using sound to explore events of the Universe." Nature 597, no. 7874 (August 30, 2021): 144. http://dx.doi.org/10.1038/d41586-021-02347-3.

35

Chen, Yi-Chuan, and Su-Ling Yeh. "Visual events modulated by sound in repetition blindness." Psychonomic Bulletin & Review 15, no. 2 (April 2008): 404–8. http://dx.doi.org/10.3758/pbr.15.2.404.

36

Lemaitre, Guillaume, Arnaud Dessein, Patrick Susini, and Karine Aura. "Vocal Imitations and the Identification of Sound Events." Ecological Psychology 23, no. 4 (November 7, 2011): 267–307. http://dx.doi.org/10.1080/10407413.2011.617225.

37

Ogg, Mattson, Thomas A. Carlson, and L. Robert Slevc. "The Rapid Emergence of Auditory Object Representations in Cortex Reflect Central Acoustic Attributes." Journal of Cognitive Neuroscience 32, no. 1 (January 2020): 111–23. http://dx.doi.org/10.1162/jocn_a_01472.

Abstract:
Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
38

Franzoni, Valentina, Giulio Biondi, and Alfredo Milani. "Emotional sounds of crowds: spectrogram-based analysis using deep learning." Multimedia Tools and Applications 79, no. 47-48 (August 17, 2020): 36063–75. http://dx.doi.org/10.1007/s11042-020-09428-x.

Abstract:
Crowds express emotions as a collective individual, which is evident from the sounds that a crowd produces in particular events, e.g., collective booing, laughing or cheering in sports matches, movies, theaters, concerts, political demonstrations, and riots. A critical question concerning the innovative concept of crowd emotions is whether the emotional content of crowd sounds can be characterized by frequency-amplitude features, using analysis techniques similar to those applied on individual voices, where deep learning classification is applied to spectrogram images derived by sound transformations. In this work, we present a technique based on the generation of sound spectrograms from fragments of fixed length, extracted from original audio clips recorded in high-attendance events, where the crowd acts as a collective individual. Transfer learning techniques are used on a convolutional neural network, pre-trained on low-level features using the well-known ImageNet extensive dataset of visual knowledge. The original sound clips are filtered and normalized in amplitude for a correct spectrogram generation, on which we fine-tune the domain-specific features. Experiments held on the finally trained Convolutional Neural Network show promising performances of the proposed model to classify the emotions of the crowd.
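A hypothetical sketch of the fragment-to-spectrogram step described above (the fragment length, sampling rate, and file name are assumptions, and the transfer-learning stage itself is omitted).

```python
import numpy as np
import librosa

def fragment_spectrograms(path: str, frag_s: float = 2.0, sr: int = 22050):
    """Cut a recording into fixed-length fragments and turn each fragment into
    a dB-scaled Mel spectrogram, i.e. the image-like input that a pre-trained
    CNN can be fine-tuned on."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    y = y / (np.max(np.abs(y)) + 1e-9)          # amplitude normalisation
    frag_len = int(frag_s * sr)
    images = []
    for start in range(0, len(y) - frag_len + 1, frag_len):
        frag = y[start:start + frag_len]
        mel = librosa.feature.melspectrogram(y=frag, sr=sr, n_mels=128)
        images.append(librosa.power_to_db(mel, ref=np.max))
    return images

# Hypothetical usage; each array in the list can be saved or stacked as an
# "image" for transfer learning with a CNN pre-trained on ImageNet.
# specs = fragment_spectrograms("crowd_cheering_clip.wav")
```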
39

Gavrovska, Ana, Goran Zajić, Vesna Bogdanović, Irini Reljin, and Branimir Reljin. "Identification of S1 and S2 Heart Sound Patterns Based on Fractal Theory and Shape Context." Complexity 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/1580414.

Abstract:
There has been a sustained effort in the research community over the recent years to develop algorithms that automatically analyze heart sounds. One of the major challenges is identifying primary heart sounds, S1 and S2, as they represent reference events for the analysis. The study presented in this paper analyzes the possibility of improving the structure characterization based on shape context and structure assessment using a small number of descriptors. Particularly, for the primary sound characterization, an adaptive waveform filtering is applied based on blanket fractal dimension for each preprocessed sound candidate belonging to pediatric subjects. This is followed by applying the shape based methods selected for the structure assessment of primary heart sounds. Different methods, such as the fractal ones, are used for the comparison. The analysis of heart sound patterns is performed using support vector machine classifier showing promising results (above 95% accuracy). The obtained results suggest that it is possible to improve the identification process using the shape related methods which are rarely applied. This can be helpful for applications involving automatic heart sound analysis.
40

Ciaburro, Giuseppe. "Sound Event Detection in Underground Parking Garage Using Convolutional Neural Network." Big Data and Cognitive Computing 4, no. 3 (August 17, 2020): 20. http://dx.doi.org/10.3390/bdcc4030020.

Abstract:
Parking is a crucial element in urban mobility management. The availability of parking areas makes it easier to use a service, determining its success. Proper parking management allows economic operators located nearby to increase their business revenue. Underground parking areas during off-peak hours are uncrowded places, where user safety is guaranteed by company overseers. Due to the large size, ensuring adequate surveillance would require many operators to increase the costs of parking fees. To reduce costs, video surveillance systems are used, in which an operator monitors many areas. However, some activities are beyond the control of this technology. In this work, a procedure to identify sound events in an underground garage is developed. The aim of the work is to detect sounds identifying dangerous situations and to activate an automatic alert that draws the attention of surveillance in that area. To do this, the sounds of a parking sector were detected with the use of sound sensors. These sounds were analyzed by a sound detector based on convolutional neural networks. The procedure returned high accuracy in identifying a car crash in an underground parking area.
41

Luzzi, Sergio, Chiara Bartalucci, Sara Delle Macchie, Rosella Natale, and Paola Pulella. "THE INTERNATIONAL YEAR OF SOUND INITIATIVES FOR WORLDWIDE AWARENESS ABOUT THE IMPORTANCE OF SOUND." Akustika 39 (2021): 135. http://dx.doi.org/10.36336/akustika202139135.

Abstract:
The International Year of Sound is a global initiative promoted by the UNESCO Charter of Sound 39C/59 (2017): “The Importance of Sound in Today’s World: Promoting Best Practices”. It consists of coordinated activities at regional, national, and international levels, organized by the International Commission for Acoustics. These activities aim to stimulate understanding of the importance of sound in all aspects of our society throughout the world. In this paper, the aim of the International Year of Sound is described, together with the organization of activities and planned initiatives. Although the pandemic has forced the rescheduling of most of the events, the International Year of Sound has continued, thanks to the awareness-raising projects of the National Commissions for Acoustics. Particular attention is given to “My world of sounds”, the international competition for students. Indeed, it has been observed that the awareness of younger generations plays a crucial role in the spread of correct acoustic behaviours.
42

Miyauchi, Ryota, Dea-Gee Kang, Yukio Iwaya, and Yôiti Suzuki. "Relative Localization of Auditory and Visual Events Presented in Peripheral Visual Field." Multisensory Research 27, no. 1 (2014): 1–16. http://dx.doi.org/10.1163/22134808-00002442.

Abstract:
The brain apparently remaps the perceived locations of simultaneous auditory and visual events into a unified audio-visual space to integrate and/or compare multisensory inputs. However, there is little qualitative or quantitative data on how simultaneous auditory and visual events are located in the peripheral visual field (i.e., outside a few degrees of the fovea). We presented a sound burst and a flashing light simultaneously not only in the central visual field but also in the peripheral visual field and measured the relative perceived locations of the sound and flash. The results revealed that the sound and flash were perceptually located at the same location when the sound was presented at a 5° periphery of the flash, even when the participants’ eyes were fixed. Measurements of the unisensory locations of each sound and flash in a pointing task demonstrated that the perceived location of the sound shifted toward the front, while the perceived location of the flash shifted toward the periphery. As a result, the discrepancy between the perceptual location of the sound and the flash was around 4°. This suggests that the brain maps the unisensory locations of auditory and visual events into a unified audio-visual space, enabling it to generate unisensory spatial information about the events.
43

BAILES, SARA JANE, ALEXANDRINA HEMSLEY, ROYONA MITRA, RAJNI SHAH, ARABELLA STANGER, and JEREMY TOUSSAINT-BAPTISTE. "Unsettling Sound: Some Traces." Theatre Research International 46, no. 2 (July 2021): 230–45. http://dx.doi.org/10.1017/s0307883321000134.

Abstract:
Unsettling Sound examined the at once destabilizing and liberatory experiential dimensions of sounding through three dialogue-events, opening up possible meanings, sensations, ideological and bodily potentialities of the sonic, and the sonic potentialities of bodies. This series of conversations, co-devised by Sara Jane Bailes and Arabella Stanger, took place remotely and online. Here, traces of the events are gathered into print. Alexandrina Hemsley reflects on their exchange with Seke Chimutengwende and Xana in the midst of creating an audio version of the stage show, Black Holes (2018). Rajni Shah and Royona Mitra share voice letters on the impossibilities of anti-racist institutions and dreaming new worlds. Jeremy Toussaint-Baptiste shares a new conceptual score, Étude for Not Knowing #1 (2021).
44

Hayashi, Shota, Meiyo Tamaoka, Tomoya Tateishi, Yuki Murota, Ibuki Handa, and Yasunari Miyazaki. "A New Feature with the Potential to Detect the Severity of Obstructive Sleep Apnoea via Snoring Sound Analysis." International Journal of Environmental Research and Public Health 17, no. 8 (April 24, 2020): 2951. http://dx.doi.org/10.3390/ijerph17082951.

Abstract:
The severity of obstructive sleep apnoea (OSA) is diagnosed with polysomnography (PSG), during which patients are monitored by over 20 physiological sensors overnight. These sensors often bother patients and may affect patients’ sleep and OSA. This study aimed to investigate a method for analyzing patient snore sounds to detect the severity of OSA. Using a microphone placed at the patient’s bedside, the snoring and breathing sounds of 22 participants were recorded while they simultaneously underwent PSG. We examined some features from the snoring and breathing sounds and examined the correlation between these features and the snore-specific apnoea-hypopnea index (ssAHI), defined as the number of apnoea and hypopnea events during the hour before a snore episode. Statistical analyses revealed that the ssAHI was positively correlated with the Mel frequency cepstral coefficients (MFCC) and volume information (VI). Based on clustering results, mild snore sound episodes and snore sound episodes from mild OSA patients were mainly classified into cluster 1. The results of clustering severe snore sound episodes and snore sound episodes from severe OSA patients were mainly classified into cluster 2. The features of snoring sounds that we identified have the potential to detect the severity of OSA.
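A small, hypothetical sketch of computing MFCC and volume-style features per snore episode and correlating them with ssAHI values; the episode list, sampling rate, and ssAHI array are placeholders, not the authors' data or code.

```python
import numpy as np
import librosa
from scipy.stats import pearsonr

def episode_features(y: np.ndarray, sr: int) -> np.ndarray:
    """Features for one snore episode: time-averaged MFCCs plus a simple
    volume measure (mean RMS energy), loosely mirroring the MFCC and VI
    features mentioned above."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    volume = librosa.feature.rms(y=y).mean()
    return np.append(mfcc, volume)

# `episodes` (list of waveforms) and `ss_ahi` (one value per episode) are
# placeholders; each feature is then correlated with the snore-specific AHI.
# feats = np.vstack([episode_features(y, 16000) for y in episodes])
# for j in range(feats.shape[1]):
#     r, p = pearsonr(feats[:, j], ss_ahi)
#     print(f"feature {j}: r={r:.2f}, p={p:.3f}")
```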
45

Ittelson, William H. "The Perception of Nonmaterial Objects and Events." Leonardo 40, no. 3 (June 2007): 279–83. http://dx.doi.org/10.1162/leon.2007.40.3.279.

Abstract:
All animals receive light and sound from the surrounding world and use this input to provide information about the material properties of that world. Humans, in addition, are able to utilize information in light and sound that has nothing to do with its material source but is about objects and events that are not materially present and may have no material existence at all. The author argues that this perceptual capacity is a necessary condition for the development of the arts, the humanities, science and all that is considered uniquely human.
Style APA, Harvard, Vancouver, ISO itp.
46

Kikuchi, Mumi, Tomonari Akamatsu, Daniel Gonzalez-Socoloske, Diogo A. de Souza, Leon D. Olivera-Gomez i Vera M. F. da Silva. "Detection of manatee feeding events by animal-borne underwater sound recorders". Journal of the Marine Biological Association of the United Kingdom 94, nr 6 (13.11.2013): 1139–46. http://dx.doi.org/10.1017/s0025315413001343.

Pełny tekst źródła
Streszczenie:
Studies of the feeding behaviour of aquatic species in their natural environment are difficult, since direct observations are rarely possible. In this study, a newly developed animal-borne underwater sound recorder (AUSOMS-mini) was applied to captive Amazonian (Trichechus inunguis) and Antillean (Trichechus manatus manatus) manatees in order to directly record their feeding sounds. Different species of aquatic plants were offered to the manatees separately. Feeding sounds were automatically extracted using a custom program developed in MATLAB. Compared with ground-truth data, the program correctly detected 65–79% of the feeding events, with a false alarm rate of 7.3% or lower, which suggests that this methodology is a useful way to record manatee feeding events. All manatees foraged during both the daytime and the night-time; however, they tended to be less active and to masticate more slowly at night than during the day. Mastication cycle duration depended on the plant species and the individual. This animal-borne acoustic monitoring system could greatly increase our knowledge of manatee feeding ecology by providing the exact time, duration and number of feeding events, and potentially the plant species being fed on.
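The abstract does not describe the algorithm of the custom MATLAB extraction program, so the following is only a generic, assumption-based sketch of how transient feeding or mastication sounds might be pulled out of a recording with a simple energy threshold in Python.

```python
# Generic energy-threshold detector for transient feeding/mastication sounds.
# This is an illustrative stand-in, not a reimplementation of the paper's program.
import numpy as np
import librosa

def detect_events(path, sr=48000, frame_s=0.02, hop_s=0.01, k=3.0):
    """Return (start, end) times, in seconds, of frames whose RMS energy exceeds
    the median energy of the recording by k standard deviations."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    frame, hop = int(frame_s * sr), int(hop_s * sr)
    rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]
    threshold = np.median(rms) + k * rms.std()
    active = rms > threshold
    events, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            events.append((start * hop / sr, i * hop / sr))
            start = None
    if start is not None:                       # event still open at end of file
        events.append((start * hop / sr, len(active) * hop / sr))
    return events
```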
Style APA, Harvard, Vancouver, ISO itp.
47

Jadhav, Swapnil, Sarvesh Karpe i Siuli Das. "Sound Classification Using Python". ITM Web of Conferences 40 (2021): 03024. http://dx.doi.org/10.1051/itmconf/20214003024.

Pełny tekst źródła
Streszczenie:
Sound plays a significant part in human life. It is one of the fundamental sensory inputs that we receive from the environment, and its components have three principal attributes: amplitude (the loudness of the sound), frequency (the pitch of the sound) and timbre (the quality or identity of the sound, for example the difference in sound between a piano and a violin). A sound is an event generated by an action. Humans are highly efficient at learning and recognizing new and varied types of sounds and sound events. There is a great deal of research on automatic sound classification, and it is used in various real-world applications. The paper presents an examination of a background-noise classifier based on a pattern-recognition approach using a neural network. The signals submitted to the neural network are described by a set of 12 MFCC (Mel-frequency cepstral coefficient) parameters of the kind routinely computed at the front end of a mobile terminal. The performance of the classifier, assessed in terms of percent misclassification, shows an accuracy ranging between 73% and 95% depending on the duration of the decision window. Feeding sound to a machine and expecting an output is regarded as a deep-learning task that can be performed with high accuracy. This technology is used in smartphones with mobile assistants such as Siri, Alexa and Google Assistant. On the Google speech recognition dataset, over 94 percent accuracy is obtained when identifying one of 20 words, silence or unknown. Recognizing audio or sound events systematically, processing them for identification and producing an output is a very difficult task. We address it using the Python programming language and deep-learning techniques. This is a basic model that we are developing as a step towards a more advanced model that can help society and also represent the innovative ideas of engineering students.
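A hedged sketch of the kind of pipeline outlined above, 12 MFCCs per decision window feeding a small neural-network classifier, is given below. The paper's actual architecture, training data and decision-window handling are not specified in the abstract, so librosa and scikit-learn are used here purely as stand-ins.

```python
# Illustrative MFCC + neural-network sound classifier (not the paper's exact model).
# Assumptions: librosa for 12 MFCCs averaged over a decision window, and a small
# scikit-learn MLP as the classifier.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def window_mfcc(path, sr=16000, n_mfcc=12):
    """12 MFCCs averaged over the whole clip, i.e. one decision-window vector."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def train_classifier(paths, labels):
    """Train a small MLP on MFCC window vectors and report held-out accuracy."""
    X = np.stack([window_mfcc(p) for p in paths])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf
```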
Style APA, Harvard, Vancouver, ISO itp.
48

Wyerman, Barry, i Robert Unetich. "Pickleball Sound 101 - The Statistics of Pickleball Sound and a Recommended Noise Standard for Pickleball Play". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 266, nr 2 (25.05.2023): 1–9. http://dx.doi.org/10.3397/nc_2023_0001.

Pełny tekst źródła
Streszczenie:
Sound from a pickleball game is a random series of impact sounds, one each time a paddle strikes the ball. The sound level of these impacts varies with the paddles and balls used, the skill of each player, and the force of each impact. A simplified measurement method and a common metric were used to measure the time-varying nature of pickleball impacts as individual events. The sound measured from pickleball play exhibited a normal distribution, which allowed statistical techniques to be applied to the measurements. Estimates were made of the percentage of time that maximum sound levels would be exceeded. Current noise ordinances fail to quantify the annoyance caused by these short-duration random impacts, so measured pickleball sound levels often do not constitute a noise violation yet remain a source of complaints from nearby residents. Noise limits for pickleball play are proposed using this common metric so that pickleball sound can be effectively quantified and managed.
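The exceedance estimate mentioned above follows directly from the normal-distribution assumption: if the per-impact maximum levels have mean μ and standard deviation σ, the share of impacts above a limit L is 1 − Φ((L − μ)/σ). The short sketch below uses purely hypothetical values, not the paper's measurements.

```python
# Exceedance estimate for normally distributed impact levels (illustrative values only).
from scipy.stats import norm

mean_lmax = 62.0   # hypothetical mean of per-impact maximum level, dB
std_lmax = 4.0     # hypothetical standard deviation, dB
limit = 70.0       # hypothetical noise limit, dB

# Fraction of impacts expected to exceed the limit: P(L > limit) = 1 - CDF(limit).
p_exceed = norm.sf(limit, loc=mean_lmax, scale=std_lmax)
print(f"Expected share of impacts above {limit} dB: {p_exceed:.1%}")
```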
Style APA, Harvard, Vancouver, ISO itp.
49

Barcelo Perez, Carlos, i Yamile Gonzalez Sanchez. "Non usual urban sounds in a western neighborhood of Havana city". MOJ Public Health 8, nr 4 (19.07.2019): 130–34. http://dx.doi.org/10.15406/mojph.2019.08.00297.

Pełny tekst źródła
Streszczenie:
The main noise source in the city of Havana is road traffic, but many noise complaints concern mechanical sources and audio exposure, such as the noise induced by air conditioners or the sound that escapes from entertainment venues. Last year, American diplomats reported complaints of noise exposure attributed to the so-called "noise attacks". This paper presents an acoustic characterization of the possible noise events experienced, including the audible structure of these acoustic episodes (the target sound). The article could help clarify the nature of the so-called noise attacks, which have worsened diplomatic relations between the United States and Cuba. Recordings of the supplied target sound were assessed, and the background noise was characterized through sound level measurements. Night-time background noise was observed to be connected in several instances to the presence or absence of cricket calls in the green spaces of a residential neighbourhood in western Havana. The target sound is described, in terms of its probable noise level and frequency structure, by two main shapes: one reflects broadband noise with a high-pitched component superimposed, while the other lacks the high-frequency (pitch) component. Several recorded target-sound episodes show a certain likeness to the background noise with cricket sounds added. Ultrasound and microwaves are not supported as the source of the target sounds connected to the noise attacks. The estimated sound levels and frequency structure do not support the hypothesis of induced health impairment. At times the target sound level is weak, close to the background, and directional sound does not appear able to penetrate an insulated built environment.
Style APA, Harvard, Vancouver, ISO itp.
50

Song, Shen, Cong Zhang i Xinyuan You. "Decoupling Temporal Convolutional Networks Model in Sound Event Detection and Localization". 網際網路技術學刊 24, nr 1 (styczeń 2023): 089–99. http://dx.doi.org/10.53106/160792642023012401009.

Pełny tekst źródła
Streszczenie:
Sound event detection is sensitive to network depth, and increasing the depth reduces event detection ability, whereas event localization requires a deeper network. In this paper, the accuracy of the joint event detection and localization task is improved by decoupling SELD-TCN. The joint task is handled through early fusion of the primary features and by using the sound event detection (SED) branch, whose generalization ability is enhanced, as a mask for the DOA branch, while the advanced feature extraction and recognition of the two branches are carried out separately in different ways. The primary features are extracted by resnet16-dilated instead of CNN-Pool. The SED branch uses linear temporal convolution, imitating a linear classifier, to perform sound event detection, while ED-TCN is used for the localization branch. Joint training of the DOA and SED branches degrades both; using the most appropriate structure for each branch and masking the DOA branch with the SED branch improves the performance of both. On the TUT Sound Events 2019 dataset, the DOA error reached 6.73, 8.8 and 30.7 with no overlapping sources, two overlapping sources and three overlapping sources, respectively. The SED accuracy was significantly improved, and the DOA error was significantly reduced.
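The decoupling and masking idea can be sketched generically: two branches with different temporal-convolution settings, with the SED activity gating the DOA output. The snippet below is a PyTorch illustration under assumed layer sizes; it does not reproduce the paper's resnet16-dilated front end, ED-TCN structure or training setup.

```python
# Generic sketch of an SED branch masking a DOA branch (not the paper's SELD-TCN).
# Assumptions: PyTorch, one dilated temporal-convolution block per branch, and
# sigmoid SED activity used to gate the per-class DOA regression output.
import torch
import torch.nn as nn

class DilatedTCNBlock(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.norm = nn.BatchNorm1d(channels)

    def forward(self, x):                      # x: (batch, channels, time)
        return torch.relu(self.norm(self.conv(x))) + x

class DecoupledSELD(nn.Module):
    def __init__(self, channels=64, n_classes=11):
        super().__init__()
        self.sed_branch = DilatedTCNBlock(channels, dilation=1)
        self.doa_branch = DilatedTCNBlock(channels, dilation=2)
        self.sed_head = nn.Conv1d(channels, n_classes, kernel_size=1)
        self.doa_head = nn.Conv1d(channels, 3 * n_classes, kernel_size=1)

    def forward(self, feats):                  # feats: shared front-end features
        sed = torch.sigmoid(self.sed_head(self.sed_branch(feats)))   # (b, C, t)
        doa = self.doa_head(self.doa_branch(feats))                   # (b, 3C, t)
        b, _, t = doa.shape
        doa = doa.view(b, 3, -1, t)
        # Mask the per-class DOA estimates with the SED activity (decoupled design).
        return sed, doa * sed.unsqueeze(1)

# Example: feats = torch.randn(2, 64, 200); sed, doa = DecoupledSELD()(feats)
```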
Style APA, Harvard, Vancouver, ISO itp.