Academic literature on the topic 'Data over sound'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data over sound.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Data over sound"

1

Fejfar, Jiří, Jiří Šťastný, Martin Pokorný, Jiří Balej, and Petr Zach. "Analysis of sound data streamed over the network." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 61, no. 7 (2013): 2105–10. http://dx.doi.org/10.11118/actaun201361072105.

Abstract:
In this paper we inspect the difference between an original sound recording and the signal captured after streaming that recording over a network under heavy traffic. Several kinds of failures occur in the captured recording because of network congestion. We try to find a method for evaluating the correctness of streamed audio. Metrics are usually based on human perception of the signal, such as “the signal is clear, without audible failures”, “the signal has some failures but is understandable”, or “the signal is inarticulate”. These approaches need to be statistically evaluated on a broad set of respondents, which is time and resource consuming. We instead propose metrics based on signal properties that allow us to compare the original and captured recordings. In this paper we use the Dynamic Time Warping algorithm (Müller, 2007), commonly used for time series comparison. Other time series exploration approaches can be found in (Fejfar, 2011) and (Fejfar, 2012). The data were acquired in our network laboratory, where network traffic was simulated by downloading files and streaming audio and video simultaneously. Our former experiment inspected Quality of Service (QoS) and its impact on failures in the received audio data stream; this experiment focuses on the comparison of sound recordings rather than on network mechanisms. We focus on a real-time audio stream, such as a telephone call, where it is not possible to stream audio in advance into a buffer; instead, the delay between recording the speaker's voice and replaying it to the listener must be kept as small as possible. We use the RTP protocol for streaming audio.
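To make the comparison step concrete, here is a minimal Python sketch of Dynamic Time Warping applied to frame-energy sequences of an original and a captured recording. It illustrates the general technique rather than the authors' implementation; the frame-energy feature, frame length, and the way the waveforms are loaded are assumptions.

```python
import numpy as np

def frame_energy(signal, frame_len=1024):
    """Collapse a waveform into a per-frame RMS energy sequence."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Hypothetical usage: `original` and `captured` are mono waveforms loaded
# elsewhere (e.g., with soundfile or scipy.io.wavfile).
# score = dtw_distance(frame_energy(original), frame_energy(captured))
# A larger score suggests more streaming-induced degradation.
```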
2

Kim, Hyun-Don, Kazunori Komatani, Tetsuya Ogata, and Hiroshi G. Okuno. "Binaural Active Audition for Humanoid Robots to Localise Speech over Entire Azimuth Range." Applied Bionics and Biomechanics 6, no. 3-4 (2009): 355–67. http://dx.doi.org/10.1155/2009/817874.

Abstract:
We applied motion theory to robot audition to improve the inadequate performance. Motions are critical for overcoming the ambiguity and sparseness of information obtained by two microphones. To realise this, we first designed a sound source localisation system integrated with cross-power spectrum phase (CSP) analysis and an EM algorithm. The CSP of sound signals obtained with only two microphones was used to localise the sound source without having to measure impulse response data. The expectation-maximisation (EM) algorithm helped the system to cope with several moving sound sources and reduce localisation errors. We then proposed a way of constructing a database for moving sounds to evaluate binaural sound source localisation. We evaluated our sound localisation method using artificial moving sounds and confirmed that it could effectively localise moving sounds slower than 1.125 rad/s. Consequently, we solved the problem of distinguishing whether sounds were coming from the front or rear by rotating and/or tipping the robot's head that was equipped with only two microphones. Our system was applied to a humanoid robot called SIG2, and we confirmed its ability to localise sounds over the entire azimuth range as the success rates for sound localisation in the front and rear areas were 97.6% and 75.6% respectively.
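Cross-power spectrum phase analysis with two microphones reduces, at its core, to estimating the inter-microphone time delay from the phase of the cross-spectrum (closely related to GCC-PHAT). The sketch below is a generic illustration under assumed variable names and sample rate, not the system described in the paper.

```python
import numpy as np

def csp_delay(left, right, fs):
    """Estimate the inter-microphone time delay (in seconds) from the
    cross-power spectrum phase (equivalent in spirit to GCC-PHAT)."""
    n = len(left) + len(right)
    L = np.fft.rfft(left, n=n)
    R = np.fft.rfft(right, n=n)
    cross = L * np.conj(R)
    cross /= np.abs(cross) + 1e-12              # keep only the phase
    corr = np.fft.irfft(cross, n=n)
    corr = np.concatenate((corr[-(n // 2):], corr[:n // 2 + 1]))
    lag = int(np.argmax(np.abs(corr))) - n // 2  # lag in samples
    return lag / fs

# Hypothetical usage: with microphone spacing d (metres) and speed of
# sound c = 343 m/s, the azimuth follows from theta = arcsin(c * delay / d).
# `left`, `right`, and `fs` would come from a stereo recording loaded elsewhere.
```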
3

Saldanha, Jane, Shaunak Chakraborty, Shruti Patil, Ketan Kotecha, Satish Kumar, and Anand Nayyar. "Data augmentation using Variational Autoencoders for improvement of respiratory disease classification." PLOS ONE 17, no. 8 (2022): e0266467. http://dx.doi.org/10.1371/journal.pone.0266467.

Abstract:
Computerized auscultation of lung sounds is gaining importance today with the availability of lung sound recordings and their potential for overcoming the limitations of traditional diagnosis methods for respiratory diseases. The publicly available ICBHI respiratory sounds database is severely imbalanced, making it difficult for a deep learning model to generalize and provide reliable results. This work aims to synthesize respiratory sounds of various categories using variants of Variational Autoencoders, namely the Multilayer Perceptron VAE (MLP-VAE), Convolutional VAE (CVAE), and Conditional VAE, and to compare the influence of augmenting the imbalanced dataset on the performance of various lung sound classification models. We evaluated the quality of the synthetic respiratory sounds using metrics such as Fréchet Audio Distance (FAD), Cross-Correlation, and Mel Cepstral Distortion. Our results showed that the MLP-VAE achieved an average FAD of 12.42 over all classes, whereas the CVAE and Conditional VAE achieved average FADs of 11.58 and 11.64, respectively. A significant improvement in the classification performance metrics was observed upon augmenting the imbalanced dataset for certain minority classes, and a marginal improvement for the other classes. Hence, our work shows that deep learning-based lung sound classification models are not only a promising alternative to traditional methods but can also achieve a significant performance boost when an imbalanced training set is augmented.
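As a rough illustration of the MLP-VAE variant compared in the paper, the following PyTorch sketch defines a generic variational autoencoder over fixed-length feature vectors (for example, flattened mel spectrograms). The layer sizes and loss weighting are arbitrary choices, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MLPVAE(nn.Module):
    """Minimal MLP variational autoencoder over fixed-length vectors."""
    def __init__(self, input_dim=1024, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, input_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Reconstruction error plus KL divergence to the unit Gaussian prior."""
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# After training on minority-class spectrogram vectors, new samples for
# augmentation can be drawn by decoding z ~ N(0, I), e.g.:
#   synthetic = model.decoder(torch.randn(16, 32))
```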
4

Aiello, Luca Maria, Rossano Schifanella, Daniele Quercia, and Francesco Aletta. "Chatty maps: constructing sound maps of urban areas from social media data." Royal Society Open Science 3, no. 3 (2016): 150690. http://dx.doi.org/10.1098/rsos.150690.

Abstract:
Urban sound has a huge influence over how we perceive places. Yet, city planning is concerned mainly with noise, simply because annoying sounds come to the attention of city officials in the form of complaints, whereas general urban sounds do not come to the attention as they cannot be easily captured at city scale. To capture both unpleasant and pleasant sounds, we applied a new methodology that relies on tagging information of georeferenced pictures to the cities of London and Barcelona. To begin with, we compiled the first urban sound dictionary and compared it with the one produced by collating insights from the literature: ours was experimentally more valid (if correlated with official noise pollution levels) and offered a wider geographical coverage. From picture tags, we then studied the relationship between soundscapes and emotions. We learned that streets with music sounds were associated with strong emotions of joy or sadness, whereas those with human sounds were associated with joy or surprise. Finally, we studied the relationship between soundscapes and people's perceptions and, in so doing, we were able to map which areas are chaotic, monotonous, calm and exciting. Those insights promise to inform the creation of restorative experiences in our increasingly urbanized world.
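The core of the tag-based methodology is mapping picture tags to sound categories via a sound dictionary and aggregating those categories per street. A toy Python sketch follows; the dictionary entries and data layout are invented for illustration and are far smaller than the dictionary described in the paper.

```python
from collections import Counter, defaultdict

# Toy sound dictionary mapping picture tags to sound categories. The real
# dictionary in the study is far larger; these entries are illustrative.
SOUND_DICTIONARY = {
    "traffic": "transport", "engine": "transport", "horn": "transport",
    "guitar": "music", "concert": "music", "singing": "music",
    "chatter": "human", "laughter": "human", "crowd": "human",
    "birds": "nature", "fountain": "nature", "wind": "nature",
}

def street_soundscapes(photos):
    """photos: iterable of (street_id, tag_list) pairs from georeferenced
    pictures. Returns a Counter of sound categories for each street."""
    streets = defaultdict(Counter)
    for street, tags in photos:
        for tag in tags:
            category = SOUND_DICTIONARY.get(tag.lower())
            if category:
                streets[street][category] += 1
    return streets

# Hypothetical usage:
# profiles = street_soundscapes([("Oxford St", ["traffic", "chatter"]),
#                                ("Carnaby St", ["guitar", "crowd"])])
# profiles["Carnaby St"].most_common(1)  -> [("music", 1)]
```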
5

Cushing, Colby W., Jason D. Sagers, and Megan Ballard. "Ambient sound observations from beamformed horizontal array data in the Pacific Arctic." Journal of the Acoustical Society of America 152, no. 4 (2022): A71. http://dx.doi.org/10.1121/10.0015582.

Abstract:
Changes in the Arctic environment with regard to declining sea ice and changing oceanography are expected to alter the ambient sound field, affecting both the sound-generating processes and the acoustic propagation. This talk presents acoustic recordings collected on the 150-m isobath on the Chukchi Shelf during the Canada Basin Acoustic Propagation Experiment (CANAPE), which took place over a yearlong period spanning October 2016 to October 2017. The data were recorded on a 52-channel center-tapered horizontal line array and adaptively beamformed to quantify the azimuthal directionality in long-term trends of ambient sound below 1200 Hz, as well as to track specific sound events as they travel through space over time. The acoustic data were analyzed in the context of wind speed and satellite imagery to identify the dominant sound generation mechanisms. Automatic Identification System (AIS) data were also incorporated to determine sources of ship-generated sound and the seismic profiler activity observed in the acoustic recordings. This talk will provide an overview of the long-term trends and describe a subset of results from the beamforming. [Work supported by ONR.]
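For readers unfamiliar with beamforming a horizontal line array, the sketch below shows a conventional (delay-and-sum) frequency-domain beamformer in Python. The adaptive beamformer used in the study is more sophisticated, and the element spacing, frequency, and sound speed here are placeholder assumptions.

```python
import numpy as np

def beam_power(snapshot, positions, angles_deg, freq, c=1500.0):
    """Conventional (delay-and-sum) beamforming of one narrowband snapshot.

    snapshot:   complex FFT values at `freq` Hz, one per sensor, shape (n,)
    positions:  sensor positions along the array in metres, shape (n,)
    angles_deg: candidate arrival angles relative to broadside, in degrees
    Returns the beam power for each candidate angle.
    """
    angles = np.deg2rad(np.asarray(angles_deg, dtype=float))
    # One phase ramp (steering vector) per candidate angle.
    delays = np.outer(np.sin(angles), positions) / c      # (n_angles, n)
    steering = np.exp(-2j * np.pi * freq * delays)
    beams = steering @ snapshot / len(snapshot)
    return np.abs(beams) ** 2

# Hypothetical usage for a 52-element array with an assumed 1 m spacing:
# positions = np.arange(52) * 1.0
# power = beam_power(fft_bin, positions, np.arange(-90, 91), freq=500.0)
# power.argmax() then points to the dominant arrival direction.
```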
6

Bessen, Sarah Y., James E. Saunders, Eric A. Eisen, and Isabelle L. Magro. "Perceptions of Sound Quality and Enjoyment After Cochlear Implantation." OTO Open 5, no. 3 (2021): 2473974X211031471. http://dx.doi.org/10.1177/2473974x211031471.

Abstract:
Objectives: To characterize the quality and enjoyment of sound by cochlear implant (CI) recipients and identify predictors of these outcomes after cochlear implantation. Study Design: Cross-sectional study. Setting: A tertiary care hospital. Methods: Surveys based on the Hearing Implant Sound Quality Index were sent to all patients who received a CI at a tertiary care hospital from 2000 to 2019. Survey questions prompted CI recipients to characterize enjoyment and quality of voices, music, and various sounds. Results: Of the 339 surveys, 60 (17.7%) were returned with complete data. CI recipients had a mean ± SD age of 62.5 ± 17.4 years with a mean 8.0 ± 6.1 years since CI surgery. Older current age and age at implantation significantly predicted lower current sound quality (P < .05) and sound enjoyment (P < .05), as well as worsening of sound quality (P < .05) and sound enjoyment (P < .05) over time. Greater length of implantation was associated with higher reported quality and enjoyment (r = 0.4, P < .001; r = 0.4, P < .05), as well as improvement of sound quality (r = 0.3, P < .05) but not sound enjoyment over time. Conclusion: Recipients who had CIs for a longer period had improved quality of sound perception, suggesting a degree of adaptation. However, CI recipients with implantation at an older age reported poorer sound quality and enjoyment as well as worsening sound quality and enjoyment over time, indicating that age-related changes influence outcomes of cochlear implantation.
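The reported association between length of implantation and sound quality (r = 0.4) is a plain Pearson correlation; a minimal sketch of such an analysis on hypothetical survey responses (the numbers below are invented) could look like this.

```python
from scipy import stats

# Hypothetical survey responses: years since implantation and a 1-10
# sound-quality rating for each respondent (values invented).
years_with_ci = [1, 2, 3, 5, 8, 10, 12, 15]
quality_score = [4, 5, 5, 6, 7, 6, 8, 9]

r, p = stats.pearsonr(years_with_ci, quality_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```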
7

Bandara, Meelan, Roshinie Jayasundara, Isuru Ariyarathne, Dulani Meedeniya, and Charith Perera. "Forest Sound Classification Dataset: FSC22." Sensors 23, no. 4 (2023): 2032. http://dx.doi.org/10.3390/s23042032.

Abstract:
The study of environmental sound classification (ESC) has become popular over the years due to the intricate nature of environmental sounds and the evolution of deep learning (DL) techniques. Forest ESC is one use case of ESC, which has been widely experimented with recently to identify illegal activities inside a forest. However, at present, there is a limitation of public datasets specific to all the possible sounds in a forest environment. Most of the existing experiments have been done using generic environment sound datasets such as ESC-50, U8K, and FSD50K. Importantly, in DL-based sound classification, the lack of quality data can cause misguided information, and the predictions obtained remain questionable. Hence, there is a requirement for a well-defined benchmark forest environment sound dataset. This paper proposes FSC22, which fills the gap of a benchmark dataset for forest environmental sound classification. It includes 2025 sound clips under 27 acoustic classes, which contain possible sounds in a forest environment. We discuss the procedure of dataset preparation and validate it through different baseline sound classification models. Additionally, it provides an analysis of the new dataset compared to other available datasets. Therefore, this dataset can be used by researchers and developers who are working on forest observatory tasks.
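A common baseline for a clip-level dataset such as FSC22 is to summarize each clip with MFCC statistics and train a conventional classifier. The sketch below uses librosa and scikit-learn; the file list, label extraction, and classifier choice are assumptions rather than the paper's baseline models.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def clip_features(path, sr=22050, n_mfcc=20):
    """Summarize one audio clip as the mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_baseline(paths, labels):
    """paths: audio file names; labels: the matching acoustic-class labels."""
    X = np.stack([clip_features(p) for p in paths])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)  # held-out accuracy
```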
8

Schwock, Felix, and Shima Abadi. "Summary of underwater ambient sound from wind and rain in the northeast Pacific continental margin." Journal of the Acoustical Society of America 153, no. 3_supplement (2023): A97. http://dx.doi.org/10.1121/10.0018294.

Abstract:
Analyzing underwater ambient sound from various sources such as ships, marine mammals, rain, and wind is crucial for characterizing the ocean environment. While efforts to analyze ocean ambient sounds have been ongoing since the 1940s, networks such as the Ocean Observatories Initiative (OOI) provide modern large-scale recording setups for a more in-depth analysis. Here we will summarize results from analyzing over 11,000 h of wind-generated ambient sound and 280 h of ambient sound during rain, collected between 2015 and 2019 by two OOI hydrophones deployed in the northeast Pacific continental margin. The hydrophones record continuously at depths of 81 and 581 m with a sample rate of 64 kHz. Meteorological data are provided by surface buoys deployed near the hydrophones. We compare our results to data obtained from a large-scale recording setup in the tropical Pacific Ocean (Ma et al., 2005). In contrast to their results, we found that sound levels during rain in the northeast Pacific Ocean are highly dependent on the wind speed over a wide frequency range. This implies that large-scale distributed sound measurements are necessary to accurately characterize underwater ambient sound from wind and rain across the globe. [Work supported by ONR.]
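Relating ambient sound to wind speed typically starts from band-averaged spectral levels computed per recording segment. The following sketch with scipy is a simplified illustration; the hydrophone calibration, band edges, and data handling are assumptions.

```python
import numpy as np
from scipy import signal, stats

def band_level_db(samples, fs, f_lo=200.0, f_hi=1000.0):
    """Band-averaged spectral level (dB, arbitrary reference) from a Welch PSD."""
    freqs, psd = signal.welch(samples, fs=fs, nperseg=int(fs))  # ~1-s segments
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 10.0 * np.log10(psd[band].mean())

# Hypothetical usage: `segments` is a list of hydrophone snippets (numpy
# arrays) and `wind_speed` the matching buoy wind speeds for each segment.
# levels = [band_level_db(s, fs=64000) for s in segments]
# r, p = stats.pearsonr(wind_speed, levels)  # wind dependence of sound level
```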
9

Wałęga, Przemysław Andrzej, Mark Kaminski, and Bernardo Cuenca Grau. "Reasoning over Streaming Data in Metric Temporal Datalog." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3092–99. http://dx.doi.org/10.1609/aaai.v33i01.33013092.

Abstract:
We study stream reasoning in datalogMTL—an extension of Datalog with metric temporal operators. We propose a sound and complete stream reasoning algorithm that is applicable to a fragment of datalogMTL, called datalogMTL_FP, in which propagation of derived information towards past time points is precluded. Memory consumption in our algorithm depends both on the properties of the rule set and on the input data stream; in particular, it depends on the distances between timestamps occurring in the data. This is undesirable since these distances can be very small, in which case the algorithm may require large amounts of memory. To address this issue, we propose a second algorithm, where the size of the required memory becomes independent of the timestamps in the data at the expense of disallowing punctual intervals in the rule set. Finally, we provide tight bounds on the data complexity of standard query answering in datalogMTL_FP without punctual intervals in rules, which yield a new PSPACE lower bound on the data complexity of full datalogMTL.
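To give a flavor of forward-propagating stream reasoning with a past-directed metric window, here is a toy Python sketch of window-based forward chaining over timestamped facts. The rule format and the example rule are invented for illustration and capture only a tiny slice of datalogMTL.

```python
from collections import defaultdict, deque

# Toy rule: derive Alert at time t if HighTemp held at some point within the
# previous `window` time units (a past-directed "diamond" operator). Both
# predicates and the rule format are invented for this illustration.
RULES = [{"head": "Alert", "body": "HighTemp", "window": 5}]

class StreamReasoner:
    """Forward chaining over a timestamped fact stream. Only past-directed
    metric windows are used, mirroring the forward-propagating restriction,
    and rules are evaluated only at the timestamps of arriving facts."""

    def __init__(self, rules):
        self.rules = rules
        self.history = defaultdict(deque)  # predicate -> recent timestamps

    def push(self, predicate, timestamp):
        self.history[predicate].append(timestamp)
        derived = []
        for rule in self.rules:
            times = self.history[rule["body"]]
            # Forget body facts that fell out of the metric window; this is
            # what keeps memory bounded in the streaming setting.
            while times and times[0] < timestamp - rule["window"]:
                times.popleft()
            if times:
                derived.append((rule["head"], timestamp))
        return derived

# Hypothetical usage:
# reasoner = StreamReasoner(RULES)
# reasoner.push("HighTemp", 3)   -> [("Alert", 3)]
# reasoner.push("Other", 10)     -> []  (HighTemp at 3 is outside [5, 10])
```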
10

Phillips, James E. "Verification of an acoustic model of outdoor sound propagation from a natural resource compressor station over complex topography." Journal of the Acoustical Society of America 153, no. 3_supplement (2023): A324. http://dx.doi.org/10.1121/10.0019009.

Abstract:
Outdoor sound propagation from a natural resource compressor station with multiple large reciprocating compressors enclosed within a structure was modeled using DGMR iNoise. Data from sound level measurements taken near the station were used to estimate the sound power of the operating compressor station equipment and served as input to the model. The model was then used to project the sound pressure levels at multiple measurement locations over complex topography. Good agreement was achieved between the projected and measured sound pressure levels as far as ½ mile from the station, particularly after accounting for meteorological influences on sound propagation in the field. Observations and lessons learned while measuring and modeling the sound propagation will be discussed.
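The projection from estimated source sound power to receiver sound pressure level rests on the standard point-source relation Lp = Lw - 20*log10(r) - 11 - A_excess. The sketch below applies that relation with a single lumped excess-attenuation term; it is a simplification, not the DGMR iNoise model.

```python
import math

def projected_spl(sound_power_db, distance_m, excess_attenuation_db=0.0):
    """Free-field sound pressure level at distance_m from a point source.

    Uses spherical spreading (Lp = Lw - 20*log10(r) - 11) plus one lumped
    excess-attenuation term standing in for the ground, barrier, air
    absorption, and meteorological effects that a full propagation model
    such as iNoise would compute individually.
    """
    spreading = 20.0 * math.log10(distance_m) + 11.0
    return sound_power_db - spreading - excess_attenuation_db

# Example: a source of 115 dB re 1 pW observed at roughly half a mile
# (about 805 m) with an assumed 5 dB of excess attenuation:
# projected_spl(115.0, 805.0, 5.0)  -> approximately 40.9 dB
```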