Journal articles on the topic 'Data over sound'

To see the other types of publications on this topic, follow the link: Data over sound.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Data over sound.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Fejfar, Jiří, Jiří Šťastný, Martin Pokorný, Jiří Balej, and Petr Zach. "Analysis of sound data streamed over the network." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 61, no. 7 (2013): 2105–10. http://dx.doi.org/10.11118/actaun201361072105.

Abstract:
In this paper we inspect the difference between an original sound recording and the signal captured after streaming that recording over a network loaded with heavy traffic. Several kinds of failures occur in the captured recording, caused by network congestion. We try to find a method to evaluate the correctness of streamed audio. Usually the metrics are based on human perception of the signal, such as “the signal is clear, without audible failures”, “the signal has some failures but is understandable”, or “the signal is inarticulate”. These approaches need to be statistically evaluated on a broad set of respondents, which is time and resource consuming. We instead propose metrics based on signal properties that allow us to compare the original and captured recordings. In this paper we use the Dynamic Time Warping algorithm (Müller, 2007), commonly used for time series comparison. Other time series exploration approaches can be found in (Fejfar, 2011) and (Fejfar, 2012). The data were acquired in our network laboratory, simulating network traffic by downloading files and streaming audio and video simultaneously. Our former experiment inspected Quality of Service (QoS) and its impact on failures of the received audio data stream; this experiment focuses on the comparison of sound recordings rather than network mechanisms. We focus on a real-time audio stream, such as a telephone call, where it is not possible to stream audio in advance into a “pool”; instead it is necessary to achieve as small a delay as possible between the recording of the speaker's voice and its replay to the listener. We use the RTP protocol for streaming audio.
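For readers who want to experiment with the comparison method described above, here is a minimal sketch of Dynamic Time Warping applied to frame-level features of an original and a streamed recording. The RMS-envelope feature and the simulated dropout are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic O(len(x)*len(y)) dynamic-programming DTW between two 1-D feature sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

def frame_rms(signal, frame_len=1024, hop=512):
    """Frame-level RMS envelope, a simple feature for comparing two recordings."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

# Illustrative usage: `original` and `streamed` are 1-D float arrays at the same sample rate.
rng = np.random.default_rng(0)
original = rng.standard_normal(44100)
streamed = np.concatenate([original[:20000], np.zeros(500), original[20000:]])  # simulated dropout
score = dtw_distance(frame_rms(original), frame_rms(streamed))
print(f"DTW distance (lower = more similar): {score:.3f}")
```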
2

Kim, Hyun-Don, Kazunori Komatani, Tetsuya Ogata, and Hiroshi G. Okuno. "Binaural Active Audition for Humanoid Robots to Localise Speech over Entire Azimuth Range." Applied Bionics and Biomechanics 6, no. 3-4 (2009): 355–67. http://dx.doi.org/10.1155/2009/817874.

Abstract:
We applied motion theory to robot audition to improve the inadequate performance. Motions are critical for overcoming the ambiguity and sparseness of information obtained by two microphones. To realise this, we first designed a sound source localisation system integrated with cross-power spectrum phase (CSP) analysis and an EM algorithm. The CSP of sound signals obtained with only two microphones was used to localise the sound source without having to measure impulse response data. The expectation-maximisation (EM) algorithm helped the system to cope with several moving sound sources and reduce localisation errors. We then proposed a way of constructing a database for moving sounds to evaluate binaural sound source localisation. We evaluated our sound localisation method using artificial moving sounds and confirmed that it could effectively localise moving sounds slower than 1.125 rad/s. Consequently, we solved the problem of distinguishing whether sounds were coming from the front or rear by rotating and/or tipping the robot's head that was equipped with only two microphones. Our system was applied to a humanoid robot called SIG2, and we confirmed its ability to localise sounds over the entire azimuth range as the success rates for sound localisation in the front and rear areas were 97.6% and 75.6% respectively.
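The cross-power spectrum phase (CSP) analysis mentioned above can be illustrated with a two-microphone time-delay estimate. The following sketch is a generic CSP/GCC-PHAT implementation with an assumed microphone spacing and sample rate, not the SIG2 robot's actual audition system:

```python
import numpy as np

def csp_tdoa(sig_l, sig_r, fs):
    """Cross-power spectrum phase (GCC-PHAT) estimate of the inter-microphone time delay.
    A positive delay means the right channel lags the left one."""
    n = len(sig_l) + len(sig_r)
    L = np.fft.rfft(sig_l, n=n)
    R = np.fft.rfft(sig_r, n=n)
    cross = np.conj(L) * R
    cross /= np.abs(cross) + 1e-12          # phase transform: keep phase, discard magnitude
    corr = np.fft.irfft(cross, n=n)
    corr = np.concatenate((corr[-(n // 2):], corr[:n // 2 + 1]))  # centre zero lag
    lag = np.argmax(np.abs(corr)) - n // 2
    return lag / fs

def lag_to_azimuth(tau, mic_distance=0.2, c=343.0):
    """Convert a time delay to a (front/back ambiguous) azimuth, assuming a far-field source."""
    s = np.clip(tau * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Illustrative usage with a synthetic delayed copy of white noise.
fs = 16000
rng = np.random.default_rng(1)
src = rng.standard_normal(fs)
delay = 5                                   # samples of extra travel time to the right microphone
left, right = src, np.roll(src, delay)
tau = csp_tdoa(left, right, fs)
print(f"estimated delay {tau*1e3:.3f} ms -> azimuth {lag_to_azimuth(tau):.1f} deg")
```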
3

Saldanha, Jane, Shaunak Chakraborty, Shruti Patil, Ketan Kotecha, Satish Kumar, and Anand Nayyar. "Data augmentation using Variational Autoencoders for improvement of respiratory disease classification." PLOS ONE 17, no. 8 (August 12, 2022): e0266467. http://dx.doi.org/10.1371/journal.pone.0266467.

Abstract:
Computerized auscultation of lung sounds is gaining importance today with the availability of lung sounds and its potential in overcoming the limitations of traditional diagnosis methods for respiratory diseases. The publicly available ICBHI respiratory sounds database is severely imbalanced, making it difficult for a deep learning model to generalize and provide reliable results. This work aims to synthesize respiratory sounds of various categories using variants of Variational Autoencoders, namely the Multilayer Perceptron VAE (MLP-VAE), Convolutional VAE (CVAE) and Conditional VAE, and to compare the influence of augmenting the imbalanced dataset on the performance of various lung sound classification models. We evaluated the quality of the synthetic respiratory sounds using metrics such as Fréchet Audio Distance (FAD), Cross-Correlation and Mel Cepstral Distortion. Our results showed that the MLP-VAE achieved an average FAD of 12.42 over all classes, whereas the Convolutional VAE and Conditional VAE achieved average FADs of 11.58 and 11.64, respectively. A significant improvement in the classification performance metrics was observed upon augmenting the imbalanced dataset for certain minority classes, and a marginal improvement for the other classes. Hence, our work shows that deep learning-based lung sound classification models are not only a promising solution over traditional methods but can also achieve a significant performance boost upon augmenting an imbalanced training set.
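As a rough illustration of the augmentation idea, the sketch below shows a minimal MLP variational autoencoder in PyTorch that could be trained on fixed-length spectral frames and then sampled to synthesize extra minority-class examples. The layer sizes, input dimension and stand-in data are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPVAE(nn.Module):
    """Minimal MLP variational autoencoder over flattened (e.g., mel-spectrogram) frames."""
    def __init__(self, input_dim=1024, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term plus KL divergence to the unit Gaussian prior
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Illustrative training step on random stand-in data (real inputs would be normalized spectra).
model = MLPVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(16, 1024)
recon, mu, logvar = model(batch)
loss = vae_loss(recon, batch, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
# New minority-class samples could then be synthesized with model.decode(torch.randn(n, 32)).
print(float(loss))
```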
4

Aiello, Luca Maria, Rossano Schifanella, Daniele Quercia, and Francesco Aletta. "Chatty maps: constructing sound maps of urban areas from social media data." Royal Society Open Science 3, no. 3 (March 2016): 150690. http://dx.doi.org/10.1098/rsos.150690.

Abstract:
Urban sound has a huge influence over how we perceive places. Yet, city planning is concerned mainly with noise, simply because annoying sounds come to the attention of city officials in the form of complaints, whereas general urban sounds do not come to the attention as they cannot be easily captured at city scale. To capture both unpleasant and pleasant sounds, we applied a new methodology that relies on tagging information of georeferenced pictures to the cities of London and Barcelona. To begin with, we compiled the first urban sound dictionary and compared it with the one produced by collating insights from the literature: ours was experimentally more valid (if correlated with official noise pollution levels) and offered a wider geographical coverage. From picture tags, we then studied the relationship between soundscapes and emotions. We learned that streets with music sounds were associated with strong emotions of joy or sadness, whereas those with human sounds were associated with joy or surprise. Finally, we studied the relationship between soundscapes and people's perceptions and, in so doing, we were able to map which areas are chaotic, monotonous, calm and exciting. Those insights promise to inform the creation of restorative experiences in our increasingly urbanized world.
5

Cushing, Colby W., Jason D. Sagers, and Megan Ballard. "Ambient sound observations from beamformed horizontal array data in the Pacific Arctic." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A71. http://dx.doi.org/10.1121/10.0015582.

Abstract:
Changes in the Arctic environment with regard to declining sea ice and changing oceanography are expected to alter the ambient sound field, affecting both the sound generating processes and the acoustic propagation. This talk presents acoustic recordings collected on the 150-m isobath on the Chukchi Shelf during the Canada Basin Acoustic Propagation Experiment (CANAPE), which took place over a yearlong period spanning October 2016 to October 2017. The data were recorded on a 52-channel center-tapered horizontal line array and adaptively beamformed to quantify the azimuthal directionality of long-term trends in ambient sound below 1200 Hz, as well as to track specific sound events as they travel through space over time. The acoustic data were analyzed in the context of wind speed and satellite imagery to identify the dominant sound generation mechanisms. Automatic Identification System (AIS) data were also incorporated to determine sources of ship-generated sound and seismic profiler activity observed in the acoustic recordings. This talk will provide an overview of the long-term trends and describe a subset of results from the beamforming. [Work Supported by ONR.]
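A conventional delay-and-sum beamformer gives a feel for how a horizontal line array resolves azimuthal directionality; the study itself used adaptive beamforming, and the element spacing, sample rate and simulated plane wave below are purely illustrative:

```python
import numpy as np

def delay_and_sum_spectrum(frames, fs, spacing, c=1500.0, angles_deg=np.arange(-90, 91, 2)):
    """
    Conventional (delay-and-sum) beamformer power versus steering angle for a uniform line array.
    `frames`: array of shape (n_channels, n_samples), one snapshot per hydrophone.
    """
    n_ch, n_samp = frames.shape
    spec = np.fft.rfft(frames, axis=1)                     # (n_ch, n_freq)
    freqs = np.fft.rfftfreq(n_samp, d=1.0 / fs)
    positions = np.arange(n_ch) * spacing                  # element positions along the array
    power = np.zeros(len(angles_deg))
    for k, theta in enumerate(np.radians(angles_deg)):
        delays = positions * np.sin(theta) / c             # plane-wave delays per element
        steer = np.exp(2j * np.pi * np.outer(delays, freqs))
        power[k] = np.sum(np.abs(np.sum(spec * steer, axis=0)) ** 2)
    return angles_deg, power

# Illustrative usage: a 52-element array with 1.5 m spacing and a simulated 400 Hz plane wave from 30 deg.
fs, n_ch, spacing, c = 8000, 52, 1.5, 1500.0
t = np.arange(4096) / fs
pos = np.arange(n_ch) * spacing
delays = pos * np.sin(np.radians(30.0)) / c
frames = np.array([np.sin(2 * np.pi * 400 * (t - d)) for d in delays])
angles, power = delay_and_sum_spectrum(frames, fs, spacing)
print("peak response near", angles[np.argmax(power)], "degrees")
```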
6

Bessen, Sarah Y., James E. Saunders, Eric A. Eisen, and Isabelle L. Magro. "Perceptions of Sound Quality and Enjoyment After Cochlear Implantation." OTO Open 5, no. 3 (July 2021): 2473974X2110314. http://dx.doi.org/10.1177/2473974x211031471.

Abstract:
Objectives To characterize the quality and enjoyment of sound by cochlear implant (CI) recipients and identify predictors of these outcomes after cochlear implantation. Study Design Cross-sectional study. Setting A tertiary care hospital. Methods Surveys based on the Hearing Implant Sound Quality Index were sent to all patients who received a CI at a tertiary care hospital from 2000 to 2019. Survey questions prompted CI recipients to characterize enjoyment and quality of voices, music, and various sounds. Results Of the 339 surveys, 60 (17.7%) were returned with complete data. CI recipients had a mean ± SD age of 62.5 ± 17.4 years with a mean 8.0 ± 6.1 years since CI surgery. Older current age and age at implantation significantly predicted lower current sound quality (P < .05) and sound enjoyment (P < .05), as well as worsening of sound quality (P < .05) and sound enjoyment (P < .05) over time. Greater length of implantation was associated with higher reported quality and enjoyment (r = 0.4, P < .001; r = 0.4, P < .05), as well as improvement of sound quality (r = 0.3, P < .05) but not sound enjoyment over time. Conclusion Recipients who had CIs for a longer period had improved quality of sound perception, suggesting a degree of adaptation. However, CI recipients with implantation at an older age reported poorer sound quality and enjoyment as well as worsening sound quality and enjoyment over time, indicating that age-related changes influence outcomes of cochlear implantation.
7

Bandara, Meelan, Roshinie Jayasundara, Isuru Ariyarathne, Dulani Meedeniya, and Charith Perera. "Forest Sound Classification Dataset: FSC22." Sensors 23, no. 4 (February 10, 2023): 2032. http://dx.doi.org/10.3390/s23042032.

Abstract:
The study of environmental sound classification (ESC) has become popular over the years due to the intricate nature of environmental sounds and the evolution of deep learning (DL) techniques. Forest ESC is one use case of ESC, which has been widely experimented with recently to identify illegal activities inside a forest. However, at present, there is a limitation of public datasets specific to all the possible sounds in a forest environment. Most of the existing experiments have been done using generic environment sound datasets such as ESC-50, U8K, and FSD50K. Importantly, in DL-based sound classification, the lack of quality data can cause misguided information, and the predictions obtained remain questionable. Hence, there is a requirement for a well-defined benchmark forest environment sound dataset. This paper proposes FSC22, which fills the gap of a benchmark dataset for forest environmental sound classification. It includes 2025 sound clips under 27 acoustic classes, which contain possible sounds in a forest environment. We discuss the procedure of dataset preparation and validate it through different baseline sound classification models. Additionally, it provides an analysis of the new dataset compared to other available datasets. Therefore, this dataset can be used by researchers and developers who are working on forest observatory tasks.
8

Schwock, Felix, and Shima Abadi. "Summary of underwater ambient sound from wind and rain in the northeast Pacific continental margin." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A97. http://dx.doi.org/10.1121/10.0018294.

Abstract:
Analyzing underwater ambient sound from various sources such as ships, marine mammals, rain, and wind is crucial for characterizing the ocean environment. While efforts to analyze ocean ambient sounds have been ongoing since the 1940s, networks such as the Ocean Observatories Initiative (OOI) provide modern large-scale recording setups for a more in-depth analysis. Here we will summarize results from analyzing over 11,000 h of wind-generated ambient sound and 280 h of ambient sound during rain collected between 2015 and 2019 by two OOI hydrophones deployed in the northeast Pacific continental margin. The hydrophones record continuously at depths of 81 and 581 m with a sample rate of 64 kHz. Meteorological data are provided by surface buoys deployed near the hydrophones. We compare our results to data obtained from a large-scale recording setup in the tropical Pacific Ocean (Ma et al., 2005). In contrast to their results, we found that sound levels during rain in the northeast Pacific Ocean are highly dependent on the wind speed over a wide frequency range. This implies that large-scale distributed sound measurements are necessary to accurately characterize underwater ambient sound from wind and rain across the globe. [Work supported by ONR.]
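Summarizing ambient sound as band-averaged spectral levels, which can then be binned by buoy wind speed, might look like the following sketch; the hydrophone sensitivity, band limits and stand-in data are assumptions rather than OOI processing details:

```python
import numpy as np
from scipy.signal import welch

def band_spl(samples, fs, f_lo=200.0, f_hi=2000.0, sensitivity_db=-170.0):
    """
    Mean power spectral density level in a frequency band, a common summary of wind- or
    rain-generated ambient sound. `sensitivity_db` (hydrophone calibration, dB re 1 V/uPa)
    is an illustrative placeholder, not an OOI value.
    """
    f, pxx = welch(samples, fs=fs, nperseg=fs)                    # 1 Hz resolution, V^2/Hz
    band = (f >= f_lo) & (f <= f_hi)
    return 10 * np.log10(np.mean(pxx[band])) - sensitivity_db     # dB re 1 uPa^2/Hz

# Illustrative usage: pair band levels with co-located buoy wind speeds (all values synthetic).
fs = 64000
rng = np.random.default_rng(2)
levels = [band_spl(rng.standard_normal(10 * fs), fs) for _ in range(5)]
wind_speed = [2.0, 4.5, 7.1, 9.8, 12.3]                           # m/s, stand-in buoy values
for w, l in zip(wind_speed, levels):
    print(f"{w:5.1f} m/s  ->  {l:6.1f} dB re 1 uPa^2/Hz")
```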
9

Wałęga, Przemysław Andrzej, Mark Kaminski, and Bernardo Cuenca Grau. "Reasoning over Streaming Data in Metric Temporal Datalog." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3092–99. http://dx.doi.org/10.1609/aaai.v33i01.33013092.

Abstract:
We study stream reasoning in datalogMTL—an extension of Datalog with metric temporal operators. We propose a sound and complete stream reasoning algorithm that is applicable to a fragment datalogMTLFP of datalogMTL, in which propagation of derived information towards past time points is precluded. Memory consumption in our algorithm depends both on the properties of the rule set and the input data stream; in particular, it depends on the distances between timestamps occurring in data. This is undesirable since these distances can be very small, in which case the algorithm may require large amounts of memory. To address this issue, we propose a second algorithm, where the size of the required memory becomes independent on the timestamps in the data at the expense of disallowing punctual intervals in the rule set. Finally, we provide tight bounds to the data complexity of standard query answering in datalogMTLFP without punctual intervals in rules, which yield a new PSPACE lower bound to the data complexity of the full datalogMTL.
10

Phillips, James E. "Verification of an acoustic model of outdoor sound propagation from a natural resource compressor station over complex topography." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A324. http://dx.doi.org/10.1121/10.0019009.

Abstract:
Outdoor sound propagation from a natural resource compressor station with multiple, large, reciprocating compressors enclosed within a structure was modeled using DGMR iNoise. Data from sound level measurements taken near the station were used to estimate the sound power of the operating compressor station equipment and used as input to the model. The model was then used to project the sound pressure levels at multiple measurement locations over complex topography. Good agreement was achieved between the projected and measured sound pressure levels as far as ½-mile from the station, particularly after accounting for meteorological influences upon sound propagation in the field. Observations and lesson learned while measuring and modeling the sound propagation will be discussed.
11

Rhys, Paul. "Smart Interfaces for Granular Synthesis of Sound by Fractal Organization." Computer Music Journal 40, no. 3 (September 2016): 58–67. http://dx.doi.org/10.1162/comj_a_00374.

Abstract:
This article describes software for granular synthesis of sound. The software features a graphical interface that enables easy creation and modification of sound clouds by deterministic fractal organization. Output sound clouds exist in multidimensional parameter–time space, and are constructed as a micropolyphony of statements of a single input melody or group of notes. The approach described here is an effective alternative to statistical methods, creating sounds with vitality and interest over a range of time scales. Standard techniques are used for the creation of individual grains. Innovation is demonstrated in the particular approach to fractal organization of the sound cloud and in the design of a smart interface to effect easy control of cloud morphology. The interface provides for intuitive control and reorganization of large amounts of data.
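A very small granular-synthesis sketch hints at the kind of self-similar layering the article describes; the real system works in a multidimensional parameter-time space with a smart graphical interface, whereas this toy only restates one melody at successively compressed time scales:

```python
import numpy as np

def grain(freq, dur, fs, amp=0.2):
    """A single Hann-windowed sinusoidal grain."""
    t = np.arange(int(dur * fs)) / fs
    return amp * np.hanning(len(t)) * np.sin(2 * np.pi * freq * t)

def fractal_cloud(base_melody, depth, fs, time_scale=1.0, out_len=6.0):
    """
    Deterministic, self-similar layering: the input melody is restated at successively
    compressed time scales and the statements are stacked into one 'cloud'.
    """
    out = np.zeros(int(out_len * fs))
    for level in range(depth):
        scale = time_scale / (2 ** level)               # each level plays the melody twice as fast
        for rep in range(2 ** level):
            t0 = rep * (out_len / (2 ** level))
            for note_freq, note_dur in base_melody:
                g = grain(note_freq * (1 + 0.01 * level), note_dur * scale, fs)
                i = int(t0 * fs)
                if i + len(g) <= len(out):
                    out[i:i + len(g)] += g / depth
                t0 += note_dur * scale
    return out

# Illustrative usage: a three-note input melody rendered as a three-level cloud.
fs = 22050
melody = [(440.0, 0.5), (550.0, 0.5), (660.0, 0.5)]     # (frequency in Hz, duration in s)
cloud = fractal_cloud(melody, depth=3, fs=fs)
print(cloud.shape, float(np.max(np.abs(cloud))))
```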
12

Du, Heshan, and Natasha Alechina. "Qualitative Spatial Logic over 2D Euclidean Spaces Is Not Finitely Axiomatisable." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2776–83. http://dx.doi.org/10.1609/aaai.v33i01.33012776.

Abstract:
Several qualitative spatial logics used in reasoning about geospatial data have a sound and complete axiomatisation over metric spaces. It has been open whether the same axiomatisation is also sound and complete for 2D Euclidean spaces. We answer this question negatively by showing that the axiomatisations presented in (Du et al. 2013; Du and Alechina 2016) are not complete for 2D Euclidean spaces and, moreover, the logics are not finitely axiomatisable.
13

Al-Badrawi, Mahdi H., and Kevin D. Heaney. "Ambient sound characterization over decadal time scales in the Atlantic Ocean." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A290—A291. http://dx.doi.org/10.1121/10.0016314.

Abstract:
Gaining more knowledge about the variations in the ambient ocean soundscape plays a critical role in assessing the human impact (anthropogenic noise pollution) on marine species’ behavior and habitat choices. Accessing historical recordings is important to understand the long-term changes in the ambient sound and to identify mechanistic drivers influencing the soundscape regionally and globally. Data recorded in the Northwest Indian Ocean (Bearing Stake exercise) in 1977 were compared with more recent (2003 and 2013) data from the north side of Diego Garcia Island. This comparison between datasets 40 years apart showed that the ambient sound levels in that region were not higher in the 2000s than they were in 1977. The spectral levels decreased between 1977 and 2003 and then leveled out between 2003 and 2013. These findings are different from the well-cited 3 dB/decade increase in Pacific Ocean sound levels, but they did align with recent studies that also show declines in sound levels in other regions. Therefore, another comparison between acoustic recordings 40 years apart near Bermuda and from an ADEON site (October 1980 and 2018–2020, respectively) was performed to facilitate a better interpretation of how the soundscape is changing over decadal timescales in the Atlantic Ocean.
14

Cuzzocrea, Alfredo, and Elisa Bertino. "Privacy Preserving OLAP over Distributed XML Data: A Theoretically-Sound Secure-Multiparty-Computation Approach." Journal of Computer and System Sciences 77, no. 6 (November 2011): 965–87. http://dx.doi.org/10.1016/j.jcss.2011.02.004.

15

Yang, Xiao, Yang Zhao, Hairong Qi, and George T. Tabler. "Characterizing Sounds of Different Sources in a Commercial Broiler House." Animals 11, no. 3 (March 23, 2021): 916. http://dx.doi.org/10.3390/ani11030916.

Abstract:
Audio data collected in commercial broiler houses are mixed sounds of different sources that contain useful information regarding bird health condition, bird behavior, and equipment operation. However, characterizations of the sounds of different sources in commercial broiler houses have not been well established. The objective of this study was, therefore, to determine the frequency ranges of six common sounds, including bird vocalization, fan, feed system, heater, wing flapping, and dustbathing, at bird ages of week 1 to 8 in a commercial Ross 708 broiler house. In addition, the frequencies of flapping (in wing flapping events, flaps/s) and scratching (during dustbathing, scratches/s) behaviors were examined through sound analysis. A microphone was installed in the middle of broiler house at the height of 40 cm above the back of birds to record audio data at a sampling frequency of 44,100 Hz. A top-view camera was installed to continuously monitor bird activities. Total of 85 min audio data were manually labeled and fed to MATLAB for analysis. The audio data were decomposed using Maximum Overlap Discrete Wavelet Transform (MODWT). Decompositions of the six concerned sound sources were then transformed with the Fast Fourier Transform (FFT) method to generate the single-sided amplitude spectrums. By fitting the amplitude spectrum of each sound source into a Gaussian regression model, its frequency range was determined as the span of the three standard deviations (99% CI) away from the mean. The behavioral frequencies were determined by examining the spectrograms of wing flapping and dustbathing sounds. They were calculated by dividing the number of movements by the time duration of complete behavioral events. The frequency ranges of bird vocalization changed from 2481 ± 191–4409 ± 136 Hz to 1058 ± 123–2501 ± 88 Hz as birds grew. For the sound of fan, the frequency range increased from 129 ± 36–1141 ± 50 Hz to 454 ± 86–1449 ± 75 Hz over the flock. The sound frequencies of feed system, heater, wing flapping and dustbathing varied from 0 Hz to over 18,000 Hz. The behavioral frequencies of wing flapping were continuously decreased from week 3 (17 ± 4 flaps/s) to week 8 (10 ± 1 flaps/s). For dustbathing, the behavioral frequencies decreased from 16 ± 2 scratches/s in week 3 to 11 ± 1 scratches/s in week 6. In conclusion, characterizing sounds of different sound sources in commercial broiler houses provides useful information for further advanced acoustic analysis that may assist farm management in continuous monitoring of animal health and behavior. It should be noted that this study was conducted with one flock in a commercial house. The generalization of the results remains to be explored.
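The frequency-range criterion (mean plus or minus three standard deviations of a Gaussian fitted to the single-sided amplitude spectrum) can be sketched as follows; the MODWT source-separation step is omitted here and the synthetic "fan" signal is only a stand-in:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(f, a, mu, sigma):
    return a * np.exp(-0.5 * ((f - mu) / sigma) ** 2)

def frequency_range_from_spectrum(signal, fs):
    """
    Fit a Gaussian to the single-sided amplitude spectrum of one sound-source component
    and report mean +/- 3 standard deviations (the 99% frequency-range criterion above).
    """
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    p0 = [spec.max(), freqs[np.argmax(spec)], 200.0]      # rough initial guess
    (a, mu, sigma), _ = curve_fit(gaussian, freqs, spec, p0=p0, maxfev=10000)
    sigma = abs(sigma)
    return max(mu - 3 * sigma, 0.0), mu + 3 * sigma

# Illustrative usage: synthetic noise with a Gaussian-shaped spectrum centred near 800 Hz.
fs = 44100
n = fs
rng = np.random.default_rng(3)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
envelope = np.exp(-0.5 * ((freqs - 800.0) / 60.0) ** 2)
fan = np.fft.irfft(envelope * np.exp(1j * rng.uniform(0, 2 * np.pi, len(freqs))), n=n)
lo, hi = frequency_range_from_spectrum(fan, fs)
print(f"estimated frequency range: {lo:.0f}-{hi:.0f} Hz")   # expect roughly 620-980 Hz
```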
16

Doyle, John B., Rohit R. Raghunathan, Ilana Cellum, Gen Li, and Justin S. Golub. "Longitudinal Tracking of Sound Exposure and Hearing Aid Usage through Objective Data Logs." Otolaryngology–Head and Neck Surgery 159, no. 1 (March 27, 2018): 110–16. http://dx.doi.org/10.1177/0194599818766056.

Abstract:
Objective To use data-logging technology to objectively track and identify predictors of hearing aid (HA) usage and aided sound exposure. Study Design Case series with planned data collection. Setting Tertiary academic medical center. Subjects and Methods Individuals with HAs between 2007 and 2016 were included (N = 431; mean, 74.6 years; 95% CI, 73.1-76.0). Data-logging technology intrinsic to new-generation HAs was enabled to track usage and sound exposure. With multivariable linear regression, age, sex, number of audiology visits, duration of audiologic follow-up, pure tone average, and HA side were assessed as predictors of usage (hours/day) and aided sound exposure (dB-hours/day; ie, “dose” of sound per day). Results Mean follow-up was 319 days (95% CI, 277-360). Mean HA usage was 8.4 hours/day (95% CI, 8.0-8.8; N = 431). Mean aided sound exposure was 440 dB-hours/day (95% CI, 385-493; n = 110). HA use (β < 0.001, P = .45) and aided sound exposure (β = −0.006, P = .87) were both stable over time. HA usage was associated only with hearing loss level (pure tone average; β = 0.030, P = .04). Aided sound exposure was associated only with duration of audiologic follow-up (β = 0.100, P = .02). Conclusion While measurement of HA use has traditionally relied on subjective reporting, data logging offers an objective tool to longitudinally track HA use and sound exposure. We demonstrate the feasibility of using this potentially powerful research tool. Usage and sound exposure were stable among patients throughout the study period. Use was greater among subjects with greater hearing loss. Maximizing aided sound exposure might be possible through continued audiology follow-up visits.
17

Zhang, Xiaoyan, Hongli Wang, Xiang-Yu Huang, Feng Gao, and Neil A. Jacobs. "Using Adjoint-Based Forecast Sensitivity Method to Evaluate TAMDAR Data Impacts on Regional Forecasts." Advances in Meteorology 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/427616.

Abstract:
This study evaluates the impact of Tropospheric Airborne Meteorological Data Reporting (TAMDAR) observations on regional 24-hour forecast error reduction over the Continental United States (CONUS) domain using adjoint-based forecast sensitivity to observation (FSO) method as the diagnostic tool. The relative impact of TAMDAR observations on reducing the forecast error was assessed by conducting the WRFDA FSO experiments for two two-week-long periods, one in January and one in June 2010. These experiments assimilated operational TAMDAR data and other conventional observations, as well as GPS refractivity (GPSREF). FSO results show that rawinsonde soundings (SOUND) and TAMDAR exhibit the largest observation impact on 24 h WRF forecast, followed by GeoAMV, aviation routine weather reports (METAR), GPSREF, and synoptic observations (SYNOP). At 0000 and 1200 UTC, TAMDAR has an equivalent impact to SOUND in reducing the 24-hour forecast error. However, at 1800 UTC, TAMDAR has a distinct advantage over SOUND, which has the sparse observation report at these times. In addition, TAMDAR humidity observations at lower levels of the atmosphere (700 and 850 hPa) have a significant impact on 24 h forecast error reductions. TAMDAR and SOUND observations present a qualitatively similar observation impact between FSO and Observation System Experiments (OSEs).
18

Downing, Micah, Jonathan Gillis, Ben Manning, Josh Mellon, and Matthew Calton. "Navy aircraft sound monitoring study." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 264, no. 1 (June 24, 2022): 775–86. http://dx.doi.org/10.3397/nc-2022-809.

Abstract:
The United States Department of the Navy (Navy), as directed by Congress, executed a real time sound monitoring study of jet aircraft at Naval Air Station (NAS) Whidbey Island and NAS Lemoore over the past year and compared the resulting measured data with modeled noise data. The Navy collected real-time aircraft sound level and operational data during four discrete seven-day monitoring periods in 2020 and 2021. The data collected each period included: (1) acoustic recordings by sound level meters deployed at sites around each airfield to capture sound levels during a range of flight operations across a range of seasonal weather conditions; and (2) detailed operations data. The collected acoustic data are compared to noise modeling done specifically for this study using the observed flight operations data. This presentation provides and overview of the measured data and the results of the comparison with modeled data.
19

Liu, MingQing. "Construction and Research of Guqin Sound Synthesis and Plucking Big Data Simulation Model Based on Computer Synthesis Technology." Discrete Dynamics in Nature and Society 2022 (March 24, 2022): 1–10. http://dx.doi.org/10.1155/2022/1516648.

Abstract:
The application and economic efficiency evaluation mode of traditional composition in the field of modern music cannot meet the needs of various types of music in China, especially Guqin music. Based on this, this paper studies the big data simulation model of Guqin sound synthesis and plucking based on computer composition technology. The computer composition technology uses the discrete dynamic modeling technology of complex system to complete the computer simulation of Guqin sound through the analysis of the correlation between Guqin music data and realizes the storage and analysis of the generated Guqin sound data. In addition, the technology can analyze the data information of composition mode in the piano sound plucking simulation model with the classical Guqin music data stored in the cloud system over the years and then feed it back to relevant professionals for verification. The experimental results show that the Guqin sound simulation model can efficiently compare and analyze the melody and other data of classical Guqin sound with the simulated Guqin sound and can realize secondary data mining. This paper studies the application of computer composition method based on discrete dynamic modeling technology in Guqin sound simulation, which has certain reference significance for improving the cloud data in China’s modern music field and the intelligent construction of Guqin sound data cloud storage system.
20

Pita, Antonio, Francisco J. Rodriguez, and Juan M. Navarro. "Analysis and Evaluation of Clustering Techniques Applied to Wireless Acoustics Sensor Network Data." Applied Sciences 12, no. 17 (August 26, 2022): 8550. http://dx.doi.org/10.3390/app12178550.

Abstract:
Exposure to environmental noise is related to negative health effects. To prevent it, the city councils develop noise maps and action plans to identify, quantify, and decrease noise pollution. Smart cities are deploying wireless acoustic sensor networks that continuously gather the sound pressure level from many locations using acoustics nodes. These nodes provide very relevant updated information, both temporally and spatially, over the acoustic zones of the city. In this paper, the performance of several data clustering techniques is evaluated for discovering and analyzing different behavior patterns of the sound pressure level. A comparison of clustering techniques is carried out using noise data from two large cities, considering isolated and federated data. Experiments support that Hierarchical Agglomeration Clustering and K-means are the algorithms more appropriate to fit acoustics sound pressure level data.
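A minimal version of the comparison, clustering daily sound-pressure-level profiles with K-means and hierarchical agglomerative clustering from scikit-learn, is sketched below on synthetic stand-in data rather than the cities' WASN measurements:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

# Cluster sensor nodes by their average 24-hour sound-pressure-level profile.
# The 30-node, 24-value matrix below is synthetic stand-in data.
rng = np.random.default_rng(4)
hours = np.arange(24)
traffic_like = 55 + 10 * np.exp(-0.5 * ((hours - 8) / 2.5) ** 2) \
                  + 8 * np.exp(-0.5 * ((hours - 18) / 2.5) ** 2)      # two rush-hour humps
nightlife_like = 50 + 12 * np.exp(-0.5 * ((hours - 23) / 3.0) ** 2)   # evening/night peak
profiles = np.vstack([traffic_like + rng.normal(0, 1.0, 24) for _ in range(15)] +
                     [nightlife_like + rng.normal(0, 1.0, 24) for _ in range(15)])

X = StandardScaler().fit_transform(profiles)

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
hier_labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)

print("k-means cluster sizes:      ", np.bincount(kmeans_labels))
print("agglomerative cluster sizes:", np.bincount(hier_labels))
```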
21

Siebein, Gary W., Keely M. Siebein, Marylin Roa, Jennifer Miller, Gary Siebein, and Matthew Vetterick. "Working towards soundscape compatibility of indoor and outdoor shooting ranges with surrounding properties." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A160. http://dx.doi.org/10.1121/10.0015882.

Abstract:
This paper explores methods to help make shooting ranges compatible with surrounding properties. Sounds from outdoor firing ranges can propagate over 2 miles from the range depending upon topography, vegetation, background sound levels, numbers of shooters, weapon types, and mitigation systems employed at the range. Many communities have regulations for maximum sound levels that can be propagated from one property to another. The impulsive nature of gun shots produces sounds that are not easily measured using conventional acoustical metrics and sound level meters. These items can be studied using three-dimensional computer models. Acoustical data for different weapon types are used as the sound sources. Mitigation options such as shooting sheds, berms, and other strategies can be studied as part of the design process to optimize sonic compatibility with neighboring properties. Similar processes are used for partially and fully enclosed ranges with the addition of the walls, roofs, doors, and HVAC systems for the range included in the models. An architect, engineer, and other design team members work to design specific systems to provide the required mitigation methods. Consultants can evaluate the cost of implementing the mitigation measures so that sonic compatibility is addressed prior to using the range.
22

Al-Qudsy, Zainab N. Al, Zainab Mahmood Fadhil, Refed Adnan Jaleel, and Musaddak Maher Abdul Zahra. "Blockchain and 1D-CNN based IoTs for securing and classifying of PCG sound signal data." Fusion: Practice and Applications 12, no. 2 (2023): 28–41. http://dx.doi.org/10.54216/fpa.120203.

Abstract:
The Internet of Things (IoTs) has accelerated with the introduction of powerful biomedical sensors, and telemedicine services and population ageing are concerns that can be addressed by smart healthcare systems. However, the security of medical signal data collected from the sensors of IoTs technology while it is being transmitted over public channels has grown to be a serious problem that has limited the adoption of intelligent healthcare systems. This paper therefore suggests using blockchain technology to create a safe and reliable channel for heart sound (PCG) signals collected over wireless body area networks. The security plan offers a dependable and safe environment for all data flowing from the back end to the front end. In addition, to classify heart sound signals, we suggest a one-dimensional convolutional neural network (1D-CNN) model. A denoising autoencoder extracts the heart sounds' deep features as the input features of the 1D-CNN. Because a Data Denoising Auto Encoder (DDAE) is used instead of the standard MFCC to extract the detailed characteristics from the PCG signals, the suggested model shows a significant improvement. The system's benefits include a simpler encryption algorithm and a more capable and effective blockchain-based data transfer and storage system.
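The classification side can be illustrated with a minimal 1D-CNN in PyTorch; the layer sizes, five-class output and random stand-in inputs are assumptions, and the paper feeds the network denoising-autoencoder features rather than raw audio:

```python
import torch
import torch.nn as nn

class PCG1DCNN(nn.Module):
    """Minimal 1D-CNN classifier over fixed-length heart-sound feature sequences."""
    def __init__(self, in_channels=1, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# Illustrative training step on random stand-in data (2-second clips at 1 kHz).
model = PCG1DCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(8, 1, 2000)
y = torch.randint(0, 5, (8,))
logits = model(x)
loss = loss_fn(logits, y)
opt.zero_grad(); loss.backward(); opt.step()
print(logits.shape, float(loss))
```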
23

Konishi, K., and Z. Maekawa. "Interpretation of long term data measured continuously on long range sound propagation over sea surfaces." Applied Acoustics 62, no. 10 (October 2001): 1183–210. http://dx.doi.org/10.1016/s0003-682x(00)00096-7.

24

Abadi, Shima, Tor A. Bjorklund, Junzhe Liu, and H. P. Johnson. "Detection and monitoring of seafloor methane bubbles using hydrophones." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A290. http://dx.doi.org/10.1121/10.0016312.

Abstract:
Natural marine seeps of methane are important sources of greenhouse gas emissions that enter the global environment. Monitoring marine methane seeps will reveal important information about how they form, their source regions, and how much of the inventory is microbially consumed within the water column before the gas is released to the atmosphere. While active acoustics methods have been extensively used to detect and monitor methane emissions from the seafloor, there are only a few studies showing the use of passive acoustics for bubble sound detection and monitoring. In this presentation, we use passive acoustics data recorded in a methane seep field in Puget Sound to characterize the bubble sound and estimate the bubble radii from their sound frequency. We use the ratio of a short-term average from a bubble burst over the long-term average to automatically detect the bubble sounds. We also show the relationship between the number of bubble sound detections and the hydrostatic seafloor pressure fluctuations due to tidal changes in water height.
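The short-term-average over long-term-average detector and the frequency-to-radius step can be sketched as follows; the window lengths, threshold, seep depth and Minnaert-resonance conversion are illustrative choices, not the authors' exact settings:

```python
import numpy as np

def sta_lta(x, fs, sta_win=0.02, lta_win=1.0):
    """Short-term-average over long-term-average ratio of signal energy, a simple burst detector."""
    e = x ** 2
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(e, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(e, np.ones(lta_n) / lta_n, mode="same") + 1e-12
    return sta / lta

def minnaert_radius(f0, depth_m=100.0, rho=1025.0, gamma=1.4, p_atm=101325.0):
    """Bubble radius (m) from its resonance frequency via the Minnaert relation."""
    p = p_atm + rho * 9.81 * depth_m            # hydrostatic pressure at the assumed seep depth
    return np.sqrt(3 * gamma * p / rho) / (2 * np.pi * f0)

# Illustrative usage: detect a synthetic 2 kHz decaying bubble "ping" buried in noise.
fs = 64000
rng = np.random.default_rng(5)
x = 0.05 * rng.standard_normal(fs)
t_burst = np.arange(int(0.01 * fs)) / fs
x[32000:32000 + len(t_burst)] += np.sin(2 * np.pi * 2000 * t_burst) * np.exp(-t_burst / 0.003)
ratio = sta_lta(x, fs)
detections = np.where(ratio > 5.0)[0]
print("burst detected near sample", detections[0] if len(detections) else None,
      "| radius for a 2 kHz bubble at 100 m:", f"{minnaert_radius(2000.0):.4f} m")
```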
25

Rizal, Achmad, Risanuri Hidayat, Hanung Adi Nugroho, and Willy Anugrah Cahyadi. "Lung sound classification using multiresolution Higuchi fractal dimension measurement." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 5 (October 1, 2023): 5091. http://dx.doi.org/10.11591/ijece.v13i5.pp5091-5100.

Abstract:
<span lang="EN-GB">Lung sound is one indicator of abnormalities in the lungs and respiratory tract. Research for automatic lung sound classification has become one of the interests for researchers because lung disease is one of the diseases with the most sufferers in the world. The use of lung sounds as a source of information because of the ease in data acquisition and auscultation is a standard method in examining pulmonary function. This study simulated the potential use of Higuchi fractal dimension (HFD) as a feature extraction method for lung sound classification. HFD calculations were run on a series of </span><em><span lang="EN-GB">k</span></em><span lang="EN-GB"> values to generate some HFD values as features. According to the simulation results, the proposed method could produce an accuracy of up to 97.98% for five classes of lung sound data. The results also suggested that the shift in HFD values over the selection of a time interval </span><em><span lang="EN-GB">k</span></em><span lang="EN-GB"> can be used for lung sound classification.</span>
26

Menze, Sebastian, Daniel P. Zitterbart, Ilse van Opzeeland, and Olaf Boebel. "The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound." Royal Society Open Science 4, no. 1 (January 2017): 160370. http://dx.doi.org/10.1098/rsos.160370.

Abstract:
This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales ( Balaenoptera musculus intermedia ), fin whales ( Balaenoptera physalus ), Antarctic minke whales ( Balaenoptera bonaerensis ) and leopard seals ( Hydrurga leptonyx ). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton.
27

Clarey, J. C., P. Barone, and T. J. Imig. "Functional organization of sound direction and sound pressure level in primary auditory cortex of the cat." Journal of Neurophysiology 72, no. 5 (November 1, 1994): 2383–405. http://dx.doi.org/10.1152/jn.1994.72.5.2383.

Abstract:
1. The functional organization of neuronal tuning to the azimuthal location and sound pressure level (SPL) of noise bursts was studied in high-frequency primary auditory cortex (AI) of barbiturate-anesthetized cats. Three data collection strategies were used to map neural responses: 1) electrode penetrations oriented normal to the cortical surface provided information on the radial organization of neurons' responses; 2) neurons' responses were examined at a few points in the middle cortical layers in multiple normal penetrations across AI to produce fine-grain maps of azimuth and level selectivity; and 3) electrode penetrations oriented tangential to the cortical surface provided information on neurons' responses along the isofrequency dimension. 2. An azimuth-level data set was obtained for each single- or multiple- (multi-) unit recording; this consisted of responses to noise bursts at five SPLs (0–80 dB in 20-dB steps) from seven azimuthal locations in the frontal hemifield (-90 to +90 degrees in 30 degrees steps; 0 degree elevation). An azimuth function was derived from these data by averaging response magnitude over all SPLs at each azimuth tested. A preferred azimuth range (PAR; range of azimuths over which the response was > or = 75% of maximum) was calculated from the azimuth function and provided a level-independent measure of azimuth selectivity. Each PAR was assigned to one of four azimuth preference categories (contralateral-, midline-, ipsilateral-preferring, or broad/multipeaked) according to its location and extent. A level function obtained from the data set (responsiveness averaged over all azimuths) was classified as monotonic if it showed a decrease of < or = 25% (relative to maximum) at the highest SPL tested (usually 80 dB), and nonmonotonic if it showed a decrease of > 25%. The percentage reduction in responsiveness, relative to maximum, at the highest level tested (termed nonmonotonic strength) and the preferred level range (PLR; range of SPLs over which responsiveness was > or = 75% of maximum) of each response was also determined. 3. Normal penetrations typically showed a predominance of one azimuth preference category and/or level function type. The majority of penetrations (26/36: 72.2%) showed statistically significant azimuth preference homogeneity, and approximately one-half (17/36: 47.2%) showed significant level function type homogeneity. Over one-third (13/36) showed significant homogeneity for both azimuth preference and level function type. 4. Mapping experiments (n = 4) sampled the azimuth and level response functions at two or more depths in closely spaced normal penetrations that covered several square millimeters of AI.(ABSTRACT TRUNCATED AT 400 WORDS)
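The preferred azimuth range (PAR) and the monotonic/nonmonotonic level-function criterion defined above translate directly into code; the sketch below uses invented spike counts for a contralateral-preferring unit purely as an illustration:

```python
import numpy as np

def preferred_azimuth_range(azimuths_deg, azimuth_function, threshold=0.75):
    """PAR: span of azimuths where the level-averaged response is >= 75% of its maximum."""
    resp = np.asarray(azimuth_function, dtype=float)
    above = resp >= threshold * resp.max()
    return azimuths_deg[above].min(), azimuths_deg[above].max()

def level_function_type(level_function, cutoff=0.25):
    """Monotonic if the response at the highest SPL drops by <= 25% of the maximum."""
    resp = np.asarray(level_function, dtype=float)
    drop = (resp.max() - resp[-1]) / resp.max()
    return ("monotonic" if drop <= cutoff else "nonmonotonic"), drop * 100.0

# Illustrative usage with invented spike counts (rows: 0..80 dB SPL in 20 dB steps).
azimuths = np.array([-90, -60, -30, 0, 30, 60, 90])        # degrees; contralateral is negative
responses = np.array([[12, 14, 10,  5, 3, 2, 1],
                      [20, 22, 18,  9, 5, 3, 2],
                      [25, 27, 21, 11, 6, 4, 2],
                      [22, 24, 19, 10, 5, 3, 2],
                      [15, 16, 13,  7, 4, 2, 1]], dtype=float)
azimuth_fn = responses.mean(axis=0)                         # average over SPLs at each azimuth
level_fn = responses.mean(axis=1)                           # average over azimuths at each SPL
print("PAR:", preferred_azimuth_range(azimuths, azimuth_fn))
print("level function:", level_function_type(level_fn))
```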
28

Hanisch, Robert, Michael Wise, Masatoshi Ohishi, Heinz Andernach, Marsha Bishop, Daniel Egret, Elizabeth Griffin, et al. "DIVISION B COMMISSION 5: DOCUMENTATION AND ASTRONOMICAL DATA." Proceedings of the International Astronomical Union 11, T29A (August 2015): 84–89. http://dx.doi.org/10.1017/s174392131600065x.

Abstract:
IAU Commission 5, Documentation and Astronomical Data, continued its mission of promoting and supporting sound practices of data management, data dissemination, and data preservation over the past three years. The Commission also prepared its proposal for continuation, with some changes in emphasis, after the IAU's commission restructuring program. Below we report on the activities of the various Working Groups and the one Task Force in Commission 5.
29

Jadhav, Swapnil, Sarvesh Karpe, and Siuli Das. "Sound Classification Using Python." ITM Web of Conferences 40 (2021): 03024. http://dx.doi.org/10.1051/itmconf/20214003024.

Abstract:
Sound plays a significant part in human existence. It is one of the fundamental sensory inputs that we receive from the environment, and it has three principal attributes: amplitude (the loudness of the sound), frequency (the pitch of the sound) and timbre (the quality or identity of the sound, for example the difference in sound between a piano and a violin). A sound event is generated by an action. Humans are highly efficient at learning and recognizing new and varied types of sounds and sound events. There is a lot of ongoing research on automatic sound classification, and it is used in various real-world applications. The paper proposes an evaluation of a background-noise classifier based on a pattern recognition approach using a neural network. The signals submitted to the neural network are described by a set of 12 MFCC (Mel Frequency Cepstral Coefficient) parameters routinely available at the front end of a mobile terminal. The performance of the classifier, assessed in terms of percent misclassification, shows an accuracy ranging between 73% and 95% depending on the duration of the decision window. Passing sound to a machine and expecting an output is treated as a deep learning task that can be highly accurate. This technology is used in our smartphones with mobile assistants such as Siri, Alexa and Google Assistant. In the case of the Google speech recognition dataset, over 94 percent accuracy is obtained when trying to identify one of 20 words, silence or unknown. It is a very difficult task to recognize audio or sound events systematically, process them for identification and produce an output. We work on this using the Python programming language and some deep learning techniques. We develop a basic model as a first step towards an innovative model that can help society and that represents the innovative ideas of engineering students.
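A minimal Python example of the front end described above, 12 MFCCs feeding a small neural network classifier, is sketched below on a synthetic two-class toy problem; librosa and scikit-learn are assumed stand-ins for the authors' exact tooling:

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(y, sr, n_mfcc=12):
    """12 MFCCs averaged over time: the kind of compact front-end feature described above."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Illustrative two-class toy problem (tone-like vs. noise-like clips); real work would use
# labelled recordings such as the Google speech dataset mentioned in the abstract.
sr = 16000
rng = np.random.default_rng(7)
t = np.arange(sr) / sr
clips, labels = [], []
for _ in range(20):
    clips.append(0.5 * np.sin(2 * np.pi * rng.uniform(200, 800) * t)); labels.append(0)
    clips.append(0.5 * rng.standard_normal(sr)); labels.append(1)
X = np.array([mfcc_features(c, sr) for c in clips])
y = np.array(labels)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```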
30

Sutanto, Koko D., Ibrahim M. Al-Shahwan, Mureed Husain, Khawaja G. Rasool, Richard W. Mankin, and Abdulrahman S. Aldawood. "Field Evaluation of Promising Indigenous Entomopathogenic Fungal Isolates against Red Palm Weevil, Rhynchophorus ferrugineus (Coleoptera: Dryophthoridae)." Journal of Fungi 9, no. 1 (January 2, 2023): 68. http://dx.doi.org/10.3390/jof9010068.

Abstract:
The rate of the sounds (i.e., substrate vibrations) produced by the movement and feeding activity of red palm weevil (RPW) pest infestations in a date palm tree was monitored over time after trees were separately treated with injection of entomopathogenic fungal isolates, Beauveria bassiana and Metarhizium anisopliae, or water treatment as the control. The activity sensing device included an accelerometer, an amplifier, a digital recorder, and a signal transmitter that fed the data to a computer that excluded background noise and compared the rates of bursts of movement and feeding sound impulses among treated trees and controls. Observations were made daily for two months. The rates of bursts were representative of the feeding activity of RPW. The unique spectral pattern of sound pulses was typical of the RPW larval feeding activity in the date palm. The microphone confirmed that the same unique tone was produced in each burst. Two months after fungal injection, the RPW sound signal declined, while the RPW sound signal increased in the control date palms (water injection). The mean rates of bursts produced by RPW decreased to zero after the trees were injected with B. bassiana or M. anisopliae compared with the increased rates over time in the control treatment plants.
31

Könecke, Susanne, Jasmin Hörmeyer, Tobias Bohne, and Raimund Rolfes. "A new base of wind turbine noise measurement data and its application for a systematic validation of sound propagation models." Wind Energy Science 8, no. 4 (April 28, 2023): 639–59. http://dx.doi.org/10.5194/wes-8-639-2023.

Abstract:
Abstract. Extensive measurements in the area of wind turbines were performed in order to validate a sound propagation model which is based on the Crank–Nicolson parabolic equation method. The measurements were carried out over a flat grass-covered landscape and under various environmental conditions. During the measurements, meteorological and wind turbine performance data were acquired and acoustical data sets were recorded at distances of 178, 535 and 845 m from the wind turbine. By processing and analysing the measurement data, validation cases and input parameters for the sound propagation model were derived. The validation includes five groups that are characterised by different sound propagation directions, i.e. downwind, crosswind and upwind conditions in varying strength. In strong upwind situations, the sound pressure levels at larger distances are overestimated because turbulence is not considered in the modelling. In the other directions, the model reproduces the measured sound propagation losses well in the overall sound pressure level and in the third octave band spectra. As in the recorded measurements, frequency-dependent maxima and minima are identified, and losses generally increase with increasing distance and frequency. The agreement between measured and modelled sound propagation losses decreases with distance. The data sets used in the validation are freely accessible for further research.
32

Mao, Yi Min, Xiao Fang Xue, and Jin Qing Chen. "An Intrusion Detection Model Based on Mining Maximal Frequent Itemsets over Data Streams." Applied Mechanics and Materials 339 (July 2013): 341–48. http://dx.doi.org/10.4028/www.scientific.net/amm.339.341.

Abstract:
Mining association rules has been proven to be an important method for detecting intrusions. To improve the response speed and detection precision of current intrusion detection systems, this paper proposes an intrusion detection system model, MMFIID-DS. Firstly, to improve the response speed of the system by greatly reducing the search space, various pruning strategies are proposed to mine the maximal frequent itemsets of the trained normal data set, the abnormal data set and the current data streams, so as to establish the normal and abnormal behavior patterns as well as the user behavior pattern of the system. Besides, to improve the detection precision of the system, misuse detection and anomaly detection techniques are combined. Both theoretical and experimental results indicate that the MMFIID-DS intrusion detection system is fairly sound in performance.
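Maximal frequent itemset mining, the core operation named above, can be sketched with a level-wise enumeration followed by a maximality filter; the toy "connection feature" transactions and the simple Apriori-style search stand in for the paper's pruning strategies:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise (Apriori-style) enumeration of all frequent itemsets and their supports."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    support = lambda s: sum(s <= t for t in transactions)
    frequent, k, current = {}, 1, [frozenset([i]) for i in items]
    while current:
        level = {c: n for c in current if (n := support(c)) >= min_support}
        frequent.update(level)
        # candidate generation: join frequent k-itemsets whose union has k+1 items
        keys = list(level)
        current = list({a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1})
        k += 1
    return frequent

def maximal_itemsets(frequent):
    """Keep only itemsets that have no frequent proper superset."""
    return {s: n for s, n in frequent.items() if not any(s < t for t in frequent)}

# Illustrative usage on toy "connection feature" transactions (stand-ins, not real audit data).
events = [{"tcp", "port80", "short"}, {"tcp", "port80", "short"},
          {"tcp", "port22", "long"}, {"udp", "port53", "short"},
          {"tcp", "port80", "long"}]
freq = frequent_itemsets(events, min_support=2)
print(maximal_itemsets(freq))
```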
33

Ranft, Richard. "Natural sound archives: past, present and future." Anais da Academia Brasileira de Ciências 76, no. 2 (June 2004): 456–60. http://dx.doi.org/10.1590/s0001-37652004000200041.

Abstract:
Recordings of wild animals were first made in the Palearctic in 1900, in the Nearctic in 1929, in Antarctica in 1934, in Asia in 1937, and in the Neotropics in the 1940s. However, systematic collecting did not begin until the 1950s. Collections of animal sound recordings serve many uses in education, entertainment, science and nature conservation. In recent years, technological developments have transformed the ways in which sounds can be sampled, stored and accessed. Now the largest collections between them hold altogether around 0.5 million recordings with their associated data. The functioning of a major archive will be described with reference to the British Library Sound Archive. Preserving large collections for the long term is a primary concern in the digital age. While digitization and digital preservation has many advantages over analogue methods, the rate of technology change and lack of standardization are a serious problem for the world's major audio archives. Another challenge is to make collections more easily and widely accessible via electronic networks. On-line catalogues and access to the actual sounds via the internet are already available for some collections. Case studies describing the establishment and functioning of sound libraries in Mexico, Colombia and Brazil are given in individually authored sections in an Appendix.
34

Hou, Lulu, Wenrui Duan, Guozhe Xuan, Shanpeng Xiao, Yuan Li, Yizheng Li, and Jiahao Zhao. "Intelligent Microsystem for Sound Event Recognition in Edge Computing Using End-to-End Mesh Networking." Sensors 23, no. 7 (March 31, 2023): 3630. http://dx.doi.org/10.3390/s23073630.

Abstract:
Wireless acoustic sensor networks (WASNs) and intelligent microsystems are crucial components of the Internet of Things (IoT) ecosystem. In various IoT applications, small, lightweight, and low-power microsystems are essential to enable autonomous edge computing and networked cooperative work. This study presents an innovative intelligent microsystem with wireless networking capabilities, sound sensing, and sound event recognition. The microsystem is designed with optimized sensing, energy supply, processing, and transceiver modules to achieve small size and low power consumption. Additionally, a low-computational sound event recognition algorithm based on a Convolutional Neural Network has been designed and integrated into the microsystem. Multiple microsystems are connected using low-power Bluetooth Mesh wireless networking technology to form a meshed WASN, which is easily accessible, flexible to expand, and straightforward to manage with smartphones. The microsystem is 7.36 cm3 in size and weighs 8 g without housing. The microsystem can accurately recognize sound events in both trained and untrained data tests, achieving an average accuracy of over 92.50% for alarm sounds above 70 dB and water flow sounds above 55 dB. The microsystems can communicate wirelessly with a direct range of 5 m. It can be applied in the field of home IoT and border security.
35

Underwood, Samuel, and Lily Wang. "Compilation of restaurant acoustics data logged in Omaha, Nebraska." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 264, no. 1 (June 24, 2022): 603–7. http://dx.doi.org/10.3397/nc-2022-781.

Abstract:
The high levels of noise in many restaurants continue to be a source of discomfort and complaints for customers. A measurement campaign of local occupied restaurants has been underway at the University of Nebraska since 2019, to gather data towards understanding how owners and acoustical consultants can better design restaurant sound fields. A review of the results obtained to date is presented, including the sound levels experienced in a restaurant over the course of an evening, those levels compared to occupancy, octave band distributions, statistical levels, how levels change at different locations in a restaurant, and the percent of time that levels in the restaurant exceed 70 dB. One goal in the project is to link the measured objective data to subjective responses of the occupants. Progress on that goal is also discussed in this presentation.
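The summary statistics mentioned above (energy-equivalent level, statistical levels and the percentage of time above 70 dB) are straightforward to compute from logged levels; the sketch below uses a synthetic evening of readings rather than the Omaha measurements:

```python
import numpy as np

def leq(levels_db):
    """Energy-equivalent continuous sound level from a series of short-interval dB readings."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

def percent_time_above(levels_db, threshold_db=70.0):
    """Percent of logged intervals whose level exceeds a threshold (70 dB in the abstract)."""
    return 100.0 * np.mean(np.asarray(levels_db) > threshold_db)

def statistical_level(levels_db, n=10):
    """L_n: the level exceeded n% of the time (e.g., L10, L90)."""
    return np.percentile(levels_db, 100 - n)

# Illustrative usage with a synthetic evening of 1-minute A-weighted levels (not measured data).
rng = np.random.default_rng(8)
evening = 62 + 6 * np.sin(np.linspace(0, np.pi, 240)) + rng.normal(0, 2, 240)   # quiet-busy-quiet
print(f"Leq = {leq(evening):.1f} dBA, "
      f"L10 = {statistical_level(evening, 10):.1f} dBA, "
      f"time above 70 dBA = {percent_time_above(evening):.1f}%")
```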
36

JOHANSSON, NIKLAS, ANDREY ANIKIN, and NIKOLAY ASEYEV. "Color sound symbolism in natural languages." Language and Cognition 12, no. 1 (October 18, 2019): 56–83. http://dx.doi.org/10.1017/langcog.2019.35.

Abstract:
This paper investigates the underlying cognitive processes of sound–color associations by connecting perceptual evidence from research on cross-modal correspondences to sound symbolic patterns in the words for colors in natural languages. Building upon earlier perceptual experiments, we hypothesized that sonorous and bright phonemes would be over-represented in the words for bright and saturated colors. This hypothesis was tested on eleven color words and related concepts (red–green, yellow–blue, black–white, gray, night–day, dark–light) from 245 language families. Textual data was transcribed into the International Phonetic Alphabet (IPA), and each phoneme was described acoustically using high-quality IPA recordings. These acoustic measurements were then correlated with the luminance and saturation of each color obtained from cross-linguistic color-naming data in the World Color Survey. As expected, vowels with high brightness and sonority ratings were over-represented in the words for colors with high luminance, while sonorous consonants were more common in the words for saturated colors. We discuss these results in relation to lexicalization patterns and the links between iconicity and perceptual cross-modal associations.
APA, Harvard, Vancouver, ISO, and other styles
37

Jen, Chih-Hung, and Chien-Chih Wang. "Real-Time Process Monitoring Based on Multivariate Control Chart for Anomalies Driven by Frequency Signal via Sound and Electrocardiography Cases." Processes 9, no. 9 (August 26, 2021): 1510. http://dx.doi.org/10.3390/pr9091510.

Full text
Abstract:
Recent developments in network technologies have led to the application of cloud computing and big data analysis in industrial automation. However, the automation of process monitoring still has numerous issues that need to be addressed. Traditionally, offline statistical methods are used for process monitoring; thus, problems are often detected too late. This study focused on the construction of an automated process monitoring system based on sound and vibration frequency signals. First, empirical mode decomposition was applied to obtain intrinsic mode functions, which were used to construct different sound frequency combinations and differentiate sound frequencies according to anomalies. Then, linear discriminant analysis (LDA) was adopted to classify abnormal and normal sound frequency signals, and a control line was constructed to monitor the sound frequency. In a case study, the proposed method was applied to detect abnormal sounds at high and low frequencies, and a detection accuracy of over 90% was realized. In another case study, the proposed method was applied to analyze electrocardiography signals and was similarly able to identify abnormal situations. Thus, the proposed method can be applied to real-time process monitoring and the detection of abnormalities with high accuracy in various situations.
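The EMD-plus-LDA pipeline summarized above could be prototyped along the following lines in Python, assuming the PyEMD package (pip install EMD-signal) and scikit-learn; the per-IMF energy features, the synthetic signals, and the mean-plus-3-SD control line are simplifying assumptions of this sketch, not the authors' exact design.

import numpy as np
from PyEMD import EMD                                    # pip install EMD-signal
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def imf_energy_features(signal, max_imfs=5):
    """Decompose one frame into IMFs and use per-IMF energies as features."""
    imfs = EMD().emd(signal)
    energies = np.array([np.sum(imf ** 2) for imf in imfs[:max_imfs]])
    out = np.zeros(max_imfs)
    out[:energies.size] = energies
    return out

# Synthetic labelled frames: 0 = normal hum, 1 = abnormal high-frequency component.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2048)
normal = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
abnormal = [np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
            + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
X = np.array([imf_energy_features(s) for s in normal + abnormal])
y = np.array([0] * 20 + [1] * 20)

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)                        # 1-D discriminant scores
ucl = scores[y == 0].mean() + 3 * scores[y == 0].std()   # crude mean + 3 SD control line
print("frames above the control line:", int(np.sum(scores > ucl)))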
APA, Harvard, Vancouver, ISO, and other styles
38

Lewis, Camilla. "Listening to community: The aural dimensions of neighbouring." Sociological Review 68, no. 1 (May 29, 2019): 94–109. http://dx.doi.org/10.1177/0038026119853944.

Full text
Abstract:
This article examines the multisensory nature of everyday life and the ways in which sound shapes experiences of community, presenting findings from a research project, ‘Place and belonging: What can we learn from Claremont Court housing scheme?’ Whilst acknowledging the multisensory nature of perception, the discussion focuses on sound in particular, exploring the different ways that sound (or lack of it) informed residents’ neighbouring practices and sense of community. Despite general fears of ‘loss of community’ due to increasing individualisation, the findings show the continued importance of neighbouring relations, which point to varied types of community attachment. Cases are presented from the data focusing on the themes of nostalgia, uncertainty and feelings of difference. These themes provide telling insights into the ways in which community is experienced and how people living in the same housing scheme interpret sounds differently. All residents were exposed to similar sound ecologies, but their significance and meanings were understood in vastly different ways. The article offers an original contribution by arguing that sound is an important dimension of everyday life in urban settings, which is related to affective and emotional dimensions of community, which have, as yet, been glossed over in the sociological literature.
APA, Harvard, Vancouver, ISO, and other styles
39

Yang, Xiaohong, Yanming Yang, and Jinbao Weng. "A Comparative Study of the Temperature Change in a Warm Eddy Using Multisource Data." Remote Sensing 15, no. 6 (March 18, 2023): 1650. http://dx.doi.org/10.3390/rs15061650.

Full text
Abstract:
An ocean acoustic tomography (OAT) experiment conducted in the northern South China Sea in 2021 measured a month-long record of acoustic travel times along paths of over one hundred kilometers in range. A mesoscale eddy passed through the experimental region during the deployment of four acoustic moorings, providing unique OAT data for examining the deep temperature change in the eddy and for comparison with the Hybrid Coordinate Ocean Model (HYCOM) data. The existence of the eddy is first confirmed by the merged sea level anomaly (MSLA) image and HYCOM data, and it extends beyond the depth of the sound channel axis. The temperature changes measured by temperature and depth (TD)/conductivity–temperature–depth (CTD) loggers and by the OAT sound speed are in accordance with those reflected on the MSLA image during the movement of the eddy. However, the eddy movement implied by temperature changes in the HYCOM data is different from that measured by TD/CTD. The modeled eddy intensity is less than half of the measured eddy intensity. At the sound channel axis depth, a factor of approximately 4.17 m s−1 °C−1 can be used to scale between sound speed and temperature. The transmission/reception path-averaged temperature of the eddy derived from the OAT-computed sound speed at the depth of the sound channel axis is five times greater than that in the HYCOM data. OAT is feasible as a tool to study mesoscale eddy properties in the deep ocean, while HYCOM data are not accurate enough for this mesoscale eddy at the sound channel axis depth. It is suggested that the model be refined using the OAT path-averaged temperature as a constraint when the HYCOM data capture the mesoscale eddies.
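The quoted scaling factor lets one convert a path-averaged sound-speed change into an implied temperature change directly; a tiny worked example follows, where the 0.5 m/s sound-speed change is an invented value used only to show the arithmetic.

# Converting a path-averaged sound-speed change into an implied temperature
# change using the ~4.17 m/s per degree C factor quoted above.
dc = 0.5                    # sound-speed change along the path, m/s (example value)
k = 4.17                    # m/s per degree C near the sound channel axis
dT = dc / k
print(f"implied path-averaged temperature change: {dT:.2f} deg C")   # ~0.12 deg C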
APA, Harvard, Vancouver, ISO, and other styles
40

Manul’chev, Denis, Andrey Tyshchenko, Mikhail Fershalov, and Pavel Petrov. "Estimating Sound Exposure Levels Due to a Broadband Source over Large Areas of Shallow Sea." Journal of Marine Science and Engineering 10, no. 1 (January 8, 2022): 82. http://dx.doi.org/10.3390/jmse10010082.

Full text
Abstract:
3D sound propagation modeling in the context of acoustic noise monitoring problems is considered. A technique for effective source spectrum reconstruction from a reference single-hydrophone measurement is discussed, and the procedure for simulating the sound exposure level (SEL) distribution over a large sea area is described. The proposed technique is also used for the modeling of pulse signal waveforms at other receiver locations, and the results of a direct comparison with the pulses observed in the experimental data are presented.
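For reference, the sound exposure level of a received pulse can be computed from a sampled pressure waveform as in the sketch below; the synthetic waveform and sample rate are placeholders, and the paper's effective-source-spectrum reconstruction and propagation-modeling steps are not included.

import numpy as np

# SEL of a received pulse from a sampled pressure waveform:
# SEL = 10*log10( sum(p^2)/fs / (p_ref^2 * 1 s) ), with p_ref = 1 uPa underwater.
fs = 8000.0
t = np.arange(0, 0.5, 1 / fs)
p = 1e3 * np.exp(-10 * t) * np.sin(2 * np.pi * 120 * t)   # synthetic pulse, in uPa

p_ref = 1.0                                               # 1 uPa
sel = 10 * np.log10(np.sum(p ** 2) / fs / (p_ref ** 2 * 1.0))
print(f"SEL = {sel:.1f} dB re 1 uPa^2 s")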
APA, Harvard, Vancouver, ISO, and other styles
41

Verburg, Samuel A., Earl G. Williams, and Efren Fernandez-Grande. "Three-dimensional characterization of spatial sound fields via acousto-optic sensing." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A260. http://dx.doi.org/10.1121/10.0016203.

Full text
Abstract:
Characterizing audible sound fields over space is at the core of many applications in acoustics and audio technology, including active sound field control, spatial audio, and the experimental analysis of sound radiation. For this purpose, arrays of microphones are commonly deployed across the sound field, with an inter-microphone spacing that depends on the highest frequency studied. Capturing sound fields at mid and high frequencies is nonetheless challenging, as the number of transducers required becomes impractically large and the scattering of sound introduced by the array is often significant. Acousto-optic sensing enables the remote and non-invasive characterization of acoustic fields, and thus it represents an attractive alternative to conventional electro-mechanical transducers in several applications. The phase shift that laser beams experience as they travel through a pressure field is measured via optical interferometry. The optical measurements are then used to reconstruct the field that gave rise to these phase variations. In this study, we project the measured data onto a set of functions suitable for representing sound fields over space. The projection makes it possible to improve the reconstruction accuracy and extrapolate the reconstruction outside the measured region. We demonstrate the method by reconstructing the three-dimensional sound field inside a room using optical data.
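A minimal sketch of the projection idea follows, assuming a plane-wave basis and ordinary least squares on point pressure samples; real acousto-optic data are path integrals along the laser beams, which would change only how the rows of the sensing matrix are formed. All geometry and field values here are synthetic.

import numpy as np

rng = np.random.default_rng(2)
c, f = 343.0, 500.0
k = 2 * np.pi * f / c                                   # wavenumber at 500 Hz

n_waves, n_mics = 200, 60
dirs = rng.standard_normal((n_waves, 3))                # random plane-wave directions
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
r_meas = rng.uniform(-0.5, 0.5, (n_mics, 3))            # measurement positions, m

H = np.exp(-1j * k * r_meas @ dirs.T)                   # n_mics x n_waves sensing matrix
p_meas = H[:, 10] + 0.5 * H[:, 50]                      # synthetic field of two plane waves
coeffs = np.linalg.lstsq(H, p_meas, rcond=1e-3)[0]      # least-squares projection

r_new = np.array([[0.7, 0.0, 0.0]])                     # a point outside the measured region
p_new = np.exp(-1j * k * r_new @ dirs.T) @ coeffs       # extrapolated pressure
print(abs(p_new))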
APA, Harvard, Vancouver, ISO, and other styles
42

Gavriely, N., and D. W. Cugell. "Airflow effects on amplitude and spectral content of normal breath sounds." Journal of Applied Physiology 80, no. 1 (January 1, 1996): 5–13. http://dx.doi.org/10.1152/jappl.1996.80.1.5.

Full text
Abstract:
Even though it is well known that breath-sound amplitude (BSA) increases with airflow, the exact quantitative relationships and their distribution within the relevant frequency range have not yet been determined. To evaluate these relationships, the spectral content of tracheal and chest wall breath sounds was measured during breath hold, inspiration, and expiration in six normal men. Average spectra were measured at six flow rates from 0.5 to 3.0 l/s. The areas under the spectral curves of the breath sounds minus the corresponding areas under the breath-hold spectra (BSA) were found to have power relationships with flow (F), best modeled as BSA = k·F^α, where k and α are constants. The overall mean ± SD value of the power (α) was 1.66 ± 0.35, significantly less than the previously reported second power. Isoflow inspiratory chest wall sound amplitudes were 1.99 ± 0.70- to 2.43 ± 0.65-fold larger than the amplitudes of the corresponding expiratory sounds, whereas tracheal sound amplitudes were not dependent on respiratory phase. Isoflow breath sounds from the left posterior base were 32% louder than those from the right lung base (P < 0.01). BSA–F relationships were not frequency dependent during expiration but were significantly stronger in higher than in lower frequencies during inspiration over both posterior bases. These data are compatible with sound generation by turbulent flow in a bifurcating network with 1) flow separation, 2) downstream movement of eddies, and 3) collision of fast-moving cores of the inflowing air with carinas, all occurring during inspiration but not during expiration.
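The power-law model BSA = k·F^α can be fitted by linear regression in log-log coordinates; the sketch below uses made-up flow and amplitude values purely to show the procedure.

import numpy as np

# Fit BSA = k * F**alpha via linear regression in log-log coordinates:
# log(BSA) = log(k) + alpha * log(F). Flow and amplitude values are invented.
rng = np.random.default_rng(3)
F = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])                        # flow, l/s
BSA = 2.0 * F ** 1.66 * (1 + 0.05 * rng.standard_normal(F.size))    # noisy amplitudes

alpha, log_k = np.polyfit(np.log(F), np.log(BSA), 1)
print(f"alpha ~ {alpha:.2f}, k ~ {np.exp(log_k):.2f}")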
APA, Harvard, Vancouver, ISO, and other styles
43

Mohino-Herranz, Inma, Joaquín García-Gómez, Miguel Aguilar-Ortega, Manuel Utrilla-Manso, Roberto Gil-Pita, and Manuel Rosa-Zurera. "Introducing the ReaLISED Dataset for Sound Event Classification." Electronics 11, no. 12 (June 7, 2022): 1811. http://dx.doi.org/10.3390/electronics11121811.

Full text
Abstract:
This paper presents the Real-Life Indoor Sound Event Dataset (ReaLISED), a new database which has been developed to contribute to scientific advances by providing a large amount of real, labeled indoor audio event recordings. It offers the scientific community the possibility of testing Sound Event Classification (SEC) algorithms. The full set is made up of 2479 sound clips of 18 different events, which were recorded following a precise recording process described in the proposal. This, together with the described procedure for testing the similarity of new audio, makes the dataset scalable and opens the door to its future growth, should researchers wish to extend it. The full set presents a good balance in terms of the number of recordings of each type of event, which is a desirable characteristic of any dataset. The main limitation of the provided data is that all the audio is recorded in indoor environments, which was the aim behind this development. To test the quality of the dataset, both the intraclass and the interclass similarities were evaluated. The first was studied through the calculation of the intraclass Pearson correlation coefficient and further discarding of redundant audio, while the second was evaluated by creating, training, and testing different classifiers: linear and quadratic discriminants, k-Nearest Neighbors (kNN), Support Vector Machines (SVM), Multilayer Perceptron (MLP), and Deep Neural Networks (DNN). Experiments were first carried out over the entire dataset, and later over three different groups (impulsive sounds, non-impulsive sounds, and appliances), each composed of six classes chosen according to the results on the entire dataset. This clustering shows the usefulness of following a two-step classification process.
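An intraclass redundancy check of the kind described (Pearson correlation followed by discarding near-duplicates) might look like the following sketch; the feature vectors and the 0.95 threshold are assumptions for illustration, not the dataset's actual criteria.

import numpy as np

def deduplicate(features, threshold=0.95):
    """Keep a clip only if its Pearson correlation with every kept clip is below the threshold."""
    kept = []
    for i, f in enumerate(features):
        if not any(np.corrcoef(f, features[j])[0, 1] > threshold for j in kept):
            kept.append(i)
    return kept

rng = np.random.default_rng(4)
base = rng.standard_normal(128)
clips = np.stack([base + 0.05 * rng.standard_normal(128) for _ in range(5)]    # near-duplicates
                 + [rng.standard_normal(128) for _ in range(5)])               # distinct clips
print(deduplicate(clips))   # typically keeps one near-duplicate plus the five distinct clips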
APA, Harvard, Vancouver, ISO, and other styles
44

Bard, Seth. "Empirical validation of an angle-error model for computing free-field sound power using a cylindrical enveloping surface." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 266, no. 2 (May 25, 2023): 615–26. http://dx.doi.org/10.3397/nc_2023_0091.

Full text
Abstract:
The angle error inherently biases the calculated sound power level when determined using free-field sound pressure level measurements over an enveloping surface. A regression equation was previously created to predict the angle error associated with omnidirectional sources using just a few easily obtainable input variables related to the geometry of the source and the enveloping measurement surface. This paper assesses the accuracy of that predictive regression equation through an empirical study. A dodecahedron loudspeaker was used as a noise source over a range of positions within the reference box. The modeled angle error correction was applied to the computed sound power levels using measurement data from multiple cylindrical enveloping surfaces in accordance with ISO 3744 and ISO 7779. The corrected sound power levels were compared to the measured sound power levels of the source per ISO 3741 direct and comparison methods.
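For context, the uncorrected free-field sound power computed from an enveloping surface follows Lw = Lp_avg + 10·log10(S/S0) with S0 = 1 m². The sketch below uses invented microphone levels and cylinder dimensions and omits the background-noise and angle-error corrections that are the subject of the paper.

import numpy as np

# Uncorrected free-field sound power from levels on a cylindrical enveloping
# surface: Lw = Lp_avg + 10*log10(S / 1 m^2); mic levels and geometry are invented.
lp = np.array([72.1, 71.4, 73.0, 72.6, 70.9, 71.8])      # microphone levels, dB
lp_avg = 10 * np.log10(np.mean(10 ** (lp / 10)))         # energy-averaged surface level

radius, height = 1.5, 2.0                                # cylinder over a reflecting plane, m
S = 2 * np.pi * radius * height + np.pi * radius ** 2    # lateral area plus top cap
lw = lp_avg + 10 * np.log10(S / 1.0)
print(f"Lw ~ {lw:.1f} dB")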
APA, Harvard, Vancouver, ISO, and other styles
45

Demiray, Burcu, Minxia Luo, and Matthew Grilli. "Sounds of Healthy Aging: Assessing Everyday Cognitive Activity From Real-Life Audio Data." Innovation in Aging 4, Supplement_1 (December 1, 2020): 622. http://dx.doi.org/10.1093/geroni/igaa057.2119.

Full text
Abstract:
The healthy aging model of the World Health Organization (2015) highlights the value of assessing and monitoring everyday activities in understanding health in old age. This symposium includes four studies that used the Electronically Activated Recorder (EAR), a portable recording device that periodically collects sound snippets in everyday life, to assess various real-life cognitive activities in the context of healthy aging. The four studies collected over 100,000 sound snippets (30 seconds long) over a few days from young and older adults in the US and Switzerland. Participants’ speech in the sound snippets was transcribed and coded for different cognitive activity information. Specifically, Haas and Kliegel investigated the “prospective memory paradox” by examining the commonalities and differences in utterances about retrospective and prospective memory failures in young and older adults’ everyday conversations. Demiray and colleagues investigated the relation between autobiographical memory functions and conversation types in young and older adults in relation to well-being. Luo and colleagues identified the compensatory function of real-world contexts in cognitive aging: their study showed that older adults benefited from talking with their spouse in producing complex grammatical structures. Finally, Polsinelli and colleagues found robust associations between language markers (e.g., prepositions, more numbers) and executive functions, highlighting the potential use of spontaneous speech in predicting cognitive status in healthy older adults. Prof. Matthew Grilli will then serve as a discussant and provide an integrative discussion of the papers, informed by his extensive work on the clinical and cognitive neuroscience of memory in relation to real-life contexts.
APA, Harvard, Vancouver, ISO, and other styles
46

Testa, J. Ward. "Over-winter movements and diving behavior of female Weddell seals (Leptonychotes weddellii) in the southwestern Ross Sea, Antarctica." Canadian Journal of Zoology 72, no. 10 (October 1, 1994): 1700–1710. http://dx.doi.org/10.1139/z94-229.

Full text
Abstract:
The movements and diving behavior of 18 adult female Weddell seals (Leptonychotes weddellii) were determined by satellite telemetry during the over-winter period in 1990 and 1991. Nine seals provided diving and movement data for 8–9 months. Seals that normally bred in the eastern part of McMurdo Sound spent most of the winter in the middle and northern parts of McMurdo Sound before the annual shore-fast ice had formed in those areas, or in the pack ice 0–50 km north of the sound and Ross Island. This is a greater use of pack ice, as opposed to shore-fast ice, in winter than was previously believed. Some long-distance movements (one over 1500 km in total) to the middle and northwestern parts of the Ross Sea also occurred. Although highly variable within and between individuals, dives indicative of foraging were primarily to mid-water regions (100–350 m) in both years, and were similar to those that have been observed in spring and summer, when Pleuragramma antarcticum is the primary prey of Weddell seals in McMurdo Sound.
APA, Harvard, Vancouver, ISO, and other styles
47

Mushtaq, Zohaib, and Shun-Feng Su. "Efficient Classification of Environmental Sounds through Multiple Features Aggregation and Data Enhancement Techniques for Spectrogram Images." Symmetry 12, no. 11 (November 3, 2020): 1822. http://dx.doi.org/10.3390/sym12111822.

Full text
Abstract:
Over the past few years, the study of environmental sound classification (ESC) has become very popular due to the intricate nature of environmental sounds. This paper reports our study on employing various acoustic feature aggregation and data enhancement approaches for the effective classification of environmental sounds. The proposed data augmentation techniques are mixtures of the reinforcement, aggregation, and combination of distinct acoustic features. These features are known as spectrogram image features (SIFs) and are retrieved by different audio feature extraction techniques. All audio features used in this manuscript are categorized into two groups: one with general features and the other with Mel filter bank-based acoustic features. Two novel features based on the logarithmic scale of the Mel spectrogram (Mel), Log (Log-Mel) and Log (Log (Log-Mel)), denoted as L2M and L3M, are introduced in this paper. In our study, three prevailing ESC benchmark datasets, ESC-10, ESC-50, and Urbansound8k (Us8k), are used. Most of the audio clips in these datasets are not fully occupied by sound and include silent segments; silence trimming is therefore implemented as one of the pre-processing techniques. The training is conducted using the transfer learning model DenseNet-161, which is further fine-tuned with individual optimal learning rates based on the discriminative learning technique. The proposed methodologies attain state-of-the-art outcomes for all used ESC datasets, i.e., 99.22% for ESC-10, 98.52% for ESC-50, and 97.98% for Us8k. This work also considers real-time audio data to evaluate the performance and efficiency of the proposed techniques, and the implemented approaches achieve competitive results on such data.
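The nested log-Mel features (Log-Mel, L2M, L3M) can be computed with librosa roughly as follows; the synthetic test tone and the small offsets used to keep each logarithm's argument positive are assumptions of this sketch rather than details taken from the paper.

import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
y = (0.5 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)   # synthetic stand-in clip

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
eps = 1e-6
log_mel = np.log(mel + eps)
l2m = np.log(log_mel - log_mel.min() + eps)   # Log(Log-Mel), shifted to stay positive
l3m = np.log(l2m - l2m.min() + eps)           # Log(Log(Log-Mel)), shifted to stay positive
print(mel.shape, log_mel.shape, l2m.shape, l3m.shape)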
APA, Harvard, Vancouver, ISO, and other styles
48

Olson, Kenneth S., and John Hajek. "The phonetic status of the labial flap." Journal of the International Phonetic Association 29, no. 2 (December 1999): 101–14. http://dx.doi.org/10.1017/s0025100300006484.

Full text
Abstract:
The labial flap is a speech sound which has received little attention in the literature. In this paper, we document the articulation of the sound, including audio and video data from Mono (D.R. Congo, Ubangian). The sound is attested in over sixty languages and has been incorporated into the phonological system of at least a dozen of them. The sound is easily describable in terms of values of phonological features or phonetic parameters, and it appears to have arisen independently in at least two regions of the world. These factors argue for the inclusion of the sound in the International Phonetic Alphabet.
APA, Harvard, Vancouver, ISO, and other styles
49

Vincent, R. F. "An Assessment of the Lancaster Sound Polynya Using Satellite Data 1979 to 2022." Remote Sensing 15, no. 4 (February 9, 2023): 954. http://dx.doi.org/10.3390/rs15040954.

Full text
Abstract:
Situated between Devon Island and Baffin Island, Lancaster Sound is part of Tallurutiup Imanga, which is in the process of becoming the largest marine conservation area in Canada. The cultural and ecological significance of the region is due, in part, to a recurring polynya in Lancaster Sound. The polynya is demarcated by an ice arch that generally forms in mid-winter and collapses in late spring or early summer. Advanced Very High Resolution Radiometer (AVHRR) imagery from 1979 to 2022 was analyzed to determine the position, formation, and collapse of the Lancaster Sound ice arch. The location of the ice arch demonstrates high interannual variability, with 512 km between the eastern and western extremes, resulting in a polynya area that can fluctuate between 6000 km² and 40,000 km². The timing of the seasonal ice arch formation and collapse has implications with respect to ice transport through Lancaster Sound and the navigability of the Northwest Passage. The date of both the formation and collapse of the ice arch is variable from season to season, with the formation observed between November and April and collapse usually occurring in June or July. A linear trend from 1979 to 2022 indicates that seasonal ice arch duration has declined from 150 to 102 days. The reduction in ice arch duration is a result of earlier collapse dates over the study period and later formation dates, particularly from 1979 to 2000. Lancaster Sound normally freezes west to east each season until the ice arch is established, but there is no statistical relationship between the ice arch location and duration. Satellite surface temperature mapping of the region indicates that the polynya is characterized by sub-resolution leads during winter.
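The reported decline in ice-arch duration corresponds to a linear trend of roughly (102 − 150)/43 ≈ −1.1 days per year; the sketch below fits such a trend, with synthetic year/duration pairs standing in for the satellite-derived record.

import numpy as np

years = np.arange(1979, 2023)
rng = np.random.default_rng(5)
# Synthetic stand-in record: a decline from ~150 to ~102 days plus noise.
duration = np.linspace(150, 102, years.size) + rng.normal(0, 10, years.size)

slope, intercept = np.polyfit(years, duration, 1)
print(f"trend ~ {slope:.2f} days per year")   # roughly (102 - 150) / 43 ~ -1.1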
APA, Harvard, Vancouver, ISO, and other styles
50

Mayo, Paul G., and Matthew J. Goupell. "The changes to interaural acoustics imparted by placing circumaural headphones over bilateral cochlear-implant sound processors." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A126. http://dx.doi.org/10.1121/10.0010862.

Full text
Abstract:
Interaural level differences (ILDs) are the primary binaural cue used by bilateral cochlear-implant (BiCI) listeners for horizontal-plane sound localization. As such, factors affecting their fidelity are important to understand. One approach for delivering binaural stimuli to BiCI listeners is to present virtually spatialized signals via circumaural headphones placed over the listener’s sound processors. An assumption of this approach is that the binaural cues presented are relatively unaltered by the transmission process and headphone placement; yet there is a lack of evidence supporting this assumption. Therefore, this study measured the effect of small changes to headphone placement on the ILDs received by BiCI sound processors. The intent was to determine both the extent and frequency range in which ILDs are affected using recordings of sine-sweeps received by the device microphones. Results show that slight changes in headphone placement affect coupling to the sound processor microphones and outer ear, primarily influencing ILDs between 1 and 5 kHz as much as 8.6 dB at the midline. They also show that occluding the listener’s ear canals with earplugs or moldable putty significantly reduces ILD variability from inconsistent headphone placement. In summary, these data suggest ways to improve this presentation method for BiCI research studies.
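A band-wise ILD estimate from a pair of processor-microphone recordings could be computed as in the sketch below, using Welch spectra from SciPy; the synthetic left/right signals stand in for the actual sine-sweep recordings.

import numpy as np
from scipy.signal import welch

fs = 16000
rng = np.random.default_rng(6)
left = rng.standard_normal(fs)                          # 1 s stand-in "left" recording
right = 0.5 * left + 0.1 * rng.standard_normal(fs)      # quieter, noisier "right" recording

f, p_left = welch(left, fs=fs, nperseg=1024)
_, p_right = welch(right, fs=fs, nperseg=1024)
ild_db = 10 * np.log10(p_left / p_right)                # positive = louder on the left

band = (f >= 1000) & (f <= 5000)                        # the 1-5 kHz range noted above
print(f"mean ILD in 1-5 kHz: {ild_db[band].mean():.1f} dB")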
APA, Harvard, Vancouver, ISO, and other styles