Journal articles on the topic 'Data quality and noise'

Consult the top 50 journal articles for your research on the topic 'Data quality and noise.'

1

Van Hulse, Jason, Taghi M. Khoshgoftaar, and Amri Napolitano. "Evaluating the Impact of Data Quality on Sampling." Journal of Information & Knowledge Management 10, no. 03 (September 2011): 225–45. http://dx.doi.org/10.1142/s021964921100295x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Learning from imbalanced training data can be a difficult endeavour, and the task is made even more challenging if the data is of low quality or the size of the training dataset is small. Data sampling is a commonly used method for improving learner performance when data is imbalanced. However, little effort has been put forth to investigate the performance of data sampling techniques when data is both noisy and imbalanced. In this work, we present a comprehensive empirical investigation of the impact of changes in four training dataset characteristics — dataset size, class distribution, noise level and noise distribution — on data sampling techniques. We present the performance of four common data sampling techniques using 11 learning algorithms. The results, which are based on an extensive suite of experiments for which over 15 million models were trained and evaluated, show that: (1) even for relatively clean datasets, class imbalance can still hurt learner performance, (2) data sampling, however, may not improve performance for relatively clean but imbalanced datasets, (3) data sampling can be very effective at dealing with the combined problems of noise and imbalance, (4) both the level and distribution of class noise among the classes are important, as either factor alone does not cause a significant impact, (5) when sampling does improve the learners (i.e. for noisy and imbalanced datasets), RUS and SMOTE are the most effective at improving the AUC, while SMOTE performed well relative to the F-measure, (6) there are significant differences in the empirical results depending on the performance measure used, and hence it is important to consider multiple metrics in this type of analysis, and (7) data sampling rarely hurt the AUC, but only significantly improved performance when data was at least moderately skewed or noisy, while for the F-measure, data sampling often resulted in significantly worse performance when applied to slightly skewed or noisy datasets, but did improve performance when data was either severely noisy or skewed, or contained moderate levels of both noise and imbalance.
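To make the two samplers concrete, here is a minimal pure-NumPy sketch of random undersampling (RUS) and SMOTE, the two techniques the study found most effective. This is an illustrative reimplementation, not the authors' experimental code; in practice a library such as imbalanced-learn would normally be used, and the toy dataset below is a made-up example.

```python
import numpy as np

def random_undersample(X, y, rng):
    """RUS: discard majority-class (y == 0) rows until the classes balance."""
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    keep = rng.choice(majority, size=minority.size, replace=False)
    idx = np.concatenate([minority, keep])
    return X[idx], y[idx]

def smote(X, y, rng, k=5):
    """SMOTE: synthesize minority samples by interpolating between a
    minority point and one of its k nearest minority neighbours."""
    Xmin = X[y == 1]
    n_new = int((y == 0).sum() - (y == 1).sum())
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(Xmin))
        d = np.linalg.norm(Xmin - Xmin[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # k nearest, skipping the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                   # random interpolation factor
        synth.append(Xmin[i] + lam * (Xmin[j] - Xmin[i]))
    Xs = np.vstack([X, np.array(synth)])
    ys = np.concatenate([y, np.ones(n_new, dtype=int)])
    return Xs, ys

# toy imbalanced dataset: roughly 20% minority class
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (rng.random(100) < 0.2).astype(int)
X_rus, y_rus = random_undersample(X, y, rng)
X_sm, y_sm = smote(X, y, rng)
```

Both calls return a class-balanced dataset: RUS by shrinking the majority class, SMOTE by growing the minority class with interpolated points.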
2

Li, Benchong, and Qiong Gao. "Improving data quality with label noise correction." Intelligent Data Analysis 23, no. 4 (September 26, 2019): 737–57. http://dx.doi.org/10.3233/ida-184024.

3

Ning, Ai Min, Cheng Li, and Zhao Liu. "Acoustic Transceiver Optimization Analysis for Downhole Sensor Data Telemetry via Drillstring." Applied Mechanics and Materials 302 (February 2013): 389–94. http://dx.doi.org/10.4028/www.scientific.net/amm.302.389.

Abstract:
Downhole sensor data telemetry using acoustic waves along the drillstring helps determine the physical and chemical properties of the formation and drilling fluid in Logging While Drilling. However, complex drillstring channel characteristics and normal downhole drilling operations often adversely affect the quality of acoustic telemetry. Based on a theoretical channel model, we analyze the effects of optimal transceiver placement on acoustic transmission through a periodic drillstring. Considering the noisy downhole conditions, including surface noise sources, downhole noise sources, and multiple reflection echoes, dual acoustic receivers and an acoustic isolator are analyzed to improve the signal-to-noise ratio and the capacity of the uplink channel. By arranging two receivers spaced one-quarter wavelength apart at the receiving end, the suppression of one-way downlink noises is evaluated with the aid of a transient channel simulation model. The isolation of uplink noise from the drill bit is then investigated, with the isolator placed between the downhole transmitter and the noise source. These analyses show that, despite the complex drillstring features, the available transceiver designs and signal processing techniques can make the drillstring an effective waveguide for transmitting downhole sensor information at a high data rate.
4

Terbe, Dániel, László Orzó, Barbara Bicsák, and Ákos Zarándy. "Hologram Noise Model for Data Augmentation and Deep Learning." Sensors 24, no. 3 (February 1, 2024): 948. http://dx.doi.org/10.3390/s24030948.

Abstract:
This paper introduces a noise augmentation technique designed to enhance the robustness of state-of-the-art (SOTA) deep learning models against degraded image quality, a common challenge in long-term recording systems. Our method, demonstrated through the classification of digital holographic images, utilizes a novel approach to synthesize and apply random colored noise, addressing the typically encountered correlated noise patterns in such images. Empirical results show that our technique not only maintains classification accuracy in high-quality images but also significantly improves it when given noisy inputs without increasing the training time. This advancement demonstrates the potential of our approach for augmenting data for deep learning models to perform effectively in production under varied and suboptimal conditions.
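The paper's key ingredient is synthetic correlated (colored) noise rather than plain white noise. Below is a minimal sketch of one standard way to generate such noise, by shaping a white-noise spectrum with a 1/f**alpha envelope; the paper's actual noise model differs, and the alpha value and noise amplitude here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def colored_noise(shape, alpha, rng):
    """2-D correlated noise: shape a white-noise spectrum by 1/f**alpha.
    alpha = 0 gives white noise; larger alpha gives smoother, more
    spatially correlated noise, normalised to unit variance."""
    h, w = shape
    F = np.fft.fft2(rng.normal(size=shape))
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0                          # avoid dividing by zero at DC
    noise = np.real(np.fft.ifft2(F / f ** alpha))
    return noise / noise.std()

rng = np.random.default_rng(1)
hologram = rng.random((64, 64))            # stand-in for a hologram image
augmented = hologram + 0.1 * colored_noise((64, 64), alpha=1.0, rng=rng)
```

During training, such noise would be added to clean images on the fly, so the model sees a fresh correlated-noise realisation in every batch.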
5

V, Malathi, and Gopinath MP. "Noise Deduction in Novel Paddy Data Repository using Filtering Techniques." Scalable Computing: Practice and Experience 21, no. 4 (December 20, 2020): 601–10. http://dx.doi.org/10.12694/scpe.v21i4.1718.

Abstract:
Classification of paddy crop diseases with prior knowledge is a current challenge in supporting the economic growth of the country. In image processing, the initial step is to eliminate the noise present in the dataset; removing the noise improves the quality of the image. Noise can be removed by applying filtering techniques. In this paper, a novel data repository was created from different paddy areas in Vellore, covering the following diseases: Bacterial Leaf Blight, Blast, Leaf Spot, Leaf Folder, Hispa, and healthy leaves. In the initial process, three kinds of noise, namely salt-and-pepper noise, speckle noise, and Poisson noise, were removed using noise filtering techniques, namely the median and Wiener filters. The performance of the median and Wiener filtering techniques with respect to these noises was measured using the metrics PSNR (peak signal-to-noise ratio), MSE (mean square error), Maxerr (maximum squared error), and L2rat (ratio of squared errors). It is observed that the PSNR value of the hybrid approach is 18.42 dB, which yields a lower error rate compared with the traditional approach. The results suggest that the methods used in this paper are suitable for processing noise.
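As a small illustration of the filtering-and-metrics pipeline described above, the sketch below applies a 3x3 median filter, the standard remedy for salt-and-pepper noise, and scores the result with PSNR. It is a generic reimplementation, not the paper's code, and the flat test image and 5% noise density are made-up examples.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge padding."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(windows, axis=0)

def psnr(clean, test, peak=1.0):
    """Peak signal-to-noise ratio in dB (guarded against zero MSE)."""
    mse = float(np.mean((clean - test) ** 2))
    return 10 * np.log10(peak ** 2 / max(mse, 1e-12))

rng = np.random.default_rng(0)
img = np.full((32, 32), 0.5)                   # stand-in for a leaf image
noisy = img.copy()
impulses = rng.random(img.shape) < 0.05        # 5% salt-and-pepper density
noisy[impulses] = rng.choice([0.0, 1.0], size=int(impulses.sum()))
restored = median_filter3(noisy)
```

On impulse noise the median filter raises the PSNR sharply because isolated 0/1 outliers never survive a 3x3 median; a Wiener filter would instead be the natural choice for the speckle case.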
6

Hedderich, Michael A., Dawei Zhu, and Dietrich Klakow. "Analysing the Noise Model Error for Realistic Noisy Label Data." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7675–84. http://dx.doi.org/10.1609/aaai.v35i9.16938.

Abstract:
Distant and weak supervision make it possible to obtain large amounts of labeled training data quickly and cheaply, but these automatic annotations tend to contain a high number of errors. A popular technique to overcome the negative effects of these noisy labels is noise modelling, where the underlying noise process is modelled. In this work, we study the quality of these estimated noise models from the theoretical side by deriving the expected error of the noise model. Apart from evaluating the theoretical results on commonly used synthetic noise, we also publish NoisyNER, a new noisy label dataset from the NLP domain that was obtained through a realistic distant supervision technique. It provides seven sets of labels with differing noise patterns to evaluate different noise levels on the same instances. Parallel clean labels are available, making it possible to study scenarios where a small amount of gold-standard data can be leveraged. Our theoretical results and the corresponding experiments give insights into the factors that influence the noise model estimation, such as the noise distribution and the sampling technique.
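When parallel clean and noisy labels are available, as in NoisyNER, the noise model reduces to a label-transition matrix that can be estimated by simple counting. A minimal sketch, using a toy label set rather than the NoisyNER data:

```python
import numpy as np

def noise_transition_matrix(clean, noisy, n_classes):
    """Estimate T[i, j] = P(noisy label = j | clean label = i) by
    counting co-occurrences over parallel clean/noisy annotations."""
    T = np.zeros((n_classes, n_classes))
    for c, n in zip(clean, noisy):
        T[c, n] += 1.0
    return T / T.sum(axis=1, keepdims=True)   # normalise each row

# toy parallel annotations: 2 classes, 8 instances
clean = np.array([0, 0, 0, 0, 1, 1, 1, 1])
noisy = np.array([0, 0, 0, 1, 1, 1, 0, 1])
T = noise_transition_matrix(clean, noisy, 2)
```

Each row of `T` sums to one; the off-diagonal mass is the per-class noise rate, which is exactly the quantity whose estimation error the paper analyses.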
7

Ataeyan, Mahdieh, and Negin Daneshpour. "Automated Noise Detection in a Database Based on a Combined Method." Statistics, Optimization & Information Computing 9, no. 3 (June 9, 2021): 665–80. http://dx.doi.org/10.19139/soic-2310-5070-879.

Abstract:
Data quality has diverse dimensions, of which accuracy is the most important. Data cleaning is one of the preprocessing steps in data mining and consists of detecting errors and repairing them. Noise is a common type of error that occurs in databases. This paper proposes an automated method based on k-means clustering for noise detection. First, each attribute (Aj) is temporarily removed from the data and k-means clustering is applied to the remaining attributes. The k-nearest neighbours algorithm is then applied within each cluster, and a value is predicted for Aj in each record from its nearest neighbours. The proposed method detects noisy attributes by comparing the predicted values with the actual ones. The method can identify several noisy values within a single record, and it can detect noise in fields of different data types. Experiments show that the method detects, on average, 92% of the noise present in the data. The proposed method is compared with a noise detection method based on association rules; the results indicate that it improves noise detection by 13% on average.
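The detection loop described above can be sketched as follows. The bare-bones k-means, the neighbour count, and the `tol` deviation threshold are simplifying assumptions made here for illustration, not the paper's exact procedure.

```python
import numpy as np

def kmeans_labels(X, k, rng, iters=20):
    """Plain k-means, returning only the cluster labels."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def flag_noisy_cells(X, k=2, n_neighbors=3, tol=3.0, rng=None):
    """For each attribute j: cluster the records on the other attributes,
    predict X[:, j] from each record's nearest neighbours inside its own
    cluster, and flag cells deviating from the prediction by more than
    tol column standard deviations."""
    rng = rng or np.random.default_rng(0)
    flags = np.zeros(X.shape, dtype=bool)
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        labels = kmeans_labels(others, k, rng)
        for i in range(len(X)):
            same = np.flatnonzero(labels == labels[i])
            d = np.linalg.norm(others[same] - others[i], axis=1)
            nbrs = same[np.argsort(d)[1:n_neighbors + 1]]
            if nbrs.size == 0:
                continue                       # record is alone in its cluster
            pred = X[nbrs, j].mean()
            if abs(X[i, j] - pred) > tol * X[:, j].std():
                flags[i, j] = True
    return flags

# toy data: two perfectly correlated attributes, one corrupted cell
a = np.linspace(0.0, 1.0, 30)
X = np.column_stack([a, a.copy()])
X[0, 1] = 100.0
flags = flag_noisy_cells(X)
```

Because each attribute is held out in turn, the same record can be flagged in several fields, matching the paper's claim of detecting multiple noisy values per record.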
8

Shin, Jaegwang, and Suan Lee. "Robust and Lightweight Deep Learning Model for Industrial Fault Diagnosis in Low-Quality and Noisy Data." Electronics 12, no. 2 (January 13, 2023): 409. http://dx.doi.org/10.3390/electronics12020409.

Abstract:
Machines in factories are typically operated 24 hours a day to support production, which may result in malfunctions. Such mechanical malfunctions may disrupt factory output, resulting in financial losses or human casualties. Therefore, we investigate a deep learning model that can detect abnormalities in machines based on their operating noise. Various data preprocessing methods, including the discrete wavelet transform, the Hilbert transform, and the short-time Fourier transform, were applied to extract characteristics from machine-operating noises. To create a model usable in real factories, the factory environment was simulated by introducing noise and quality degradation into the sound dataset for Malfunctioning Industrial Machine Investigation and Inspection (MIMII). We propose a lightweight Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) model built on short-time Fourier transforms (STFTs) that runs reliably even on noisy, low-quality sound data. The model is highly practical: it requires only about 6.6% of the parameters of the underlying CNN while staying within 0.5% of its performance.
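Of the preprocessing methods listed, the STFT is the one the final model consumes. A minimal NumPy version (frame, Hann-window, real FFT) of the kind of time-frequency input such a CNN-LSTM takes is sketched below; the frame length, hop size, and the artificial 440 Hz "machine tone" standing in for MIMII audio are all illustrative assumptions.

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Magnitude STFT: Hann-windowed frames through the real FFT.
    Returns an array of shape (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

rng = np.random.default_rng(4)
t = np.arange(16000) / 16000.0                       # 1 s at 16 kHz
machine_sound = np.sin(2 * np.pi * 440.0 * t) + 0.1 * rng.normal(size=t.size)
spec = stft(machine_sound)
```

The resulting spectrogram is the 2-D "image" the CNN front end convolves over, with the LSTM then modelling how those frames evolve in time.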
9

Liu, Xiaoqiong, Guang Li, Jin Li, Xiaohui Zhou, Xianjie Gu, Cong Zhou, and Meng Gong. "Self-organizing Competitive Neural Network Based Adaptive Sparse Representation for Magnetotelluric Data Denoising." Journal of Physics: Conference Series 2651, no. 1 (December 1, 2023): 012129. http://dx.doi.org/10.1088/1742-6596/2651/1/012129.

Abstract:
Existing sparse decomposition denoising methods for magnetotelluric (MT) data require the iterative stopping condition to be set manually, which is laborious, difficult, and prone to subjective bias. To this end, we propose a new adaptive sparse representation method for MT data denoising. First, the data to be processed are divided into high-quality segments and noisy segments by a machine learning algorithm. Then, the characteristic parameters of the high-quality segments are calculated, and the boundary value of these parameters is taken as the threshold. The threshold serves two functions: as a criterion for signal-to-noise identification, and as the iterative stopping condition for the subsequent sparse decomposition. Finally, an optimized orthogonal matching pursuit algorithm is used to separate signal and noise in the noisy segments, and the denoised segments are combined with the high-quality segments to obtain the complete denoised MT data. Field data processing results show that this is a fully automatic and intelligent MT data denoising method that greatly improves the signal-to-noise ratio and the apparent resistivity and phase curves.
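The core of the scheme is orthogonal matching pursuit stopped by a data-driven residual threshold. A bare-bones OMP with such a stopping rule is sketched below; the identity dictionary, the threshold value, and the least-squares refit are generic choices for illustration, not the authors' optimized variant.

```python
import numpy as np

def omp(D, x, resid_tol):
    """Orthogonal matching pursuit: greedily add the dictionary atom most
    correlated with the residual, re-fit the coefficients by least
    squares, and stop once the residual norm falls below resid_tol
    (the role played by the paper's adaptive threshold)."""
    resid, support, coef = x.astype(float), [], np.zeros(0)
    while np.linalg.norm(resid) > resid_tol and len(support) < D.shape[1]:
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        resid = x - D[:, support] @ coef
    out = np.zeros(D.shape[1])
    out[support] = coef
    return out

# toy dictionary: identity atoms, so the sparse code equals the signal
D = np.eye(5)
x = np.array([3.0, 0.0, 0.0, 4.0, 0.0])
code = omp(D, x, resid_tol=1e-8)
```

With a learned or analytic dictionary, the retained `D @ code` is the cleaned signal and the final residual is discarded as noise.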
10

Kaspirzhny, Anton V., Paul Gogan, Ginette Horcholle-Bossavit, and Suzanne Tyč-Dumont. "Neuronal morphology data bases: morphological noise and assessment of data quality." Network: Computation in Neural Systems 13, no. 3 (January 2002): 357–80. http://dx.doi.org/10.1088/0954-898x_13_3_307.

11

N. Hussin, Kholood, Ali K. Nahar, and Hussain Kareem Khleaf. "A hybrid bat-genetic algorithm for improving the visual quality of medical images." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 1 (October 1, 2022): 220. http://dx.doi.org/10.11591/ijeecs.v28.i1.pp220-226.

Abstract:
Efficient suppression of noise in a medical image is a very significant issue. This paper proposes a method to denoise medical images using a hybrid adaptive algorithm based on the bat algorithm (BA) and the genetic algorithm (GA). Medical images are often affected by different kinds of noise that decrease the precision of any automatic analysis system. Noise reduction methods are therefore used to increase the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) of images and so preserve their original content. The medical data used were corrupted separately with Gaussian noise and salt-and-pepper noise, with noise variance ranging from 0.1 to 0.5, to compare the performance of the de-noising techniques. The hybrid BA-GA model was applied to the noisy medical images to eliminate noise, and performance was assessed with statistical measures such as PSNR, reaching 63.04 dB for CT and 59.75 dB for MRI images.
12

Dunne, Jarrod, and Greg Beresford. "Improving seismic data quality in the Gippsland Basin (Australia)." GEOPHYSICS 63, no. 5 (September 1998): 1496–506. http://dx.doi.org/10.1190/1.1444446.

Abstract:
Deep seismic exploration in the Gippsland Basin is hindered by strong noise below the Latrobe Group coal sequence. The reflectivity method provides a means for constructing detailed and accurate synthetic seismograms, often from little more than a partial sonic log. The noise contributions to the synthetics can then be interpreted using additional synthetics computed from variations upon the depth model and by exercising control over the wave types modeled. This approach revealed three types of persistent noise in progressively deeper parts of the subcoal image: (1) mode-converted interbed multiples (generated within the coal sequence), (2) S-wave reflections and long-period multiples (generated between the coal sequence and the Miocene carbonates), and (3) surface-related multiples. The noise interpretation can also be performed upon semblance analyses of the elastic synthetics to guide a velocity analysis away from a well. This procedure helped to avoid picking the interformation long-period multiples, whose stacking velocities were only 5 to 10% below those of the weak target zone primaries. An improved subcoal image was obtained by making full use of the versatile noise suppression offered by a τ-p domain processing stream. By separating the strong linear events at the far offsets, it is possible to stack a larger portion of the target zone reflections, provided hyperbolic velocity filtering (HVF) is applied to suppress the transform artifacts. Hyperbolic velocity filtering can be incorporated into a point-source τ-p transform to suppress S-wave reflections and guided waves while preserving plane-wave amplitudes to assist the subsequent deconvolution of the mode-converted interbed multiples. Stacking in the τ-p domain is achieved using an elliptical moveout correction that reduces wavelet stretch and approximates the exact reflection traveltime better than NMO. Two regional seismic lines were reprocessed in this manner and co-interpreted with the modeling studies performed at nearby wells to avoid the noise events that still remained. Several new events appeared in the immediate target zone, possessing the low-frequency character expected following transmission through a coal sequence.
13

Oh, Soo Hee, and Kyoungwon Lee. "Aircraft Noise of Airport Community in Korea." Audiology and Speech Research 16, no. 1 (January 31, 2020): 1–10. http://dx.doi.org/10.21848/asr.200001.

Abstract:
Aircraft noise is one of the most serious environmental noises, growing with increased flight traffic. The purpose of this study is to characterise aircraft noise levels in airport communities in Korea as baseline data for audiologic management. Aircraft noise levels were retrieved from the National Noise Information System for every month between 2004 and 2018, covering a total of 111 airport communities across 14 airports. To compare civil with joint civil-and-military airports, the noise levels measured in joint civil-and-military airport communities were compared with those from civil airport communities. The data showed an average of 71-73 weighted equivalent continuous perceived noise level (WECPNL) across airport cities over the fifteen years, and average noise levels did not increase between 2004 and 2018. The joint civil-and-military airports showed noise levels about 12 WECPNL higher than the civil airports. Most joint civil-and-military airport communities, including Gwangju, Gunsan, Daegu, Wonju, and Cheongju, generated the maximum noise levels and ranked highest for aircraft noise. Although aircraft noise levels in airport communities were similar over the past decade, joint civil-and-military airports generated higher noise levels than civil airports owing to jet plane and other military-related noises. Careful consideration is necessary when implementing noise reduction policy for these communities. Ongoing noise control, hearing monitoring, education, and relevant policies are required to improve the quality of life of airport community residents.
14

Pan, Yufei, Zehui Yuan, Jiaoyu Zheng, and Xiaoyang Ma. "Recovery of Power Quality Terminal’s Harmonic Data with the Interference of Bad Data." Electronics 11, no. 11 (May 26, 2022): 1694. http://dx.doi.org/10.3390/electronics11111694.

Abstract:
Power quality monitoring equipment inevitably faces the problem of data loss and is vulnerable to interference from noise or bad data. We propose a harmonic data recovery method based on graph clustering and non-negative matrix factorization (NMF) under multiple constraints. Compared with existing harmonic data recovery methods, the proposed method can effectively recover lost data and has strong anti-interference ability, especially for the recovery of harmonic data subject to interference. In tests of data loss, noisy interference, and bad-data interference, the presented recovery algorithm maintains high accuracy with up to 60% of the data continuously missing; in an environment with SNR = 50, it recovers data reliably and accurately with up to 15% bad-data interference.
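The recovery step can be illustrated with a masked non-negative matrix factorization: multiplicative updates computed only on the observed cells, after which the product W @ H fills the holes. This sketch omits the paper's graph clustering and additional constraints, and the matrix sizes, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

def masked_nmf(X, mask, rank, iters=300, seed=0):
    """Multiplicative-update NMF restricted to observed cells
    (mask == 1); W @ H then provides values for the missing cells."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-9                         # guard against division by zero
    Xobs = X * mask
    for _ in range(iters):
        W *= (Xobs @ H.T) / (((W @ H) * mask) @ H.T + eps)
        H *= (W.T @ Xobs) / (W.T @ ((W @ H) * mask) + eps)
    return W @ H

rng = np.random.default_rng(1)
true = np.outer(rng.random(20) + 0.5, rng.random(15) + 0.5)  # rank-1, positive
mask = (rng.random(true.shape) < 0.8).astype(float)          # ~20% of cells lost
recovered = masked_nmf(true, mask, rank=1)
```

Because harmonic measurements across related monitoring points are approximately low-rank, the observed cells constrain the factors enough to reconstruct the lost ones.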
15

Павло Б. Олійник. "DATA FILTERING METHODS FOR HYDROGRAPHIC SURVEY DATA." MECHANICS OF GYROSCOPIC SYSTEMS, no. 27 (October 6, 2014): 10–18. http://dx.doi.org/10.20535/0203-377127201437908.

Abstract:
Current trends in navigation are characterised by ever-increasing demands on the precision of hydrographic information, especially nautical maps. The precision of both the spatial position and the depth of bathymetric data is important for ensuring safe navigation, so the problem of data filtering and outlier elimination arises. In the present work, methods used for post-processing echosounder depth data are compared. First, commonly used data filtering and outlier elimination methods are reviewed and their advantages and disadvantages analysed. Since the improved outlier elimination algorithm and median filtering have their flaws, Kalman filtering is considered as a means of eliminating outliers and estimating the real data. It is shown that the Kalman filter can both filter noise effectively and eliminate outliers; however, the quality of the filtered data strongly depends on the estimates of the measurement-noise covariance and the process-noise covariance: tuned one way, noise is filtered better and the depth profile is smoother; tuned the other way, outliers are eliminated better. Care must be taken, however, as the depth profile is distorted at high values and noise is barely filtered at low ones. It is shown that the noise covariance estimate has the greater influence on data filtering, so attention should be paid to estimating it correctly; for practical purposes, a value of 10 is recommended. In recent works, wavelet filtering is considered a promising method for post-processing, so Kalman filtering and wavelet filtering are next compared using real-world data: white noise is added to filtered and smoothed data, and those data are then filtered by the methods mentioned above. Correlation between the source and denoised data is chosen as the criterion of filter effectiveness. It is shown that the Kalman filter is somewhat less effective in post-processing than the wavelet filter. However, since the Kalman filter both filters noise from the measured data and eliminates outliers, and can be used for on-the-fly filtering, it is advisable to use Kalman filtering for real-time measurements during surveys and wavelets for post-processing. Future studies may be devoted to improving existing data filtering and post-processing methods and introducing new ones.
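A scalar random-walk Kalman filter of the kind discussed above can be sketched in a few lines. The process-noise and measurement-noise covariances below (`q`, `r`), and the flat-seabed test profile, are illustrative values chosen for the sketch, not the paper's recommendations.

```python
import numpy as np

def kalman_1d(z, q, r):
    """Scalar random-walk Kalman filter for a measured depth profile.
    q: process-noise covariance (how fast the true depth may change);
    r: measurement-noise covariance (echosounder noise). A large r/q
    ratio smooths harder and damps outliers more strongly."""
    x, p = float(z[0]), 1.0
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        p += q                       # predict: uncertainty grows
        g = p / (p + r)              # Kalman gain
        x += g * (zk - x)            # update toward the measurement
        p *= (1.0 - g)
        out[k] = x
    return out

rng = np.random.default_rng(2)
depth = np.full(200, 30.0)                     # flat 30 m seabed
z = depth + rng.normal(0.0, 0.5, 200)          # echosounder noise
z[50] += 10.0                                  # one outlier ping
smooth = kalman_1d(z, q=0.01, r=0.25)
```

The single-pass, recursive form is what makes the filter suitable for the "on-the-fly" use during surveys that the abstract recommends.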
16

KHOSHGOFTAAR, TAGHI M., VEDANG JOSHI, and NAEEM SELIYA. "DETECTING NOISY INSTANCES WITH THE ENSEMBLE FILTER: A STUDY IN SOFTWARE QUALITY ESTIMATION." International Journal of Software Engineering and Knowledge Engineering 16, no. 01 (February 2006): 53–76. http://dx.doi.org/10.1142/s0218194006002677.

Abstract:
The performance of a classification model is invariably affected by the characteristics of the measurement data it is built upon. If the quality of the data is generally poor, then the classification model will demonstrate poor performance. The detection and removal of noisy instances will improve the quality of the data and, consequently, the performance of the classification model. We investigate a noise handling technique that attempts to improve the quality of datasets for classification purposes by eliminating instances that are likely to be noise. Our approach uses twenty-five different classification techniques to create an ensemble filter for eliminating likely noise. The basic assumption is that if a given majority of classifiers in the ensemble misclassify an instance, then it is likely to be a noisy instance. Using a relatively large number of base-level classifiers in the ensemble filter helps achieve the desired level of noise removal conservativeness, with several possible levels of filtering. It also provides a higher degree of confidence in the noise elimination procedure, as the results are less likely to be influenced by the (possibly) inappropriate learning bias of a few algorithms with twenty-five base-level classifiers than with a smaller number. Empirical case studies of two high-assurance software projects demonstrate the effectiveness of our noise elimination approach through the significant improvement achieved in classification accuracy at various levels of noise filtering.
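Given the predictions of the base-level classifiers, the ensemble filter itself is a simple vote count. A sketch, using a toy 5-classifier ensemble rather than the paper's twenty-five:

```python
import numpy as np

def ensemble_filter(predictions, labels, min_votes):
    """Flag an instance as likely noise when at least min_votes of the
    base classifiers disagree with its recorded label. Raising min_votes
    (e.g. from majority to full consensus) makes the filter more
    conservative, which is the tunable knob the paper describes."""
    predictions = np.asarray(predictions)      # (n_classifiers, n_instances)
    miscount = (predictions != np.asarray(labels)).sum(axis=0)
    return miscount >= min_votes               # True = likely noisy instance

# toy example: 5 classifiers voting on 4 instances
preds = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [0, 1, 1, 1],
         [0, 1, 1, 0],
         [1, 1, 1, 0]]
labels = [0, 0, 1, 0]
noisy_mask = ensemble_filter(preds, labels, min_votes=3)
```

Only the second instance, whose label every classifier contradicts, is flagged; the flagged rows would then be dropped before fitting the final software-quality model.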
17

Portela, Filipe, Manuel Filipe Santos, António Abelha, José Machado, and Fernando Rua. "Data Quality and Critical Events in Ventilation." International Journal of Reliable and Quality E-Healthcare 6, no. 2 (April 2017): 40–48. http://dx.doi.org/10.4018/ijrqeh.2017040104.

Abstract:
Data quality assessment is a critical task in Intensive Care Units (ICUs). In the ICU, patients are continuously monitored and the values are collected in real time through data streaming processes. In the case of ventilation, the ventilator monitors the patient's respiratory system and a gateway receives the monitored values. This process can collect values of any kind: noise values, or values with clinical significance, for example when a patient is having a critical event associated with the respiratory system. In this paper, the critical events concept was applied to the ventilation system, and a quality assessment of the collected data was performed as each new value arrived. Some interesting results were achieved: 56.59% of the events were critical, and 5% of the data collected were noise values. In this setting, Average Ventilation Pressure and Peak Flow are the variables with the most influence.
18

Wu, Sihong, Qinghua Huang, and Li Zhao. "De-noising of transient electromagnetic data based on the long short-term memory-autoencoder." Geophysical Journal International 224, no. 1 (September 8, 2020): 669–81. http://dx.doi.org/10.1093/gji/ggaa424.

Abstract:
Late-time transient electromagnetic (TEM) data contain deep subsurface information and are important for resolving deeper electrical structures. However, due to their relatively small signal amplitudes, TEM responses later in time are often dominated by ambient noises. Therefore, noise removal is critical to the application of TEM data in imaging electrical structures at depth. De-noising techniques for TEM data have been developed rapidly in recent years. Although strong efforts have been made to improve the quality of the TEM responses, it is still a challenge to effectively extract the signals due to unpredictable and irregular noises. In this study, we develop a new type of neural network architecture by combining the long short-term memory (LSTM) network with the autoencoder structure to suppress noise in TEM signals. The resulting LSTM-autoencoders yield excellent performance on synthetic data sets including horizontal components of the electric field and the vertical component of the magnetic field generated by different sources such as dipole, loop and grounded line sources. The relative errors between the de-noised data sets and the corresponding noise-free transients are below 1% for most of the sampling points. Notable improvement in the resistivity structure inversion result is achieved using the TEM data de-noised by the LSTM-autoencoder in comparison with several widely used neural networks, especially for later-arriving signals that are important for constraining deeper structures. We demonstrate the effectiveness and general applicability of the LSTM-autoencoder by de-noising experiments using synthetic 1-D and 3-D TEM signals as well as field data sets. The field data from a fixed loop survey using multiple receivers are greatly improved after de-noising by the LSTM-autoencoder, resulting in more consistent inversion models with significantly increased exploration depth. The LSTM-autoencoder is capable of enhancing the quality of the TEM signals at later times, which enables us to better resolve deeper electrical structures.
19

Wang, Runjie, Wenzhong Shi, Xianglei Liu, and Zhiyuan Li. "An Adaptive Cutoff Frequency Selection Approach for Fast Fourier Transform Method and Its Application into Short-Term Traffic Flow Forecasting." ISPRS International Journal of Geo-Information 9, no. 12 (December 7, 2020): 731. http://dx.doi.org/10.3390/ijgi9120731.

Abstract:
Historical measurements are usually used to build assimilation models in sequential data assimilation (S-DA) systems. However, they are often disturbed by local noise, which in turn affects both the construction of the assimilation model and the accuracy of the assimilation forecasts. The fast Fourier transform (FFT) method can be used to acquire de-noised historical traffic flow measurements, reducing the influence of local noise on the constructed assimilation models and improving the accuracy of assimilation results. In practical signal de-noising applications, the FFT method is commonly used to de-noise a noisy signal with known noise frequency. However, the noise frequency is rarely known. Thus, a proper cutoff frequency should be chosen to separate the high-frequency information caused by noise from the low-frequency part of the useful signal. If the cutoff frequency is too high, too much noisy information is treated as useful information; conversely, if it is too low, part of the useful information is lost. To solve this problem, this paper proposes an adaptive cutoff frequency selection (A-CFS) method based on cross-validation. The proposed method can determine a proper cutoff frequency and ensure the quality of de-noised outputs for a given dataset using the FFT method without noise frequency information. Experimental results on real-world traffic flow measurements from a sub-area of a highway near Birmingham, England, demonstrate the superior performance of the proposed A-CFS method in separating noisy information using the FFT method. The differences between true and predicted traffic flow values are evaluated using the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
Compared to the results of two commonly used de-noising methods, the discrete wavelet transform (DWT) and ensemble empirical mode decomposition (EEMD), the short-term traffic flow forecasting results of the proposed A-CFS method are much more reliable. In terms of MAE, the average relative improvements of the assimilation model built using the proposed method are 19.26%, 3.47%, and 4.25% compared to models built using raw data, the DWT method, and the EEMD method, respectively; the corresponding average relative improvements in RMSE are 19.05%, 5.36%, and 3.02%; and those in MAPE are 18.88%, 2.83%, and 2.28%. The test results show that the proposed method is effective in separating noise from historical measurements and can improve the accuracy of assimilation model construction and assimilation forecasting.
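The cutoff-selection idea described in this abstract can be sketched in a few lines: low-pass the signal by zeroing FFT coefficients above a candidate cutoff, and pick the cutoff that best predicts held-out samples. This is a toy illustration only, not the authors' A-CFS implementation; the synthetic signal, candidate cutoffs, and fold scheme are all invented here.

```python
import numpy as np

def fft_lowpass(signal, keep):
    """Zero every FFT coefficient at or above index `keep` (crude low-pass)."""
    spec = np.fft.rfft(signal)
    spec[keep:] = 0.0
    return np.fft.irfft(spec, n=len(signal))

def select_cutoff(signal, candidates, n_folds=5):
    """Cross-validated cutoff choice: hold out every n_folds-th sample,
    fill it by interpolation, filter, and score the reconstruction there."""
    n = len(signal)
    best_keep, best_err = candidates[0], np.inf
    for keep in candidates:
        errs = []
        for fold in range(n_folds):
            held = np.arange(fold, n, n_folds)
            known = np.delete(np.arange(n), held)
            train = signal.copy()
            train[held] = np.interp(held, known, signal[known])
            recon = fft_lowpass(train, keep)
            errs.append(np.mean((recon[held] - signal[held]) ** 2))
        if np.mean(errs) < best_err:
            best_keep, best_err = keep, np.mean(errs)
    return best_keep

# synthetic "traffic flow": one slow daily cycle plus noise
rng = np.random.default_rng(0)
t = np.arange(288)                         # 5-minute samples over one day
clean = 100.0 + 30.0 * np.sin(2 * np.pi * t / 288)
noisy = clean + rng.normal(0.0, 5.0, size=t.size)
keep = select_cutoff(noisy, candidates=[2, 4, 8, 32, 96])
denoised = fft_lowpass(noisy, keep)
```

Because the clean cycle lives entirely in the lowest frequency bins, any selected cutoff removes only noise energy here; the cross-validation step is what makes the choice automatic when the noise frequency is unknown.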
20

Shi, Haoxiang, Jun Ai, Jingyu Liu, and Jiaxi Xu. "Improving Software Defect Prediction in Noisy Imbalanced Datasets." Applied Sciences 13, no. 18 (September 19, 2023): 10466. http://dx.doi.org/10.3390/app131810466.

Abstract:
Software defect prediction is a popular method for optimizing software testing and improving software quality and reliability. However, software defect datasets usually have quality problems, such as class imbalance and data noise. Oversampling by generating minority class samples is one of the best-known methods for improving the quality of datasets; however, it often introduces overfitting noise. To better improve the quality of these datasets, this paper proposes a method called US-PONR, which uses undersampling to remove duplicate samples from version iterations and then uses oversampling through propensity score matching to reduce class imbalance and noise samples in datasets. The effectiveness of this method was validated in a software defect prediction experiment involving 24 versions of software data in 11 projects from PROMISE, in noisy environments with noise levels varying from 0% to 30%. The experiments showed a significant improvement in the quality of noisy imbalanced datasets pre-processed by US-PONR, especially the noisiest ones, compared with 12 other advanced dataset processing methods. The experiments also demonstrated that the US-PONR method can effectively identify label noise samples and remove them.
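The two pre-processing steps (undersampling of duplicated rows, then oversampling of the minority class) can be illustrated minimally as below. US-PONR uses propensity score matching for the oversampling step; plain random duplication stands in for it here, and the tiny dataset is invented.

```python
import numpy as np

def deduplicate(X, y):
    """Undersampling step: drop rows that are exact duplicates."""
    _, idx = np.unique(np.hstack([X, y[:, None]]), axis=0, return_index=True)
    keep = np.sort(idx)
    return X[keep], y[keep]

def random_oversample(X, y, rng):
    """Oversampling step: duplicate minority-class rows until the classes
    balance (US-PONR instead matches samples by propensity score)."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    need = counts.max() - counts.min()
    pool = np.flatnonzero(y == minority)
    extra = rng.choice(pool, size=need, replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

rng = np.random.default_rng(1)
X = np.array([[1.0, 2.0], [1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
y = np.array([0, 0, 0, 0, 1])      # one duplicate row and a 4:1 imbalance
X, y = deduplicate(X, y)
X, y = random_oversample(X, y, rng)
```

After deduplication four rows remain (3:1), and oversampling replicates the single defective row twice to reach a 3:3 balance.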
21

Yang, Han Sheng. "Denoising Power Quality Signal Using Savitzky-Golay Based on Virtual Instrument." Advanced Materials Research 655-657 (January 2013): 974–77. http://dx.doi.org/10.4028/www.scientific.net/amr.655-657.974.

Abstract:
In power quality monitoring systems, the collected data unavoidably contain various kinds of noise. The presence of noise may increase the false classification rate, so denoising is an extremely important step in the detection and classification of power quality disturbances. To improve the denoising of power quality signals, a denoising method based on the Savitzky-Golay filter is proposed. Numerical results show that the proposed method can eliminate the influence of noise components and implement transient power quality disturbance detection and localization, thus providing a good foundation for transient power quality disturbance monitoring in noisy environments.
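SciPy ships a ready-made Savitzky-Golay filter, so the core smoothing step described above can be reproduced in a few lines. The test waveform, window length, and polynomial order below are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.2, 2000)                  # 10 cycles of a 50 Hz wave
clean = np.sin(2 * np.pi * 50 * t)
clean[800:1200] *= 0.6                           # a voltage-sag disturbance
noisy = clean + rng.normal(0.0, 0.05, t.size)    # measurement noise

# local least-squares polynomial fit over a sliding window
denoised = savgol_filter(noisy, window_length=31, polyorder=3)
```

Because the Savitzky-Golay filter fits a low-order polynomial locally, it smooths the noise while preserving the sharp sag edges that a plain moving average would blur, which is what makes it attractive for disturbance localization.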
22

Brochier, Tim J., Amanda Fullerton, Adam Hersbach, Harish Krishnamoorthi, and Zachary Smith. "Deep neural network-based speech enhancement for cochlear implants." Journal of the Acoustical Society of America 154, no. 4_supplement (October 1, 2023): A28. http://dx.doi.org/10.1121/10.0022678.

Abstract:
Noisy conditions make understanding speech with a cochlear implant (CI) difficult. Speech enhancement (SE) algorithms based on signal statistics can be beneficial in stationary noise, but rarely provide benefit in modulated multi-talker babble. Current approaches using deep neural networks (DNNs) rely on a data-driven approach for training and promise improvements in a wide variety of noisy conditions. In this study, a DNN-based SE algorithm was evaluated with CI listeners. The network was trained on a large database of publicly available recordings. A double-blinded acute evaluation was conducted with 10 adult CI users, assessing the intelligibility and quality of speech embedded in a range of different noise types. The DNN-based SE algorithm provided significant benefits in speech intelligibility and sound quality in all noise types evaluated. Speech reception thresholds, the SNR required to understand 50% of the speech material, improved by 1.8 to 3.5 dB depending on noise type. Benefits varied with the SNR of the input signal and with the mixing-ratio parameter used to combine the original and de-noised signals. The results demonstrate that DNN-based SE can provide benefits in natural, modulated noise conditions, which is critical to CI users in their day-to-day environment.
23

Liao, Chiung Chou, and Ming Xuan Gu. "Implementation of Power Quality Event Detector on a FPGA-Based System." Advanced Materials Research 433-440 (January 2012): 3918–22. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.3918.

Abstract:
The discrete wavelet transform (DWT) technique has been proposed for detecting and localizing transient disturbances in power systems. A disturbance is detected by comparing the transformed signal with an empirically given threshold. However, when the signal under analysis contains noise, especially white noise with its flat spectrum, the threshold is difficult to set. Because of the flat spectrum, a filter cannot remove the noise without also removing significant disturbance signals. To enhance the WT technique in processing such noise-ridden signals, this paper proposes a noise-suppression algorithm, restoring the ability of the WT to detect and localize disturbances. Finally, the paper uses actual data obtained from the power systems of Taiwan Power Company (TPC) for validation, with a digital implementation on an FPGA-based device providing a real-time de-noising function for the monitored PQ DWT data.
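The detection principle behind this family of methods, wavelet detail coefficients spiking at a transient, can be shown with a hand-rolled level-1 Haar transform. This is a sketch of the general DWT detection idea only, not the paper's FPGA algorithm or its noise-suppression step; the waveform and step disturbance are invented.

```python
import numpy as np

def haar_detail(signal):
    """Level-1 Haar detail coefficients: scaled differences of sample pairs,
    which spike wherever the waveform changes abruptly."""
    x = np.asarray(signal, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

t = np.arange(512)
wave = np.sin(2 * np.pi * t / 64)          # a smooth sinusoidal waveform
wave[301:] += 0.5                          # step disturbance at sample 301
d = np.abs(haar_detail(wave))
pair = int(np.argmax(d))                   # coefficient with the largest spike
onset = 2 * pair                           # first sample of the flagged pair
```

On the smooth portions the pairwise differences stay small, so a simple threshold on `d` localizes the disturbance; the hard part, which the paper addresses, is choosing that threshold when white noise raises the whole detail band.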
24

Purwanti Ningrum, Ika, Agfianto Eko Putra, and Dian Nursantika. "Penapisan Derau Gaussian, Speckle dan Salt&Pepper Pada Citra Warna." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 5, no. 3 (November 19, 2011): 29. http://dx.doi.org/10.22146/ijccs.5209.

Abstract:
The quality of a digital image can be degraded by noise. Noise can come from a low-quality image recorder, disturbances during data transmission, or weather. Noise filtering removes such noise and can thus improve the quality of a digital image. This research aims to improve colour image quality by filtering noise. Noise (Gaussian, Speckle, Salt & Pepper) is applied to the original image and then filtered using the Bilateral Filter, Median Filter, and Average Filter methods to improve colour image quality. Performance is measured with the PSNR (Peak Signal to Noise Ratio) criterion, comparing the original image with the filtered image (after adding and then filtering the noise). For Gaussian noise (variance = 0.5), the highest PSNR value, 27.69, was obtained with the Bilateral Filter; for Speckle noise (variance = 0.5), the highest PSNR, 34.12, with the Average Filter; and for Salt & Pepper noise (variance = 0.5), the highest PSNR, 31.27, with the Median Filter. Keywords: Bilateral Filter, image restoration, Gaussian, Speckle and Salt & Pepper noise
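The PSNR criterion used for the comparison above is straightforward to compute; a minimal sketch with an invented 4x4 test image:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between reference and test images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

clean = np.arange(16, dtype=float).reshape(4, 4) * 17.0   # values 0..255
noisy = clean.copy()
noisy[0, 0] = 255.0           # one impulse pixel (clean value was 0)
value = psnr(clean, noisy)    # mse = 255**2/16, so 10*log10(16) ~= 12.04 dB
```

Higher PSNR means the filtered image is closer to the original, which is how the three filters are ranked per noise type in the study.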
25

Seybold, Tamara, Marion Knopp, Christian Keimel, and Walter Stechele. "Beyond Standard Noise Models: Evaluating Denoising Algorithms with Respect to Realistic Camera Noise." International Journal of Semantic Computing 08, no. 02 (June 2014): 145–67. http://dx.doi.org/10.1142/s1793351x14400029.

Abstract:
The development and tuning of denoising algorithms is usually based on readily processed test images that are artificially degraded with additive white Gaussian noise (AWGN). While AWGN allows us to easily generate test data in a repeatable manner, it does not reflect the noise characteristics in a real digital camera. Realistic camera noise is signal-dependent and spatially correlated due to the demosaicking step required to obtain full-color images. Hence, the noise characteristic is fundamentally different from AWGN. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on denoising algorithms. In this paper, we therefore propose an approach to evaluate denoising algorithms with respect to realistic camera noise: we describe a new camera noise model that includes the full processing chain of a single sensor camera. We determine the visual quality of noisy and denoised test sequences using a subjective test with 18 participants. We show that the noise characteristics have a significant effect on visual quality. Quality metrics, which are required to compare denoising results, are applied, and we first evaluate the performance of 12 full-reference metrics. As no-reference metrics are especially useful for parameter tuning, we additionally evaluate five no-reference metrics with our realistic test data. We conclude that a more realistic noise model should be used in future research to improve the quality estimation of digital images and videos and to improve the research on denoising algorithms.
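The difference between AWGN and signal-dependent camera noise is easy to visualise numerically: with shot-like noise, the residual grows with intensity. A minimal sketch (the noise scales and intensity ramp are invented, and the spatial correlation introduced by demosaicking is not modelled here):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.linspace(10.0, 250.0, 256)      # a ramp of pixel intensities

# additive white Gaussian noise: the usual (unrealistic) test degradation
awgn = clean + rng.normal(0.0, 5.0, clean.size)

# signal-dependent (shot-like) noise: the standard deviation grows with
# the signal, as in a real sensor
shot = clean + rng.normal(0.0, 1.0, clean.size) * np.sqrt(clean)

residual = shot - clean                    # noticeably larger in bright areas
```

A denoiser tuned on the `awgn` model will under-smooth the bright regions and over-smooth the dark ones of the `shot` data, which is the mistuning effect the paper warns about.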
26

Liu, Zhenhua, Ting Wang, Yonghua Qu, Huiming Liu, Xiaofang Wu, and Ya Wen. "Prediction of High-Quality MODIS-NPP Product Data." Remote Sensing 11, no. 12 (June 20, 2019): 1458. http://dx.doi.org/10.3390/rs11121458.

Abstract:
Net primary productivity (NPP) is a key vegetation parameter and ecological indicator for tracking natural environmental change. High-quality Moderate Resolution Imaging Spectroradiometer Net primary productivity (MODIS-NPP) products are critical for assuring the scientific rigor of NPP analyses. However, obtaining high-quality MODIS-NPP products consistently is challenged by factors such as cloud contamination, heavy aerosol pollution, and atmospheric variability. This paper proposes a method combining the discrete wavelet transform (DWT) with an extended Kalman filter (EKF) for generating high-quality MODIS-NPP data. In this method, the DWT is used to remove noise in the original MODIS-NPP data, and the EKF is applied to the de-noised images. The de-noised images are modeled as a triply modulated cosine function that predicts the NPP data values when excessive cloudiness is present. This study was conducted in South China. By comparing measured NPP data to original MODIS-NPP and NPP estimates derived from combining the DWT and EKF, we found that the accuracy of the NPP estimates was significantly improved. The MODIS-NPP estimates had a mean relative error (RE) of 13.96% and relative root mean square error (rRMSE) of 15.67%, while the original MODIS-NPP had a mean RE of 23.58% and an rRMSE of 24.98%. The method combining DWT and EKF provides a feasible approach for generating new, high-quality NPP data in the absence of high-quality original MODIS-NPP data.
27

Al-Attabi, Ali, and Ali Al. "Spectral Graph Filtering for Noisy Signals Using the Kalman filter." ECTI Transactions on Electrical Engineering, Electronics, and Communications 21, no. 2 (June 27, 2023): 249818. http://dx.doi.org/10.37936/ecti-eec.2023212.249818.

Abstract:
Noise is unwanted electrical or electromagnetic radiation that degrades the quality of the signal and the data. It can be difficult to denoise a signal acquired in a noisy environment, but doing so may be necessary in a number of signal processing applications. This paper extends signal denoising from noise-affected signals with regular structures to signals with irregular structures by applying the graph signal processing (GSP) technique together with the well-known standard Kalman filter, after adjusting it. Compared to the standard Kalman filter, the modified filter performs better in situations with uncertain observations and/or process noise, showing the best results. The modified Kalman filter also showed higher efficiency when compared with other filters for different types of noise, including not only standard Gaussian noise but also uniformly distributed noise over two intervals for uncertain observation noise.
28

Ohkubo, Masato, and Yasushi Nagata. "Anomaly Detection for Noisy Data with the Mahalanobis–Taguchi System." Quality Innovation Prosperity 24, no. 2 (July 31, 2020): 75. http://dx.doi.org/10.12776/qip.v24i2.1441.

Abstract:
Purpose: Condition-based maintenance requires an accurate detection of unknown, yet-to-have-occurred anomalies, and the establishment of an anomaly detection procedure for sensor data is urgently needed. Sensor data are noisy, and a conventional analysis cannot always be conducted appropriately. An anomaly detection procedure for noisy data was therefore developed. Methodology/Approach: In the conventional Mahalanobis–Taguchi method, appropriate anomaly detection is difficult with noisy data. Herein, the following is applied: 1) estimation of a statistical model considering noise, 2) its application to anomaly detection, and 3) development of a corresponding analysis framework. Findings: Engineers can conduct anomaly detection through the measurement, accumulation, analysis, and feedback of data. In particular, the two-step estimation of the statistical model in the analysis stage helps because it bridges technical knowledge and advanced anomaly detection. Research Limitation/Implication: A novel data-utilisation design regarding the acquired quality is provided. Sensor-collected big data are generally noisy; by contrast, the data targeted by conventional statistical quality control are small but with controlled noise, so various findings for quality acquisition can be obtained. A framework for data analysis using big and small data is provided. Originality/Value of paper: The proposed statistical anomaly detection procedure for noisy data will improve the feasibility of new services such as condition-based maintenance of equipment using sensor data.
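The core of the Mahalanobis–Taguchi idea, scoring new observations by their Mahalanobis distance from a healthy reference group, can be sketched as follows. Toy 2-D data are invented here, and the paper's two-step noise-aware model estimation is not reproduced.

```python
import numpy as np

def mahalanobis_scores(train, test):
    """Distance of each test row from the 'healthy' training distribution."""
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    d = test - mu
    # sqrt(d_i^T * cov_inv * d_i) for every row i
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

rng = np.random.default_rng(0)
healthy = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=500)
queries = np.array([[0.2, 0.1],        # consistent with healthy data
                    [4.0, -4.0]])      # violates the learned correlation
scores = mahalanobis_scores(healthy, queries)
```

Note that the second query is flagged not because its coordinates are individually extreme but because it breaks the positive correlation learned from the healthy data, which is exactly what makes the Mahalanobis distance useful for multivariate sensor streams.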
29

Zokay, Sam. "Construction noise modelling—A comparison of equipment noise emissions data sources." Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A199. http://dx.doi.org/10.1121/10.0027295.

Abstract:
The accuracy of any environmental noise impact model depends on the quality and application of its input data, typically consisting of at-source emission levels, propagation factors, and receiver characteristics. In the realm of construction noise modelling, this starts with the determination of the noise emission level for each construction activity or item of equipment. These data can be procured from a variety of sources, including manufacturer data, standards and guideline documents, or proprietary measurements. This study explores how emission levels from different data sources can be utilized to assess noise impacts against project criteria, and aims to validate their reliability through real-world case studies.
30

Nisha, Bernad, and M. Victor Jose. "DTMF: Decision Based Trimmed Multimode Approach Filter for Denoising MRI Images." IT Journal Research and Development 7, no. 2 (February 7, 2023): 152–72. http://dx.doi.org/10.25299/itjrd.2023.9463.

Abstract:
Brain MRI image denoising is a challenging and attractive field for young researchers because it enhances the quality of medical images. Salt and pepper noise is the most dangerous noise; it reduces the accuracy of brain diagnosis and damages brain medical images severely, which can lead neurologists to prescribe incorrect treatments or surgery. The pitfalls of existing denoising methods are a low peak signal-to-noise ratio, high time consumption, and an inability to handle large noise ranges. Hence, this research proposes a novel denoising filter entitled 'Decision based Trimmed Multimode approach oriented Filter (DTMF)' for salt and pepper noise removal. Herein, the noise removal section is branched into six steps which efficiently reduce noise based on a multimode of majority strength. The main concepts used in this research are the decision-based approach, trimming process, majority of intensity, median, mean, dynamic windows, and the Square shaped Exemplar Modeled Patch Mechanism (SEMPM). The essential contributions of this approach are i) designing a rule set for majority-strength-structured multimode denoising, ii) computation of majority-property-oriented parameters such as majority instance, majority strength, and majority value, and iii) a novel SEMPM mechanism to predict noise-free data. The SEMPM mechanism provides a solution for predicting a noise-free pixel even when the surrounding window of a noisy pixel is completely filled with noisy data. The proposed decision-based approach removes salt and pepper noise with a high peak signal-to-noise ratio even for large noise ranges, with reasonable time consumption.
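The decision-based idea, replacing only pixels flagged as impulses with a statistic of their noise-free neighbours, can be sketched as below. This is a bare-bones version with an invented 3x3 test image; the paper's trimming, multimode rules and SEMPM patch mechanism are not reproduced.

```python
import numpy as np

def decision_based_median(img):
    """Replace only pixels flagged as impulses (0 or 255) with the median of
    the noise-free pixels in their 3x3 neighbourhood; clean pixels pass
    through untouched."""
    pad = np.pad(img, 1, mode='edge')
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] in (0, 255):
                window = pad[i:i + 3, j:j + 3].ravel()
                good = window[(window != 0) & (window != 255)]
                if good.size:
                    out[i, j] = np.median(good)
    return out

noisy = np.array([[10., 12., 11.],
                  [13., 255., 12.],     # a single salt pixel
                  [11., 10., 13.]])
restored = decision_based_median(noisy)
```

Unlike a plain median filter, the decision step leaves uncorrupted pixels unchanged, which is why decision-based filters preserve detail better at high noise densities.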
31

Nematzadeh, Zahra, Roliana Ibrahim, Ali Selamat, and Vahdat Nazerian. "The synergistic combination of fuzzy C-means and ensemble filtering for class noise detection." Engineering Computations 37, no. 7 (June 15, 2020): 2337–55. http://dx.doi.org/10.1108/ec-05-2019-0242.

Abstract:
Purpose The purpose of this study is to enhance data quality and overall accuracy and improve certainty by reducing the negative impacts of the FCM algorithm while clustering real-world data and also decreasing the inherent noise in data sets. Design/methodology/approach The present study proposed a new effective model based on fuzzy C-means (FCM), ensemble filtering (ENS) and machine learning algorithms, called the FCM-ENS model. This model is mainly composed of three parts: noise detection, noise filtering and noise classification. Findings The performance of the proposed model was tested by conducting experiments on six data sets from the UCI repository. As shown by the obtained results, the proposed noise detection model very effectively detected class noise and enhanced performance when the identified class-noise instances were removed. Originality/value To the best of the authors' knowledge, no effort has previously been made to improve the FCM algorithm in relation to class noise detection. Thus, the novelty of this research lies in combining the FCM algorithm as a noise detection technique with ENS to reduce the negative effect of inherent noise and increase data quality and accuracy.
32

Selvaraj, Poovarasan, and E. Chandra. "A variant of SWEMDH technique based on variational mode decomposition for speech enhancement." International Journal of Knowledge-based and Intelligent Engineering Systems 25, no. 3 (November 10, 2021): 299–308. http://dx.doi.org/10.3233/kes-210072.

Abstract:
In Speech Enhancement (SE) techniques, the major challenge is to suppress non-stationary noises, including white noise, in real-time application scenarios. Many techniques have been developed for enhancing vocal signals; however, they were not very effective at suppressing non-stationary noises and had high time and resource consumption. This motivated the Sliding Window Empirical Mode Decomposition and Hurst (SWEMDH)-based SE method, in which the speech signal is decomposed into Intrinsic Mode Functions (IMFs) based on a sliding window, the noise factor in each IMF is chosen based on the Hurst exponent, and the least corrupted IMFs are used to restore the vocal signal. However, this technique was not suitable for white noise scenarios. Therefore, in this paper, a Variant of Variational Mode Decomposition (VVMD) combined with the SWEMDH technique is proposed to reduce the complexity in real-time applications. The key objective of the proposed SWEMD-VVMDH technique is to select the IMFs based on the Hurst exponent and then apply the VVMD technique to suppress both low- and high-frequency noise factors in the vocal signals. First, the noisy vocal signal is decomposed into many IMFs using the SWEMDH technique. Then, the Hurst exponent is computed to identify the IMFs with low-frequency noise factors, and Narrow-Band Components (NBC) are computed to identify the IMFs with high-frequency noise factors. Moreover, VVMD is applied to the sum of all chosen IMFs to remove both low- and high-frequency noise factors. Thus, the speech signal quality is improved under non-stationary noises, including additive white Gaussian noise. Finally, experimental outcomes demonstrate significant speech signal improvement under both non-stationary and white noise conditions.
33

Glick, Meir, Anthony E. Klon, Pierre Acklin, and John W. Davies. "Enrichment of Extremely Noisy High-Throughput Screening Data Using a Naïve Bayes Classifier." Journal of Biomolecular Screening 9, no. 1 (February 2004): 32–36. http://dx.doi.org/10.1177/1087057103260590.

Abstract:
The noise level of a high-throughput screening (HTS) experiment depends on various factors such as the quality and robustness of the assay itself and the quality of the robotic platform. Screening of compound mixtures is noisier than screening single compounds per well. A classification model based on naïve Bayes (NB) may be used to enrich such data. The authors studied the ability of the NB classifier to prioritize noisy primary HTS data of compound mixtures (5 compounds/well) in 4 campaigns in which the percentage of noise presumed to be inactive compounds ranged between 81% and 91%. The top 10% of the compounds suggested by the classifier captured between 26% and 45% of the active compounds. These results are reasonable and useful, considering the poor quality of the training set and the short computing time that is needed to build and deploy the classifier. ( Journal of Biomolecular Screening 2004:32-36)
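A Bernoulli naive Bayes ranker over binary fingerprints is simple to hand-roll, and even with heavy label noise it enriches the top of the ranked list, the effect this abstract reports. Everything below (the feature model, noise rates, and dataset) is synthetic, not the authors' assay data.

```python
import numpy as np

def nb_scores(X, y, alpha=1.0):
    """Bernoulli naive-Bayes log-odds score for each row of binary features X,
    trained on noisy labels y (1 = primary-screen 'active')."""
    p1 = (X[y == 1].sum(axis=0) + alpha) / ((y == 1).sum() + 2 * alpha)
    p0 = (X[y == 0].sum(axis=0) + alpha) / ((y == 0).sum() + 2 * alpha)
    w_on = np.log(p1 / p0)
    w_off = np.log((1 - p1) / (1 - p0))
    return X @ w_on + (1 - X) @ w_off

rng = np.random.default_rng(0)
n, d = 1000, 32
truth = rng.random(n) < 0.1                       # 10% genuinely active
# active compounds switch feature bits on more often than inactive ones
X = (rng.random((n, d)) < np.where(truth[:, None], 0.6, 0.3)).astype(float)
y = truth.copy()
y[rng.random(n) < 0.3] ^= True                    # 30% label noise
scores = nb_scores(X, y.astype(int))
top = np.argsort(scores)[::-1][:100]              # cherry-pick the top 10%
enrichment = truth[top].mean() / truth.mean()     # actives enriched at the top
```

Averaging evidence over many weak features is what lets the classifier tolerate noisy labels: individual bit weights are barely informative, but their sum separates the classes.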
34

Zhang, Zihui, Cuican Yu, Shuang Xu, and Huibin Li. "Learning Flexibly Distributional Representation for Low-quality 3D Face Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3465–73. http://dx.doi.org/10.1609/aaai.v35i4.16460.

Abstract:
Due to the superiority of using geometric information, 3D Face Recognition (FR) has achieved great success. Existing methods focus on high-quality 3D FR, which is impractical in real scenarios. Low-quality 3D FR is a more realistic scenario, but low-quality data are born with heavy noise. Therefore, exploring noise-robust low-quality 3D FR methods is an urgent and challenging problem. To solve this issue, in this paper, we propose to learn flexibly distributional representations for low-quality 3D FR. First, we introduce the distributional representation for low-quality 3D faces because it can weaken the impact of noise. Generally, the distributional representation of a given 3D face is restricted to a specific distribution such as a Gaussian. However, a specific distribution may not be adequate for describing a complex low-quality face. Therefore, we propose to transform this specific distribution into a flexible one via Continuous Normalizing Flow (CNF), which removes the limitation on its form. This kind of flexible distribution can approximate the latent distribution of the given noisy face more accurately, which further improves the accuracy of low-quality 3D FR. Comprehensive experiments show that our proposed method improves both low-quality and cross-quality 3D FR performance on low-quality benchmarks. Furthermore, the improvements are more remarkable on low-quality 3D faces as the intensity of noise increases, which indicates the robustness of the proposed method to noise.
35

F Acernese, M. Agathos, A. Ain, S. Albanesi, A. Allocca, A. Amato, T. Andrade, et al. "Virgo detector characterization and data quality: tools." Classical and Quantum Gravity 40, no. 18 (August 14, 2023): 185005. http://dx.doi.org/10.1088/1361-6382/acdf36.

Abstract:
Abstract Detector characterization and data quality studies—collectively referred to as DetChar activities in this article—are paramount to the scientific exploitation of the joint dataset collected by the LIGO-Virgo-KAGRA global network of ground-based gravitational-wave (GW) detectors. They take place during each phase of the operation of the instruments (upgrade, tuning and optimization, data taking), are required at all steps of the dataflow (from data acquisition to the final list of GW events) and operate at various latencies (from near real-time to vet the public alerts to offline analyses). This work requires a wide set of tools which have been developed over the years to fulfill the requirements of the various DetChar studies: data access and bookkeeping; global monitoring of the instruments and of the different steps of the data processing; studies of the global properties of the noise at the detector outputs; identification and follow-up of noise peculiar features (whether they be transient or continuously present in the data); quick processing of the public alerts. The present article reviews all the tools used by the Virgo DetChar group during the third LIGO-Virgo Observation Run (O3, from April 2019 to March 2020), mainly to analyze the Virgo data acquired at EGO. Concurrently, a companion article focuses on the results achieved by the DetChar group during the O3 run using these tools.
36

Li, Chaoqun, Victor S. Sheng, Liangxiao Jiang, and Hongwei Li. "Noise filtering to improve data and model quality for crowdsourcing." Knowledge-Based Systems 107 (September 2016): 96–103. http://dx.doi.org/10.1016/j.knosys.2016.06.003.

37

Li, Chaoqun, Liangxiao Jiang, and Wenqiang Xu. "Noise correction to improve data and model quality for crowdsourcing." Engineering Applications of Artificial Intelligence 82 (June 2019): 184–91. http://dx.doi.org/10.1016/j.engappai.2019.04.004.

38

Puyana-Romero, Virginia, Giuseppe Ciaburro, Giovanni Brambilla, Christiam Garzón, and Luigi Maffei. "Representation of the soundscape quality in urban areas through colours." Noise Mapping 6, no. 1 (May 16, 2019): 8–21. http://dx.doi.org/10.1515/noise-2019-0002.

Abstract:
Noise mapping is a useful and widespread method to visualise items such as exposure to noise pollution, statistics of the affected population, and the contributions of different noise sources, and it is also a useful tool in designing noise-control plans. Some studies have moved a step further, proposing maps to represent people's perception of the acoustic environment. Most of these maps use colours as mere tools to display the spatial variability of acoustic parameters. In this paper, the colours associated by interviewed people with different urban soundscapes have been analysed, and the possibility of using meaningful colours to represent soundscape quality in noise mapping has been examined. For this purpose, correspondence analysis was applied to data collected from on-site interviews performed on the waterfront of Naples and its surroundings. The outcomes show that in the pedestrian areas near the sea, the colour blue was often associated with the soundscape rating, whereas in areas near the sea but open to road traffic the interviewees selected mainly blue and grey. In areas away from the sea, a wider selection of colours was observed: red and grey were predominantly selected in areas open to road traffic, and green, yellow and red in the green areas.
39

Zafar, M. I., S. Bharadwaj, R. Dubey, and S. Biswas. "DIFFERENT SCALES OF URBAN TRAFFIC NOISE PREDICTION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 14, 2020): 1181–88. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-1181-2020.

Abstract:
Abstract. Noise pollution is an important problem. Areas around road or railway corridors can face serious noise hazards in the outdoor environment. The problem of noise is dynamic and varies from one location to another. It becomes more challenging due to the varying nature of noise sources (e.g., bus, truck, tempo, etc.) that differ in the frequency spectra of their audible noise. Characterising the noise environment of an area requires noise measurement, which is then used for noise prediction. An attempt has been made to predict noise levels in the form of noise maps. Noise prediction requires information on terrain data, noise data (of sources), and a model to predict noise levels around the noise sources. Prediction performance can vary with the terrain data, the noise data, and the prediction model used. Thus, the study was conducted at three different locations: (i) Ratapur Road crossing, Rae Bareli, (ii) Bahadurpur Road crossing at Jais, and (iii) the RGIPT Academic Block close to the railway track. The three studies indicated how prediction performance varies with changes in the quality of terrain data, noise sampling, and noise modeling schemes. Generally, better-quality (comprehensive and precise) terrain data enables better prediction. Similarly, more focused, event-specific noise recording and modeling can provide more detailed, time-specific noise mapping, which is not possible with the customary average noise recording technique. However, detailed and comprehensive modeling demands more complex handling of larger data.
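At the heart of any such prediction scheme is the energetic combination of source levels at the receiver. A minimal sketch, assuming simple free-field point-source propagation (Lp = Lw - 20*log10(r) - 11); the source power levels and distances below are invented, and real models add ground, barrier, and air-absorption terms.

```python
import numpy as np

def combine_levels(levels_db):
    """Energetically sum sound pressure levels from independent sources."""
    return 10 * np.log10(np.sum(10 ** (np.asarray(levels_db) / 10)))

def level_at_distance(lw_db, r_m):
    """Free-field point-source propagation: Lp = Lw - 20*log10(r) - 11."""
    return lw_db - 20 * np.log10(r_m) - 11

# two vehicles passing at different distances from a receiver
lp = [level_at_distance(100.0, 10.0),   # 100 dB source at 10 m -> 69 dB
      level_at_distance(95.0, 20.0)]    # 95 dB source at 20 m -> ~58 dB
total = combine_levels(lp)              # ~69.3 dB: the louder source dominates
```

The logarithmic sum explains why prediction accuracy hinges on the dominant source's emission data: a source 10 dB below the loudest contributes well under 1 dB to the total.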
40

Zhou, Haoqiu, Xuan Feng, Zejun Dong, Cai Liu, and Wenjing Liang. "Application of Denoising CNN for Noise Suppression and Weak Signal Extraction of Lunar Penetrating Radar Data." Remote Sensing 13, no. 4 (February 20, 2021): 779. http://dx.doi.org/10.3390/rs13040779.

Abstract:
As one of the main payloads mounted on the Yutu-2 rover of the Chang'E-4 probe, the lunar penetrating radar (LPR) aims to map the subsurface structure in the Von Kármán crater. The field LPR data are generally masked by large quantities of clutter and noise. To address the noise interference, dozens of filtering methods have been applied to LPR data. However, these methods have their limitations, so noise suppression remains a tough issue worth studying. In this article, a denoising convolutional neural network (CNN) framework is applied to noise suppression and weak signal extraction in 500 MHz LPR data. The results verify that the low-frequency clutter embedded in the LPR data mainly came from the instrument system of the Yutu rover. Moreover, compared with the classic band-pass filter and the mean filter, the CNN filter performs better on noise interference and weak signal extraction; compared with Kirchhoff migration, it can provide an original high-quality radargram with diffraction information. Based on the high-quality radargram provided by the CNN filter, the subsurface sandwich structure is revealed, and the weak signals from three sub-layers within the paleo-regolith are extracted.
41

Oboué, Yapo Abolé Serge Innocent, and Yangkang Chen. "Enhanced low-rank matrix estimation for simultaneous denoising and reconstruction of 5D seismic data." GEOPHYSICS 86, no. 5 (September 1, 2021): V459–V470. http://dx.doi.org/10.1190/geo2020-0773.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Noise and missing traces usually influence the quality of multidimensional seismic data. Therefore, it is necessary to estimate the useful signal from its noisy observation. The damped rank-reduction (DRR) method has emerged as an effective method to reconstruct the useful signal matrix from noisy and incomplete observations. However, the higher the noise level and the larger the ratio of missing traces, the weaker the DRR operator becomes. Consequently, the estimated low-rank (LR) signal matrix includes a significant amount of residual noise that influences the following processing steps. Therefore, we focus on the problem of estimating an LR signal matrix from its noisy observation. To elaborate on the novel algorithm, we formulate an improved proximity function by mixing the moving-average filter and the arctangent penalty function. First, we apply the proximity function to the level-4 block Hankel matrix before singular-value decomposition (SVD) and, then, to singular values, during the damped truncated SVD process. The combination of the novel proximity function and the DRR framework leads to an optimization problem, which results in better recovery performance. Our algorithm aims at producing an enhanced rank-reduction operator to estimate the useful signal matrix with higher quality. Experiments are conducted on synthetic and real 5D seismic data to compare the effectiveness of our approach to the DRR approach. Our approach obtains better performance because the estimated LR signal matrix is cleaner and contains fewer artifacts compared to that reconstructed by the DRR algorithm.
42

Carpenter, Chris. "Cleaned Hydrophone Array Logging Data Aids Identification of Wellbore Leaks." Journal of Petroleum Technology 74, no. 07 (July 1, 2022): 59–61. http://dx.doi.org/10.2118/0722-0059-jpt.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 201512, "Enhanced Wellbore-Leak Localization With Estimation and Removal of Guided Wave Noise Using Array Hydrophone Logging Data," by Yao Ge, Ruijia Wang, and Yi Yang Ang, Halliburton, et al. The paper has not been peer reviewed. Leaks in wellbore tubulars emit acoustic waves in the borehole that can be captured by a hydrophone array. Processing the array data yields the location and energy level of wellbore leaks; however, the hydrophones may also capture other coherent noise propagating as guided waves along the borehole. The complete paper describes an approach to estimate and subsequently remove the guided-wave noise from the hydrophone-array data to improve the accuracy of leak-source locations. Estimating the propagation direction and amplitude of leak-induced guided waves helps logging operations locate a leak source efficiently.
Noise Logging and Well Integrity
Well integrity has become a focus area for most operators, given the long life cycle and complex structure of a wellbore. Noise-logging tools have been developed to detect flow or leaks in a wellbore. A typical noise-logging tool consists of one or two acoustic sensors logged through the depth of the well that produce the amplitude and frequency-spectrum log of the received acoustic signals. A noise-logging tool usually operates in two modes: the continuous (or dynamic) mode and the stationary mode. In the continuous mode, the tool is usually logged through all accessible depths at a constant speed. Based on the data from the continuous pass, specific depths are identified, and additional stationary-mode passes are performed to stop the tool at selected depths and acquire higher-quality data. A major issue with the data from continuous logging is the presence of road noise, which is created by the centralizer, cable, or any part of the tool string scratching against the casing or tubing.
To reduce the effect of road noise, a high-pass filter is usually applied to remove signal from the lower-frequency range corresponding to the road noise. However, this procedure can degrade the leak signal when a portion of it lies in the lower-frequency range. Logging data taken while the tool is stationary for 1 minute or more at a target depth is free from road noise, but this approach prolongs logging time and limits depth resolution. Aside from road noise, other propagating noise exists in the wellbore, such as noise from surface or downhole equipment or propagating noise induced by a leak source. Road noise and all forms of propagating noise are referred to as guided-wave noise in the complete paper. The guided waves usually propagate along the borehole axis over long distances with low attenuation. However, traditional noise-logging tools with one or more omnidirectional hydrophones are unable to distinguish guided-wave noise from leak signals. Recent advances in noise-logging-tool design use hydrophone arrays to provide not only the depth but also the radial location of downhole leaks. The arrays also enable an advanced array-processing method to measure and suppress the guided-wave noise.
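The customary road-noise remedy described above, a high-pass filter, can be sketched as follows; the cut-off, sampling rate, and tone frequencies are invented for illustration.

```python
import numpy as np

def high_pass(x, fs, f_cut):
    """Ideal high-pass via FFT: zero all bins below f_cut.
    Field tools use properly designed filters; this is a sketch."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spec[freqs < f_cut] = 0.0
    return np.fft.irfft(spec, n=x.size)

# low-frequency "road noise" plus a higher-frequency leak tone
fs = 48000.0
t = np.arange(4096) / fs
road = 0.9 * np.sin(2 * np.pi * 120 * t)     # centralizer/cable rubbing
leak = 0.2 * np.sin(2 * np.pi * 6000 * t)    # leak-induced signal
cleaned = high_pass(road + leak, fs, 1000.0)
```

The filter recovers the leak tone here only because it sits entirely above the cut-off; as the text notes, any leak energy below the cut-off would be discarded along with the road noise.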
43

Hermance, John F. "Ground‐penetrating radar: Postmigration stacking of n-fold common midpoint profile data." GEOPHYSICS 66, no. 2 (March 2001): 379–88. http://dx.doi.org/10.1190/1.1444929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Diffraction and nonvertical side‐looking reflection patterns are typical features of most commercial ground‐penetrating radar (GPR) surveys. While a number of techniques are used by GPR workers to migrate such signals back to the proper position of the subsurface reflector, these have not been generally applied to multifold data nor to conditions of extreme noise. A procedure known among seismologists as wavefront migration is adapted here for processing multifold common midpoint (CMP) GPR data; it appears to be promising for both noise‐free and noisy data. The algorithm is largely geometrical and nonmathematical, and it is readily implemented on a personal computer. An example of synthetic data with extreme levels of noise illustrates that migrating prestack multifold CMP data, followed by postmigration, n-fold stacking, leads to a substantial improvement in image quality over unmigrated or single‐fold migrated data.
44

Yuan, Jianlong, Jiashun Yu, Xiaobo Fu, and Chao Han. "Antinoise performance of time-reverse imaging conditions for microseismic location." GEOPHYSICS 85, no. 3 (April 14, 2020): KS75–KS87. http://dx.doi.org/10.1190/geo2019-0488.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A suitable imaging condition is critical for the success of seismic imaging or source location. To understand which imaging condition to select for handling noisy data, the antinoise performance of the maximum amplitude imaging condition (MAIC), the autocorrelation imaging condition (ACIC), and the geometric mean imaging condition (GMIC) was studied comparatively. Synthetic microseismic data based on the Marmousi2 model, with different levels of synthetic Gaussian noise and field noise added separately, were used for the tests. For the Gaussian-noise data, five signal-to-noise (S/N) levels were considered, ranging from an absolutely clean level of [Formula: see text] to an extremely noisy level of [Formula: see text], each level five times the S/N of the level below it. It was found that the antinoise ability of MAIC outperforms ACIC, and ACIC outperforms GMIC. This conclusion was confirmed to hold for field noise in further experiments using 16 groups of industrial noise recordings from different areas; statistical analysis shows that these performance differences are consistently significant. In terms of spatial resolution, the order is reversed: GMIC outperforms ACIC, and ACIC outperforms MAIC. This suggests that, in choosing a suitable imaging condition for time-reverse imaging location, one needs to balance the resolution demand against the data quality. If the data quality is very high, GMIC may be used to achieve a high-resolution location result. Conversely, if the data quality is poor, MAIC is a good choice for obtaining a robust location result. In between, ACIC or grouped GMIC is a proper approach for balancing the resolution demand against the noise level of the data.
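Schematic versions of the three imaging conditions can be written down as follows. The exact formulas in the paper operate on back-propagated wavefields in a velocity model; these reductions (a receiver stack, its zero-lag autocorrelation, and a geometric-mean-style product) are only meant to convey the idea, and the toy data are invented.

```python
import numpy as np

def imaging_conditions(U):
    """Toy imaging conditions on back-propagated fields U
    with shape (n_receivers, n_image_points, n_time_steps)."""
    stack = U.sum(axis=0)                           # receiver stack
    maic = np.abs(stack).max(axis=-1)               # max-amplitude (MAIC-like)
    acic = (stack ** 2).sum(axis=-1)                # zero-lag autocorrelation (ACIC-like)
    gmic = np.prod(np.abs(U), axis=0).sum(axis=-1)  # geometric-mean style (GMIC-like)
    return maic, acic, gmic

rng = np.random.default_rng(1)
U = 0.05 * rng.standard_normal((4, 50, 200))  # 4 receivers, 50 points, 200 steps
U[:, 20, 100] += 1.0                          # all fields focus at point 20, step 100
maic, acic, gmic = imaging_conditions(U)      # each image should peak at point 20
```

The product in the GMIC-style image rewards points where all receivers focus simultaneously, which is why it resolves sharply but degrades fastest as noise grows, consistent with the trade-off the paper reports.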
45

Geng, Yu, Zongbo Han, Changqing Zhang, and Qinghua Hu. "Uncertainty-Aware Multi-View Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7545–53. http://dx.doi.org/10.1609/aaai.v35i9.16924.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Learning from different data views by exploring the underlying complementary information among them can endow the representation with stronger expressive ability. However, high-dimensional features tend to contain noise, and furthermore, data quality usually varies across samples (and even across views), i.e., one view may be informative for one sample but not for another. It is therefore quite challenging to integrate multi-view noisy data in an unsupervised setting. Traditional multi-view methods either simply treat each view with equal importance or tune the weights of different views to fixed values, which is insufficient to capture the dynamic noise in multi-view data. In this work, we devise a novel unsupervised multi-view learning approach, termed Dynamic Uncertainty-Aware Networks (DUA-Nets). Guided by the uncertainty of the data estimated from the generation perspective, intrinsic information from multiple views is integrated to obtain noise-free representations. With the help of uncertainty estimation, DUA-Nets weight each view of an individual sample according to its data quality, so that high-quality samples (or views) can be fully exploited while the effects of noisy samples (or views) are alleviated. Our model achieves superior performance in extensive experiments and shows robustness to noisy data.
46

Abe, Hitoshi, Giuliana Aquilanti, Roberto Boada, Bruce Bunker, Pieter Glatzel, Maarten Nachtegaal, and Sakura Pascarelli. "Improving the quality of XAFS data." Journal of Synchrotron Radiation 25, no. 4 (May 29, 2018): 972–80. http://dx.doi.org/10.1107/s1600577518006021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Following the Q2XAFS Workshop and Satellite to IUCr Congress 2017 on 'Data Acquisition, Treatment, Storage – quality assurance in XAFS spectroscopy', a summary is given of the discussion on different aspects of a XAFS experiment that affect data quality. Some pertinent problems ranging from sources and minimization of noise to harmonic contamination and uncompensated monochromator glitches were addressed. Also, an overview is given of the major limitations and pitfalls of a selection of related methods, such as photon-out spectroscopies and energy-dispersive XAFS, and of increasingly common applications, namely studies at high pressure, and time-resolved investigations of catalysts in operando. Advice on how to avoid or deal with these problems and a few good practice recommendations are reported, including how to correctly report results.
47

Lee, Sang-Kwon, Kanghyun An, Hye-Young Cho, and Sung-Uk Hwang. "Prediction and Sound Quality Analysis of Tire Pattern Noise Based on System Identification by Utilizing an Optimal Adaptive Filter." Applied Sciences 9, no. 19 (September 24, 2019): 3995. http://dx.doi.org/10.3390/app9193995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Identifying the cause of vehicle noise is a basic requirement for the development of low-noise vehicles. Tire pattern noise depends on the tire itself and produces complex, hard-to-predict sounds. In pneumatic-tire pattern design, technology for predicting pattern noise from the pattern shape is important. The conventional method of predicting tire pattern noise is simply to scan the pattern shape of the tire and analyze its spectrum. However, this method has limitations because it does not consider the transfer function and the precise mechanism of tire pattern noise. In this study, adaptive filter theory was applied to identify the transfer function between the pattern grooves and measured acoustic data. To predict the waveform of actual pattern noise in the time domain, the impulse response of this transfer function was convolved with the scanned pattern input of the tires. The predicted waveform of pattern noise was validated against measured noise data. Finally, a sound quality index (SQI) of tire pattern noise was developed using the measured pattern noises and applied to estimate the sound quality of pattern noise. Using the prediction method from this study, we hope to reduce the time and cost spent on tire pattern design and verification.
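The system-identification step can be illustrated with a least-mean-squares (LMS) adaptive filter, a common choice for estimating an unknown impulse response from input/output data; the "true" response below is invented, and the paper's optimal adaptive filter may differ in detail.

```python
import numpy as np

def lms_identify(x, d, n_taps, mu=0.005):
    """LMS adaptive filter: estimate the FIR impulse response w
    such that d[n] ~ sum_k w[k] * x[n-k]."""
    w = np.zeros(n_taps)
    for n in range(n_taps, x.size):
        u = x[n - n_taps + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-n_taps+1]
        e = d[n] - w @ u                   # a-priori prediction error
        w += 2 * mu * e * u                # stochastic-gradient update
    return w

rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2, 0.1])   # hypothetical groove-to-noise response
x = rng.standard_normal(20000)             # scanned pattern input (stand-in)
d = np.convolve(x, h_true)[:x.size]        # "measured" acoustic output
w_hat = lms_identify(x, d, n_taps=4)       # recovers h_true closely
```

Once the impulse response is identified, convolving it with a new scanned pattern input predicts that pattern's noise waveform, which is the prediction step described in the abstract.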
48

Konyar, Mehmet Zeki, and Sıtkı Öztürk. "Reed Solomon Coding-Based Medical Image Data Hiding Method against Salt and Pepper Noise." Symmetry 12, no. 6 (June 1, 2020): 899. http://dx.doi.org/10.3390/sym12060899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Medical data hiding is used to hide patient information inside medical images to protect patient privacy. Patient information in the image should be protected when medical images are sent to other specialists or hospitals over a communication network. However, the images are exposed to various unwanted disruptive signals in the communication channel. One of these is salt and pepper noise: a pixel exposed to salt and pepper noise becomes completely black or completely white. In pixel-based data hiding methods, it is not possible to extract the secret message from a pixel exposed to this kind of noise. While current data hiding methods are robust to many disruptive effects, they are weak against salt and pepper noise. For this reason, this study focuses on the accurate extraction of patient information from medical images corrupted by salt and pepper noise, proposing a data hiding method based on Reed-Solomon error-control coding. The most important feature of Reed-Solomon codes is that they can directly correct errors in non-binary (decimal) symbols; the Reed-Solomon-based data hiding method proposed in this study therefore increases resistance to salt and pepper noise. Experimental studies show that the secret data are accurately extracted from stego images with various densities of salt and pepper noise. Stego medical images created by the proposed method have superior quality values compared to similar studies in the literature, and the secret message is extracted from the noisy stego image with higher accuracy than with similar methods.
49

Zhu, Mengjun, Wenjun Yi, Zhaohua Dong, Peng Xiong, Junyi Du, Xingjia Tang, Ying Yang, et al. "Refinement method for compressive hyperspectral data cubes based on self-fusion." Journal of the Optical Society of America A 39, no. 12 (November 22, 2022): 2282. http://dx.doi.org/10.1364/josaa.465165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Compressive hyperspectral images often suffer from various noises and artifacts, which severely degrade imaging quality and limit subsequent applications. In this paper, we present a refinement method for compressive hyperspectral data cubes based on self-fusion of the raw data cubes, which can effectively reduce various noises and improve the spatial and spectral details of the data cubes. To verify the universality, flexibility, and extensibility of the self-fusion refinement (SFR) method, a series of simulations and practical experiments was conducted, and SFR processing was performed with different fusion algorithms. Visual and quantitative assessments of the results demonstrate that, in terms of noise reduction and spatial-spectral detail restoration, the SFR method is generally much better than other typical denoising methods for hyperspectral data cubes. The results also indicate that the denoising effect of SFR depends greatly on the fusion algorithm used, and SFR implemented with joint bilateral filtering (JBF) performs better than SFR with guided filtering (GF) or a Markov random field (MRF). The proposed SFR method can significantly improve the quality of a compressive hyperspectral data cube in terms of noise reduction, artifact removal, and spatial and spectral detail improvement, which will further benefit subsequent hyperspectral applications.
50

Zhao, Y. X., Y. Li, and N. Wu. "Data augmentation and its application in distributed acoustic sensing data denoising." Geophysical Journal International 228, no. 1 (August 24, 2021): 119–33. http://dx.doi.org/10.1093/gji/ggab345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As a data-driven approach, the performance of deep learning models depends largely on the quantity and quality of the training data sets, which greatly limits the application of deep learning to tasks with small data sets. Unfortunately, sometimes we need to use limited small data sets to complete our tasks, such as distributed acoustic sensing (DAS) data denoising. However, using a small data set to train the network may cause overfitting, resulting in poor network generalization. To solve this problem, we propose an approach based on the combination of a generative adversarial network and a deep convolutional neural network. First, we used a small noise data set to train a generative adversarial network to generate synthetic noise samples, and then used these synthetic noise samples to augment the noise data set. Next, we used the augmented noise data set and the signal data set obtained through forward modelling to construct a synthetic training set. Finally, a denoising network based on a convolutional neural network was trained on the constructed synthetic training set. Experimental results show that the augmented data set can effectively improve the denoising performance and generalization ability of the network, and the denoising network trained on the augmented data set can more effectively reduce various kinds of noise in the DAS data.
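As a lightweight stand-in for the GAN-based noise synthesis (training a GAN is beyond a short sketch), the snippet below augments a small noise set by random cropping, time reversal, and polarity flips, then pairs the augmented noise with forward-modelled signals to build a synthetic denoising training set; all sizes and records are invented.

```python
import numpy as np

def augment_noise(noise_records, n_out, win, rng):
    """Grow a small noise set by random cropping, time reversal,
    and polarity flips (a stand-in for GAN-synthesised noise)."""
    out = np.empty((n_out, win))
    for i in range(n_out):
        rec = noise_records[rng.integers(len(noise_records))]
        s = rng.integers(rec.size - win + 1)
        seg = rec[s:s + win]                     # random crop
        if rng.random() < 0.5:
            seg = seg[::-1]                      # time reversal
        out[i] = seg * rng.choice([-1.0, 1.0])   # polarity flip
    return out

rng = np.random.default_rng(0)
recorded_noise = [rng.standard_normal(2048) for _ in range(3)]  # few field records
signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 256))[None, :]    # forward-modelled event
noise = augment_noise(recorded_noise, n_out=128, win=256, rng=rng)
train_x = signal + noise                                        # noisy inputs
train_y = np.broadcast_to(signal, train_x.shape)                # clean targets
```

The denoising network is then trained on (train_x, train_y) pairs; the point of the augmentation, as in the paper, is that the noise set seen in training is far larger and more varied than the original small recording.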
