To see the other types of publications on this topic, follow the link: NOISEX database.

Journal articles on the topic 'NOISEX database'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'NOISEX database.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Yan, Zhen-min Tang, Yan-ping Li, and Yang Luo. "A Hierarchical Framework Approach for Voice Activity Detection and Speech Enhancement." Scientific World Journal 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/723643.

Abstract:
Accurate and effective voice activity detection (VAD) is a fundamental step for robust speech or speaker recognition. In this study, we propose a hierarchical framework approach for VAD and speech enhancement. The modified Wiener filter (MWF) approach is utilized for noise reduction in the speech enhancement block. For the feature selection and voting block, several discriminating features are employed in a voting paradigm, chosen for their reliability and discriminative power. The effectiveness of the proposed approach is evaluated and compared with other VAD techniques using two well-known databases, namely the TIMIT and NOISEX-92 databases. Experimental results show that the proposed method performs well under a variety of noisy conditions.
2

Qi, Yingmei, Heming Huang, and Huiyun Zhang. "Research on Speech Emotion Recognition Method Based A-CapsNet." Applied Sciences 12, no. 24 (December 17, 2022): 12983. http://dx.doi.org/10.3390/app122412983.

Abstract:
Speech emotion recognition is an important research direction within speech recognition. To increase the performance of speech emotion detection, researchers have worked relentlessly to improve data augmentation, feature extraction, and pattern formation. To address the concerns of limited speech data resources and model training overfitting, A-CapsNet, a neural network model based on data augmentation methodologies, is proposed in this research. In order to solve the issue of data scarcity and achieve the goal of data augmentation, the noise from the Noisex-92 database is first combined with four different data division methods (emotion-independent random-division, emotion-dependent random-division, emotion-independent cross-validation and emotion-dependent cross-validation, abbreviated as EIRD, EDRD, EICV and EDCV, respectively). The EMODB database is then used to analyze and compare the performance of the model proposed in this paper under different signal-to-noise ratios, and the results show that the proposed model and data augmentation are effective.
3

FAROOQ, O., S. DATTA, and M. C. SHROTRIYA. "WAVELET SUB-BAND BASED TEMPORAL FEATURES FOR ROBUST HINDI PHONEME RECOGNITION." International Journal of Wavelets, Multiresolution and Information Processing 08, no. 06 (November 2010): 847–59. http://dx.doi.org/10.1142/s0219691310003845.

Abstract:
This paper proposes the use of a wavelet transform-based feature extraction technique for Hindi speech recognition. The proposed features take into account temporal as well as frequency band energy variations for the task of Hindi phoneme recognition. The recognition performance achieved by the proposed features is compared with standard MFCC and 24-band admissible wavelet packet-based features using a linear discriminant function based classifier. To evaluate the robustness of these features, the NOISEX database is used to add different types of noise to the phonemes at signal-to-noise ratios in the range of 20 dB to -5 dB. The recognition results show that under noisy backgrounds the proposed technique consistently outperforms MFCC-based features.
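The SNR sweep described above (adding NOISEX noise to clean phonemes at 20 dB down to -5 dB) amounts to scaling the noise so the mixture hits a target SNR. Below is a minimal sketch of that mixing step, assuming `clean` and `noise` are NumPy arrays sampled at the same rate; all names are illustrative, not taken from the paper.

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so that clean + noise has the requested SNR in dB."""
    noise = np.resize(noise, clean.shape)        # loop/trim the noise to length
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10.0)))
    return clean + gain * noise

# e.g. one condition of the sweep: babble noise mixed in at -5 dB SNR
# noisy = mix_at_snr(phoneme, babble, snr_db=-5)
```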
4

Rudramurthy, M. S., V. Kamakshi Prasad, and R. Kumaraswamy. "Speaker Verification Under Degraded Conditions Using Empirical Mode Decomposition Based Voice Activity Detection Algorithm." Journal of Intelligent Systems 23, no. 4 (December 1, 2014): 359–78. http://dx.doi.org/10.1515/jisys-2013-0085.

Abstract:
The performance of most of the state-of-the-art speaker recognition (SR) systems deteriorates under degraded conditions, owing to mismatch between the training and testing sessions. This study focuses on the front end of the speaker verification (SV) system to reduce the mismatch between training and testing. An adaptive voice activity detection (VAD) algorithm using zero-frequency filter assisted peaking resonator (ZFFPR) was integrated into the front end of the SV system. The performance of this proposed SV system was studied under degraded conditions with 50 selected speakers from the NIST 2003 database. The degraded condition was simulated by adding different types of noises to the original speech utterances. The different types of noises were chosen from the NOISEX-92 database to simulate degraded conditions at signal-to-noise ratio levels from 0 to 20 dB. In this study, widely used 39-dimension Mel frequency cepstral coefficient (MFCC; i.e., 13-dimension MFCCs augmented with 13-dimension velocity and 13-dimension acceleration coefficients) features were used, and Gaussian mixture model–universal background model was used for speaker modeling. The proposed system's performance was studied against the energy-based VAD used as the front end of the SV system. The proposed SV system showed some encouraging results when EMD-based VAD was used at its front end.
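The 39-dimension feature vector mentioned here (13 MFCCs plus velocity and acceleration coefficients) is a standard construction. A sketch using librosa is shown below; the file name is purely illustrative and the frame settings are left at library defaults rather than whatever the authors used.

```python
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)      # hypothetical speech file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 static coefficients
delta = librosa.feature.delta(mfcc)                  # velocity (first difference)
delta2 = librosa.feature.delta(mfcc, order=2)        # acceleration (second difference)
features = np.vstack([mfcc, delta, delta2])          # 39 x n_frames feature matrix
```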
5

Yang, Jie. "Combining Speech Enhancement and Cepstral Mean Normalization for LPC Cepstral Coefficients." Key Engineering Materials 474-476 (April 2011): 349–54. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.349.

Abstract:
A mismatch between training and testing in noisy circumstances often causes a drastic decrease in the performance of a speech recognition system. Robust feature coefficients can suppress this sensitivity to mismatch during the recognition stage. In this paper, we investigate the noise robustness of LPC Cepstral Coefficients (LPCC) by using speech enhancement with feature post-processing. At the front end, speech enhancement in the wavelet domain is used to remove noise components from noisy signals. This enhancement combines the discrete wavelet transform (DWT), wavelet packet decomposition (WPD), multi-threshold processing, etc., to obtain the estimated speech. The feature post-processing employs cepstral mean normalization (CMN) to compensate for the signal distortion and residual noise of the enhanced signals in the cepstral domain. The performance of digit speech recognition systems is evaluated under noisy environments based on the NOISEX-92 database. The experimental results show that the presented method exhibits performance improvements in adverse noise environments compared with the previous features.
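Cepstral mean normalization as used here is a per-utterance operation: subtract the mean of each cepstral coefficient over time so that channel offsets and residual-noise biases cancel. A minimal sketch, assuming the LPCC (or MFCC) features are already stacked into a frames-by-coefficients array:

```python
import numpy as np

def cmn(cepstra, normalize_variance=False):
    """cepstra: (n_frames, n_coeffs) array of cepstral feature vectors."""
    normalized = cepstra - cepstra.mean(axis=0)   # remove per-coefficient mean
    if normalize_variance:                        # optional variance scaling (CMVN)
        normalized /= cepstra.std(axis=0) + 1e-8
    return normalized
```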
6

Upadhyaya, Prashant, Omar Farooq, M. R. Abidi, and Priyanka Varshney. "Comparative Study of Visual Feature for Bimodal Hindi Speech Recognition." Archives of Acoustics 40, no. 4 (December 1, 2015): 609–19. http://dx.doi.org/10.1515/aoa-2015-0061.

Abstract:
In building speech recognition based applications, robustness to different noisy background conditions is an important challenge. In this paper a bimodal approach is proposed to improve the robustness of a Hindi speech recognition system, and the importance of different types of visual features is studied for an audio visual automatic speech recognition (AVASR) system under diverse noisy audio conditions. Four sets of visual features based on the Two-Dimensional Discrete Cosine Transform (2D-DCT), Principal Component Analysis (PCA), Two-Dimensional Discrete Wavelet Transform followed by DCT (2D-DWT-DCT) and Two-Dimensional Discrete Wavelet Transform followed by PCA (2D-DWT-PCA) are reported. The audio features are extracted using Mel Frequency Cepstral Coefficients (MFCC) followed by static and dynamic features. Overall, 48 features, i.e. 39 audio features and 9 visual features, are used for measuring the performance of the AVASR system. The performance of the AVASR system on noisy speech generated using the NOISEX database is also evaluated for different signal-to-noise ratios (SNR: 30 dB to −10 dB) using the Aligarh Muslim University Audio Visual (AMUAV) Hindi corpus. The AMUAV corpus is a high-quality continuous-speech audio-visual database of Hindi sentences spoken by different subjects.
7

Varga, Andrew, and Herman J. M. Steeneken. "Assessment for automatic speech recognition: II. NOISEX-92: A database and an experiment to study the effect of additive noise on speech recognition systems." Speech Communication 12, no. 3 (July 1993): 247–51. http://dx.doi.org/10.1016/0167-6393(93)90095-3.

8

Yang, Ren Di, and Yan Li Zhang. "Denoising of ECG Signal Based on Empirical Mode Decomposition and Adaptive Noise Cancellation." Applied Mechanics and Materials 40-41 (November 2010): 140–45. http://dx.doi.org/10.4028/www.scientific.net/amm.40-41.140.

Abstract:
To remove the noise in ECG signals and to overcome the disadvantages of denoising based on empirical mode decomposition (EMD) alone, a combination of EMD and adaptive noise cancellation is introduced in this paper. The noisy ECG signals are first decomposed into intrinsic mode functions (IMFs) by EMD. The IMFs corresponding to noise are then used to reconstruct a signal. With this reconstructed signal as the reference input of the adaptive noise canceller and the noisy ECG as the primary input, the de-noised ECG signal is obtained after adaptive filtering. The de-noised ECG has a high signal-to-noise ratio, a favourable correlation coefficient and a lower mean square error. By analyzing these performance parameters and testing the denoising method on the MIT-BIH Database, it can be concluded that the combination of EMD and adaptive noise cancellation takes the frequency distribution of the ECG and the noise into account, eliminates the noise effectively and does not require selecting a threshold.
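The adaptive noise cancellation stage pairs a primary input (the noisy ECG) with a reference input (here, the noise-dominated IMF reconstruction). A common way to realize such a canceller is an LMS filter; the sketch below is a generic LMS canceller, not the paper's exact configuration, and the filter order and step size are illustrative.

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.01):
    """Generic LMS adaptive noise canceller.
    primary   : noisy ECG (signal + noise)
    reference : input correlated with the noise only
    Returns the error signal, which approximates the de-noised ECG."""
    w = np.zeros(order)
    cleaned = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]   # most recent reference samples first
        noise_hat = np.dot(w, x)           # estimate of the noise in the primary input
        e = primary[n] - noise_hat         # error = cleaned sample
        w += 2 * mu * e * x                # LMS weight update
        cleaned[n] = e
    return cleaned
```

In practice the step size must be chosen relative to the reference signal's power (or a normalized LMS update used) for the adaptation to stay stable.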
9

Ataeyan, Mahdieh, and Negin Daneshpour. "Automated Noise Detection in a Database Based on a Combined Method." Statistics, Optimization & Information Computing 9, no. 3 (June 9, 2021): 665–80. http://dx.doi.org/10.19139/soic-2310-5070-879.

Abstract:
Data quality has diverse dimensions, of which accuracy is the most important. Data cleaning is one of the preprocessing steps in data mining and consists of detecting errors and repairing them. Noise is a common type of error that occurs in databases. This paper proposes an automated method based on k-means clustering for noise detection. First, each attribute (Aj) is temporarily removed from the data and k-means clustering is applied to the other attributes. Thereafter, the k-nearest neighbors are used in each cluster, and a value is predicted for Aj in each record from its nearest neighbors. The proposed method detects noisy attributes using the predicted values. The method is able to identify several noisy values in a record and can also detect noise in fields with different data types. Experiments show that this method detects, on average, 92% of the noise existing in the data. The proposed method is compared with a noise detection method using association rules; the results indicate that the proposed method improves noise detection by an average of 13%.
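The detection loop described above (drop one attribute, cluster on the rest, predict the dropped value from nearest neighbours, flag large deviations) can be sketched for purely numeric data as follows. The cluster count, neighbour count and deviation test are illustrative choices, and the paper's handling of non-numeric fields is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def flag_noisy_cells(X, n_clusters=5, n_neighbors=5, tol=3.0):
    """Return a boolean mask marking suspected noisy cells of X (numeric only)."""
    X = np.asarray(X, dtype=float)
    flags = np.zeros_like(X, dtype=bool)
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)                      # temporarily remove Aj
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(others)
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            if len(idx) <= n_neighbors:
                continue
            nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(others[idx])
            _, nbrs = nn.kneighbors(others[idx])
            pred = X[idx[nbrs[:, 1:]], j].mean(axis=1)        # neighbours' values, self excluded
            resid = np.abs(X[idx, j] - pred)
            flags[idx, j] = resid > tol * (resid.std() + 1e-8)
    return flags
```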
10

Ma, Lilong, Tuanwei Xu, Kai Cao, Yinghao Jiang, Dimin Deng, and Fang Li. "Signal Activity Detection for Fiber Optic Distributed Acoustic Sensing with Adaptive-Calculated Threshold." Sensors 22, no. 4 (February 21, 2022): 1670. http://dx.doi.org/10.3390/s22041670.

Abstract:
The key step in analyzing the data stream measured by fiber optic distributed acoustic sensing (DAS) is signal activity detection, which separates measured signals from environmental noise. The inability of current methods to calculate the detection threshold accurately and efficiently without affecting the measured signals is a bottleneck. In this article, a novel signal activity detection method with an adaptively calculated threshold is proposed to solve this problem. By analyzing the statistical commonality of the time-varying random noise and the short-term energy (STE) of the real-time data stream, the top range of the noise's total STE distribution is located accurately from the stream's ascending STE values, and the adaptive dividing level between signals and noise is obtained as the threshold. Experiments are implemented on a simulated database and an urban field database with complex noise. The average detection accuracies on the two databases are 97.34% and 90.94%, while consuming only 0.0057 s for a 10 s data stream, which demonstrates that the proposed method is accurate and highly efficient for signal activity detection.
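The core quantity in this kind of detector is the short-term energy of the incoming stream and a threshold drawn from the noise part of its distribution. The sketch below illustrates the generic idea only; the exact way the paper locates "the top range of the noise STE distribution" is not reproduced, and the frame size, noise fraction and margin are illustrative.

```python
import numpy as np

def short_term_energy(x, frame=512, hop=256):
    frames = np.lib.stride_tricks.sliding_window_view(x, frame)[::hop]
    return np.sum(frames.astype(float) ** 2, axis=1)

def detect_activity(x, frame=512, hop=256, noise_fraction=0.5, margin=3.0):
    """Mark frames whose STE rises well above the estimated noise level."""
    ste = short_term_energy(x, frame, hop)
    quiet = np.sort(ste)[: int(len(ste) * noise_fraction)]   # lowest-energy frames as noise proxy
    threshold = margin * np.percentile(quiet, 95)            # top of the noise STE range
    return ste > threshold
```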
11

Moreno Escobar, Jesús Jaime, Erika Yolanda Aguilar del Villar, Oswaldo Morales Matamoros, and Liliana Chanona Hernández. "3D22MX: Performance Subjective Evaluation of 3D/Stereoscopic Image Processing and Analysis." Mathematics 11, no. 1 (December 29, 2022): 171. http://dx.doi.org/10.3390/math11010171.

Abstract:
This work is divided into three parts: (i) a methodology developed for building a 3D/stereoscopic database, called 3D22MX, (ii) a software tool designed for degradation of 3D/stereoscopic images, and (iii) a psychophysical experiment carried out for a specific type of noise. The novelty of this work is to integrate these three parts precisely, to serve not only professionals who design algorithms to estimate three-dimensional image quality but also those who wish to generate new image databases. For the development of the 3D/stereoscopic database, 15 indoor images and 5 outdoor ones were spatially calibrated and lighted for different types of scenarios; the calibration criteria differ for indoor and outdoor images. The software tool to degrade 3D/stereoscopic images is implemented in the MATLAB programming language, since the images captured in the first part are processed to produce several image degradations. Our program offers ten different types of noise for degradation, such as white Gauss impulse, localvar, spatial correlation, salt & pepper, speckle, blur, contrast, jpeg, and j2k. Because each type of noise contains up to five levels of degradation, a database of 20 images is required to design a tool for degrading and generating three-dimensional images covering all types of noise for the psychophysical experiments. Finally, specific criteria are applied to carry out psychophysical experiments with 3D/stereoscopic images, and we analyze the methodology used to qualify images degraded with j2k noise, explaining every degradation level for this noise.
12

Zhang, Yatao, Shoushui Wei, Yutao Long, and Chengyu Liu. "Performance Analysis of Multiscale Entropy for the Assessment of ECG Signal Quality." Journal of Electrical and Computer Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/563915.

Abstract:
This study explored the performance of multiscale entropy (MSE) for the assessment of mobile ECG signal quality, aiming to provide a reasonable application guideline. Firstly, the MSE for typical noises, that is, high frequency (HF) noise, low frequency (LF) noise, and power-line (PL) noise, was analyzed. The sensitivity of MSE to the signal-to-noise ratio (SNR) of synthetic artificial ECG plus the different noises was further investigated. The results showed that the MSE values could reflect the content level of the various noises contained in the ECG signals. For synthetic ECG plus LF noise, the MSE was sensitive to SNR within the higher range of the scale factor; for synthetic ECG plus HF noise, the MSE was sensitive to SNR within the lower range of the scale factor. Thus, a recommended scale factor range of 5 to 10 was given. Finally, the results were verified on real ECG signals derived from the MIT-BIH Arrhythmia Database and Noise Stress Test Database. In all, MSE could effectively assess the noise level of real ECG signals, and this study provided a valuable reference for applying the MSE method to practical signal quality assessment of mobile ECG.
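Multiscale entropy is computed by coarse-graining the signal at each scale factor and taking the sample entropy of every coarse-grained series; the recommended range of 5 to 10 above refers to that scale factor. A compact sketch is shown below (an O(N^2) sample entropy, fine for short ECG segments but slow on long records; parameter choices m=2 and r=0.2·std are the usual defaults, not necessarily the paper's).

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r
    def matches(length):
        tpl = np.array([x[i:i + length] for i in range(len(x) - length)])
        total = 0
        for i in range(len(tpl)):
            dist = np.max(np.abs(tpl - tpl[i]), axis=1)   # Chebyshev distance
            total += np.sum(dist <= r) - 1                # exclude the self-match
        return total
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    values = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = np.asarray(x[: n * tau], dtype=float).reshape(n, tau).mean(axis=1)
        values.append(sample_entropy(coarse))
    return values
```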
13

Zhang, Lingwen, Teng Tan, Yafan Gong, and Wenkao Yang. "Fingerprint Database Reconstruction Based on Robust PCA for Indoor Localization." Sensors 19, no. 11 (June 3, 2019): 2537. http://dx.doi.org/10.3390/s19112537.

Abstract:
The indoor localization method based on the Received Signal Strength (RSS) fingerprint is widely used for its high positioning accuracy and low cost. However, the propagation behavior of radio signals in an indoor environment is complicated and always leads to outliers and noise that deviate from the normal RSS values in the database. A fingerprint database containing outliers and noise severely degrades the performance of an indoor localization system. In this paper, an approach to reconstruct the fingerprint database is proposed with the purpose of mitigating the influence of outliers. More specifically, by exploiting the spatial and temporal correlations of RSS data, the database can be transformed into a low-rank matrix. Therefore, the RPCA (Robust Principal Component Analysis) technique can be applied to recover the low-rank matrix from a noisy matrix. In addition, we propose an improved RPCA model which takes advantage of prior knowledge of a singular value and can remove outliers and structured noise simultaneously. The experimental results show that the proposed method can eliminate outliers and structured noise efficiently.
14

Uma, A., and P. Kalpana. "ECG Noise Removal Using Modified Distributed Arithmetic Based Finite Impulse Response Filter." Journal of Medical Imaging and Health Informatics 11, no. 5 (May 1, 2021): 1444–52. http://dx.doi.org/10.1166/jmihi.2021.3770.

Abstract:
ECG monitoring is essential to support human life. During signal acquisition, the signals are contaminated by various noises from different sources. This paper focuses on removing Baseline Wander and Muscle Artifact noise using Distributed Arithmetic (DA) based FIR filters. An area-efficient modified DA based FIR filter with an LUT-less structure is used for noise removal, and its performance is compared with the conventional DA FIR filter. An arbitrary real-time ECG record is taken from the MIT-BIH database, and Baseline Wander and Muscle Artifact noises are taken from the MIT-BIH Noise Stress Test Database. The performance of both filters is evaluated in terms of output Signal to Noise Ratio (SNR) and Mean Square Error (MSE). For Baseline Wander noise removal, the modified DA based FIR filter produces a higher output SNR and a 76.6% lower MSE than the conventional filter; for Muscle Artifact noise removal, it likewise produces a higher SNR and the MSE is reduced by 73.8%. The modified DA based FIR filter is synthesized for the target FPGA device Spartan3E XC3s2000-4fg900 and the hardware resource utilization is presented.
15

Zhang, Peng, Mingfeng Jiang, Yang Li, Ling Xia, Zhefeng Wang, Yongquan Wu, Yaming Wang, and Huaxiong Zhang. "An efficient ECG denoising method by fusing ECA-Net and CycleGAN." Mathematical Biosciences and Engineering 20, no. 7 (2023): 13415–33. http://dx.doi.org/10.3934/mbe.2023598.

Abstract:
For wearable electrocardiogram (ECG) acquisition, it is easy to introduce motion artifacts and other noises. In this paper, a novel end-to-end ECG denoising method is proposed, implemented by fusing the Efficient Channel Attention network (ECA-Net) and the cycle-consistent generative adversarial network (CycleGAN). The proposed denoising model is optimized by using the ECA-Net method to highlight the key features and by introducing a new loss function to further extract the global and local ECG features. The original ECG signals come from the MIT-BIH Arrhythmia Database. Additionally, the noise signals used in this method consist of a combination of Gaussian white noise and noises sourced from the MIT-BIH Noise Stress Test Database, including EM (Electrode Motion Artifact), BW (Baseline Wander) and MA (Muscle Artifact), as well as mixed noises composed of EM+BW, EM+MA, BW+MA and EM+BW+MA. Corrupted ECG signals were generated by adding different levels of single and mixed noises to clean ECG signals. The experimental results show that the proposed method has better denoising performance and generalization ability, with higher signal-to-noise ratio improvement (SNRimp) as well as lower root-mean-square error (RMSE) and percentage-root-mean-square difference (PRD).
16

Hussein, Ahmed F., Warda R. Mohammed, Mustafa Musa Jaber, and Osamah Ibrahim Khalaf. "An Adaptive ECG Noise Removal Process Based on Empirical Mode Decomposition (EMD)." Contrast Media & Molecular Imaging 2022 (August 17, 2022): 1–9. http://dx.doi.org/10.1155/2022/3346055.

Abstract:
The electrocardiogram (ECG) is a commonly used instrument for examining cardiac disorders. For proper interpretation of cardiac illnesses, a noise-free ECG is preferred; ECG signals, however, suffer from numerous noises during acquisition and processing. This article suggests an adaptive ECG noise removal technique based on empirical mode decomposition (EMD). The proposed method is used to reduce noise in ECG signals with the least amount of distortion. For decreasing high-frequency noise, traditional EMD-based approaches either discard the initial intrinsic mode functions or use a window-based methodology. The signal quality is then improved via an adaptive process. The simulation study uses ECG data from the universal MIT-BIH database as well as the Brno University of Technology ECG Quality Database (BUT QDB). The proposed method's efficiency is measured using three typical evaluation metrics at various SNR (signal-to-noise ratio) levels: mean square error, output SNR change, and the root-mean-square difference ratio. The suggested noise removal approach is comparable with other commonly used ECG noise removal techniques, and a detailed examination reveals that it could serve as an effective means of removing noise from ECG signals, resulting in enhanced diagnostic functions in automated medical systems.
17

Mohd Apandi, Ziti Fariha, Ryojun Ikeura, Soichiro Hayakawa, and Shigeyoshi Tsutsumi. "An Analysis of the Effects of Noisy Electrocardiogram Signal on Heartbeat Detection Performance." Bioengineering 7, no. 2 (June 6, 2020): 53. http://dx.doi.org/10.3390/bioengineering7020053.

Abstract:
Heartbeat detection for ambulatory cardiac monitoring is more challenging because the level of noise and artefacts induced by daily-life activities is considerably higher than in a hospital setting. It is valuable to understand the relationship between the characteristics of electrocardiogram (ECG) noises and the beat detection performance in the cardiac monitoring system. For this purpose, three well-known algorithms for the beat detection process were re-implemented. The beat detection algorithms were validated using two types of ambulatory datasets: the ECG signal from the MIT-BIH Arrhythmia Database and the simulated noise-contaminated ECG signal with different intensities of baseline wander (BW), muscle artefact (MA) and electrode motion (EM) artefact from the MIT-BIH Noise Stress Test Database. The findings showed that signals contaminated with noise and artefacts decreased the potential of beat detection in ambulatory signals, with the poorest performance noted for ECG signals affected by the EM artefacts. In conclusion, none of the algorithms was able to detect all QRS complexes without any false detection at the highest level of noise. The EM noise influenced the beat detection performance the most in comparison to the MA and BW noises, resulting in the highest number of missed detections and false detections.
18

Maas, A., F. Rottensteiner, and C. Heipke. "USING LABEL NOISE ROBUST LOGISTIC REGRESSION FOR AUTOMATED UPDATING OF TOPOGRAPHIC GEOSPATIAL DATABASES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-7 (June 7, 2016): 133–40. http://dx.doi.org/10.5194/isprsannals-iii-7-133-2016.

Abstract:
Supervised classification of remotely sensed images is a classical method to update topographic geospatial databases. The task requires training data in the form of image data with known class labels, whose generation is time-consuming. To avoid this problem one can use the labels from the outdated database for training. As some of these labels may be wrong due to changes in land cover, one has to use training techniques that can cope with wrong class labels in the training data. In this paper we adapt a label noise tolerant training technique to the problem of database updating. No labelled data other than the existing database are necessary. The resulting label image and transition matrix between the labels can help to update the database and to detect changes between the two time epochs. Our experiments are based on different test areas, using real images with simulated existing databases. Our results show that this method can indeed detect changes that would remain undetected if label noise were not considered in training.
20

Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. "Calibrating Noise to Sensitivity in Private Data Analysis." Journal of Privacy and Confidentiality 7, no. 3 (May 30, 2017): 17–51. http://dx.doi.org/10.29012/jpc.v7i3.405.

Abstract:
We continue a line of research initiated in Dinur and Nissim (2003); Dwork and Nissim (2004); and Blum et al. (2005) on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function $f$ mapping databases to reals, the so-called "true answer" is the result of applying $f$ to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which $f = \sum_i g(x_i)$, where $x_i$ denotes the $i$th row of the database and $g$ maps database rows to $[0,1]$. We extend the study to general functions $f$, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the "sensitivity" of the function $f$. Roughly speaking, this is the amount that any single argument to $f$ can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean definition of privacy---now known as differential privacy---and measure of its loss. We also provide a set of tools for designing and combining differentially private algorithms, permitting the construction of complex differentially private analytical tools from simple differentially private primitives. Finally, we obtain separation results showing the increased value of interactive statistical release mechanisms over non-interactive ones.
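The standard instantiation of this calibration is the Laplace mechanism: for a query f with sensitivity Δf, adding noise drawn from Laplace(Δf/ε) yields ε-differential privacy. A minimal sketch follows; the counting-query example is illustrative, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Perturb the true answer with Laplace noise of scale sensitivity / epsilon."""
    rng = np.random.default_rng() if rng is None else rng
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query changes by at most 1 when one row changes, so its sensitivity is 1:
noisy_count = laplace_mechanism(true_answer=1234, sensitivity=1.0, epsilon=0.5)
```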
21

Kang, Yimei, and Wang Pan. "A Novel Approach of Low-Light Image Denoising for Face Recognition." Advances in Mechanical Engineering 6 (January 1, 2014): 256790. http://dx.doi.org/10.1155/2014/256790.

Abstract:
Illumination variation makes automatic face recognition a challenging task, especially in low-light environments. A very simple and efficient novel low-light image denoising method for low frequency noise (DeLFN) is proposed. The noise frequency distribution of low-light images is presented based on extensive experimental results; low and very low frequency noise are dominant in low-light conditions. DeLFN is a three-level image denoising method. The first level denoises mixed noises by histogram equalization (HE) to improve overall contrast. The second level denoises low frequency noise by logarithmic transformation (LOG) to enhance image detail. The third level denoises residual very low frequency noise by high-pass filtering to recover more features of the true images. The PCA (Principal Component Analysis) recognition method is applied to test the recognition rate of face images preprocessed with DeLFN. DeLFN is compared with several representative illumination preprocessing methods on the Yale Face Database B, the Extended Yale Face Database B, and the CMU PIE face database, respectively. DeLFN not only outperformed the other algorithms in improving visual quality and face recognition rate, but is also simpler and computationally efficient enough for real-time applications.
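A rough sketch of the three-level idea (histogram equalization, then a log transform, then high-pass filtering of the residual very-low-frequency component) is given below for an 8-bit grayscale face image. The Gaussian width and the way the high-pass output is rescaled are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hist_equalize(img):
    """Level 1: global histogram equalization of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[img.astype(np.uint8)]

def enhance_low_light(img, sigma=8.0):
    x = hist_equalize(img)
    x = 255.0 * np.log1p(x) / np.log(256.0)       # Level 2: log transform boosts detail
    hp = x - gaussian_filter(x, sigma)            # Level 3: suppress very low frequencies
    hp = (hp - hp.min()) / (hp.max() - hp.min() + 1e-8) * 255.0
    return hp.astype(np.uint8)
```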
22

Jain, Anshika, and Maya Ingle. "PERFORMANCE ANALYSIS OF NOISE REMOVAL TECHNIQUES FOR FACIAL IMAGES- A COMPARATIVE STUDY." BSSS journal of computer 12, no. 1 (June 30, 2021): 1–10. http://dx.doi.org/10.51767/jc1201.

Abstract:
Image de-noising has been a challenging issue in the field of digital image processing. It involves the manipulation of image data to produce a visually high-quality image; while maintaining the desired information in an image, elimination of noise is an essential task. Various application domains such as medical science, forensic science, text extraction, optical character recognition, face recognition and face detection rely on noise removal techniques. There exists a variety of noises that may corrupt images in different ways. Here, we explore filtering techniques, viz. the Mean filter, Median filter and Wiener filter, to remove noise from facial images. The noises of interest in our study are Gaussian noise, Salt & Pepper noise, Poisson noise and Speckle noise. Further, we perform a comparative study based on the parameters Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). For this research work, MATLAB R2013a and the Labeled Faces in the Wild (LFW) database containing 120 facial images are used. Based on the aforementioned parameters, we have attempted to analyze the performance of the noise removal techniques with different types of noise. It has been observed that the MSE, PSNR and SSIM for the Mean filter are 44.19 with Poisson noise, 35.88 with Poisson noise and 0.197 with Gaussian noise, respectively, whereas for the Median filter these are 44.12 with Poisson noise, 46.56 with Salt & Pepper noise and 0.132 with Gaussian noise, respectively. For the Wiener filter contaminated with Poisson, Salt & Pepper and Gaussian noise, these parametric values are 44.52, 44.33 and 0.245, respectively. Based on these observations, we find that the Median filtering technique works best with Poisson noise when the error criterion is dominant, and also works best with Salt & Pepper noise when Peak Signal to Noise Ratio is important. It is interesting to note that the Median filter performs effectively with Gaussian noise in terms of SSIM.
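The MSE and PSNR figures reported above follow the usual definitions for 8-bit images (SSIM is available, for example, as skimage.metrics.structural_similarity). A quick sketch of the first two metrics:

```python
import numpy as np

def mse(reference, test):
    return np.mean((reference.astype(float) - test.astype(float)) ** 2)

def psnr(reference, test, peak=255.0):
    err = mse(reference, test)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```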
23

Alia Zainudin, Noraina, Ain Nazari, Mohd Marzuki Mustafa, Wan NurShazwani Wan Zakaria, Nor Surayahani Suriani, and Wan Nur Hafsha Wan Kairuddin. "Glaucoma detection of retinal images based on boundary segmentation." Indonesian Journal of Electrical Engineering and Computer Science 18, no. 1 (April 1, 2020): 377. http://dx.doi.org/10.11591/ijeecs.v18.i1.pp377-384.

Abstract:
The rapid growth of technology makes it possible to implement immediate diagnosis for patients using image processing. By using morphological processing and an adaptive thresholding method for segmentation of the optic disc and optic cup, retinal fundus images of various sizes captured with a fundus camera from online databases can be processed. This paper explains the use of a colour channel separation method for pre-processing to remove noise for better optic disc and optic cup segmentation. Noise removal improves image quality and in turn helps to raise the segmentation standard. Then, morphological processing and adaptive thresholding are used to extract the optic disc and optic cup from the fundus image. The proposed method is tested on two publicly available online databases: RIM-ONE and DRIONS-DB. On the RIM-ONE database, the average PSNR value acquired is 65.62625 and the MSE is 0.01891, while for the DRIONS-DB database the best PSNR is 64.0928 and the MSE is 0.02647. In conclusion, the proposed method can successfully filter out unwanted noise in the image and helps achieve clearer optic disc and optic cup segmentation.
24

Su, Pei-Chun, Elsayed Z. Soliman, and Hau-Tieng Wu. "Robust T-End Detection via T-End Signal Quality Index and Optimal Shrinkage." Sensors 20, no. 24 (December 9, 2020): 7052. http://dx.doi.org/10.3390/s20247052.

Abstract:
An automatic, accurate T-wave end (T-end) annotation for the electrocardiogram (ECG) has several important clinical applications. While several algorithms have been proposed, their performance usually deteriorates when the signal is noisy, so new techniques are needed to support noise robustness in T-end detection. We propose a new algorithm based on a signal quality index (SQI) for the T-end, coined tSQI, and optimal shrinkage (OS). For segments with low tSQI, OS is applied to enhance the signal-to-noise ratio (SNR). We validated the proposed method using eleven short-term ECG recordings from the QT database available on PhysioNet, as well as four 14-day ECG recordings which were visually annotated at a central ECG core laboratory. We evaluated the correlation between the real-world signal quality for the T-end and tSQI, and the robustness of the proposed algorithm to various additive noises of different types and SNRs. The performance of the proposed algorithm on arrhythmic signals was also illustrated on the MITDB arrhythmia database. The labeled signal quality is well captured by tSQI, and the proposed OS denoising helps stabilize existing T-end detection algorithms under noisy conditions by reducing the mean detection error. Even when applied to ECGs with arrhythmia, the proposed algorithm still performed well when a proper metric was applied. We have proposed a new T-end annotation algorithm whose efficiency and accuracy make it a good fit for clinical applications and large ECG databases. This study is limited by the small size of the annotated datasets.
25

Amara korba, Mohamed Cherif, Houcine Bourouba, and Rafik Djemili. "FEATURE EXTRACTION ALGORITHM USING NEW CEPSTRAL TECHNIQUES FOR ROBUST SPEECH RECOGNITION." Malaysian Journal of Computer Science 33, no. 2 (April 24, 2020): 90–101. http://dx.doi.org/10.22452/mjcs.vol33no2.1.

Abstract:
In this work, we propose a novel feature extraction algorithm that improves the robustness of automatic speech recognition (ASR) systems in the presence of various types of noise. The proposed algorithm uses a new cepstral technique based on the differential power spectrum (DPS) instead of the power spectrum (PS) and replaces the logarithmic non-linearity with a power function. In order to reduce cepstral coefficient mismatches between training and testing conditions, we use mean and variance normalization and then apply auto-regression moving-average (MVA) filtering in the cepstral domain. The ASR experiments were conducted using two databases: the first is the LASA digit database, designed for recognition of isolated Arabic digits in the presence of different types of noise; the second is the Aurora 2 noisy speech database, designed for recognition of connected English digits in various operating environments. The experimental results show a substantial improvement of the proposed algorithm over the baseline Mel Frequency Cepstral Coefficients (MFCC); the relative improvement is 28.92% for the LASA database and 44.43% for the Aurora 2 database. The performance of the proposed algorithm was also tested and verified by extensive comparisons with state-of-the-art noise-robust features on Aurora 2.
26

Thiemann, Joachim, Nobutaka Ito, and Emmanuel Vincent. "The diverse environments multi-channel acoustic noise database: A database of multichannel environmental noise recordings." Journal of the Acoustical Society of America 133, no. 5 (May 2013): 3591. http://dx.doi.org/10.1121/1.4806631.

27

Sun, Zhe, Zheng-Ping Hu, Meng Wang, Fan Bai, and Bo Sun. "Robust Facial Expression Recognition with Low-Rank Sparse Error Dictionary Based Probabilistic Collaborative Representation Classification." International Journal on Artificial Intelligence Tools 26, no. 04 (August 2017): 1750017. http://dx.doi.org/10.1142/s0218213017500178.

Abstract:
The performance of facial expression recognition (FER) can be degraded by factors such as individual differences and Gaussian random noise. Prior feature extraction methods like Local Binary Patterns (LBP) and Gabor filters require explicit expression components, which are often unavailable and difficult to obtain. To make facial expression recognition more robust, we propose a novel FER approach based on a low-rank sparse error dictionary (LRSE) to mitigate the side effects caused by the problems above. The query samples can then be represented and classified by a probabilistic collaborative representation based classifier (ProCRC), which better computes the likelihood that the query sample belongs to the collaborative subspace of all classes. The final classification is performed by seeking the class with the maximum probability. The proposed approach, which exploits ProCRC associated with the LRSE features (LRSE ProCRC) for robust FER, reaches higher average accuracies on different databases (i.e., 79.39% on the KDEF database, 89.54% on the CAS-PEAL database, 84.45% on the CK+ database, etc.). In addition, our method also achieves state-of-the-art classification results with respect to feature extraction methods, training samples, Gaussian noise variances and classification-based methods on benchmark databases.
28

Robert, J., and B. P. Pathak. "Noise Levels: A database of industrial noise level measurements." Online Review 12, no. 4 (April 1988): 211–17. http://dx.doi.org/10.1108/eb024280.

29

BENOSMAN, M. M., F. BEREKSI-REGUIG, and E. GORAN SALERUD. "STRONG REAL-TIME QRS COMPLEX DETECTION." Journal of Mechanics in Medicine and Biology 17, no. 08 (December 2017): 1750111. http://dx.doi.org/10.1142/s0219519417501111.

Abstract:
Heart rate variability (HRV) analysis is used as a marker of autonomic nervous system activity, which may be related to mental and/or physical activity. HRV features can be extracted by detecting QRS complexes from an electrocardiogram (ECG) signal. The difficulties in QRS complex detection are due to the artifacts and noises that may appear in the ECG signal when subjects are performing daily life activities such as exercise, posture changes, climbing stairs, walking and running. This study describes a robust computational method for real-time QRS complex detection. The detection is improved by predicting the positions of R waves through estimation of the RR interval lengths. The estimation is done by computing the intensity of the electromyogram noise that appears in the ECG signal, referred to in this paper as the ECG Trunk Muscles Signals Amplitude (ECG-TMSA). The heart rate (HR) and ECG-TMSA increase with the movement of the subject; we use this property to estimate the lengths of the RR intervals. The method was tested using well-known databases, as well as signals acquired during an experiment with 17 subjects from our laboratory. The results obtained using ECG signals from the MIT-BIH Noise Stress Test Database show a QRS complex detection error rate (ER) of 9.06%, a sensitivity of 95.18% and a positive predictivity of 95.23%. The method was also tested against the MIT-BIH Arrhythmia Database, giving a sensitivity of 99.68% and a positive predictivity of 99.89%, with an ER of 0.40%. When applied to the signals obtained from the 17 subjects, the algorithm gave an interesting result of 0.00025% ER, 99.97% sensitivity and 99.99% positive predictivity.
30

Moore, Brian C. J., Larry E. Humes, Graham Cox, David Lowe, and Hedwig E. Gockel. "Modification of a Method for Diagnosing Noise-Induced Hearing Loss Sustained During Military Service." Trends in Hearing 26 (January 2022): 233121652211450. http://dx.doi.org/10.1177/23312165221145005.

Abstract:
Moore (2020) proposed a method for diagnosing noise-induced hearing loss (NIHL) sustained during military service, based on an analysis of the shapes of the audiograms of military personnel. The method, denoted M-NIHL, was estimated to have high sensitivity but low-to-moderate specificity. Here, a revised version of the method, denoted rM-NIHL, was developed that gave a better balance between sensitivity and specificity. A database of 285 audiograms of military noise-exposed men was created by merging two previously used databases with a new database, randomly shuffling, and then splitting into two, one for development of the revised method and one for evaluation. Two comparable databases of audiograms of 185 non-exposed men were also created, again one for development and one for evaluation. Based on the evaluation databases, the rM-NIHL method has slightly lower sensitivity than the M-NIHL method, but the specificity is markedly higher. The two methods have similar overall diagnostic performance. If an individual is classified as having NIHL based on a positive diagnosis for either ear, the rM-NIHL method has a sensitivity of 0.98 and a specificity of 0.63. Based on a positive diagnosis for both ears, the rM-NIHL method has a sensitivity of 0.76 and a specificity of 0.95.
31

Kirmizitas, Hikmet, and Nurettin Besli. "Image and Texture Independent Deep Learning Noise Estimation Using Multiple Frames." Elektronika ir Elektrotechnika 28, no. 6 (December 21, 2022): 42–47. http://dx.doi.org/10.5755/j02.eie.30586.

Abstract:
In this study, a novel multiple-frame-based, image- and texture-independent Convolutional Neural Network (CNN) noise estimator is introduced. Noise estimation is a crucial step for denoising algorithms, especially those that are "non-blind". The estimator works for additive Gaussian noise at varying noise levels; the noise levels studied in this work have standard deviations from 5 to 25 in steps of 5. Since there is no database of noisy multiple images to train and validate the network, two frames of synthetic noisy images with a variety of noise levels are created by adding Additive White Gaussian Noise (AWGN) to each clean image. The proposed method is applied to the most popular gray-level images as well as colour image databases such as Kodak, McMaster and BSDS500 in order to compare the results with other works. The image databases comprise indoor and outdoor scenes with fine details and rich texture. The estimator has an accuracy rate of 99% for classification and favourable results for regression. The proposed method outperforms traditional methods in most cases, and the regression output can be used with any non-blind denoising method.
32

Santos-Domínguez, David, Soledad Torres-Guijarro, Antonio Cardenal-López, and Antonio Pena-Gimenez. "ShipsEar: An underwater vessel noise database." Applied Acoustics 113 (December 2016): 64–69. http://dx.doi.org/10.1016/j.apacoust.2016.06.008.

33

He, Jian Ping, and Ning Wang. "The Research on the Recognition Technique of Rock AE and Noise Signal." Advanced Materials Research 168-170 (December 2010): 293–97. http://dx.doi.org/10.4028/www.scientific.net/amr.168-170.293.

Abstract:
The envelope of an acoustic emission (AE) signal waveform is a damped oscillation whose falling edge from the maximum amplitude decays exponentially, whereas other noises decay more slowly than this oscillation. This paper analyses rock AE and noise signals in a mine. On the basis of Fourier spectral analysis, the noise spectrum characteristics are identified and a noise database is set up. Rock AE and environmental noise signals acquired on site are decomposed by a multi-scale wavelet transform, supplemented by Fourier spectral analysis; based on the noise spectral characteristics, certain wavelet decomposition scales are discarded and the signal is reconstructed from the approximation coefficients, eliminating disturbance and noise and revealing the true rock sounds. The results demonstrate that the recognition technique can improve the reliability of the data and the precision of data analysis, which is significant for monitoring and predicting the pattern and development orientation of rock cracks.
34

Lin, Haicai, Ruixia Liu, and Zhaoyang Liu. "ECG Signal Denoising Method Based on Disentangled Autoencoder." Electronics 12, no. 7 (March 29, 2023): 1606. http://dx.doi.org/10.3390/electronics12071606.

Abstract:
The electrocardiogram (ECG) is widely used in medicine because it can provide basic information about different types of heart disease. However, ECG data are usually disturbed by various types of noise, which can lead to errors in diagnosis by doctors. To address this problem, this study proposes a method for denoising ECG based on disentangled autoencoders. A disentangled autoencoder is an improved autoencoder suitable for denoising ECG data. In our proposed method, we use a disentangled autoencoder model based on a fully convolutional neural network to effectively separate the clean ECG data from the noise. Unlike conventional autoencoders, we disentangle the features of the coding hidden layer to separate the signal-coding features from the noise-coding features. We performed simulation experiments on the MIT-BIH Arrhythmia Database and found that the algorithm had better noise reduction results when dealing with four different types of noise. In particular, using our method, the average improved signal-to-noise ratios for the three noises in the MIT-BIH Noise Stress Test Database were 27.45 dB for baseline wander, 25.72 dB for muscle artefacts, and 29.91 dB for electrode motion artefacts. Compared to a denoising autoencoder based on a fully convolutional neural network (FCN), the signal-to-noise ratio was improved by an average of 12.57%. We can conclude that the model has scientific validity. At the same time, our noise reduction method can effectively remove noise while preserving the important information conveyed by the original signal.
35

Kish, Laszlo B., and Walter C. Daugherity. "Entanglement, and Unsorted Database Search in Noise-Based Logic." Applied Sciences 9, no. 15 (July 27, 2019): 3029. http://dx.doi.org/10.3390/app9153029.

Abstract:
We explore the collapse of “wavefunction” and the measurement of entanglement in the superpositions of hyperspace vectors in classical physical instantaneous-noise-based logic (INBL). We find both similarities with and major differences from the related properties of quantum systems. Two search algorithms utilizing the observed features are introduced. For the first one we assume an unsorted names database set up by Alice that is a superposition (unknown by Bob) of up to n = 2^N strings; those we call names. Bob has access to the superposition wave and to the 2N reference noises of the INBL system of N noise bits. For Bob, to decide if a given name x is included in the superposition, once the search has begun, it takes N switching operations followed by a single measurement of the superposition wave. Thus, the time and hardware complexity of the search algorithm is O[log(n)], which indicates an exponential speedup compared to Grover’s quantum algorithm in a corresponding setting. An extra advantage is that the error probability of the search is zero. Moreover, the scheme can also check the existence of a fraction of a string, or several separate string fractions embedded in an arbitrarily long, arbitrary string. In the second algorithm, we expand the above scheme to a phonebook with n names and s phone numbers. When the names and numbers have the same bit resolution, once the search has begun, the time and hardware complexity of this search algorithm is O[log(n)]. In the case of one-to-one correspondence between names and phone numbers (n = s), the algorithm offers inverse phonebook search too. The error probability of this search algorithm is also zero.
36

Zhou, Jiena, Zhihao Shi, Lifang Zhou, Yong Hu, and Meibian Zhang. "Occupational noise-induced hearing loss in China: a systematic review and meta-analysis." BMJ Open 10, no. 9 (September 2020): e039576. http://dx.doi.org/10.1136/bmjopen-2020-039576.

Abstract:
Objective: Most of the Chinese occupational population are becoming at risk of noise-induced hearing loss (NIHL). However, there is a limited number of literature reviews on occupational NIHL in China. This study aimed to analyse the prevalence and characteristics of occupational NIHL in the Chinese population using data from relevant studies. Design: Systematic review and meta-analysis. Methods: From December 2019 to February 2020, we searched the literature through databases, including Web of Science, PubMed, MEDLINE, Scopus, the China National Knowledge Internet, Chinese Sci-Tech Journal Database (weip.com), WanFang Database and China United Library Database, for studies on NIHL in China published in 1993–2019 and analysed the correlation between NIHL and occupational exposure to noise, including exposure to complex noise and coexposure to noise and chemicals. Results: A total of 71 865 workers aged 33.5±8.7 years were occupationally exposed to 98.6±7.2 dB(A) (A-weighted decibels) noise for a duration of 9.9±8.4 years in the transportation, mining and typical manufacturing industries. The prevalence of occupational NIHL in China was 21.3%, of which 30.2% was related to high-frequency NIHL (HFNIHL), 9.0% to speech-frequency NIHL and 5.8% to noise-induced deafness. Among manufacturing workers, complex noise contributed to greater HFNIHL than Gaussian noise (overall weighted OR (OR)=1.95). Coexposure to noise and chemicals such as organic solvents, welding fumes, carbon monoxide and hydrogen sulfide led to greater HFNIHL than noise exposure alone (overall weighted OR=2.36). Male workers were more likely to experience HFNIHL than female workers (overall weighted OR=2.26). Age, noise level and exposure duration were also risk factors for HFNIHL (overall weighted OR=1.35, 5.63 and 1.75, respectively). Conclusions: The high prevalence of occupational NIHL in China was related to the wide distribution of noise in different industries as well as high-level and long-term noise exposure. The prevalence was further aggravated by exposure to complex noise or coexposure to noise and specific chemicals. Additional efforts are needed to reduce occupational noise exposure in China.
37

Hamza, Nisreen Ryadh, Rasha Ail Dihin, and Mohammed Hasan Abdulameer. "A hybrid image similarity measure based on a new combination of different similarity techniques." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 2 (April 1, 2020): 1814. http://dx.doi.org/10.11591/ijece.v10i2.pp1814-1822.

Abstract:
Image similarity is the degree to which two images are similar or dissimilar; it computes the degree of similarity between the intensity patterns of images. A new image similarity measure named HFEMM is proposed in this paper. HFEMM is composed of two phases. In phase 1, a modified histogram similarity measure (HSSIM) is merged with the feature similarity measure (FSIM) to get a new measure called HFM. In phase 2, the resulting HFM is merged with an error measure (EMM) in order to get a new similarity measure, which is named HFEMM. Different kinds of noise, for example Gaussian, uniform, and salt & pepper noise, are used with the proposed methods. One of the human face databases (AT&T) is used in the experiments, and random images are used as well. For the evaluation, the similarity percentage under peak signal-to-noise ratio (PSNR) is used. To show the effectiveness of the proposed measure, a comparison among similar techniques such as SSIM, HFM, EMM and HFEMM is considered. The proposed HFEMM achieved a higher similarity result when the PSNR was low compared to the other methods.
38

Garg, Meenu, and Amandeep Verma. "An Enhanced LSDBIQ Algorithm for Full Reference Image Quality Assessment for Multi Distorted Images." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 8 (August 30, 2017): 41. http://dx.doi.org/10.23956/ijarcsse.v7i8.18.

Abstract:
Image processing is an emerging technology, as images are used in various fields such as medicine and education. Images may be corrupted by various categories of noise, and image quality is reduced during acquisition or transmission; noise reduction is therefore the main focus for retaining the quality of the image. Various techniques and filters exist for removing this noise, and noise should be removed from the image before further processing is applied. In this paper we deal with a practical and effective IQA model, called LSDBIQ (local standard deviation based image quality). This metric is examined on the well-known MDID (multi-distorted image dataset) database. Experimental results show that this metric performs better than alternative techniques for the assessment of image quality and has very low computational complexity.
APA, Harvard, Vancouver, ISO, and other styles
39

Jaenul, Ariep, Shahad Alyousif, Ali Amer Ahmed Alrawi, and Samer K. Salih. "Robust Approach of De-noising ECG Signal Using Multi-Resolution Wavelet Transform." International Journal of Engineering & Technology 7, no. 4.11 (October 2, 2018): 5. http://dx.doi.org/10.14419/ijet.v7i4.11.20678.

Full text
Abstract:
The ECG signal expresses the behavior of the human heart over time, and its analysis provides valuable information for diagnosing different cardiac diseases. On the other hand, the ECG signal used for analysis must be free of any type of noise introduced by the external environment. In this paper, a new approach to ECG signal noise reduction is proposed that minimizes noise in all parts of the ECG signal while maintaining its main characteristics with minimal distortion. The new approach applies a simple scaling-down operation to the detail coefficients in the wavelet transform domain of the noisy signal. The proposed noise reduction approach is validated on ECG records from the MIT-BIH database, and its performance is evaluated graphically at different SNR levels and with several standard metrics. The results demonstrate the ability of the proposed approach to reduce noise in the ECG signal with high accuracy in comparison to existing noise reduction methods.
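The core operation described, scaling down the detail coefficients in the wavelet domain, can be sketched with PyWavelets as follows; the wavelet family, decomposition level and scaling factor are illustrative assumptions, not the paper's exact settings.

```python
# Illustrative sketch: attenuate wavelet detail coefficients of a noisy 1-D
# signal, then reconstruct. The wavelet, level and the 0.3 scaling factor are
# assumed values, not taken from the cited paper.
import numpy as np
import pywt

def denoise_by_detail_scaling(signal, wavelet="db4", level=4, scale=0.3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # coeffs[0] is the approximation; coeffs[1:] are detail bands (coarse -> fine)
    coeffs = [coeffs[0]] + [scale * d for d in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# toy usage with a synthetic signal standing in for an ECG record
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 15 * t)
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)
denoised = denoise_by_detail_scaling(noisy)
```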
APA, Harvard, Vancouver, ISO, and other styles
40

Dai, Peishan, Hanwei Sheng, Jianmei Zhang, Ling Li, Jing Wu, and Min Fan. "Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing." International Journal of Biomedical Imaging 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/5075612.

Full text
Abstract:
Retinal fundus images play an important role in the diagnosis of retinal diseases. Detailed information in the retinal fundus image, such as small vessels, microaneurysms, and exudates, may be in low contrast, and retinal image enhancement usually helps in analysing such diseases. Current image enhancement methods may lead to artificial boundaries, abrupt changes in color levels, and the loss of image detail. In order to avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image is processed by the normalized convolution algorithm with a domain transform to obtain an image containing the basic information of the background. Then, this background image is fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image is denoised by a two-stage denoising method comprising fourth-order PDEs and a relaxed median filter. Retinal image databases, including the DRIVE, STARE, and DIARETDB1 databases, were used to evaluate the enhancement effects. The results show that the method can enhance the retinal fundus image prominently and, unlike some other fundus image enhancement methods, can directly enhance color images.
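A very rough sketch of the overall idea, estimating a smooth background and fusing it with the original image to lift low-contrast detail, is given below; a Gaussian blur stands in for the paper's normalized convolution with a domain transform, and the fusion weight and final median filter are assumptions.

```python
# Rough sketch of "estimate background, fuse with original" enhancement for a
# single-channel image. A Gaussian blur stands in for the paper's normalized
# convolution with a domain transform; the unsharp-style fusion weight and the
# 3x3 median filter are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def enhance_fundus(img, sigma=15.0, alpha=1.5):
    img = img.astype(float)
    background = gaussian_filter(img, sigma=sigma)              # smooth background estimate
    fused = np.clip(img + alpha * (img - background), 0, 255)   # boost local detail
    return median_filter(fused, size=3)                         # crude stand-in for the
                                                                # two-stage denoising step
```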
APA, Harvard, Vancouver, ISO, and other styles
41

Dance, Stephen, and Lindsay McIntyre. "The Quiet Project – UK Acoustic Community’s response to COVID19 during the easing of lockdown." Noise Mapping 8, no. 1 (January 1, 2021): 32–40. http://dx.doi.org/10.1515/noise-2021-0003.

Full text
Abstract:
The COVID-19 lockdown created a new kind of environment, both in the UK and globally, never experienced before and unlikely to occur again. A vital and time-critical working group was formed with the aim of gathering crowd-sourced, high-quality baseline noise levels and other supporting information across the UK during lockdown and the subsequent periods. The acoustic community was mobilised through existing networks, engaging private companies, public organisations and academics to gather data in accessible places. In addition, pre-existing ongoing measurements from major infrastructure projects, airports and planning applications were gathered to create the largest possible databank. A website was designed and developed to advertise the project, provide instructions and formalise the uploading of noise data, observations and soundscape feedback. Two case studies gathered in the latter stage of full lockdown are presented in the paper to illustrate the changes in environmental noise conditions relative to transport activity. Ultimately the databank will be used to establish the relation to other impacts such as air quality, air traffic, the economy, and health and wellbeing. As publicly funded research, the databank will be made publicly available to assist future research.
APA, Harvard, Vancouver, ISO, and other styles
42

Thangarajan, Ahilandeswari, and Vivekanandan Kalimuthu. "CBIR with Partial Input of Unshaped Images Using Compressed-Pixel Matching Algorithm." International Journal of Engineering & Technology 7, no. 3.27 (August 15, 2018): 206. http://dx.doi.org/10.14419/ijet.v7i3.27.17762.

Full text
Abstract:
Much work has been done on determining whether a given image is present in a database using content-based image retrieval (CBIR) techniques. However, if the query image is unshaped or filled with noise, retrieving that image from the database is difficult. We propose an approach in which, for any shape of input image, the database is searched and the most relevant image is retrieved. The results provide better accuracy than existing methods, and the elapsed time is also reduced because the comparison is made after compressing both the partial query image and the database images. The performance of the proposed system is assessed using the LFW and WANG image sets, consisting of 2000 and 9990 images respectively, and is measured against familiar methods in terms of precision and recall, which demonstrates the advantages of the proposed approach.
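Since the retrieval quality is reported in terms of precision and recall, a small sketch of that evaluation step is included here; the image identifiers in the usage example are invented.

```python
# Sketch of the retrieval evaluation mentioned above: precision and recall
# for a set of retrieved images against the known relevant images.
def precision_recall(retrieved_ids, relevant_ids):
    retrieved, relevant = set(retrieved_ids), set(relevant_ids)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# toy usage: 3 of the 5 retrieved images are actually relevant
print(precision_recall(["a", "b", "c", "d", "e"], ["b", "c", "e", "f"]))
```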
APA, Harvard, Vancouver, ISO, and other styles
43

Bagherzadeh Cham, Masumeh, Mohammad Ali Mohseni-Bandpei, Mahmood Bahramizadeh, Saeed Kalbasi, and Akbar Biglarian. "The clinical and biomechanical effects of subthreshold random noise on the plantar surface of the foot in diabetic patients and elder people: A systematic review." Prosthetics and Orthotics International 40, no. 6 (July 10, 2016): 658–67. http://dx.doi.org/10.1177/0309364616631351.

Full text
Abstract:
Background: The central nervous system receives information from foot mechanoreceptors in order to control balance and perform movement tasks. Subthreshold random noise seems to improve the sensitivity of cutaneous mechanoreceptors. Objectives: The purpose of this study was to systematically review published evidence on the clinical and biomechanical effects of subthreshold random noise applied to the plantar surface of the foot in diabetic patients and older people. Study design: Systematic review. Methods: A literature search was performed in the PubMed, Scopus, ScienceDirect, Web of Knowledge, CINAHL, and EMBASE databases based on population, intervention, comparison, outcomes, and study method. The quality of studies was assessed using the Physiotherapy Evidence Database scale as the methodological quality assessment tool. Results: In all, 11 studies were selected for final evaluation based on the inclusion criteria. Five studies evaluated the effects of subthreshold random noise in diabetic patients and six in older people. Seven studies investigated biomechanical effects (balance and gait parameters) and four investigated clinical effects (pressure and vibration sensations) of subthreshold random noise. All reviewed studies were scored fair (2) to good (9) quality in the methodological quality assessment using the Physiotherapy Evidence Database scale. Conclusion: The results indicated that subthreshold random noise improves balance and sensation in diabetic patients and older people. Gait variables can also be improved in older people with subthreshold random noise. However, further well-designed studies are needed. Clinical relevance: Previous studies reported that subthreshold random noise may improve gait, balance, and sensation, but more studies are needed to evaluate the long-term effect of subthreshold random noise in a shoe or insole for daily living tasks in diabetic patients and older people.
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Lanting, Tianliang Lu, Xingbang Ma, Mengjiao Yuan, and Da Wan. "Voice Deepfake Detection Using the Self-Supervised Pre-Training Model HuBERT." Applied Sciences 13, no. 14 (July 22, 2023): 8488. http://dx.doi.org/10.3390/app13148488.

Full text
Abstract:
In recent years, voice deepfake technology has developed rapidly, but current detection methods suffer from insufficient generalization and insufficient feature extraction for unknown attacks. This paper presents a forged speech detection method (HuRawNet2_modified) based on a self-supervised pre-trained model (HuBERT) to address these problems. A combination of impulsive signal-dependent additive noise and additive white Gaussian noise was adopted for data augmentation, and the HuBERT model was fine-tuned on different language databases. On this basis, the size of the extracted feature maps was modified independently by the α-feature map scaling (α-FMS) method in a modified end-to-end system using the RawNet2 model as the backbone structure. The results showed that the HuBERT model could extract features more comprehensively and accurately. The best evaluation indicators were an equal error rate (EER) of 2.89% and a minimum tandem detection cost function (min t-DCF) of 0.2182 on the database of the ASVspoof 2021 LA challenge, which verifies the effectiveness of the proposed detection method. Compared with the baseline systems on the ASVspoof 2021 LA challenge and FMFCC-A databases, the values of EER and min t-DCF decreased. The results also showed that the self-supervised pre-trained model with fine-tuning can extract acoustic features across languages, and that detection improves slightly when the languages of the pre-training database and of the fine-tuning and test databases are the same.
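One half of the augmentation scheme, adding white Gaussian noise at a target signal-to-noise ratio, can be sketched as follows; the SNR values are illustrative and the impulsive signal-dependent noise component is omitted for brevity.

```python
# Sketch of additive white Gaussian noise augmentation at a target SNR,
# one half of the augmentation scheme described above (the impulsive,
# signal-dependent noise part is omitted). SNR values are illustrative.
import numpy as np

def add_awgn(speech, snr_db, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    speech = speech.astype(float)
    signal_power = np.mean(speech**2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=speech.shape)
    return speech + noise

# toy usage: augment a dummy waveform at 20 dB and 10 dB SNR
wave = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))
augmented = [add_awgn(wave, snr) for snr in (20, 10)]
```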
APA, Harvard, Vancouver, ISO, and other styles
45

Dalal, Virupaxi, and Satish Bhairannawar. "Efficient de-noising technique for electroencephalogram signal processing." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 2 (June 1, 2022): 603. http://dx.doi.org/10.11591/ijai.v11.i2.pp603-612.

Full text
Abstract:
An electroencephalogram (EEG) is a recording of electrical activity in the brain across various frequencies. The EEG signal is very useful for diagnosing brain-related diseases at an early stage, preventing severe issues that may lead to loss of life. The raw EEG signal captured through the leads contains different types of noise that make it unsuitable for diagnosis. In this paper, an efficient algorithm is proposed to process the raw EEG signal and combat this noise. To obtain noiseless EEG data, a likelihood ratio test is applied in the interference computation block. The likelihood ratio test converts the EEG data into segments with nearly constant noise characteristics, which helps detect the noise present in each small segment and ensures proper signal denoising. The processed signal is compared with a database of noiseless EEG recordings of the same person using a principal component analysis (PCA) classifier. The proposed algorithm is 99.01% effective at identifying and combating noise in the EEG signal.
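The matching step against the stored noiseless EEG database can be pictured with a small PCA-based nearest-template sketch; the segment length, number of components and toy data below are assumptions, not the paper's configuration.

```python
# Sketch of the PCA-based matching idea: project EEG segments into a low-
# dimensional PCA space and match a processed segment to the stored templates
# by nearest distance. Segment length, components and data are assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
templates = rng.normal(size=(50, 256))               # stored "noiseless" EEG segments
query = templates[7] + 0.05 * rng.normal(size=256)   # denoised segment to match

pca = PCA(n_components=10).fit(templates)
t_proj, q_proj = pca.transform(templates), pca.transform(query[None, :])
best_match = int(np.argmin(np.linalg.norm(t_proj - q_proj, axis=1)))
print("closest stored template:", best_match)        # expected: 7
```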
APA, Harvard, Vancouver, ISO, and other styles
46

Santos, Mónica, Armando Almeida, Catarina Lopes, and Tiago Oliveira. "Ruído: Medidas de Proteção Coletivas e Individuais." Revista Portuguesa de Saúde Ocupacional 9 (June 30, 2020): S82—S90. http://dx.doi.org/10.31252/rpso.18.04.2020.

Full text
Abstract:
Introduction/framework/objectives: Noise is an occupational risk factor extensively addressed in the occupational health literature. However, its pathophysiological consequences have traditionally been emphasised, sometimes neglecting more detailed explanations concerning personal protective equipment and collective protective measures. Methodology: This is a scoping review, initiated by a September 2019 search of the databases "CINAHL Plus with Full Text, Medline with Full Text, Cochrane Database of Abstracts of Reviews of Effects, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Cochrane Methodology Register, Nursing and Allied Health Collection: Comprehensive, Academic Search Ultimate, Science Direct, SCOPUS and RCAAP". Content: There are several collective protection measures (in the workspace structure/design and in the use of various materials/devices) that can attenuate noise exposure. When the lower exposure action value (80 decibels) is exceeded, the employer must provide hearing protection; if the upper exposure action value (85 decibels) is reached or exceeded, its use is required (after prior enhancement of collective protective measures). However, workers and their representatives have to be consulted in choosing the model. In selecting it, account should be taken of European Community certification, appropriate attenuation, compatibility with the tasks and with other protective equipment used simultaneously, as well as the physical condition of the worker and the acceptability and comfort it will provide. The effectiveness of hearing protectors depends on time of use, correct utilisation, shape/size, fit to the ear, pressure (on the head and/or ear), resistance to extreme temperatures, and material. Conclusions: Occupational health professionals generally need up-to-date information on individual and collective protection measures to mitigate the effects of noise in the workplace. The bibliography (in indexed databases) on these two themes is not very abundant and/or easily accessible. However, these measures, well applied, can attenuate noise, promoting safer and healthier work. It would be pertinent for occupational health teams dealing with clients exposed to different noise levels to investigate which of these techniques are most appropriate to each situation and how employees can adhere better to the process and perform their part more effectively.
APA, Harvard, Vancouver, ISO, and other styles
47

Li, Qiuying, Tao Zhang, Yanzhang Geng, and Zhen Gao. "Microphone array speech enhancement based on optimized IMCRA." Noise Control Engineering Journal 69, no. 6 (November 1, 2021): 468–76. http://dx.doi.org/10.3397/1/376944.

Full text
Abstract:
Microphone array speech enhancement algorithms use temporal and spatial information to significantly improve the performance of speech noise reduction. By combining a noise estimation algorithm with microphone array speech enhancement, the accuracy of noise estimation is improved and the computational load is reduced. In traditional noise estimation algorithms, the noise power spectrum is not updated in the presence of speech, which leads to delay and deviation in the noise spectrum estimate. An optimized improved minimum controlled recursive averaging speech enhancement algorithm based on a microphone array is proposed in this paper. It consists of three parts. The first part is preprocessing, divided into two branches: the upper branch enhances the speech signal, and the lower branch extracts the noise. The second part is the optimized improved minimum controlled recursive averaging, in which the noise power spectrum is updated not only in non-speech segments but also in speech segments. Finally, according to the estimated noise power spectrum, the minimum mean-square error log-spectral amplitude algorithm is used to enhance the speech. Testing data are from the TIMIT and Noisex-92 databases. Short-time objective intelligibility and segmental signal-to-noise ratio are chosen as evaluation metrics. Experimental results show that the proposed speech enhancement algorithm improves the segmental signal-to-noise ratio and short-time objective intelligibility for various noise types at different signal-to-noise ratio levels.
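One of the evaluation metrics mentioned, segmental signal-to-noise ratio, is simple to sketch: the SNR is computed per short frame between the clean and enhanced signals, clamped to a fixed range and averaged. The frame length and clamping bounds below are common conventions, not necessarily the paper's exact choices.

```python
# Sketch of segmental SNR: per-frame SNR between clean and enhanced speech,
# clamped to [-10, 35] dB and averaged. Frame length and clamping bounds are
# common conventions, not necessarily the paper's exact settings.
import numpy as np

def segmental_snr(clean, enhanced, frame_len=256, eps=1e-10):
    n_frames = len(clean) // frame_len
    snrs = []
    for i in range(n_frames):
        s = clean[i * frame_len:(i + 1) * frame_len].astype(float)
        e = enhanced[i * frame_len:(i + 1) * frame_len].astype(float)
        noise_energy = np.sum((s - e) ** 2) + eps
        snr = 10 * np.log10(np.sum(s**2) / noise_energy + eps)
        snrs.append(np.clip(snr, -10.0, 35.0))
    return float(np.mean(snrs))
```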
APA, Harvard, Vancouver, ISO, and other styles
48

Picaut, Judicaël, Ayoub Boumchich, Erwan Bocher, Nicolas Fortin, Gwendall Petit, and Pierre Aumond. "A Smartphone-Based Crowd-Sourced Database for Environmental Noise Assessment." International Journal of Environmental Research and Public Health 18, no. 15 (July 22, 2021): 7777. http://dx.doi.org/10.3390/ijerph18157777.

Full text
Abstract:
Noise is a major source of pollution with a strong impact on health. Noise assessment is therefore a very important issue to reduce its impact on humans. To overcome the limitations of the classical method of noise assessment (such as simulation tools or noise observatories), alternative approaches have been developed, among which is collaborative noise measurement via a smartphone. Following this approach, the NoiseCapture application was proposed, in an open science framework, providing free access to a considerable amount of information and offering interesting perspectives of spatial and temporal noise analysis for the scientific community. After more than 3 years of operation, the amount of collected data is considerable. Its exploitation for a sound environment analysis, however, requires one to consider the intrinsic limits of each collected information, defined, for example, by the very nature of the data, the measurement protocol, the technical performance of the smartphone, the absence of calibration, the presence of anomalies in the collected data, etc. The purpose of this article is thus to provide enough information, in terms of quality, consistency, and completeness of the data, so that everyone can exploit the database, in full control.
APA, Harvard, Vancouver, ISO, and other styles
49

Gendron, Marlin L., and Juliette W. Ioup. "Wavelet Multi-scale Edge Detection for Extraction of Geographic Features to Improve Vector Map Databases." Journal of Navigation 53, no. 1 (January 2000): 79–92. http://dx.doi.org/10.1017/s0373463399008607.

Full text
Abstract:
Although numerous at smaller geographic scales, vector databases often do not exist at the more detailed, larger scales. A possible solution is the use of image processing techniques to detect edges in high-resolution satellite imagery. Features such as roads and airports are formed from the edges and matched up with similar features in existing low-resolution vector map databases. By replacing the old features with the new more accurate features, the resolution of the existing map database is improved. To accomplish this, a robust edge detection algorithm is needed that will perform well in noisy conditions. This paper studies and tests one such method, the Wavelet Multi-scale Edge Detector. The wavelet transform breaks down a signal into frequency bands at different levels. Noise present at lower scales smoothes out at higher levels. It is demonstrated that this property can be used to detect edges in noisy satellite imagery. Once edges are located, a new method will be proposed for storing these edges geographically so that features can be formed and paired with existing features in a vector map database.
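A minimal sketch of the multi-scale principle, thresholding wavelet detail magnitudes at several stationary-wavelet levels and keeping only responses that persist across scales, is shown below with PyWavelets; the wavelet, number of levels and threshold rule are assumptions for illustration.

```python
# Illustrative sketch of multi-scale wavelet edge detection: compute the
# modulus of the detail coefficients at several stationary-wavelet levels
# and keep pixels whose response persists across scales. The wavelet,
# number of levels and threshold rule are assumptions.
import numpy as np
import pywt

def multiscale_edges(img, wavelet="haar", levels=3, k=2.0):
    img = img.astype(float)
    coeffs = pywt.swt2(img, wavelet, level=levels)   # undecimated: same size per level
    edge_maps = []
    for _, (ch, cv, _cd) in coeffs:
        mag = np.hypot(ch, cv)                       # edge "modulus" at this scale
        edge_maps.append(mag > k * mag.std())        # simple per-scale threshold
    return np.logical_and.reduce(edge_maps)          # keep edges present at all scales

# toy usage: a noisy vertical step edge (image side must be divisible by 2**levels)
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[:, 64:] = 100.0
edges = multiscale_edges(img + rng.normal(0, 5, img.shape))
```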
APA, Harvard, Vancouver, ISO, and other styles
50

Kostoulas, Theodoros, Thomas Winkler, Todor Ganchev, Nikos Fakotakis, and Joachim Köhler. "The MoveOn database: motorcycle environment speech and noise database for command and control applications." Language Resources and Evaluation 47, no. 2 (March 15, 2013): 539–63. http://dx.doi.org/10.1007/s10579-013-9222-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles