Journal articles on the topic "Dataset noise"

To see other types of publications on this topic, follow the link: Dataset noise.


Consult the top 50 journal articles for your research on the topic "Dataset noise".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Jia, Qingrui, Xuhong Li, Lei Yu, Jiang Bian, Penghao Zhao, Shupeng Li, Haoyi Xiong, and Dejing Dou. "Learning from Training Dynamics: Identifying Mislabeled Data beyond Manually Designed Features." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8041–49. http://dx.doi.org/10.1609/aaai.v37i7.25972.

Abstract:
While mislabeled or ambiguously-labeled samples in the training set could negatively affect the performance of deep models, diagnosing the dataset and identifying mislabeled samples helps to improve the generalization power. Training dynamics, i.e., the traces left by iterations of optimization algorithms, have recently been proven effective for localizing mislabeled samples with hand-crafted features. In this paper, beyond manually designed features, we introduce a novel learning-based solution, leveraging a noise detector, instantiated as an LSTM network, which learns to predict whether a sample was mislabeled using the raw training dynamics as input. Specifically, the proposed method trains the noise detector in a supervised manner using a dataset with synthesized label noise and can adapt to various datasets (with either natural or synthesized label noise) without retraining. We conduct extensive experiments to evaluate the proposed method. We train the noise detector on the synthesized label-noised CIFAR dataset and test this noise detector on Tiny ImageNet, CUB-200, Caltech-256, WebVision and Clothing1M. Results show that the proposed method precisely detects mislabeled samples on various datasets without further adaptation, and outperforms state-of-the-art methods. Further experiments demonstrate that mislabel identification can guide label correction, i.e., data debugging, providing improvements from the data side that are orthogonal to algorithm-centric state-of-the-art techniques.
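A minimal sketch of the idea described above, assuming PyTorch; the detector architecture, feature dimension, and toy data below are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn

class NoiseDetector(nn.Module):
    """LSTM that reads a per-sample trace of training dynamics and scores 'mislabeled'."""
    def __init__(self, feat_dim=1, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, dynamics):                  # dynamics: (batch, epochs, feat_dim)
        _, (h_n, _) = self.lstm(dynamics)
        return self.head(h_n[-1]).squeeze(-1)     # one logit per sample

# Toy training step: 32 samples, 20 epochs of one feature each (e.g., the predicted
# probability of the assigned label), with synthetic "is mislabeled" flags.
detector = NoiseDetector()
dynamics = torch.rand(32, 20, 1)
is_noisy = torch.randint(0, 2, (32,)).float()
loss = nn.BCEWithLogitsLoss()(detector(dynamics), is_noisy)
loss.backward()
```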
2

Jiang, Gaoxia, Jia Zhang, Xuefei Bai, Wenjian Wang, and Deyu Meng. "Which Is More Effective in Label Noise Cleaning, Correction or Filtering?" Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12866–73. http://dx.doi.org/10.1609/aaai.v38i11.29183.

Abstract:
Most noise cleaning methods adopt one of the correction and filtering modes to build robust models. However, their effectiveness, applicability, and hyper-parameter insensitivity have not been carefully studied. We compare the two cleaning modes via a rebuilt error bound in noisy environments. At the dataset level, Theorem 5 implies that correction is more effective than filtering when the cleaned datasets have close noise rates. At the sample level, Theorem 6 indicates that confident label noises (large noise probabilities) are more suitable to be corrected, and unconfident noises (medium noise probabilities) should be filtered. Besides, an imperfect hyper-parameter may have fewer negative impacts on filtering than correction. Unlike existing methods with a single cleaning mode, the proposed Fusion cleaning framework of Correction and Filtering (FCF) combines the advantages of different modes to deal with diverse suspicious labels. Experimental results demonstrate that our FCF method can achieve state-of-the-art performance on benchmark datasets.
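As a rough illustration of the fusion idea (correct confident noise, filter unconfident noise), here is a hypothetical NumPy sketch; the thresholds and the source of the per-sample noise probabilities are assumptions, not the paper's FCF algorithm:

```python
import numpy as np

rng = np.random.RandomState(0)
noise_prob = rng.rand(1000)            # per-sample probability that the label is wrong
labels = rng.randint(0, 2, 1000)       # given (possibly noisy) labels
predicted = rng.randint(0, 2, 1000)    # model predictions, used for correction

correct_mask = noise_prob > 0.8                      # confident noise -> correct the label
filter_mask = (noise_prob > 0.5) & ~correct_mask     # unconfident noise -> drop the sample
labels_clean = np.where(correct_mask, predicted, labels)
keep = ~filter_mask
print(correct_mask.sum(), "corrected,", filter_mask.sum(), "filtered,", keep.sum(), "kept")
```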
3

Fu, Bo, Xiangyi Zhang, Liyan Wang, Yonggong Ren, and Dang N. H. Thanh. "A blind medical image denoising method with noise generation network." Journal of X-Ray Science and Technology 30, no. 3 (April 15, 2022): 531–47. http://dx.doi.org/10.3233/xst-211098.

Abstract:
BACKGROUND: In the process of medical images acquisition, the unknown mixed noise will affect image quality. However, the existing denoising methods usually focus on the known noise distribution. OBJECTIVE: In order to remove the unknown real noise in low-dose CT images (LDCT), a two-step deep learning framework is proposed in this study, which is called Noisy Generation-Removal Network (NGRNet). METHODS: Firstly, the output results of L0 Gradient Minimization are used as the labels of a dental CT image dataset to form a pseudo-image pair with the real dental CT images, which are used to train the noise generation network to estimate real noise distribution. Then, for the lung CT images of the LIDC/IDRI database, we migrate the real noise to the noise-free lung CT images, to construct a new almost-real noisy images dataset. Since dental images and lung images are all CT images, this migration can be achieved. The denoising network is trained to realize the denoising of real LDCT for dental images by using this dataset but can extend for any low-dose CT images. RESULTS: To prove the effectiveness of our NGRNet, we conduct experiments on lung CT images with synthetic noise and tooth CT images with real noise. For synthetic noise image datasets, experimental results show that NGRNet is superior to existing denoising methods in terms of visual effect and exceeds 0.13dB in the peak signal-to-noise ratio (PSNR). For real noisy image datasets, the proposed method can achieve the best visual denoising effect. CONCLUSIONS: The proposed method can retain more details and achieve impressive denoising performance.
4

Choi, Hwiyong, Haesang Yang, Seungjun Lee, and Woojae Seong. "Classification of Inter-Floor Noise Type/Position Via Convolutional Neural Network-Based Supervised Learning." Applied Sciences 9, no. 18 (September 7, 2019): 3735. http://dx.doi.org/10.3390/app9183735.

Abstract:
Inter-floor noise, i.e., noise transmitted from one floor to another floor through walls or ceilings in an apartment building or an office of a multi-layered structure, causes serious social problems in South Korea. Notably, inaccurate identification of the noise type and position by human hearing intensifies the conflicts between residents of apartment buildings. In this study, we propose a robust approach using deep convolutional neural networks (CNNs) to learn and identify the type and position of inter-floor noise. Using a single mobile device, we collected nearly 2000 inter-floor noise events that contain 5 types of inter-floor noises generated at 9 different positions on three floors in a Seoul National University campus building. Based on pre-trained CNN models designed and evaluated separately for type and position classification, we achieved type and position classification accuracy of 99.5% and 95.3%, respectively in validation datasets. In addition, the robustness of noise type classification with the model was checked against a new test dataset. This new dataset was generated in the building and contains 2 types of inter-floor noises at 10 new positions. The approximate positions of inter-floor noises in the new dataset with respect to the learned positions are presented.
5

Hossain, Sadat, and Bumshik Lee. "NG-GAN: A Robust Noise-Generation Generative Adversarial Network for Generating Old-Image Noise." Sensors 23, no. 1 (December 26, 2022): 251. http://dx.doi.org/10.3390/s23010251.

Abstract:
Numerous old images and videos were captured and stored under unfavorable conditions. Hence, old images and videos have uncertain and different noise patterns compared with those of modern ones. Denoising old images is an effective technique for reconstructing a clean image containing crucial information. However, obtaining noisy-clean image pairs for denoising old images is difficult and challenging for supervised learning. Preparing such a pair is expensive and burdensome, as existing denoising approaches require a considerable number of noisy-clean image pairs. To address this issue, we propose a robust noise-generation generative adversarial network (NG-GAN) that utilizes unpaired datasets to replicate the noise distribution of degraded old images inspired by the CycleGAN model. In our proposed method, the perception-based image quality evaluator metric is used to control noise generation effectively. An unpaired dataset is generated by selecting clean images with features that match the old images to train the proposed model. Experimental results demonstrate that the dataset generated by our proposed NG-GAN can better train state-of-the-art denoising models by effectively denoising old videos. The denoising models exhibit significantly improved peak signal-to-noise ratios and structural similarity index measures of 0.37 dB and 0.06 on average, respectively, on the dataset generated by our proposed NG-GAN.
6

Zhang, Rui, Zhenghao Chen, Sanxing Zhang, Fei Song, Gang Zhang, Quancheng Zhou, and Tao Lei. "Remote Sensing Image Scene Classification with Noisy Label Distillation." Remote Sensing 12, no. 15 (July 24, 2020): 2376. http://dx.doi.org/10.3390/rs12152376.

Abstract:
The widespread applications of remote sensing image scene classification-based Convolutional Neural Networks (CNNs) are severely affected by the lack of large-scale datasets with clean annotations. Data crawled from the Internet or other sources allows for the most rapid expansion of existing datasets at a low-cost. However, directly training on such an expanded dataset can lead to network overfitting to noisy labels. Traditional methods typically divide this noisy dataset into multiple parts. Each part fine-tunes the network separately to improve performance further. These approaches are inefficient and sometimes even hurt performance. To address these problems, this study proposes a novel noisy label distillation method (NLD) based on the end-to-end teacher-student framework. First, unlike general knowledge distillation methods, NLD does not require pre-training on clean or noisy data. Second, NLD effectively distills knowledge from labels across a full range of noise levels for better performance. In addition, NLD can benefit from a fully clean dataset as a model distillation method to improve the student classifier’s performance. NLD is evaluated on three remote sensing image datasets, including UC Merced Land-use, NWPU-RESISC45, AID, in which a variety of noise patterns and noise amounts are injected. Experimental results show that NLD outperforms widely used directly fine-tuning methods and remote sensing pseudo-labeling methods.
7

Van Hulse, Jason, Taghi M. Khoshgoftaar, and Amri Napolitano. "Evaluating the Impact of Data Quality on Sampling." Journal of Information & Knowledge Management 10, no. 03 (September 2011): 225–45. http://dx.doi.org/10.1142/s021964921100295x.

Abstract:
Learning from imbalanced training data can be a difficult endeavour, and the task is made even more challenging if the data is of low quality or the size of the training dataset is small. Data sampling is a commonly used method for improving learner performance when data is imbalanced. However, little effort has been put forth to investigate the performance of data sampling techniques when data is both noisy and imbalanced. In this work, we present a comprehensive empirical investigation of the impact of changes in four training dataset characteristics — dataset size, class distribution, noise level and noise distribution — on data sampling techniques. We present the performance of four common data sampling techniques using 11 learning algorithms. The results, which are based on an extensive suite of experiments for which over 15 million models were trained and evaluated, show that: (1) even for relatively clean datasets, class imbalance can still hurt learner performance, (2) data sampling, however, may not improve performance for relatively clean but imbalanced datasets, (3) data sampling can be very effective at dealing with the combined problems of noise and imbalance, (4) both the level and distribution of class noise among the classes are important, as either factor alone does not cause a significant impact, (5) when sampling does improve the learners (i.e. for noisy and imbalanced datasets), RUS and SMOTE are the most effective at improving the AUC, while SMOTE performed well relative to the F-measure, (6) there are significant differences in the empirical results depending on the performance measure used, and hence it is important to consider multiple metrics in this type of analysis, and (7) data sampling rarely hurt the AUC, but only significantly improved performance when data was at least moderately skewed or noisy, while for the F-measure, data sampling often resulted in significantly worse performance when applied to slightly skewed or noisy datasets, but did improve performance when data was either severely noisy or skewed, or contained moderate levels of both noise and imbalance.
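For readers who want to try the two samplers the study highlights, a minimal sketch using the imbalanced-learn package (an assumption; the paper's own experimental setup is far larger) might look like this:

```python
# Random undersampling (RUS) and SMOTE on a skewed dataset with injected label noise.
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           flip_y=0.05, random_state=0)   # imbalance + 5% label noise
X_rus, y_rus = RandomUnderSampler(random_state=0).fit_resample(X, y)
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X, y)
print(y.mean(), y_rus.mean(), y_sm.mean())   # minority-class ratio before/after sampling
```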
8

Nogales, Alberto, Javier Caracuel-Cayuela, and Álvaro J. García-Tejedor. "Analyzing the Influence of Diverse Background Noises on Voice Transmission: A Deep Learning Approach to Noise Suppression." Applied Sciences 14, no. 2 (January 15, 2024): 740. http://dx.doi.org/10.3390/app14020740.

Abstract:
This paper presents an approach to enhancing the clarity and intelligibility of speech in digital communications compromised by various background noises. Utilizing deep learning techniques, specifically a Variational Autoencoder (VAE) with 2D convolutional filters, we aim to suppress background noise in audio signals. Our method focuses on four simulated environmental noise scenarios: storms, wind, traffic, and aircraft. The training dataset has been obtained from public sources (TED-LIUM 3 dataset, which includes audio recordings from the popular TED-TALK series) combined with these background noises. The audio signals were transformed into 2D power spectrograms, upon which our VAE model was trained to filter out the noise and reconstruct clean audio. Our results demonstrate that the model outperforms existing state-of-the-art solutions in noise suppression. Although differences in noise types were observed, it was challenging to definitively conclude which background noise most adversely affects speech quality. The results have been assessed with objective (mathematical metrics) and subjective (listening to a set of audios by humans) methods. Notably, wind noise showed the smallest deviation between the noisy and cleaned audio, perceived subjectively as the most improved scenario. Future work should involve refining the phase calculation of the cleaned audio and creating a more balanced dataset to minimize differences in audio quality across scenarios. Additionally, practical applications of the model in real-time streaming audio are envisaged. This research contributes significantly to the field of audio signal processing by offering a deep learning solution tailored to various noise conditions, enhancing digital communication quality.
9

Kramberger, Tin, and Božidar Potočnik. "LSUN-Stanford Car Dataset: Enhancing Large-Scale Car Image Datasets Using Deep Learning for Usage in GAN Training." Applied Sciences 10, no. 14 (July 17, 2020): 4913. http://dx.doi.org/10.3390/app10144913.

Abstract:
Currently there is no publicly available adequate dataset that could be used for training Generative Adversarial Networks (GANs) on car images. All available car datasets differ in noise, pose, and zoom levels. Thus, the objective of this work was to create an improved car image dataset that would be better suited for GAN training. To improve the performance of the GAN, we coupled the LSUN and Stanford car datasets. A new merged dataset was then pruned in order to adjust zoom levels and reduce the noise of images. This process resulted in fewer images that could be used for training, with increased quality though. This pruned dataset was evaluated by training the StyleGAN with original settings. Pruning the combined LSUN and Stanford datasets resulted in 2,067,710 images of cars with less noise and more adjusted zoom levels. The training of the StyleGAN on the LSUN-Stanford car dataset proved to be superior to the training with just the LSUN dataset by 3.7% using the Fréchet Inception Distance (FID) as a metric. Results pointed out that the proposed LSUN-Stanford car dataset is more consistent and better suited for training GAN neural networks than other currently available large car datasets.
10

Shi, Haoxiang, Jun Ai, Jingyu Liu, and Jiaxi Xu. "Improving Software Defect Prediction in Noisy Imbalanced Datasets." Applied Sciences 13, no. 18 (September 19, 2023): 10466. http://dx.doi.org/10.3390/app131810466.

Abstract:
Software defect prediction is a popular method for optimizing software testing and improving software quality and reliability. However, software defect datasets usually have quality problems, such as class imbalance and data noise. Oversampling by generating minority class samples is one of the most well-known methods for improving the quality of datasets; however, it often introduces overfitting noise to datasets. To better improve the quality of these datasets, this paper proposes a method called US-PONR, which uses undersampling to remove duplicate samples from version iterations and then uses oversampling through propensity score matching to reduce class imbalance and noise samples in datasets. The effectiveness of this method was validated in a software prediction experiment that involved 24 versions of software data in 11 projects from PROMISE, in noisy environments with noise levels varying from 0% to 30%. The experiments showed a significant improvement in the quality of datasets pre-processed by US-PONR in noisy imbalanced datasets, especially the noisiest ones, compared with 12 other advanced dataset processing methods. The experiments also demonstrated that the US-PONR method can effectively identify and remove label noise samples.
11

Singha, Samir, and Syed Hassan. "ENHANCING THE CLASSIFICATION ACCURACY OF NOISY DATASET BY FUSING CORRELATION BASED FEATURE SELECTION WITH K-NEAREST NEIGHBOUR." Oriental journal of computer science and technology 10, no. 2 (May 15, 2017): 282–90. http://dx.doi.org/10.13005/ojcst/10.02.05.

Abstract:
The performance of data mining and machine learning tasks can be significantly degraded by the presence of noisy, irrelevant and high-dimensional data containing a large number of features. A large amount of real-world data consists of noise or missing values. While collecting data, many irrelevant features may be gathered by the storage repositories. These redundant and irrelevant feature values distort the classification principle, increase the computational overhead and decrease the prediction ability of the classifier. The high dimensionality of such datasets poses a major bottleneck in the fields of data mining, statistics and machine learning. Among several methods of dimensionality reduction, attribute or feature selection is often used. Since the k-NN algorithm is sensitive to irrelevant attributes, its performance degrades significantly when a dataset contains missing values or noisy data. However, this weakness of the k-NN algorithm can be minimized when it is combined with other feature selection techniques. In this research we combine Correlation-based Feature Selection (CFS) with the k-Nearest Neighbour (k-NN) classification algorithm to obtain better classification results when the dataset contains missing values or noisy data. The reduced attribute set decreases the time required for classification. The research shows that when dimensionality reduction is done using CFS and classification with the k-NN algorithm, datasets with little or no noise may see a negative impact on classification accuracy compared with the classification accuracy of the k-NN algorithm alone. When additional noise is introduced to these datasets, the performance of k-NN degrades significantly. When these noisy datasets are classified using CFS and k-NN together, classification accuracy improves.
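A minimal sketch of the pipeline's idea with scikit-learn; since scikit-learn ships no CFS implementation, features are ranked here by absolute correlation with the class as a simple correlation-based stand-in before k-NN classification:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
selected = np.argsort(corr)[-10:]                  # keep the 10 most class-correlated features
knn = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(knn, X, y, cv=5).mean())                 # k-NN on all features
print(cross_val_score(knn, X[:, selected], y, cv=5).mean())    # k-NN on the reduced set
```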
12

Folleco, Andres, and Taghi Khoshgoftaar. "Attribute Noise Detection Using Multi-Resolution Analysis." International Journal of Reliability, Quality and Safety Engineering 13, no. 03 (June 2006): 267–88. http://dx.doi.org/10.1142/s0218539306002252.

Abstract:
The value of knowledge inferred from information databases is critically dependent on the quality of data. The identification of noisy attributes which can easily corrupt and curtail valuable knowledge and information from a dataset can be very helpful to analysts. We present a novel detection method to identify noisy attributes in datasets of software metrics using multi-resolution transformations based on Discrete Wavelet Transforms. The proposed method has been applied to supervised datasets of scientific full-scale data from NASA's Software Metric Data Program (MDP) and to a military command, control, and communications system (CCCS). Empirical results have been favorably compared to those obtained from the robust Pairwise Attribute Noise Detection Algorithm (PANDA) using the same MDP datasets and with mixed results for the CCCS data. All results were verified with several case studies that included injecting known simulated noise into specific attributes with no class noise.
13

Li, Qiang, Ziqi Xie, and Lihong Wang. "Robust Subspace Clustering with Block Diagonal Representation for Noisy Image Datasets." Electronics 12, no. 5 (March 5, 2023): 1249. http://dx.doi.org/10.3390/electronics12051249.

Abstract:
As a relatively advanced method, the subspace clustering algorithm by block diagonal representation (BDR) will be competent in performing subspace clustering on a dataset if the dataset is assumed to be noise-free and drawn from the union of independent linear subspaces. Unfortunately, this assumption is far from reality, since the real data are usually corrupted by various noises and the subspaces of data overlap with each other, the performance of linear subspace clustering algorithms, including BDR, degrades on the real complex data. To solve this problem, we design a new objective function based on BDR, in which l2,1 norm of the reconstruction error is introduced to model the noises and improve the robustness of the algorithm. After optimizing the objective function, we present the corresponding subspace clustering algorithm to pursue a self-expressive coefficient matrix with a block diagonal structure for a noisy dataset. An affinity matrix is constructed based on the coefficient matrix, and then fed to the spectral clustering algorithm to obtain the final clustering results. Experiments on several artificial noisy image datasets show that the proposed algorithm has robustness and better clustering performance than the compared algorithms.
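A minimal sketch of the final step the abstract describes (coefficient matrix, then affinity matrix, then spectral clustering); the coefficient matrix below is random, whereas in the paper it comes from optimizing the BDR-based objective with the l2,1 reconstruction term:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.RandomState(0)
C = rng.rand(100, 100)                       # placeholder self-expressive coefficient matrix
W = 0.5 * (np.abs(C) + np.abs(C).T)          # symmetric, non-negative affinity matrix
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(np.bincount(labels))                   # cluster sizes
```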
14

Ihler, Sontje, and Felix Kuhnke. "AUC margin loss for limited, imbalanced and noisy medical image diagnosis – a case study on CheXpert5000." Current Directions in Biomedical Engineering 9, no. 1 (September 1, 2023): 658–61. http://dx.doi.org/10.1515/cdbme-2023-1165.

Abstract:
The AUC margin loss is a valuable loss function for medical image classification as it addresses the problems of imbalanced and noisy labels. It is used by the current winner of the CheXpert competition. The CheXpert dataset is large (200k+ images); however, medical datasets in the range of 1k–10k images are much more common. This raises the question of whether optimizing the AUC margin loss is also effective in scenarios with limited data. We compare AUC margin loss optimization to binary cross-entropy on the limited, imbalanced and noisy CheXpert5000, a subset of the CheXpert dataset. We show that the AUC margin loss is beneficial for limited data and considerably improves accuracy in the presence of label noise. It also improves out-of-the-box calibration.
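The paper optimizes the AUC-M loss (as provided by the LibAUC library); the following is only a simplified, hypothetical pairwise margin surrogate in PyTorch that conveys the underlying idea of penalizing positive/negative score pairs whose gap falls below a margin:

```python
import torch

def pairwise_auc_margin_loss(scores, labels, margin=1.0):
    """Squared-hinge penalty on all positive/negative score pairs below the margin."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)      # all positive-negative pairs
    return torch.clamp(margin - diff, min=0).pow(2).mean()

scores = torch.randn(16, requires_grad=True)        # toy classifier scores
labels = torch.randint(0, 2, (16,))
loss = pairwise_auc_margin_loss(scores, labels)
loss.backward()
```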
15

García-Mendoza, Juan-Luis, Luis Villaseñor-Pineda, Felipe Orihuela-Espina, and Lázaro Bustio-Martínez. "An autoencoder-based representation for noise reduction in distant supervision of relation extraction." Journal of Intelligent & Fuzzy Systems 42, no. 5 (March 31, 2022): 4523–29. http://dx.doi.org/10.3233/jifs-219241.

Abstract:
Distant Supervision is an approach that allows automatic labeling of instances. This approach has been used in Relation Extraction. Still, the main challenge of this task is handling instances with noisy labels (e.g., when two entities in a sentence are automatically labeled with an invalid relation). The approaches reported in the literature addressed this problem by employing noise-tolerant classifiers. However, if a noise reduction stage is introduced before the classification step, the macro precision values increase. This paper proposes an Adversarial Autoencoders-based approach for obtaining a new representation that allows noise reduction in Distant Supervision. The representation obtained using Adversarial Autoencoders minimizes the intra-cluster distance compared with pre-trained embeddings and classic Autoencoders. Experiments demonstrated that, for the same classifier, the noise-reduced datasets achieve macro precision values similar to those obtained on the original dataset while using fewer instances. For example, in one of the noise-reduced datasets, the macro precision improved by approximately 2.32% while using 77% of the original instances. This suggests the validity of using Adversarial Autoencoders to obtain well-suited representations for noise reduction. Also, the proposed approach maintains the macro precision values relative to the original dataset while reducing the total number of instances needed for classification.
16

AL-Akhras, Mousa, Abdulmajeed Alshunaybir, Hani Omar, and Samah Alhazmi. "Botnet attacks detection in IoT environment using machine learning techniques." International Journal of Data and Network Science 7, no. 4 (2023): 1683–706. http://dx.doi.org/10.5267/j.ijdns.2023.7.021.

Abstract:
IoT devices with weak security designs are a serious threat to organizations. They are the building blocks of Botnets, the platforms that launch organized attacks capable of shutting down an entire infrastructure. Researchers have been developing IDS solutions that can counter such threats, often by employing innovation from other disciplines like artificial intelligence and machine learning. One of the issues that may be encountered when machine learning is used is dataset purity. Since they are not captured from perfect environments, datasets may contain data that could negatively affect the machine learning process. Algorithms already exist for such problems. The Repeated Edited Nearest Neighbor (RENN), Encoding Length (Explore), and Decremental Reduction Optimization Procedure 5 (DROP5) algorithms can filter noise out of datasets. They also provide other benefits, such as instance reduction, which could help shrink larger Botnet datasets without sacrificing their quality. Three datasets were chosen in this study to construct an IDS: IoTID20, N-BaIoT and MedBIoT. The filtering algorithms RENN, Explore, and DROP5 were applied to them to filter noise and reduce instances. Noise was also injected and filtered again to assess the resilience of these filters. Feature optimization was then used to reduce the number of dataset features. Finally, machine learning was applied to the processed datasets and the resulting IDS was evaluated with the standard supervised learning metrics: Accuracy, Precision, Recall, Specificity, F-Score and G-Mean. Results showed that RENN and DROP5 filtering delivered excellent results. DROP5, in particular, managed to reduce the dataset substantially without sacrificing accuracy. However, when noise was injected, DROP5's accuracy dropped and could not keep up. Of the three datasets, N-BaIoT delivered the best overall accuracy across the learning techniques.
17

Lee, Yongju, Sungjun Jang, Han Byeol Bae, Taejae Jeon, and Sangyoun Lee. "Multitask Learning Strategy with Pseudo-Labeling: Face Recognition, Facial Landmark Detection, and Head Pose Estimation." Sensors 24, no. 10 (May 18, 2024): 3212. http://dx.doi.org/10.3390/s24103212.

Abstract:
Most facial analysis methods perform well in standardized testing but not in real-world testing. The main reason is that training models cannot easily learn various human features and background noise, especially for facial landmark detection and head pose estimation tasks with limited and noisy training datasets. To alleviate the gap between standardized and real-world testing, we propose a pseudo-labeling technique using a face recognition dataset consisting of various people and background noise. The use of our pseudo-labeled training dataset can help to overcome the lack of diversity among the people in the dataset. Our integrated framework is constructed using complementary multitask learning methods to extract robust features for each task. Furthermore, introducing pseudo-labeling and multitask learning improves the face recognition performance by enabling the learning of pose-invariant features. Our method achieves state-of-the-art (SOTA) or near-SOTA performance on the AFLW2000-3D and BIWI datasets for facial landmark detection and head pose estimation, with competitive face verification performance on the IJB-C test dataset for face recognition. We demonstrate this through a novel testing methodology that categorizes cases as soft, medium, and hard based on the pose values of IJB-C. The proposed method achieves stable performance even when the dataset lacks diverse face identifications.
18

Santiago-Chaparro, Kelvin R., and David A. Noyce. "Expanding the Capabilities of Radar-Based Vehicle Detection Systems: Noise Characterization and Removal Procedures." Transportation Research Record: Journal of the Transportation Research Board 2673, no. 11 (June 10, 2019): 150–60. http://dx.doi.org/10.1177/0361198119852607.

Abstract:
The capabilities of radar-based vehicle detection (RVD) systems used at signalized intersections for stop bar and advanced detection are arguably underutilized. Underutilization happens because RVD systems can monitor the position and speed (i.e., trajectory) of multiple vehicles at the same time but these trajectories are only used to emulate the behavior of legacy detection systems such as inductive loop detectors. When full vehicle trajectories tracked by an RVD system are collected, detailed traffic operations and safety performance measures can be calculated for signalized intersections. Unfortunately, trajectory datasets obtained from RVD systems often contain significant noise which makes the computation of performance measures difficult. In this paper, a description of the type of trajectory datasets that can be obtained from RVD systems is presented along with a characterization of the noise expected in these datasets. Guidance on the noise removal procedures that can be applied to these datasets is also presented. This guidance can be applied to the use of data from commercially-available RVD systems to obtain advanced performance measures. To demonstrate the potential accuracy of the noise removal procedures, the procedures were applied to trajectory data obtained from an existing intersection, and data on a basic performance measure (vehicle volume) were extracted from the dataset. Volume data derived from the de-noised trajectory dataset was compared with ground truth volume and an absolute average difference of approximately one vehicle every 5 min was found, thus highlighting the potential accuracy of the noise removal procedures introduced.
19

Murakami, Reina, Valentin Grave, Osamu Fukuda, Hiroshi Okumura, and Nobuhiko Yamaguchi. "Improved Training of CAE-Based Defect Detectors Using Structural Noise." Applied Sciences 11, no. 24 (December 17, 2021): 12062. http://dx.doi.org/10.3390/app112412062.

Abstract:
Appearances of products are important to companies as they reflect the quality of their manufacture to customers. Nowadays, visual inspection is conducted by human inspectors. This research attempts to automate this process using Convolutional AutoEncoders (CAE). Our models were trained using images of non-defective parts. Previous research on autoencoders has reported that the accuracy of image regeneration can be improved by adding noise to the training dataset, but no extensive analysis of the noise factor has been done. Therefore, our method compares the effects of two different noise patterns on the models' efficiency: Gaussian noise and noise made of a known structure. The test datasets consisted of “defective” parts. Across the experiments, it was mostly observed that the precision of the CAE sharpened when using noisy data during the training phases. The best results were obtained with structural noise, made of defined shapes randomly corrupting training data. Furthermore, the models were able to process test data that had slightly different positions and rotations compared to the ones found in the training dataset. However, shortcomings appeared when “regular” spots (in the training data) and “defective” spots (in the test data) partially, or totally, overlapped.
20

Northcutt, Curtis, Lu Jiang, and Isaac Chuang. "Confident Learning: Estimating Uncertainty in Dataset Labels." Journal of Artificial Intelligence Research 70 (April 14, 2021): 1373–411. http://dx.doi.org/10.1613/jair.1.12125.

Abstract:
Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence. Whereas numerous studies have developed these principles independently, here, we combine them, building on the assumption of a class-conditional noise process to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This results in a generalized CL which is provably consistent and experimentally performant. We present sufficient conditions where CL exactly finds label errors, and show CL performance exceeding seven recent competitive approaches for learning with noisy labels on the CIFAR dataset. Uniquely, the CL framework is not coupled to a specific data modality or model (e.g., we use CL to find several label errors in the presumed error-free MNIST dataset and improve sentiment classification on text data in Amazon Reviews). We also employ CL on ImageNet to quantify ontological class overlap (e.g., estimating 645 missile images are mislabeled as their parent class projectile), and moderately increase model accuracy (e.g., for ResNet) by cleaning data prior to training. These results are replicable using the open-source cleanlab release.
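Since the abstract points to the open-source cleanlab release, a minimal usage sketch (assuming the cleanlab 2.x API) could look like this: out-of-sample predicted probabilities plus the given labels are enough for confident learning to flag likely label errors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=0)
y_noisy = y.copy()
flip = np.random.RandomState(0).choice(len(y), 50, replace=False)
y_noisy[flip] = (y_noisy[flip] + 1) % 3                  # inject synthetic label noise

# Out-of-fold probabilities, so every sample is scored by a model that never saw it.
pred_probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y_noisy,
                               cv=5, method="predict_proba")
issues = find_label_issues(labels=y_noisy, pred_probs=pred_probs,
                           return_indices_ranked_by="self_confidence")
print(len(issues), "suspected label errors")
```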
21

Wang, Zi-yang, Xiao-yi Luo, and Jun Liang. "A Label Noise Robust Stacked Auto-Encoder Algorithm for Inaccurate Supervised Classification Problems." Mathematical Problems in Engineering 2019 (May 14, 2019): 1–19. http://dx.doi.org/10.1155/2019/2182616.

Abstract:
In real applications, label noise and feature noise are two main noise sources. Like feature noise, label noise is highly detrimental to the training of classification models. Motivated by the successful application of deep learning methods to standard classification problems, this paper proposes a new framework called LNC-SDAE to handle datasets corrupted with label noise, i.e., so-called inaccurate supervision problems. The LNC-SDAE framework contains a preliminary label noise cleansing part and a stacked denoising auto-encoder. In the preliminary label noise cleansing part, the idea of K-fold cross-validation is applied to detect and relabel mislabeled samples. After being preprocessed by the label noise cleansing part, the cleansed training dataset is input into the stacked denoising auto-encoder to learn a robust representation for classification. A corrupted UCI standard dataset and a corrupted real industrial dataset are used for testing, both of which contain a certain proportion of label noise (the ratio changes from 0% to 30%). The experimental results prove the effectiveness of LNC-SDAE; the representation it learns is shown to be robust.
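A minimal sketch of the preliminary cleansing step only (the stacked denoising auto-encoder part is omitted), using scikit-learn's out-of-fold predictions as the K-fold mechanism; the classifier and the noise injection are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)
y_noisy = y.copy()
flip = np.random.RandomState(0).choice(len(y), 150, replace=False)
y_noisy[flip] = (y_noisy[flip] + 1) % 10                 # simulate label noise

# Out-of-fold predictions flag samples whose predicted class disagrees with the label.
pred = cross_val_predict(RandomForestClassifier(random_state=0), X, y_noisy, cv=5)
suspect = pred != y_noisy
y_cleansed = np.where(suspect, pred, y_noisy)            # relabel suspected samples
print(suspect.sum(), "samples relabeled")
```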
22

Bhatia, Anshul, Anuradha Chug, Amit Prakash Singh, and Dinesh Singh. "A hybrid approach for noise reduction-based optimal classifier using genetic algorithm: A case study in plant disease prediction." Intelligent Data Analysis 26, no. 4 (July 11, 2022): 1023–49. http://dx.doi.org/10.3233/ida-216011.

Abstract:
Plant diseases can cause significant losses to agricultural productivity; therefore, their early prediction is much needed. So far, many machine learning-based plant disease prediction models have been recommended, but these models face a problem of noisy class label dataset that degrades the performance. Noisy class label dataset results from the improper assignment of positive class labels into negative class data samples or vice versa. Hence, a precise and noise-free plant disease model is required for a better prediction. The current study proposes noise reduction-based hybridized classifiers for plant disease prediction. One tomato and four soybean disease datasets have been selected to conduct the proposed research. The Adaptive Sampling-based Class Label Noise Reduction (AS-CLNR) method has been used along with the Support Vector Machine (SVM) approach for noise reduction. The noise-minimized datasets have been fed into the Extreme Learning Machine (ELM), Decision Tree (DT), and Random Forest (RF) classifiers whose parameters are optimized using Genetic Algorithm (GA) for developing plant disease prediction models. The performances of all these models viz. Hybrid SVM-GA-ELM, Hybrid SVM-GA-DT, and Hybrid SVM-GA-RF have been evaluated using Accuracy, Area under ROC Curve, and F1-Score metrics. Further, these classifiers have been ranked using the statistical Friedman Test in which the Hybrid SVM-GA-RF classifier performed the best. Lastly, the Nemenyi test has also been performed to find out if significant differences exist between various classifiers or not. It was found that 33.33% of the total pairs of hybrid classifiers show a remarkably different performance from one another.
23

Billa, Wagner S., Rogério G. Negri, and Leonardo B. L. Santos. "WB Score: A Novel Methodology for Visual Classifier Selection in Increasingly Noisy Datasets." Eng 4, no. 4 (September 25, 2023): 2497–513. http://dx.doi.org/10.3390/eng4040142.

Abstract:
This article addresses the challenges of selecting robust classifiers with increasing noise levels in real-world scenarios. We propose the WB Score methodology, which enables the identification of reliable classifiers for deployment in noisy environments. The methodology addresses four significant challenges that are commonly encountered: (i) Ensuring classifiers possess robustness to noise; (ii) Overcoming the difficulty of obtaining representative data that captures real-world noise; (iii) Addressing the complexity of detecting noise, making it challenging to differentiate it from natural variations in the data; and (iv) Meeting the requirement for classifiers capable of efficiently handling noise, allowing prompt responses for decision-making. WB Score provides a comprehensive approach for classifier assessment and selection to address these challenges. We analyze five classic datasets and one customized flooding dataset in São Paulo. The results demonstrate the practical effect of using the WB Score methodology is the enhanced ability to select robust classifiers for datasets in noisy real-world scenarios. Compared with similar techniques, the improvement centers around providing a visual and intuitive output, enhancing the understanding of classifier resilience against noise, and streamlining the decision-making process.
24

Sagarika, Namasani, Bommadi Sreenija Reddy, Vanka Varshitha, Kodavati Geetanjali, N. V. Ganapathi Raju, and Latha Kunaparaju. "Sarcasm Discernment on Social Media Platform." E3S Web of Conferences 309 (2021): 01037. http://dx.doi.org/10.1051/e3sconf/202130901037.

Abstract:
Past studies in Sarcasm Detection mostly make use of Twitter datasets collected using hashtag-based supervision, but such datasets are noisy in terms of labels and language. To overcome the limitations related to noise in Twitter datasets, this News Headlines dataset for Sarcasm Detection was collected from two news websites. TheOnion aims at producing sarcastic versions of current events, and we collected all the headlines from its News in Brief and News in Photos categories (which are sarcastic). We collected real (and non-sarcastic) news headlines from Huff Post, so the dataset is drawn from two news websites, theonion.com and huffingtonpost.com. Since news headlines are written by professionals in a formal manner, there are no spelling mistakes or informal usage. This reduces the sparsity and also increases the chance of finding pre-trained embeddings. Furthermore, since the sole purpose of TheOnion is to publish sarcastic news, we get high-quality labels with much less noise as compared to Twitter datasets. Unlike tweets that reply to other tweets, the news headlines obtained are self-contained.
25

Guan, Qingji, Qinrun Chen, and Yaping Huang. "An Improved Heteroscedastic Modeling Method for Chest X-ray Image Classification with Noisy Labels." Algorithms 16, no. 5 (May 4, 2023): 239. http://dx.doi.org/10.3390/a16050239.

Abstract:
Chest X-ray image classification suffers from the high inter-similarity in appearance that is vulnerable to noisy labels. The data-dependent and heteroscedastic characteristic label noise make chest X-ray image classification more challenging. To address this problem, in this paper, we first revisit the heteroscedastic modeling (HM) for image classification with noise labels. Rather than modeling all images in one fell swoop as in HM, we instead propose a novel framework that considers the noisy and clean samples separately for chest X-ray image classification. The proposed framework consists of a Gaussian Mixture Model-based noise detector and a Heteroscedastic Modeling-based noise-aware classification network, named GMM-HM. The noise detector is constructed to judge whether one sample is clean or noisy. The noise-aware classification network models the noisy and clean samples with heteroscedastic and homoscedastic hypotheses, respectively. Through building the correlations between the corrupted noisy samples, the GMM-HM is much more robust than HM, which uses only the homoscedastic hypothesis. Compared with HM, we show consistent improvements on the ChestX-ray2017 dataset with different levels of symmetric and asymmetric noise. Furthermore, we also conduct experiments on a real asymmetric noisy dataset, ChestX-ray14. The experimental results on ChestX-ray14 show the superiority of the proposed method.
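A minimal sketch of a Gaussian-Mixture-Model noise detector in the spirit described, assuming the common practice of fitting a two-component mixture to per-sample losses and treating the high-loss component as noisy (the paper's detector may use different inputs):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic per-sample losses: a low-loss "clean" mode and a high-loss "noisy" mode.
losses = np.concatenate([np.random.RandomState(0).normal(0.2, 0.05, 900),
                         np.random.RandomState(1).normal(1.5, 0.30, 100)])
gmm = GaussianMixture(n_components=2, random_state=0).fit(losses.reshape(-1, 1))
noisy_comp = np.argmax(gmm.means_.ravel())                 # component with the larger mean
p_noisy = gmm.predict_proba(losses.reshape(-1, 1))[:, noisy_comp]
print((p_noisy > 0.5).sum(), "samples flagged as noisy")
```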
26

Zhao, Na, and Gim Hee Lee. "Robust Visual Recognition with Class-Imbalanced Open-World Noisy Data." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16989–97. http://dx.doi.org/10.1609/aaai.v38i15.29642.

Abstract:
Learning from open-world noisy data, where both closed-set and open-set noise co-exist in the dataset, is a realistic but underexplored setting. Only recently, several efforts have been initialized to tackle this problem. However, these works assume the classes are balanced when dealing with open-world noisy data. This assumption often violates the nature of real-world large-scale datasets, where the label distributions are generally long-tailed, i.e. class-imbalanced. In this paper, we study the problem of robust visual recognition with class-imbalanced open-world noisy data. We propose a probabilistic graphical model-based approach: iMRF to achieve label noise correction that is robust to class imbalance via an efficient iterative inference of a Markov Random Field (MRF) in each training mini-batch. Furthermore, we design an agreement-based thresholding strategy to adaptively collect clean samples from all classes that includes corrected closed-set noisy samples while rejecting open-set noisy samples. We also introduce a noise-aware balanced cross-entropy loss to explicitly eliminate the bias caused by class-imbalanced data. Extensive experiments on several benchmark datasets including synthetic and real-world noisy datasets demonstrate the superior performance robustness of our method over existing methods. Our code is available at https://github.com/Na-Z/LIOND.
27

Xi, Mengfei, Jie Li, Zhilin He, Minmin Yu, and Fen Qin. "NRN-RSSEG: A Deep Neural Network Model for Combating Label Noise in Semantic Segmentation of Remote Sensing Images." Remote Sensing 15, no. 1 (December 25, 2022): 108. http://dx.doi.org/10.3390/rs15010108.

Abstract:
The performance of deep neural networks depends on the accuracy of labeled samples, which usually contain label noise. This study examines the semantic segmentation of remote sensing images that include label noise and proposes an anti-label-noise network framework, termed Labeled Noise Robust Network in Remote Sensing Image Semantic Segmentation (NRN-RSSEG), to combat label noise. The algorithm combines three main components: the network, an attention mechanism, and a noise-robust loss function. Three different noise rates (containing both symmetric and asymmetric noise) were simulated to test the noise resistance of the network. Validation was performed in the Vaihingen region of the ISPRS Vaihingen 2D semantic labeling dataset, and the performance of the network was evaluated by comparing NRN-RSSEG with the original U-Net model. The results show that NRN-RSSEG maintains a high accuracy on both clean and noisy datasets. Specifically, NRN-RSSEG outperforms U-Net in terms of PA, MPA, Kappa, Mean_F1, and FWIoU on noisy datasets; as the noise rate increases, every metric of U-Net shows a decreasing trend, while the performance of NRN-RSSEG decreases slowly and some metrics even show an increasing trend. At a noise rate of 0.5, the PA (−6.14%), MPA (−4.27%), Kappa (−8.55%), Mean_F1 (−5.11%), and FWIoU (−9.75%) of U-Net degrade faster, while the PA (−2.51%), Kappa (−3.33%), and FWIoU (−3.26%) of NRN-RSSEG degrade more slowly, and its MPA (+1.41%) and Mean_F1 (+2.69%) show an increasing trend. Furthermore, comparing the proposed model with the baseline method, the results demonstrate that the proposed NRN-RSSEG anti-noise framework can effectively help the current segmentation model to overcome the adverse effects of noisy label training.
28

Oyewola, David Opeoluwa, Emmanuel Gbenga Dada, Sanjay Misra, and Robertas Damaševičius. "Predicting COVID-19 Cases in South Korea with All K-Edited Nearest Neighbors Noise Filter and Machine Learning Techniques." Information 12, no. 12 (December 19, 2021): 528. http://dx.doi.org/10.3390/info12120528.

Abstract:
The application of machine learning techniques to the epidemiology of COVID-19 is a necessary measure that can be exploited to curtail the further spread of this endemic. Conventional techniques used to determine the epidemiology of COVID-19 are slow and costly, and data are scarce. We investigate the effects of noise filters on the performance of machine learning algorithms on the COVID-19 epidemiology dataset. Noise filter algorithms are used to remove noise from the datasets utilized in this study. We applied nine machine learning techniques to classify the epidemiology of COVID-19, which are bagging, boosting, support vector machine, bidirectional long short-term memory, decision tree, naïve Bayes, k-nearest neighbor, random forest, and multinomial logistic regression. Data from patients who contracted coronavirus disease were collected from the Kaggle database between 23 January 2020 and 24 June 2020. Noisy and filtered data were used in our experiments. As a result of denoising, machine learning models have produced high results for the prediction of COVID-19 cases in South Korea. For isolated cases after performing noise filtering operations, machine learning techniques achieved an accuracy between 98–100%. The results indicate that filtering noise from the dataset can improve the accuracy of COVID-19 case prediction algorithms.
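For a concrete feel of this kind of filtering, here is a minimal sketch assuming imbalanced-learn's AllKNN (an all-k-edited-nearest-neighbours cleaner; not necessarily the exact filter used in the paper): samples whose neighbourhoods disagree with their labels are removed before training a classifier.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from imblearn.under_sampling import AllKNN

X, y = make_classification(n_samples=1000, flip_y=0.1, random_state=0)  # 10% label noise
X_clean, y_clean = AllKNN().fit_resample(X, y)
print(len(y), "->", len(y_clean), "samples after filtering")
RandomForestClassifier(random_state=0).fit(X_clean, y_clean)
```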
29

Rasheed, Jawad, Ahmad B. Wardak, Adnan M. Abu-Mahfouz, Tariq Umer, Mirsat Yesiltepe, and Sadaf Waziry. "An Efficient Machine Learning-Based Model to Effectively Classify the Type of Noises in QR Code: A Hybrid Approach." Symmetry 14, no. 10 (October 8, 2022): 2098. http://dx.doi.org/10.3390/sym14102098.

Abstract:
Granting smart device consumers with information, simply and quickly, is what drives quick response (QR) codes and mobile marketing to go hand in hand. It boosts marketing campaigns and objectives and allows one to approach, engage, influence, and transform a wider target audience by connecting from offline to online platforms. However, restricted printing technology and flexibility in surfaces introduce noise while printing QR code images. Moreover, noise is often unavoidable during the gathering and transmission of digital images. Therefore, this paper proposed an automatic and accurate noise detector to identify the type of noise present in QR code images. For this, the paper first generates a new dataset comprising 10,000 original QR code images of varying sizes and later introduces several noises, including salt and pepper, pepper, speckle, Poisson, salt, local var, and Gaussian to form a dataset of 80,000 images. We perform extensive experiments by reshaping the generated images to uniform size for exploiting Convolutional Neural Network (CNN), Support Vector Machine (SVM), and Logistic Regression (LG) to classify the original and noisy images. Later, the analysis is further widened by incorporating histogram density analysis to trace and target highly important features by transforming images of varying sizes to obtain 256 features, followed by SVM, LG, and Artificial Neural Network (ANN) to identify the noise type. Moreover, to understand the impact of symmetry of noises in QR code images, we trained the models with combinations of 3-, 5-, and 7-noise types and analyzed the classification performance. From comparative analyses, it is noted that the Gaussian and Localvar noises possess symmetrical characteristics, as all the classifiers did not perform well to segregate these two noises. The results prove that histogram analysis significantly improves classification accuracy with all exploited models, especially when combined with SVM, it achieved maximum accuracy for 4- and 6-class classification problems.
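A minimal sketch of the histogram-density idea on synthetic data (the paper uses generated QR-code images and more noise types; everything below is an illustrative assumption): reduce each image, whatever its size, to a 256-bin intensity histogram and classify the noise type with an SVM.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)

def noisy_image(kind):
    img = rng.randint(0, 2, (64, 64)).astype(float)   # toy binary "QR-like" pattern
    if kind == 0:                                      # Gaussian noise
        img += rng.normal(0, 0.2, img.shape)
    else:                                              # salt-and-pepper noise
        mask = rng.rand(*img.shape) < 0.05
        img[mask] = rng.randint(0, 2, mask.sum())
    return np.clip(img, 0, 1)

X, y = [], []
for kind in (0, 1):
    for _ in range(200):
        hist, _ = np.histogram(noisy_image(kind), bins=256, range=(0, 1), density=True)
        X.append(hist)
        y.append(kind)
clf = SVC().fit(np.array(X), np.array(y))
print(clf.score(np.array(X), np.array(y)))             # training accuracy on the toy data
```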
30

Wang, Zixiao, Junwu Weng, Chun Yuan, and Jue Wang. "Truncate-Split-Contrast: A Framework for Learning from Mislabeled Videos." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 2751–58. http://dx.doi.org/10.1609/aaai.v37i3.25375.

Abstract:
Learning with noisy label is a classic problem that has been extensively studied for image tasks, but much less for video in the literature. A straightforward migration from images to videos without considering temporal semantics and computational cost is not a sound choice. In this paper, we propose two new strategies for video analysis with noisy labels: 1) a lightweight channel selection method dubbed as Channel Truncation for feature-based label noise detection. This method selects the most discriminative channels to split clean and noisy instances in each category. 2) A novel contrastive strategy dubbed as Noise Contrastive Learning, which constructs the relationship between clean and noisy instances to regularize model training. Experiments on three well-known benchmark datasets for video classification show that our proposed truNcatE-split-contrAsT (NEAT) significantly outperforms the existing baselines. By reducing the dimension to 10% of it, our method achieves over 0.4 noise detection F1-score and 5% classification accuracy improvement on Mini-Kinetics dataset under severe noise (symmetric-80%). Thanks to Noise Contrastive Learning, the average classification accuracy improvement on Mini-Kinetics and Sth-Sth-V1 is over 1.6%.
31

Moura, Kecia G., Ricardo B. C. Prudêncio, and George D. C. Cavalcanti. "Label noise detection under the noise at random model with ensemble filters." Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1119–38. http://dx.doi.org/10.3233/ida-215980.

Abstract:
Label noise detection has been widely studied in Machine Learning because of its importance in improving training data quality. Satisfactory noise detection has been achieved by adopting ensembles of classifiers. In this approach, an instance is assigned as mislabeled if a high proportion of members in the pool misclassifies it. Previous authors have empirically evaluated this approach; nevertheless, they mostly assumed that label noise is generated completely at random in a dataset. This is a strong assumption since other types of label noise are feasible in practice and can influence noise detection results. This work investigates the performance of ensemble noise detection under two different noise models: the Noisy at Random (NAR), in which the probability of label noise depends on the instance class, in comparison to the Noisy Completely at Random model, in which the probability of label noise is entirely independent. In this setting, we investigate the effect of class distribution on noise detection performance since it changes the total noise level observed in a dataset under the NAR assumption. Further, an evaluation of the ensemble vote threshold is conducted to contrast with the most common approaches in the literature. In many performed experiments, choosing a noise generation model over another can lead to different results when considering aspects such as class imbalance and noise level ratio among different classes.
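A minimal sketch of an ensemble filter with a vote threshold, assuming a small scikit-learn pool and out-of-fold predictions (the paper's study compares thresholds and noise models in much more depth): an instance is flagged as mislabeled when the fraction of pool members that misclassify it reaches the threshold.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, flip_y=0.1, random_state=0)
pool = [LogisticRegression(max_iter=1000), GaussianNB(), DecisionTreeClassifier(random_state=0)]
votes = np.mean([cross_val_predict(m, X, y, cv=5) != y for m in pool], axis=0)
threshold = 0.5                     # majority vote; 1.0 would be a consensus filter
print((votes >= threshold).sum(), "instances flagged as label noise")
```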
32

Singh, Abhishek, and Anil Kumar. "Introduction of Local Spatial Constraints and Local Similarity Estimation in Possibilistic c-Means Algorithm for Remotely Sensed Imagery." Journal of Modeling and Optimization 11, no. 1 (June 15, 2019): 51–56. http://dx.doi.org/10.32732/jmo.2019.11.1.51.

Abstract:
This paper presents a unique Possibilistic c-Means with constraints (PCM-S) with Adaptive Possibilistic Local Information c-Means (ADPLICM) in a supervised way, incorporating local information through local spatial constraints and local similarity measures in the Possibilistic c-Means algorithm. PCM-S with ADPLICM overcomes the limitations of the known Possibilistic c-Means (PCM) and Possibilistic c-Means with constraints (PCM-S) algorithms. The major contribution of the proposed algorithm is to ensure noise resistance in the presence of random salt-and-pepper noise. The effectiveness of the proposed algorithm has been analysed by adding random salt-and-pepper noise to the original dataset and computing the Root Mean Square Error (RMSE) between the original and noisy datasets. It has been observed that PCM-S with ADPLICM is effective in minimizing noise during supervised classification by introducing local convolution.
33

Akyel, Cihan, and Nursal Arıcı. "LinkNet-B7: Noise Removal and Lesion Segmentation in Images of Skin Cancer." Mathematics 10, no. 5 (February 25, 2022): 736. http://dx.doi.org/10.3390/math10050736.

Abstract:
Skin cancer is common nowadays, and early diagnosis is essential to increase patients' survival rate. In addition to traditional methods, computer-aided diagnosis is used in the diagnosis of skin cancer; one of its benefits is that it eliminates human error in cancer diagnosis. Skin images may contain noise such as hair, ink spots, and rulers in addition to the lesion, so noise removal is required; this phase is very important for the correct segmentation of the lesions. One of the most critical problems with such automated methods is inaccurate cancer diagnosis when noise removal and segmentation cannot be performed effectively. We have created a noise dataset (hair, rulers, ink spots, etc.) that includes 2500 images and masks; no such noise dataset previously existed in the literature. We used this dataset for noise removal in skin cancer images. Two datasets, from the International Skin Imaging Collaboration (ISIC) and PH2, were used in this study. A new approach called LinkNet-B7 for noise removal and segmentation of skin cancer images is presented; LinkNet-B7 is a LinkNet-based approach that uses EfficientNetB7 as the encoder. We used images with 16 slices, so fewer pixel values are lost. LinkNet-B7 has a 6% higher success rate than LinkNet with the same dataset and parameters. Training accuracy for noise removal and lesion segmentation was calculated to be 95.72% and 97.80%, respectively.
34

Yi, Qian, Guixuan Zhang, and Shuwu Zhang. "Utilizing Entity-Based Gated Convolution and Multilevel Sentence Attention to Improve Distantly Supervised Relation Extraction." Computational Intelligence and Neuroscience 2021 (November 1, 2021): 1–10. http://dx.doi.org/10.1155/2021/6110885.

Abstract:
Distant supervision is an effective method to automatically collect large-scale datasets for relation extraction (RE). Automatically constructed datasets usually comprise two types of noise: the intrasentence noise and the wrongly labeled noisy sentence. To address issues caused by the above two types of noise and improve distantly supervised relation extraction, this paper proposes a novel distantly supervised relation extraction model, which consists of an entity-based gated convolution sentence encoder and a multilevel sentence selective attention (Matt) module. Specifically, we first apply an entity-based gated convolution operation to force the sentence encoder to extract entity-pair-related features and filter out useless intrasentence noise information. Furthermore, the multilevel attention schema fuses the bag information to obtain a fine-grained bag-specific query vector, which can better identify valid sentences and reduce the influence of wrongly labeled sentences. Experimental results on a large-scale benchmark dataset show that our model can effectively reduce the influence of the above two types of noise and achieves state-of-the-art performance in relation extraction.
35

Yoo, Seok Bong, and Mikyong Han. "SCENet: Secondary Domain Intercorrelation Enhanced Network for Alleviating Compressed Poisson Noises." Sensors 19, no. 8 (April 25, 2019): 1939. http://dx.doi.org/10.3390/s19081939.

Abstract:
In real image coding systems, block-based coding is often applied to images contaminated by camera sensor noises such as Poisson noises, which cause complicated types of noises called compressed Poisson noises. Although many restoration methods have recently been proposed for compressed images, they do not provide satisfactory performance on the challenging compressed Poisson noises. This is mainly due to (i) inaccurate modeling of the image degradation, (ii) the signal-dependent noise property, and (iii) the lack of analysis of intercorrelation distortion. In this paper, we focus on these challenging issues in practical image coding systems and propose a compressed Poisson noise reduction scheme based on a secondary domain intercorrelation enhanced network. Specifically, we introduce a compressed Poisson noise corruption model and combine the secondary domain intercorrelation prior with a deep neural network designed especially for signal-dependent compression noise reduction. Experimental results show that the proposed network is superior to the existing state-of-the-art restoration alternatives on classical images, the LIVE1 dataset, and the SIDD dataset.
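
To make the degradation the paper targets concrete, the sketch below simulates it only (signal-dependent Poisson noise followed by block-based JPEG compression); the restoration network itself is not reproduced, and the input filename and parameter values are placeholders:

    # Sketch of the "compressed Poisson noise" degradation model.
    import cv2
    import numpy as np

    img = cv2.imread("clean.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)  # hypothetical input
    peak = 30.0                                            # lower peak -> stronger Poisson noise
    poisson = np.random.poisson(img / 255.0 * peak) / peak * 255.0
    poisson = np.clip(poisson, 0, 255).astype(np.uint8)

    ok, buf = cv2.imencode(".jpg", poisson, [cv2.IMWRITE_JPEG_QUALITY, 30])
    degraded = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)     # compressed Poisson-noisy image
    print("degraded image shape:", degraded.shape)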
36

Guan, Donghai, Maqbool Hussain, Weiwei Yuan, Asad Masood Khattak, Muhammad Fahim, and Wajahat Ali Khan. "Enhanced Label Noise Filtering with Multiple Voting." Applied Sciences 9, no. 23 (November 21, 2019): 5031. http://dx.doi.org/10.3390/app9235031.

Abstract:
Label noise exists in many applications, and its presence can degrade learning performance. Researchers usually use filters to identify and eliminate mislabeled instances prior to training. The ensemble learning based filter (EnFilter) is the most widely used filter. According to the voting mechanism, EnFilter is mainly divided into two types: single-voting based (SVFilter) and multiple-voting based (MVFilter). In general, MVFilter is preferred because multiple voting can address the intrinsic limitations of single voting. However, the most important unsolved issue in MVFilter is how to determine the optimal decision point (ODP). Conceptually, the decision point is a threshold value that determines the noise detection performance. To maximize the performance of MVFilter, we propose a novel approach to compute the optimal decision point. Our approach is data driven and cost sensitive: it determines the ODP based on the given noisy training dataset and a noise misrecognition cost matrix. The core idea of our approach is to estimate the mislabeled data probability distributions, from which the expected cost of each possible decision point can be inferred. Experimental results on a set of benchmark datasets illustrate the utility of the proposed approach.
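
A hedged sketch of the cost-sensitive thresholding idea (the probability estimates, costs, and candidate grid below are stand-ins; the paper's estimator of mislabeling probabilities is not reproduced):

    # Choose the decision point that minimizes expected cost, given assumed
    # per-sample mislabeling probabilities and a misrecognition cost matrix.
    import numpy as np

    rng = np.random.default_rng(1)
    p_mislabeled = rng.beta(0.5, 5.0, size=1000)   # stand-in for estimated noise probabilities

    cost_fn = 5.0   # cost of keeping a mislabeled sample (missed detection)
    cost_fp = 1.0   # cost of discarding a correctly labeled sample (false alarm)

    def expected_cost(threshold):
        keep = p_mislabeled < threshold
        # expected cost = mislabeled-but-kept + clean-but-removed
        return (p_mislabeled[keep].sum() * cost_fn
                + (1.0 - p_mislabeled[~keep]).sum() * cost_fp)

    candidates = np.linspace(0.05, 0.95, 19)
    costs = [expected_cost(t) for t in candidates]
    best = candidates[int(np.argmin(costs))]
    print(f"cost-minimizing decision point ~ {best:.2f}")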
37

Delisle, J. B., N. Hara, and D. Ségransan. "Efficient modeling of correlated noise." Astronomy & Astrophysics 638 (June 2020): A95. http://dx.doi.org/10.1051/0004-6361/201936906.

Abstract:
Correlated noise affects most astronomical datasets, and neglecting to account for it can lead to spurious signal detections, especially in low signal-to-noise conditions, which is often the context in which new discoveries are pursued. For instance, in the realm of exoplanet detection with radial velocity time series, stellar variability can induce false detections. However, a white noise approximation is often used because accounting for correlated noise when analyzing data implies a more complex analysis. Moreover, the computational cost can be prohibitive as it typically scales as the cube of the dataset size. For some restricted classes of correlated noise models, there are specific algorithms that can be used to help bring down the computational cost. This improvement in speed is particularly useful in the context of Gaussian process regression; however, it comes at the expense of the generality of the noise model. In this article, we present the S + LEAF noise model, which allows us to account for a large class of correlated noises with a linear scaling of the computational cost with respect to the size of the dataset. The S + LEAF model includes, in particular, mixtures of quasiperiodic kernels and calibration noise. This efficient modeling is made possible by a sparse representation of the covariance matrix of the noise and the use of dedicated algorithms for matrix inversion, solving, determinant computation, etc. We applied the S + LEAF model to reanalyze the HARPS radial velocity time series of the recently published planetary system HD 136352. We illustrate the flexibility of the S + LEAF model in handling various sources of noise. We demonstrate the importance of taking correlated noise into account, and especially calibration noise, to correctly assess the significance of detected signals.
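
For readers unfamiliar with the cubic cost being avoided, here is a generic numpy sketch of a correlated-noise (Gaussian process) log-likelihood for a radial-velocity-like series; the kernel, time sampling, and values are illustrative, and the naive Cholesky solve below is exactly the O(n^3) step that sparse/semiseparable representations such as S+LEAF replace with linear-cost algorithms (the S+LEAF package's own API is not shown here):

    import numpy as np

    def quasiperiodic_kernel(t, sigma=3.0, rho=20.0, period=25.0):
        # exponential decay times cosine: a simple quasiperiodic covariance
        dt = np.abs(t[:, None] - t[None, :])
        return sigma**2 * np.exp(-dt / rho) * np.cos(2 * np.pi * dt / period)

    t = np.sort(np.random.default_rng(2).uniform(0, 200, 150))   # observation epochs (days)
    y = np.sin(2 * np.pi * t / 11.6) + np.random.default_rng(3).normal(0, 0.5, t.size)
    yerr = np.full(t.size, 0.5)

    K = quasiperiodic_kernel(t) + np.diag(yerr**2)                # correlated + white noise
    L = np.linalg.cholesky(K)                                     # O(n^3) for a dense matrix
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    loglike = -0.5 * (y @ alpha) - np.log(np.diag(L)).sum() - 0.5 * t.size * np.log(2 * np.pi)
    print(f"log-likelihood under the correlated-noise model: {loglike:.1f}")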
38

Garg, Siddhant, Thuy Vu, and Alessandro Moschitti. "TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7780–88. http://dx.doi.org/10.1609/aaai.v34i05.6282.

Abstract:
We propose TandA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it with a large and high-quality dataset. We then perform a second fine-tuning step to adapt the transferred model to the target domain. We demonstrate the benefits of our approach for answer sentence selection, which is a well-known inference task in Question Answering. We built a large-scale dataset to enable the transfer step, exploiting the Natural Questions dataset. Our approach establishes the state of the art on two well-known benchmarks, WikiQA and TREC-QA, achieving impressive MAP scores of 92% and 94.3%, respectively, which largely outperform the previous highest scores of 83.4% and 87.5%. We empirically show that TandA generates more stable and robust models, reducing the effort required for selecting optimal hyper-parameters. Additionally, we show that the transfer step of TandA makes the adaptation step more robust to noise, enabling a more effective use of noisy datasets for fine-tuning. Finally, we also confirm the positive impact of TandA in an industrial setting, using domain-specific datasets subject to different types of noise.
39

Cheng, Hu, Sophia Vinci-Booher, Jian Wang, Bradley Caron, Qiuting Wen, Sharlene Newman, and Franco Pestilli. "Denoising diffusion weighted imaging data using convolutional neural networks." PLOS ONE 17, no. 9 (September 15, 2022): e0274396. http://dx.doi.org/10.1371/journal.pone.0274396.

Abstract:
Diffusion weighted imaging (DWI) with multiple, high b-values is critical for extracting tissue microstructure measurements; however, high b-value DWI images contain high noise levels that can overwhelm the signal of interest and bias microstructural measurements. Here, we propose a simple denoising method that can be applied to any dataset, provided a low-noise, single-subject dataset is acquired using the same DWI sequence. The denoising method uses a one-dimensional convolutional neural network (1D-CNN) and deep learning to learn from a low-noise dataset, voxel-by-voxel. The trained model can then be applied to high-noise datasets from other subjects. We validated the 1D-CNN denoising method by first demonstrating, on simulated DWI data, that 1D-CNN denoising resulted in DWI images more similar to the noise-free ground truth than comparable denoising methods, e.g., MP-PCA. Using the same DWI acquisition reconstructed with two common reconstruction methods, i.e., SENSE1 and sum-of-squares, to generate a pair of low-noise and high-noise datasets, we then demonstrated that 1D-CNN denoising of high-noise DWI data collected from human subjects showed promising results in three domains: DWI images, diffusion metrics, and tractography. In particular, the denoised images were very similar to a low-noise reference image of that subject, more similar than repeated low-noise images are to each other (i.e., computational reproducibility). Finally, we demonstrated the use of the 1D-CNN method in two practical examples to reduce noise from parallel imaging and simultaneous multi-slice acquisition. We conclude that the 1D-CNN denoising method is a simple, effective denoising method for DWI images that overcomes some of the limitations of current state-of-the-art denoising methods, such as the need for a large number of training subjects and the need to account for the rectified noise floor.
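
A hedged PyTorch sketch of the general idea, learning a voxel-wise mapping from a high-noise signal (values across diffusion volumes) to the matching low-noise reconstruction; the layer sizes, number of volumes, and synthetic training pairs below are illustrative, not the paper's configuration:

    import torch
    import torch.nn as nn

    n_volumes = 64                                  # diffusion-weighted volumes per voxel
    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv1d(16, 1, kernel_size=3, padding=1),
    )

    # Stand-in training pairs: (noisy, low-noise) signals for a batch of voxels.
    clean = torch.rand(256, 1, n_volumes)
    noisy = clean + 0.1 * torch.randn_like(clean)

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for step in range(200):
        opt.zero_grad()
        loss = loss_fn(model(noisy), clean)   # regress noisy voxel signals onto clean ones
        loss.backward()
        opt.step()
    print(f"final training MSE: {loss.item():.4f}")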
40

Xiong, Shuguang, Huitao Zhang, and Meng Wang. "Ensemble Model of Attention Mechanism-Based DCGAN and Autoencoder for Noised OCR Classification." Journal of Electronic & Information Systems 4, no. 1 (March 31, 2022): 33–41. http://dx.doi.org/10.30564/jeis.v4i1.6725.

Abstract:
Optical Character Recognition (OCR) is a technology that converts images of text into machine-readable formats, essential for digitizing printed texts and enabling digital searches. Traditional OCR methods often struggle with variations in font styles and noise. This paper proposes an innovative approach to enhance OCR classification under challenging conditions by leveraging an ensemble model that combines an Attention Mechanism-Based Generative Adversarial Network (GAN) and an Autoencoder. The GAN generates synthetic data to mitigate the limitations of small datasets, while the autoencoder extracts robust features from noisy images. The model undergoes a two-phase training process, initially learning from the augmented dataset and then fine-tuning on a smaller, labeled dataset. Grad-CAM is used to demonstrate interpretability, highlighting the attention regions during predictions. Experimental results show significant improvements in OCR accuracy and robustness, validating the effectiveness of the proposed method in handling noise and limited training data.
41

Sineglazov, Victor, and Kyrylo Lesohorskyi. "On Noise Effect in Semi-supervised Learning." Electronics and Control Systems 1, no. 71 (June 27, 2022): 9–15. http://dx.doi.org/10.18372/1990-5548.71.16816.

Abstract:
The article deals with the problem of noise effect on semi-supervised learning. The goal of this article is to analyze the impact of noise on the accuracy of binary classification models created using three semi-supervised learning algorithms, namely Simple Recycled Selection, Incrementally Reinforced Selection, and the Hybrid Algorithm, using Support Vector Machines to build the base classifier. Different algorithms for computing similarity matrices, namely the Radial Basis Function, Cosine Similarity, and K-Nearest Neighbours, were analyzed to understand their effect on model accuracy. For benchmarking purposes, datasets from the UCI repository were used. To test the noise effect, different amounts of artificially generated, randomly labeled samples were introduced into the dataset using three strategies (labeled, unlabeled, and mixed) and compared to the baseline classifier trained with the original dataset and the classifier trained on the reduced-size original dataset. The results show that introducing random noise into the labeled samples decreases classifier accuracy, while a moderate amount of noise in the unlabeled samples can have a positive effect on classifier accuracy.
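
The labeled-noise part of this experiment is easy to reproduce in miniature; the sketch below flips a fraction of training labels at random and measures how SVM accuracy degrades (the UCI dataset choice and the semi-supervised selection algorithms themselves are omitted, so this only illustrates the noise-injection protocol):

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    rng = np.random.default_rng(0)
    for noise_rate in (0.0, 0.1, 0.2, 0.3):
        y_noisy = y_tr.copy()
        flip = rng.random(len(y_noisy)) < noise_rate      # randomly mislabel a fraction
        y_noisy[flip] = 1 - y_noisy[flip]
        acc = SVC(kernel="rbf").fit(X_tr, y_noisy).score(X_te, y_te)
        print(f"label noise {noise_rate:.0%}: test accuracy {acc:.3f}")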
42

Cao, Like, Jie Ling, and Xiaohui Xiao. "Study on the Influence of Image Noise on Monocular Feature-Based Visual SLAM Based on FFDNet." Sensors 20, no. 17 (August 31, 2020): 4922. http://dx.doi.org/10.3390/s20174922.

Abstract:
Noise appears in images captured by real cameras. This paper studies the influence of noise on monocular feature-based visual Simultaneous Localization and Mapping (SLAM). First, an open-source synthetic dataset with different noise levels is introduced. Then the images in the dataset are denoised using the Fast and Flexible Denoising convolutional neural Network (FFDNet), and the matching performances of Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB), which are commonly used in feature-based SLAM, are compared; the results show that ORB has a higher correct matching rate than SIFT and SURF, and that denoised images have a higher correct matching rate than noisy images. Next, the Absolute Trajectory Error (ATE) of noisy and denoised sequences is evaluated on ORB-SLAM2, and the results show that the denoised sequences perform better than the noisy sequences at any noise level. Finally, the completely clean sequence in the dataset and the sequences in the KITTI dataset are denoised and compared with the original sequences through comprehensive experiments. For the clean sequence, the Root-Mean-Square Error (RMSE) of the ATE after denoising decreased by 16.75%; for the KITTI sequences, 7 out of 10 sequences have lower RMSE than the original sequences. The results show that denoised images can achieve higher accuracy in monocular feature-based visual SLAM under certain conditions.
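
An OpenCV sketch of the matching comparison only: count ORB matches between an image and a noise-corrupted copy of itself. The FFDNet denoising step and the ORB-SLAM2 trajectory evaluation are outside the scope of this snippet, and "frame.png" as well as the noise level and distance gate are placeholder choices:

    import cv2
    import numpy as np

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)          # hypothetical input frame
    noisy = np.clip(img + np.random.normal(0, 25, img.shape), 0, 255).astype(np.uint8)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img, None)
    kp2, des2 = orb.detectAndCompute(noisy, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]                # simple distance gate
    print(f"{len(good)} good ORB matches out of {len(matches)}")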
43

Hsieh, Ming-En, and Vincent Tseng. "Boosting Multi-task Learning Through Combination of Task Labels - with Applications in ECG Phenotyping." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7771–79. http://dx.doi.org/10.1609/aaai.v35i9.16949.

Abstract:
Multi-task learning has increased in importance due to its superior performance when learning multiple different tasks simultaneously and its ability to perform several different tasks using a single model. In medical phenotyping, task labels are costly to acquire and might contain a certain degree of label noise. This decreases the efficiency of using additional human labels as auxiliary tasks when applying multi-task learning to medical phenotyping. In this work, we propose an effective multi-task learning framework, CO-TASK, to boost multi-task learning performance by generating auxiliary tasks through COmbination of TASK labels. The proposed CO-TASK framework generates auxiliary tasks without additional labeling effort, is robust to a certain degree of label noise, and can be applied in parallel with various multi-task learning techniques. We evaluated its performance on the CIFAR-MTL dataset and demonstrated its effectiveness in medical phenotyping using two large-scale ECG phenotyping datasets: an 18-disease multi-label dataset (ECG-P18) and a dataset for predicting an echocardiogram diagnosis from electrocardiograms (ECG-EchoLVH). On the CIFAR-MTL dataset, we doubled the average per-task performance gain of the multi-task learning model from 4.38% to 9.78%. With the proposed task-aware imbalance data sampler, the CO-TASK framework can effectively deal with the different imbalance ratios of the different tasks in electrocardiogram phenotyping datasets. Combined with noisy annotations as minor tasks, the proposed framework increased sensitivity by 7.1% compared to the single-task model while maintaining the same specificity as the doctor annotations on the ECG-EchoLVH dataset.
44

Li, Gang, Jan Zrimec, Boyang Ji, Jun Geng, Johan Larsbrink, Aleksej Zelezniak, Jens Nielsen, and Martin KM Engqvist. "Performance of Regression Models as a Function of Experiment Noise." Bioinformatics and Biology Insights 15 (January 2021): 117793222110203. http://dx.doi.org/10.1177/11779322211020315.

Abstract:
Background: A challenge in developing machine learning regression models is that it is difficult to know whether maximal performance has been reached on the test dataset, or whether further model improvement is possible. In biology, this problem is particularly pronounced as sample labels (response variables) are typically obtained through experiments and therefore have experiment noise associated with them. Such label noise puts a fundamental limit to the metrics of performance attainable by regression models on the test dataset. Results: We address this challenge by deriving an expected upper bound for the coefficient of determination (R²) for regression models when tested on the holdout dataset. This upper bound depends only on the noise associated with the response variable in a dataset as well as its variance. The upper bound estimate was validated via Monte Carlo simulations and then used as a tool to bootstrap performance of regression models trained on biological datasets, including protein sequence data, transcriptomic data, and genomic data. Conclusions: The new method for estimating upper bounds for model performance on test data should aid researchers in developing ML regression models that reach their maximum potential. Although we study biological datasets in this work, the new upper bound estimates will hold true for regression models from any research field or application area where response variables have associated noise.
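
A small Monte Carlo sketch of the core point: experiment noise in the response puts a ceiling on test-set R² of roughly 1 - sigma_noise² / Var(y), even for a model that predicts the noise-free response perfectly. The simulated data and the exact form of the ceiling shown here are illustrative approximations, not the paper's derived bound:

    import numpy as np
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n = 5000
    y_true = rng.normal(0, 1.0, n)                       # noise-free response
    sigma_noise = 0.5
    y_observed = y_true + rng.normal(0, sigma_noise, n)  # labels carry experiment noise

    # Even a perfect model (predicting y_true exactly) cannot explain the noise part:
    r2_perfect_model = r2_score(y_observed, y_true)
    approx_ceiling = 1 - sigma_noise**2 / np.var(y_observed)
    print(f"R^2 of a perfect model on noisy labels: {r2_perfect_model:.3f}")
    print(f"approximate ceiling 1 - sigma^2/Var(y):  {approx_ceiling:.3f}")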
45

Liu, Haiqing, Daoxing Li, and Yuancheng Li. "Confident sequence learning: A sequence class-label noise filtering technique to improve scene digit recognition." Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 9345–59. http://dx.doi.org/10.3233/jifs-201825.

Abstract:
Reading digits from natural images is a challenging computer vision task central to a variety of emerging applications. However, the increased scale and complexity of datasets and applications bring about inevitable label noise. Because the label noise in scene digit recognition datasets is sequence-like, most existing methods cannot deal with it. We propose a novel sequence class-label noise filter called Confident Sequence Learning, which consists of two critical parts: the sequence-like confidence segmentation algorithm and the Confident Learning method. The sequence-like confidence segmentation algorithm slices the sequence-like labels and the sequence-like predicted probabilities and reorganizes them in the form of an independent stochastic process and a white noise process. The Confident Learning method then estimates the joint distribution between observed labels and latent labels using the segmented labels and probabilities. Experiments on the TRDG and SVHN datasets showed that Confident Sequence Learning can find label errors with high accuracy and significantly improve the performance of the VGG-Attn and TPS-ResNet-Attn models in the presence of synthetic sequence class-label noise.
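
For context, a bare-bones sketch of the confident-learning step that the method builds on: estimating the joint between observed (noisy) and latent labels from predicted probabilities. This is a simplified, character-level version of the general idea, not the paper's sequence-aware variant, and the synthetic probabilities below are placeholders:

    import numpy as np

    def confident_joint(noisy_labels, pred_probs):
        n_classes = pred_probs.shape[1]
        # per-class threshold: average self-confidence of examples given that label
        thresholds = np.array([pred_probs[noisy_labels == j, j].mean()
                               for j in range(n_classes)])
        joint = np.zeros((n_classes, n_classes), dtype=int)
        for i, probs in enumerate(pred_probs):
            above = np.where(probs >= thresholds)[0]
            if len(above):
                latent = above[np.argmax(probs[above])]   # most likely true label
                joint[noisy_labels[i], latent] += 1
        return joint

    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(3), size=200)
    labels = probs.argmax(axis=1)
    labels[:20] = (labels[:20] + 1) % 3                   # inject some label noise
    print(confident_joint(labels, probs))

Off-diagonal mass in the resulting matrix points to likely label errors, which is what the filter removes before retraining.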
46

Oguntunde, Pelumi E., Hilary I. Okagbue, Omoleye A. Oguntunde, and Oluwole A. Odetunmibi. "A Study of Noise Pollution Measurements and Possible Effects on Public Health in Ota Metropolis, Nigeria." Open Access Macedonian Journal of Medical Sciences 7, no. 8 (April 29, 2019): 1391–95. http://dx.doi.org/10.3889/oamjms.2019.234.

Abstract:
BACKGROUND: Noise pollution has become a major environmental problem, leading to nuisances and health issues. AIM: This paper aims to study and analyse the noise pollution levels in major areas of Ota metropolis. A probability model capable of predicting the noise pollution level is also determined. METHODS: Datasets on the noise pollution level in 41 locations across Ota metropolis were used in this research. The datasets were collected three times per day: morning, afternoon, and evening. Descriptive statistics were performed, and analysis of variance was conducted using Minitab version 17.0 software. EasyFit software was then used to select the probability model that best describes the dataset. RESULTS: The noise levels are far above the WHO recommendations. Also, there is no significant difference in the noise pollution levels among the times of day considered. The log-logistic distribution provides the best fit to the dataset based on the Kolmogorov-Smirnov goodness-of-fit test. CONCLUSION: The fitted probability model can help in the prediction of noise pollution and act as a yardstick for the reduction of noise pollution, thereby improving the public health of the populace.
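
The distribution-fitting step can be illustrated with scipy, where the log-logistic distribution is available as the Fisk distribution; the readings below are synthetic placeholders rather than the Ota measurements:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    noise_db = rng.normal(75, 8, 123).clip(40, 110)      # stand-in for dB(A) measurements

    c, loc, scale = stats.fisk.fit(noise_db)             # fisk == log-logistic in scipy
    ks_stat, p_value = stats.kstest(noise_db, "fisk", args=(c, loc, scale))
    print(f"log-logistic fit: shape={c:.2f}, loc={loc:.1f}, scale={scale:.1f}")
    print(f"KS statistic={ks_stat:.3f}, p-value={p_value:.3f}")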
47

Ziyadinov, Vadim, and Maxim Tereshonok. "Noise Immunity and Robustness Study of Image Recognition Using a Convolutional Neural Network." Sensors 22, no. 3 (February 6, 2022): 1241. http://dx.doi.org/10.3390/s22031241.

Abstract:
The problem surrounding convolutional neural network robustness and noise immunity is currently of great interest. In this paper, we propose a technique that involves robustness estimation and stability improvement. We also examined the noise immunity of convolutional neural networks and estimated the influence of uncertainty in the training and testing datasets on recognition probability. For this purpose, we estimated the recognition accuracies of multiple datasets with different uncertainties; we analyzed these data and provided the dependence of recognition accuracy on the training dataset uncertainty. We hypothesized and proved the existence of an optimal (in terms of recognition accuracy) amount of uncertainty in the training data for neural networks working with undefined uncertainty data. We have shown that the determination of this optimum can be performed using statistical modeling. Adding an optimal amount of uncertainty (noise of some kind) to the training dataset can be used to improve the overall recognition quality and noise immunity of convolutional neural networks.
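
A sketch of the statistical-modelling idea: sweep the amount of noise added to the training data and look for the level that maximizes accuracy on a noisy test set. A small sklearn classifier on the digits dataset stands in for the convolutional networks and datasets of the paper, and the noise levels are arbitrary:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X /= 16.0                                             # scale pixel values to [0, 1]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    rng = np.random.default_rng(0)
    X_te_noisy = X_te + rng.normal(0, 0.25, X_te.shape)   # fixed test-time uncertainty

    for train_sigma in (0.0, 0.1, 0.25, 0.5):
        X_tr_noisy = X_tr + rng.normal(0, train_sigma, X_tr.shape)
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
        acc = clf.fit(X_tr_noisy, y_tr).score(X_te_noisy, y_te)
        print(f"training noise sigma={train_sigma}: noisy-test accuracy {acc:.3f}")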
48

Zhou, Ping, Jin Lei Wang, Xian Kai Chen, and Guan Jun Zhang. "Membership Calculation Based on Dimension Hierarchical Division." Applied Mechanics and Materials 475-476 (December 2013): 312–17. http://dx.doi.org/10.4028/www.scientific.net/amm.475-476.312.

Abstract:
Since datasets usually contain noise, it is very helpful to find and remove noisy samples in a preprocessing step. Fuzzy membership can measure a sample's weight; the weight should be smaller for noisy samples and larger for important samples, so appropriate sample memberships are vital. The article proposes a novel approach, Membership Calculation based on Hierarchical Division (MCHD), to calculate the membership of training samples. MCHD uses the concept of dimension similarity and develops a bottom-up clustering technique to calculate sample memberships iteratively. The experiments indicate that MCHD can effectively detect noisy samples and remove them from the dataset. A fuzzy support vector machine based on MCHD outperforms most recently published approaches and has better generalization ability in handling noise.
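
To show what "membership as a sample weight" means in practice, here is an illustrative sketch of a standard distance-to-class-centre membership heuristic for fuzzy SVMs; this is not the MCHD procedure itself, and the synthetic blobs and linear decay are arbitrary choices:

    import numpy as np
    from sklearn.datasets import make_blobs

    X, y = make_blobs(n_samples=300, centers=2, cluster_std=1.5, random_state=0)

    memberships = np.empty(len(y))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centre = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - centre, axis=1)
        # linear decay: points near the class boundary/periphery get small weights
        memberships[idx] = 1.0 - dist / (dist.max() + 1e-8)

    print("lowest-membership (likely noisy) samples:", np.argsort(memberships)[:5])

The memberships can then be passed as per-sample weights (e.g., sample_weight in an SVM fit) so that suspected noise contributes less to the decision boundary.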
49

Chauhan, Neha, Tsuyoshi Isshiki, and Dongju Li. "Enhancing Speaker Recognition Models with Noise-Resilient Feature Optimization Strategies." Acoustics 6, no. 2 (May 14, 2024): 439–69. http://dx.doi.org/10.3390/acoustics6020024.

Abstract:
This paper delves into an in-depth exploration of speaker recognition methodologies, with a primary focus on three pivotal approaches: feature-level fusion, dimension reduction employing principal component analysis (PCA) and independent component analysis (ICA), and feature optimization through a genetic algorithm (GA) and the marine predator algorithm (MPA). This study conducts comprehensive experiments across diverse speech datasets characterized by varying noise levels and speaker counts. Impressively, the research yields exceptional results across different datasets and classifiers. For instance, on the TIMIT babble noise dataset (120 speakers), feature fusion achieves a remarkable speaker identification accuracy of 92.7%, while various feature optimization techniques combined with K nearest neighbor (KNN) and linear discriminant (LD) classifiers result in a speaker verification equal error rate (SV EER) of 0.7%. Notably, this study achieves a speaker identification accuracy of 93.5% and SV EER of 0.13% on the TIMIT babble noise dataset (630 speakers) using a KNN classifier with feature optimization. On the TIMIT white noise dataset (120 and 630 speakers), speaker identification accuracies of 93.3% and 83.5%, along with SV EER values of 0.58% and 0.13%, respectively, were attained utilizing PCA dimension reduction and feature optimization techniques (PCA-MPA) with KNN classifiers. Furthermore, on the voxceleb1 dataset, PCA-MPA feature optimization with KNN classifiers achieves a speaker identification accuracy of 95.2% and an SV EER of 1.8%. These findings underscore the significant enhancement in computational speed and speaker recognition performance facilitated by feature optimization strategies.
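
A minimal sketch of just the PCA-plus-KNN stage of such a pipeline (feature fusion, the GA/MPA optimization, and real acoustic features are omitted); random vectors with per-speaker offsets stand in for utterance-level speaker features:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_speakers, per_speaker, n_features = 20, 30, 120
    X = rng.normal(0, 1, (n_speakers * per_speaker, n_features))
    X += np.repeat(rng.normal(0, 2, (n_speakers, n_features)), per_speaker, axis=0)
    y = np.repeat(np.arange(n_speakers), per_speaker)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    pca = PCA(n_components=40).fit(X_tr)                 # dimension reduction step
    knn = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X_tr), y_tr)
    acc = knn.score(pca.transform(X_te), y_te)
    print(f"speaker identification accuracy: {acc:.3f}")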
50

Youdale, Chris, Simon Shilton, and James Trow. "Impact of Ground Cover Dataset Selection on CNOSSOS-EU Calculated Levels." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 3 (February 1, 2023): 4674–81. http://dx.doi.org/10.3397/in_2022_0676.

Abstract:
The United Kingdom Department for Environment, Food and Rural Affairs (Defra) commissioned a series of studies investigating the sensitivity of the CNOSSOS-EU noise assessment method. CNOSSOS-EU presents challenges in terms of input data accuracy and availability; for this reason, the studies were commissioned to support data decision making and to quantify potential uncertainty in Defra's national noise model. A study was undertaken to identify how the selection of a ground cover dataset may influence noise levels calculated using the CNOSSOS-EU noise assessment method, as well as the computational load. Acoustic test models were developed incorporating prepared ground cover datasets based on CORINE Land Cover 2018, CEH Land Cover Map 2019 and OS MasterMap Topography. Noise calculations in accordance with CNOSSOS-EU were carried out for rural and urban/suburban propagation environments, and a statistical analysis of the differences between each selected dataset was then undertaken. The paper discusses the findings of this analysis along with generic rules identified with respect to modelling ground effect using CNOSSOS-EU.
