To see the other types of publications on this topic, follow the link: Interpolation-Based data augmentation.

Journal articles on the topic "Interpolation-Based data augmentation"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Interpolation-Based data augmentation."

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference to the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication as a PDF and read an online annotation of the work, if the relevant parameters are provided in the metadata.

Browse journal articles from a wide range of disciplines and organize your bibliography correctly.

1

Oh, Cheolhwan, Seungmin Han, and Jongpil Jeong. "Time-Series Data Augmentation based on Interpolation." Procedia Computer Science 175 (2020): 64–71. http://dx.doi.org/10.1016/j.procs.2020.07.012.

2

Li, Yuliang, Xiaolan Wang, Zhengjie Miao, and Wang-Chiew Tan. "Data augmentation for ML-driven data preparation and integration." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3182–85. http://dx.doi.org/10.14778/3476311.3476403.

Annotation:
In recent years, we have witnessed the development of novel data augmentation (DA) techniques for creating the additional training data needed by machine learning-based solutions. In this tutorial, we provide a comprehensive overview of techniques developed by the data management community for data preparation and data integration. In addition to surveying task-specific DA operators that leverage rules, transformations, and external knowledge for creating additional training data, we also explore advanced DA techniques such as interpolation, conditional generation, and DA policy learning. Finally, we describe the connection between DA and other machine learning paradigms such as active learning, pre-training, and weakly-supervised learning. We hope that this discussion sheds light on future research directions for a holistic data augmentation framework for high-quality dataset creation.
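The interpolation operator surveyed in this tutorial is, at its core, a convex combination of training examples. A minimal NumPy sketch of the idea (the function name and the sampling range for the mixing coefficient are illustrative, not from the tutorial):

```python
import numpy as np

def interpolate_pairs(X, n_new, lam_low=0.2, lam_high=0.8, seed=0):
    """Synthesize n_new rows as convex combinations of random row pairs."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), size=n_new)   # first partner of each pair
    j = rng.integers(0, len(X), size=n_new)   # second partner
    lam = rng.uniform(lam_low, lam_high, size=(n_new, 1))
    return lam * X[i] + (1.0 - lam) * X[j]

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
X_aug = interpolate_pairs(X, n_new=4)  # synthetic rows inside the convex hull
```

Because each synthetic row is a convex combination, it stays inside the convex hull of the original data, which is what distinguishes interpolation from generative DA operators.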
3

Huang, Chenhui, and Akinobu Shibuya. "High Accuracy Geochemical Map Generation Method by a Spatial Autocorrelation-Based Mixture Interpolation Using Remote Sensing Data." Remote Sensing 12, no. 12 (June 21, 2020): 1991. http://dx.doi.org/10.3390/rs12121991.

Annotation:
Generating a high-resolution whole-pixel geochemical contents map from a map with sparse distribution is a regression problem. Currently, multivariate prediction models such as machine learning (ML) are constructed to raise the geoscience mapping resolution, and methods coupling spatial autocorrelation into the ML model have been proposed to raise its prediction accuracy. Previously proposed methods, however, require complicated modifications of the ML model. In this research, we propose a new algorithm called spatial autocorrelation-based mixture interpolation (SABAMIN), which makes it easier to merge spatial autocorrelation into an ML model using only a data augmentation strategy. To test the feasibility of this concept, remote sensing data, including data from the advanced spaceborne thermal emission and reflection radiometer (ASTER), a digital elevation model (DEM), and geophysics (geomagnetic) data, were used for the feasibility study, along with copper geochemical and copper mine data from Arizona, USA. We explain why spatial information can be coupled into an ML model by data augmentation alone and how data augmentation was performed in our case. Four tests—(i) cross-validation of measured data, (ii) the blind test, (iii) the temporal stability test, and (iv) the predictor importance test—were conducted to evaluate the model. As a result, the model's accuracy was improved compared with a traditional ML model, and the reliability of the algorithm was confirmed. In summary, combining the univariate interpolation method with multivariate prediction and data augmentation proved effective for geological studies.
4

Tsourtis, Anastasios, Georgios Papoutsoglou, and Yannis Pantazis. "GAN-Based Training of Semi-Interpretable Generators for Biological Data Interpolation and Augmentation." Applied Sciences 12, no. 11 (May 27, 2022): 5434. http://dx.doi.org/10.3390/app12115434.

Annotation:
Single-cell measurements incorporate invaluable information regarding the state of each cell and its underlying regulatory mechanisms. The popularity and use of single-cell measurements are constantly growing. Despite the typically large number of collected data, the under-representation of important cell (sub-)populations negatively affects down-stream analysis and its robustness. Therefore, the enrichment of biological datasets with samples that belong to a rare state or manifold is overall advantageous. In this work, we train families of generative models via the minimization of Rényi divergence resulting in an adversarial training framework. Apart from the standard neural network-based models, we propose families of semi-interpretable generative models. The proposed models are further tailored to generate realistic gene expression measurements, whose characteristics include zero-inflation and sparsity, without the need of any data pre-processing. Explicit factors of the data such as measurement time, state or cluster are taken into account by our generative models as conditional variables. We train the proposed conditional models and compare them against the state-of-the-art on a range of synthetic and real datasets and demonstrate their ability to accurately perform data interpolation and augmentation.
5

Bi, Xiao-ying, Bo Li, Wen-long Lu, and Xin-zhi Zhou. "Daily runoff forecasting based on data-augmented neural network model." Journal of Hydroinformatics 22, no. 4 (May 16, 2020): 900–915. http://dx.doi.org/10.2166/hydro.2020.017.

Annotation:
Accurate daily runoff prediction plays an important role in the management and utilization of water resources. In order to improve the accuracy of prediction, this paper proposes a deep neural network (CAGANet) composed of a convolutional layer, an attention mechanism, a gated recurrent unit (GRU) neural network, and an autoregressive (AR) model. Given that the daily runoff sequence is abrupt and unstable, it is difficult for either a single model or a combined model to obtain high-precision daily runoff predictions directly. Therefore, this paper uses a linear interpolation method to enhance the stability of hydrological data and applies the augmented data to the CAGANet model, the support vector machine (SVM) model, the long short-term memory (LSTM) neural network, and the attention-mechanism-based LSTM model (AM-LSTM). The comparison results show that among the four models based on data augmentation, the CAGANet model proposed in this paper has the best prediction accuracy; its Nash–Sutcliffe efficiency reaches 0.993. Therefore, the CAGANet model based on data augmentation is a feasible daily runoff forecasting scheme.
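One simple reading of linear-interpolation augmentation for a daily series is to densify it with interpolated points between consecutive observations. A hedged sketch of that scheme (the paper's exact procedure may differ; the function name is illustrative):

```python
import numpy as np

def densify_series(y):
    """Return a series of length 2*len(y)-1: the original observations with
    the linear midpoint inserted between each consecutive pair."""
    y = np.asarray(y, dtype=float)
    out = np.empty(2 * len(y) - 1)
    out[0::2] = y                       # keep the measured values
    out[1::2] = 0.5 * (y[:-1] + y[1:])  # interpolated midpoints
    return out

runoff = [3.0, 5.0, 4.0, 8.0]
dense = densify_series(runoff)  # midpoints 4.0, 4.5, 6.0 are inserted
```

Densifying in this way smooths abrupt jumps while preserving every measured value, which matches the stated goal of stabilizing the hydrological data before training.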
6

de Rojas, Ana Lazcano. "Data augmentation in economic time series: Behavior and improvements in predictions." AIMS Mathematics 8, no. 10 (2023): 24528–44. http://dx.doi.org/10.3934/math.20231251.

Annotation:
The performance of neural networks and statistical models in time series prediction is conditioned by the amount of data available. The lack of observations is one of the main factors influencing the representativeness of the underlying patterns and trends. Using data augmentation techniques based on classical statistical techniques and neural networks, it is possible to generate additional observations and improve the accuracy of the predictions. The particular characteristics of economic time series make it necessary that data augmentation techniques do not significantly influence them, as doing so would alter the quality of the details in the study. This paper analyzes the performance obtained by two data augmentation techniques applied to a time series and finally processed by an ARIMA model and a neural network model to make predictions. The results show a significant improvement in the predictions for the time series augmented by traditional interpolation techniques, obtaining a better fit and correlation with the original series.
7

Xie, Xiangjin, Li Yangning, Wang Chen, Kai Ouyang, Zuotong Xie, and Hai-Tao Zheng. "Global Mixup: Eliminating Ambiguity with Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13798–806. http://dx.doi.org/10.1609/aaai.v37i11.26616.

Annotation:
Data augmentation with Mixup has been proven an effective method to regularize the current deep neural networks. Mixup generates virtual samples and corresponding labels simultaneously by linear interpolation. However, the one-stage generation paradigm and the use of linear interpolation have two defects: (1) The label of the generated sample is simply combined from the labels of the original sample pairs without reasonable judgment, resulting in ambiguous labels. (2) Linear combination significantly restricts the sampling space for generating samples. To address these issues, we propose a novel and effective augmentation method, Global Mixup, based on global clustering relationships. Specifically, we transform the previous one-stage augmentation process into two-stage by decoupling the process of generating virtual samples from the labeling. And for the labels of the generated samples, relabeling is performed based on clustering by calculating the global relationships of the generated samples. Furthermore, we are no longer restricted to linear relationships, which allows us to generate more reliable virtual samples in a larger sampling space. Extensive experiments for CNN, LSTM, and BERT on five tasks show that Global Mixup outperforms previous baselines. Further experiments also demonstrate the advantage of Global Mixup in low-resource scenarios.
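The one-stage scheme this paper improves on is the standard Mixup, which can be sketched in a few lines (illustrative names; Mixup draws the mixing coefficient from a Beta(α, α) distribution and applies it to inputs and labels simultaneously):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """One-stage Mixup: a single Beta(alpha, alpha) coefficient mixes both
    the inputs and the one-hot labels of a sample pair."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                     np.array([0.0, 1.0]), np.array([0.0, 1.0]))
# y_mix is a soft label interpolated between the two classes
```

The soft label is combined mechanically from the pair's labels with no further judgment, which is exactly the ambiguity that Global Mixup's clustering-based relabeling step targets.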
8

Guo, Hongyu. "Nonlinear Mixup: Out-Of-Manifold Data Augmentation for Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4044–51. http://dx.doi.org/10.1609/aaai.v34i04.5822.

Annotation:
Data augmentation with Mixup (Zhang et al. 2018) has been shown to be an effective model regularizer for current state-of-the-art deep classification networks. It generates out-of-manifold samples by linearly interpolating the inputs and corresponding labels of random sample pairs. Despite its great successes, Mixup requires a convex combination of the inputs as well as the modeling targets of a sample pair, which significantly limits the space of its synthetic samples and consequently its regularization effect. To cope with this limitation, we propose "nonlinear Mixup". Unlike Mixup, where the input and label pairs share the same linear, scalar mixing policy, our approach embraces a nonlinear interpolation policy for both the input and label pairs, where the mixing policy for the labels is adaptively learned based on the mixed input. Experiments on benchmark sentence classification datasets indicate that our approach significantly improves upon Mixup. Our empirical studies also show that the out-of-manifold samples generated by our strategy encourage training samples in each class to form a tight representation cluster that is far from others.
9

Lim, Seong-Su, and Oh-Wook Kwon. "FrameAugment: A Simple Data Augmentation Method for Encoder–Decoder Speech Recognition." Applied Sciences 12, no. 15 (July 28, 2022): 7619. http://dx.doi.org/10.3390/app12157619.

Annotation:
As the architecture of deep learning-based speech recognizers has recently changed to the end-to-end style, increasing the effective amount of training data has become an important issue. To tackle this issue, various data augmentation techniques to create additional training data by transforming labeled data have been studied. We propose a method called FrameAugment to augment data by changing the speed of speech locally for selected sections, which is different from the conventional speed perturbation technique that changes the speed of speech uniformly for the entire utterance. To change the speed of the selected sections of speech, the number of frames for the randomly selected sections is adjusted through linear interpolation in the spectrogram domain. The proposed method is shown to achieve 6.8% better performance than the baseline in the WSJ database and 9.5% better than the baseline in the LibriSpeech database. It is also confirmed that the proposed method further improves speech recognition performance when it is combined with the previous data augmentation techniques.
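The core frame-resampling step can be sketched as per-bin linear interpolation along the spectrogram's time axis. This is an illustrative reading of the idea, not the authors' implementation; in practice the section boundaries and target lengths would be chosen randomly per utterance:

```python
import numpy as np

def resample_frames(spec, start, end, new_len):
    """Stretch or compress the frames spec[start:end] (time on axis 0) to
    new_len frames by per-bin linear interpolation; other frames are kept."""
    section = spec[start:end]
    old = np.arange(section.shape[0])
    new = np.linspace(0, section.shape[0] - 1, new_len)
    warped = np.stack([np.interp(new, old, section[:, b])
                       for b in range(section.shape[1])], axis=1)
    return np.concatenate([spec[:start], warped, spec[end:]], axis=0)

spec = np.arange(20, dtype=float).reshape(10, 2)           # 10 frames, 2 bins
slowed = resample_frames(spec, start=2, end=6, new_len=8)  # 4 frames -> 8
```

Only the selected section changes speed, in contrast to uniform speed perturbation, which would resample the whole utterance with a single factor.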
10

Xie, Kai, Yuxuan Gao, Yadang Chen, and Xun Che. "Mask Mixup Model: Enhanced Contrastive Learning for Few-Shot Learning." Applied Sciences 14, no. 14 (July 11, 2024): 6063. http://dx.doi.org/10.3390/app14146063.

Annotation:
Few-shot image classification aims to improve the performance of traditional image classification when faced with limited data. Its main challenge lies in effectively utilizing sparse sample label data to accurately predict the true feature distribution. Recent approaches have employed data augmentation techniques like random Mask or mixture interpolation to enhance the diversity and generalization of labeled samples. However, these methods still encounter several issues: (1) random Mask can lead to complete blockage or exposure of foreground, causing loss of crucial sample information; and (2) uniform data distribution after mixture interpolation makes it difficult for the model to differentiate between different categories and effectively distinguish their boundaries. To address these challenges, this paper introduces a novel data augmentation method based on saliency mask blending. Firstly, it selectively preserves key image features through adaptive selection and retention using visual feature occlusion fusion and confidence clipping strategies. Secondly, a visual feature saliency fusion approach is employed to calculate the importance of various image regions, guiding the blending process to produce more diverse and enriched images with clearer category boundaries. The proposed method achieves outstanding performance on multiple standard few-shot image classification datasets (miniImageNet, tieredImageNet, Few-shot FC100, and CUB), surpassing state-of-the-art methods by approximately 0.2–1%.
11

Liu, Ziwei, Jinbao Jiang, Mengquan Li, Deshuai Yuan, Cheng Nie, Yilin Sun, and Peng Zheng. "Identification of Moldy Peanuts under Different Varieties and Moisture Content Using Hyperspectral Imaging and Data Augmentation Technologies." Foods 11, no. 8 (April 16, 2022): 1156. http://dx.doi.org/10.3390/foods11081156.

Annotation:
Aflatoxins in moldy peanuts are seriously toxic to humans. These kernels need to be screened in the production process. Hyperspectral imaging techniques can be used to identify moldy peanuts. However, the changes in spectral information and texture information caused by the difference in moisture content in peanuts will affect the identification accuracy. To reduce and eliminate the influence of this factor, a data augmentation method based on interpolation was proposed to improve the generalization ability and robustness of the model. Firstly, the near-infrared hyperspectral images of 5 varieties, 4 classes, and 3 moisture content gradients with 39,119 kernels were collected. Then, the data augmentation method called the difference of spectral mean (DSM) was constructed. K-nearest neighbors (KNN), support vector machines (SVM), and MobileViT-xs models were used to verify the effectiveness of the data augmentation method on data with two gradients and three gradients. The experimental results show that the data augmentation can effectively reduce the influence of the difference in moisture content on the model identification accuracy. The DSM method has the highest accuracy improvement in 5 varieties of peanut datasets. In particular, the accuracy of KNN, SVM, and MobileViT-xs using the data of two gradients was improved by 3.55%, 4.42%, and 5.9%, respectively. Furthermore, this study provides a new method for improving the identification accuracy of moldy peanuts and also provides a reference basis for the screening of related foods such as corn, orange, and mango.
12

Zhou, Xiaojing, Yunjia Feng, Xu Li, Zijian Zhu, and Yanzhong Hu. "Off-Road Environment Semantic Segmentation for Autonomous Vehicles Based on Multi-Scale Feature Fusion." World Electric Vehicle Journal 14, no. 10 (October 13, 2023): 291. http://dx.doi.org/10.3390/wevj14100291.

Annotation:
For autonomous vehicles driving in off-road environments, it is crucial to have a sensitive environmental perception ability. However, semantic segmentation in complex scenes remains a challenging task, and most current methods for off-road environments suffer from limited scene variety and low accuracy. Therefore, this paper proposes a semantic segmentation network based on LiDAR called Multi-scale Augmentation Point-Cylinder Network (MAPC-Net). The network uses a multi-layer receptive field fusion module to extract features from objects of different scales in off-road environments. Gated feature fusion is used to fuse PointTensor and Cylinder for encoding and decoding. In addition, we use CARLA to build off-road environments for obtaining datasets, and employ linear interpolation to enhance the training data to solve the problem of sample imbalance. Finally, we design experiments to verify the excellent semantic segmentation ability of MAPC-Net in an off-road environment. We also demonstrate the effectiveness of the multi-layer receptive field fusion module and data augmentation.
13

Pizoń, Zofia, Shinji Kimijima, and Grzegorz Brus. "Enhancing a Deep Learning Model for the Steam Reforming Process Using Data Augmentation Techniques." Energies 17, no. 10 (May 17, 2024): 2413. http://dx.doi.org/10.3390/en17102413.

Annotation:
Methane steam reforming is the foremost method for hydrogen production, and it has been studied through experiments and diverse computational models to enhance its energy efficiency. This study focuses on employing an artificial neural network as a model of the methane steam reforming process. The proposed data-driven model predicts the output mixture’s composition based on reactor operating conditions, such as the temperature, steam-to-methane ratio, nitrogen-to-methane ratio, methane flow, and nickel catalyst mass. The network, a feedforward type, underwent training with a comprehensive dataset augmentation strategy that augments the primary experimental dataset through interpolation and theoretical simulations of the process, ensuring a robust model training phase. Additionally, it introduces weights to evaluate the relative significance of different data categories (experimental, interpolated, and theoretical) within the dataset. The optimal artificial neural network architecture was determined by evaluating various configurations, with the aim of minimizing the mean squared error (0.00022) and maximizing the Pearson correlation coefficient (0.97) and Spearman correlation coefficient (1.00).
14

Kim, Hyungju, and Nammee Moon. "TN-GAN-Based Pet Behavior Prediction through Multiple-Dimension Time-Series Augmentation." Sensors 23, no. 8 (April 21, 2023): 4157. http://dx.doi.org/10.3390/s23084157.

Annotation:
Behavioral prediction modeling applies statistical techniques for classifying, recognizing, and predicting behavior using various data. However, performance deterioration and data bias problems occur in behavioral prediction. This study proposed that researchers conduct behavioral prediction using text-to-numeric generative adversarial network (TN-GAN)-based multidimensional time-series augmentation to minimize the data bias problem. The prediction model dataset in this study used nine-axis sensor data (accelerometer, gyroscope, and geomagnetic sensors). The ODROID N2+, a wearable pet device, collected and stored data on a web server. The interquartile range removed outliers, and data processing constructed a sequence as an input value for the predictive model. After using the z-score as a normalization method for sensor values, cubic spline interpolation was performed to identify the missing values. The experimental group assessed 10 dogs to identify nine behaviors. The behavioral prediction model used a hybrid convolutional neural network model to extract features and applied long short-term memory techniques to reflect time-series features. The actual and predicted values were evaluated using the performance evaluation index. The results of this study can assist in recognizing and predicting behavior and detecting abnormal behavior, capacities which can be applied to various pet monitoring systems.
15

Guo, Xinshuai, Tianrui Hou, and Li Wu. "DAT-Net: Filling of missing temperature values of meteorological stations by data augmentation attention neural network." Journal of Physics: Conference Series 2816, no. 1 (August 1, 2024): 012004. http://dx.doi.org/10.1088/1742-6596/2816/1/012004.

Annotation:
For a long time, filling in missing temperature data from meteorological stations has been crucial for researchers analyzing climate variation cases. In previous studies, people have attempted to solve this problem using interpolation and deep learning methods. Through extensive case studies, it is observed that the data utilization rate of convolutional neural networks based on PConv is low at a high missing rate, which results in poor filling performance at a high missing rate. To solve these problems, a Data Augmentation Attention Neural Network (DAT-Net) is presented. DAT-Net uses encoder and decoder structures, which include a data augmentation training mechanism (DAM) to enhance model training. In addition, a time encoder (TED) has been developed to assist the model in learning the temporal dependencies of the data. To evaluate DAT-Net, experiments at 75% and 85% missing rates were performed, with comparisons against Linear, NLinear, DLinear, PatchTST, and GSTA-Net. The results showed that at a 75% missing rate, DAT-Net decreased MAE by 55.22%, 55.05%, 55.18%, 28.73%, and 12.35% and RMSE by 54.08%, 53.88%, 54.08%, 35.48%, and 14.51%, while R² increased by 3.80%, 3.75%, 3.68%, 0.55%, and 0.27%, respectively.
16

Yildirim, Muhammed. "Diagnosis of Heart Diseases Using Heart Sound Signals with the Developed Interpolation, CNN, and Relief Based Model." Traitement du Signal 39, no. 3 (June 30, 2022): 907–14. http://dx.doi.org/10.18280/ts.390316.

Annotation:
The majority of deaths today are due to heart diseases. Early diagnosis of heart diseases will lead to early initiation of the treatment process. Therefore, computer-aided systems are of great importance. In this study, heart sounds were used for the early diagnosis and treatment of heart diseases. Diagnosing heart sounds provides important information about heart diseases. Therefore, a hybrid model was developed in the study. In the developed model, first of all, spectrograms were obtained from audio signals with the Mel-spectrogram method. Then, the interpolation method was used to train the developed model more accurately and with more data. Unlike other data augmentation methods, the interpolation method produces new data. The feature maps of the data were obtained using the Darknet53 architecture. In order for the developed model to work faster and more effectively, the feature map obtained using the Darknet53 architecture has been optimized using the Relief feature selection method. Finally, the obtained feature map was classified in different classifiers. While the accuracy value of the developed model in the first dataset was 99.63%, the accuracy rate in the second dataset was 97.19%. These values show that the developed model can be used to classify heart sounds.
17

Wang, Shuo, Jian Wang, Yafei Song, Sicong Li, and Wei Huang. "Malware Variants Detection Model Based on MFF–HDBA." Applied Sciences 12, no. 19 (September 24, 2022): 9593. http://dx.doi.org/10.3390/app12199593.

Annotation:
A massive proliferation of malware variants has posed serious and evolving threats to cybersecurity. Developing intelligent methods to cope with the situation is highly necessary due to the inefficiency of traditional methods. In this paper, a highly efficient, intelligent vision-based malware variants detection method was proposed. Firstly, a bilinear interpolation algorithm was utilized for malware image normalization, and data augmentation was used to resolve the issue of imbalanced malware data sets. Moreover, the paper improved the convolutional neural network (CNN) model by combining multi-scale feature fusion (MFF) and channel attention mechanism for more discriminative and robust feature extraction. Finally, we proposed a hyperparameter optimization algorithm based on the bat algorithm, referred to as HDBA, in order to overcome the disadvantage of the traditional hyperparameter optimization method based on manual adjustment. Experimental results indicated that our model can effectively and efficiently identify malware variants from real and daily networks, with better performance than state-of-the-art solutions.
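Bilinear interpolation, used here to normalize malware images to a fixed size, weights the four nearest pixels by their fractional distances. A toy align-corners-style sketch for a 2-D grayscale array (illustrative only, not the paper's implementation):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D array with bilinear interpolation (align-corners style)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)        # fractional source rows
    xs = np.linspace(0, w - 1, out_w)        # fractional source columns
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                  # vertical blend weights
    wx = (xs - x0)[None, :]                  # horizontal blend weights
    top = (1 - wx) * img[y0][:, x0] + wx * img[y0][:, x1]
    bot = (1 - wx) * img[y1][:, x0] + wx * img[y1][:, x1]
    return (1 - wy) * top + wy * bot

img = np.array([[0.0, 10.0], [20.0, 30.0]])
resized = bilinear_resize(img, 3, 3)   # center row becomes [10., 15., 20.]
```

Corner pixels are reproduced exactly, while interior pixels are smooth blends, which is why bilinear resizing preserves the coarse texture patterns that the CNN then classifies.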
18

Wang, Kan, Ahmed El-Mowafy, Wei Wang, Long Yang, and Xuhai Yang. "Integrity Monitoring of PPP-RTK Positioning; Part II: LEO Augmentation." Remote Sensing 14, no. 7 (March 26, 2022): 1599. http://dx.doi.org/10.3390/rs14071599.

Annotation:
Low Earth orbit (LEO) satellites benefit future ground-based positioning with their high number, strong signal strength and high speed. The rapid geometry change with the LEO augmentation offers acceleration of the convergence of the precise point positioning (PPP) solution. This contribution discusses the influences of the LEO augmentation on the precise point positioning—real-time kinematic (PPP-RTK) positioning and its integrity monitoring. Using 1 Hz simulated data around Beijing for global positioning system (GPS)/Galileo/Beidou navigation satellite system (BDS)-3 and the tested LEO constellation with 150 satellites on L1/L5, it was found that the convergence of the formal horizontal precision can be significantly shortened in the ambiguity-float case, especially for the single-constellation scenarios with low precision of the interpolated ionospheric delays. The LEO augmentation also improves the efficiency of the user ambiguity resolution and the formal horizontal precision with the ambiguities fixed. Using the integrity monitoring (IM) procedure introduced in the first part of this series of papers, the ambiguity-float horizontal protection levels (HPLs) are sharply reduced in various tested scenarios, with an improvement of more than 60% from 5 to 30 min after the processing start. The ambiguity-fixed HPLs can generally be improved by 10% to 60% with the LEO augmentation, depending on the global navigation satellite system (GNSS) constellations used and the precision of the ionospheric interpolation.
19

Ratnam, D. Venkata. "Estimation and Analysis of User IPP Delays Using Bilinear Model for Satellite-Based Augmented Navigation Systems." Aviation 17, no. 2 (July 1, 2013): 65–69. http://dx.doi.org/10.3846/16487788.2013.805864.

Annotation:
Several countries are involved in developing satellite-based augmentation systems (SBAS) for improving the positional accuracy of GPS. India is also developing one such system, popularly known as GPS-aided geo-augmented navigation (GAGAN), to cater to civil aviation applications. The ionospheric effect is the major source of error in GAGAN, so an appropriate, efficient, and accurate ionospheric time model for GAGAN is necessary. To develop such a model, data from 17 GPS stations of the GAGAN network spread across India are used. The prominent bilinear interpolation technique is investigated for user IPP (UIPP) delay estimation. UIPP delays for quiet, moderate, and disturbed days are estimated, and it is evident that measured mean UIPP delays closely follow estimated mean UIPP delays.
20

Ren, Yougui, Jialu Li, Yubin Bao, Zhibin Zhao, and Ge Yu. "An Optimized Object Detection Algorithm for Marine Remote Sensing Images." Mathematics 12, no. 17 (August 31, 2024): 2722. http://dx.doi.org/10.3390/math12172722.

Annotation:
In order to address the challenge of the small-scale, small-target, and complex scenes often encountered in offshore remote sensing image datasets, this paper employs an interpolation method to achieve super-resolution-assisted target detection. This approach aligns with the logic of popular GANs and generative diffusion networks in terms of super-resolution but is more lightweight. Additionally, the image count is expanded fivefold by supplementing the dataset with DOTA and data augmentation techniques. Framework-wise, based on the Faster R-CNN model, the combination of a residual backbone network and pyramid balancing structure enables our model to adapt to the characteristics of small-target scenarios. Moreover, the attention mechanism, random anchor re-selection strategy, and the strategy of replacing quantization operations with bilinear interpolation further enhance the model’s detection capability at a low cost. Ablation experiments and comparative experiments show that, with a simple backbone, the algorithm in this paper achieves a mAP of 71.2% on the dataset, an improvement in accuracy of about 10% compared to the Faster R-CNN algorithm.
21

Tiwari, Nitin, Fabio Rondinella, Neelima Satyam, and Nicola Baldo. "Experimental and Machine Learning Approach to Investigate the Mechanical Performance of Asphalt Mixtures with Silica Fume Filler." Applied Sciences 13, no. 11 (May 30, 2023): 6664. http://dx.doi.org/10.3390/app13116664.

Annotation:
This study explores the potential in substituting ordinary Portland cement (OPC) with industrial waste silica fume (SF) as a mineral filler in asphalt mixtures (AM) for flexible road pavements. The Marshall and indirect tensile strength tests were used to evaluate the mechanical resistance and durability of the AMs for different SF and OPC ratios. To develop predictive models of the key mechanical and volumetric parameters, the experimental data were analyzed using artificial neural networks (ANN) with three different activation functions and leave-one-out cross-validation as a resampling method. The addition of SF resulted in a performance comparable to, or slightly better than, OPC-based mixtures, with a maximum indirect tensile strength of 1044.45 kPa at 5% bitumen content. The ANN modeling was highly successful, partly due to an interpolation-based data augmentation strategy, with a correlation coefficient RCV of 0.9988.
22

Meshchaninov, Viacheslav Pavlovich, Ivan Andreevich Molodetskikh, Dmitriy Sergeevich Vatolin and Alexey Gennadievich Voloboy. „Combining contrastive and supervised learning for video super-resolution detection“. Keldysh Institute Preprints, no. 80 (2022): 1–13. http://dx.doi.org/10.20948/prepr-2022-80.

Annotation:
Upscaled video detection is a helpful tool in multimedia forensics, but it is a challenging task that involves various upscaling and compression algorithms. There are many resolution-enhancement methods, including interpolation and deep-learning-based super-resolution, and they leave unique traces. This paper proposes a new upscaled-resolution-detection method based on learning visual representations using contrastive and cross-entropy losses. To explain how the method detects videos, the major components of our framework are systematically reviewed; in particular, it is shown that most data-augmentation approaches hinder the learning of the method. Through extensive experiments on various datasets, our method has been shown to effectively detect upscaling even in compressed videos and to outperform the state-of-the-art alternatives. The code and models are publicly available at https://github.com/msu-video-group/SRDM.
23

de Rezende, L. F. C., E. R. de Paula, I. J. Kantor and P. M. Kintner. „Mapping and Survey of Plasma Bubbles over Brazilian Territory“. Journal of Navigation 60, no. 1 (15.12.2006): 69–81. http://dx.doi.org/10.1017/s0373463307004006.

Annotation:
Ionospheric plasma irregularities, or bubbles, which are regions of depleted density, are generated at the magnetic equator after sunset due to plasma instabilities, and as they move upward they map along the magnetic field lines to low latitudes. To analyse the temporal and spatial evolution of the bubbles over Brazilian territory, a mapping of ionospheric plasma bubbles for the night of 17/18 March 2002 was generated using data collected from a GPS receiver array and applying interpolation techniques. The impact of GPS signal losses of lock and signal amplitude fades during ionospheric irregularities on the performance of Global Navigation Satellite Systems (GNSS) and Space-Based Augmentation Systems (SBAS) in tropical regions is presented.
24

De-La-Cruz, Celso, Jorge Trevejo-Pinedo, Fabiola Bravo, Karina Visurraga, Joseph Peña-Echevarría, Angela Pinedo, Freddy Rojas and María R. Sun-Kou. „Application of Machine Learning Algorithms to Classify Peruvian Pisco Varieties Using an Electronic Nose“. Sensors 23, no. 13 (24.06.2023): 5864. http://dx.doi.org/10.3390/s23135864.

Annotation:
Pisco is an alcoholic beverage obtained from grape juice distillation. Considered the flagship drink of Peru, it is produced following strict and specific quality standards. In this work, sensing results for volatile compounds in pisco, obtained with an electronic nose, were analyzed through the application of machine learning algorithms for the differentiation of pisco varieties. This differentiation aids in verifying beverage quality, considering the parameters established in its Designation of Origin. For signal processing, neural network, multiclass support vector machine, and random forest machine learning algorithms were implemented in MATLAB. In addition, data augmentation was performed using a proposed procedure based on interpolation–extrapolation. All algorithms trained with augmented data showed an increase in performance and more reliable predictions compared to those trained with raw data. From the comparison of these results, it was found that the best performance was achieved with neural networks.
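An interpolation–extrapolation augmentation of the kind the authors propose can be sketched as follows: drawing λ from [0, 1] interpolates between two sensor vectors, while values slightly outside that range extrapolate beyond them. The function name and λ range are assumptions for this example, not the paper's settings:

```python
import random

def interp_extrap(x_i, x_j, lam_range=(-0.2, 1.2), rng=None):
    """Generate a synthetic sample on the line through two feature vectors.

    lam in [0, 1] -> interpolation; lam outside [0, 1] -> mild extrapolation.
    """
    rng = rng or random.Random(0)  # fixed seed keeps the sketch reproducible
    lam = rng.uniform(*lam_range)
    return [a + lam * (b - a) for a, b in zip(x_i, x_j)], lam
```

Extrapolation (λ outside [0, 1]) adds samples just beyond the measured pair, which is what distinguishes this scheme from pure interpolation.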
25

González-Vidal, Aurora, José Mendoza-Bernal, Alfonso P. Ramallo, Miguel Ángel Zamora, Vicente Martínez and Antonio F. Skarmeta. „Smart Operation of Climatic Systems in a Greenhouse“. Agriculture 12, no. 10 (19.10.2022): 1729. http://dx.doi.org/10.3390/agriculture12101729.

Annotation:
The purpose of our work is to leverage the use of artificial intelligence for the emergence of smart greenhouses. Greenhouse agriculture is a sustainable solution for food crises and therefore data-based decision-support mechanisms are needed to optimally use them. Our study anticipates how the combination of climatic systems will affect the temperature and humidity of the greenhouse. More specifically, our methodology anticipates if a set-point will be reached in a given time by a combination of climatic systems and estimates the humidity at that time. We performed exhaustive data analytics processing that includes the interpolation of missing values and data augmentation, and tested several classification and regression algorithms. Our method can predict with a 90% accuracy if, under current conditions, a combination of climatic systems will reach a fixed temperature set-point, and it is also able to estimate the humidity with a 2.83% CVRMSE. We integrated our methodology on a three-layer holistic IoT platform that is able to collect, fuse and analyze real data in a seamless way.
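The missing-value interpolation mentioned in the preprocessing step can be illustrated with a small linear gap-filler (a sketch, not the authors' pipeline; it assumes gaps are interior to the series, with valid readings on both sides):

```python
def fill_missing(series):
    """Linearly interpolate interior None gaps in a sensor reading series."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1  # find the end of the gap
            left, right = out[i - 1], out[j]  # bounding valid readings
            for k in range(i, j):
                # evenly spaced values between the two bounding readings
                out[k] = left + (right - left) * (k - i + 1) / (j - i + 1)
            i = j
        else:
            i += 1
    return out
```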
26

Ku, Hyeeun, and Minhyeok Lee. „TextControlGAN: Text-to-Image Synthesis with Controllable Generative Adversarial Networks“. Applied Sciences 13, no. 8 (19.04.2023): 5098. http://dx.doi.org/10.3390/app13085098.

Annotation:
Generative adversarial networks (GANs) have demonstrated remarkable potential in the realm of text-to-image synthesis. Nevertheless, conventional GANs employing conditional latent space interpolation and manifold interpolation (GAN-CLS-INT) encounter challenges in generating images that accurately reflect the given text descriptions. To overcome these limitations, we introduce TextControlGAN, a controllable GAN-based model specifically designed for text-to-image synthesis tasks. In contrast to traditional GANs, TextControlGAN incorporates a neural network structure, known as a regressor, to effectively learn features from conditional texts. To further enhance the learning performance of the regressor, data augmentation techniques are employed. As a result, the generator within TextControlGAN can learn conditional texts more effectively, leading to the production of images that more closely adhere to the textual conditions. Furthermore, by concentrating the discriminator’s training efforts on GAN training exclusively, the overall quality of the generated images is significantly improved. Evaluations conducted on the Caltech-UCSD Birds-200 (CUB) dataset demonstrate that TextControlGAN surpasses the performance of the cGAN-based GAN-INT-CLS model, achieving a 17.6% improvement in Inception Score (IS) and a 36.6% reduction in Fréchet Inception Distance (FID). In supplementary experiments utilizing 128 × 128 resolution images, TextControlGAN exhibits a remarkable ability to manipulate minor features of the generated bird images according to the given text descriptions. These findings highlight the potential of TextControlGAN as a powerful tool for generating high-quality, text-conditioned images, paving the way for future advancements in the field of text-to-image synthesis.
27

Huang, Yongdi, Qionghai Chen, Zhiyu Zhang, Ke Gao, Anwen Hu, Yining Dong, Jun Liu and Lihong Cui. „A Machine Learning Framework to Predict the Tensile Stress of Natural Rubber: Based on Molecular Dynamics Simulation Data“. Polymers 14, no. 9 (06.05.2022): 1897. http://dx.doi.org/10.3390/polym14091897.

Annotation:
Natural rubber (NR), with its excellent mechanical properties, has been attracting considerable scientific and technological attention. Through molecular dynamics (MD) simulations, the effects of key structural factors on tensile stress can be examined at the molecular level. However, this high-precision method is computationally inefficient and time-consuming, which limits its application. The combination of machine learning and MD is one of the most promising directions to speed up simulations while ensuring the accuracy of results. In this work, a surrogate machine learning method trained with MD data is developed to predict not only the tensile stress of NR but also other mechanical behaviors. We propose a novel idea based on feature processing, drawing on our previous experience with predictions from small samples. The proposed ML method consists of (i) an extreme gradient boosting (XGB) model to predict the tensile stress of NR, and (ii) a data augmentation algorithm based on nearest-neighbor interpolation (NNI) and the synthetic minority oversampling technique (SMOTE) to maximize the use of limited training data. Of the two augmentation algorithms we design, the NNI algorithm approximates the original sample distribution by interpolating in the neighborhood of each original sample, while the SMOTE algorithm addresses sample imbalance by interpolating at the clustering boundaries of minority samples. The augmented samples are used to establish the XGB prediction model. Finally, the robustness of the proposed models and their predictive ability are confirmed by high performance values, which indicate that the obtained regression models have good internal and external predictive capacities.
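The SMOTE half of the augmentation can be sketched in plain Python; this is a generic illustration of minority-class interpolation toward nearest neighbours, not the authors' implementation, and the function signature is invented for the example:

```python
import math
import random

def smote(minority, k=2, n_new=4, seed=0):
    """Synthesize minority-class samples by interpolating toward k-nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of the base point (excluding the base itself)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: math.dist(base, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # random position on the segment base -> neighbour
        synthetic.append([x + lam * (y - x) for x, y in zip(base, nb)])
    return synthetic
```

Every synthetic point lies on a segment between two real minority samples, so the technique never leaves the region spanned by the observed data.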
28

Gil-Martín, Manuel, María Villa-Monedero, Andrzej Pomirski, Daniel Sáez-Trigueros and Rubén San-Segundo. „Sign Language Motion Generation from Sign Characteristics“. Sensors 23, no. 23 (23.11.2023): 9365. http://dx.doi.org/10.3390/s23239365.

Annotation:
This paper proposes, analyzes, and evaluates a deep learning architecture based on transformers for generating sign language motion from sign phonemes (represented using HamNoSys, a notation system developed at the University of Hamburg). The sign phonemes provide information about sign characteristics such as hand configuration, localization, or movements. The use of sign phonemes is crucial for generating sign motion with a high level of detail (including finger extensions and flexions). The transformer-based approach also includes a stop-detection module for predicting the end of the generation process. Both aspects, motion generation and stop detection, are evaluated in detail. For motion generation, the dynamic time warping (DTW) distance is used to compute the similarity between two landmark sequences (ground truth and generated). The stop-detection module is evaluated considering detection accuracy and ROC (receiver operating characteristic) curves. The paper proposes and evaluates several strategies to obtain the system configuration with the best performance, including different padding strategies, interpolation approaches, and data augmentation techniques. The best configuration of a fully automatic system obtains an average DTW distance per frame of 0.1057 and an area under the ROC curve (AUC) higher than 0.94.
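Interpolation of landmark trajectories, one of the strategy families evaluated, can be sketched as a linear resampler that maps a sequence to a fixed number of frames (illustrative only; the function name and per-coordinate handling are assumptions):

```python
def resample(seq, n_out):
    """Linearly resample a 1-D landmark trajectory to n_out frames."""
    n = len(seq)
    if n_out == 1:
        return [seq[0]]
    out = []
    for i in range(n_out):
        pos = i * (n - 1) / (n_out - 1)  # fractional index into the source sequence
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        t = pos - lo
        out.append(seq[lo] * (1 - t) + seq[hi] * t)
    return out
```

For multi-dimensional landmarks, the same routine would be applied per coordinate.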
29

Wang, Sheng-Yu, David Bau and Jun-Yan Zhu. „Rewriting geometric rules of a GAN“. ACM Transactions on Graphics 41, no. 4 (July 2022): 1–16. http://dx.doi.org/10.1145/3528223.3530065.

Annotation:
Deep generative models make visual content creation more accessible to novice users by automating the synthesis of diverse, realistic content based on a collected dataset. However, the current machine learning approaches miss a key element of the creative process - the ability to synthesize things that go far beyond the data distribution and everyday experience. To begin to address this issue, we enable a user to "warp" a given model by editing just a handful of original model outputs with desired geometric changes. Our method applies a low-rank update to a single model layer to reconstruct edited examples. Furthermore, to combat overfitting, we propose a latent space augmentation method based on style-mixing. Our method allows a user to create a model that synthesizes endless objects with defined geometric changes, enabling the creation of a new generative model without the burden of curating a large-scale dataset. We also demonstrate that edited models can be composed to achieve aggregated effects, and we present an interactive interface to enable users to create new models through composition. Empirical measurements on multiple test cases suggest the advantage of our method against recent GAN fine-tuning methods. Finally, we showcase several applications using the edited models, including latent space interpolation and image editing.
30

Rizvi, Syed Haider M., and Muntazir Abbas. „Lamb wave damage severity estimation using ensemble-based machine learning method with separate model network“. Smart Materials and Structures 30, no. 11 (22.10.2021): 115016. http://dx.doi.org/10.1088/1361-665x/ac2e1a.

Annotation:
Lamb wave-based damage estimation has great potential for structural health monitoring. However, designing a generalizable model that predicts accurate and reliable damage quantification results is still a practical challenge due to the complex behavior of waves with different damage severities. In recent years, machine learning (ML) algorithms have proven to be an efficient tool to analyze damage-modulated Lamb wave signals. In this study, ensemble-based ML algorithms are employed to develop a generalizable crack quantification model for thin metallic plates. For this, the scattering of Lamb wave signals due to different configurations of crack dimension and orientation is extensively studied. Various finite element simulation signals, representing distinct crack severities in terms of crack length, penetration, and orientation, are acquired. Realizing that both temporal and spectral information of a signal is extremely important for damage quantification, three time-frequency (TF) based damage-sensitive indices, namely energy concentration, TF flux, and coefficient of energy variance, are proposed. These damage features are extracted by employing the smoothed-pseudo Wigner–Ville distribution. After that, a data augmentation technique based on spline interpolation is applied to enhance the size of the dataset. Eventually, this fully developed damage dataset is deployed to train ensemble-based models. Here we propose a separate model network, in which different models are trained and then linked together to predict new and unseen datasets. The performance of the proposed model is demonstrated in two cases: first, simulated data incorporating high artificial noise are employed to test the model, and in the second scenario, experimental data in raw form are used. Results indicate that the proposed approach has the potential to yield a general model that gives reliable answers for crack quantification.
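The interpolation-based densification of a damage-feature curve can be shown schematically; for brevity this sketch substitutes piecewise-linear interpolation for the authors' spline, and the function name and density factor are invented for the example:

```python
def densify(xs, ys, factor=2):
    """Resample a feature curve (xs, ys) at `factor`-times density.

    Uses piecewise-linear interpolation between consecutive measured points
    (a spline would smooth curvature; linear keeps this sketch short).
    """
    new_x, new_y = [], []
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        for k in range(factor):
            t = k / factor  # position within the current segment
            new_x.append(x0 + t * (x1 - x0))
            new_y.append(y0 + t * (y1 - y0))
    new_x.append(xs[-1])
    new_y.append(ys[-1])
    return new_x, new_y
```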
31

Tang, Qunfeng, Zhencheng Chen, Carlo Menon, Rabab Ward and Mohamed Elgendi. „PPGTempStitch: A MATLAB Toolbox for Augmenting Annotated Photoplethysmogram Signals“. Sensors 21, no. 12 (10.06.2021): 4007. http://dx.doi.org/10.3390/s21124007.

Annotation:
An annotated photoplethysmogram (PPG) is required when evaluating PPG algorithms that have been developed to detect the onset and systolic peaks of PPG waveforms. However, few publicly accessible PPG datasets exist in which the onset and systolic peaks of the waveforms are annotated. Therefore, this study developed a MATLAB toolbox that stitches predetermined annotated PPGs in a random manner to generate a long, annotated PPG signal. With this toolbox, any combination of four annotated PPG templates that represent regular, irregular, fast rhythm, and noisy PPG waveforms can be stitched together to generate a long, annotated PPG. Furthermore, this toolbox can simulate real-life PPG signals by introducing different noise levels and PPG waveforms. The toolbox can implement two stitching methods: one based on the systolic peak and the other on the onset. Additionally, cubic spline interpolation is used to smooth the waveform around the stitching point, and a skewness index is used as a signal quality index to select the final signal output based on the stitching method used. The developed toolbox is free and open-source software, and a graphical user interface is provided. The method of synthesizing by stitching introduced in this paper is a data augmentation strategy that can help researchers significantly increase the size and diversity of annotated PPG signals available for training and testing different feature extraction algorithms.
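The core stitching idea can be sketched minimally; the toolbox additionally smooths around stitching points with cubic splines and selects outputs by a skewness index, which this illustration omits. The data layout and function name are assumptions for the example:

```python
def stitch(templates, order):
    """Concatenate annotated beat templates into one long signal.

    Each template is (waveform, peak_index); annotation indices are shifted
    by the running signal length so they stay valid in the stitched output.
    """
    signal, peaks = [], []
    for idx in order:
        wave, peak = templates[idx]
        peaks.append(len(signal) + peak)  # re-anchor the annotation
        signal.extend(wave)
    return signal, peaks
```

Randomizing `order` over regular, irregular, fast-rhythm, and noisy templates is what turns this into a data augmentation strategy.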
32

Li, He, Shuaipeng Yang, Rui Zhang, Peng Yu, Zhumu Fu, Xiangyang Wang, Michel Kadoch and Yang Yang. „Detection of Floating Objects on Water Surface Using YOLOv5s in an Edge Computing Environment“. Water 16, no. 1 (25.12.2023): 86. http://dx.doi.org/10.3390/w16010086.

Annotation:
To address the easy false detection of small targets in river floating-object detection and the difficulty of deploying an overly large model, a new method is proposed based on an improved YOLOv5s. A new data augmentation method for small objects is designed to enrich the dataset and improve the model’s robustness. Distinct feature extraction network levels incorporate different coordinate attention mechanism pooling methods to enhance the extraction of effective feature information from small targets and improve small-target detection accuracy. Then, a shallow feature map with 4-fold down-sampling is added, and feature fusion is performed using the Feature Pyramid Network. At the same time, bilinear interpolation replaces the original up-sampling method to retain feature information and enhance the network’s ability to sense small targets. Complex network operations are optimized to better suit embedded platforms. Finally, the model is channel-pruned to ease deployment. The experimental results show that this method has better feature extraction capability as well as higher detection accuracy. Compared with the original YOLOv5 algorithm, accuracy is improved by 15.7%, the false detection rate for small targets is reduced by 83%, detection accuracy reaches 92.01% in edge testing, and the inference speed reaches 33 frames per second, which meets real-time requirements.
33

Amirrajab, Sina, Yasmina Al Khalil, Cristian Lorenz, Jürgen Weese, Josien Pluim and Marcel Breeuwer. „Pathology Synthesis of 3D-Consistent Cardiac MR Images using 2D VAEs and GANs“. Machine Learning for Biomedical Imaging 2, June 2023 (08.06.2023): 288–311. http://dx.doi.org/10.59275/j.melba.2023-1g8b.

Annotation:
We propose a method for synthesizing cardiac magnetic resonance (MR) images with plausible heart pathologies and realistic appearances for the purpose of generating labeled data for the application of supervised deep-learning (DL) training. The image synthesis consists of label deformation and label-to-image translation tasks. The former is achieved via latent space interpolation in a VAE model, while the latter is accomplished via a label-conditional GAN model. We devise three approaches for label manipulation in the latent space of the trained VAE model; i) intra-subject synthesis aiming to interpolate the intermediate slices of a subject to increase the through-plane resolution, ii) inter-subject synthesis aiming to interpolate the geometry and appearance of intermediate images between two dissimilar subjects acquired with different scanner vendors, and iii) pathology synthesis aiming to synthesize a series of pseudo-pathological synthetic subjects with characteristics of a desired heart disease. Furthermore, we propose to model the relationship between 2D slices in the latent space of the VAE prior to reconstruction for generating 3D-consistent subjects from stacking up 2D slice-by-slice generations. We demonstrate that such an approach could provide a solution to diversify and enrich an available database of cardiac MR images and to pave the way for the development of generalizable DL-based image analysis algorithms. We quantitatively evaluate the quality of the synthesized data in an augmentation scenario to achieve generalization and robustness to multi-vendor and multi-disease data for image segmentation. Our code is available at https://github.com/sinaamirrajab/CardiacPathologySynthesis.
34

Hu, Wenyi, Wei Hong, Hongkun Wang, Mingzhe Liu and Shan Liu. „A Study on Tomato Disease and Pest Detection Method“. Applied Sciences 13, no. 18 (06.09.2023): 10063. http://dx.doi.org/10.3390/app131810063.

Annotation:
In recent years, with the rapid development of artificial intelligence technology, computer vision-based pest detection technology has been widely used in agricultural production. Tomato diseases and pests are serious problems affecting tomato yield and quality, so it is important to detect them quickly and accurately. In this paper, we propose a tomato disease and pest detection model based on an improved YOLOv5n to overcome the problems of low accuracy and large model size in traditional pest detection methods. Firstly, we use the Efficient Vision Transformer as the feature extraction backbone network to reduce model parameters and computational complexity while improving detection accuracy, thus solving the problems of poor real-time performance and model deployment. Second, we replace the original nearest neighbor interpolation upsampling module with the lightweight general-purpose upsampling operator Content-Aware ReAssembly of FEatures to reduce feature information loss during upsampling. Finally, we use Wise-IoU instead of the original CIoU as the regression loss function of the target bounding box to improve the regression prediction accuracy of the predicted bounding box while accelerating the convergence speed of the regression loss function. We perform statistical analysis on the experimental results of tomato diseases and pests under data augmentation conditions. The results show that the improved algorithm improves mAP50 and mAP50:95 by 2.3% and 1.7%, respectively, while reducing the number of model parameters by 0.4 M and the computational complexity by 0.9 GFLOPs. The improved model has a parameter count of only 1.6 M and a computational complexity of only 3.3 GFLOPs, demonstrating a certain advantage over other mainstream object detection algorithms in terms of detection accuracy, model parameter count, and computational complexity. The experimental results show that this method is suitable for the early detection of tomato diseases and pests.
35

Gautam, Vinay Kumar, Mahesh Kothari, Pradeep Kumar Singh, Sita Ram Bhakar and Kamal Kishore Yadav. „Spatial mapping of groundwater quality using GIS for Jakham River basin of Southern Rajasthan“. Environment Conservation Journal 23, no. 1&2 (22.02.2022): 234–43. http://dx.doi.org/10.36953/ecj.021936-2175.

Annotation:
The physico-chemical analysis of groundwater quality plays a significant role in managing water resources for drinking as well as irrigation in sub-humid and semi-arid agro-climatic areas. In this study, hydrogeochemical analyses and spatial mapping of groundwater quality in the Jakham River Basin, located in the southern part of Rajasthan, were carried out. The groundwater quality samples were collected from 76 wells marked on a grid map of 5×5 km² cells. A spatial distribution of sampling locations in the basin was prepared using a GIS (geographical information system) tool based on six physico-chemical parameters, i.e., pH, EC, TDS, Cl, NO3 and F. The groundwater quality data from the pre- and post-monsoon seasons of 2019-20 were used to carry out a detailed analysis of water quality parameters. The water quality maps for the entire basin were generated using an IDW interpolation technique for these parameters at the identified locations. Higher values of TDS and EC were found in the south-eastern part and along the roadsides of the study area, which are dominated by agricultural activities and industrial influence. Concentrations were observed to be higher in the post-monsoon period. For EC and TDS, the major part (>50%) of the study area falls under the safe limit for potable water. Most of the basin showed fluoride concentrations (0.40-0.80 mg/l) in both seasons, lower than the permissible limit. Higher NO3 concentration was observed after the rainy season. The influence of geogenic activities can be clearly seen in the groundwater quality of the basin. The resultant map shows that the entire basin has good groundwater quality for human consumption. Hence, this study provides suggestions for preparing strategies for the proper management and augmentation of groundwater in the Jakham River Basin.
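Inverse distance weighting (IDW), the interpolation technique used for the spatial maps, can be sketched as follows (a generic implementation, not the authors' GIS workflow; the function name and power parameter default are assumptions):

```python
def idw(known, query, power=2):
    """Inverse-distance-weighted estimate at `query` from (x, y, value) samples."""
    num = den = 0.0
    for x, y, v in known:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return v  # query coincides with a sample point
        w = 1.0 / d2 ** (power / 2)  # weight falls off with distance^power
        num += w * v
        den += w
    return num / den
```

Evaluating `idw` over a raster of query points yields the kind of continuous water-quality surface shown in the study's maps.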
36

Farhadi, Moslem, Amir Hossein Foruzan, Mina Esfandiarkhani, Yen-Wei Chen and Hongjie Hu. „RECONSTRUCTION OF HIGH-RESOLUTION HEPATIC TUMOR CT IMAGES USING AN AUGMENTATION-BASED SUPER-RESOLUTION TECHNIQUE“. Biomedical Engineering: Applications, Basis and Communications 33, no. 04 (07.04.2021): 2150026. http://dx.doi.org/10.4015/s1016237221500265.

Annotation:
Improving the resolution of medical images is crucial for diagnosis, feature extraction, and data retrieval. A significant group of super-resolution algorithms is multi-frame techniques. However, they are not well suited to medical data since they need several frames of the same scene, which brings a high risk of radiation or requires a considerable acquisition time. We propose a new data augmentation technique and employ it in a multi-frame image reconstruction algorithm to improve the resolution of pathologic liver CT images. The input to our algorithm is a 3D CT scan of the abdominal region. Neighboring slices are considered to increase the resolution of a single slice. Augmented slices are prepared using the nearby slices and an interpolation approach. The new data is aligned to the original slice and used as an augmented version of the data. Then, a multi-frame scheme is utilized to reconstruct the high-resolution image. Our method’s novelty is to remove the need for multiple scans of a patient to employ multi-frame techniques in medical applications. The results reveal that the proposed method is superior to conventional interpolation methods and available augmentation techniques. Compared to tricubic interpolation, the proposed method improved the PSNR by 3.1. Concerning conventional augmentation techniques, it enhanced the SSIM measure by 0.06. The proposed algorithm improved the SSIM by 0.11 compared to traditional interpolation techniques and by 0.1 compared to recent research. Therefore, a multi-frame super-resolution technique has the potential to reconstruct medical data better.
37

Tiwari, Nitin, Nicola Baldo, Neelima Satyam and Matteo Miani. „Mechanical Characterization of Industrial Waste Materials as Mineral Fillers in Asphalt Mixes: Integrated Experimental and Machine Learning Analysis“. Sustainability 14, no. 10 (13.05.2022): 5946. http://dx.doi.org/10.3390/su14105946.

Annotation:
In this study, the effect of seven industrial waste materials as mineral fillers in asphalt mixtures was investigated. Silica fume (SF), limestone dust (LSD), stone dust (SD), rice husk ash (RHA), fly ash (FA), brick dust (BD), and marble dust (MD) were used to prepare the asphalt mixtures. The obtained experimental results were compared with ordinary Portland cement (OPC), which is used as a conventional mineral filler. The physical, chemical, and morphological assessment of the fillers was performed to evaluate the suitability of industrial waste to replace the OPC. The volumetric, strength, and durability properties of the modified asphalt mixes were examined to evaluate their performance. The experimental data have been processed through artificial neural networks (ANNs), using k-fold cross-validation as a resampling method and two different activation functions, to develop predictive models of the main mechanical and volumetric parameters. In the current research, the two most relevant parameters investigated are the filler type and the filler content, given that they both greatly affect the mechanical performance of asphalt concrete. The asphalt mixes have been optimized by means of the Marshall stability analysis, and after that, for each filler, the optimum asphalt mixtures were investigated by carrying out indirect tensile strength, moisture susceptibility, and abrasion loss tests. The moisture sensitivity of the modified asphalt mixtures is within the acceptable limit according to the Indian standard. Asphalt mixes modified with the finest mineral fillers exhibited superior stiffness and cracking resistance. Experimental results show higher moisture resistance in calcium-dominant mineral filler-modified asphalt mixtures. Except for mixes prepared with RHA and MD (4% filler content), all the asphalt mixtures considered in this study show MS values higher than 10 kN, as prescribed by Indian regulations. All the values of the void ratio for each asphalt mix range between 3% and 5%, and MQ results fall between 2 kN/mm and 6 kN/mm, within the acceptable range of the Indian specification. Partly due to the implementation of a data augmentation strategy based on interpolation, the ANN modeling was very successful, showing a coefficient of correlation averaged over all output variables equal to 0.9967.
38

Vu, Thanh, Baochen Sun, Bodi Yuan, Alex Ngai, Yueqi Li and Jan-Michael Frahm. „Supervision Interpolation via LossMix: Generalizing Mixup for Object Detection and Beyond“. Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (24.03.2024): 5280–88. http://dx.doi.org/10.1609/aaai.v38i6.28335.

Annotation:
The success of data mixing augmentations in image classification tasks has been well-received. However, these techniques cannot be readily applied to object detection due to challenges such as spatial misalignment, foreground/background distinction, and plurality of instances. To tackle these issues, we first introduce a novel conceptual framework called Supervision Interpolation (SI), which offers a fresh perspective on interpolation-based augmentations by relaxing and generalizing Mixup. Based on SI, we propose LossMix, a simple yet versatile and effective regularization that enhances the performance and robustness of object detectors and more. Our key insight is that we can effectively regularize the training on mixed data by interpolating their loss errors instead of ground truth labels. Empirical results on the PASCAL VOC and MS COCO datasets demonstrate that LossMix can consistently outperform state-of-the-art methods widely adopted for detection. Furthermore, by jointly leveraging LossMix with unsupervised domain adaptation, we successfully improve existing approaches and set a new state of the art for cross-domain object detection.
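The key insight, interpolating loss errors rather than ground-truth labels, can be shown in miniature with a toy squared-error loss (a schematic of the idea, not the paper's detection objective; names are invented for the example):

```python
def lossmix(loss_fn, pred_on_mixed, target_a, target_b, lam):
    """Interpolate the losses against each original target instead of mixing labels.

    This sidesteps the need for a well-defined "mixed label", which is exactly
    what makes label mixing awkward for object detection.
    """
    return lam * loss_fn(pred_on_mixed, target_a) + (1 - lam) * loss_fn(pred_on_mixed, target_b)

def squared_error(pred, target):
    """Toy regression loss standing in for a real detection loss."""
    return (pred - target) ** 2
```

For classification with one-hot targets and cross-entropy, this reduces to standard Mixup, which is the sense in which LossMix generalizes it.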
39

Ye, Mao, Haitao Wang and Zheqian Chen. „MSMix: An Interpolation-Based Text Data Augmentation Method Manifold Swap Mixup“. SSRN Electronic Journal, 2023. http://dx.doi.org/10.2139/ssrn.4471276.

40

Melícias, Francisco S., Tiago F. R. Ribeiro, Carlos Rabadão, Leonel Santos and Rogério Luís de C. Costa. „GPT and Interpolation-based Data Augmentation for Multiclass Intrusion Detection in IIoT“. IEEE Access, 2024, 1. http://dx.doi.org/10.1109/access.2024.3360879.

41

Ju, Zedong, Yinsheng Chen, Yukang Qiang, Xinyi Chen, Chao Ju and Jingli Yang. „A systematic review of data augmentation methods for intelligent fault diagnosis of rotating machinery under limited data conditions“. Measurement Science and Technology, September 13, 2024. http://dx.doi.org/10.1088/1361-6501/ad7a97.

Annotation:
Abstract: In recent years, research on the intelligent fault diagnosis of rotating machinery has made remarkable progress, bringing considerable economic benefits to industrial production. However, in the industrial environment, the accuracy and stability of the diagnostic model face severe challenges due to the extremely limited fault data. Data augmentation methods have the capability to increase both the quantity and diversity of data without altering the key characteristics of the original data, which is particularly important for the development of intelligent fault diagnosis of rotating machinery under limited data conditions (IFD-RM-LDC). Despite the abundant achievements in research on data augmentation methods, there is a lack of systematic reviews and clear future development directions. Therefore, this paper systematically reviews and discusses data augmentation methods for IFD-RM-LDC. Firstly, existing data augmentation methods are categorized into three groups: synthetic minority over-sampling technique (SMOTE)-based methods, generative model-based methods, and data transformation-based methods. Then, these three methods are introduced in detail and discussed in depth: SMOTE-based methods synthesize new samples through a spatial interpolation strategy; generative model-based methods generate new samples according to the distribution characteristics of existing samples; data transformation-based methods generate new samples through a series of transformation operations. Finally, the challenges faced by current data augmentation methods, including their limitations in generalization, real-time performance, and interpretability, as well as the absence of robust evaluation metrics for generated samples, have been summarized, and potential solutions to address these issues have been explored.
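The SMOTE-style spatial interpolation strategy the review describes can be sketched as follows; this is a simplified nearest-neighbour variant for illustration (real implementations such as imbalanced-learn differ in details), and the toy minority samples are hypothetical:

```python
import numpy as np

def smote_like(X_minority, k, n_new, seed=0):
    """SMOTE-style oversampling: interpolate between a minority sample
    and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_minority))
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip index 0: the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                 # position along the segment
        out.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(out)

# toy minority-class fault samples at the corners of the unit square
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_syn = smote_like(X_min, k=2, n_new=10)
```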
42

Baydaroğlu Yeşilköy, Özlem, and Ibrahim Demir. „Temporal and spatial satellite data augmentation for deep learning-based rainfall nowcasting“. Journal of Hydroinformatics, March 13, 2024. http://dx.doi.org/10.2166/hydro.2024.235.

Annotation:
Abstract: The significance of improving rainfall prediction methods has escalated due to climate change-induced flash floods and severe flooding. In this study, rainfall nowcasting has been studied utilizing NASA Giovanni (Goddard Interactive Online Visualization and Analysis Infrastructure) satellite-derived precipitation products and the convolutional long short-term memory (ConvLSTM) approach. The goal of the study is to assess the impact of data augmentation on flood nowcasting. Due to data requirements of deep learning-based prediction methods, data augmentation is performed using eight different interpolation techniques. Spatial, temporal, and spatio-temporal interpolated rainfall data are used to conduct a comparative analysis of the results obtained through nowcasting rainfall. This research examines two catastrophic floods that transpired in the Türkiye Marmara Region in 2009 and the Central Black Sea Region in 2021, which are selected as the focal case studies. The Marmara and Black Sea regions are prone to frequent flooding, which, due to the dense population, has devastating consequences. Furthermore, these regions exhibit distinct topographical characteristics and precipitation patterns, and the frontal systems that impact them are also dissimilar. The nowcast results for the two regions exhibit a significant difference. Although data augmentation significantly reduced the error values by 59% for one region, it did not yield the same effectiveness for the other region.
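One of the simplest spatial interpolation techniques usable for this kind of augmentation is inverse-distance weighting. The sketch below is a generic illustration with hypothetical station locations and rainfall values, not the study's actual pipeline:

```python
import numpy as np

def idw(xy_known, v_known, xy_query, power=2.0):
    """Inverse-distance weighting: estimate values at query points as a
    distance-weighted average of known-point values."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)              # avoid division by zero at a gauge
    w = 1.0 / d ** power
    return (w @ v_known) / w.sum(axis=1)

# four hypothetical rain gauges at the corners of a unit cell (mm/h values)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([2.0, 4.0, 6.0, 8.0])
center = idw(pts, vals, np.array([[0.5, 0.5]]))
```

At the cell centre all four gauges are equidistant, so the estimate is their plain mean; at a gauge location the clamped distance makes the estimate collapse to that gauge's value, which is the expected interpolating behaviour.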
43

Song, Xiao-Lu, Yan-Lin He, Xing-Yuan Li, Qun-Xiong Zhu and Yuan Xu. „Novel virtual sample generation method based on data augmentation and weighted interpolation for soft sensing with small data“. Expert Systems with Applications, April 2023, 120085. http://dx.doi.org/10.1016/j.eswa.2023.120085.

44

Hu, Qiming, Dongjin Jiang, Guo Zhang, Ya Zhang and Jianping Wang. „Space-time Image Velocimetry in Blurred Scenes Based on BSTI-DCGAN Data Augmentation“. Measurement Science and Technology, May 20, 2024. http://dx.doi.org/10.1088/1361-6501/ad4dd0.

Annotation:
Abstract: Due to the limited sample quantity and the complex data collection process of the blurred space-time image (BSTI) dataset, deep learning-based space-time image velocimetry (STIV) yields larger errors when applied to blurry videos. To enhance measurement accuracy, we propose space-time image velocimetry in blurred scenes based on BSTI-DCGAN data augmentation. First, BSTI-DCGAN is developed based on the deep convolutional generative adversarial network (DCGAN). This network uses a bilinear interpolation-convolution module for upsampling and integrates coordinated attention (CA) and multi-concatenation attention (MCA) to enhance the resemblance between generated and real images. Next, the dataset is further expanded using artificially synthesized space-time images. Subsequently, all space-time images are transformed into spectrograms to create a training dataset for the classification network. Finally, the primary spectral direction is detected using the classification network. The experimental results indicate that our approach effectively augments the dataset and improves the accuracy of practical measurements. Under the condition of video blur, the relative errors of the average flow velocity and discharge are 3.92% and 2.72%, respectively.
45

Tan, Xuyan, Xuanxuan Sun, Weizhong Chen, Bowen Du, Junchen Ye and Leilei Sun. „Investigation on the data augmentation using machine learning algorithms in structural health monitoring information“. Structural Health Monitoring, March 11, 2021, 147592172199623. http://dx.doi.org/10.1177/1475921721996238.

Annotation:
Structural health monitoring systems play a vital role in the smart management of civil engineering. Many efforts have been devoted to improving data quality through mean or median values or simple interpolation methods, but these are low-precision and do not fully reflect field conditions, since they neglect the strong spatio-temporal correlations in monitoring datasets and overlook the various forms of abnormal conditions. Along this line, this article proposes an integrated framework for data augmentation in structural health monitoring systems using machine learning algorithms. As a case study, monitoring data obtained from the structural health monitoring system of the Nanjing Yangtze River Tunnel are selected for experiments. First, the original data are reconstructed based on an improved non-negative matrix factorization model to detect abnormal conditions occurring in different cases. Subsequently, multiple supervised learning methods are introduced to process the abnormal conditions detected by non-negative matrix factorization. The effectiveness of the supervised learning methods at different missing ratios is discussed to improve the framework's generality. The experimental results indicate that non-negative matrix factorization can recognize different abnormal situations simultaneously, and the supervised learning algorithms impute datasets well under different missing rates. The presented framework is therefore applied to this case for data augmentation, which is crucial for further analysis and provides an important reference for similar projects.
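The non-negative matrix factorization reconstruction step can be sketched with plain multiplicative updates. This is a generic NMF, not the paper's improved variant, and the data below are synthetic:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain multiplicative-update NMF: factor V ~= W @ H with
    non-negative W and H; large reconstruction residuals can then be
    used to flag abnormal readings."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 0.1
    H = rng.random((rank, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((8, 2)) @ rng.random((2, 5))   # exactly rank-2, non-negative
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the toy matrix is exactly rank 2 and non-negative, the rank-2 factorization reconstructs it almost perfectly; on real monitoring data, rows with unusually large residuals are candidates for abnormal conditions.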
46

Jiang, Shan, Bowen Li, Zhiyong Yang, Yuhua Li and Zeyang Zhou. „A back propagation neural network based respiratory motion modelling method“. International Journal of Medical Robotics and Computer Assisted Surgery 20, no. 3 (May 28, 2024). http://dx.doi.org/10.1002/rcs.2647.

Annotation:
Abstract: Background: This study presents the development of a backpropagation neural network-based respiratory motion modelling method (BP-RMM) for precisely tracking arbitrary points within lung tissue throughout free respiration, encompassing deep inspiration and expiration phases. Methods: Internal and external respiratory data from four-dimensional computed tomography (4DCT) are processed using various artificial intelligence algorithms. Data augmentation through polynomial interpolation is employed to enhance dataset robustness. A BP neural network is then constructed to comprehensively track lung tissue movement. Results: The BP-RMM demonstrates promising accuracy. In cases from the public 4DCT dataset, the average target registration error (TRE) between authentic deep respiration phases and those forecast by BP-RMM for 75 marked points is 1.819 mm. Notably, the TRE for normal respiration phases is significantly lower, with a minimum error of 0.511 mm. Conclusions: The proposed method is validated for its high accuracy and robustness, establishing it as a promising tool for surgical navigation within the lung.
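Polynomial-interpolation augmentation of a sparse motion trace can be sketched as follows; a toy sine trace stands in for real 4DCT respiratory data, and the degree and densification factor are arbitrary illustrative choices:

```python
import numpy as np

def densify(t, s, factor=4, deg=3):
    """Fit a low-degree polynomial to a sparse motion trace and resample
    it on a denser grid to create extra training points."""
    coeffs = np.polyfit(t, s, deg)
    t_dense = np.linspace(t.min(), t.max(), len(t) * factor)
    return t_dense, np.polyval(coeffs, t_dense)

t = np.linspace(0.0, 1.0, 10)            # 10 sampled breathing phases
s = np.sin(2.0 * np.pi * t)              # toy periodic motion trace
t_dense, s_dense = densify(t, s, factor=4, deg=5)
```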
47

Gan, Yanglan, Yuhan Chen, Guangwei Xu, Wenjing Guo and Guobing Zou. „Deep enhanced constraint clustering based on contrastive learning for scRNA-seq data“. Briefings in Bioinformatics, June 13, 2023. http://dx.doi.org/10.1093/bib/bbad222.

Annotation:
Abstract: Single-cell RNA sequencing (scRNA-seq) measures transcriptome-wide gene expression at single-cell resolution. Clustering analysis of scRNA-seq data enables researchers to characterize cell types and states, shedding new light on cell-to-cell heterogeneity in complex tissues. Recently, self-supervised contrastive learning has become a prominent technique for underlying feature representation learning. However, for the noisy, high-dimensional and sparse scRNA-seq data, existing methods still encounter difficulties in capturing the intrinsic patterns and structures of cells, and seldom utilize prior knowledge, resulting in clusters that mismatch the real situation. To this end, we propose scDECL, a novel deep enhanced constraint clustering algorithm for scRNA-seq data analysis based on contrastive learning and pairwise constraints. Specifically, based on interpolated contrastive learning, a pre-training model is trained to learn the feature embedding, and clustering is then performed according to the constructed enhanced pairwise constraints. In the pre-training stage, a mixup data augmentation strategy and an interpolation loss are introduced to improve the diversity of the dataset and the robustness of the model. In the clustering stage, the prior information is converted into enhanced pairwise constraints to guide the clustering. To validate the performance of scDECL, we compare it with six state-of-the-art algorithms on six real scRNA-seq datasets. The experimental results demonstrate the proposed algorithm outperforms the six competing methods. In addition, the ablation studies on each module of the algorithm indicate that these modules are complementary to each other and effective in improving the performance of the proposed algorithm. Our method scDECL is implemented in Python using the PyTorch machine-learning library, and it is freely available at https://github.com/DBLABDHU/scDECL.
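The mixup strategy referenced in the pre-training stage can be sketched generically as follows; the expression-like data and cluster labels are toy placeholders, not the scDECL implementation:

```python
import numpy as np

def mixup(x, y_onehot, alpha=0.4, seed=0):
    """Classic mixup: one Beta-drawn weight blends each sample with a
    randomly permuted partner, applied to inputs and one-hot labels alike."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix, lam

x = np.random.default_rng(0).normal(size=(6, 10))  # 6 toy "cells", 10 "genes"
y = np.eye(3)[[0, 1, 2, 0, 1, 2]]                  # 3 hypothetical cluster labels
x_mix, y_mix, lam = mixup(x, y)
```

Since each mixed label row is a convex combination of two one-hot rows, the rows of `y_mix` still sum to one, so they remain valid soft targets.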
48

Lee, Jeong-Gil, and Yoo-Geun Ham. „Impact of satellite thickness data assimilation on bias reduction in Arctic sea ice concentration“. npj Climate and Atmospheric Science 6, no. 1 (June 24, 2023). http://dx.doi.org/10.1038/s41612-023-00402-6.

Annotation:
Abstract: The impact of assimilating satellite-retrieved Arctic sea ice thickness (SIT) on simulating sea ice concentration (SIC) climatology in CICE5 is examined using a data assimilation (DA) system based on the ensemble optimal interpolation. The DA of the SIT satellite data of CryoSat-2 and SMOS during 2011–2019 significantly reduces the climatological bias of SIC and SIT in both sea ice melting and growing seasons. Moreover, the response of SIC to SIT change is strongly dependent on the seasons and latitudinal locations. The SIT in the inner ice zone thickens due to the SIT DA during the boreal winter wherein the SIT observation is available; the ice melting throughout the subsequent seasons is attenuated to increase SIC during the boreal summer to reduce the simultaneous SIC bias. In marginal ice zones, the positive SIT bias depicted in the control simulation is significantly reduced by SIT DA, which reduces the positive SIC bias. The idealized experiments of reducing the SIT show that the enhanced ice bottom melting process plays a crucial role in reducing the SIC; the prescribed SIT thinning increases the ice bulk salinity due to the weak gravity drainage of brine and increases the ice bulk temperature due to the decrease of the sea ice albedo. The augmentation of the ice salinity and temperature contributes to the shrinkage of the ice enthalpy, boosting the bottom melting process, which leads to SIC decrease.
49

Lampier, Lucas Côgo, Carlos Torturella Valadão, Leticia Araújo Silva, Denis Delisle-Rodriguez, Eliete Maria de Oliveira Caldeira and Teodiano Freire Bastos Filho. „A deep learning approach to estimate pulse rate by remote photoplethysmography“. Physiological Measurement, June 21, 2022. http://dx.doi.org/10.1088/1361-6579/ac7b0b.

Annotation:
Abstract: Objective. This study proposes a U-net-shaped deep neural network (DNN) model to extract remote photoplethysmography (rPPG) signals from skin color signals to estimate pulse rate (PR). Approach. Three input window sizes are used as input to the DNN: 256 samples (5.12 s), 512 samples (10.24 s), and 1024 samples (20.48 s). A data augmentation algorithm based on interpolation is also used here to artificially increase the number of training samples. Main results. The proposed model outperformed a prior-knowledge rPPG method for input windows of 256 and 512 samples. It was also found that the data augmentation procedure only increased the performance for the window of 1024 samples. The trained model achieved a mean absolute error (MAE) of 3.97 beats per minute (BPM) and root mean squared error (RMSE) of 6.47 BPM for the 256-sample window, and an MAE of 3.00 BPM and RMSE of 5.45 BPM for the 512-sample window. In contrast, the prior-knowledge rPPG method obtained an MAE of 8.04 BPM and RMSE of 16.63 BPM for the 256-sample window, and an MAE of 3.49 BPM and RMSE of 7.92 BPM for the 512-sample window. For the longest window (1024 samples), the concordance of the predicted PRs from the DNNs with the true PRs was higher when the data augmentation procedure was applied. Significance. These results demonstrate a large potential of this technique for PR estimation, showing that the proposed DNN may generate reliable rPPG signals even with short window lengths (5.12 s and 10.24 s), suggesting that it needs less data for a faster rPPG measurement and PR estimation.
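Interpolation-based augmentation of a fixed-length signal window can be sketched with simple linear resampling; a toy sinusoid stands in for a real skin-color signal here, and this is an illustration rather than the paper's exact algorithm:

```python
import numpy as np

def resample_augment(sig, factor):
    """Linearly interpolate a window onto a stretched or compressed time
    grid; small factors yield plausible extra training windows."""
    n = len(sig)
    new_t = np.linspace(0.0, n - 1.0, int(n * factor))
    return np.interp(new_t, np.arange(n), sig)

sig = np.sin(np.linspace(0.0, 4.0 * np.pi, 256))   # toy 256-sample window
longer = resample_augment(sig, 1.1)                # mild time stretch
shorter = resample_augment(sig, 0.9)               # mild time compression
```

Stretching or compressing the time base slightly shifts the apparent pulse rate, so each synthetic window comes with a correspondingly rescaled PR label in practice.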
50

Chen, Zhiguo, Shuangshuang Xing and Xuanyu Ren. „Efficient Windows malware identification and classification scheme for plant protection information systems“. Frontiers in Plant Science 14 (February 15, 2023). http://dx.doi.org/10.3389/fpls.2023.1123696.

Annotation:
Due to developments in science and technology, the field of plant protection and the information industry have become increasingly integrated, which has resulted in the creation of plant protection information systems. Plant protection information systems have modernized how pest levels are monitored and improved overall control capabilities. They also provide data to support crop pest monitoring and early warnings and promote the sustainable development of plant protection networks, visualization, and digitization. However, cybercriminals use technologies such as code reuse and automation to generate malware variants, resulting in continuous attacks on plant protection information terminals. Therefore, effective identification of rapidly growing malware and its variants has become critical. Recent studies have shown that malware and its variants can be effectively identified and classified using convolutional neural networks (CNNs) to analyze the similarity between malware binary images. However, the malware images generated by such schemes have the problem of image size imbalance, which affects the accuracy of malware classification. In order to solve the above problems, this paper proposes a malware identification and classification scheme based on bicubic interpolation to improve the security of a plant protection information terminal system. We used the bicubic interpolation algorithm to reconstruct the generated malware images to solve the problem of image size imbalance. We used the Cycle-GAN model for data augmentation to balance the number of samples among malware families and build an efficient malware classification model based on CNNs to improve the malware identification and classification performance of the system. Experimental results show that the system can significantly improve malware classification efficiency. 
The classification accuracy on RGB and grayscale images generated from the Microsoft Malware Classification Challenge dataset (BIG2015) reaches 99.76% and 99.62%, respectively.
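Interpolation-based resizing of the kind used to rebalance image sizes can be sketched as follows. For brevity this uses a bilinear kernel as a simpler stand-in for the paper's bicubic one, and the 4x4 toy image is hypothetical:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear resize of a 2-D grayscale image to a fixed target size."""
    in_h, in_w = img.shape
    ys = np.linspace(0.0, in_h - 1.0, out_h)
    xs = np.linspace(0.0, in_w - 1.0, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]                  # vertical blend weights
    wx = (xs - x0)[None, :]                  # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "malware image"
big = bilinear_resize(img, 8, 8)
```

A bicubic kernel differs only in weighting a 4x4 neighbourhood with a cubic function instead of blending the 2x2 neighbourhood linearly; either way, all malware images end up at a common input size for the CNN.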