
Journal articles on the topic 'Limited training data'

Consult the top 50 journal articles for your research on the topic 'Limited training data.'

1

Oh, Se Eun, Nate Mathews, Mohammad Saidur Rahman, Matthew Wright, and Nicholas Hopper. "GANDaLF: GAN for Data-Limited Fingerprinting." Proceedings on Privacy Enhancing Technologies 2021, no. 2 (January 29, 2021): 305–22. http://dx.doi.org/10.2478/popets-2021-0029.

Abstract:
We introduce Generative Adversarial Networks for Data-Limited Fingerprinting (GANDaLF), a new deep-learning-based technique to perform Website Fingerprinting (WF) on Tor traffic. In contrast to most earlier work on deep-learning for WF, GANDaLF is intended to work with few training samples, and achieves this goal through the use of a Generative Adversarial Network to generate a large set of “fake” data that helps to train a deep neural network in distinguishing between classes of actual training data. We evaluate GANDaLF in low-data scenarios including as few as 10 training instances per site, and in multiple settings, including fingerprinting of website index pages and fingerprinting of non-index pages within a site. GANDaLF achieves closed-world accuracy of 87% with just 20 instances per site (and 100 sites) in standard WF settings. In particular, GANDaLF can outperform Var-CNN and Triplet Fingerprinting (TF) across all settings in subpage fingerprinting. For example, GANDaLF outperforms TF by a 29% margin and Var-CNN by 38% for training sets using 20 instances per site.
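The core idea in this abstract, padding a tiny labelled set with generated samples before training a classifier, can be illustrated with a much cruder stand-in for the GAN: a per-class Gaussian sampler. All names and shapes below are invented for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_synthetic(X, y, n_fake=50):
    """Mimic the 'generate extra samples per class' idea by sampling from a
    per-class Gaussian fit to the few real instances. (GANDaLF itself uses a
    GAN; this Gaussian sampler is only an illustrative stand-in.)"""
    X_parts, y_parts = [X], [y]
    for c in np.unique(y):
        Xc = X[y == c]
        mu, sd = Xc.mean(axis=0), Xc.std(axis=0) + 1e-6
        X_parts.append(rng.normal(mu, sd, size=(n_fake, X.shape[1])))
        y_parts.append(np.full(n_fake, c))
    return np.concatenate(X_parts), np.concatenate(y_parts)

# 10 real instances per class, 2 classes, 5 features each
X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])
y = np.array([0] * 10 + [1] * 10)
X_big, y_big = augment_with_synthetic(X, y)   # 20 real + 100 synthetic rows
```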
2

McLaughlin, Niall, Ji Ming, and Danny Crookes. "Robust Multimodal Person Identification With Limited Training Data." IEEE Transactions on Human-Machine Systems 43, no. 2 (March 2013): 214–24. http://dx.doi.org/10.1109/tsmcc.2012.2227959.

3

Zhang, Mingyang, Berrak Sisman, Li Zhao, and Haizhou Li. "DeepConversion: Voice conversion with limited parallel training data." Speech Communication 122 (September 2020): 31–43. http://dx.doi.org/10.1016/j.specom.2020.05.004.

4

Qian, Tieyun, Bing Liu, Li Chen, Zhiyong Peng, Ming Zhong, Guoliang He, Xuhui Li, and Gang Xu. "Tri-Training for authorship attribution with limited training data: a comprehensive study." Neurocomputing 171 (January 2016): 798–806. http://dx.doi.org/10.1016/j.neucom.2015.07.064.

5

Saunders, Sara L., Ethan Leng, Benjamin Spilseth, Neil Wasserman, Gregory J. Metzger, and Patrick J. Bolan. "Training Convolutional Networks for Prostate Segmentation With Limited Data." IEEE Access 9 (2021): 109214–23. http://dx.doi.org/10.1109/access.2021.3100585.

6

Zhao, Yao, Dong Joo Rhee, Carlos Cardenas, Laurence E. Court, and Jinzhong Yang. "Training deep‐learning segmentation models from severely limited data." Medical Physics 48, no. 4 (February 19, 2021): 1697–706. http://dx.doi.org/10.1002/mp.14728.

7

Hoffbeck, J. P., and D. A. Landgrebe. "Covariance matrix estimation and classification with limited training data." IEEE Transactions on Pattern Analysis and Machine Intelligence 18, no. 7 (July 1996): 763–67. http://dx.doi.org/10.1109/34.506799.

8

Cui, Kaiwen, Jiaxing Huang, Zhipeng Luo, Gongjie Zhang, Fangneng Zhan, and Shijian Lu. "GenCo: Generative Co-training for Generative Adversarial Networks with Limited Data." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 499–507. http://dx.doi.org/10.1609/aaai.v36i1.19928.

Abstract:
Training effective Generative Adversarial Networks (GANs) requires large amounts of training data, without which the trained models are usually sub-optimal with discriminator over-fitting. Several prior studies address this issue by expanding the distribution of the limited training data via massive and hand-crafted data augmentation. We handle data-limited image generation from a very different perspective. Specifically, we design GenCo, a Generative Co-training network that mitigates the discriminator over-fitting issue by introducing multiple complementary discriminators that provide diverse supervision from multiple distinctive views in training. We instantiate the idea of GenCo in two ways. The first way is Weight-Discrepancy Co-training (WeCo) which co-trains multiple distinctive discriminators by diversifying their parameters. The second way is Data-Discrepancy Co-training (DaCo) which achieves co-training by feeding discriminators with different views of the input images. Extensive experiments over multiple benchmarks show that GenCo achieves superior generation with limited training data. In addition, GenCo also complements the augmentation approach with consistent and clear performance gains when combined.
9

Kim, June-Woo, and Ho-Young Jung. "End-to-end speech recognition models using limited training data." Phonetics and Speech Sciences 12, no. 4 (December 2020): 63–71. http://dx.doi.org/10.13064/ksss.2020.12.4.063.

10

Tambouratzis, George, and Marina Vassiliou. "Swarm Algorithms for NLP - The Case of Limited Training Data." Journal of Artificial Intelligence and Soft Computing Research 9, no. 3 (July 1, 2019): 219–34. http://dx.doi.org/10.2478/jaiscr-2019-0005.

Abstract:
The present article describes a novel phrasing model which can be used for segmenting sentences of unconstrained text into syntactically-defined phrases. This model is based on the notion of attraction and repulsion forces between adjacent words. Each of these forces is weighed appropriately by system parameters, the values of which are optimised via particle swarm optimisation. This approach is designed to be language-independent and is tested here for different languages. The phrasing model’s performance is assessed per se, by calculating the segmentation accuracy against a golden segmentation. Operational testing also involves integrating the model to a phrase-based Machine Translation (MT) system and measuring the translation quality when the phrasing model is used to segment input text into phrases. Experiments show that the performance of this approach is comparable to other leading segmentation methods and that it exceeds that of baseline systems.
11

Liu, Weijian, Zhaojian Zhang, Jun Liu, Zheran Shang, and Yong-Liang Wang. "Detection of a rank-one signal with limited training data." Signal Processing 186 (September 2021): 108120. http://dx.doi.org/10.1016/j.sigpro.2021.108120.

12

Park, Ji-Hoon, Seung-Mo Seo, and Ji-Hee Yoo. "SAR ATR for Limited Training Data Using DS-AE Network." Sensors 21, no. 13 (July 1, 2021): 4538. http://dx.doi.org/10.3390/s21134538.

Abstract:
Although automatic target recognition (ATR) with synthetic aperture radar (SAR) images has been one of the most important research topics, there is an inherent problem of performance degradation when the number of labeled SAR target images for training a classifier is limited. To address this problem, this article proposes a double squeeze-adaptive excitation (DS-AE) network where new channel attention modules are inserted into the convolutional neural network (CNN) with a modified ResNet18 architecture. Based on the squeeze-excitation (SE) network that employs a representative channel attention mechanism, the squeeze operation of the DS-AE network is carried out by additional fully connected layers to prevent drastic loss in the original channel information. Then, the subsequent excitation operation is performed by a new activation function, called the parametric sigmoid, to improve the adaptivity of selective emphasis of the useful channel information. Using the public SAR target dataset, the recognition rates from different network structures are compared by reducing the number of training images. The analysis results and performance comparison demonstrate that the DS-AE network showed much more improved SAR target recognition performances for small training datasets in relation to the CNN without channel attention modules and with the conventional SE channel attention modules.
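The "parametric sigmoid" used for the excitation step can be sketched as a sigmoid with a tunable slope. The paper's exact parameterisation may differ, so treat `alpha` below as a placeholder for the learnable parameter:

```python
import numpy as np

def parametric_sigmoid(x, alpha=1.0):
    """Sigmoid with an adjustable slope alpha. In the DS-AE network the
    parameter would be learned; here it is just a function argument."""
    return 1.0 / (1.0 + np.exp(-alpha * np.asarray(x, dtype=float)))

# larger alpha -> sharper gating of channel attention weights
soft = parametric_sigmoid(np.array([-2.0, 0.0, 2.0]), alpha=1.0)
sharp = parametric_sigmoid(np.array([-2.0, 0.0, 2.0]), alpha=5.0)
```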
14

Ghorbandoost, Mostafa, Abolghasem Sayadiyan, Mohsen Ahangar, Hamid Sheikhzadeh, Abdoreza Sabzi Shahrebabaki, and Jamal Amini. "Voice conversion based on feature combination with limited training data." Speech Communication 67 (March 2015): 113–28. http://dx.doi.org/10.1016/j.specom.2014.12.004.

15

YAMASHITA, Masaru. "Acoustic HMMs to Detect Abnormal Respiration with Limited Training Data." IEICE Transactions on Information and Systems E106.D, no. 3 (March 1, 2023): 374–80. http://dx.doi.org/10.1587/transinf.2022edp7068.

16

Wang, Jingjing, Zheng Liu, Rong Xie, and Lei Ran. "Radar HRRP Target Recognition Based on Dynamic Learning with Limited Training Data." Remote Sensing 13, no. 4 (February 18, 2021): 750. http://dx.doi.org/10.3390/rs13040750.

Abstract:
For high-resolution range profile (HRRP)-based radar automatic target recognition (RATR), adequate training data are required to characterize a target signature effectively and get good recognition performance. However, collecting enough training data involving HRRP samples from each target orientation is hard. To tackle the HRRP-based RATR task with limited training data, a novel dynamic learning strategy is proposed based on the single-hidden layer feedforward network (SLFN) with an assistant classifier. In the offline training phase, the training data are used for pretraining the SLFN using a reduced kernel extreme learning machine (RKELM). In the online classification phase, the collected test data are first labeled by fusing the recognition results of the current SLFN and assistant classifier. Then the test samples with reliable pseudolabels are used as additional training data to update the parameters of SLFN with the online sequential RKELM (OS-RKELM). Moreover, to improve the accuracy of label estimation for test data, a novel semi-supervised learning method named constraint propagation-based label propagation (CPLP) was developed as an assistant classifier. The proposed method dynamically accumulates knowledge from training and test data through online learning, thereby reinforcing performance of the RATR system with limited training data. Experiments conducted on the simulated HRRP data from 10 civilian vehicles and real HRRP data from three military vehicles demonstrated the effectiveness of the proposed method when the training data are limited.
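The online phase described above, labelling test samples and feeding the reliable ones back as extra training data, can be sketched with a toy nearest-centroid classifier standing in for the SLFN/RKELM machinery. Everything here (the margin-based confidence proxy, the data) is invented for illustration:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict_with_margin(X, classes, centroids):
    """Predict labels; use the gap between the two closest centroids
    as a crude confidence proxy for pseudo-label reliability."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.sort(d, axis=1)
    return classes[np.argmin(d, axis=1)], order[:, 1] - order[:, 0]

rng = np.random.default_rng(1)
X_tr = np.vstack([rng.normal(0, .5, (5, 2)), rng.normal(4, .5, (5, 2))])
y_tr = np.array([0] * 5 + [1] * 5)
X_te = np.vstack([rng.normal(0, .5, (20, 2)), rng.normal(4, .5, (20, 2))])

# one self-training round: keep only confidently pseudo-labelled samples
classes, cent = nearest_centroid_fit(X_tr, y_tr)
pseudo, margin = predict_with_margin(X_te, classes, cent)
keep = margin > np.median(margin)
X_new = np.vstack([X_tr, X_te[keep]])
y_new = np.concatenate([y_tr, pseudo[keep]])
classes, cent = nearest_centroid_fit(X_new, y_new)   # updated model
```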
17

Xu, Ning, Yibing Tang, Jingyi Bao, Aiming Jiang, Xiaofeng Liu, and Zhen Yang. "Voice conversion based on Gaussian processes by coherent and asymmetric training with limited training data." Speech Communication 58 (March 2014): 124–38. http://dx.doi.org/10.1016/j.specom.2013.11.005.

18

Wang, S. L., A. W. C. Liew, W. H. Lau, and S. H. Leung. "An Automatic Lipreading System for Spoken Digits With Limited Training Data." IEEE Transactions on Circuits and Systems for Video Technology 18, no. 12 (December 2008): 1760–65. http://dx.doi.org/10.1109/tcsvt.2008.2004924.

19

Creswell, Antonia, Alison Pouplin, and Anil A. Bharath. "Denoising adversarial autoencoders: classifying skin lesions using limited labelled training data." IET Computer Vision 12, no. 8 (September 12, 2018): 1105–11. http://dx.doi.org/10.1049/iet-cvi.2018.5243.

20

Aghamaleki, Javad Abbasi, and Vahid Ashkani Chenarlogh. "Multi-stream CNN for facial expression recognition in limited training data." Multimedia Tools and Applications 78, no. 16 (April 25, 2019): 22861–82. http://dx.doi.org/10.1007/s11042-019-7530-7.

21

Crowson, Merry, Ron Hagensieker, and Björn Waske. "Mapping land cover change in northern Brazil with limited training data." International Journal of Applied Earth Observation and Geoinformation 78 (June 2019): 202–14. http://dx.doi.org/10.1016/j.jag.2018.10.004.

22

Jannati, Mohammad Javad, and Abolghasem Sayadiyan. "Part-Syllable Transformation-Based Voice Conversion with Very Limited Training Data." Circuits, Systems, and Signal Processing 37, no. 5 (August 30, 2017): 1935–57. http://dx.doi.org/10.1007/s00034-017-0639-x.

23

Demir, Begum, Francesca Bovolo, and Lorenzo Bruzzone. "Classification of Time Series of Multispectral Images With Limited Training Data." IEEE Transactions on Image Processing 22, no. 8 (August 2013): 3219–33. http://dx.doi.org/10.1109/tip.2013.2259838.

24

Lang, Yue, Qing Wang, Yang Yang, Chunping Hou, Yuan He, and Jinchen Xu. "Person identification with limited training data using radar micro‐Doppler signatures." Microwave and Optical Technology Letters 62, no. 3 (November 2019): 1060–68. http://dx.doi.org/10.1002/mop.32125.

25

Khezri, Shirin, Jafar Tanha, Ali Ahmadi, and Arash Sharifi. "STDS: self-training data streams for mining limited labeled data in non-stationary environment." Applied Intelligence 50, no. 5 (January 21, 2020): 1448–67. http://dx.doi.org/10.1007/s10489-019-01585-3.

26

Tang, Yehui, Shan You, Chang Xu, Jin Han, Chen Qian, Boxin Shi, Chao Xu, and Changshui Zhang. "Reborn Filters: Pruning Convolutional Neural Networks with Limited Data." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5972–80. http://dx.doi.org/10.1609/aaai.v34i04.6058.

Abstract:
Channel pruning is effective in compressing the pretrained CNNs for their deployment on low-end edge devices. Most existing methods independently prune some of the original channels and need the complete original dataset to fix the performance drop after pruning. However, due to commercial protection or data privacy, users may only have access to a tiny portion of training examples, which could be insufficient for the performance recovery. In this paper, for pruning with limited data, we propose to use all original filters to directly develop new compact filters, named reborn filters, so that all useful structure priors in the original filters can be well preserved into the pruned networks, alleviating the performance drop accordingly. During training, reborn filters can be easily implemented via 1×1 convolutional layers and then be fused in the inference stage for acceleration. Based on reborn filters, the proposed channel pruning algorithm shows its effectiveness and superiority on extensive experiments.
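The fusion step mentioned in the abstract relies on convolution being linear: a 1×1 layer applied after a convolution can be folded into the convolution's weights at inference. A numpy check of that identity, with a naive valid-mode convolution and invented shapes:

```python
import numpy as np

def conv2d(x, W):
    """Naive valid-mode convolution. x: (C_in, H, W); W: (C_out, C_in, k, k)."""
    C_out, C_in, k, _ = W.shape
    H, Wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((C_out, H, Wd))
    for i in range(H):
        for j in range(Wd):
            # contract channel and both kernel axes at this position
            out[:, i, j] = np.tensordot(W, x[:, i:i+k, j:j+k], axes=3)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8, 8))
W = rng.normal(size=(4, 3, 3, 3))   # original filters
A = rng.normal(size=(2, 4))         # 1x1 channel-mixing layer ("reborn" step)

two_step = np.einsum('oc,chw->ohw', A, conv2d(x, W))   # conv then 1x1 mix
W_fused = np.einsum('oc,cikl->oikl', A, W)             # fold 1x1 into weights
one_step = conv2d(x, W_fused)                          # single fused conv
```

Because the two paths agree, the extra 1×1 layer used during training costs nothing at inference time.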
27

Duong, Huu-Thanh, Tram-Anh Nguyen-Thi, and Vinh Truong Hoang. "Vietnamese Sentiment Analysis under Limited Training Data Based on Deep Neural Networks." Complexity 2022 (June 30, 2022): 1–14. http://dx.doi.org/10.1155/2022/3188449.

Abstract:
The annotated dataset is an essential requirement to develop an artificial intelligence (AI) system effectively and expect the generalization of the predictive models and to avoid overfitting. Lack of the training data is a big barrier so that AI systems can broaden in several domains which have no or missing training data. Building these datasets is a tedious and expensive task and depends on the domains and languages. This is especially a big challenge for low-resource languages. In this paper, we experiment and evaluate many various approaches on sentiment analysis problems so that they can still obtain high performances under limited training data. This paper uses the preprocessing techniques to clean and normalize the data and generate the new samples from the limited training dataset based on many text augmentation techniques such as lexicon substitution, sentence shuffling, back translation, syntax-tree transformation, and embedding mixup. Several experiments have been performed for both well-known machine learning-based classifiers and deep learning models. We compare, analyze, and evaluate the results to indicate the advantage and disadvantage points of the techniques for each approach. The experimental results show that the data augmentation techniques enhance the accuracy of the predictive models; this promises that smart systems can be applied widely in several domains under limited training data.
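Two of the augmentation techniques listed, lexicon substitution and sentence shuffling, are simple enough to sketch directly. The lexicon below is a made-up toy, not the paper's resource:

```python
import random

def lexicon_substitute(tokens, lexicon, rng):
    """Replace any token that has a synonym entry with a random synonym."""
    return [rng.choice(lexicon[t]) if t in lexicon else t for t in tokens]

def sentence_shuffle(sentences, rng):
    """Return the sentences of a document in a random order."""
    out = sentences[:]
    rng.shuffle(out)
    return out

rng = random.Random(0)
lexicon = {"good": ["great", "fine"], "bad": ["poor", "awful"]}
sample = "the food was good".split()
augmented = [lexicon_substitute(sample, lexicon, rng) for _ in range(3)]
shuffled = sentence_shuffle(["s1", "s2", "s3"], rng)
```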
28

Jackson, Q., and D. A. Landgrebe. "An adaptive classifier design for high-dimensional data analysis with a limited training data set." IEEE Transactions on Geoscience and Remote Sensing 39, no. 12 (2001): 2664–79. http://dx.doi.org/10.1109/36.975001.

29

Chen, Shangyu, Wenya Wang, and Sinno Jialin Pan. "Deep Neural Network Quantization via Layer-Wise Optimization Using Limited Training Data." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3329–36. http://dx.doi.org/10.1609/aaai.v33i01.33013329.

Abstract:
The advancement of deep models poses great challenges to real-world deployment because of the limited computational ability and storage space on edge devices. To solve this problem, existing works have made progress to prune or quantize deep models. However, most existing methods rely heavily on a supervised training process to achieve satisfactory performance, acquiring large amount of labeled training data, which may not be practical for real deployment. In this paper, we propose a novel layer-wise quantization method for deep neural networks, which only requires limited training data (1% of original dataset). Specifically, we formulate parameters quantization for each layer as a discrete optimization problem, and solve it using Alternative Direction Method of Multipliers (ADMM), which gives an efficient closed-form solution. We prove that the final performance drop after quantization is bounded by a linear combination of the reconstructed errors caused at each layer. Based on the proved theorem, we propose an algorithm to quantize a deep neural network layer by layer with an additional weights update step to minimize the final error. Extensive experiments on benchmark deep models are conducted to demonstrate the effectiveness of our proposed method using 1% of CIFAR10 and ImageNet datasets. Codes are available in: https://github.com/csyhhu/L-DNQ
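The layer-wise objective, choosing discrete weights that minimise per-layer reconstruction error, can be illustrated with a brute-force scale search over ternary levels. The paper solves this kind of discrete problem with ADMM in closed form; the grid search below is only a stand-in:

```python
import numpy as np

def quantize_ternary(W, n_scales=100):
    """Pick the scale a minimising ||W - a*q||_F with q in {-1, 0, 1}.
    A brute-force toy stand-in for the paper's layer-wise ADMM solution."""
    best_err, best_scale, best_Wq = np.inf, None, None
    for a in np.linspace(0.01, np.abs(W).max(), n_scales):
        q = np.clip(np.round(W / a), -1, 1)      # nearest ternary level
        err = np.linalg.norm(W - a * q)          # per-layer reconstruction error
        if err < best_err:
            best_err, best_scale, best_Wq = err, a, a * q
    return best_scale, best_Wq

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(16, 16))   # one layer's weights
scale, W_q = quantize_ternary(W)
```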
30

Mamyrbayev, O. Zh, M. Othman, A. T. Akhmediyarova, A. S. Kydyrbekova, and N. O. Mekebayev. "VOICE VERIFICATION USING I-VECTORS AND NEURAL NETWORKS WITH LIMITED TRAINING DATA." BULLETIN 3, no. 379 (June 15, 2019): 36–43. http://dx.doi.org/10.32014/2019.2518-1467.66.

31

Ziv, J. "An efficient universal prediction algorithm for unknown sources with limited training data." IEEE Transactions on Information Theory 48, no. 6 (June 2002): 1690–93. http://dx.doi.org/10.1109/tit.2002.1003847.

32

Oh, Yujin, Sangjoon Park, and Jong Chul Ye. "Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets." IEEE Transactions on Medical Imaging 39, no. 8 (August 2020): 2688–700. http://dx.doi.org/10.1109/tmi.2020.2993291.

33

Hou, Yuchao, Ting Xu, Hongping Hu, Peng Wang, Hongxin Xue, and Yanping Bai. "MdpCaps-Csl for SAR Image Target Recognition With Limited Labeled Training Data." IEEE Access 8 (2020): 176217–31. http://dx.doi.org/10.1109/access.2020.3026469.

34

Ge, Zhiqiang, Zhihuan Song, and Furong Gao. "Self-Training Statistical Quality Prediction of Batch Processes with Limited Quality Data." Industrial & Engineering Chemistry Research 52, no. 2 (December 28, 2012): 979–84. http://dx.doi.org/10.1021/ie300616s.

35

Kaewtip, Kantapon, Abeer Alwan, and Charles Taylor. "Robust Hidden Markov Models for limited training data for birdsong phrase classification." Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3725–26. http://dx.doi.org/10.1121/1.4988171.

36

Krishnagopal, Sanjukta, Yiannis Aloimonos, and Michelle Girvan. "Similarity Learning and Generalization with Limited Data: A Reservoir Computing Approach." Complexity 2018 (November 1, 2018): 1–15. http://dx.doi.org/10.1155/2018/6953836.

Abstract:
We investigate the ways in which a machine learning architecture known as Reservoir Computing learns concepts such as “similar” and “different” and other relationships between image pairs and generalizes these concepts to previously unseen classes of data. We present two Reservoir Computing architectures, which loosely resemble neural dynamics, and show that a Reservoir Computer (RC) trained to identify relationships between image pairs drawn from a subset of training classes generalizes the learned relationships to substantially different classes unseen during training. We demonstrate our results on the simple MNIST handwritten digit database as well as a database of depth maps of visual scenes in videos taken from a moving camera. We consider image pair relationships such as images from the same class; images from the same class with one image superposed with noise, rotated 90°, blurred, or scaled; images from different classes. We observe that the reservoir acts as a nonlinear filter projecting the input into a higher dimensional space in which the relationships are separable; i.e., the reservoir system state trajectories display different dynamical patterns that reflect the corresponding input pair relationships. Thus, as opposed to training in the entire high-dimensional reservoir space, the RC only needs to learn characteristic features of these dynamical patterns, allowing it to perform well with very few training examples compared with conventional machine learning feed-forward techniques such as deep learning. In generalization tasks, we observe that RCs perform significantly better than state-of-the-art, feed-forward, pair-based architectures such as convolutional and deep Siamese Neural Networks (SNNs). We also show that RCs can not only generalize relationships, but also generalize combinations of relationships, providing robust and effective image pair classification. Our work helps bridge the gap between explainable machine learning with small datasets and biologically inspired analogy-based learning, pointing to new directions in the investigation of learning processes.
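A minimal echo state network in the spirit of the reservoir architectures described, a fixed random recurrent layer with only the linear readout trained (by ridge regression), might look like this. The sizes and the one-step sine-prediction task are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def reservoir_states(u):
    """Drive the fixed random reservoir with input sequence u; only these
    states are seen by the trained readout."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x)
    return np.array(states)

# readout trained by ridge regression to predict sin one step ahead
t = np.linspace(0, 8 * np.pi, 400)
u, target = np.sin(t[:-1]), np.sin(t[1:])
S = reservoir_states(u)
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ target)
pred = S @ W_out
```

Only `W_out` is learned; the reservoir itself stays random, which is why so few training examples suffice.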
37

Ziel, Florian. "Load Nowcasting: Predicting Actuals with Limited Data." Energies 13, no. 6 (March 20, 2020): 1443. http://dx.doi.org/10.3390/en13061443.

Abstract:
We introduce the problem of load nowcasting to the energy forecasting literature. The recent load of the objective area is predicted based on limited available metering data within this area. Thus, slightly different from load forecasting, we are predicting the recent past using limited available metering data from the supply side of the system. Next to an industry benchmark model, we introduce multiple high-dimensional models for providing more accurate predictions. They evaluate metered interconnector and generation unit data of different types like wind and solar power, storages, and nuclear and fossil power plants. Additionally, we augment the model by seasonal and autoregressive components to improve the nowcasting performance. We consider multiple estimation techniques based on the lasso and ridge and study the impact of the choice of the training/calibration period. The methodology is applied to a European TSO dataset from 2014 to 2019. The overall results show that in comparison to the industry benchmark, an accuracy improvement in terms of MAE and RMSE of about 60% is achieved. The best model is based on the ridge estimator and uses a specific non-standard shrinkage target. Due to the linear model structure, we can easily interpret the model output.
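Ridge regression with a non-zero shrinkage target, as used by the best model here, has a simple closed form: minimising ||y - Xb||^2 + lam*||b - beta0||^2 gives b = (X'X + lam*I)^(-1) (X'y + lam*beta0). A sketch with invented data and target values:

```python
import numpy as np

def ridge_with_target(X, y, lam, beta0):
    """Closed-form ridge that shrinks towards beta0 instead of towards 0."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p),
                           X.T @ y + lam * beta0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=200)

b_small = ridge_with_target(X, y, 0.1, np.zeros(3))        # ~ OLS
b_huge = ridge_with_target(X, y, 1e8, np.full(3, 9.0))     # ~ the target
```

As `lam` grows, the estimate collapses onto the chosen shrinkage target rather than onto zero, which is what a non-standard target buys.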
38

Senchenkov, Valentin, Damir Absalyamov, and Dmitriy Avsyukevich. "Diagnostics of life support systems with limited statistical data on failures." E3S Web of Conferences 140 (2019): 05002. http://dx.doi.org/10.1051/e3sconf/201914005002.

Abstract:
The authors suggest an approach to determine the technical conditions of life support systems of public buildings in conditions of significant uncertainty of statistical information on failures. To improve the reliability and increase the resources of life support systems, maintenance and repair strategies are proposed according to the actual state, which implies the availability of objective diagnostic information. The essence of methods for constructing images of system failures based on training procedures is revealed, the latter being founded on the theory of nonparametric statistical analysis. The image is understood as a formalized description of the failure as an element of the system diagnosis model. The solution of image synthesis problem is given when the orthogonal trigonometric basis is applied in the recurrent relations implementing the learning process. The specific case assumes the existence of data on ranges of diagnostic parameter change at all failures of the investigated object. A modification of the training procedure is performed to build images of failures of life support systems of the latest generation when it is possible to find the ranges of changes in diagnostic parameters only in operational state. The modification consists of the formation and application of an orthonormal binary basis in recurrent relations. There is an example of image constructing of one of the ventilation and air conditioning system failures of a public building on the basis of a modified training procedure.
39

Bardis, Michelle, Roozbeh Houshyar, Chanon Chantaduly, Alexander Ushinsky, Justin Glavis-Bloom, Madeleine Shaver, Daniel Chow, Edward Uchio, and Peter Chang. "Deep Learning with Limited Data: Organ Segmentation Performance by U-Net." Electronics 9, no. 8 (July 26, 2020): 1199. http://dx.doi.org/10.3390/electronics9081199.

Abstract:
(1) Background: The effectiveness of deep learning artificial intelligence depends on data availability, often requiring large volumes of data to effectively train an algorithm. However, few studies have explored the minimum number of images needed for optimal algorithmic performance. (2) Methods: This institutional review board (IRB)-approved retrospective review included patients who received prostate magnetic resonance imaging (MRI) between September 2014 and August 2018 and a magnetic resonance imaging (MRI) fusion transrectal biopsy. T2-weighted images were manually segmented by a board-certified abdominal radiologist. Segmented images were trained on a deep learning network with the following case numbers: 8, 16, 24, 32, 40, 80, 120, 160, 200, 240, 280, and 320. (3) Results: Our deep learning network’s performance was assessed with a Dice score, which measures overlap between the radiologist’s segmentations and deep learning-generated segmentations and ranges from 0 (no overlap) to 1 (perfect overlap). Our algorithm’s Dice score started at 0.424 with 8 cases and improved to 0.858 with 160 cases. After 160 cases, the Dice increased to 0.867 with 320 cases. (4) Conclusions: Our deep learning network for prostate segmentation produced the highest overall Dice score with 320 training cases. Performance improved notably from training sizes of 8 to 120, then plateaued with minimal improvement at training case size above 160. Other studies utilizing comparable network architectures may have similar plateaus, suggesting suitable results may be obtainable with small datasets.
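The Dice score used to assess the network is straightforward to compute for binary masks:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; 1.0 = perfect overlap,
    0.0 = no overlap. Two empty masks count as a perfect match."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

a = np.zeros((4, 4), int); a[1:3, 1:3] = 1   # 4-pixel square
b = np.zeros((4, 4), int); b[1:3, 1:4] = 1   # wider region, 6 pixels
```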
40

He, Qiuchen, Shaobo Li, Chuanjiang Li, Junxing Zhang, Ansi Zhang, and Peng Zhou. "A Hybrid Matching Network for Fault Diagnosis under Different Working Conditions with Limited Data." Computational Intelligence and Neuroscience 2022 (July 1, 2022): 1–14. http://dx.doi.org/10.1155/2022/3024590.

Abstract:
Intelligent fault diagnosis methods based on deep learning have achieved much progress in recent years. However, there are two major factors causing serious degradation of the performance of these algorithms in real industrial applications, i.e., limited labeled training data and complex working conditions. To solve these problems, this study proposed a domain generalization-based hybrid matching network utilizing a matching network to diagnose the faults using features encoded by an autoencoder. The main idea was to regularize the feature extractor of the network with an autoencoder in order to reduce the risk of overfitting with limited training samples. In addition, a training strategy using dropout with random changing rates on inputs was implemented to enhance the model’s generalization on unseen domains. The proposed method was validated on two different datasets containing artificial and real faults. The results showed that considerable performance was achieved by the proposed method under cross-domain tasks with limited training samples.
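The "dropout with random changing rates on inputs" strategy can be sketched in a few lines; `max_rate` below is a hypothetical cap, not a value from the paper:

```python
import numpy as np

def input_dropout_random_rate(X, rng, max_rate=0.5):
    """Zero out input entries with a dropout rate drawn fresh on each call,
    so the model never sees the same corruption level twice."""
    rate = rng.uniform(0.0, max_rate)
    mask = rng.random(X.shape) >= rate
    return X * mask, rate

rng = np.random.default_rng(0)
X = np.ones((8, 32))                       # a batch of dummy inputs
X_drop, rate = input_dropout_random_rate(X, rng)
```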
41

Vidal, Joel, Guillem Vallicrosa, Robert Martí, and Marc Barnada. "Brickognize: Applying Photo-Realistic Image Synthesis for Lego Bricks Recognition with Limited Data." Sensors 23, no. 4 (February 8, 2023): 1898. http://dx.doi.org/10.3390/s23041898.

Abstract:
During the last few years, supervised deep convolutional neural networks have become the state-of-the-art for image recognition tasks. Nevertheless, their performance is severely linked to the amount and quality of the training data. Acquiring and labeling data is a major challenge that limits their expansion to new applications, especially with limited data. Recognition of Lego bricks is a clear example of a real-world deep learning application that has been limited by the difficulties associated with data gathering and training. In this work, photo-realistic image synthesis and few-shot fine-tuning are proposed to overcome limited data in the context of Lego bricks recognition. Using synthetic images and a limited set of 20 real-world images from a controlled environment, the proposed system is evaluated on controlled and uncontrolled real-world testing datasets. Results show the good performance of the synthetically generated data and how limited data from a controlled domain can be successfully used for the few-shot fine-tuning of the synthetic training without a perceptible narrowing of its domain. Obtained results reach an AP50 value of 91.33% for uncontrolled scenarios and 98.7% for controlled ones.
APA, Harvard, Vancouver, ISO, and other styles
42

Park, Sangyong, Jaeseon Kim, and Yong Seok Heo. "Semantic Segmentation Using Pixel-Wise Adaptive Label Smoothing via Self-Knowledge Distillation for Limited Labeling Data." Sensors 22, no. 7 (March 29, 2022): 2623. http://dx.doi.org/10.3390/s22072623.

Full text
Abstract:
To achieve high performance, most deep convolutional neural networks (DCNNs) require a significant amount of training data with ground truth labels. However, creating ground-truth labels for semantic segmentation requires more time, human effort, and cost compared with other tasks such as classification and object detection, because the ground-truth label of every pixel in an image is required. Hence, it is practically demanding to train DCNNs using a limited amount of training data for semantic segmentation. Generally, training DCNNs using a limited amount of data is problematic as it easily results in a decrease in the accuracy of the networks because of overfitting to the training data. Here, we propose a new regularization method called pixel-wise adaptive label smoothing (PALS) via self-knowledge distillation to stably train semantic segmentation networks in a practical situation, in which only a limited amount of training data is available. To mitigate the problem caused by limited training data, our method fully utilizes the internal statistics of pixels within an input image. Consequently, the proposed method generates a pixel-wise aggregated probability distribution using a similarity matrix that encodes the affinities between all pairs of pixels. To further increase the accuracy, we add one-hot encoded distributions with ground-truth labels to these aggregated distributions, and obtain our final soft labels. We demonstrate the effectiveness of our method for the Cityscapes dataset and the Pascal VOC2012 dataset using limited amounts of training data, such as 10%, 30%, 50%, and 100%. Based on various quantitative and qualitative comparisons, our method demonstrates more accurate results compared with previous methods. 
Specifically, for the Cityscapes test set, our method achieved mIoU improvements of 0.076%, 1.848%, 1.137%, and 1.063% for 10%, 30%, 50%, and 100% training data, respectively, compared with the method of the cross-entropy loss using one-hot encoding with ground truth labels.
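The label-construction step described in this abstract can be sketched compactly. The function below is a simplified, assumed-form illustration of pixel-wise adaptive label smoothing, not the authors' code: an affinity matrix over pixel features aggregates the per-pixel predicted distributions, and the aggregate is mixed with the one-hot ground truth to form the final soft labels.

```python
import numpy as np

def pals_soft_labels(probs, feats, onehot, alpha=0.5):
    """Illustrative PALS-style soft labels (hypothetical helper).
    probs:  (N, C) per-pixel predicted class distributions
    feats:  (N, D) per-pixel features used to measure affinity
    onehot: (N, C) one-hot ground-truth labels
    """
    # cosine-similarity affinities between all pixel pairs
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = np.exp(f @ f.T)
    sim /= sim.sum(axis=1, keepdims=True)    # row-normalise into weights
    aggregated = sim @ probs                 # similarity-weighted aggregation
    soft = (1 - alpha) * aggregated + alpha * onehot
    return soft / soft.sum(axis=1, keepdims=True)
```

The mixing weight `alpha` here is an assumed fixed scalar; in the paper the combination of aggregated and ground-truth distributions is what makes the smoothing adaptive per pixel.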
APA, Harvard, Vancouver, ISO, and other styles
43

Gimeno, Pablo, Victoria Mingote, Alfonso Ortega, Antonio Miguel, and Eduardo Lleida. "Generalizing AUC Optimization to Multiclass Classification for Audio Segmentation With Limited Training Data." IEEE Signal Processing Letters 28 (2021): 1135–39. http://dx.doi.org/10.1109/lsp.2021.3084501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Jafaryani, Mohamadreza, Hamid Sheikhzadeh, and Vahid Pourahmadi. "Parallel voice conversion with limited training data using stochastic variational deep kernel learning." Engineering Applications of Artificial Intelligence 115 (October 2022): 105279. http://dx.doi.org/10.1016/j.engappai.2022.105279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Hai, Wenyu Song, Weijian Liu, and Renbiao Wu. "Moving target detection with limited training data based on the subspace orthogonal projection." IET Radar, Sonar & Navigation 12, no. 7 (July 2018): 679–84. http://dx.doi.org/10.1049/iet-rsn.2017.0449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Mengmeng, Wei Li, Ran Tao, and Song Wang. "Transfer Learning for Optical and SAR Data Correspondence Identification With Limited Training Labels." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14 (2021): 1545–57. http://dx.doi.org/10.1109/jstars.2020.3044643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Davari, Amirabbas, Erchan Aptoula, Berrin Yanikoglu, Andreas Maier, and Christian Riess. "GMM-Based Synthetic Samples for Classification of Hyperspectral Images With Limited Training Data." IEEE Geoscience and Remote Sensing Letters 15, no. 6 (June 2018): 942–46. http://dx.doi.org/10.1109/lgrs.2018.2817361.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Sun, Yuanshuang, Yinghua Wang, Hongwei Liu, Ning Wang, and Jian Wang. "SAR Target Recognition With Limited Training Data Based on Angular Rotation Generative Network." IEEE Geoscience and Remote Sensing Letters 17, no. 11 (November 2020): 1928–32. http://dx.doi.org/10.1109/lgrs.2019.2958379.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zeng, Dan, Luuk Spreeuwers, Raymond Veldhuis, and Qijun Zhao. "Combined training strategy for low-resolution face recognition with limited application-specific data." IET Image Processing 13, no. 10 (August 22, 2019): 1790–96. http://dx.doi.org/10.1049/iet-ipr.2018.5732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Shin, Hyunkyung, Hyeonung Shin, Wonje Choi, Jaesung Park, Minjae Park, Euiyul Koh, and Honguk Woo. "Sample-Efficient Deep Learning Techniques for Burn Severity Assessment with Limited Data Conditions." Applied Sciences 12, no. 14 (July 21, 2022): 7317. http://dx.doi.org/10.3390/app12147317.

Full text
Abstract:
The automatic analysis of medical data and images to help diagnosis has recently become a major area in the application of deep learning. In general, deep learning techniques can be effective when a large high-quality dataset is available for model training. Thus, there is a need for sample-efficient learning techniques, particularly in the field of medical image analysis, as significant cost and effort are required to obtain a sufficient number of well-annotated high-quality training samples. In this paper, we address the problem of deep neural network training under sample deficiency by investigating several sample-efficient deep learning techniques. We concentrate on applying these techniques to skin burn image analysis and classification. We first build a large-scale, professionally annotated dataset of skin burn images, which enables the establishment of convolutional neural network (CNN) models for burn severity assessment with high accuracy. We then deliberately set data limitation conditions and adapt several sample-efficient techniques, such as transferable learning (TL), self-supervised learning (SSL), federated learning (FL), and generative adversarial network (GAN)-based data augmentation, to those conditions. Through comprehensive experimentation, we evaluate the sample-efficient deep learning techniques for burn severity assessment, and show, in particular, that SSL models learned on a small task-specific dataset can achieve comparable accuracy to a baseline model learned on a six-times larger dataset. We also demonstrate the applicability of FL and GANs to model training under different data limitation conditions that commonly occur in the area of healthcare and medicine where deep learning models are adopted.
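Of the sample-efficient techniques surveyed in this abstract, generative data augmentation is the simplest to sketch. The snippet below is a stand-in illustration, not the paper's method: a per-class Gaussian plays the role of a trained GAN generator, producing extra samples that are mixed into a small real training set.

```python
import numpy as np

def augment_with_synthetic(X, y, per_class=50, rng=None):
    """Illustrative generative augmentation for limited data: sample
    extra points from a per-class Gaussian fitted to the real data
    (a hypothetical stand-in for a GAN generator)."""
    rng = rng or np.random.default_rng(0)
    X_aug, y_aug = [X], [y]
    for c in np.unique(y):
        Xc = X[y == c]
        mu, sd = Xc.mean(axis=0), Xc.std(axis=0) + 1e-6
        fake = rng.normal(mu, sd, size=(per_class, X.shape[1]))
        X_aug.append(fake)
        y_aug.append(np.full(per_class, c))
    return np.vstack(X_aug), np.concatenate(y_aug)
```

In the burn-severity setting of the paper the generator would be a GAN trained on images; the point of the sketch is only the workflow of mixing synthetic and real samples under data-limitation conditions.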
APA, Harvard, Vancouver, ISO, and other styles