Journal articles on the topic "Learning with Limited Data"

Below are the 50 most relevant journal articles on the topic "Learning with Limited Data".


1

Oh, Se Eun, Nate Mathews, Mohammad Saidur Rahman, Matthew Wright, and Nicholas Hopper. "GANDaLF: GAN for Data-Limited Fingerprinting." Proceedings on Privacy Enhancing Technologies 2021, no. 2 (January 29, 2021): 305–22. http://dx.doi.org/10.2478/popets-2021-0029.

Abstract:
We introduce Generative Adversarial Networks for Data-Limited Fingerprinting (GANDaLF), a new deep-learning-based technique to perform Website Fingerprinting (WF) on Tor traffic. In contrast to most earlier work on deep-learning for WF, GANDaLF is intended to work with few training samples, and achieves this goal through the use of a Generative Adversarial Network to generate a large set of “fake” data that helps to train a deep neural network in distinguishing between classes of actual training data. We evaluate GANDaLF in low-data scenarios including as few as 10 training instances per site, and in multiple settings, including fingerprinting of website index pages and fingerprinting of non-index pages within a site. GANDaLF achieves closed-world accuracy of 87% with just 20 instances per site (and 100 sites) in standard WF settings. In particular, GANDaLF can outperform Var-CNN and Triplet Fingerprinting (TF) across all settings in subpage fingerprinting. For example, GANDaLF outperforms TF by a 29% margin and Var-CNN by 38% for training sets using 20 instances per site.
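
As an editorial illustration (not the authors' code), the core K+1-class semi-supervised GAN idea described above can be sketched as follows: a generator produces synthetic traces, and the discriminator/classifier is trained to assign real traces to one of the K site classes and generated traces to an extra "fake" class. All sizes, architectures, and the simplified generator loss below are assumptions.

    import torch
    import torch.nn as nn

    K, TRACE_LEN, NOISE_DIM = 100, 5000, 128   # illustrative sizes, not the paper's

    G = nn.Sequential(nn.Linear(NOISE_DIM, 512), nn.ReLU(), nn.Linear(512, TRACE_LEN))
    D = nn.Sequential(nn.Linear(TRACE_LEN, 512), nn.ReLU(), nn.Linear(512, K + 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    ce = nn.CrossEntropyLoss()

    def train_step(real_x, real_y):
        n = real_x.size(0)
        # Discriminator/classifier: real traces keep their site label, fakes get label K.
        fake_x = G(torch.randn(n, NOISE_DIM)).detach()
        d_loss = ce(D(real_x), real_y) + ce(D(fake_x), torch.full((n,), K))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator: push generated traces away from the "fake" class (simplified loss).
        g_loss = -ce(D(G(torch.randn(n, NOISE_DIM))), torch.full((n,), K))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()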
2

Triantafillou, Sofia, and Greg Cooper. "Learning Adjustment Sets from Observational and Limited Experimental Data." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9940–48. http://dx.doi.org/10.1609/aaai.v35i11.17194.

Abstract:
Estimating causal effects from observational data is not always possible due to confounding. Identifying a set of appropriate covariates (adjustment set) and adjusting for their influence can remove confounding bias; however, such a set is often not identifiable from observational data alone. Experimental data allow unbiased causal effect estimation, but are typically limited in sample size and can therefore yield estimates of high variance. Moreover, experiments are often performed on a different (specialized) population than the population of interest. In this work, we introduce a method that combines large observational and limited experimental data to identify adjustment sets and improve the estimation of causal effects for a target population. The method scores an adjustment set by calculating the marginal likelihood for the experimental data given an observationally-derived causal effect estimate, using a putative adjustment set. The method can make inferences that are not possible using constraint-based methods. We show that the method can improve causal effect estimation, and can make additional inferences when compared to state-of-the-art methods.
3

Zhao, Yao, Dong Joo Rhee, Carlos Cardenas, Laurence E. Court, and Jinzhong Yang. "Training deep-learning segmentation models from severely limited data." Medical Physics 48, no. 4 (February 19, 2021): 1697–706. http://dx.doi.org/10.1002/mp.14728.

4

Kim, Minjeong, Yujung Gil, Yuyeon Kim, and Jihie Kim. "Deep-Learning-Based Scalp Image Analysis Using Limited Data." Electronics 12, no. 6 (March 14, 2023): 1380. http://dx.doi.org/10.3390/electronics12061380.

Abstract:
The World Health Organization and Korea National Health Insurance assert that the number of alopecia patients is increasing every year, and approximately 70 percent of adults suffer from scalp problems. Although alopecia is a genetic problem, it is difficult to diagnose at an early stage. Although deep-learning-based approaches have been effective for medical image analyses, it is challenging to generate deep learning models for alopecia detection and analysis because creating an alopecia image dataset is difficult. In this paper, we present an approach for generating a model specialized for alopecia analysis that achieves high accuracy by applying data preprocessing, data augmentation, and an ensemble of deep learning models that have been effective for medical image analyses. We use an alopecia image dataset containing 526 good, 13,156 mild, 3742 moderate, and 825 severe alopecia images. The dataset was further augmented by applying normalization, geometry-based augmentation (rotate, vertical flip, horizontal flip, crop, and affine transformation), and PCA augmentation. We compare the performance of a single deep learning model using ResNet, ResNeXt, DenseNet, XceptionNet, and ensembles of these models. The best result was achieved when DenseNet, XceptionNet, and ResNet were combined to achieve an accuracy of 95.75 and an F1 score of 87.05.
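
A rough sketch of the soft-voting ensemble idea mentioned in the abstract, written here as an illustration rather than the authors' implementation (the model choices, the 4-class setup, and the untrained weights are assumptions; XceptionNet is not in torchvision and would come from a library such as timm):

    import torch
    import torch.nn as nn
    import torchvision.models as tvm

    NUM_CLASSES = 4  # good / mild / moderate / severe, per the abstract

    resnet = tvm.resnet50(weights=None)
    resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)
    densenet = tvm.densenet121(weights=None)
    densenet.classifier = nn.Linear(densenet.classifier.in_features, NUM_CLASSES)
    # An Xception backbone could be added, e.g. via timm.create_model("xception").

    @torch.no_grad()
    def ensemble_predict(x, models=(resnet, densenet)):
        # Soft voting: average the per-model class probabilities, then take the argmax.
        probs = torch.stack([m(x).softmax(dim=1) for m in models]).mean(dim=0)
        return probs.argmax(dim=1)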
5

Chen, Jiaao, Derek Tam, Colin Raffel, Mohit Bansal, and Diyi Yang. "An Empirical Survey of Data Augmentation for Limited Data Learning in NLP." Transactions of the Association for Computational Linguistics 11 (2023): 191–211. http://dx.doi.org/10.1162/tacl_a_00542.

Abstract:
NLP has achieved great progress in the past decade through the use of neural models and large labeled datasets. The dependence on abundant data prevents NLP models from being applied to low-resource settings or novel tasks where significant time, money, or expertise is required to label massive amounts of textual data. Recently, data augmentation methods have been explored as a means of improving data efficiency in NLP. To date, there has been no systematic empirical overview of data augmentation for NLP in the limited labeled data setting, making it difficult to understand which methods work in which settings. In this paper, we provide an empirical survey of recent progress on data augmentation for NLP in the limited labeled data setting, summarizing the landscape of methods (including token-level augmentations, sentence-level augmentations, adversarial augmentations, and hidden-space augmentations) and carrying out experiments on 11 datasets covering topics/news classification, inference tasks, paraphrasing tasks, and single-sentence tasks. Based on the results, we draw several conclusions to help practitioners choose appropriate augmentations in different settings and discuss the current challenges and future directions for limited data learning in NLP.
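
For illustration only, two token-level augmentations of the kind surveyed (random swap and random deletion, in the spirit of "easy data augmentation") can be written in a few lines; this is not code from the paper:

    import random

    def random_swap(tokens, n_swaps=1):
        tokens = tokens.copy()
        for _ in range(n_swaps):
            i, j = random.sample(range(len(tokens)), 2)
            tokens[i], tokens[j] = tokens[j], tokens[i]
        return tokens

    def random_deletion(tokens, p=0.1):
        kept = [t for t in tokens if random.random() > p]
        return kept or [random.choice(tokens)]  # never return an empty sentence

    print(random_swap("limited labeled data makes NLP models brittle".split()))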
6

Han, Te, Chao Liu, Rui Wu, and Dongxiang Jiang. "Deep transfer learning with limited data for machinery fault diagnosis." Applied Soft Computing 103 (May 2021): 107150. http://dx.doi.org/10.1016/j.asoc.2021.107150.

7

Ji, Xuefei, Jue Wang, Ye Li, Qiang Sun, Shi Jin, and Tony Q. S. Quek. "Data-Limited Modulation Classification With a CVAE-Enhanced Learning Model." IEEE Communications Letters 24, no. 10 (October 2020): 2191–95. http://dx.doi.org/10.1109/lcomm.2020.3004877.

8

Forestier, Germain, and Cédric Wemmert. "Semi-supervised learning using multiple clusterings with limited labeled data." Information Sciences 361-362 (September 2016): 48–65. http://dx.doi.org/10.1016/j.ins.2016.04.040.

9

Wen, Jiahui, and Zhiying Wang. "Learning general model for activity recognition with limited labelled data." Expert Systems with Applications 74 (May 2017): 19–28. http://dx.doi.org/10.1016/j.eswa.2017.01.002.

10

Zhang, Ansi, Shaobo Li, Yuxin Cui, Wanli Yang, Rongzhi Dong, and Jianjun Hu. "Limited Data Rolling Bearing Fault Diagnosis With Few-Shot Learning." IEEE Access 7 (2019): 110895–904. http://dx.doi.org/10.1109/access.2019.2934233.

11

Tulsyan, Aditya, Christopher Garvin, and Cenk Undey. "Machine-learning for biopharmaceutical batch process monitoring with limited data." IFAC-PapersOnLine 51, no. 18 (2018): 126–31. http://dx.doi.org/10.1016/j.ifacol.2018.09.287.

12

Prasanna Das, Hari, Ryan Tran, Japjot Singh, Xiangyu Yue, Geoffrey Tison, Alberto Sangiovanni-Vincentelli, and Costas J. Spanos. "Conditional Synthetic Data Generation for Robust Machine Learning Applications with Limited Pandemic Data." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 11792–800. http://dx.doi.org/10.1609/aaai.v36i11.21435.

Abstract:
Background: At the onset of a pandemic, such as COVID-19, data with proper labeling/attributes corresponding to the new disease might be unavailable or sparse. Machine Learning (ML) models trained with the available data, which is limited in quantity and poor in diversity, will often be biased and inaccurate. At the same time, ML algorithms designed to fight pandemics must have good performance and be developed in a time-sensitive manner. To tackle the challenges of limited data, and label scarcity in the available data, we propose generating conditional synthetic data, to be used alongside real data for developing robust ML models. Methods: We present a hybrid model consisting of a conditional generative flow and a classifier for conditional synthetic data generation. The classifier decouples the feature representation for the condition, which is fed to the flow to extract the local noise. We generate synthetic data by manipulating the local noise with fixed conditional feature representation. We also propose a semi-supervised approach to generate synthetic samples in the absence of labels for a majority of the available data. Results: We performed conditional synthetic generation for chest computed tomography (CT) scans corresponding to normal, COVID-19, and pneumonia afflicted patients. We show that our method significantly outperforms existing models both on qualitative and quantitative performance, and our semi-supervised approach can efficiently synthesize conditional samples under label scarcity. As an example of downstream use of synthetic data, we show improvement in COVID-19 detection from CT scans with conditional synthetic data augmentation.
13

Zhou, Renzhe, Chen-Xiao Gao, Zongzhang Zhang, and Yang Yu. "Generalizable Task Representation Learning for Offline Meta-Reinforcement Learning with Data Limitations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 17132–40. http://dx.doi.org/10.1609/aaai.v38i15.29658.

Abstract:
Generalization and sample efficiency have been long-standing issues concerning reinforcement learning, and thus the field of Offline Meta-Reinforcement Learning (OMRL) has gained increasing attention due to its potential of solving a wide range of problems with static and limited offline data. Existing OMRL methods often assume sufficient training tasks and data coverage to apply contrastive learning to extract task representations. However, such assumptions are not applicable in several real-world applications and thus undermine the generalization ability of the representations. In this paper, we consider OMRL with two types of data limitations: limited training tasks and limited behavior diversity, and propose a novel algorithm called GENTLE for learning generalizable task representations in the face of data limitations. GENTLE employs Task Auto-Encoder (TAE), which is an encoder-decoder architecture to extract the characteristics of the tasks. Unlike existing methods, TAE is optimized solely by reconstruction of the state transition and reward, which captures the generative structure of the task models and produces generalizable representations when training tasks are limited. To alleviate the effect of limited behavior diversity, we consistently construct pseudo-transitions to align the data distribution used to train TAE with the data distribution encountered during testing. Empirically, GENTLE significantly outperforms existing OMRL methods on both in-distribution tasks and out-of-distribution tasks across both the given-context protocol and the one-shot protocol.
14

Guo, Runze, Bei Sun, Xiaotian Qiu, Shaojing Su, Zhen Zuo, and Peng Wu. "Fine-Grained Recognition of Surface Targets with Limited Data." Electronics 9, no. 12 (December 2, 2020): 2044. http://dx.doi.org/10.3390/electronics9122044.

Abstract:
Recognition of surface targets has a vital influence on the development of military and civilian applications such as maritime rescue patrols, illegal-vessel screening, and maritime operation monitoring. However, owing to the interference of visual similarity and environmental variations and the lack of high-quality datasets, accurate recognition of surface targets has always been a challenging task. In this paper, we introduce a multi-attention residual model based on deep learning methods, in which channel and spatial attention modules are applied for feature fusion. In addition, we use transfer learning to improve the feature expression capabilities of the model under conditions of limited data. A function based on metric learning is adopted to increase the distance between different classes. Finally, a dataset with eight types of surface targets is established. Comparative experiments on our self-built dataset show that the proposed method focuses more on discriminative regions, avoids problems such as vanishing gradients, and achieves better classification results than B-CNN, RA-CNN, MAMC, MA-CNN, and DFL-CNN.
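
A compact sketch of the channel-plus-spatial attention block the abstract refers to, in the spirit of CBAM-style modules; the layer sizes and pooling choices are illustrative assumptions, not the paper's exact design:

    import torch
    import torch.nn as nn

    class ChannelSpatialAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                     nn.Linear(channels // reduction, channels))
            self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):                      # x: (B, C, H, W) feature map
            b, c, _, _ = x.shape
            ca = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) +
                               self.mlp(x.amax(dim=(2, 3)))).view(b, c, 1, 1)
            x = x * ca                              # channel attention
            sa = torch.sigmoid(self.spatial(torch.cat(
                [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
            return x * sa                           # spatial attention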
15

Bernatchez, Renaud, Audrey Durand, and Flavie Lavoie-Cardinal. "Annotation Cost-Sensitive Deep Active Learning with Limited Data (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12913–14. http://dx.doi.org/10.1609/aaai.v36i11.21593.

Abstract:
Deep learning is a promising avenue to automate tedious analysis tasks in biomedical imaging. However, its application in such a context is limited by the large amount of labeled data required to train deep learning models. While active learning may be used to reduce the amount of data to label, many approaches do not consider the cost of annotating, which is often significant in a biomedical imaging setting. In this work we show how annotation cost can be considered and learned during active learning on a classification task on the MNIST dataset.
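
The setting can be illustrated with a toy cost-aware uncertainty-sampling loop (the data, the cost model, and the acquisition rule below are placeholders, not the authors' method):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def pick_query(model, X_pool, costs):
        proba = model.predict_proba(X_pool)
        uncertainty = 1.0 - proba.max(axis=1)       # least-confidence score
        return int(np.argmax(uncertainty / costs))  # informativeness per unit of annotation cost

    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(10, 5))                # small labeled seed set
    y_lab = np.array([0, 1] * 5)
    X_pool, costs = rng.normal(size=(200, 5)), rng.uniform(1, 10, 200)
    clf = LogisticRegression().fit(X_lab, y_lab)
    print("next item to annotate:", pick_query(clf, X_pool, costs))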
16

Alzubaidi, Laith, Muthana Al-Amidie, Ahmed Al-Asadi, Amjad J. Humaidi, Omran Al-Shamma, Mohammed A. Fadhel, Jinglan Zhang, J. Santamaría, and Ye Duan. "Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data." Cancers 13, no. 7 (March 30, 2021): 1590. http://dx.doi.org/10.3390/cancers13071590.

Abstract:
Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators coming from various backgrounds. More importantly, the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for the annotation process by transferring the deep learning models with knowledge from a previous task and then by fine-tuning them on a relatively small dataset of the current task. Most of the methods of medical image classification employ transfer learning from pretrained models, e.g., ImageNet, which has been proven to be ineffective. This is due to the mismatch in learned features between the natural image, e.g., ImageNet, and medical images. Additionally, it results in the utilization of deeply elaborated models. In this paper, we propose a novel transfer learning approach to overcome the previous drawbacks by means of training the deep learning model on large unlabeled medical image datasets and by next transferring the knowledge to train the deep learning model on the small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios dealing with skin and breast cancer classification tasks. According to the reported results, it has been empirically proven that the proposed approach can significantly improve the performance of both classification scenarios. In terms of skin cancer, the proposed model achieved an F1-score value of 89.09% when trained from scratch and 98.53% with the proposed approach. Secondly, it achieved an accuracy value of 85.29% and 97.51%, respectively, when trained from scratch and using the proposed approach in the case of the breast cancer scenario. Finally, we concluded that our method can possibly be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and the labeled image data is limited. Moreover, it can be utilized to improve the performance of medical imaging tasks in the same domain. To do so, we used the pretrained skin cancer model to train on feet skin to classify them into two classes—either normal or abnormal (diabetic foot ulcer (DFU)). It achieved an F1-score value of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double-transfer learning.
17

Ayaz, Adeeba, Maddu Rajesh, Shailesh Kumar Singh, and Shaik Rehana. "Estimation of reference evapotranspiration using machine learning models with limited data." AIMS Geosciences 7, no. 3 (2021): 268–90. http://dx.doi.org/10.3934/geosci.2021016.

18

Mazumder, Pratik, and Pravendra Singh. "Protected attribute guided representation learning for bias mitigation in limited data." Knowledge-Based Systems 244 (May 2022): 108449. http://dx.doi.org/10.1016/j.knosys.2022.108449.

19

Bardis, Michelle, Roozbeh Houshyar, Chanon Chantaduly, Alexander Ushinsky, Justin Glavis-Bloom, Madeleine Shaver, Daniel Chow, Edward Uchio, and Peter Chang. "Deep Learning with Limited Data: Organ Segmentation Performance by U-Net." Electronics 9, no. 8 (July 26, 2020): 1199. http://dx.doi.org/10.3390/electronics9081199.

Abstract:
(1) Background: The effectiveness of deep learning artificial intelligence depends on data availability, often requiring large volumes of data to effectively train an algorithm. However, few studies have explored the minimum number of images needed for optimal algorithmic performance. (2) Methods: This institutional review board (IRB)-approved retrospective review included patients who received prostate magnetic resonance imaging (MRI) between September 2014 and August 2018 and a magnetic resonance imaging (MRI) fusion transrectal biopsy. T2-weighted images were manually segmented by a board-certified abdominal radiologist. Segmented images were trained on a deep learning network with the following case numbers: 8, 16, 24, 32, 40, 80, 120, 160, 200, 240, 280, and 320. (3) Results: Our deep learning network’s performance was assessed with a Dice score, which measures overlap between the radiologist’s segmentations and deep learning-generated segmentations and ranges from 0 (no overlap) to 1 (perfect overlap). Our algorithm’s Dice score started at 0.424 with 8 cases and improved to 0.858 with 160 cases. After 160 cases, the Dice increased to 0.867 with 320 cases. (4) Conclusions: Our deep learning network for prostate segmentation produced the highest overall Dice score with 320 training cases. Performance improved notably from training sizes of 8 to 120, then plateaued with minimal improvement at training case size above 160. Other studies utilizing comparable network architectures may have similar plateaus, suggesting suitable results may be obtainable with small datasets.
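
For reference, the Dice score used throughout the abstract can be computed for binary masks as follows (a generic illustration, not the authors' evaluation code):

    import numpy as np

    def dice(pred, truth, eps=1e-7):
        # 2*|A ∩ B| / (|A| + |B|); ranges from 0 (no overlap) to 1 (perfect overlap).
        pred, truth = pred.astype(bool), truth.astype(bool)
        return (2.0 * np.logical_and(pred, truth).sum() + eps) / (pred.sum() + truth.sum() + eps)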
20

Krishnagopal, Sanjukta, Yiannis Aloimonos, and Michelle Girvan. "Similarity Learning and Generalization with Limited Data: A Reservoir Computing Approach." Complexity 2018 (November 1, 2018): 1–15. http://dx.doi.org/10.1155/2018/6953836.

Abstract:
We investigate the ways in which a machine learning architecture known as Reservoir Computing learns concepts such as “similar” and “different” and other relationships between image pairs and generalizes these concepts to previously unseen classes of data. We present two Reservoir Computing architectures, which loosely resemble neural dynamics, and show that a Reservoir Computer (RC) trained to identify relationships between image pairs drawn from a subset of training classes generalizes the learned relationships to substantially different classes unseen during training. We demonstrate our results on the simple MNIST handwritten digit database as well as a database of depth maps of visual scenes in videos taken from a moving camera. We consider image pair relationships such as images from the same class; images from the same class with one image superposed with noise, rotated 90°, blurred, or scaled; images from different classes. We observe that the reservoir acts as a nonlinear filter projecting the input into a higher dimensional space in which the relationships are separable; i.e., the reservoir system state trajectories display different dynamical patterns that reflect the corresponding input pair relationships. Thus, as opposed to training in the entire high-dimensional reservoir space, the RC only needs to learn characteristic features of these dynamical patterns, allowing it to perform well with very few training examples compared with conventional machine learning feed-forward techniques such as deep learning. In generalization tasks, we observe that RCs perform significantly better than state-of-the-art, feed-forward, pair-based architectures such as convolutional and deep Siamese Neural Networks (SNNs). We also show that RCs can not only generalize relationships, but also generalize combinations of relationships, providing robust and effective image pair classification. Our work helps bridge the gap between explainable machine learning with small datasets and biologically inspired analogy-based learning, pointing to new directions in the investigation of learning processes.
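
The reservoir mechanism described above can be sketched as a minimal echo state network: a fixed random recurrent layer maps inputs into a high-dimensional state and only a linear readout is trained. Sizes, scaling, and the leak rate below are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    N_IN, N_RES = 64, 500
    W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
    W = rng.normal(size=(N_RES, N_RES))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the spectral radius below 1

    def run_reservoir(inputs, leak=0.3):
        x, states = np.zeros(N_RES), []
        for u in inputs:                               # u: (N_IN,) input at one time step
            x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Only the readout is trained, e.g. by ridge regression from the collected
    # states S to targets Y: beta = solve(S.T @ S + 1e-6 * I, S.T @ Y).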
21

Tufek, Nilay, Murat Yalcin, Mucahit Altintas, Fatma Kalaoglu, Yi Li, and Senem Kursun Bahadir. "Human Action Recognition Using Deep Learning Methods on Limited Sensory Data." IEEE Sensors Journal 20, no. 6 (March 15, 2020): 3101–12. http://dx.doi.org/10.1109/jsen.2019.2956901.

22

Huang, Jianqing, Hecong Liu, Jinghang Dai, and Weiwei Cai. "Reconstruction for limited-data nonlinear tomographic absorption spectroscopy via deep learning." Journal of Quantitative Spectroscopy and Radiative Transfer 218 (October 2018): 187–93. http://dx.doi.org/10.1016/j.jqsrt.2018.07.011.

23

Torres, Alfonso F., Wynn R. Walker, and Mac McKee. "Forecasting daily potential evapotranspiration using machine learning and limited climatic data." Agricultural Water Management 98, no. 4 (February 2011): 553–62. http://dx.doi.org/10.1016/j.agwat.2010.10.012.

24

Holzer, Jorge, and Qian Qu. "Confidence of the trembling hand: Bayesian learning with data-limited stocks." Natural Resource Modeling 31, no. 2 (March 12, 2018): e12164. http://dx.doi.org/10.1111/nrm.12164.

25

Niezgoda, Stephen R., and Jared Glover. "Unsupervised Learning for Efficient Texture Estimation From Limited Discrete Orientation Data." Metallurgical and Materials Transactions A 44, no. 11 (February 22, 2013): 4891–905. http://dx.doi.org/10.1007/s11661-013-1653-7.

26

Zhang, Jialin, Mairidan Wushouer, Gulanbaier Tuerhong, and Hanfang Wang. "Semi-Supervised Learning for Robust Emotional Speech Synthesis with Limited Data." Applied Sciences 13, no. 9 (May 6, 2023): 5724. http://dx.doi.org/10.3390/app13095724.

Abstract:
Emotional speech synthesis is an important branch of human–computer interaction technology that aims to generate emotionally expressive and comprehensible speech based on the input text. With the rapid development of speech synthesis technology based on deep learning, the research of affective speech synthesis has gradually attracted the attention of scholars. However, due to the lack of quality emotional speech synthesis corpus, emotional speech synthesis research under low-resource conditions is prone to overfitting, exposure error, catastrophic forgetting and other problems leading to unsatisfactory generated speech results. In this paper, we proposed an emotional speech synthesis method that integrates migration learning, semi-supervised training and robust attention mechanism to achieve better adaptation to the emotional style of the speech data during fine-tuning. By adopting an appropriate fine-tuning strategy, trade-off parameter configuration and pseudo-labels in the form of loss functions, we efficiently guided the learning of the regularized synthesis of emotional speech. The proposed SMAL-ET2 method outperforms the baseline methods in both subjective and objective evaluations. It is demonstrated that our training strategy with stepwise monotonic attention and semi-supervised loss method can alleviate the overfitting phenomenon and improve the generalization ability of the text-to-speech model. Our method can also enable the model to successfully synthesize different categories of emotional speech with better naturalness and emotion similarity.
27

Shin, Hyunkyung, Hyeonung Shin, Wonje Choi, Jaesung Park, Minjae Park, Euiyul Koh, and Honguk Woo. "Sample-Efficient Deep Learning Techniques for Burn Severity Assessment with Limited Data Conditions." Applied Sciences 12, no. 14 (July 21, 2022): 7317. http://dx.doi.org/10.3390/app12147317.

Abstract:
The automatic analysis of medical data and images to help diagnosis has recently become a major area in the application of deep learning. In general, deep learning techniques can be effective when a large high-quality dataset is available for model training. Thus, there is a need for sample-efficient learning techniques, particularly in the field of medical image analysis, as significant cost and effort are required to obtain a sufficient number of well-annotated high-quality training samples. In this paper, we address the problem of deep neural network training under sample deficiency by investigating several sample-efficient deep learning techniques. We concentrate on applying these techniques to skin burn image analysis and classification. We first build a large-scale, professionally annotated dataset of skin burn images, which enables the establishment of convolutional neural network (CNN) models for burn severity assessment with high accuracy. We then deliberately set data limitation conditions and adapt several sample-efficient techniques, such as transferable learning (TL), self-supervised learning (SSL), federated learning (FL), and generative adversarial network (GAN)-based data augmentation, to those conditions. Through comprehensive experimentation, we evaluate the sample-efficient deep learning techniques for burn severity assessment, and show, in particular, that SSL models learned on a small task-specific dataset can achieve comparable accuracy to a baseline model learned on a six-times larger dataset. We also demonstrate the applicability of FL and GANs to model training under different data limitation conditions that commonly occur in the area of healthcare and medicine where deep learning models are adopted.
28

Dychka, Ivan, Kateryna Potapova, Liliya Vovk, Vasyl Meliukh, and Olga Vedenieieva. "Adaptive Domain-Specific Named Entity Recognition Method with Limited Data." Measuring and Computing Devices in Technological Processes, no. 1 (March 28, 2024): 82–92. http://dx.doi.org/10.31891/2219-9365-2024-77-11.

Abstract:
The ever-evolving volume of digital information requires the development of innovative search strategies aimed at obtaining the necessary data efficiently and economically feasible. The urgency of the problem is emphasized by the growing complexity of information landscapes and the need for fast data extraction methodologies. In the field of natural language processing, named entity recognition (NER) is an essential task for extracting useful information from unstructured text input for further classification into predefined categories. Nevertheless, conventional methods frequently encounter difficulties when confronted with a limited amount of labeled data, posing challenges in real-world scenarios where obtaining substantial annotated datasets is problematic or costly. In order to address the problem of domain-specific NER with limited data, this work investigates NER techniques that can overcome these constraints by continuously learning from newly collected information on pre-trained models. Several techniques are also used for making the greatest use of the limited labeled data, such as using active learning, exploiting unlabeled data, and integrating domain knowledge. Using domain-specific datasets with different levels of annotation scarcity, the fine-tuning process of pre-trained models, such as transformer-based models (TRF) and Toc2Vec (token-to-vector) models is investigated. The results show that, in general, expanding the volume of training data enhances most models' performance for NER, particularly for models with sufficient learning ability. Depending on the model architecture and the complexity of the entity label being learned, the effect of more data on the model's performance can change. After increasing the training data by 20%, the LT2V model shows the most balanced growth in accuracy overall by 11% recognizing 73% of entities and processing speed. Meanwhile, with consistent processing speed and the greatest F1-score, the Transformer-based model (TRF) shows promise for effective learning with less data, achieving 74% successful prediction and a 7% increase in performance after expanding the training data to 81%. Our results pave the way for the creation of more resilient and efficient NER systems suited to specialized domains and further the field of domain-specific NER with sparse data. We also shed light on the relative merits of various NER models and training strategies, and offer perspectives for future research.
29

Athey, Susan, and Stefan Wager. "Policy Learning With Observational Data." Econometrica 89, no. 1 (2021): 133–61. http://dx.doi.org/10.3982/ecta15732.

Abstract:
In many areas, practitioners seek to use observational data to learn a treatment assignment policy that satisfies application‐specific constraints, such as budget, fairness, simplicity, or other functional form constraints. For example, policies may be restricted to take the form of decision trees based on a limited set of easily observable individual characteristics. We propose a new approach to this problem motivated by the theory of semiparametrically efficient estimation. Our method can be used to optimize either binary treatments or infinitesimal nudges to continuous treatments, and can leverage observational data where causal effects are identified using a variety of strategies, including selection on observables and instrumental variables. Given a doubly robust estimator of the causal effect of assigning everyone to treatment, we develop an algorithm for choosing whom to treat, and establish strong guarantees for the asymptotic utilitarian regret of the resulting policy.
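
For orientation, the doubly robust (AIPW) score and the empirical policy objective that this line of work builds on can be written, for a binary treatment $W_i$, outcome $Y_i$, covariates $X_i$, estimated propensity $\hat{e}$, and estimated outcome model $\hat{\mu}$ (the notation is illustrative, not quoted from the paper):

    \hat{\Gamma}_i = \hat{\mu}(1, X_i) - \hat{\mu}(0, X_i)
      + \frac{W_i - \hat{e}(X_i)}{\hat{e}(X_i)\,\bigl(1 - \hat{e}(X_i)\bigr)}
        \bigl(Y_i - \hat{\mu}(W_i, X_i)\bigr),
    \qquad
    \hat{\pi} = \arg\max_{\pi \in \Pi} \frac{1}{n} \sum_{i=1}^{n} \bigl(2\pi(X_i) - 1\bigr)\,\hat{\Gamma}_i .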
30

Radino, Radino, and Lia Fatika Yiyi Permatasari. "PAI Teacher Strategy in Improving Learning Effectiveness in Limited Face-to-Face Learning." Jurnal Pendidikan Agama Islam 19, no. 2 (December 31, 2022): 249–62. http://dx.doi.org/10.14421/jpai.2022.192-06.

Abstract:
Purpose – The purpose of this study was to find out PAI and Budi Pekerti teachers' strategies in preparing the planning, implementation, methods, and media used, as well as in evaluating PAI and Budi Pekerti learning, to develop the effectiveness of learning in limited face-to-face learning at SMK N 1 Depok. Design/methods/approaches – This research is descriptive and qualitative. It involved 3 research subjects, Islamic Religious Education and Moral Education subject teachers for class X, with 20 informants from class X TB students. The data were collected through interviews, observation, and documentation, and validated through triangulation. Data were analyzed through data reduction (data selection), data display (data presentation), and conclusion drawing/verification. Findings – The results of the study show that the PAI and Budi Pekerti teachers' strategy in developing the effectiveness of learning in limited face-to-face learning at SMK N 1 Depok is quite effective. PAI and Budi Pekerti teachers prepare learning through the following steps: 1) making lesson plans based on the syllabus, 2) preparing material, and 3) preparing simple learning media. In carrying out learning, the teachers pay attention to learning time, act as motivators, apply appropriate learning methods, and provide simple media. Learning is evaluated with daily assessments, a mid test, and a final test. Research implications/limitations – This research has implications for the teacher's strategy in limited face-to-face learning to increase learning effectiveness. Originality/value – This study provides a real picture of the limited face-to-face learning implementation of PAI.
31

Wang, Jingjing, Zheng Liu, Rong Xie, and Lei Ran. "Radar HRRP Target Recognition Based on Dynamic Learning with Limited Training Data." Remote Sensing 13, no. 4 (February 18, 2021): 750. http://dx.doi.org/10.3390/rs13040750.

Abstract:
For high-resolution range profile (HRRP)-based radar automatic target recognition (RATR), adequate training data are required to characterize a target signature effectively and get good recognition performance. However, collecting enough training data involving HRRP samples from each target orientation is hard. To tackle the HRRP-based RATR task with limited training data, a novel dynamic learning strategy is proposed based on the single-hidden layer feedforward network (SLFN) with an assistant classifier. In the offline training phase, the training data are used for pretraining the SLFN using a reduced kernel extreme learning machine (RKELM). In the online classification phase, the collected test data are first labeled by fusing the recognition results of the current SLFN and assistant classifier. Then the test samples with reliable pseudolabels are used as additional training data to update the parameters of SLFN with the online sequential RKELM (OS-RKELM). Moreover, to improve the accuracy of label estimation for test data, a novel semi-supervised learning method named constraint propagation-based label propagation (CPLP) was developed as an assistant classifier. The proposed method dynamically accumulates knowledge from training and test data through online learning, thereby reinforcing performance of the RATR system with limited training data. Experiments conducted on the simulated HRRP data from 10 civilian vehicles and real HRRP data from three military vehicles demonstrated the effectiveness of the proposed method when the training data are limited.
32

Lee, Young-Pyo, Ki-Yeon Kim, and Yong Soo Kim. "Comparative Study on Predictive Power of Machine Learning with Limited Data Collection." Journal of Applied Reliability 19, no. 3 (September 30, 2019): 210–25. http://dx.doi.org/10.33162/jar.2019.09.19.3.210.

33

Bang, Junseong, Piergiuseppe Di Marco, Hyejeon Shin, and Pangun Park. "Deep Transfer Learning-Based Fault Diagnosis Using Wavelet Transform for Limited Data." Applied Sciences 12, no. 15 (July 25, 2022): 7450. http://dx.doi.org/10.3390/app12157450.

Abstract:
Although various deep learning techniques have been proposed to diagnose industrial faults, it is still challenging to obtain sufficient training samples to build the fault diagnosis model in practice. This paper presents a framework that combines wavelet transformation and transfer learning (TL) for fault diagnosis with limited target samples. The wavelet transform converts a time-series sample to a time-frequency representative image based on the extracted hidden time and frequency features of various faults. On the other hand, the TL technique leverages the existing neural networks, called GoogLeNet, which were trained using a sufficient source data set for different target tasks. Since the data distributions between the source and the target domains are considerably different in industrial practice, we partially retrain the pre-trained model of the source domain using intermediate samples that are conceptually related to the target domain. We use a reciprocating pump model to generate various combinations of faults with different severity levels and evaluate the effectiveness of the proposed method. The results show that the proposed method provides higher diagnostic accuracy than the support vector machine and the convolutional neural network under wide variations in the training data size and the fault severity. In particular, we show that the severity level of the fault condition heavily affects the diagnostic performance.
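
A minimal sketch of the pipeline the abstract describes: a 1-D vibration signal is turned into a time-frequency image with a continuous wavelet transform, and a pretrained GoogLeNet is then fine-tuned on those images. The wavelet choice, scales, class count, and freezing strategy below are assumptions for illustration:

    import numpy as np
    import pywt
    import torch.nn as nn
    import torchvision.models as tvm

    def to_scalogram(signal, scales=np.arange(1, 65)):
        coeffs, _ = pywt.cwt(signal, scales, "morl")     # (len(scales), len(signal))
        img = np.abs(coeffs)
        return (img - img.min()) / (np.ptp(img) + 1e-8)  # normalize to [0, 1]

    NUM_FAULT_CLASSES = 5                                # assumed number of fault states
    net = tvm.googlenet(weights="IMAGENET1K_V1")         # backbone pretrained on the source domain
    net.fc = nn.Linear(net.fc.in_features, NUM_FAULT_CLASSES)
    for name, p in net.named_parameters():               # retrain only the new head here;
        p.requires_grad = name.startswith("fc")          # later layers could also be unfrozen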
34

Yang, Qiuju, Yingying Wang, and Jie Ren. "Auroral Image Classification With Very Limited Labeled Data Using Few-Shot Learning." IEEE Geoscience and Remote Sensing Letters 19 (2022): 1–5. http://dx.doi.org/10.1109/lgrs.2022.3151755.

35

Villon, Sébastien, Corina Iovan, Morgan Mangeas, Thomas Claverie, David Mouillot, Sébastien Villéger, and Laurent Vigliola. "Automatic underwater fish species classification with limited data using few-shot learning." Ecological Informatics 63 (July 2021): 101320. http://dx.doi.org/10.1016/j.ecoinf.2021.101320.

36

Oh, Yujin, Sangjoon Park, and Jong Chul Ye. "Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets." IEEE Transactions on Medical Imaging 39, no. 8 (August 2020): 2688–700. http://dx.doi.org/10.1109/tmi.2020.2993291.

37

Saufi, Syahril Ramadhan, Zair Asrar Bin Ahmad, Mohd Salman Leong, and Meng Hee Lim. "Gearbox Fault Diagnosis Using a Deep Learning Model With Limited Data Sample." IEEE Transactions on Industrial Informatics 16, no. 10 (October 2020): 6263–71. http://dx.doi.org/10.1109/tii.2020.2967822.

38

Xue, Yongjian, and Pierre Beauseroy. "Transfer learning for one class SVM adaptation to limited data distribution change." Pattern Recognition Letters 100 (December 2017): 117–23. http://dx.doi.org/10.1016/j.patrec.2017.10.030.

39

Seliya, Naeem, and Taghi M. Khoshgoftaar. "Software quality estimation with limited fault data: a semi-supervised learning perspective." Software Quality Journal 15, no. 3 (August 10, 2007): 327–44. http://dx.doi.org/10.1007/s11219-007-9013-8.

40

Bieker, Katharina, Sebastian Peitz, Steven L. Brunton, J. Nathan Kutz, and Michael Dellnitz. "Deep model predictive flow control with limited sensor data and online learning." Theoretical and Computational Fluid Dynamics 34, no. 4 (March 12, 2020): 577–91. http://dx.doi.org/10.1007/s00162-020-00520-4.

41

Luo, Xihaier, and Ahsan Kareem. "Bayesian deep learning with hierarchical prior: Predictions from limited and noisy data." Structural Safety 84 (May 2020): 101918. http://dx.doi.org/10.1016/j.strusafe.2019.101918.

42

Chan, Zeke S. H., H. W. Ngan, A. B. Rad, A. K. David, and N. Kasabov. "Short-term ANN load forecasting from limited data using generalization learning strategies." Neurocomputing 70, no. 1-3 (December 2006): 409–19. http://dx.doi.org/10.1016/j.neucom.2005.12.131.

43

Jain, Sanjay, and Efim Kinber. "Learning languages from positive data and a limited number of short counterexamples." Theoretical Computer Science 389, no. 1-2 (December 2007): 190–218. http://dx.doi.org/10.1016/j.tcs.2007.08.010.

44

Wagenaar, Dennis, Jurjen de Jong, and Laurens M. Bouwer. "Multi-variable flood damage modelling with limited data using supervised learning approaches." Natural Hazards and Earth System Sciences 17, no. 9 (September 29, 2017): 1683–96. http://dx.doi.org/10.5194/nhess-17-1683-2017.

Abstract:
Flood damage assessment is usually done with damage curves only dependent on the water depth. Several recent studies have shown that supervised learning techniques applied to a multi-variable data set can produce significantly better flood damage estimates. However, creating and applying a multi-variable flood damage model requires an extensive data set, which is rarely available, and this is currently holding back the widespread application of these techniques. In this paper we enrich a data set of residential building and contents damage from the Meuse flood of 1993 in the Netherlands, to make it suitable for multi-variable flood damage assessment. Results from 2-D flood simulations are used to add information on flow velocity, flood duration and the return period to the data set, and cadastre data are used to add information on building characteristics. Next, several statistical approaches are used to create multi-variable flood damage models, including regression trees, bagging regression trees, random forest, and a Bayesian network. Validation on data points from a test set shows that the enriched data set in combination with the supervised learning techniques delivers a 20% reduction in the mean absolute error, compared to a simple model only based on the water depth, despite several limitations of the enriched data set. We find that with our data set, the tree-based methods perform better than the Bayesian network.
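
As a generic illustration of the multi-variable supervised models compared in the paper (not the Meuse data set or the authors' code), a random forest over depth, velocity, duration, and one building feature:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    X = rng.uniform(size=(500, 4))               # depth, velocity, duration, building age (synthetic)
    y = 1000 * X[:, 0] + 300 * X[:, 1] * X[:, 2] + rng.normal(0, 50, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))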
45

Fuhg, Jan Niklas, Craig M. Hamel, Kyle Johnson, Reese Jones, and Nikolaos Bouklas. "Modular machine learning-based elastoplasticity: Generalization in the context of limited data." Computer Methods in Applied Mechanics and Engineering 407 (March 2023): 115930. http://dx.doi.org/10.1016/j.cma.2023.115930.

46

Jeon, Byung-Ki, and Eui-Jong Kim. "Solar irradiance prediction using reinforcement learning pre-trained with limited historical data." Energy Reports 10 (November 2023): 2513–24. http://dx.doi.org/10.1016/j.egyr.2023.09.042.

47

Mostafa, Reham R., Ozgur Kisi, Rana Muhammad Adnan, Tayeb Sadeghifar, and Alban Kuriqi. "Modeling Potential Evapotranspiration by Improved Machine Learning Methods Using Limited Climatic Data." Water 15, no. 3 (January 25, 2023): 486. http://dx.doi.org/10.3390/w15030486.

Abstract:
Modeling potential evapotranspiration (ET0) is an important issue for water resources planning and management projects involving droughts and flood hazards. Evapotranspiration, one of the main components of the hydrological cycle, is highly effective in drought monitoring. This study investigates the efficiency of two machine-learning methods, random vector functional link (RVFL) and relevance vector machine (RVM), improved with new metaheuristic algorithms, the quantum-based avian navigation optimizer algorithm (QANA) and the artificial hummingbird algorithm (AHA), in modeling ET0 using limited climatic data: minimum temperature, maximum temperature, and extraterrestrial radiation. The outcomes of the hybrid RVFL-AHA, RVFL-QANA, RVM-AHA, and RVM-QANA models were compared with those of single RVFL and RVM models. Various input combinations and three data split scenarios were employed. The results revealed that the AHA and QANA considerably improved the efficiency of RVFL and RVM methods in modeling ET0. Considering the periodicity component and extraterrestrial radiation as inputs improved the prediction accuracy of the applied methods.
48

Mohammad Talebzadeh, Abolfazl Sodagartojgi, Zahra Moslemi, Sara Sedighi, Behzad Kazemi, and Faezeh Akbari. "Deep learning-based retinal abnormality detection from OCT images with limited data." World Journal of Advanced Research and Reviews 21, no. 3 (March 30, 2024): 690–98. http://dx.doi.org/10.30574/wjarr.2024.21.3.0716.

Abstract:
In the realm of medical diagnosis, the challenge posed by retinal diseases is considerable, given their potential to complicate vision and overall ocular health. A promising avenue for achieving highly accurate classifiers in detecting retinal diseases involves the application of deep learning models. However, overfitting issues often undermine the performance of these models due to the scarcity of image samples in retinal disease datasets. To address this challenge, a novel deep triplet network is proposed as a metric learning approach for detecting retinal diseases using Optical Coherence Tomography (OCT) images. Incorporating a conditional loss function tailored to the constraints of limited data samples, this deep triplet network enhances the model’s accuracy. Drawing inspiration from pre-trained models such as VGG16, the foundational architecture of our model is established. Experiments use open-access datasets comprising retinal OCT images to validate our proposed approach. The performance of the suggested model is demonstrated to surpass that of state-of-the-art models in terms of accuracy. This substantiates the effectiveness of the deep triplet network in addressing overfitting issues associated with limited data samples in retinal disease datasets.
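
The triplet objective behind the deep triplet network mentioned above can be illustrated in a few lines (PyTorch also ships nn.TripletMarginLoss); the margin value here is an arbitrary example, not the paper's setting:

    import torch
    import torch.nn.functional as F

    def triplet_loss(anchor, positive, negative, margin=1.0):
        # Pull same-class embeddings together, push different-class embeddings apart by a margin.
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        return torch.clamp(d_pos - d_neg + margin, min=0).mean()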
49

She, Daoming, Zhichao Yang, Yudan Duan, Xiaoan Yan, Jin Chen, and Yaoming Li. "A meta transfer learning method for gearbox fault diagnosis with limited data." Measurement Science and Technology 35, no. 8 (May 9, 2024): 086114. http://dx.doi.org/10.1088/1361-6501/ad4665.

Abstract:
Intelligent diagnosis of mechanical faults is an important means to guarantee the safe maintenance of equipment. Cross-domain diagnosis may lack sufficient measurement data as support, and this bottleneck is particularly prominent in high-end manufacturing. This paper presents a few-shot fault diagnosis methodology based on meta transfer learning for gearboxes. To be specific, firstly, the subtasks for transfer diagnosis are constructed, and then joint distribution adaptation is conducted to align the two domain distributions; secondly, through adaptive manifold regularization, the data of the target working condition are further utilized to explore the potential geometric structure of the data distribution. Meta stochastic gradient descent is explored to dynamically adjust the model's parameters based on the obtained task information to obtain better generalization performance, ultimately to achieve transfer diagnosis of gearbox faults with few samples. The effectiveness of the approach is supported by experimental gearbox datasets.
50

Cagliero, Luca, Lorenzo Canale, and Laura Farinetti. "Data-Driven Analysis of Student Engagement in Time-Limited Computer Laboratories." Algorithms 16, no. 10 (October 2, 2023): 464. http://dx.doi.org/10.3390/a16100464.

Abstract:
Computer laboratories are learning environments where students learn programming languages by practicing under teaching assistants’ supervision. This paper presents the outcomes of a real case study carried out in our university in the context of a database course, where learning SQL is one of the main topics. The aim of the study is to analyze the level of engagement of the laboratory participants by tracing and correlating the accesses of the students to each laboratory exercise, the successful/failed attempts to solve the exercises, the students’ requests for help, and the interventions of teaching assistants. The acquired data are analyzed by means of a sequence pattern mining approach, which automatically discovers recurrent temporal patterns. The mined patterns are mapped to behavioral, cognitive engagement, and affective key indicators, thus allowing students to be profiled according to their level of engagement in all the identified dimensions. To efficiently extract the desired indicators, the mining algorithm enforces ad hoc constraints on the pattern categories of interest. The student profiles and the correlations among different engagement dimensions extracted from the experimental data have been shown to be helpful for the planning of future learning experiences.