Table of Contents

  1. Dissertations

A selection of scholarly literature on the topic "Learning with Limited Data"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Learning with Limited Data".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, when the relevant parameters are present in the metadata.

Dissertations on the topic "Learning with Limited Data"

1

Chen, Si. "Active Learning Under Limited Interaction with Data Labeler." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104894.

Full text of the source
Annotation:
Active learning (AL) aims at reducing labeling effort by identifying the most valuable unlabeled data points from a large pool. Traditional AL frameworks have two limitations: First, they perform data selection in a multi-round manner, which is time-consuming and impractical. Second, they usually assume that there is a small amount of labeled data available in the same domain as the data in the unlabeled pool. In this thesis, we initiate the study of one-round active learning to solve the first issue. We propose DULO, a general framework for the one-round setting based on the notion of data utility functions, which map a set of data points to some performance measure of the model trained on the set. We formulate the one-round active learning problem as data utility function maximization. We then propose D²ULO, built on DULO, as a solution that addresses both issues. Specifically, D²ULO leverages the idea of domain adaptation (DA) to train a data utility model on labeled source data. The trained utility model can then be used to select high-utility data in the target domain and, at the same time, provide an estimate of the utility of the selected data. Our experiments show that the proposed frameworks achieve better performance than state-of-the-art baselines in the same setting. In particular, D²ULO is applicable to scenarios where the source and target labels have mismatches, which is not supported by existing work.

Machine learning (ML) has achieved huge success in recent years. ML technologies such as recommendation systems, speech recognition, and image recognition play an important role in daily life. This success is built mainly on the use of large amounts of labeled data: compared with traditional programming, an ML algorithm does not rely on explicit instructions from humans; instead, it takes data along with labels as input and aims to learn, by itself, a function that correctly maps data to the label space. However, data labeling requires human effort and can be time-consuming and expensive, especially for datasets that involve domain-specific knowledge (e.g., disease prediction). Active learning (AL) is one solution for reducing the labeling effort. Specifically, the learning algorithm actively selects the data points that provide the most information for the model, so that a better model can be achieved with less labeled data. While traditional AL strategies do achieve good performance, they require a small amount of labeled data as initialization and perform data selection over multiple rounds, which poses a great challenge to their application, as no platform provides timely online interaction with the data labeler, and the interaction is often time-inefficient. To deal with these limitations, we first propose DULO, which studies a new AL setting in which data selection may be performed only once. To further broaden the applicability of our method, we propose D²ULO, which builds on DULO and domain adaptation techniques to avoid the use of initial labeled data. Our experiments show that both proposed frameworks achieve better performance than state-of-the-art baselines.
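To make the one-round, utility-driven selection described above concrete, here is a minimal sketch in Python. It is not the authors' code: `utility_fn`, the greedy loop, and the toy variance-based utility are illustrative assumptions standing in for a trained data utility model.

```python
import numpy as np

def greedy_one_round_selection(pool, utility_fn, budget):
    """Greedily pick `budget` points from the unlabeled pool so as to
    maximize a learned data utility function (one selection round only).

    pool       : (n, d) array of unlabeled feature vectors
    utility_fn : callable mapping a set of rows to an estimated model score
    budget     : number of points to send to the labeler
    """
    selected = []
    remaining = list(range(len(pool)))
    for _ in range(budget):
        # Score each candidate by the utility of the set after adding it.
        gains = [utility_fn(pool[selected + [i]]) for i in remaining]
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: utility grows with the spread of the selected set,
# a crude stand-in for a trained data utility model.
rng = np.random.default_rng(0)
pool = rng.normal(size=(200, 8))
toy_utility = lambda subset: float(np.var(subset))
batch = greedy_one_round_selection(pool, toy_utility, budget=10)
print(batch)
```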
APA, Harvard, Vancouver, ISO, and other citation styles
2

Dvornik, Mikita. "Learning with Limited Annotated Data for Visual Understanding." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM050.

Full text of the source
Annotation:
The ability of deep-learning methods to excel in computer vision depends heavily on the amount of annotated data available for training. For some tasks, annotation may be too costly and labor-intensive, thus becoming the main obstacle to better accuracy. Algorithms that learn from data automatically, without human supervision, perform substantially worse than their fully supervised counterparts. There is therefore strong motivation to work on effective methods for learning with limited annotations. This thesis proposes to exploit prior knowledge about the task and develops more effective solutions for scene understanding and few-shot image classification.

The main challenges of scene understanding include object detection, semantic segmentation, and instance segmentation. All these tasks aim at recognizing and localizing objects, at the region level or at the more precise pixel level, which makes the annotation process difficult. The first contribution of this manuscript is a Convolutional Neural Network (CNN) that performs both object detection and semantic segmentation. We design a specialized network architecture that is trained to solve both problems in one forward pass and operates in real time. Thanks to the multi-task training procedure, both tasks benefit from each other in terms of accuracy, with no extra labeled data.

The second contribution introduces a new technique for data augmentation, i.e., artificially increasing the amount of training data. It aims at creating new scenes by copy-pasting objects from one image to another within a given dataset. Placing an object in the right context was found to be crucial for improving scene understanding performance. We propose to model visual context explicitly using a CNN that discovers correlations between object categories and their typical neighborhoods, and then proposes realistic locations for augmentation. Overall, pasting objects in the "right" locations improves object detection and segmentation performance, with higher gains in limited-annotation scenarios.

For some problems, the data is extremely scarce, and an algorithm has to learn new concepts from a handful of examples. Few-shot classification consists of learning a predictive model that can effectively adapt to a new class given only a few annotated samples. While most current methods concentrate on the adaptation mechanism, few works have tackled the problem of scarce training data explicitly. In our third contribution, we show that by addressing the fundamental high-variance issue of few-shot learning classifiers, it is possible to significantly outperform more sophisticated existing techniques. Our approach consists of designing an ensemble of deep networks to leverage the variance of the classifiers, and introducing new strategies to encourage the networks to cooperate while encouraging prediction diversity. By matching different networks' outputs on similar input images, we improve model accuracy and robustness compared with classical ensemble training. Moreover, a single network obtained by distillation shows performance similar to that of the full ensemble and yields state-of-the-art results with no computational overhead at test time.
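The copy-paste augmentation idea can be illustrated with a short, hypothetical NumPy sketch: an object crop and its binary mask are blended into a target scene at a given location. The random placement below stands in for the context CNN that proposes realistic locations in the thesis.

```python
import numpy as np

def paste_object(target, crop, mask, top, left):
    """Blend an object crop into `target` using its binary mask.

    target : (H, W, 3) destination image
    crop   : (h, w, 3) object pixels cut from a source image
    mask   : (h, w) binary mask, 1 where the object is
    top, left : paste position (in the thesis this would come from a
                context model that proposes realistic locations)
    """
    h, w = mask.shape
    region = target[top:top + h, left:left + w]
    m = mask[..., None].astype(target.dtype)
    target[top:top + h, left:left + w] = m * crop + (1 - m) * region
    return target

# Toy usage: a random "object" pasted at a random valid position.
rng = np.random.default_rng(1)
scene = rng.integers(0, 255, size=(256, 256, 3), dtype=np.uint8)
obj = rng.integers(0, 255, size=(32, 32, 3), dtype=np.uint8)
mask = np.ones((32, 32), dtype=np.uint8)
top, left = rng.integers(0, 224, size=2)
augmented = paste_object(scene.copy(), obj, mask, int(top), int(left))
```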
APA, Harvard, Vancouver, ISO, and other citation styles
3

Moskvyak, Olga. "Learning from limited annotated data for re-identification problem." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/226866/1/Olga_Moskvyak_Thesis.pdf.

Full text of the source
Annotation:
The project develops machine learning methods for the re-identification task, which consists of matching images of the same category in a database. The thesis proposes approaches that reduce the influence of two critical challenges in image re-identification: pose variations that affect the appearance of objects, and the need to annotate a large dataset to train a neural network. Depending on the domain, these challenges occur to different extents. Our approach demonstrates superior performance on several benchmarks covering people, cars, and animal categories.
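As a rough illustration of the re-identification setup (not the thesis method), the sketch below ranks a gallery of database images against a query by cosine similarity of their embeddings; the embedding network that produces those vectors is assumed to exist.

```python
import numpy as np

def rank_gallery(query_emb, gallery_embs):
    """Return gallery indices sorted by cosine similarity to the query.

    query_emb    : (d,) embedding of the query image
    gallery_embs : (n, d) embeddings of the database images
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q
    return np.argsort(-sims)  # best match first

# Toy usage with random embeddings standing in for a trained network.
rng = np.random.default_rng(2)
gallery = rng.normal(size=(100, 128))
query = gallery[42] + 0.05 * rng.normal(size=128)  # noisy copy of item 42
print(rank_gallery(query, gallery)[:5])  # index 42 should rank first
```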
APA, Harvard, Vancouver, ISO, and other citation styles
4

Xian, Yongqin [Verfasser]. "Learning from limited labeled data - Zero-Shot and Few-Shot Learning / Yongqin Xian." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1219904457/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Eriksson, Håkan. "Clustering Generic Log Files Under Limited Data Assumptions." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189642.

Full text of the source
Annotation:
Complex computer systems are often prone to anomalous or erroneous behavior, which can lead to costly downtime while the systems are diagnosed and repaired. One source of information for diagnosing the errors and anomalies is log files, which are often generated in vast and diverse amounts. However, the size and semi-structured nature of log files make manual analysis generally infeasible. Some automation is desirable for sifting through the log files to find the source of the anomalies or errors. This project aimed to develop a generic algorithm that can cluster diverse log files in accordance with domain expertise. The results show that the developed algorithm agrees well with manual clustering, even under more relaxed data assumptions.
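A hypothetical, minimal version of this kind of log clustering is sketched below: lines are tokenized with numbers masked, and lines whose token sets overlap strongly join the same cluster. The algorithm developed in the thesis is more elaborate; this only shows the general shape.

```python
import re

def tokenize(line):
    """Split a log line into tokens, masking numbers so that lines
    differing only in IDs or timestamps look alike."""
    return tuple(re.sub(r"\d+", "<NUM>", t) for t in line.split())

def cluster_logs(lines, threshold=0.7):
    """Greedy single-pass clustering by Jaccard similarity of token sets."""
    clusters = []  # list of (representative token set, member lines)
    for line in lines:
        toks = set(tokenize(line))
        for rep, members in clusters:
            overlap = len(toks & rep) / max(len(toks | rep), 1)
            if overlap >= threshold:
                members.append(line)
                break
        else:
            clusters.append((toks, [line]))
    return clusters

logs = [
    "connection from 10.0.0.1 port 5432 accepted",
    "connection from 10.0.0.7 port 5433 accepted",
    "disk /dev/sda1 at 91% capacity",
]
for rep, members in cluster_logs(logs):
    print(len(members), members[0])
```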
APA, Harvard, Vancouver, ISO, and other citation styles
6

Boman, Jimmy. "A deep learning approach to defect detection with limited data availability." Thesis, Umeå universitet, Institutionen för fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-173207.

Full text of the source
Annotation:
In industrial processes, products are often visually inspected for defects in order to verify their quality. Many automated visual inspection algorithms exist, but in many cases humans still perform the inspections. Advances in machine learning have shown that deep learning methods lie at the forefront of reliability and accuracy in such inspection tasks. To detect defects, most deep learning methods need large amounts of training data to learn from. This makes demonstrating such methods to a new customer problematic, since such data often does not exist beforehand and has to be gathered specifically for the task. The aim of this thesis is to develop a method for performing such demonstrations. With access to only a small dataset, the method should be able to analyse an image and return a map of binary values signifying which pixels in the original image belong to a defect and which do not. A method was developed that divides an image into overlapping patches and analyses each patch individually for defects using a deep learning method. Three different deep learning methods for classifying the patches were evaluated: a convolutional neural network, a transfer learning model based on the VGG19 network, and an autoencoder. The three methods were first compared on a simple binary classification task, without the patching method. They were then tested together with the patching method on two sets of images. The transfer learning model was able to identify every defect across both tests, having been trained on only four training images, showing that defect detection with deep learning can succeed even when little training data is available.
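The patching scheme described above can be sketched as follows (hypothetical code; `classify_patch` stands in for any of the three evaluated models). Overlapping patches are classified individually and their votes are accumulated into a per-pixel binary defect map.

```python
import numpy as np

def defect_map(image, classify_patch, patch=32, stride=16):
    """Slide an overlapping window over `image`, classify each patch
    (1 = defect, 0 = normal), and vote per pixel; a pixel is marked
    defective if the majority of patches covering it say so.
    """
    h, w = image.shape[:2]
    votes = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            label = classify_patch(image[top:top + patch, left:left + patch])
            votes[top:top + patch, left:left + patch] += label
            counts[top:top + patch, left:left + patch] += 1
    return (votes / np.maximum(counts, 1)) > 0.5

# Toy usage: "defects" are unusually bright patches.
rng = np.random.default_rng(3)
img = rng.normal(0.3, 0.05, size=(128, 128))
img[40:72, 40:72] += 0.5  # synthetic bright defect
is_bright = lambda p: int(p.mean() > 0.45)
mask = defect_map(img, is_bright)
print(mask.sum(), "pixels flagged as defective")
```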
APA, Harvard, Vancouver, ISO, and other citation styles
7

Guo, Zhenyu. "Data famine in big data era : machine learning algorithms for visual object recognition with limited training data." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46412.

Full text of the source
Annotation:
Big data is an increasingly attractive concept in many fields, both in academia and in industry. The increasing amount of information creates the illusion that we will have enough data to solve all data-driven problems. Unfortunately this is not true, especially in areas where machine learning methods are heavily employed, since sufficient high-quality training data does not necessarily come with big data, and it is not easy, and sometimes impossible, to collect the sufficient training samples on which most computational algorithms depend. This thesis focuses mainly on situations with limited training data in visual object recognition, developing novel machine learning algorithms to overcome the difficulty of limited training data. We investigate three issues in object recognition involving limited training data: (1) one-shot object recognition, (2) cross-domain object recognition, and (3) object recognition for images with different picture styles. For Issue 1, we propose an unsupervised feature learning algorithm that constructs a deep structure of stacked Hierarchical Dirichlet Process (HDP) auto-encoders, in order to extract "semantic" information from unlabeled source images. For Issue 2, we propose a Domain Adaptive Input-Output Kernel Learning algorithm to reduce the domain shifts in both the input and output spaces. For Issue 3, we introduce a new problem involving images with different picture styles, formulate the relationship between pixel mapping functions and gradient-based image descriptors, and propose a multiple-kernel-based algorithm that learns an optimal combination of basis pixel mapping functions to improve recognition accuracy. For all the proposed algorithms, experimental results on publicly available datasets demonstrate performance improvements over the previous state of the art.
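As a loose illustration of the multiple-kernel idea behind the third contribution (hypothetical code, not the thesis algorithm): several basis kernels are combined with nonnegative weights normalized to the simplex, and the combined kernel is plugged into a standard kernel method, here kernel ridge regression.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combined_kernel(X, Y, gammas, weights):
    """Convex combination of basis RBF kernels; in multiple kernel
    learning the weights would be optimized, here they are given."""
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    return sum(wi * rbf_kernel(X, Y, g) for wi, g in zip(w, gammas))

# Toy usage: kernel ridge regression with the combined kernel.
rng = np.random.default_rng(4)
X = rng.normal(size=(50, 5))
y = np.sign(X[:, 0])
K = combined_kernel(X, X, gammas=[0.1, 1.0, 10.0], weights=[1, 1, 1])
alpha = np.linalg.solve(K + 1e-3 * np.eye(50), y)
preds = np.sign(K @ alpha)
print((preds == y).mean())  # training accuracy of the toy model
```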
APA, Harvard, Vancouver, ISO, and other citation styles
8

Ayllon, Clemente Irene [Verfasser]. "Towards natural speech acquisition: incremental word learning with limited data / Irene Ayllon Clemente." Bielefeld : Universitätsbibliothek Bielefeld, 2013. http://d-nb.info/1077063458/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Chang, Fengming. "Learning accuracy from limited data using mega-fuzzification method to improve small data set learning accuracy for early flexible manufacturing system scheduling." Saarbrücken VDM Verlag Dr. Müller, 2005. http://d-nb.info/989267156/04.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Tania, Zannatun Nayem. "Machine Learning with Reconfigurable Privacy on Resource-Limited Edge Computing Devices." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292105.

Full text of the source
Annotation:
Distributed computing allows effective data storage, processing, and retrieval, but it poses security and privacy issues. Sensors are the cornerstone of IoT-based pipelines, since they constantly capture data until it can be analyzed at the central cloud resources. However, these sensor nodes are often constrained by limited resources. Ideally, one would make all the collected data features private, but due to resource limitations this may not always be possible. Making all the features private may cause overutilization of resources, which would in turn affect the performance of the whole system. In this thesis, we design and implement a system that is capable of finding the optimal set of data features to make private, given the device's maximum resource constraints and the desired performance or accuracy of the system. Using generalization techniques for data anonymization, we create user-defined injective privacy encoder functions to make each feature of the dataset private. Regardless of resource availability, some data features are defined by the user as essential features to make private. All other data features that may pose a privacy threat are termed non-essential features. We propose Dynamic Iterative Greedy Search (DIGS), a greedy search algorithm that takes the resource consumption of each non-essential feature as input and returns the most optimal set of non-essential features that can be made private given the available resources. The most optimal set contains the features that consume the least resources. We evaluate our system on a Fitbit dataset containing 17 data features, 4 of which are essential private features for a given classification application. Our results show that we can provide 9 additional private features beyond the 4 essential features of the Fitbit dataset containing 1663 records. Furthermore, we can save 26.21% of memory compared to making all the features private. We also test our method on a larger dataset generated with a Generative Adversarial Network (GAN). However, the chosen edge device, a Raspberry Pi, is unable to cater to the scale of the large dataset due to insufficient resources. Our evaluations using 1/8th of the GAN dataset result in 3 extra private features, with up to 62.74% memory savings compared to making all data features private. Maintaining privacy not only requires additional resources but also has consequences for the performance of the designed applications. However, we discover that privacy encoding has a positive impact on the accuracy of the classification model for our chosen classification application.
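The core idea of DIGS, choosing the cheapest set of non-essential features to privatize within a resource budget, can be sketched as a simple greedy routine. This is a hypothetical simplification: the actual algorithm is iterative and driven by measured resource consumption, and the feature names and costs below are made up.

```python
def greedy_private_features(costs, budget):
    """Greedily choose non-essential features to make private, cheapest
    first, until the resource budget is exhausted.

    costs  : dict mapping feature name -> resource cost of privatizing it
    budget : resources left after the essential features are made private
    """
    chosen, used = [], 0.0
    for feature, cost in sorted(costs.items(), key=lambda kv: kv[1]):
        if used + cost <= budget:
            chosen.append(feature)
            used += cost
    return chosen, used

# Toy usage with made-up per-feature memory costs (MB).
costs = {"steps": 2.0, "heart_rate": 5.5, "sleep": 3.0, "gps": 9.0}
private, used = greedy_private_features(costs, budget=8.0)
print(private, used)  # ['steps', 'sleep'] 5.0
```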
APA, Harvard, Vancouver, ISO, and other citation styles
More sources
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography