Academic literature on the topic 'Semi- and unsupervised learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Semi- and unsupervised learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Semi- and unsupervised learning"

1

Huang, Gao, Shiji Song, Jatinder N. D. Gupta, and Cheng Wu. "Semi-Supervised and Unsupervised Extreme Learning Machines." IEEE Transactions on Cybernetics 44, no. 12 (December 2014): 2405–17. http://dx.doi.org/10.1109/tcyb.2014.2307349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

C A Padmanabha Reddy, Y., P. Viswanath, and B. Eswara Reddy. "Semi-supervised learning: a brief review." International Journal of Engineering & Technology 7, no. 1.8 (February 9, 2018): 81. http://dx.doi.org/10.14419/ijet.v7i1.8.9977.

Full text
Abstract:
Most application domains suffer from not having sufficient labeled data, whereas unlabeled data is available cheaply. Getting labeled instances is difficult because experienced domain experts are required to label the unlabeled data patterns. Semi-supervised learning addresses this problem and acts as a halfway point between supervised and unsupervised learning. This paper addresses a few techniques of semi-supervised learning (SSL), such as self-training, co-training, multi-view learning, and transductive SVM (TSVM) methods. Traditionally, SSL is classified into semi-supervised classification and semi-supervised clustering, which achieve better accuracy than traditional supervised and unsupervised learning techniques. The paper also addresses the issue of scalability and the applications of semi-supervised learning.
APA, Harvard, Vancouver, ISO, and other styles
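The self-training loop named in this abstract can be sketched in a few lines. The snippet below is only an illustration, not the authors' implementation; it assumes scikit-learn, uses logistic regression as a stand-in base learner, and picks an arbitrary confidence threshold.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_lab, y_lab, X_unlab, confidence=0.9, rounds=5):
        """Repeatedly train on the labeled pool and absorb confident pseudo-labels."""
        for _ in range(rounds):
            clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
            if len(X_unlab) == 0:
                break
            proba = clf.predict_proba(X_unlab)
            confident = proba.max(axis=1) >= confidence
            if not confident.any():
                break
            # Move confidently pseudo-labeled points into the labeled pool and repeat.
            X_lab = np.vstack([X_lab, X_unlab[confident]])
            y_lab = np.concatenate([y_lab, clf.classes_[proba[confident].argmax(axis=1)]])
            X_unlab = X_unlab[~confident]
        return clf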
3

Hui, Binyuan, Pengfei Zhu, and Qinghua Hu. "Collaborative Graph Convolutional Networks: Unsupervised Learning Meets Semi-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4215–22. http://dx.doi.org/10.1609/aaai.v34i04.5843.

Full text
Abstract:
Graph convolutional networks (GCN) have achieved promising performance in attributed graph clustering and semi-supervised node classification because they are capable of modeling complex graph structure and jointly learning both features and relations of nodes. Inspired by the success of unsupervised learning in the training of deep models, we wonder whether graph-based unsupervised learning can collaboratively boost the performance of semi-supervised learning. In this paper, we propose a multi-task graph learning model, called collaborative graph convolutional networks (CGCN). CGCN is composed of an attributed graph clustering network and a semi-supervised node classification network. As Gaussian mixture models can effectively discover the inherent complex data distributions, a new end-to-end attributed graph clustering network is designed by combining a variational graph auto-encoder with Gaussian mixture models (GMM-VGAE) rather than the classic k-means. If the pseudo-label of an unlabeled sample assigned by GMM-VGAE is consistent with the prediction of the semi-supervised GCN, it is selected to further boost the performance of semi-supervised learning with the help of the pseudo-labels. Extensive experiments on benchmark graph datasets validate the superiority of our proposed GMM-VGAE compared with the state-of-the-art attributed graph clustering networks. The performance of node classification is greatly improved by our proposed CGCN, which verifies that graph-based unsupervised learning can be well exploited to enhance the performance of semi-supervised learning.
APA, Harvard, Vancouver, ISO, and other styles
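The agreement rule at the heart of CGCN (keep an unlabeled node's pseudo-label only when the clustering branch and the classification branch agree) can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's code; it assumes the cluster assignments have already been mapped onto the class-label space, for example via Hungarian matching on the labeled nodes.

    import numpy as np

    def select_agreeing_pseudo_labels(cluster_labels, classifier_preds, unlabeled_idx):
        """Return the unlabeled indices whose two predictions coincide, and those labels."""
        unlabeled_idx = np.asarray(unlabeled_idx)
        agree = cluster_labels[unlabeled_idx] == classifier_preds[unlabeled_idx]
        selected = unlabeled_idx[agree]
        # Only these nodes would be fed back as pseudo-labeled examples for the classifier.
        return selected, classifier_preds[selected]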
4

Guo, Wenbin, and Juan Zhang. "Semi-supervised learning for raindrop removal on a single image." Journal of Intelligent & Fuzzy Systems 42, no. 4 (March 4, 2022): 4041–49. http://dx.doi.org/10.3233/jifs-212342.

Full text
Abstract:
This article proposes a network that is mainly used to deal with a single image polluted by raindrops in rainy weather, producing a clean image without raindrops. In the existing solutions, most methods rely on paired images, that is, the rain image and the real image without rain in the same scene. However, in many cases, paired images are difficult to obtain, which makes it impossible to apply a raindrop removal network in many scenarios. Therefore, this article proposes a semi-supervised rain-removing network that applies to unpaired images. The model contains two parts: a supervised network and an unsupervised network. After the model is trained, the unsupervised network does not require paired images and can produce a clean image without raindrops. In particular, our network can be trained on both paired and unpaired samples. The experimental results show that the best results are achieved not only by the supervised rain-removing network but also by the unsupervised rain-removing network.
APA, Harvard, Vancouver, ISO, and other styles
5

Niu, Gang, Bo Dai, Makoto Yamada, and Masashi Sugiyama. "Information-Theoretic Semi-Supervised Metric Learning via Entropy Regularization." Neural Computation 26, no. 8 (August 2014): 1717–62. http://dx.doi.org/10.1162/neco_a_00614.

Full text
Abstract:
We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.
APA, Harvard, Vancouver, ISO, and other styles
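For readers unfamiliar with entropy regularization, the generic objective the abstract refers to can be written as follows. This is the standard semi-supervised form (fit the labeled data while penalizing high prediction entropy on unlabeled data), not SERAPH's exact objective, which is defined over pairwise similarity probabilities parameterized by a Mahalanobis distance and adds a trace-norm term:

    \min_{\theta} \; -\sum_{i \in \mathcal{L}} \log p_{\theta}(y_i \mid x_i)
    \;+\; \lambda \sum_{j \in \mathcal{U}} H\big( p_{\theta}(\cdot \mid x_j) \big),
    \qquad H(p) = -\sum_{c} p(c) \log p(c),

where \mathcal{L} and \mathcal{U} index the labeled and unlabeled examples and \lambda trades off the two terms.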
6

Zhang, Ziji, Peng Zhang, Peineng Wang, Jawaad Sheriff, Danny Bluestein, and Yuefan Deng. "Rapid analysis of streaming platelet images by semi-unsupervised learning." Computerized Medical Imaging and Graphics 89 (April 2021): 101895. http://dx.doi.org/10.1016/j.compmedimag.2021.101895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Akdemir, Deniz, and Jean-Luc Jannink. "Ensemble learning with trees and rules: Supervised, semi-supervised, unsupervised." Intelligent Data Analysis 18, no. 5 (July 16, 2014): 857–72. http://dx.doi.org/10.3233/ida-140672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shutova, Ekaterina, Lin Sun, Elkin Darío Gutiérrez, Patricia Lichtenstein, and Srini Narayanan. "Multilingual Metaphor Processing: Experiments with Semi-Supervised and Unsupervised Learning." Computational Linguistics 43, no. 1 (April 2017): 71–123. http://dx.doi.org/10.1162/coli_a_00275.

Full text
Abstract:
Highly frequent in language and communication, metaphor represents a significant challenge for Natural Language Processing (NLP) applications. Computational work on metaphor has traditionally evolved around the use of hand-coded knowledge, making the systems hard to scale. Recent years have witnessed a rise in statistical approaches to metaphor processing. However, these approaches often require extensive human annotation effort and are predominantly evaluated within a limited domain. In contrast, we experiment with weakly supervised and unsupervised techniques—with little or no annotation—to generalize higher-level mechanisms of metaphor from distributional properties of concepts. We investigate different levels and types of supervision (learning from linguistic examples vs. learning from a given set of metaphorical mappings vs. learning without annotation) in flat and hierarchical, unconstrained and constrained clustering settings. Our aim is to identify the optimal type of supervision for a learning algorithm that discovers patterns of metaphorical association from text. In order to investigate the scalability and adaptability of our models, we applied them to data in three languages from different language groups—English, Spanish, and Russian—achieving state-of-the-art results with little supervision. Finally, we demonstrate that statistical methods can facilitate and scale up cross-linguistic research on metaphor.
APA, Harvard, Vancouver, ISO, and other styles
9

Weinlichová, Jana, and Jiří Fejfar. "Usage of self-organizing neural networks in evaluation of consumer behaviour." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 58, no. 6 (2010): 625–32. http://dx.doi.org/10.11118/actaun201058060625.

Full text
Abstract:
This article deals with the evaluation of consumer data by artificial intelligence methods. In the methodological part, learning algorithms for Kohonen maps are described on the principles of supervised learning, unsupervised learning, and semi-supervised learning. The principles of supervised and unsupervised learning are compared. Based on the binding conditions of these principles, an advantage of semi-supervised learning is pointed out. Three algorithms are described for semi-supervised learning: label propagation, self-training, and co-training. In particular, the use of co-training in Kohonen map learning seems to be a promising direction for further research. In the concrete application of a Kohonen neural network to consumer expenses, the unsupervised learning method, self-organization, has been chosen. The features of the data are thus evaluated by the clustering method called Kohonen maps. The input data represent consumer expenses of households in countries of the European Union and are characterized by a 12-dimensional vector according to commodity classification. The data are evaluated over several years, so we can see their distribution, similarity or dissimilarity, and also their evolution. In the article we discuss other uses of this method for this type of data and also compare our results with results reached by hierarchical cluster analysis.
APA, Harvard, Vancouver, ISO, and other styles
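Of the three semi-supervised algorithms named in this abstract, label propagation is the easiest to try out. The snippet below is a minimal, illustrative example assuming scikit-learn; unlabeled points are marked with -1 and inherit labels from their neighbours in a k-NN graph.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.semi_supervised import LabelPropagation

    X, y_true = make_moons(n_samples=200, noise=0.1, random_state=0)
    y = np.full_like(y_true, -1)              # -1 marks an unlabeled point
    y[:5], y[-5:] = y_true[:5], y_true[-5:]   # reveal only ten labels

    model = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y)
    mask = y == -1
    print("accuracy on the originally unlabeled points:",
          (model.transduction_[mask] == y_true[mask]).mean())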
10

Yamkovyi, Klym. "DEVELOPMENT AND COMPARATIVE ANALYSIS OF SEMI-SUPERVISED LEARNING ALGORITHMS ON A SMALL AMOUNT OF LABELED DATA." Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 1 (5) (July 12, 2021): 98–103. http://dx.doi.org/10.20998/2079-0023.2021.01.16.

Full text
Abstract:
The paper is dedicated to the development and comparative experimental analysis of semi-supervised learning approaches based on a mix of unsupervised and supervised approaches for the classification of datasets with a small amount of labeled data, namely, identifying to which of a set of categories a new observation belongs using a training set of data containing observations whose category membership is known. Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Unlabeled data, when used in combination with a small quantity of labeled data, can produce a significant improvement in learning accuracy. The goal is the development and analysis of semi-supervised methods, along with comparing their accuracy and robustness on different synthetic datasets. The proposed approach is based on the unsupervised K-medoids method, also known as the Partitioning Around Medoids algorithm; however, unlike K-medoids, the proposed algorithm first calculates medoids using only labeled data and then processes the unlabeled samples, assigning to each the label of its nearest medoid. Another proposed approach is a mix of the supervised K-nearest neighbors method and unsupervised K-means. Thus, the proposed learning algorithm uses information about both the nearest points and class centers of mass. The methods have been implemented using the Python programming language and experimentally investigated for solving classification problems using datasets with different distributions and spatial characteristics. Datasets were generated using the scikit-learn library. The developed approaches were compared by their average accuracy on all these datasets. It was shown that even small amounts of labeled data allow us to use semi-supervised learning, and the proposed modifications improve accuracy and algorithm performance, as demonstrated during the experiments. As the amount of available label information increases, the accuracy of the algorithms grows. Thus, the developed algorithms use a distance metric that considers the available label information. Keywords: unsupervised learning, supervised learning, semi-supervised learning, clustering, distance, distance function, nearest neighbor, medoid, center of mass.
APA, Harvard, Vancouver, ISO, and other styles
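The per-class medoid idea described in this abstract can be sketched directly. The snippet below is a hypothetical illustration (one medoid per class, Euclidean distances, NumPy/SciPy only), not the author's implementation.

    import numpy as np
    from scipy.spatial.distance import cdist

    def semi_supervised_medoids(X_lab, y_lab, X_unlab):
        """Compute one medoid per class from labeled data, then label unlabeled points."""
        medoids, classes = [], []
        for c in np.unique(y_lab):
            Xc = X_lab[y_lab == c]
            # The medoid is the class member minimizing the total distance to its class.
            medoids.append(Xc[cdist(Xc, Xc).sum(axis=1).argmin()])
            classes.append(c)
        medoids, classes = np.array(medoids), np.array(classes)
        # Each unlabeled point receives the label of its nearest medoid.
        return classes[cdist(X_unlab, medoids).argmin(axis=1)]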

Dissertations / Theses on the topic "Semi- and unsupervised learning"

1

Zhang, Pin. "Nonlinear Semi-supervised and Unsupervised Metric Learning with Applications in Neuroimaging." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1525266545968548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kilinc, Ismail Ozsel. "Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7415.

Full text
Abstract:
Machine learning has been immensely successful in supervised learning with outstanding examples in major industrial applications such as voice and image recognition. Following these developments, the most recent research has now begun to focus primarily on algorithms which can exploit very large sets of unlabeled examples to reduce the amount of manually labeled data required for existing models to perform well. In this dissertation, we propose graph-based latent embedding/annotation/representation learning techniques in neural networks tailored for semi-supervised and unsupervised learning problems. Specifically, we propose a novel regularization technique called Graph-based Activity Regularization (GAR) and a novel output layer modification called Auto-clustering Output Layer (ACOL) which can be used separately or collaboratively to develop scalable and efficient learning frameworks for semi-supervised and unsupervised settings. First, singularly using the GAR technique, we develop a framework providing an effective and scalable graph-based solution for semi-supervised settings in which there exists a large number of observations but a small subset with ground-truth labels. The proposed approach is natural for the classification framework on neural networks as it requires no additional task calculating the reconstruction error (as in autoencoder based methods) or implementing zero-sum game mechanism (as in adversarial training based methods). We demonstrate that GAR effectively and accurately propagates the available labels to unlabeled examples. Our results show comparable performance with state-of-the-art generative approaches for this setting using an easier-to-train framework. Second, we explore a different type of semi-supervised setting where a coarse level of labeling is available for all the observations but the model has to learn a fine, deeper level of latent annotations for each one. Problems in this setting are likely to be encountered in many domains such as text categorization, protein function prediction, image classification as well as in exploratory scientific studies such as medical and genomics research. We consider this setting as simultaneously performed supervised classification (per the available coarse labels) and unsupervised clustering (within each one of the coarse labels) and propose a novel framework combining GAR with ACOL, which enables the network to perform concurrent classification and clustering. We demonstrate how the coarse label supervision impacts performance and the classification task actually helps propagate useful clustering information between sub-classes. Comparative tests on the most popular image datasets rigorously demonstrate the effectiveness and competitiveness of the proposed approach. The third and final setup builds on the prior framework to unlock fully unsupervised learning where we propose to substitute real, yet unavailable, parent-class information with pseudo class labels. In this novel unsupervised clustering approach the network can exploit hidden information indirectly introduced through a pseudo classification objective. We train an ACOL network through this pseudo supervision together with unsupervised objective based on GAR and ultimately obtain a k-means friendly latent representation. Furthermore, we demonstrate how the chosen transformation type impacts performance and helps propagate the latent information that is useful in revealing unknown clusters. Our results show state-of-the-art performance for unsupervised clustering tasks on MNIST, SVHN and USPS datasets with the highest accuracies reported to date in the literature.
APA, Harvard, Vancouver, ISO, and other styles
3

Trivedi, Shubhendu. "A Graph Theoretic Clustering Algorithm based on the Regularity Lemma and Strategies to Exploit Clustering for Prediction." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-theses/573.

Full text
Abstract:
The fact that clustering is perhaps the most used technique for exploratory data analysis is only a semaphore that underlines its fundamental importance. The general problem statement that broadly describes clustering as the identification and classification of patterns into coherent groups also implicitly indicates its utility in other tasks such as supervised learning. In the past decade and a half there have been two developments that have altered the landscape of research in clustering: One is improved results by the increased use of graph theoretic techniques such as spectral clustering and the other is the study of clustering with respect to its relevance in semi-supervised learning, i.e., using unlabeled data for improving prediction accuracies. In this work an attempt is made to make contributions to both these aspects. Thus our contributions are two-fold: First, we identify some general issues with the spectral clustering framework and while working towards a solution, we introduce a new algorithm which we call "Regularity Clustering" which makes an attempt to harness the power of the Szemeredi Regularity Lemma, a remarkable result from extremal graph theory, for the task of clustering. Secondly, we investigate some practical and useful strategies for using the clustering of unlabeled data to boost prediction accuracy. For all of these contributions we evaluate our methods against existing ones and also apply these ideas in a number of settings.
APA, Harvard, Vancouver, ISO, and other styles
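The spectral clustering framework this abstract builds on can be exercised in a few lines with off-the-shelf tools. The snippet below shows only the standard baseline for orientation (assuming scikit-learn); the thesis's Regularity Clustering algorithm itself is not reproduced here.

    from sklearn.datasets import make_circles
    from sklearn.cluster import SpectralClustering

    # Two concentric rings: a classic case where graph-based clustering beats k-means.
    X, _ = make_circles(n_samples=300, factor=0.5, noise=0.05, random_state=0)
    labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                n_neighbors=10, random_state=0).fit_predict(X)
    print(labels[:20])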
4

OLIVEIRA, Paulo César de. "Abordagem semi-supervisionada para detecção de módulos de software defeituosos." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/19990.

Full text
Abstract:
With increasing market competition, high-quality applications are required for service automation. To guarantee software quality, testing it to find failures early is essential in the development life cycle. The purpose of software testing is to find failures that can be fixed and, consequently, to increase the quality of the software under development. As the software grows, a greater number of tests is necessary to prevent or find defects and to increase quality. However, the more tests are designed and executed, the more human and infrastructure resources are needed. In addition, the time to perform testing activities is usually not sufficient, allowing defects to escape. Companies increasingly look for cheaper and more effective ways to detect software defects. In recent years, many researchers have sought mechanisms to automatically predict defects in software. Machine learning techniques have been a research target as a way of finding defects in software modules. Many supervised approaches have been used for this purpose; however, labeling software modules as defective or non-defective for training a classifier is a very costly activity and can make the use of machine learning impracticable. In this context, this work analyzes and compares unsupervised and semi-supervised approaches for detecting defective software modules. To this end, unsupervised (anomaly detection) methods and semi-supervised methods based on the AutoMLP and Naive Bayes classifiers were used. To evaluate and compare these methods, NASA datasets available in the PROMISE Software Engineering Repository were used.
APA, Harvard, Vancouver, ISO, and other styles
5

Packer, Thomas L. "Scalable Detection and Extraction of Data in Lists in OCRed Text for Ontology Population Using Semi-Supervised and Unsupervised Active Wrapper Induction." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4258.

Full text
Abstract:
Lists of records in machine-printed documents contain much useful information. As one example, the thousands of family history books scanned, OCRed, and placed on-line by FamilySearch.org probably contain hundreds of millions of fact assertions about people, places, family relationships, and life events. Data like this cannot be fully utilized until a person or process locates the data in the document text, extracts it, and structures it with respect to an ontology or database schema. Yet, in the family history industry and other industries, data in lists goes largely unused because no known approach adequately addresses all of the costs, challenges, and requirements of a complete end-to-end solution to this task. The diverse information is costly to extract because many kinds of lists appear even within a single document, differing from each other in both structure and content. The lists' records and component data fields are usually not set apart explicitly from the rest of the text, especially in a corpus of OCRed historical documents. OCR errors and the lack of document structure (e.g., HTML tags) make list content hard to recognize by a software tool developed without a substantial amount of highly specialized, hand-coded knowledge or machine learning supervision. Making an approach that is not only accurate but also sufficiently scalable in terms of time and space complexity to process a large corpus efficiently is especially challenging. In this dissertation, we introduce a novel family of scalable approaches to list discovery and ontology population. Its contributions include the following. We introduce the first general-purpose methods of which we are aware for both list detection and wrapper induction for lists in OCRed or other plain text. We formally outline a mapping between in-line labeled text and populated ontologies, effectively reducing the ontology population problem to a sequence labeling problem, opening the door to applying sequence labelers and other common text tools to the goal of populating a richly structured ontology from text. We provide a novel admissible heuristic for inducing regular expression wrappers using an A* search. We introduce two ways of modeling list-structured text with a hidden Markov model. We present two query strategies for active learning in a list-wrapper induction setting. Our primary contributions are two complete and scalable wrapper-induction-based solutions to the end-to-end challenge of finding lists, extracting data, and populating an ontology. The first has linear time and space complexity and extracts highly accurate information at a low cost in terms of user involvement. The second has time and space complexity that are linear in the size of the input text and quadratic in the length of an output record and achieves higher F1-measures for extracted information as a function of supervision cost. We measure the performance of each of these approaches and show that they perform better than strong baselines, including variations of our own approaches and a conditional random field-based approach.
APA, Harvard, Vancouver, ISO, and other styles
6

Choi, Jin-Woo. "Action Recognition with Knowledge Transfer." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/101780.

Full text
Abstract:
Recent progress on deep neural networks has shown remarkable action recognition performance from videos. The remarkable performance is often achieved by transfer learning: training a model on a large-scale labeled dataset (source) and then fine-tuning the model on the small-scale labeled datasets (targets). However, existing action recognition models do not always generalize well on new tasks or datasets for the following two reasons. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor generalization performance. ii) Directly testing the model trained on the source data on the target data leads to poor performance as the source and target distributions are different. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. For the first problem, I propose to learn scene-invariant action representations to mitigate the scene bias in action recognition models. Specifically, I augment the standard cross-entropy loss for action classification with 1) an adversarial loss for the scene types and 2) a human mask confusion loss for videos where the human actors are invisible. These two losses encourage learning representations unsuitable for predicting 1) the correct scene types and 2) the correct action types when there is no evidence. I validate the efficacy of the proposed method by transfer learning experiments. I transfer the pre-trained model to three different tasks, including action classification, temporal action localization, and spatio-temporal action detection. The results show consistent improvement over the baselines for every task and dataset. I formulate human action recognition as an unsupervised domain adaptation (UDA) problem to handle the second problem. In the UDA setting, we have many labeled videos as source data and unlabeled videos as target data. We can use already existing labeled video datasets as source data in this setting. The task is to align the source and target feature distributions so that the learned model can generalize well on the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on action, not the background scene, to learn domain-invariant action representations. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on a lot of labeled target videos. I relax the unsupervised target data setting to a sparsely labeled target data setting. Then I explore semi-supervised video action recognition, where we have a lot of labeled videos as source data and sparsely labeled videos as target data. The semi-supervised setting is practical as sometimes we can afford a little bit of cost for labeling target data. I propose multiple video data augmentation methods to inject photometric, geometric, temporal, and scene invariances to the action recognition model in this setting. The resulting method shows favorable performance on the public benchmarks.
Doctor of Philosophy
Recent progress on deep learning has shown remarkable action recognition performance. The remarkable performance is often achieved by transferring the knowledge learned from existing large-scale data to the small-scale data specific to applications. However, existing action recognition models do not always work well on new tasks and datasets for the following two problems. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor performance on the new datasets and tasks. ii) Directly testing the model trained on the source data on the target data leads to poor performance as the source and target distributions are different. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. To tackle the first problem, I propose to learn scene-invariant action representations to mitigate background scene-biased human action recognition models. Specifically, the proposed method learns representations that cannot predict the scene types and the correct actions when there is no evidence. I validate the proposed method's effectiveness by transferring the pre-trained model to multiple action understanding tasks. The results show consistent improvement over the baselines for every task and dataset. To handle the second problem, I formulate human action recognition as an unsupervised learning problem on the target data. In this setting, we have many labeled videos as source data and unlabeled videos as target data. We can use already existing labeled video datasets as source data in this setting. The task is to align the source and target feature distributions so that the learned model can generalize well on the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on action, not the background scene. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on a lot of labeled target videos. I relax the unsupervised target data setting to a sparsely labeled target data setting. Here, we have many labeled videos as source data and sparsely labeled videos as target data. The setting is practical as sometimes we can afford a little bit of cost for labeling target data. I propose multiple video data augmentation methods to inject color, spatial, temporal, and scene invariances to the action recognition model in this setting. The resulting method shows favorable performance on the public benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
7

Verri, Filipe Alves Neto. "Collective dynamics in complex networks for machine learning." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-18102018-113054/.

Full text
Abstract:
Machine learning enables machines to learn automatically from data. In the literature, graph-based methods have received increasing attention due to their ability to learn from both local and global information. In these methods, each data instance is represented by a vertex and is linked to other vertices according to a predefined affinity rule. However, they usually have an unfeasible time cost for large problems. To overcome this problem, techniques can employ a heuristic to find suboptimal solutions in a feasible time. Early heuristic optimization methods exploit nature-inspired collective processes, such as ants looking for food sources and swarms of bees. Nowadays, advances in the field of complex systems provide powerful tools to assess and to understand dynamical systems. Complex networks, which are graphs with nontrivial topology, are among these theoretical tools capable of describing the interplay of topology, structure, and dynamics of complex systems. Therefore, machine learning methods based on complex networks and collective dynamics have been proposed. They encompass three steps. First, a complex network is constructed from the input data. Then, the simulation of a distributed collective system in the network generates rich information. Finally, the collected information is used to solve the learning problem. The coordination of the individuals in the system permits dynamics that are far more complex than the behavior of single individuals. In this research, I have explored collective dynamics in machine learning tasks, both in unsupervised and semi-supervised scenarios. Specifically, I have proposed a new collective system of competing particles that shifts the traditional vertex-centric dynamics to a more informative edge-centric one. Moreover, it is the first particle competition system applied to a machine learning task that has deterministic behavior. Results show several advantages of the edge-centric model, including the ability to acquire more information about overlapping areas, a better exploration behavior, and a faster convergence time. Also, I have proposed a new network formation technique that is not based on similarity and has low computational cost. Since addition and removal of samples in the network is cheap, it can be used in real-time applications. Finally, I have conducted analytical investigations of a flocking-like system that was needed to guarantee the expected behavior in community detection tasks. In conclusion, the results of the research contribute to many areas of machine learning and complex systems.
APA, Harvard, Vancouver, ISO, and other styles
8

Bernardini, Alessandra. "Extraction of Grasping Motions from sEMG Signals for the Control of Robotic Hands through Autoencoding." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
Robotic hands are used in many applications, including prosthetic devices controlled by the nervous system, as well as end-effectors in industrial automation or medical surgery, etc. One of the most investigated techniques to control them involves the use of surface electromyographic signals. The surface electromyographic signal (sEMG) is the electrical signal generated by muscle activity and recorded by non-invasive surface electrodes placed on the skin. The basic control pattern consists of extracting neural information online from sEMG signals during muscle contraction and then using these signals to control the robotic device. My thesis will focus on the control-information extraction phase. Specifically, we want to extract grasping activation intentions from sEMG signals to control robotic hands. Unsupervised regression methods for extracting control information are particularly interesting because they do not require any labels, which simplifies and speeds up the training process. Among these, we want to investigate the ability of autoencoders and NMF to extract control information describing muscle activity during different grasping motions. The project will be developed along two parallel directions: a simulation case and an experimental case. In the simulation case, the input for NMF or the autoencoder will be generated through a model of sEMG signals that I implemented for my bachelor thesis and suitably modified in order to consider single-finger movements. In the second case, we will use real sEMG signals recorded during experimental sessions. Additionally, we will analyse the role of intrinsic muscles during finger movements. We will compare simulation results with real results, in order to understand how realistic the model is. As a final step, we will simulate the control of a robotic hand using the neural commands obtained with NMF and autoencoding. To achieve this, we will leverage the theory of postural synergies.
APA, Harvard, Vancouver, ISO, and other styles
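As a rough illustration of the NMF branch described in this abstract, the snippet below decomposes a nonnegative (time x channels) matrix of rectified sEMG envelopes into a few synergy-like components. It assumes scikit-learn and uses random data as a stand-in for real recordings; it is not the thesis's actual pipeline.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    emg_envelopes = np.abs(rng.normal(size=(1000, 8)))  # stand-in for rectified sEMG envelopes

    nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
    activations = nmf.fit_transform(emg_envelopes)      # time courses of each component
    synergies = nmf.components_                         # channel weights of each component
    print(activations.shape, synergies.shape)           # (1000, 3) (3, 8)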
9

Kahindo, Senge Muvingi Christian. "Analyse automatique de l’écriture manuscrite sur tablette pour la détection et le suivi thérapeutique de personnes présentant des pathologies." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLL016/document.

Full text
Abstract:
We present, in this thesis, a novel paradigm for assessing Alzheimer’s disease by analyzing impairment of handwriting (HW) on tablets, a challenging problem that is still in its infancy. The state of the art is dominated by methods that assume a unique behavioral trend for each cognitive profile, and that extract global kinematic parameters, assessed by standard statistical tests or classification models, for discriminating the neuropathological disorders (Alzheimer’s (AD), Mild Cognitive Impairment (MCI)) from Healthy Controls (HC). Our work tackles these two major limitations as follows. First, instead of considering a unique behavioral pattern for each cognitive profile, we relax this heavy constraint by allowing the emergence of multimodal behavioral patterns. We achieve this by performing semi-supervised learning to uncover homogeneous clusters of subjects, and then we analyze how much information these clusters carry on the cognitive profiles. Second, instead of relying on global kinematic parameters, mostly consisting of their average, we refine the encoding either by a semi-global parameterization, or by modeling the full dynamics of each parameter, thereby harnessing the rich temporal information inherently characterizing online HW. Thanks to our modeling, we obtain new findings that are the first of their kind in this research field. A striking finding is revealed: two major clusters are unveiled, one dominated by HC and MCI subjects, and one by MCI and ES-AD, thus revealing that MCI patients have fine motor skills leaning towards either HC’s or ES-AD’s. This thesis also introduces a new finding from HW trajectories that uncovers a rich set of features simultaneously, like the full velocity profile, size and slant, fluidity, and shakiness, and reveals, in a naturally explainable way, how these HW features conjointly characterize, with fine and subtle details, the cognitive profiles.
APA, Harvard, Vancouver, ISO, and other styles
10

Cupertino, Thiago Henrique. "Machine learning via dynamical processes on complex networks." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-25032014-154520/.

Full text
Abstract:
Extracting useful knowledge from data sets is a key concept in modern information systems. Consequently, the need for efficient techniques to extract the desired knowledge has been growing over time. Machine learning is a research field dedicated to the development of techniques capable of enabling a machine to "learn" from data. Many techniques have been proposed so far, but there are still issues to be unveiled, especially in interdisciplinary research. In this thesis, we explore the advantages of network data representation to develop machine learning techniques based on dynamical processes on networks. The network representation unifies the structure, dynamics and functions of the system it represents, and thus is capable of capturing the spatial, topological and functional relations of the data sets under analysis. We develop network-based techniques for the three machine learning paradigms: supervised, semi-supervised and unsupervised. The random walk dynamical process is used to characterize the access of unlabeled data to data classes, configuring a new heuristic we call ease of access in the supervised paradigm. We also propose a classification technique which combines the high-level view of the data, via network topological characterization, and the low-level relations, via similarity measures, in a general framework. Still in the supervised setting, the modularity and Katz centrality network measures are applied to classify multiple observation sets, and an evolving network construction method is applied to the dimensionality reduction problem. The semi-supervised paradigm is covered by extending the ease of access heuristic to the cases in which just a few labeled data samples and many unlabeled samples are available. A semi-supervised technique based on interacting forces is also proposed, for which we provide parameter heuristics and stability analysis via a Lyapunov function. Finally, an unsupervised network-based technique uses the concepts of pinning control and consensus time from dynamical processes to derive a similarity measure used to cluster data. The data is represented by a connected and sparse network in which nodes are dynamical elements. Simulations on benchmark data sets and comparisons to well-known machine learning techniques are provided for all proposed techniques. Advantages of network data representation and dynamical processes for machine learning are highlighted in all cases.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Semi- and unsupervised learning"

1

Albalate, Amparo, and Wolfgang Minker. Semi-Supervised and Unsupervised Machine Learning. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118557693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kyan, Matthew, Paisarn Muneesawang, Kambiz Jarrah, and Ling Guan. Unsupervised Learning. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118875568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Celebi, M. Emre, and Kemal Aydin, eds. Unsupervised Learning Algorithms. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Xiangtao, and Ka-Chun Wong, eds. Natural Computing for Unsupervised Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-98566-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baruque, Bruno. Fusion methods for unsupervised learning ensembles. Berlin: Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bartlett, Marian Stewart. Face image analysis by unsupervised learning. Boston: Kluwer Academic Publishers, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Leordeanu, Marius. Unsupervised Learning in Space and Time. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42128-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Baruque, Bruno, and Emilio Corchado. Fusion Methods for Unsupervised Learning Ensembles. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-16205-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bartlett, Marian Stewart. Face Image Analysis by Unsupervised Learning. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1637-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bartlett, Marian Stewart. Face Image Analysis by Unsupervised Learning. Boston, MA: Springer US, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Semi- and unsupervised learning"

1

Taguchi, Y.-h. "PCA Based Unsupervised FE." In Unsupervised and Semi-Supervised Learning, 81–102. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22456-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Taguchi, Y.-h. "TD Based Unsupervised FE." In Unsupervised and Semi-Supervised Learning, 103–16. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22456-1_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

M. Bagirov, Adil, Napsu Karmitsa, and Sona Taheri. "Introduction to Clustering." In Unsupervised and Semi-Supervised Learning, 3–13. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

M. Bagirov, Adil, Napsu Karmitsa, and Sona Taheri. "Performance and Evaluation Measures." In Unsupervised and Semi-Supervised Learning, 245–68. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

M. Bagirov, Adil, Napsu Karmitsa, and Sona Taheri. "Implementations and Data Sets." In Unsupervised and Semi-Supervised Learning, 269–79. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

M. Bagirov, Adil, Napsu Karmitsa, and Sona Taheri. "Numerical Experiments." In Unsupervised and Semi-Supervised Learning, 281–314. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

M. Bagirov, Adil, Napsu Karmitsa, and Sona Taheri. "Concluding Remarks." In Unsupervised and Semi-Supervised Learning, 315–17. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

M. Bagirov, Adil, Napsu Karmitsa, and Sona Taheri. "Theory of Nonsmooth Optimization." In Unsupervised and Semi-Supervised Learning, 15–50. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

M. Bagirov, Adil, Napsu Karmitsa, and Sona Taheri. "Nonsmooth Optimization Methods." In Unsupervised and Semi-Supervised Learning, 51–94. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

M. Bagirov, Adil, Napsu Karmitsa, and Sona Taheri. "Optimization Models in Cluster Analysis." In Unsupervised and Semi-Supervised Learning, 97–133. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Semi- and unsupervised learning"

1

Hong, Xianbin, Gautam Pal, Sheng-Uei Guan, Prudence Wong, Dawei Liu, Ka Lok Man, and Xin Huang. "Semi-Unsupervised Lifelong Learning for Sentiment Classification." In HPCCT 2019: 2019 The 3rd High Performance Computing and Cluster Technologies Conference. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3341069.3342992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ando, Shin. "Session details: Unsupervised and semi-supervised learning." In CIKM '11: International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2011. http://dx.doi.org/10.1145/3244893.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Saini, Nikhil, Drumil Trivedi, Shreya Khare, Tejas Dhamecha, Preethi Jyothi, Samarth Bharadwaj, and Pushpak Bhattacharyya. "Disfluency Correction using Unsupervised and Semi-supervised Learning." In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.eacl-main.299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Breve, Fabricio Aparecido, and Daniel Carlos Guimaraes Pedronette. "Combined unsupervised and semi-supervised learning for data classification." In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2016. http://dx.doi.org/10.1109/mlsp.2016.7738877.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Saul, L. K. "Graph-based methods for unsupervised and semi-supervised learning." In IEEE Workshop on Automatic Speech Recognition and Understanding, 2005. IEEE, 2005. http://dx.doi.org/10.1109/asru.2005.1566469.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nie, Feiping, Hua Wang, Heng Huang, and Chris Ding. "Unsupervised and semi-supervised learning via ℓ1-norm graph." In 2011 IEEE International Conference on Computer Vision (ICCV). IEEE, 2011. http://dx.doi.org/10.1109/iccv.2011.6126506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Escalante, Diego Alonso Chavez, Gabriel Taubin, Luis Gustavo Nonato, and Siome Klein Goldenstein. "Using Unsupervised Learning for Graph Construction in Semi-supervised Learning with Graphs." In 2013 XXVI SIBGRAPI - Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2013. http://dx.doi.org/10.1109/sibgrapi.2013.13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hallett, Nicole, Kai Yi, Josef Dick, Christopher Hodge, Gerard Sutton, Yu Guang Wang, and Jingjing You. "Deep Learning Based Unsupervised and Semi-supervised Classification for Keratoconus." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bai, Shuanhu, Chien-Lin Huang, Bin Ma, and Haizhou Li. "Semi-supervised learning of language model using unsupervised topic model." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5494940.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Levow, Gina-Anne. "Unsupervised and semi-supervised learning of tone and pitch accent." In the main conference. Morristown, NJ, USA: Association for Computational Linguistics, 2006. http://dx.doi.org/10.3115/1220835.1220864.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Semi- and unsupervised learning"

1

Tran, Anh, Theron Rodgers, and Timothy Wildey. Reification of latent microstructures: On supervised unsupervised and semi-supervised deep learning applications for microstructures in materials informatics. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1673174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vesselinov, Velimir Valentinov. TensorDecompostions : Unsupervised machine learning methods. Office of Scientific and Technical Information (OSTI), February 2019. http://dx.doi.org/10.2172/1493534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sprechmann, Pablo, and Guillermo Sapiro. Dictionary Learning and Sparse Coding for Unsupervised Clustering. Fort Belvoir, VA: Defense Technical Information Center, September 2009. http://dx.doi.org/10.21236/ada513140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Vesselinov, Velimir, Bulbul Ahmmed, Maruti Mudunuru, Jeff Pepin, Erick Burns, D. Siler, Satish Karra, and Richard Middleton. Discovering Hidden Geothermal Signatures using Unsupervised Machine Learning. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1781347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Safta, Cosmin, Habib Najm, Michael Grant, and Michael Sparapany. Trajectory Optimization via Unsupervised Probabilistic Learning On Manifolds. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1821958.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ahmmed, Bulbul. Supervised and Unsupervised Machine Learning to Understanding Reactive-transport Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1630844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Obert, James, and Timothy James Loffredo. Efficient Binary Static Code Data Flow Analysis Using Unsupervised Learning. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1592974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yeamans, Katelyn Angela. Unsupervised Machine Learning for Evaluation of Aging in Explosive Pressed Pellets. Office of Scientific and Technical Information (OSTI), December 2018. http://dx.doi.org/10.2172/1484618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wehner, Michael, Mark Risser, Paul Ullrich, and Shiheng Duan. Exploring variability in seasonal average and extreme precipitation using unsupervised machine learning. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Adams, Jason, Katherine Goode, Joshua Michalenko, Phillip Lewis, and Daniel Ries. Semi-supervised Bayesian Low-shot Learning. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1821543.

Full text
APA, Harvard, Vancouver, ISO, and other styles