Academic literature on the topic 'Unsupervised learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Unsupervised learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Unsupervised learning"

1

Fong, A. C. M., and G. Hong. "Boosted Supervised Intensional Learning Supported by Unsupervised Learning." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 98–102. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1020.

Full text
Abstract:
Traditionally, supervised machine learning (ML) algorithms rely heavily on large sets of annotated data. This is especially true for deep learning (DL) neural networks, which need huge annotated data sets for good performance. However, large volumes of annotated data are not always readily available. In addition, some of the best-performing ML and DL algorithms lack explainability: it is often difficult even for domain experts to interpret the results. This is an important consideration especially in safety-critical applications, such as AI-assisted medical endeavors, in which a DL's failure mode is not well understood. This lack of explainability also increases the risk of malicious attacks by adversarial actors, because such attacks can be obscured in a decision-making process that lacks transparency. This paper describes an intensional learning approach that uses boosting to enhance prediction performance while minimizing reliance on the availability of annotated data. The intensional information is derived from an unsupervised learning preprocessing step involving clustering. Preliminary evaluation on the MNIST data set has shown encouraging results. Specifically, using the proposed approach, it is now possible to achieve accuracy similar to extensional learning alone while using only a small fraction of the original training data set.
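As an illustration of the idea sketched in this abstract (unsupervised clustering supplying features for a boosted classifier trained on few labels), here is a minimal scikit-learn sketch; the digits dataset, cluster count, and 10% label budget are assumptions, not the authors' setup.

```python
# Illustrative sketch (not the authors' code): unsupervised k-means features
# feeding a boosted classifier trained on a small labeled subset of digits.
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Unsupervised preprocessing: cluster all images without labels and use the
# distances to the cluster centers as an "intensional" feature representation.
kmeans = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X)
X_feat = kmeans.transform(X)          # distance of each sample to each center

# Pretend only a small fraction of the data is annotated.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_feat, y, train_size=0.1, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy with 10% labels:", clf.score(X_te, y_te))
```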
APA, Harvard, Vancouver, ISO, and other styles
2

Xu, Mingle, Sook Yoon, Jaesu Lee, and Dong Sun Park. "Unsupervised Transfer Learning for Plant Anomaly Recognition." Korean Institute of Smart Media 11, no. 4 (May 31, 2022): 30–37. http://dx.doi.org/10.30693/smj.2022.11.4.30.

Full text
Abstract:
Disease threatens plant growth, and recognizing the type of disease is essential to finding a remedy. In recent years, deep learning has brought significant improvement to this task; however, a large volume of labeled images is one of the requirements for decent performance, and annotated images are difficult and expensive to obtain in the agricultural field. Therefore, designing an efficient and effective strategy with few labeled data is one of the challenges in this area. Transfer learning, which assumes that knowledge can be taken from a source domain to a target domain, has been borrowed to address this issue and has shown comparable results. However, current transfer learning strategies can be regarded as supervised methods, as they hypothesize that there are many labeled images in the source domain. In contrast, unsupervised transfer learning, which uses only images in the source domain, is more convenient, since collecting images is much easier than annotating them. In this paper, we leverage unsupervised transfer learning to perform plant disease recognition, achieving better performance than supervised transfer learning in many cases. Besides, a vision transformer with a bigger model capacity than a convolutional network is utilized to obtain a better pretrained feature space. With vision transformer-based unsupervised transfer learning, we achieve better results than current works on two datasets. In particular, we obtain 97.3% accuracy with only 30 training images for each class in the Plant Village dataset. We hope that our work can encourage the community to pay attention to vision transformer-based unsupervised transfer learning in the agricultural field when few labeled images are available.
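A hypothetical sketch of the few-shot recipe described above: a frozen encoder pretrained without labels supplies features, and a light classifier is fit on roughly 30 labeled images per class. The `pretrained_encoder` placeholder is an assumption; the paper itself uses a self-supervised vision transformer.

```python
# Hypothetical sketch of unsupervised transfer learning with a few-shot probe.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pretrained_encoder(images):
    """Placeholder for a feature extractor pretrained without labels."""
    return images.reshape(len(images), -1)   # stand-in: flatten pixels

def few_shot_probe(images, labels, shots_per_class=30, seed=0):
    rng = np.random.default_rng(seed)
    feats = pretrained_encoder(images)
    train_idx = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        train_idx.extend(rng.choice(idx, size=shots_per_class, replace=False))
    train_idx = np.array(train_idx)
    test_idx = np.setdiff1d(np.arange(len(labels)), train_idx)
    clf = LogisticRegression(max_iter=1000).fit(feats[train_idx], labels[train_idx])
    return clf.score(feats[test_idx], labels[test_idx])
```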
APA, Harvard, Vancouver, ISO, and other styles
3

Kruglov, Artem V. "The Unsupervised Learning Algorithm for Detecting Ellipsoid Objects." International Journal of Machine Learning and Computing 9, no. 3 (June 2019): 255–60. http://dx.doi.org/10.18178/ijmlc.2019.9.3.795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shi, Chengming, Bo Luo, Hongqi Li, Bin Li, Xinyong Mao, and Fangyu Peng. "Anomaly Detection via Unsupervised Learning for Tool Breakage Monitoring." International Journal of Machine Learning and Computing 6, no. 5 (October 2016): 256–59. http://dx.doi.org/10.18178/ijmlc.2016.6.5.607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Barlow, H. B. "Unsupervised Learning." Neural Computation 1, no. 3 (September 1989): 295–311. http://dx.doi.org/10.1162/neco.1989.1.3.295.

Full text
Abstract:
What use can the brain make of the massive flow of sensory information that occurs without any associated rewards or punishments? This question is reviewed in the light of connectionist models of unsupervised learning and some older ideas, namely the cognitive maps and working models of Tolman and Craik, and the idea that redundancy is important for understanding perception (Attneave 1954), the physiology of sensory pathways (Barlow 1959), and pattern recognition (Watanabe 1960). It is argued that (1) The redundancy of sensory messages provides the knowledge incorporated in the maps or models. (2) Some of this knowledge can be obtained by observations of mean, variance, and covariance of sensory messages, and perhaps also by a method called “minimum entropy coding.” (3) Such knowledge may be incorporated in a model of “what usually happens” with which incoming messages are automatically compared, enabling unexpected discrepancies to be immediately identified. (4) Knowledge of the sort incorporated into such a filter is a necessary prerequisite of ordinary learning, and a representation whose elements are independent makes it possible to form associations with logical functions of the elements, not just with the elements themselves.
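A toy sketch of point (2) above, assuming "knowledge from redundancy" can be illustrated by estimating mean and covariance and decorrelating (whitening) the signal; it is not Barlow's minimum entropy coding.

```python
# Toy numpy sketch: estimate mean/covariance of "sensory messages" and build a
# decorrelating (whitening) transform, i.e. a less redundant representation.
import numpy as np

def whitening_transform(X, eps=1e-8):
    """Return a function mapping raw signals to decorrelated, unit-variance ones."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs / np.sqrt(eigvals + eps)      # PCA-whitening matrix
    return lambda Z: (Z - mu) @ W

# Redundant 2-D signal: the second channel nearly copies the first.
rng = np.random.default_rng(0)
s = rng.normal(size=(1000, 1))
X = np.hstack([s, 0.9 * s + 0.1 * rng.normal(size=(1000, 1))])
whiten = whitening_transform(X)
print(np.cov(whiten(X), rowvar=False).round(2))   # ~identity: redundancy removed
```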
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Zhuo, Min Huang, Xiao-Long Huang, Fei Man, Jia-Ming Dou, and Jian-li Lyu. "Unsupervised Learning of Depth and Ego-Motion from Continuous Monocular Images." 電腦學刊 (Journal of Computers) 32, no. 6 (December 2021): 038–51. http://dx.doi.org/10.53106/199115992021123206004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Watkin, T. L. H., and J. P. Nadal. "Optimal unsupervised learning." Journal of Physics A: Mathematical and General 27, no. 6 (March 21, 1994): 1899–915. http://dx.doi.org/10.1088/0305-4470/27/6/016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sanger, T. "Optimal unsupervised learning." Neural Networks 1 (January 1988): 127. http://dx.doi.org/10.1016/0893-6080(88)90166-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Guoyang (陈国洋), Xiaojun Wu (吴小俊), and Tianyang Xu (徐天阳). "基于深度学习的无监督红外图像与可见光图像融合算法 [Unsupervised Infrared and Visible Image Fusion Algorithm Based on Deep Learning]." Laser & Optoelectronics Progress 59, no. 4 (2022): 0410010. http://dx.doi.org/10.3788/lop202259.0410010.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Unsupervised learning"

1

GIOBERGIA, FLAVIO. "Machine learning with limited label availability: algorithms and applications." Doctoral thesis, Politecnico di Torino, 2023. https://hdl.handle.net/11583/2976594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Snyder, Benjamin Ph D. Massachusetts Institute of Technology. "Unsupervised multilingual learning." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62455.

Full text
Abstract:
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
For centuries, scholars have explored the deep links among human languages. In this thesis, we present a class of probabilistic models that exploit these links as a form of naturally occurring supervision. These models allow us to substantially improve performance for core text processing tasks, such as morphological segmentation, part-of-speech tagging, and syntactic parsing. Besides these traditional NLP tasks, we also present a multilingual model for lost language decipherment. We test this model on the ancient Ugaritic language. Our results show that we can automatically uncover much of the historical relationship between Ugaritic and Biblical Hebrew, a known related language.
APA, Harvard, Vancouver, ISO, and other styles
3

Geigel, Arturo. "Unsupervised Learning Trojan." NSUWorks, 2014. http://nsuworks.nova.edu/gscis_etd/17.

Full text
Abstract:
This work presents a proof of concept of an Unsupervised Learning Trojan. The Unsupervised Learning Trojan presents new challenges over previous work on the Neural Network Trojan, since the attacker does not control most of the environment. The current work presents an analysis of how the attack can be successful by proposing new assumptions under which it becomes viable. A general analysis of how the compromise can be theoretically supported is presented, providing enough background for practical implementation development. The analysis was carried out using 3 selected algorithms that cover a wide variety of unsupervised learning circumstances. A selection of 4 encoding schemes on 4 datasets was chosen to represent actual scenarios under which the Trojan compromise might be targeted. A detailed procedure is presented to demonstrate the attack's viability under the assumed circumstances. Two hypothesis tests concerning the experimental setup were carried out, both of which yielded acceptance of the null hypothesis. Further discussion covers various aspects of actual implementation issues and real-world scenarios where this attack might be contemplated.
APA, Harvard, Vancouver, ISO, and other styles
4

Mathieu, Michael. "Unsupervised Learning under Uncertainty." Thesis, New York University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10261120.

Full text
Abstract:

Deep learning, in particular neural networks, has achieved remarkable success in recent years. However, most of it is based on supervised learning, and relies on ever larger datasets and immense computing power. One step towards general artificial intelligence is to build a model of the world, with enough knowledge to acquire a kind of "common sense". Representations learned by such a model could be reused in a number of other tasks. It would reduce the requirement for labelled samples and possibly lead to a deeper understanding of the problem. The vast quantity of knowledge required to build common sense precludes the use of supervised learning, and suggests relying on unsupervised learning instead.

The concept of uncertainty is central to unsupervised learning. The task is usually to learn a complex, multimodal distribution. Density estimation and generative models aim at representing the whole distribution of the data, while predictive learning consists of predicting the state of the world given the context and, more often than not, the prediction is not unique. That may be because the model lacks the capacity or the computing power to make a certain prediction, or because the future depends on parameters that are not part of the observation. Finally, the world can be chaotic or truly stochastic. Representing complex, multimodal continuous distributions with deep neural networks is still an open problem.

In this thesis, we first assess the difficulties of representing probabilities in high dimensional spaces, and review the related work in this domain. We then introduce two methods to address the problem of video prediction, first using a novel form of linearizing auto-encoders and latent variables, and secondly using Generative Adversarial Networks (GANs). We show how GANs can be seen as trainable loss functions to represent uncertainty, then how they can be used to disentangle factors of variation. Finally, we explore a new non-probabilistic framework for GANs.

APA, Harvard, Vancouver, ISO, and other styles
5

Boschini, Matteo. "Unsupervised Learning of Scene Flow." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16226/.

Full text
Abstract:
As Computer Vision-powered autonomous systems are increasingly deployed to solve problems in the wild, the case is made for developing visual understanding methods that are robust and flexible. One of the most challenging tasks for this purpose is given by the extraction of scene flow, that is the dense three-dimensional vector field that associates each world point with its corresponding position in the next observed frame, hence describing its three-dimensional motion entirely. The recent addition of a limited amount of ground truth scene flow information to the popular KITTI dataset prompted a renewed interest in the study of techniques for scene flow inference, although the proposed solutions in literature mostly rely on computation-intensive techniques and are characterised by execution times that are not suited for real-time application. In the wake of the recent widespread adoption of Deep Learning techniques to Computer Vision tasks and in light of the convenience of Unsupervised Learning for scenarios in which ground truth collection is difficult and time-consuming, this thesis work proposes the first neural network architecture to be trained in end-to-end fashion for unsupervised scene flow regression from monocular visual data, called Pantaflow. The proposed solution is much faster than currently available state-of-the-art methods and therefore represents a step towards the achievement of real-time scene flow inference.
APA, Harvard, Vancouver, ISO, and other styles
6

Jelacic, Mersad. "Unsupervised Learning for Plant Recognition." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-247.

Full text
Abstract:

Six methods are used for clustering data containing two different objects: sugar-beet plants and weed. These objects are described by 19 different features, i.e. shape and color features. There is also information about the distance between sugar-beet plants that is used for labeling clusters. The methods evaluated are: k-means, k-medoids, hierarchical clustering, competitive learning, self-organizing maps and fuzzy c-means. After applying the methods to the plant data, clusters are formed. The clusters are labeled with three different proposed methods: the expert, database and context methods. The expert method uses a human to give initial cluster centers that are labeled. The database method uses a database as an expert that provides initial cluster centers. The context method uses information about the environment, namely the distance between sugar-beet plants, for labeling the clusters.

The algorithms that were tested, with the lowest achieved corresponding error, are: k-means (3.3%), k-medoids (3.8%), hierarchical clustering (5.3%), competitive learning (6.8%), self-organizing maps (4.9%) and fuzzy c-means (7.9%). Three different datasets were used, and the lowest error on dataset0 is 3.3%, compared to 3% for supervised learning methods. For dataset1 the error is 18.7% and for dataset2 it is 5.8%; with supervised methods, the error on dataset1 is 11% and on dataset2 it is 5.1%. The high error rate on dataset1 is because the samples are not very well separated into different clusters. The features from dataset1 are extracted from lower-resolution images than the other datasets, and another difference between the datasets is that the sugar-beet plants are in different growth stages.

The performance of the three methods for labeling clusters is: expert method (6.8% as the lowest error achieved), database method (3.7%) and context method (6.8%). These results reflect the clustering results obtained by competitive learning, where the real error is 6.8%.

Unsupervised-learning methods for clustering can very well be used for plant identification. Because the samples are not classified, an automatic labeling technique must be used if plants are to be identified. The three proposed techniques can be used for automatic labeling of plants.
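A minimal sketch of the clustering-then-labeling procedure summarized above: k-means over shape/color feature vectors, with each cluster labeled from a few reference samples standing in for the expert/database role. Names, parameters, and data layout are assumptions, not the thesis code.

```python
# Minimal sketch: cluster feature vectors without labels, then label clusters
# from a small set of reference (labeled) samples.
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_label(features, ref_features, ref_labels, n_clusters=2, seed=0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(features)
    # Assign each cluster the label of the reference sample closest to its center.
    cluster_labels = {}
    for c, center in enumerate(km.cluster_centers_):
        nearest = np.argmin(np.linalg.norm(ref_features - center, axis=1))
        cluster_labels[c] = ref_labels[nearest]
    return np.array([cluster_labels[c] for c in km.labels_])
```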

APA, Harvard, Vancouver, ISO, and other styles
7

Amin, Khizer, and Mehmood ul haq Minhas. "Facebook Blocket with Unsupervised Learning." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1969.

Full text
Abstract:
The Internet has become a valuable channel for both business-to-consumer and business-to-business e-commerce. It has changed the way many companies manage their business. Every day, more and more companies are establishing a presence on the Internet. Web sites launched for online shopping, i.e. web shops or online stores, are a popular means of goods distribution. The number of items sold through the Internet has grown significantly in the past few years. Moreover, it has become a convenient way for customers to shop at their ease. Thus, the aim of this thesis is to design and implement a consumer-to-consumer application for Facebook, which is one of the largest social networking websites. The application allows Facebook users to use their regular profile (on Facebook) to buy and sell goods or services through Facebook. As already mentioned, there are many web shops such as eBay and Amazon, and applications like Blocket on Facebook. However, none of them interacts directly with Facebook users, and all of them use their own platform. Users may follow the web shop link from their Facebook profile and be redirected to the web shop. On the other hand, most applications on Facebook use notification methods to introduce themselves, or they push their application onto Facebook pages. This application provides an opportunity for Facebook users to interact directly with other users and use the Facebook platform as a selling/buying point. The application is developed using a modular approach. A Python web framework, i.e., Django, is used, and association rule learning is applied for the classification of users' advertisements. The Apriori algorithm generates the rules, which are stored as a separate text file. The rule file is further used to classify advertisements and is updated regularly.
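A rough, self-contained sketch of the association-rule step mentioned above: Apriori-style frequent itemsets over users' advert keyword sets, followed by confidence-filtered rules. The toy data and thresholds are assumptions, not the thesis implementation.

```python
# Tiny Apriori-style pass: frequent keyword pairs from advert transactions,
# then simple "A -> B" rules filtered by confidence.
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support=0.1):
    n = len(transactions)
    counts = Counter(pair for t in transactions
                     for pair in combinations(sorted(set(t)), 2))
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

def rules(pair_support, transactions, min_confidence=0.6):
    n = len(transactions)
    item_support = Counter(i for t in transactions for i in set(t))
    out = []
    for (a, b), sup in pair_support.items():
        for x, y in ((a, b), (b, a)):
            conf = sup / (item_support[x] / n)
            if conf >= min_confidence:
                out.append((x, y, round(conf, 2)))
    return out

ads = [{"bike", "helmet"}, {"bike", "lock"}, {"bike", "helmet", "lock"}, {"sofa"}]
print(rules(frequent_pairs(ads, 0.25), ads))
```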
APA, Harvard, Vancouver, ISO, and other styles
8

Korkontzelos, Ioannis. "Unsupervised learning of multiword expressions." Thesis, University of York, 2010. http://etheses.whiterose.ac.uk/2091/.

Full text
Abstract:
Multiword expressions are expressions consisting of two or more words that correspond to some conventional way of saying things (Manning & Schutze 1999). Due to the idiomatic nature of many of them and their high frequency of occurrence in all sorts of text, they cause problems in many Natural Language Processing (NLP) applications and are frequently responsible for their shortcomings. Efficiently recognising multiword expressions and deciding the degree of their idiomaticity would be useful to all applications that require some degree of semantic processing, such as question-answering, summarisation, parsing, language modelling and language generation. In this thesis we investigate the issues of recognising multiword expressions, domain-specific or not, and of deciding whether they are idiomatic. Moreover, we inspect the extent to which multiword expressions can contribute to a basic NLP task such as shallow parsing and ways that the basic property of multiword expressions, idiomaticity, can be employed to define a novel task for Compositional Distributional Semantics (CDS). The results show that it is possible to recognise multiword expressions and decide their compositionality in an unsupervised manner, based on cooccurrence statistics and distributional semantics. Further, multiword expressions are beneficial for other fundamental applications of Natural Language Processing either by direct integration or as an evaluation tool. In particular, termhood-based methods, which are based on nestedness information, are shown to outperform unithood-based methods, which measure the strength of association among the constituents of a multi-word candidate term. A simple heuristic was proved to perform better than more sophisticated methods. A new graph-based algorithm employing sense induction is proposed to address multiword expression compositionality and is shown to perform better than a standard vector space model. Its parameters were estimated by an unsupervised scheme based on graph connectivity. Multiword expressions are shown to contribute to shallow parsing. Moreover, they are used to define a new evaluation task for distributional semantic composition models.
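A small illustration of the kind of co-occurrence statistic such unsupervised approaches rely on: ranking adjacent word pairs by pointwise mutual information (PMI) as multiword-expression candidates. The scoring choice and toy text are assumptions, not the thesis system.

```python
# Rank adjacent word pairs by PMI as crude multiword-expression candidates.
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        scores[(w1, w2)] = math.log((c / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

text = "kick the bucket he will kick the bucket again kick it".split()
print(pmi_bigrams(text))
```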
APA, Harvard, Vancouver, ISO, and other styles
9

Liang, Yingyu. "Modern aspects of unsupervised learning." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52282.

Full text
Abstract:
Unsupervised learning has become more and more important due to the recent explosion of data. Clustering, a key topic in unsupervised learning, is a well-studied task arising in many applications ranging from computer vision to computational biology to the social sciences. This thesis is a collection of work exploring two modern aspects of clustering: stability and scalability. In the first part, we study clustering under a stability property called perturbation resilience. As an alternative approach to worst case analysis, this novel theoretical framework aims at understanding the complexity of clustering instances that satisfy natural stability assumptions. In particular, we show how to correctly cluster instances whose optimal solutions are resilient to small multiplicative perturbations on the distances between data points, significantly improving existing guarantees. We further propose a generalized property that allows small changes in the optimal solutions after perturbations, and provide the first known positive results in this more challenging setting. In the second part, we study the problem of clustering large scale data distributed across nodes which communicate over the edges of a connected graph. We provide algorithms with small communication cost and provable guarantees on the clustering quality. We also propose algorithms for distributed principal component analysis, which can be used to reduce the communication cost of clustering high dimensional data while barely compromising the clustering quality. In the third part, we study community detection, the modern extension of clustering to network data. We propose a theoretical model of communities that are stable in the presence of noisy nodes in the network, and design an algorithm that provably detects all such communities. We also provide a local algorithm for large scale networks, whose running time depends on the sizes of the output communities but not that of the entire network.
APA, Harvard, Vancouver, ISO, and other styles
10

Xiao, Ying. "New tools for unsupervised learning." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52995.

Full text
Abstract:
In an unsupervised learning problem, one is given an unlabelled dataset and hopes to find some hidden structure; the prototypical example is clustering similar data. Such problems often arise in machine learning and statistics, but also in signal processing, theoretical computer science, and any number of quantitative scientific fields. The distinguishing feature of unsupervised learning is that there are no privileged variables or labels which are particularly informative, and thus the greatest challenge is often to differentiate between what is relevant or irrelevant in any particular dataset or problem. In the course of this thesis, we study a number of problems which span the breadth of unsupervised learning. We make progress in Gaussian mixtures, independent component analysis (where we solve the open problem of underdetermined ICA), and we formulate and solve a feature selection/dimension reduction model. Throughout, our goal is to give finite sample complexity bounds for our algorithms -- these are essentially the strongest type of quantitative bound that one can prove for such algorithms. Some of our algorithmic techniques turn out to be very efficient in practice as well. Our major technical tool is tensor spectral decomposition: tensors are generalisations of matrices, and often allow access to the "fine structure" of data. Thus, they are often the right tools for unravelling the hidden structure in an unsupervised learning setting. However, naive generalisations of matrix algorithms to tensors run into NP-hardness results almost immediately, and thus to solve our problems, we are obliged to develop two new tensor decompositions (with robust analyses) from scratch. Both of these decompositions are polynomial time, and can be viewed as efficient generalisations of PCA extended to tensors.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Unsupervised learning"

1

Kyan, Matthew, Paisarn Muneesawang, Kambiz Jarrah, and Ling Guan. Unsupervised Learning. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118875568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Celebi, M. Emre, and Kemal Aydin, eds. Unsupervised Learning Algorithms. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Xiangtao, and Ka-Chun Wong, eds. Natural Computing for Unsupervised Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-98566-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Unsupervised Representation Learning with Correlations. [New York, N.Y.?]: [publisher not identified], 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Leordeanu, Marius. Unsupervised Learning in Space and Time. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42128-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Baruque, Bruno, and Emilio Corchado. Fusion Methods for Unsupervised Learning Ensembles. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-16205-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Albalate, Amparo, and Wolfgang Minker. Semi-Supervised and Unsupervised Machine Learning. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118557693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bartlett, Marian Stewart. Face Image Analysis by Unsupervised Learning. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1637-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

E, Hinton Geoffrey, and Sejnowski Terrence J, eds. Unsupervised learning: Foundations of neural computation. Cambridge, Mass: MIT Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bartlett, Marian Stewart. Face Image Analysis by Unsupervised Learning. Boston, MA: Springer US, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Unsupervised learning"

1

Deepak, P. "Anomaly Detection for Data with Spatial Attributes." In Unsupervised Learning Algorithms, 1–32. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Torra, Vicenç, Guillermo Navarro-Arribas, and Klara Stokes. "An Overview of the Use of Clustering for Data Privacy." In Unsupervised Learning Algorithms, 237–51. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Chang-Dong, and Jian-Huang Lai. "Nonlinear Clustering: Methods and Applications." In Unsupervised Learning Algorithms, 253–302. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

İnkaya, Tülin, Sinan Kayalıgil, and Nur Evin Özdemirel. "Swarm Intelligence-Based Clustering Algorithms: A Survey." In Unsupervised Learning Algorithms, 303–41. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Huang, Xiaohui, Yunming Ye, and Haijun Zhang. "Extending Kmeans-Type Algorithms by Integrating Intra-cluster Compactness and Inter-cluster Separation." In Unsupervised Learning Algorithms, 343–84. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tsolakis, Dimitrios M., and George E. Tsekouras. "A Fuzzy-Soft Competitive Learning Approach for Grayscale Image Compression." In Unsupervised Learning Algorithms, 385–404. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wong, Ka-Chun, Yue Li, and Zhaolei Zhang. "Unsupervised Learning in Genome Informatics." In Unsupervised Learning Algorithms, 405–48. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Martin, Dian I., John C. Martin, and Michael W. Berry. "The Application of LSA to the Evaluation of Questionnaire Responses." In Unsupervised Learning Algorithms, 449–84. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ahmed, Rezwan, and George Karypis. "Mining Evolving Patterns in Dynamic Relational Networks." In Unsupervised Learning Algorithms, 485–532. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Trentin, Edmondo, and Marco Bongini. "Probabilistically Grounded Unsupervised Training of Neural Networks." In Unsupervised Learning Algorithms, 533–58. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Unsupervised learning"

1

Yu, Francis T. S., Taiwei Lu, and Don A. Gregory. "Self-Learning Optical Neural Network." In Spatial Light Modulators and Applications. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/slma.1990.mb4.

Full text
Abstract:
One of the features of neural computing must be adaptability to a changing environment and the ability to recognize unknown objects. In general, there are two types of learning processes used in the human brain: supervised and unsupervised learning [1]. In a supervised learning process, the artificial neural network has to be taught when to learn and when to process the information. Nevertheless, if an unknown object is presented to the artificial neural network during processing, the network may produce an erroneous output. On the other hand, in unsupervised learning (also called self-learning), students learn by themselves, based on simple learning rules and their past experience. Kohonen's model is one of the simplest self-organizing algorithms [1]; it is capable of performing statistical pattern recognition and classification, and it can be modified for optical neural network implementation. A compact optical neural network of 64 neurons using liquid crystal televisions is used for the unsupervised learning process [2].
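A toy sketch of the self-organizing idea referenced above, as a simplified Kohonen-style winner-take-all (competitive learning) update; it does not model the optical 64-neuron implementation.

```python
# Simplified competitive learning: the best-matching unit is pulled toward
# each input, so units self-organize into prototypes without labels.
import numpy as np

def competitive_learning(X, n_units=4, lr=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, X.shape[1]))        # weight vectors of the units
    for _ in range(epochs):
        for x in rng.permutation(X):
            winner = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
            W[winner] += lr * (x - W[winner])                  # pull winner toward input
    return W
```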
APA, Harvard, Vancouver, ISO, and other styles
2

Shui, Xinghua, and Huadong Zheng. "Multi-depth Hologram Generation with Unsupervised-learning Based Computer-generated Holography." In Digital Holography and Three-Dimensional Imaging. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/dh.2022.w5a.12.

Full text
Abstract:
Unsupervised-learning based computer-generated holography provides an approach for 2D hologram generation. We propose an unsupervised learning network for multi-depth hologram generation that fully utilizes the different representations of a multi-depth object.
APA, Harvard, Vancouver, ISO, and other styles
3

Ver Steeg, Greg. "Unsupervised Learning via Total Correlation Explanation." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/740.

Full text
Abstract:
Learning by children and animals occurs effortlessly and largely without obvious supervision. Successes in automating supervised learning have not translated to the more ambiguous realm of unsupervised learning where goals and labels are not provided. Barlow (1961) suggested that the signal that brains leverage for unsupervised learning is dependence, or redundancy, in the sensory environment. Dependence can be characterized using the information-theoretic multivariate mutual information measure called total correlation. The principle of Total Correlation Explanation (CorEx) is to learn representations of data that "explain" as much dependence in the data as possible. We review some manifestations of this principle along with successes in unsupervised learning problems across diverse domains including human behavior, biology, and language.
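An illustrative sketch of the quantity CorEx is built around, total correlation TC(X) = sum_i H(X_i) - H(X), using a simple plug-in estimate for discrete data; this is not the CorEx algorithm itself.

```python
# Plug-in estimate of total correlation for discrete variables.
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def total_correlation(X):
    """X: integer array, rows = samples, columns = discrete variables."""
    marginal = sum(entropy(np.unique(X[:, j], return_counts=True)[1])
                   for j in range(X.shape[1]))
    joint = entropy(np.unique(X, axis=0, return_counts=True)[1])
    return marginal - joint

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1000)
X = np.column_stack([a, a, rng.integers(0, 2, size=1000)])  # first two columns redundant
print(round(total_correlation(X), 3))   # close to 1 bit of shared information
```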
APA, Harvard, Vancouver, ISO, and other styles
4

Gonzalez, Andres, Zoya Heidari, and Olivier Lopez. "Data-Driven Algorithms for Image-Based Rock Classification and Formation Evaluation in Formations With Rapid Spatial Variation in Rock Fabric." In 2022 SPWLA 63rd Annual Symposium. Society of Petrophysicists and Well Log Analysts, 2022. http://dx.doi.org/10.30632/spwla-2022-0018.

Full text
Abstract:
Supervised learning algorithms can be employed for automation of time-intensive tasks, such as image-based rock classification. However, labeled data is not always available. Alternatively, unsupervised learning algorithms, which do not require labeled data, can be employed. Using either of these methods depends on the evaluated formations and the available training/input data sets. Therefore, further investigation is needed to compare the performance of both approaches. The objectives of this paper include (a) to train two supervised learning models for image-based rock classification employing image-based features from computerized tomography (CT) scan images and core photos, (b) to conduct image-based rock classification using the trained model, (c) to compare the results obtained using supervised learning models against an unsupervised learning-based workflow for rock classification, and (d) to derive class-based petrophysical models for improved estimation of petrophysical properties. First, we removed non-formation visual elements from the core image data. Then, we computed image-based features such as grayscale/color and textural features from core image data and conducted feature selection. Then, we employed the extracted features for model training. Finally, we used the trained model to conduct rock classification and compared the obtained rock classes against the results obtained from an unsupervised image-based rock classification workflow. This workflow uses image-based rock fabric features coupled with a physics-based cost function for rock class optimization. We applied the workflow to one well intersecting three formations with rapid spatial variation in rock fabric. We used 60% of the data to train a random forest and a support vector machines classifier using a 5-fold cross-validation approach. The remaining 40% of the data was used to test the accuracy of the supervised models. We established a base case of unsupervised learning rock classification and four different cases of supervised learning rock classification. The highest accuracy obtained for supervised rock classification was 97.4%. The accuracy obtained in the unsupervised learning rock classification approach was 82.7% when compared against expert-derived lithofacies. Class-based permeability estimates decreased the mean relative error by 34% and 35% when compared with formation-based permeability estimates, for the supervised and unsupervised approaches, respectively. The highest accuracies for the supervised and unsupervised models were obtained when integrating features from CT-scan images and core photos, highlighting the importance of feature selection for machine learning workflows. Comparison of the two approaches for rock classification showed higher accuracy in supervised learning rock classification. However, the unsupervised approach provided reasonable accuracy as well as a more general and faster approach for rock classification and enhanced formation evaluation.
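A schematic scikit-learn version of the supervised branch of the workflow described above (60/40 split, 5-fold cross-validation, random forest on image-derived features); the feature matrix, hyperparameter grid, and function name are assumptions, not the authors' code.

```python
# Schematic supervised rock classification: 60/40 split, 5-fold CV, random forest.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

def classify_rock_facies(features, labels, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, train_size=0.6, stratify=labels, random_state=seed)
    search = GridSearchCV(RandomForestClassifier(random_state=seed),
                          param_grid={"n_estimators": [100, 300],
                                      "max_depth": [None, 10]},
                          cv=5)                      # 5-fold cross-validation
    search.fit(X_tr, y_tr)
    return search.best_estimator_, search.score(X_te, y_te)   # held-out accuracy
```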
APA, Harvard, Vancouver, ISO, and other styles
5

Figueirêdo, Ilan Sousa, Tássio Farias Carvalho, Wenisten José Dantas Silva, Lílian Lefol Nani Guarieiro, and Erick Giovani Sperandio Nascimento. "Detecting Interesting and Anomalous Patterns In Multivariate Time-Series Data in an Offshore Platform Using Unsupervised Learning." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31297-ms.

Full text
Abstract:
Detection of anomalous events in the practical operation of oil and gas (O&G) wells and lines can help to avoid production losses, environmental disasters, and human fatalities, besides decreasing maintenance costs. Supervised machine learning algorithms have been successful in detecting, diagnosing, and forecasting anomalous events in the O&G industry. Nevertheless, these algorithms need a large quantity of annotated data, and labelling data in real-world scenarios is typically unfeasible because of the exhaustive work required of experts. Therefore, as unsupervised machine learning does not require an annotated dataset, this paper performs a comparative performance evaluation of unsupervised learning algorithms to support experts in anomaly detection and pattern recognition in multivariate time-series data. The goal is to allow experts to analyze and label a small set of patterns instead of analyzing large datasets. This paper used the public 3W database of three offshore naturally flowing wells. The experiment used real data of O&G production from underground reservoirs with the following anomalous events: (i) spurious closure of Downhole Safety Valve (DHSV) and (ii) quick restriction in Production Choke (PCK). Six unsupervised machine learning algorithms were assessed: Cluster-based Algorithm for Anomaly Detection in Time Series Using Mahalanobis Distance (C-AMDATS), Luminol Bitmap, SAX-REPEAT, k-NN, Bootstrap, and Robust Random Cut Forest (RRCF). The comparative evaluation of the unsupervised learning algorithms was performed using a set of metrics: accuracy (ACC), precision (PR), recall (REC), specificity (SP), F1-Score (F1), Area Under the Receiver Operating Characteristic Curve (AUC-ROC), and Area Under the Precision-Recall Curve (AUC-PRC). The experiments only used the data labels for assessment purposes. The results revealed that unsupervised learning successfully detected the patterns of interest in multivariate data without prior annotation, with emphasis on the C-AMDATS algorithm. Thus, unsupervised learning can leverage supervised models through the support given to data annotation.
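A sketch of the labels-only-for-assessment protocol described above, with scikit-learn's IsolationForest standing in for the unsupervised detectors (it is not C-AMDATS or any of the six compared algorithms) and a subset of the listed metrics.

```python
# Evaluate an unsupervised anomaly detector against labels used only for scoring.
from sklearn.ensemble import IsolationForest          # stand-in unsupervised detector
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate_unsupervised_detector(X, y_true, seed=0):
    det = IsolationForest(random_state=seed).fit(X)   # trained without labels
    pred = (det.predict(X) == -1).astype(int)         # -1 means "anomaly" in sklearn
    scores = -det.score_samples(X)                    # higher = more anomalous
    return {"ACC": accuracy_score(y_true, pred),
            "PR": precision_score(y_true, pred, zero_division=0),
            "REC": recall_score(y_true, pred),
            "F1": f1_score(y_true, pred),
            "AUC-ROC": roc_auc_score(y_true, scores)}
```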
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Ling. "Unsupervised Multi-view Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/910.

Full text
Abstract:
Unsupervised multi-view learning is a hot research topic. The main challenge lies in how to integrate information from different views to enhance the unsupervised learning performance. In this paper, we present our research works on multi-view data clustering and multi-view network community detection respectively. The main contributions are summarized by emphasizing the challenges we have addressed. In addition, the ongoing work and the future work are briefly presented.
APA, Harvard, Vancouver, ISO, and other styles
7

Reite, Aaron A., Scott Kangas, Zackery Steck, George S. Goley, Jonathan Von Stroh, and Steven Forsyth. "Unsupervised feature learning in remote sensing." In Applications of Machine Learning, edited by Michael E. Zelinski, Tarek M. Taha, Jonathan Howe, Abdul A. Awwal, and Khan M. Iftekharuddin. SPIE, 2019. http://dx.doi.org/10.1117/12.2529791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Junjie, William K. Cheung, and Anran Wang. "Learning Deep Unsupervised Binary Codes for Image Retrieval." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/85.

Full text
Abstract:
Hashing is an efficient approximate nearest neighbor search method and has been widely adopted for large-scale multimedia retrieval. While supervised learning is more popular for data-dependent hashing, deep unsupervised hashing methods have recently been developed to learn non-linear transformations for converting multimedia inputs to binary codes. Most existing deep unsupervised hashing methods make use of a quadratic constraint for minimizing the difference between the compact representations and the target binary codes, which inevitably causes severe information loss. In this paper, we propose a novel deep unsupervised method called DeepQuan for hashing. The DeepQuan model utilizes a deep autoencoder network, where the encoder is used to learn compact representations and the decoder is for manifold preservation. In contrast to existing unsupervised methods, DeepQuan learns the binary codes by minimizing the quantization error through a product quantization technique. Furthermore, a weighted triplet loss is proposed to avoid trivial solutions and poor generalization. Extensive experimental results on standard datasets show that the proposed DeepQuan model outperforms the state-of-the-art unsupervised hashing methods for image retrieval tasks.
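A simplified PyTorch sketch of the general recipe this line of work starts from: a deep autoencoder whose code layer is binarized for retrieval. It is not the DeepQuan model, which additionally applies product quantization and a weighted triplet loss.

```python
# Simplified hashing autoencoder: reconstruct inputs, binarize the code layer.
import torch
import torch.nn as nn

class HashingAutoencoder(nn.Module):
    def __init__(self, in_dim=512, code_bits=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_bits), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(code_bits, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.encoder(x)                    # continuous code in (-1, 1)
        return self.decoder(h), h

    def binary_codes(self, x):
        return (self.encoder(x) > 0).to(torch.uint8)   # sign -> 0/1 bits

model = HashingAutoencoder()
x = torch.randn(8, 512)                         # e.g. image descriptors
recon, h = model(x)
# Reconstruction loss plus a simple push of codes toward +/-1.
loss = nn.functional.mse_loss(recon, x) + 0.1 * (h.abs() - 1).pow(2).mean()
```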
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Jundong, Jiliang Tang, and Huan Liu. "Reconstruction-based Unsupervised Feature Selection: An Embedded Approach." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/300.

Full text
Abstract:
Feature selection has been proven to be effective and efficient in preparing high-dimensional data for data mining and machine learning problems. Since real-world data is usually unlabeled, unsupervised feature selection has received increasing attention in recent years. Without label information, unsupervised feature selection needs alternative criteria to define feature relevance. Recently, data reconstruction error emerged as a new criterion for unsupervised feature selection, which defines feature relevance as the capability of features to approximate original data via a reconstruction function. Most existing algorithms in this family assume predefined, linear reconstruction functions. However, the reconstruction function should be data dependent and may not always be linear especially when the original data is high-dimensional. In this paper, we investigate how to learn the reconstruction function from the data automatically for unsupervised feature selection, and propose a novel reconstruction-based unsupervised feature selection framework REFS, which embeds the reconstruction function learning process into feature selection. Experiments on various types of real-world datasets demonstrate the effectiveness of the proposed framework REFS.
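A toy sketch of the reconstruction criterion described above: greedily keep the features whose linear reconstruction of the full data has the smallest error. REFS itself learns the reconstruction function jointly with selection and need not be linear; this simplified version assumes a linear map.

```python
# Greedy reconstruction-based feature selection with a linear reconstruction map.
import numpy as np

def reconstruction_error(X, idx):
    S = X[:, idx]                                  # candidate feature subset
    W, *_ = np.linalg.lstsq(S, X, rcond=None)      # best linear map S -> X
    return np.linalg.norm(X - S @ W) ** 2

def select_features(X, k):
    selected = []
    for _ in range(k):
        rest = [j for j in range(X.shape[1]) if j not in selected]
        best = min(rest, key=lambda j: reconstruction_error(X, selected + [j]))
        selected.append(best)
    return selected
```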
APA, Harvard, Vancouver, ISO, and other styles
10

Xiang, Mingjun, Lingxiao Wang, Yu Sha, Hui Yuan, Kai Zhou, and Hartmut G. Roskos. "Phase Retrieval for Terahertz Holography with Physics-Informed Deep Learning." In Digital Holography and Three-Dimensional Imaging. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/dh.2022.tu4a.4.

Full text
Abstract:
Two novel phase retrieval methods for THz holography based on physics-informed deep learning are presented. They employ unsupervised learning and supervised learning based on the MNIST dataset, respectively.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Unsupervised learning"

1

Vesselinov, Velimir Valentinov. TensorDecompostions : Unsupervised machine learning methods. Office of Scientific and Technical Information (OSTI), February 2019. http://dx.doi.org/10.2172/1493534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sprechmann, Pablo, and Guillermo Sapiro. Dictionary Learning and Sparse Coding for Unsupervised Clustering. Fort Belvoir, VA: Defense Technical Information Center, September 2009. http://dx.doi.org/10.21236/ada513140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vesselinov, Velimir, Bulbul Ahmmed, Maruti Mudunuru, Jeff Pepin, Erick Burns, D. Siler, Satish Karra, and Richard Middleton. Discovering Hidden Geothermal Signatures using Unsupervised Machine Learning. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1781347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Safta, Cosmin, Habib Najm, Michael Grant, and Michael Sparapany. Trajectory Optimization via Unsupervised Probabilistic Learning On Manifolds. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1821958.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shekhar, Shubhranshu, Jetson Leder-Luis, and Leman Akoglu. Unsupervised Machine Learning for Explainable Health Care Fraud Detection. Cambridge, MA: National Bureau of Economic Research, February 2023. http://dx.doi.org/10.3386/w30946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ahmmed, Bulbul. Supervised and Unsupervised Machine Learning to Understanding Reactive-transport Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1630844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Obert, James, and Timothy James Loffredo. Efficient Binary Static Code Data Flow Analysis Using Unsupervised Learning. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1592974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yeamans, Katelyn Angela. Unsupervised Machine Learning for Evaluation of Aging in Explosive Pressed Pellets. Office of Scientific and Technical Information (OSTI), December 2018. http://dx.doi.org/10.2172/1484618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wehner, Michael, Mark Risser, Paul Ullrich, and Shiheng Duan. Exploring variability in seasonal average and extreme precipitation using unsupervised machine learning. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pinar, Ali, Tamara G. Kolda, Kevin Thomas Carlberg, Grey Ballard, and Michael Mahoney. Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV). Office of Scientific and Technical Information (OSTI), January 2018. http://dx.doi.org/10.2172/1417788.

Full text
APA, Harvard, Vancouver, ISO, and other styles