Journal articles on the topic 'Unsupervised learning'

To see the other types of publications on this topic, follow the link: Unsupervised learning.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Unsupervised learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Fong, A. C. M., and G. Hong. "Boosted Supervised Intensional Learning Supported by Unsupervised Learning." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 98–102. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1020.

Full text
Abstract:
Traditionally, supervised machine learning (ML) algorithms rely heavily on large sets of annotated data. This is especially true for deep learning (DL) neural networks, which need huge annotated data sets for good performance. However, large volumes of annotated data are not always readily available. In addition, some of the best-performing ML and DL algorithms lack explainability: it is often difficult even for domain experts to interpret the results. This is an important consideration especially in safety-critical applications, such as AI-assisted medical endeavors, in which a DL model's failure modes are not well understood. This lack of explainability also increases the risk of malicious attacks by adversarial actors, because such attacks can be obscured in a decision-making process that lacks transparency. This paper describes an intensional learning approach which uses boosting to enhance prediction performance while minimizing reliance on the availability of annotated data. The intensional information is derived from an unsupervised learning preprocessing step involving clustering. Preliminary evaluation on the MNIST data set has shown encouraging results. Specifically, using the proposed approach, it is now possible to achieve accuracy similar to that of extensional learning alone while using only a small fraction of the original training data set.
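The pipeline this abstract describes (an unsupervised clustering step whose output feeds a supervised learner trained on only a small labeled fraction) can be sketched roughly as follows. This is an illustrative sketch on synthetic 2-D data, not the paper's implementation: plain k-means stands in for the clustering step, logistic regression stands in for the boosted classifier, and all names and numbers are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a data set such as MNIST: two well-separated classes.
n = 400
X = np.vstack([rng.normal(-2.0, 1.0, size=(n // 2, 2)),
               rng.normal(+2.0, 1.0, size=(n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: the unsupervised preprocessing step."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        assign = dist.argmin(axis=1)
        centroids = np.array([X[assign == j].mean(axis=0) for j in range(k)])
    return centroids, assign

centroids, assign = kmeans(X, k=2)

# Cluster-derived ("intensional") features: membership one-hot plus centroid distances.
onehot = np.eye(2)[assign]
dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
Z = np.hstack([X, onehot, dists])

# Supervised stage trained on only 10% of the labels (the small labeled fraction).
labeled = rng.choice(n, n // 10, replace=False)
A = np.hstack([Z[labeled], np.ones((len(labeled), 1))])  # bias column
t = y[labeled]
w = np.zeros(A.shape[1])
for _ in range(500):  # logistic regression by gradient descent
    p = 1.0 / (1.0 + np.exp(-np.clip(A @ w, -30, 30)))
    w -= 0.1 * A.T @ (p - t) / len(t)

# Evaluate on the points whose labels were withheld during training.
rest = np.setdiff1d(np.arange(n), labeled)
B = np.hstack([Z[rest], np.ones((len(rest), 1))])
pred = (1.0 / (1.0 + np.exp(-np.clip(B @ w, -30, 30))) > 0.5).astype(int)
accuracy = float((pred == y[rest]).mean())
```

On this toy data the classifier reaches high accuracy despite seeing only 40 labels, which is the general point of the approach; the paper's actual boosted, MNIST-scale results are of course not reproduced here.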
APA, Harvard, Vancouver, ISO, and other styles
2

Xu, Mingle, Sook Yoon, Jaesu Lee, and Dong Sun Park. "Unsupervised Transfer Learning for Plant Anomaly Recognition." Korean Institute of Smart Media 11, no. 4 (May 31, 2022): 30–37. http://dx.doi.org/10.30693/smj.2022.11.4.30.

Full text
Abstract:
Disease threatens plant growth, and recognizing the type of disease is essential to finding a remedy. In recent years, deep learning has brought significant improvement to this task; however, a large volume of labeled images is required for decent performance, and annotated images are difficult and expensive to obtain in the agricultural field. Therefore, designing an efficient and effective strategy with few labeled data is one of the challenges in this area. Transfer learning, which transfers knowledge from a source domain to a target domain, has been borrowed to address this issue and has achieved comparable results. However, current transfer learning strategies can be regarded as supervised methods, as they assume that many labeled images are available in the source domain. In contrast, unsupervised transfer learning, which uses only images from a source domain, is more convenient, as collecting images is much easier than annotating them. In this paper, we leverage unsupervised transfer learning to perform plant disease recognition, achieving better performance than supervised transfer learning in many cases. In addition, a vision transformer, with a bigger model capacity than a convolutional network, is utilized to obtain a better pretrained feature space. With vision transformer-based unsupervised transfer learning, we achieve better results than current works on two datasets. In particular, we obtain 97.3% accuracy with only 30 training images per class on the Plant Village dataset. We hope that our work can encourage the community to pay attention to vision transformer-based unsupervised transfer learning in the agricultural field when few labeled images are available.
APA, Harvard, Vancouver, ISO, and other styles
3

Kruglov, Artem V. "The Unsupervised Learning Algorithm for Detecting Ellipsoid Objects." International Journal of Machine Learning and Computing 9, no. 3 (June 2019): 255–60. http://dx.doi.org/10.18178/ijmlc.2019.9.3.795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shi, Chengming, Bo Luo, Hongqi Li, Bin Li, Xinyong Mao, and Fangyu Peng. "Anomaly Detection via Unsupervised Learning for Tool Breakage Monitoring." International Journal of Machine Learning and Computing 6, no. 5 (October 2016): 256–59. http://dx.doi.org/10.18178/ijmlc.2016.6.5.607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Barlow, H. B. "Unsupervised Learning." Neural Computation 1, no. 3 (September 1989): 295–311. http://dx.doi.org/10.1162/neco.1989.1.3.295.

Full text
Abstract:
What use can the brain make of the massive flow of sensory information that occurs without any associated rewards or punishments? This question is reviewed in the light of connectionist models of unsupervised learning and some older ideas, namely the cognitive maps and working models of Tolman and Craik, and the idea that redundancy is important for understanding perception (Attneave 1954), the physiology of sensory pathways (Barlow 1959), and pattern recognition (Watanabe 1960). It is argued that (1) The redundancy of sensory messages provides the knowledge incorporated in the maps or models. (2) Some of this knowledge can be obtained by observations of mean, variance, and covariance of sensory messages, and perhaps also by a method called “minimum entropy coding.” (3) Such knowledge may be incorporated in a model of “what usually happens” with which incoming messages are automatically compared, enabling unexpected discrepancies to be immediately identified. (4) Knowledge of the sort incorporated into such a filter is a necessary prerequisite of ordinary learning, and a representation whose elements are independent makes it possible to form associations with logical functions of the elements, not just with the elements themselves.
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Zhuo, Min Huang, Xiao-Long Huang, Fei Man, Jia-Ming Dou, and Jian-li Lyu. "Unsupervised Learning of Depth and Ego-Motion from Continuous Monocular Images." 電腦學刊 32, no. 6 (December 2021): 038–51. http://dx.doi.org/10.53106/199115992021123206004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Watkin, T. L. H., and J. P. Nadal. "Optimal unsupervised learning." Journal of Physics A: Mathematical and General 27, no. 6 (March 21, 1994): 1899–915. http://dx.doi.org/10.1088/0305-4470/27/6/016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sanger, T. "Optimal unsupervised learning." Neural Networks 1 (January 1988): 127. http://dx.doi.org/10.1016/0893-6080(88)90166-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Guoyang, Xiaojun Wu, and Tianyang Xu. "基于深度学习的无监督红外图像与可见光图像融合算法" [Deep Learning-Based Unsupervised Infrared and Visible Image Fusion Algorithm]. Laser & Optoelectronics Progress 59, no. 4 (2022): 0410010. http://dx.doi.org/10.3788/lop202259.0410010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Zhang, Xuejun, Jiyang Gai, Zhili Ma, Jinxiong Zhao, Hongzhong Ma, Fucun He, and Tao Ju. "Exploring Unsupervised Learning with Clustering and Deep Autoencoder to Detect DDoS Attack." 電腦學刊 33, no. 4 (August 2022): 029–44. http://dx.doi.org/10.53106/199115992022083304003.

Full text
Abstract:
With the proliferation of services available on the Internet, network attacks have become one of the most serious issues. The distributed denial of service (DDoS) attack is such a devastating attack, which poses an enormous threat to network communication and applications and easily disrupts services. To defend against DDoS attacks effectively, this paper proposes a novel DDoS attack detection method that trains detection models in an unsupervised learning manner using preprocessed and unlabeled normal network traffic data, which can not only avoid the impact of unbalanced training data on detection model performance but also detect unknown attacks. Specifically, the proposed method first uses the Balanced Iterative Reducing and Clustering Using Hierarchies (BIRCH) algorithm to pre-cluster the normal network traffic data, and then uses an autoencoder (AE) to build the detection model in an unsupervised manner based on the cluster subsets. To verify the performance of our method, we perform experiments on the benchmark network intrusion detection datasets KDDCUP99 and UNSWNB15. The results show that, compared with state-of-the-art DDoS detection models that use supervised and unsupervised learning, our proposed method achieves better performance in terms of detection accuracy rate and false positive rate (FPR).
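The detection scheme this abstract outlines (model normal traffic only, in clusters, and flag traffic the model explains poorly) can be sketched as below. This is a toy sketch on synthetic data, not the paper's implementation: k-means stands in for BIRCH, and a distance-to-nearest-centroid score stands in for per-cluster autoencoder reconstruction error; all feature values and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for preprocessed, unlabeled normal traffic features (training set).
normal = rng.normal(0.0, 1.0, size=(500, 4))
normal[250:] += 6.0   # normal traffic itself has two distinct modes

def kmeans(X, k, iters=50, seed=0):
    """k-means stands in here for BIRCH pre-clustering of normal traffic."""
    r = np.random.default_rng(seed)
    c = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        a = np.linalg.norm(X[:, None] - c[None], axis=2).argmin(axis=1)
        c = np.array([X[a == j].mean(axis=0) for j in range(k)])
    return c

centroids = kmeans(normal, k=2)

def score(X):
    """Anomaly score: distance to the nearest learned mode of normal traffic
    (a crude stand-in for per-cluster autoencoder reconstruction error)."""
    return np.linalg.norm(X[:, None] - centroids[None], axis=2).min(axis=1)

# Threshold chosen from normal training traffic alone: no attack labels needed.
threshold = np.quantile(score(normal), 0.99)

test_traffic = np.vstack([rng.normal(0.0, 1.0, size=(20, 4)),    # unseen normal
                          rng.normal(20.0, 1.0, size=(20, 4))])  # flood-like outliers
flags = score(test_traffic) > threshold
```

Because the threshold is fitted to normal traffic only, the detector needs no labeled attacks and can in principle flag previously unseen attack types, which is the property the abstract emphasizes.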
APA, Harvard, Vancouver, ISO, and other styles
12

Li, Changsheng, Kaihang Mao, Lingyan Liang, Dongchun Ren, Wei Zhang, Ye Yuan, and Guoren Wang. "Unsupervised Active Learning via Subspace Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8332–39. http://dx.doi.org/10.1609/aaai.v35i9.17013.

Full text
Abstract:
Unsupervised active learning has been an active research topic in the machine learning community, with the purpose of choosing representative samples to be labelled in an unsupervised manner. Previous works usually take the minimization of data reconstruction loss as the criterion to select representative samples which can better approximate the original inputs. However, in many scenarios data are drawn from low-dimensional subspaces embedded in an arbitrary high-dimensional space, so attempting to precisely reconstruct all entries of one observation may introduce severe noise, leading to a suboptimal solution. In view of this, this paper proposes a novel unsupervised Active Learning model via Subspace Learning, called ALSL. In contrast to previous approaches, ALSL aims to discover the low-rank structures of data, and then performs sample selection based on the learnt low-rank representations. To this end, we devise two different strategies and propose two corresponding formulations to perform unsupervised active learning with and under low-rank sample representations, respectively. Since the proposed formulations involve several non-smooth regularization terms, we develop a simple but effective optimization procedure to solve them. Extensive experiments are performed on five publicly available datasets, and the results demonstrate that the first formulation achieves performance comparable with the state of the art, while the second formulation significantly outperforms it, achieving up to a 13% improvement over the second-best baseline.
APA, Harvard, Vancouver, ISO, and other styles
13

He, Shuncheng, Yuhang Jiang, Hongchang Zhang, Jianzhun Shao, and Xiangyang Ji. "Wasserstein Unsupervised Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6884–92. http://dx.doi.org/10.1609/aaai.v36i6.20645.

Full text
Abstract:
Unsupervised reinforcement learning aims to train agents to learn a handful of policies or skills in environments without external reward. These pre-trained policies can accelerate learning when endowed with external reward, and can also be used as primitive options in hierarchical reinforcement learning. Conventional approaches to unsupervised skill discovery feed a latent variable to the agent and shed its empowerment on the agent's behavior by mutual information (MI) maximization. However, the policies learned by MI-based methods cannot sufficiently explore the state space, even though they can be successfully distinguished from one another. Therefore we propose a new framework, Wasserstein unsupervised reinforcement learning (WURL), in which we directly maximize the distance between the state distributions induced by different policies. Additionally, we overcome the difficulties of simultaneously training N (N > 2) policies and of amortizing the overall reward to each step. Experiments show that policies learned by our approach outperform MI-based methods on the metric of Wasserstein distance while keeping high discriminability. Furthermore, agents trained by WURL can sufficiently explore the state space in mazes and MuJoCo tasks, and the pre-trained policies can be applied to downstream tasks by hierarchical learning.
APA, Harvard, Vancouver, ISO, and other styles
14

Kosko, B. "Unsupervised learning in noise." IEEE Transactions on Neural Networks 1, no. 1 (March 1990): 44–57. http://dx.doi.org/10.1109/72.80204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hentschel, H. G. E., and Z. Jiang. "Prediction using unsupervised learning." Physica D: Nonlinear Phenomena 67, no. 1-3 (August 1993): 151–65. http://dx.doi.org/10.1016/0167-2789(93)90203-d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Hammarström, Harald, and Lars Borin. "Unsupervised Learning of Morphology." Computational Linguistics 37, no. 2 (June 2011): 309–50. http://dx.doi.org/10.1162/coli_a_00050.

Full text
Abstract:
This article surveys work on Unsupervised Learning of Morphology. We define Unsupervised Learning of Morphology as the problem of inducing a description (of some kind, even if only a morpheme segmentation) of how orthographic words are built up, given only raw text data of a language. We briefly go through the history and motivation of this problem. Next, over 200 items of work are listed with a brief characterization, and the most important ideas in the field are critically discussed. We summarize the achievements so far and give pointers for future developments.
APA, Harvard, Vancouver, ISO, and other styles
17

Reimann, P. "Unsupervised learning of distributions." Europhysics Letters (EPL) 40, no. 3 (November 1, 1997): 251–56. http://dx.doi.org/10.1209/epl/i1997-00456-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Sibyan, Hidayatus, Wildan Suharso, Edi Suharto, Melda Agnes Manuhutu, and Agus Perdana Windarto. "Optimization of Unsupervised Learning in Machine Learning." Journal of Physics: Conference Series 1783, no. 1 (February 1, 2021): 012034. http://dx.doi.org/10.1088/1742-6596/1783/1/012034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Xu, Hui, Jiaxing Wang, Hao Li, Deqiang Ouyang, and Jie Shao. "Unsupervised meta-learning for few-shot learning." Pattern Recognition 116 (August 2021): 107951. http://dx.doi.org/10.1016/j.patcog.2021.107951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sharma, Ritu. "Study of Supervised Learning and Unsupervised Learning." International Journal for Research in Applied Science and Engineering Technology 8, no. 6 (June 30, 2020): 588–93. http://dx.doi.org/10.22214/ijraset.2020.6095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Kranjcic, N. "Unsupervised Classification for Illegal Building Monitoring." Open Access Journal of Waste Management & Xenobiotics 4, no. 1 (January 26, 2021): 1–5. http://dx.doi.org/10.23880/oajwx-16000157.

Full text
Abstract:
In 2013, the Ministry of Construction and Physical Planning passed an act under which all illegally built objects must be legalized. To date, almost 75% of legalization requests have been resolved, and it is expected that by the end of 2019 all of the illegally built objects will be legalized. In order to prevent further construction of illegal objects, the Ministry of Construction and Physical Planning is seeking a way to easily detect the start of illegal construction. Since Copernicus satellite images are available free of charge and at a resolution of 10 m, it should be possible to detect such objects. This paper provides an analysis of Copernicus Sentinel-2A imagery for this purpose, based on unsupervised classification using machine learning. If the procedure achieves satisfactory accuracy, a model for automating the monitoring of illegal building construction based on Sentinel-2A imagery will be proposed.
APA, Harvard, Vancouver, ISO, and other styles
22

Chua, Sook-Ling, Stephen Marsland, and Hans Guesgen. "Unsupervised Learning of Human Behaviours." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 319–24. http://dx.doi.org/10.1609/aaai.v25i1.7911.

Full text
Abstract:
Behaviour recognition is the process of inferring the behaviour of an individual from a series of observations acquired from sensors such as in a smart home. The majority of existing behaviour recognition systems are based on supervised learning algorithms, which means that training them requires a preprocessed, annotated dataset. Unfortunately, annotating a dataset is a rather tedious process and one that is prone to error. In this paper we suggest a way to identify structure in the data based on text compression and the edit distance between words, without any prior labelling. We demonstrate that by using this method we can automatically identify patterns and segment the data into patterns that correspond to human behaviours. To evaluate the effectiveness of our proposed method, we use a dataset from a smart home and compare the labels produced by our approach with the labels assigned by a human to the activities in the dataset. We find that the results are promising and show significant improvement in the recognition accuracy over Self-Organising Maps (SOMs).
APA, Harvard, Vancouver, ISO, and other styles
23

Mo, Yujie, Liang Peng, Jie Xu, Xiaoshuang Shi, and Xiaofeng Zhu. "Simple Unsupervised Graph Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7797–805. http://dx.doi.org/10.1609/aaai.v36i7.20748.

Full text
Abstract:
In this paper, we propose a simple unsupervised graph representation learning method to conduct effective and efficient contrastive learning. Specifically, the proposed multiplet loss explores the complementary information between the structural information and neighbor information to enlarge the inter-class variation, and adds an upper-bound loss to achieve a finite distance between positive embeddings and anchor embeddings, reducing the intra-class variation. As a result, both enlarging the inter-class variation and reducing the intra-class variation yield a small generalization error, and thereby an effective model. Furthermore, our method removes the data augmentation and discriminator components widely used in previous graph contrastive learning methods, while still outputting low-dimensional embeddings, leading to an efficient model. Experimental results on various real-world datasets demonstrate the effectiveness and efficiency of our method compared to state-of-the-art methods. The source code is released at https://github.com/YujieMo/SUGRL.
APA, Harvard, Vancouver, ISO, and other styles
24

Lotfi, Ismail, Lamiae Megzari, and Abdelhamid Bouhadi. "Asset allocation by Unsupervised Learning." Review of Economics and Finance 19 (2021): 338–46. http://dx.doi.org/10.55365/1923.x2021.19.34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Zhao, Tingting, Zifeng Wang, Aria Masoomi, and Jennifer Dy. "Deep Bayesian Unsupervised Lifelong Learning." Neural Networks 149 (May 2022): 95–106. http://dx.doi.org/10.1016/j.neunet.2022.02.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Martínez-Toro, Gabriel Mauricio, Dewar Rico-Bautista, Efrén Romero-Riaño, and Paola Andrea Romero-Riaño. "Unsupervised learning: application to epilepsy." Revista Colombiana de Computación 20, no. 2 (December 1, 2019): 20–27. http://dx.doi.org/10.29375/25392115.3718.

Full text
Abstract:
Epilepsy is a neurological disorder characterized by recurrent seizures. The primary objective is to present an analysis of the results shown in the training data simulation charts. Data were collected by means of the 10–20 system, an internationally recognized method of describing and applying the locations of scalp electrodes in the context of an EEG exam. The analysis shows the differences obtained between the generated tests and the anomalies of the test data relative to the training data. Finally, the results are interpreted and the efficacy of the procedure is discussed.
APA, Harvard, Vancouver, ISO, and other styles
27

Gao, Jiabao, Caijun Zhong, Xiaoming Chen, Hai Lin, and Zhaoyang Zhang. "Unsupervised Learning for Passive Beamforming." IEEE Communications Letters 24, no. 5 (May 2020): 1052–56. http://dx.doi.org/10.1109/lcomm.2020.2965532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Fan, Qingnan, Jiaolong Yang, David Wipf, Baoquan Chen, and Xin Tong. "Image smoothing via unsupervised learning." ACM Transactions on Graphics 37, no. 6 (January 10, 2019): 1–14. http://dx.doi.org/10.1145/3272127.3275081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Wu, Kuo Lung. "Unsupervised Kernel Learning Vector Quantization." Advanced Engineering Forum 6-7 (September 2012): 243–49. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.243.

Full text
Abstract:
In this paper, we propose an unsupervised kernel learning vector quantization (UKLVQ) algorithm that combines the concepts of the kernel method and traditional unsupervised learning vector quantization (ULVQ). We first use the definition of the shadow kernel to give a general representation of the UKLVQ method, and then implement the UKLVQ algorithm with a well-defined objective function in which traditional ULVQ becomes a special case of UKLVQ. We also analyze the robustness of our proposed learning algorithm by means of a sensitivity curve. In our simulations, UKLVQ with a Gaussian kernel has a bounded sensitivity curve and is thus robust to noise. The robustness and accuracy of the proposed UKLVQ algorithm are also demonstrated via numerical examples.
APA, Harvard, Vancouver, ISO, and other styles
30

Witten, Daniela M. "Penalized unsupervised learning with outliers." Statistics and Its Interface 6, no. 2 (2013): 211–21. http://dx.doi.org/10.4310/sii.2013.v6.n2.a5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Pothos, Emmanuel M., and Nick Chater. "Unsupervised Categorization and Category Learning." Quarterly Journal of Experimental Psychology Section A 58, no. 4 (May 2005): 733–52. http://dx.doi.org/10.1080/02724980443000322.

Full text
Abstract:
When people categorize a set of items in a certain way, they often change their perceptions of these items so that they become more compatible with the learned categorization. In two experiments we examined whether such changes are extensive enough to alter the unsupervised categorization of the items, that is, the categorization of the items that is considered more intuitive or natural without any learning. In Experiment 1 we directly employed an unsupervised categorization task; in Experiment 2 we collected similarity ratings for the items and inferred unsupervised categorizations using Pothos and Chater's (2002) model of unsupervised categorization. The unsupervised categorization of the items changed to more closely resemble the learned one when the latter was specified by the suppression of a stimulus dimension (both experiments), but less so when it was only almost specified by the suppression of a stimulus dimension (Experiment 1, with a nonsignificant trend in Experiment 2). By contrast, no changes in the unsupervised categorization were observed when participants were taught a classification that was specified by a finer tuning of the relative salience of the two dimensions.
APA, Harvard, Vancouver, ISO, and other styles
32

Yang Song, L. Goncalves, and P. Perona. "Unsupervised learning of human motion." IEEE Transactions on Pattern Analysis and Machine Intelligence 25, no. 7 (July 2003): 814–27. http://dx.doi.org/10.1109/tpami.2003.1206511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Clapper, John P., and Gordon H. Bower. "Category invention in unsupervised learning." Journal of Experimental Psychology: Learning, Memory, and Cognition 20, no. 2 (March 1994): 443–60. http://dx.doi.org/10.1037/0278-7393.20.2.443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Clapper, John P., and Gordon H. Bower. "Adaptive categorization in unsupervised learning." Journal of Experimental Psychology: Learning, Memory, and Cognition 28, no. 5 (September 2002): 908–23. http://dx.doi.org/10.1037/0278-7393.28.5.908.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Luo, Jiaming, Karthik Narasimhan, and Regina Barzilay. "Unsupervised Learning of Morphological Forests." Transactions of the Association for Computational Linguistics 5 (December 2017): 353–64. http://dx.doi.org/10.1162/tacl_a_00066.

Full text
Abstract:
This paper focuses on unsupervised modeling of morphological families, collectively comprising a forest over the language vocabulary. This formulation enables us to capture edge-wise properties reflecting single-step morphological derivations, along with global distributional properties of the entire forest. These global properties constrain the size of the affix set and encourage formation of tight morphological families. The resulting objective is solved using Integer Linear Programming (ILP) paired with contrastive estimation. We train the model by alternating between optimizing the local log-linear model and the global ILP objective. We evaluate our system on three tasks: root detection, clustering of morphological families, and segmentation. Our experiments demonstrate that our model yields consistent gains in all three tasks compared with the best published results.
APA, Harvard, Vancouver, ISO, and other styles
36

Montemezzani, Germano, Gan Zhou, and Dana Z. Anderson. "Unsupervised Learning of Temporal Features." Optics and Photonics News 5, no. 12 (December 1, 1994): 38. http://dx.doi.org/10.1364/opn.5.12.000038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Shah, Swapnil Nitin. "Variational approach to unsupervised learning." Journal of Physics Communications 3, no. 7 (July 18, 2019): 075006. http://dx.doi.org/10.1088/2399-6528/ab3029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lake, B. M., G. K. Vallabha, and J. L. McClelland. "Modeling Unsupervised Perceptual Category Learning." IEEE Transactions on Autonomous Mental Development 1, no. 1 (May 2009): 35–43. http://dx.doi.org/10.1109/tamd.2009.2021703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Deco, Gustavo, and Lucas Parra. "Unsupervised learning for Boltzman Machines." Network: Computation in Neural Systems 6, no. 3 (January 1995): 437–48. http://dx.doi.org/10.1088/0954-898x_6_3_009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hiles, B. P., N. Intrator, and S. Edelman. "Unsupervised learning of visual structure." Journal of Vision 2, no. 7 (March 15, 2010): 74. http://dx.doi.org/10.1167/2.7.74.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Solan, Z., D. Horn, E. Ruppin, and S. Edelman. "Unsupervised learning of natural languages." Proceedings of the National Academy of Sciences 102, no. 33 (August 8, 2005): 11629–34. http://dx.doi.org/10.1073/pnas.0409746102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Mietzner, A., M. Opper, and W. Kinzel. "Maximal stability in unsupervised learning." Journal of Physics A: Mathematical and General 28, no. 10 (May 21, 1995): 2785–97. http://dx.doi.org/10.1088/0305-4470/28/10/011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Roohi, Adil, Kevin Faust, Ugljesa Djuric, and Phedias Diamandis. "Unsupervised Machine Learning in Pathology." Surgical Pathology Clinics 13, no. 2 (June 2020): 349–58. http://dx.doi.org/10.1016/j.path.2020.01.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Oja, Erkki. "Unsupervised learning in neural computation." Theoretical Computer Science 287, no. 1 (September 2002): 187–207. http://dx.doi.org/10.1016/s0304-3975(02)00160-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Powers, David. "Unsupervised Learning of Linguistic Structure." International Journal of Corpus Linguistics 2, no. 1 (January 1, 1997): 91–131. http://dx.doi.org/10.1075/ijcl.2.1.06pow.

Full text
Abstract:
Computational Linguistics and Natural Language have long been targets for Machine Learning, and a variety of learning paradigms and techniques have been employed with varying degrees of success. In this paper, we review approaches which have adopted an unsupervised learning paradigm, explore the assumptions which underlie the techniques used, and develop an approach to empirical evaluation. We concentrate on a statistical framework based on N-grams, although we seek to maintain neurolinguistic plausibility. The model we adopt places putative linguistic units in focus and associates them with a characteristic vector of statistics derived from occurrence frequency. These vectors are treated as defining a hyperspace, within which we demonstrate a technique for examining the empirical utility of the various metrics and normalization, visualization, and clustering techniques proposed in the literature. We conclude with an evaluation of the relative utility of a large array of different metrics and processing techniques in relation to our defined performance criteria.
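The N-gram framework described in this abstract (putative linguistic units characterized by vectors of occurrence statistics, compared in a hyperspace with similarity metrics) can be illustrated with a minimal sketch. This is our own toy example, not Powers's system: the boundary marker, the bigram choice, and cosine as the metric are all illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def ngram_vector(word, n=2):
    """Characteristic vector for a putative unit: character n-gram counts."""
    padded = f"#{word}#"   # '#' marks word boundaries (an assumption here)
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def cosine(u, v):
    """One candidate similarity metric in the n-gram hyperspace."""
    dot = sum(u[k] * v[k] for k in u)
    return dot / (sqrt(sum(c * c for c in u.values())) *
                  sqrt(sum(c * c for c in v.values())))

# Morphologically related words share many n-grams; unrelated words share few.
sim_related = cosine(ngram_vector("walking"), ngram_vector("walked"))
sim_unrelated = cosine(ngram_vector("walking"), ngram_vector("zebra"))
```

Clustering words by such similarities is one way units with shared structure can emerge from raw text alone, which is the kind of empirical evaluation of metrics and clustering techniques the survey discusses.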
APA, Harvard, Vancouver, ISO, and other styles
46

Botelho, Fernanda, and Annita Davis. "Stability behavior for unsupervised learning." Physica D: Nonlinear Phenomena 243, no. 1 (January 2013): 111–15. http://dx.doi.org/10.1016/j.physd.2012.10.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Anselmi, Fabio, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso Poggio. "Unsupervised learning of invariant representations." Theoretical Computer Science 633 (June 2016): 112–21. http://dx.doi.org/10.1016/j.tcs.2015.06.048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wescoat, Ethan, Matthew Krugh, Andrew Henderson, Josh Goodnough, and Laine Mears. "Vibration Analysis Utilizing Unsupervised Learning." Procedia Manufacturing 34 (2019): 876–84. http://dx.doi.org/10.1016/j.promfg.2019.06.160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Pagan, Darren C., Thien Q. Phan, Jordan S. Weaver, Austin R. Benson, and Armand J. Beaudoin. "Unsupervised learning of dislocation motion." Acta Materialia 181 (December 2019): 510–18. http://dx.doi.org/10.1016/j.actamat.2019.10.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Biehl, M., and A. Mietzner. "Statistical Mechanics of Unsupervised Learning." Europhysics Letters (EPL) 24, no. 5 (November 10, 1993): 421–26. http://dx.doi.org/10.1209/0295-5075/24/5/017.

Full text
APA, Harvard, Vancouver, ISO, and other styles