Journal articles on the topic "Unsupervised learning"

To see other types of publications on this topic, follow the link: Unsupervised learning.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the top 50 journal articles for your research on the topic "Unsupervised learning".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the publication as a PDF and read its abstract online whenever that information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Fong, A. C. M., and G. Hong. "Boosted Supervised Intensional Learning Supported by Unsupervised Learning". International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 98–102. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1020.

Abstract:
Traditionally, supervised machine learning (ML) algorithms rely heavily on large sets of annotated data. This is especially true for deep learning (DL) neural networks, which need huge annotated data sets for good performance. However, large volumes of annotated data are not always readily available. In addition, some of the best-performing ML and DL algorithms lack explainability: it is often difficult even for domain experts to interpret the results. This is an important consideration especially in safety-critical applications, such as AI-assisted medical endeavors, in which a DL model's failure modes are not well understood. This lack of explainability also increases the risk of malicious attacks by adversarial actors, because such actions can become obscured in a decision-making process that lacks transparency. This paper describes an intensional learning approach which uses boosting to enhance prediction performance while minimizing reliance on the availability of annotated data. The intensional information is derived from an unsupervised learning preprocessing step involving clustering. Preliminary evaluation on the MNIST data set has shown encouraging results. Specifically, using the proposed approach, it is now possible to achieve accuracy similar to extensional learning alone while using only a small fraction of the original training data set.
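The core idea in this abstract, deriving information from an unsupervised clustering step so that only a small fraction of annotated data is needed, can be illustrated with a toy sketch. This is not the authors' algorithm (there is no boosting here, and the data, labels, and cluster count are invented for illustration); it only shows how cluster structure lets a handful of labels cover a whole dataset:

```python
import math
import random

random.seed(0)

def kmeans(points, k, iters=20):
    """Plain k-means; init is deterministic here (first and last point)."""
    centroids = [points[0], points[-1]]
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids

# Unlabeled data: two well-separated blobs.
data = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(50)] +
        [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(50)])
centroids = kmeans(data, k=2)

# One annotated example per cluster stands in for the "small fraction"
# of the training set; its label is propagated to the whole cluster.
labeled = [((0.1, -0.2), "digit-A"), ((5.1, 4.9), "digit-B")]
cluster_label = {}
for x, y in labeled:
    cluster_label[min(range(2), key=lambda j: math.dist(x, centroids[j]))] = y

def predict(p):
    return cluster_label[min(range(2), key=lambda j: math.dist(p, centroids[j]))]

print(predict((0.0, 0.0)), predict((5.0, 5.0)))
```

With two labeled points standing in for 100 training samples, every remaining point inherits a label from its cluster.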
2

Xu, Mingle, Sook Yoon, Jaesu Lee and Dong Sun Park. "Unsupervised Transfer Learning for Plant Anomaly Recognition". Korean Institute of Smart Media 11, no. 4 (31 May 2022): 30–37. http://dx.doi.org/10.30693/smj.2022.11.4.30.

Abstract:
Disease threatens plant growth, and recognizing the type of disease is essential to finding a remedy. In recent years, deep learning has brought significant improvements to this task; however, a large volume of labeled images is required for decent performance, and annotated images are difficult and expensive to obtain in the agricultural field. Designing an efficient and effective strategy with few labeled data is therefore one of the challenges in this area. Transfer learning, which carries knowledge from a source domain to a target domain, has been borrowed to address this issue, with comparable results. However, current transfer learning strategies can be regarded as supervised methods, as they assume many labeled images in the source domain. In contrast, unsupervised transfer learning, using only unlabeled images in the source domain, is more convenient, as collecting images is much easier than annotating them. In this paper, we leverage unsupervised transfer learning for plant disease recognition and achieve better performance than supervised transfer learning in many cases. In addition, a vision transformer, with a bigger model capacity than a convolutional network, is utilized to obtain a better pretrained feature space. With vision-transformer-based unsupervised transfer learning, we achieve better results than current works on two datasets. In particular, we obtain 97.3% accuracy with only 30 training images per class on the Plant Village dataset. We hope that our work encourages the community to pay attention to vision-transformer-based unsupervised transfer learning in the agricultural field when few labeled images are available.
3

Kruglov, Artem V. "The Unsupervised Learning Algorithm for Detecting Ellipsoid Objects". International Journal of Machine Learning and Computing 9, no. 3 (June 2019): 255–60. http://dx.doi.org/10.18178/ijmlc.2019.9.3.795.
4

Shi, Chengming, Bo Luo, Hongqi Li, Bin Li, Xinyong Mao and Fangyu Peng. "Anomaly Detection via Unsupervised Learning for Tool Breakage Monitoring". International Journal of Machine Learning and Computing 6, no. 5 (October 2016): 256–59. http://dx.doi.org/10.18178/ijmlc.2016.6.5.607.
5

Banzi, Jamal, Isack Bulugu and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation". International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.
6

Barlow, H. B. "Unsupervised Learning". Neural Computation 1, no. 3 (September 1989): 295–311. http://dx.doi.org/10.1162/neco.1989.1.3.295.

Abstract:
What use can the brain make of the massive flow of sensory information that occurs without any associated rewards or punishments? This question is reviewed in the light of connectionist models of unsupervised learning and some older ideas, namely the cognitive maps and working models of Tolman and Craik, and the idea that redundancy is important for understanding perception (Attneave 1954), the physiology of sensory pathways (Barlow 1959), and pattern recognition (Watanabe 1960). It is argued that (1) The redundancy of sensory messages provides the knowledge incorporated in the maps or models. (2) Some of this knowledge can be obtained by observations of mean, variance, and covariance of sensory messages, and perhaps also by a method called “minimum entropy coding.” (3) Such knowledge may be incorporated in a model of “what usually happens” with which incoming messages are automatically compared, enabling unexpected discrepancies to be immediately identified. (4) Knowledge of the sort incorporated into such a filter is a necessary prerequisite of ordinary learning, and a representation whose elements are independent makes it possible to form associations with logical functions of the elements, not just with the elements themselves.
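Point (3) of this abstract, a model of "what usually happens" with which incoming messages are automatically compared so that unexpected discrepancies stand out, can be sketched in a few lines. The data and the 3-sigma threshold below are invented for illustration:

```python
import statistics

# "Sensory messages" observed without any labels: learn a model of what
# usually happens from their mean and variance alone.
normal = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
mu = statistics.mean(normal)
sigma = statistics.stdev(normal)

def unexpected(x, threshold=3.0):
    """Flag a message whose discrepancy from the learned model is large."""
    return abs(x - mu) / sigma > threshold

print(unexpected(10.1), unexpected(14.0))
```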
7

Wang, Zhuo, Min Huang, Xiao-Long Huang, Fei Man, Jia-Ming Dou and Jian-li Lyu. "Unsupervised Learning of Depth and Ego-Motion from Continuous Monocular Images". 電腦學刊 (Journal of Computers) 32, no. 6 (December 2021): 38–51. http://dx.doi.org/10.53106/199115992021123206004.
8

Watkin, T. L. H., and J. P. Nadal. "Optimal unsupervised learning". Journal of Physics A: Mathematical and General 27, no. 6 (21 March 1994): 1899–915. http://dx.doi.org/10.1088/0305-4470/27/6/016.
9

Sanger, T. "Optimal unsupervised learning". Neural Networks 1 (January 1988): 127. http://dx.doi.org/10.1016/0893-6080(88)90166-9.
10

Chen, Guoyang (陈国洋), Xiaojun Wu (吴小俊) and Tianyang Xu (徐天阳). "基于深度学习的无监督红外图像与可见光图像融合算法" [Unsupervised infrared and visible image fusion algorithm based on deep learning]. Laser & Optoelectronics Progress 59, no. 4 (2022): 0410010. http://dx.doi.org/10.3788/lop202259.0410010.
11

Zhang, Xuejun, Jiyang Gai, Zhili Ma, Jinxiong Zhao, Hongzhong Ma, Fucun He and Tao Ju. "Exploring Unsupervised Learning with Clustering and Deep Autoencoder to Detect DDoS Attack". 電腦學刊 (Journal of Computers) 33, no. 4 (August 2022): 29–44. http://dx.doi.org/10.53106/199115992022083304003.

Abstract:
With the proliferation of services available on the Internet, network attacks have become one of the most serious issues. The distributed denial of service (DDoS) attack is such a devastating attack: it poses an enormous threat to network communication and applications and easily disrupts services. To defend against DDoS attacks effectively, this paper proposes a novel DDoS attack detection method that trains detection models in an unsupervised manner using preprocessed, unlabeled normal network traffic data, which not only avoids the impact of unbalanced training data on detection performance but also detects unknown attacks. Specifically, the proposed method first uses the Balanced Iterative Reducing and Clustering Using Hierarchies (BIRCH) algorithm to pre-cluster the normal network traffic data, and then uses an autoencoder (AE) to build the detection model in an unsupervised manner on the cluster subsets. To verify the performance of our method, we perform experiments on the benchmark network intrusion detection datasets KDDCUP99 and UNSW-NB15. The results show that, compared with state-of-the-art DDoS detection models based on supervised and unsupervised learning, the proposed method achieves better detection accuracy and false positive rate (FPR).
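A drastically simplified sketch of the pipeline this abstract describes: cluster normal traffic, then flag test samples that the model of normal traffic cannot account for. Plain k-means stands in for BIRCH here, and a distance-to-centroid threshold stands in for the autoencoder's reconstruction error; the two-feature "traffic" data and the 1.5x slack factor are invented:

```python
import math
import random

random.seed(1)

# Train only on (unlabeled) normal traffic, as in the unsupervised setting.
normal = ([(random.gauss(1, 0.1), random.gauss(1, 0.1)) for _ in range(40)] +
          [(random.gauss(4, 0.1), random.gauss(4, 0.1)) for _ in range(40)])

# Two clusters, deterministically initialised from each traffic mode.
cents = [normal[0], normal[-1]]
for _ in range(15):
    groups = [[] for _ in cents]
    for p in normal:
        groups[min(range(len(cents)), key=lambda j: math.dist(p, cents[j]))].append(p)
    cents = [tuple(sum(v) / len(g) for v in zip(*g)) for g in groups]

def score(p):
    """Distance to the nearest cluster centre (reconstruction-error stand-in)."""
    return min(math.dist(p, c) for c in cents)

# Threshold: worst score seen on normal traffic, with some slack.
threshold = 1.5 * max(score(p) for p in normal)

def is_attack(p):
    return score(p) > threshold

print(is_attack((1.0, 1.05)), is_attack((9.0, 0.0)))
```

Because the threshold is calibrated on normal data only, anything far from every cluster of normal behaviour is flagged, including attack types never seen before.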
12

Li, Changsheng, Kaihang Mao, Lingyan Liang, Dongchun Ren, Wei Zhang, Ye Yuan and Guoren Wang. "Unsupervised Active Learning via Subspace Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (18 May 2021): 8332–39. http://dx.doi.org/10.1609/aaai.v35i9.17013.

Abstract:
Unsupervised active learning has been an active research topic in the machine learning community, with the purpose of choosing representative samples to be labelled in an unsupervised manner. Previous works usually take the minimization of data reconstruction loss as the criterion for selecting representative samples that can better approximate the original inputs. However, in many scenarios data are drawn from low-dimensional subspaces embedded in an arbitrary high-dimensional space, so attempting to precisely reconstruct all entries of one observation may introduce severe noise, leading to a suboptimal solution. In view of this, this paper proposes a novel unsupervised Active Learning model via Subspace Learning, called ALSL. In contrast to previous approaches, ALSL aims to discover the low-rank structures of data and then perform sample selection based on the learnt low-rank representations. To this end, we devise two different strategies and propose two corresponding formulations to perform unsupervised active learning with and under low-rank sample representations, respectively. Since the proposed formulations involve several non-smooth regularization terms, we develop a simple but effective optimization procedure to solve them. Extensive experiments are performed on five publicly available datasets, and the results demonstrate that the first formulation achieves performance comparable with the state of the art, while the second significantly outperforms it, achieving up to a 13% improvement over the second-best baseline.
13

He, Shuncheng, Yuhang Jiang, Hongchang Zhang, Jianzhun Shao and Xiangyang Ji. "Wasserstein Unsupervised Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (28 June 2022): 6884–92. http://dx.doi.org/10.1609/aaai.v36i6.20645.

Abstract:
Unsupervised reinforcement learning aims to train agents to learn a handful of policies or skills in environments without external reward. These pre-trained policies can accelerate learning when endowed with external reward, and can also be used as primitive options in hierarchical reinforcement learning. Conventional approaches to unsupervised skill discovery feed a latent variable to the agent and shape the agent's behavior by mutual information (MI) maximization. However, the policies learned by MI-based methods cannot sufficiently explore the state space, even though they can be successfully identified from each other. We therefore propose a new framework, Wasserstein unsupervised reinforcement learning (WURL), in which we directly maximize the distance between the state distributions induced by different policies. Additionally, we overcome the difficulties of simultaneously training N (N>2) policies and of amortizing the overall reward to each step. Experiments show that policies learned by our approach outperform MI-based methods on the metric of Wasserstein distance while keeping high discriminability. Furthermore, agents trained by WURL can sufficiently explore the state space in mazes and MuJoCo tasks, and the pre-trained policies can be applied to downstream tasks by hierarchical learning.
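The quantity WURL maximizes, a Wasserstein distance between the state distributions induced by different policies, has a particularly simple closed form in one dimension with equally many samples: sort both sample sets and average the absolute differences. A sketch of that quantity (the paper itself works with high-dimensional states and amortized estimates, which this toy example does not attempt):

```python
def wasserstein_1d(xs, ys):
    """W1 distance between two equal-size 1-D empirical distributions:
    the mean absolute difference of the sorted samples."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Hypothetical states visited by two policies; a larger distance means
# the policies occupy more distinct regions of the state space.
policy_a = [0.0, 0.1, 0.2, 0.3]
policy_b = [1.0, 1.1, 1.2, 1.3]
print(wasserstein_1d(policy_a, policy_b))
```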
14

Kosko, B. "Unsupervised learning in noise". IEEE Transactions on Neural Networks 1, no. 1 (March 1990): 44–57. http://dx.doi.org/10.1109/72.80204.
15

Hentschel, H. G. E., and Z. Jiang. "Prediction using unsupervised learning". Physica D: Nonlinear Phenomena 67, no. 1-3 (August 1993): 151–65. http://dx.doi.org/10.1016/0167-2789(93)90203-d.
16

Hammarström, Harald, and Lars Borin. "Unsupervised Learning of Morphology". Computational Linguistics 37, no. 2 (June 2011): 309–50. http://dx.doi.org/10.1162/coli_a_00050.

Abstract:
This article surveys work on Unsupervised Learning of Morphology. We define Unsupervised Learning of Morphology as the problem of inducing a description (of some kind, even if only a morpheme segmentation) of how orthographic words are built up, given only raw text data of a language. We briefly go through the history and motivation of this problem. Next, over 200 items of work are listed with a brief characterization, and the most important ideas in the field are critically discussed. We summarize the achievements so far and give pointers for future developments.
17

Reimann, P. "Unsupervised learning of distributions". Europhysics Letters (EPL) 40, no. 3 (1 November 1997): 251–56. http://dx.doi.org/10.1209/epl/i1997-00456-2.
18

Sibyan, Hidayatus, Wildan Suharso, Edi Suharto, Melda Agnes Manuhutu and Agus Perdana Windarto. "Optimization of Unsupervised Learning in Machine Learning". Journal of Physics: Conference Series 1783, no. 1 (1 February 2021): 012034. http://dx.doi.org/10.1088/1742-6596/1783/1/012034.
19

Xu, Hui, Jiaxing Wang, Hao Li, Deqiang Ouyang and Jie Shao. "Unsupervised meta-learning for few-shot learning". Pattern Recognition 116 (August 2021): 107951. http://dx.doi.org/10.1016/j.patcog.2021.107951.
20

Sharma, Ritu. "Study of Supervised Learning and Unsupervised Learning". International Journal for Research in Applied Science and Engineering Technology 8, no. 6 (30 June 2020): 588–93. http://dx.doi.org/10.22214/ijraset.2020.6095.
21

Kranjcic, N. "Unsupervised Classification for Illegal Building Monitoring". Open Access Journal of Waste Management & Xenobiotics 4, no. 1 (26 January 2021): 1–5. http://dx.doi.org/10.23880/oajwx-16000157.

Abstract:
In 2013 the Ministry of Construction and Physical Planning brought in an act under which all illegally built objects must be legalized. To date, almost 75% of legalization requests have been resolved, and it is expected that by the end of 2019 all illegally built objects will be legalized. In order to prevent further construction of illegal objects, the Ministry of Construction and Physical Planning is seeking a way to easily detect the start of illegal construction. Since Copernicus satellite images are available free of charge at a resolution of 10 m, it should be possible to detect such objects. This paper provides an analysis of Copernicus Sentinel-2A imagery for this use, based on unsupervised classification using machine learning. If the procedure yields satisfactory accuracy, a model will be proposed to automate the process of monitoring illegal building construction based on Sentinel-2A imagery.
22

Chua, Sook-Ling, Stephen Marsland and Hans Guesgen. "Unsupervised Learning of Human Behaviours". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (4 August 2011): 319–24. http://dx.doi.org/10.1609/aaai.v25i1.7911.

Abstract:
Behaviour recognition is the process of inferring the behaviour of an individual from a series of observations acquired from sensors such as in a smart home. The majority of existing behaviour recognition systems are based on supervised learning algorithms, which means that training them requires a preprocessed, annotated dataset. Unfortunately, annotating a dataset is a rather tedious process and one that is prone to error. In this paper we suggest a way to identify structure in the data based on text compression and the edit distance between words, without any prior labelling. We demonstrate that by using this method we can automatically identify patterns and segment the data into patterns that correspond to human behaviours. To evaluate the effectiveness of our proposed method, we use a dataset from a smart home and compare the labels produced by our approach with the labels assigned by a human to the activities in the dataset. We find that the results are promising and show significant improvement in the recognition accuracy over Self-Organising Maps (SOMs).
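One ingredient of the method this abstract describes, the edit distance between "words" of sensor events, is the classic Levenshtein distance. A minimal dynamic-programming sketch over hypothetical symbolized sensor streams (the symbols and streams below are invented; the compression side of the method is not shown):

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of insertions, deletions,
    and substitutions turning string a into string b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

# Hypothetical symbolized sensor streams: k=kettle, c=cupboard, f=fridge.
print(edit_distance("kcf", "kcff"))
```

Sequences produced by the same underlying behaviour tend to have a small edit distance, which is what makes the distance usable for grouping unlabelled patterns.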
23

Mo, Yujie, Liang Peng, Jie Xu, Xiaoshuang Shi and Xiaofeng Zhu. "Simple Unsupervised Graph Representation Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (28 June 2022): 7797–805. http://dx.doi.org/10.1609/aaai.v36i7.20748.

Abstract:
In this paper, we propose a simple unsupervised graph representation learning method to conduct effective and efficient contrastive learning. Specifically, the proposed multiplet loss explores the complementary information between the structural information and neighbor information to enlarge the inter-class variation, and adds an upper-bound loss to enforce a finite distance between positive embeddings and anchor embeddings, reducing the intra-class variation. Enlarging the inter-class variation and reducing the intra-class variation together yield a small generalization error, and thereby an effective model. Furthermore, our method removes the data augmentation and discriminator widely used in previous graph contrastive learning methods, while still outputting low-dimensional embeddings, leading to an efficient model. Experimental results on various real-world datasets demonstrate the effectiveness and efficiency of our method compared to state-of-the-art methods. The source codes are released at https://github.com/YujieMo/SUGRL.
24

Lotfi, Ismail, Lamiae Megzari and Abdelhamid Bouhadi. "Asset allocation by Unsupervised Learning". Review of Economics and Finance 19 (2021): 338–46. http://dx.doi.org/10.55365/1923.x2021.19.34.
25

Zhao, Tingting, Zifeng Wang, Aria Masoomi and Jennifer Dy. "Deep Bayesian Unsupervised Lifelong Learning". Neural Networks 149 (May 2022): 95–106. http://dx.doi.org/10.1016/j.neunet.2022.02.001.
26

Martínez-Toro, Gabriel Mauricio, Dewar Rico-Bautista, Efrén Romero-Riaño and Paola Andrea Romero-Riaño. "Unsupervised learning: application to epilepsy". Revista Colombiana de Computación 20, no. 2 (1 December 2019): 20–27. http://dx.doi.org/10.29375/25392115.3718.

Abstract:
Epilepsy is a neurological disorder characterized by recurrent seizures. The primary objective is to present an analysis of the results shown in the training data simulation charts. Data were collected by means of the 10-20 system. The “10–20” system is an internationally recognized method to describe and apply the location of scalp electrodes in the context of an EEG exam. It shows the differences obtained between the tests generated and the anomalies of the test data based on training data. Finally, the results are interpreted and the efficacy of the procedure is discussed.
27

Gao, Jiabao, Caijun Zhong, Xiaoming Chen, Hai Lin and Zhaoyang Zhang. "Unsupervised Learning for Passive Beamforming". IEEE Communications Letters 24, no. 5 (May 2020): 1052–56. http://dx.doi.org/10.1109/lcomm.2020.2965532.
28

Fan, Qingnan, Jiaolong Yang, David Wipf, Baoquan Chen and Xin Tong. "Image smoothing via unsupervised learning". ACM Transactions on Graphics 37, no. 6 (10 January 2019): 1–14. http://dx.doi.org/10.1145/3272127.3275081.
29

Wu, Kuo Lung. "Unsupervised Kernel Learning Vector Quantization". Advanced Engineering Forum 6-7 (September 2012): 243–49. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.243.

Abstract:
In this paper, we propose an unsupervised kernel learning vector quantization (UKLVQ) algorithm that combines the concepts of the kernel method and traditional unsupervised learning vector quantization (ULVQ). We first use the definition of the shadow kernel to give a general representation of the UKLVQ method and then easily implement the UKLVQ algorithm with a well-defined objective function in which traditional unsupervised learning vector quantization (ULVQ) becomes a special case of UKLVQ. We also analyze the robustness of our proposed learning algorithm by means of a sensitivity curve. In our simulations, the UKLVQ with Gaussian kernel has a bounded sensitivity curve and is thus robust to noise. The robustness and accuracy of the proposed UKLVQ algorithm is also demonstrated via numerical examples.
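The traditional ULVQ that the paper generalizes is plain competitive learning: for every presented sample, the nearest prototype moves a step toward it. A minimal 1-D sketch of that baseline (the kernelized version, the objective function, and the sensitivity-curve analysis are not shown; the data and learning rate are invented):

```python
def ulvq(data, prototypes, rate=0.1, epochs=30):
    """Competitive unsupervised LVQ: the winning (nearest) prototype
    moves a fraction `rate` of the way toward each presented sample."""
    protos = list(prototypes)
    for _ in range(epochs):
        for x in data:
            i = min(range(len(protos)), key=lambda j: abs(x - protos[j]))
            protos[i] += rate * (x - protos[i])
    return protos

# Two natural groups around 1.0 and 4.0; the prototypes converge to them.
data = [0.9, 1.0, 1.1, 3.9, 4.0, 4.1]
protos = ulvq(data, [0.0, 5.0])
print([round(p, 1) for p in protos])
```

The paper's point is that replacing the plain Euclidean winner-selection with a kernel-induced distance makes this update robust to noise, which the sketch above is not.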
30

Witten, Daniela M. "Penalized unsupervised learning with outliers". Statistics and Its Interface 6, no. 2 (2013): 211–21. http://dx.doi.org/10.4310/sii.2013.v6.n2.a5.
31

Pothos, Emmanuel M., and Nick Chater. "Unsupervised Categorization and Category Learning". Quarterly Journal of Experimental Psychology Section A 58, no. 4 (May 2005): 733–52. http://dx.doi.org/10.1080/02724980443000322.

Abstract:
When people categorize a set of items in a certain way they often change their perceptions of these items so that they become more compatible with the learned categorization. In two experiments we examined whether such changes are extensive enough to change the unsupervised categorization for the items—that is, the categorization of the items that is considered more intuitive or natural without any learning. In Experiment 1 we directly employed an unsupervised categorization task; in Experiment 2 we collected similarity ratings for the items and inferred unsupervised categorizations using Pothos and Chater's (2002) model of unsupervised categorization. The unsupervised categorization for the items changed to resemble more the learned one when this was specified by the suppression of a stimulus dimension (both experiments), but less so when it was almost specified by the suppression of a stimulus dimension (Experiment 1, nonsignificant trend in Experiment 2). By contrast, no changes in the unsupervised categorization were observed when participants were taught a classification that was specified by a finer tuning of the relative salience of the two dimensions.
32

Song, Yang, L. Goncalves and P. Perona. "Unsupervised learning of human motion". IEEE Transactions on Pattern Analysis and Machine Intelligence 25, no. 7 (July 2003): 814–27. http://dx.doi.org/10.1109/tpami.2003.1206511.
33

Clapper, John P., and Gordon H. Bower. "Category invention in unsupervised learning". Journal of Experimental Psychology: Learning, Memory, and Cognition 20, no. 2 (March 1994): 443–60. http://dx.doi.org/10.1037/0278-7393.20.2.443.
34

Clapper, John P., and Gordon H. Bower. "Adaptive categorization in unsupervised learning". Journal of Experimental Psychology: Learning, Memory, and Cognition 28, no. 5 (September 2002): 908–23. http://dx.doi.org/10.1037/0278-7393.28.5.908.
35

Luo, Jiaming, Karthik Narasimhan and Regina Barzilay. "Unsupervised Learning of Morphological Forests". Transactions of the Association for Computational Linguistics 5 (December 2017): 353–64. http://dx.doi.org/10.1162/tacl_a_00066.

Abstract:
This paper focuses on unsupervised modeling of morphological families, collectively comprising a forest over the language vocabulary. This formulation enables us to capture edge-wise properties reflecting single-step morphological derivations, along with global distributional properties of the entire forest. These global properties constrain the size of the affix set and encourage formation of tight morphological families. The resulting objective is solved using Integer Linear Programming (ILP) paired with contrastive estimation. We train the model by alternating between optimizing the local log-linear model and the global ILP objective. We evaluate our system on three tasks: root detection, clustering of morphological families, and segmentation. Our experiments demonstrate that our model yields consistent gains in all three tasks compared with the best published results.
36

Montemezzani, Germano, Gan Zhou and Dana Z. Anderson. "Unsupervised Learning of Temporal Features". Optics and Photonics News 5, no. 12 (1 December 1994): 38. http://dx.doi.org/10.1364/opn.5.12.000038.
37

Shah, Swapnil Nitin. "Variational approach to unsupervised learning". Journal of Physics Communications 3, no. 7 (18 July 2019): 075006. http://dx.doi.org/10.1088/2399-6528/ab3029.
38

Lake, B. M., G. K. Vallabha and J. L. McClelland. "Modeling Unsupervised Perceptual Category Learning". IEEE Transactions on Autonomous Mental Development 1, no. 1 (May 2009): 35–43. http://dx.doi.org/10.1109/tamd.2009.2021703.
39

Deco, Gustavo, and Lucas Parra. "Unsupervised learning for Boltzman Machines". Network: Computation in Neural Systems 6, no. 3 (January 1995): 437–48. http://dx.doi.org/10.1088/0954-898x_6_3_009.
40

Hiles, B. P., N. Intrator and S. Edelman. "Unsupervised learning of visual structure". Journal of Vision 2, no. 7 (15 March 2010): 74. http://dx.doi.org/10.1167/2.7.74.
41

Solan, Z., D. Horn, E. Ruppin and S. Edelman. "Unsupervised learning of natural languages". Proceedings of the National Academy of Sciences 102, no. 33 (8 August 2005): 11629–34. http://dx.doi.org/10.1073/pnas.0409746102.
42

Mietzner, A., M. Opper and W. Kinzel. "Maximal stability in unsupervised learning". Journal of Physics A: Mathematical and General 28, no. 10 (21 May 1995): 2785–97. http://dx.doi.org/10.1088/0305-4470/28/10/011.
43

Roohi, Adil, Kevin Faust, Ugljesa Djuric and Phedias Diamandis. "Unsupervised Machine Learning in Pathology". Surgical Pathology Clinics 13, no. 2 (June 2020): 349–58. http://dx.doi.org/10.1016/j.path.2020.01.002.
44

Oja, Erkki. "Unsupervised learning in neural computation". Theoretical Computer Science 287, no. 1 (September 2002): 187–207. http://dx.doi.org/10.1016/s0304-3975(02)00160-3.
45

Powers, David. "Unsupervised Learning of Linguistic Structure". International Journal of Corpus Linguistics 2, no. 1 (1 January 1997): 91–131. http://dx.doi.org/10.1075/ijcl.2.1.06pow.

Abstract:
Computational Linguistics and Natural Language have long been targets for Machine Learning, and a variety of learning paradigms and techniques have been employed with varying degrees of success. In this paper, we review approaches which have adopted an unsupervised learning paradigm, explore the assumptions which underlie the techniques used, and develop an approach to empirical evaluation. We concentrate on a statistical framework based on N-grams, although we seek to maintain neurolinguistic plausibility. The model we adopt places putative linguistic units in focus and associates them with a characteristic vector of statistics derived from occurrence frequency. These vectors are treated as defining a hyperspace, within which we demonstrate a technique for examining the empirical utility of the various metrics and normalization, visualization, and clustering techniques proposed in the literature. We conclude with an evaluation of the relative utility of a large array of different metrics and processing techniques in relation to our defined performance criteria.
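The framework this abstract describes, putative linguistic units characterized by vectors of occurrence statistics and compared in a hyperspace, can be sketched with bigram context counts and cosine similarity on a toy corpus. The corpus and the choice of metric below are invented for illustration; the paper itself evaluates many competing metrics and normalizations:

```python
import math
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

def context_vector(word):
    """Characteristic vector of a word: counts of its left (L:) and
    right (R:) neighbours, a simple N-gram occurrence statistic."""
    vec = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            if i > 0:
                vec["L:" + corpus[i - 1]] += 1
            if i < len(corpus) - 1:
                vec["R:" + corpus[i + 1]] += 1
    return vec

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# Words used in the same contexts ("cat"/"dog") end up close in the
# hyperspace, while words of a different class ("on") do not.
print(cosine(context_vector("cat"), context_vector("dog")))
print(cosine(context_vector("cat"), context_vector("on")))
```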
46

Botelho, Fernanda, and Annita Davis. "Stability behavior for unsupervised learning". Physica D: Nonlinear Phenomena 243, no. 1 (January 2013): 111–15. http://dx.doi.org/10.1016/j.physd.2012.10.003.
47

Anselmi, Fabio, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti and Tomaso Poggio. "Unsupervised learning of invariant representations". Theoretical Computer Science 633 (June 2016): 112–21. http://dx.doi.org/10.1016/j.tcs.2015.06.048.
48

Wescoat, Ethan, Matthew Krugh, Andrew Henderson, Josh Goodnough and Laine Mears. "Vibration Analysis Utilizing Unsupervised Learning". Procedia Manufacturing 34 (2019): 876–84. http://dx.doi.org/10.1016/j.promfg.2019.06.160.
49

Pagan, Darren C., Thien Q. Phan, Jordan S. Weaver, Austin R. Benson and Armand J. Beaudoin. "Unsupervised learning of dislocation motion". Acta Materialia 181 (December 2019): 510–18. http://dx.doi.org/10.1016/j.actamat.2019.10.011.
50

Biehl, M., and A. Mietzner. "Statistical Mechanics of Unsupervised Learning". Europhysics Letters (EPL) 24, no. 5 (10 November 1993): 421–26. http://dx.doi.org/10.1209/0295-5075/24/5/017.