
Journal articles on the topic "Self-supervised learning"

Consult the top 50 journal articles for research on the topic "Self-supervised learning".

Next to every entry in the list, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.

Browse journal articles from many different disciplines and compile your bibliography correctly.

1

Zhao, Qingyu, Zixuan Liu, Ehsan Adeli, and Kilian M. Pohl. "Longitudinal self-supervised learning". Medical Image Analysis 71 (July 2021): 102051. http://dx.doi.org/10.1016/j.media.2021.102051.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Wang, Fei, and Changshui Zhang. "Robust self-tuning semi-supervised learning". Neurocomputing 70, no. 16-18 (October 2007): 2931–39. http://dx.doi.org/10.1016/j.neucom.2006.11.004.

3

Hrycej, Tomas. "Supporting supervised learning by self-organization". Neurocomputing 4, no. 1-2 (February 1992): 17–30. http://dx.doi.org/10.1016/0925-2312(92)90040-v.

4

Shin, Sungho, Jongwon Kim, Yeonguk Yu, Seongju Lee, and Kyoobin Lee. "Self-Supervised Transfer Learning from Natural Images for Sound Classification". Applied Sciences 11, no. 7 (March 29, 2021): 3043. http://dx.doi.org/10.3390/app11073043.

Abstract:
We propose the implementation of transfer learning from natural images to audio-based images using self-supervised learning schemes. Through self-supervised learning, convolutional neural networks (CNNs) can learn the general representation of natural images without labels. In this study, a convolutional neural network was pre-trained with natural images (ImageNet) via self-supervised learning; subsequently, it was fine-tuned on the target audio samples. Pre-training with the self-supervised learning scheme significantly improved the sound classification performance when validated on the following benchmarks: ESC-50, UrbanSound8k, and GTZAN. The network pre-trained via self-supervised learning achieved a similar level of accuracy as those pre-trained using a supervised method that requires labels. Therefore, we demonstrated that transfer learning from natural images contributes to improvements in audio-related tasks, and that self-supervised learning with natural images is an adequate pre-training scheme in terms of simplicity and effectiveness.
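The recipe in this abstract (pre-train a feature extractor without labels, then fine-tune on the labeled target task) can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: a scalar "feature extractor" stands in for the paper's self-supervised CNN, and a toy sign-classification task stands in for sound classification. Only a linear head is trained, mimicking the transfer step:

```python
import math

def pretrained_features(x):
    """Stand-in for a frozen, self-supervised feature extractor (hypothetical)."""
    return [x, x * x]  # maps a scalar input to a 2-D representation

def train_linear_head(data, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression head on the frozen features by SGD."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = pretrained_features(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - y                       # gradient of log loss w.r.t. z
            w[0] -= lr * g * f[0]
            w[1] -= lr * g * f[1]
            b -= lr * g
    return w, b

def predict(x, w, b):
    f = pretrained_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Tiny "target task": classify the sign of x.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_linear_head(xs, ys)
accuracy = sum(predict(x, w, b) == y for x, y in zip(xs, ys)) / len(xs)
```

In the paper's actual setting the extractor is a CNN pre-trained on ImageNet and the whole network is fine-tuned; the sketch only shows the division of labor between a pre-trained representation and a task head.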
5

Liu, Yuanyuan, and Qianqian Liu. "Research on Self-Supervised Comparative Learning for Computer Vision". Journal of Electronic Research and Application 5, no. 3 (August 17, 2021): 5–17. http://dx.doi.org/10.26689/jera.v5i3.2320.

Abstract:
In recent years, self-supervised learning, which does not require large numbers of manual labels, has generated supervisory signals through the data itself to attain representation learning on samples. Self-supervised learning solves the problem of learning semantic features from unlabeled data and realizes pre-training of models on large datasets. Its significant advantages have been extensively studied by scholars in recent years. There are usually three types of self-supervised learning: generative, contrastive, and generative-contrastive. The model of the contrastive learning method is relatively simple, and its performance on current downstream tasks is comparable to that of supervised learning methods. Therefore, we propose a conceptual analysis framework: data augmentation pipeline, architectures, pretext tasks, comparison methods, and semi-supervised fine-tuning. Based on this conceptual framework, we qualitatively analyze the existing contrastive self-supervised learning methods for computer vision, further analyze their performance at different stages, and finally summarize the research status of self-supervised contrastive learning methods in other fields.
6

Jaiswal, Ashish, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, and Fillia Makedon. "A Survey on Contrastive Self-Supervised Learning". Technologies 9, no. 1 (December 28, 2020): 2. http://dx.doi.org/10.3390/technologies9010002.

Abstract:
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It is capable of adopting self-defined pseudolabels as supervision and using the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims at embedding augmented versions of the same sample close to each other while trying to push away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by the different architectures that have been proposed so far. Next, we present a performance comparison of different methods for multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of the current methods and the need for further techniques and future directions to make meaningful progress.
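The objective this survey reviews, pulling augmented views of a sample together while pushing other samples apart, is most often formalized as the InfoNCE loss. A minimal pure-Python sketch; the 2-D embeddings and the temperature value below are illustrative, not taken from any cited paper:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: negative log-softmax score of its
    positive (an augmented view of the same sample) against the negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    return -(logits[0] - m) + math.log(sum(math.exp(l - m) for l in logits))

anchor = [1.0, 0.0]
aligned = info_nce(anchor, [0.9, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
shuffled = info_nce(anchor, [0.0, 1.0], [[0.9, 0.1], [-1.0, 0.0]])
```

A positive that is close to the anchor yields a much lower loss than a mismatched one, which is exactly the signal that trains the encoder.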
7

ITO, Seiya, Naoshi KANEKO, and Kazuhiko SUMI. "Self-Supervised Learning for Multi-View Stereo". Journal of the Japan Society for Precision Engineering 86, no. 12 (December 5, 2020): 1042–50. http://dx.doi.org/10.2493/jjspe.86.1042.

8

Tenorio, M. F., and W. T. Lee. "Self-organizing network for optimum supervised learning". IEEE Transactions on Neural Networks 1, no. 1 (March 1990): 100–110. http://dx.doi.org/10.1109/72.80209.

9

Florence, Peter, Lucas Manuelli, and Russ Tedrake. "Self-Supervised Correspondence in Visuomotor Policy Learning". IEEE Robotics and Automation Letters 5, no. 2 (April 2020): 492–99. http://dx.doi.org/10.1109/lra.2019.2956365.

10

Liu, Chicheng, Libin Song, Jiwen Zhang, Ken Chen, and Jing Xu. "Self-Supervised Learning for Specified Latent Representation". IEEE Transactions on Fuzzy Systems 28, no. 1 (January 2020): 47–59. http://dx.doi.org/10.1109/tfuzz.2019.2904237.

11

Pal, S. K., A. Pathak, and C. Basu. "Dynamic guard zone for self-supervised learning". Pattern Recognition Letters 7, no. 3 (March 1988): 135–44. http://dx.doi.org/10.1016/0167-8655(88)90056-6.

12

Hayat, Md Abul, George Stein, Peter Harrington, Zarija Lukić, and Mustafa Mustafa. "Self-supervised Representation Learning for Astronomical Images". Astrophysical Journal Letters 911, no. 2 (April 1, 2021): L33. http://dx.doi.org/10.3847/2041-8213/abf2c7.

13

Che, Feihu, Guohua Yang, Dawei Zhang, Jianhua Tao, and Tong Liu. "Self-supervised graph representation learning via bootstrapping". Neurocomputing 456 (October 2021): 88–96. http://dx.doi.org/10.1016/j.neucom.2021.03.123.

14

Tripathi, Achyut Mani, and Aakansha Mishra. "Self-supervised learning for Environmental Sound Classification". Applied Acoustics 182 (November 2021): 108183. http://dx.doi.org/10.1016/j.apacoust.2021.108183.

15

Islam, Md Rabiul, Shuji Sakamoto, Yoshihiro Yamada, Andrew W. Vargo, Motoi Iwata, Masakazu Iwamura, and Koichi Kise. "Self-supervised Learning for Reading Activity Classification". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 3 (September 9, 2021): 1–22. http://dx.doi.org/10.1145/3478088.

Abstract:
Reading analysis can relay information about a user's confidence and habits and can be used to construct useful feedback. A lack of labeled data inhibits the effective application of fully-supervised Deep Learning (DL) for automatic reading analysis. We propose a Self-supervised Learning (SSL) method for reading analysis. Previously, SSL has been effective in physical human activity recognition (HAR) tasks, but it has not been applied to cognitive HAR tasks like reading. We first evaluate the proposed method on a four-class reading detection task using electrooculography datasets, followed by an evaluation of a two-class confidence estimation task on multiple-choice questions using eye-tracking datasets. Fully-supervised DL and support vector machines (SVMs) are used as comparisons for the proposed SSL method. The results show that the proposed SSL method is superior to the fully-supervised DL and SVM for both tasks, especially when training data is scarce. This result indicates the proposed method is the superior choice for reading analysis tasks. These results are important for informing the design of automatic reading analysis platforms.
16

Li, Jingwei, Chi Zhang, Linyuan Wang, Penghui Ding, Lulu Hu, Bin Yan, and Li Tong. "A Visual Encoding Model Based on Contrastive Self-Supervised Learning for Human Brain Activity along the Ventral Visual Stream". Brain Sciences 11, no. 8 (July 29, 2021): 1004. http://dx.doi.org/10.3390/brainsci11081004.

Abstract:
Visual encoding models are important computational models for understanding how information is processed along the visual stream. Many improved visual encoding models have been developed from the perspective of the model architecture and the learning objective, but these are limited to the supervised learning method. From the view of unsupervised learning mechanisms, this paper utilized a pre-trained neural network to construct a visual encoding model based on contrastive self-supervised learning for the ventral visual stream measured by functional magnetic resonance imaging (fMRI). We first extracted features using the ResNet50 model pre-trained in contrastive self-supervised learning (ResNet50-CSL model), trained a linear regression model for each voxel, and finally calculated the prediction accuracy of different voxels. Compared with the ResNet50 model pre-trained in a supervised classification task, the ResNet50-CSL model achieved an equal or even relatively better encoding performance in multiple visual cortical areas. Moreover, the ResNet50-CSL model performs hierarchical representation of input visual stimuli, which is similar to the human visual cortex in its hierarchical information processing. Our experimental results suggest that the encoding model based on contrastive self-supervised learning is a strong computational model to compete with supervised models, and contrastive self-supervised learning proves an effective learning method to extract human brain-like representations.
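The voxel-wise pipeline described above (extract features from a pre-trained network, fit a linear regression model per voxel, then score prediction accuracy) can be sketched with ordinary least squares and Pearson correlation. The data below are made up, and the single scalar "feature" stands in for a ResNet50-CSL activation vector:

```python
import math

def fit_linear(x, y):
    """Ordinary least squares for one voxel: y ≈ a * x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def pearson(u, v):
    """Prediction accuracy scored as Pearson correlation, as in encoding models."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Hypothetical data: one image feature and two voxels, one feature-driven
# and one essentially unrelated to the feature.
feature = [0.0, 1.0, 2.0, 3.0, 4.0]
voxel_a = [0.1, 1.9, 4.1, 5.9, 8.1]    # roughly 2 * feature
voxel_b = [0.5, -1.0, 1.0, -1.0, 1.5]  # weakly related at best

accuracies = []
for voxel in (voxel_a, voxel_b):
    a, b = fit_linear(feature, voxel)
    pred = [a * f + b for f in feature]
    accuracies.append(pearson(pred, voxel))
```

The feature-driven voxel is predicted almost perfectly while the unrelated voxel is not, which is the contrast the encoding-model comparison in the paper rests on.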
17

Nartey, Obed Tettey, Guowu Yang, Sarpong Kwadwo Asare, Jinzhao Wu, and Lady Nadia Frempong. "Robust Semi-Supervised Traffic Sign Recognition via Self-Training and Weakly-Supervised Learning". Sensors 20, no. 9 (May 8, 2020): 2684. http://dx.doi.org/10.3390/s20092684.

Abstract:
Traffic sign recognition is a classification problem that poses challenges for computer vision and machine learning algorithms. Although both computer vision and machine learning techniques have constantly been improved to solve this problem, the sudden rise in the number of unlabeled traffic signs has become even more challenging. Large data collation and labeling are tedious and expensive tasks that demand much time, expert knowledge, and fiscal resources to satisfy the hunger of deep neural networks. Aside from that, the problem of having unbalanced data also poses a greater challenge to computer vision and machine learning algorithms to achieve better performance. These problems raise the need to develop algorithms that can fully exploit a large amount of unlabeled data, use a small amount of labeled samples, and be robust to data imbalance to build an efficient and high-quality classifier. In this work, we propose a novel semi-supervised classification technique that is robust to small and unbalanced data. The framework integrates weakly-supervised learning and self-training with self-paced learning to generate attention maps to augment the training set and utilizes a novel pseudo-label generation and selection algorithm to generate and select pseudo-labeled samples. The method improves the performance by: (1) normalizing the class-wise confidence levels to prevent the model from ignoring hard-to-learn samples, thereby solving the imbalanced data problem; (2) jointly learning a model and optimizing pseudo-labels generated on unlabeled data; and (3) enlarging the training set to satisfy the hunger of deep learning models. Extensive evaluations on two public traffic sign recognition datasets demonstrate the effectiveness of the proposed technique and provide a potential solution for practical applications.
18

Zhou, Meng, Zechen Li, and Pengtao Xie. "Self-supervised Regularization for Text Classification". Transactions of the Association for Computational Linguistics 9 (2021): 641–56. http://dx.doi.org/10.1162/tacl_a_00389.

Abstract:
Text classification is a widely studied problem and has broad applications. In many real-world problems, the number of texts for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose SSL-Reg, a data-dependent regularization approach based on self-supervised learning (SSL). SSL (Devlin et al., 2019a) is an unsupervised learning approach that defines auxiliary tasks on input data without using any human-provided labels and learns data representations by solving these auxiliary tasks. In SSL-Reg, a supervised classification task and an unsupervised SSL task are performed simultaneously. The SSL task is unsupervised, defined purely on input texts without using any human-provided labels. Training a model using an SSL task can prevent the model from being overfitted to a limited number of class labels in the classification task. Experiments on 17 text classification datasets demonstrate the effectiveness of our proposed method. Code is available at https://github.com/UCSD-AI4H/SSReg.
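The joint objective of SSL-Reg, a supervised classification loss plus a weighted self-supervised auxiliary loss on the same inputs, reduces to a simple sum. A sketch with made-up model outputs; the masked-token task here is only a stand-in for whatever SSL objective is chosen, and the weight of 0.5 is arbitrary:

```python
import math

def cross_entropy(probs, label):
    """Supervised classification loss on one labeled example."""
    return -math.log(probs[label])

def masked_token_loss(probs, masked_token):
    """Stand-in for the unsupervised SSL objective: predicting a masked
    token from the input text alone, with no human-provided labels."""
    return -math.log(probs[masked_token])

def ssl_reg_loss(cls_probs, label, ssl_probs, masked_token, lam=0.5):
    """SSL-Reg-style joint objective: supervised loss plus a weighted
    self-supervised regularizer computed on the same inputs."""
    return cross_entropy(cls_probs, label) + lam * masked_token_loss(ssl_probs, masked_token)

# Hypothetical model outputs for one training example:
cls_probs = [0.7, 0.2, 0.1]  # class distribution (true label 0)
ssl_probs = [0.1, 0.6, 0.3]  # vocabulary distribution (masked token 1)
total = ssl_reg_loss(cls_probs, 0, ssl_probs, 1, lam=0.5)
```

Because the SSL term depends only on the input text, it regularizes the encoder even when class labels are scarce.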
19

Li, Li, Kaiyi Zhao, Sicong Li, Ruizhi Sun, and Saihua Cai. "Extreme Learning Machine for Supervised Classification with Self-paced Learning". Neural Processing Letters 52, no. 3 (June 14, 2020): 1723–44. http://dx.doi.org/10.1007/s11063-020-10286-9.

20

C A Padmanabha Reddy, Y., P. Viswanath, and B. Eswara Reddy. "Semi-supervised learning: a brief review". International Journal of Engineering & Technology 7, no. 1.8 (February 9, 2018): 81. http://dx.doi.org/10.14419/ijet.v7i1.8.9977.

Abstract:
Most application domains suffer from insufficient labeled data, whereas unlabeled data is available cheaply. Obtaining labeled instances is difficult because experienced domain experts are required to label the unlabeled data patterns. Semi-supervised learning addresses this problem and acts as a halfway point between supervised and unsupervised learning. This paper addresses a few techniques of semi-supervised learning (SSL), such as self-training, co-training, multi-view learning, and transductive SVM (TSVM) methods. Traditionally, SSL is classified into semi-supervised classification and semi-supervised clustering, which achieve better accuracy than traditional supervised and unsupervised learning techniques. The paper also addresses the issues of scalability and the applications of semi-supervised learning.
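Self-training, the first SSL technique this review lists, has a simple core loop: train on the labeled data, pseudo-label the unlabeled points the current model is confident about, and retrain on the enlarged set. A toy sketch with a hypothetical 1-D threshold classifier (the data, margin, and classifier are all illustrative):

```python
def train_threshold(xs, ys):
    """Fit a trivial 1-D classifier: the midpoint between the class means."""
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return (m0 + m1) / 2

def self_train(labeled_x, labeled_y, unlabeled_x, margin=1.0, rounds=3):
    """Self-training loop: pseudo-label unlabeled points far from the
    current threshold (i.e., confident predictions) and retrain."""
    xs, ys = list(labeled_x), list(labeled_y)
    pool = list(unlabeled_x)
    for _ in range(rounds):
        t = train_threshold(xs, ys)
        confident = [x for x in pool if abs(x - t) >= margin]
        if not confident:
            break  # nothing left the model is confident about
        for x in confident:
            xs.append(x)
            ys.append(1 if x > t else 0)  # pseudo-label
            pool.remove(x)
    return train_threshold(xs, ys), len(pool)

# Two labeled points and five unlabeled ones; the ambiguous point at 5.0
# near the decision boundary is never pseudo-labeled.
threshold, leftover = self_train([0.0, 10.0], [0, 1], [1.0, 2.0, 8.0, 9.0, 5.0])
```

The margin check is the crucial design choice: without it, low-confidence pseudo-labels near the boundary would feed their own errors back into training.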
21

Wang, Shaolei, Wangxiang Che, Qi Liu, Pengda Qin, Ting Liu, and William Yang Wang. "Multi-Task Self-Supervised Learning for Disfluency Detection". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9193–200. http://dx.doi.org/10.1609/aaai.v34i05.6456.

Abstract:
Most existing approaches to disfluency detection heavily rely on human-annotated data, which is expensive to obtain in practice. To tackle the training data bottleneck, we investigate methods for combining multiple self-supervised tasks, i.e., supervised tasks where data can be collected without manual labeling. First, we construct large-scale pseudo training data by randomly adding or deleting words from unlabeled news data, and propose two self-supervised pre-training tasks: (i) a tagging task to detect the added noisy words, and (ii) a sentence classification task to distinguish original sentences from grammatically incorrect sentences. We then combine these two tasks to jointly train a network. The pre-trained network is then fine-tuned using human-annotated disfluency detection training data. Experimental results on the commonly used English Switchboard test set show that our approach can achieve competitive performance compared to the previous systems (trained using the full dataset) by using less than 1% (1000 sentences) of the training data. Our method trained on the full dataset significantly outperforms previous methods, reducing the error by 21% on English Switchboard.
22

Fazakis, Nikos, Stamatis Karlos, Sotiris Kotsiantis, and Kyriakos Sgarbas. "Self-trained Rotation Forest for semi-supervised learning". Journal of Intelligent & Fuzzy Systems 32, no. 1 (January 13, 2017): 711–22. http://dx.doi.org/10.3233/jifs-152641.

23

Zhao, Qilu, and Junyu Dong. "Self-supervised representation learning by predicting visual permutations". Knowledge-Based Systems 210 (December 2020): 106534. http://dx.doi.org/10.1016/j.knosys.2020.106534.

24

Sharma, Vivek, Makarand Tapaswi, M. Saquib Sarfraz, and Rainer Stiefelhagen. "Video Face Clustering With Self-Supervised Representation Learning". IEEE Transactions on Biometrics, Behavior, and Identity Science 2, no. 2 (April 2020): 145–57. http://dx.doi.org/10.1109/tbiom.2019.2947264.

25

Gan, Jiangzhang, Guoqiu Wen, Hao Yu, Wei Zheng, and Cong Lei. "Supervised feature selection by self-paced learning regression". Pattern Recognition Letters 132 (April 2020): 30–37. http://dx.doi.org/10.1016/j.patrec.2018.08.029.

26

Zeng, Zeng, Yang Xulei, Yu Qiyun, Yao Meng, and Zhang Le. "SeSe-Net: Self-Supervised deep learning for segmentation". Pattern Recognition Letters 128 (December 2019): 23–29. http://dx.doi.org/10.1016/j.patrec.2019.08.002.

27

Schmidt, Tanner, Richard Newcombe, and Dieter Fox. "Self-Supervised Visual Descriptor Learning for Dense Correspondence". IEEE Robotics and Automation Letters 2, no. 2 (April 2017): 420–27. http://dx.doi.org/10.1109/lra.2016.2634089.

28

Hu, Fanghuai, Zhiqing Shao, and Tong Ruan. "Self-Supervised Chinese Ontology Learning from Online Encyclopedias". Scientific World Journal 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/848631.

Abstract:
Constructing an ontology manually is a time-consuming, error-prone, and tedious task. We present SSCO, a self-supervised-learning-based Chinese ontology, which contains about 255 thousand concepts, 5 million entities, and 40 million facts. We explore the three largest online Chinese encyclopedias for ontology learning and describe how to transfer the structured knowledge in encyclopedias, including article titles, category labels, redirection pages, taxonomy systems, and InfoBox modules, into ontological form. In order to avoid the errors in encyclopedias and enrich the learnt ontology, we also apply some machine learning based methods. First, we prove that the self-supervised machine learning method is practicable for Chinese relation extraction (at least for synonymy and hyponymy) statistically and experimentally, and train some self-supervised models (SVMs and CRFs) for synonymy extraction, concept-subconcept relation extraction, and concept-instance relation extraction; the advantage of our methods is that all training examples are automatically generated from the structural information of the encyclopedias and a few general heuristic rules. Finally, we evaluate SSCO in two aspects, scale and precision; manual evaluation results show that the ontology has excellent precision, and high coverage is concluded by comparing SSCO with other famous ontologies and knowledge bases; the experimental results also indicate that the self-supervised models obviously enrich SSCO.
29

Sofman, Boris, Ellie Lin, J. Andrew Bagnell, John Cole, Nicolas Vandapel, and Anthony Stentz. "Improving robot navigation through self-supervised online learning". Journal of Field Robotics 23, no. 11-12 (November 2006): 1059–75. http://dx.doi.org/10.1002/rob.20169.

30

Chen, Yajing, Fanzi Wu, Zeyu Wang, Yibing Song, Yonggen Ling, and Linchao Bao. "Self-Supervised Learning of Detailed 3D Face Reconstruction". IEEE Transactions on Image Processing 29 (2020): 8696–705. http://dx.doi.org/10.1109/tip.2020.3017347.

31

Guizilini, Vitor, and Fabio Ramos. "Online self-supervised learning for dynamic object segmentation". International Journal of Robotics Research 34, no. 4-5 (March 25, 2015): 559–81. http://dx.doi.org/10.1177/0278364914566514.

32

Decoux, Benoît. "Self-Supervised Learning in Cooperative Stereo Vision Correspondence". International Journal of Neural Systems 08, no. 01 (February 1997): 101–11. http://dx.doi.org/10.1142/s0129065797000136.

Abstract:
This paper presents a neural network model of stereoscopic vision, in which a process of fusion seeks the correspondence between points of stereo inputs. Stereo fusion is obtained after a self-supervised learning phase, so called because the learning rule is a supervised-learning rule in which the supervisory information is autonomously extracted from the visual inputs by the model. This supervisory information arises from a global property of the potential matches between the points. The proposed neural network, which is of the cooperative type, and the learning procedure are tested with random-dot stereograms (RDS) and feature points extracted from real-world images. Those feature points are extracted by a technique based on the use of sigma-pi units. The matching performance and the generalization ability of the model are quantified. The relationship between what has been learned by the network and the constraints used in previous cooperative models of stereo vision is discussed.
33

Karlos, Stamatis, Nikos Fazakis, Sotiris Kotsiantis, and Kyriakos Sgarbas. "Self-Trained Stacking Model for Semi-Supervised Learning". International Journal on Artificial Intelligence Tools 26, no. 02 (April 2017): 1750001. http://dx.doi.org/10.1142/s0218213017500014.

Abstract:
The most important characteristic of semi-supervised learning methods is the combination of available unlabeled data with a much smaller set of labeled examples, so as to increase learning accuracy compared with the default procedure of supervised methods, which use only the labeled data during the training phase. In this work, we have implemented a hybrid self-trained system that combines a support vector machine, a decision tree, a lazy learner, and a Bayesian algorithm using a stacking variant methodology. We performed an in-depth comparison with other well-known semi-supervised classification methods on standard benchmark datasets and found that the presented technique had better accuracy in most cases.
34

Cooperstock, Jeremy R., and Evangelos E. Milios. "Self-supervised learning for docking and target reaching". Robotics and Autonomous Systems 11, no. 3-4 (December 1993): 243–60. http://dx.doi.org/10.1016/0921-8890(93)90029-c.

35

Huang, Jiancong, Juan Rojas, Matthieu Zimmer, Hongmin Wu, Yisheng Guan, and Paul Weng. "Hyperparameter Auto-Tuning in Self-Supervised Robotic Learning". IEEE Robotics and Automation Letters 6, no. 2 (April 2021): 3537–44. http://dx.doi.org/10.1109/lra.2021.3064509.

36

She, Dong-Yu, and Kun Xu. "Contrastive Self-supervised Representation Learning Using Synthetic Data". International Journal of Automation and Computing 18, no. 4 (May 11, 2021): 556–67. http://dx.doi.org/10.1007/s11633-021-1297-9.

Abstract:
Learning discriminative representations with deep neural networks often relies on massive labeled data, which is expensive and difficult to obtain in many real scenarios. As an alternative, self-supervised learning, which leverages the input itself as supervision, is strongly preferred for its soaring performance on visual representation learning. This paper introduces a contrastive self-supervised framework for learning generalizable representations on synthetic data that can be obtained easily with complete controllability. Specifically, we propose to optimize a contrastive learning task and a physical property prediction task simultaneously. Given the synthetic scene, the first task aims to maximize agreement between a pair of synthetic images generated by our proposed view sampling module, while the second task aims to predict three physical property maps, i.e., depth, instance contour maps, and surface normal maps. In addition, a feature-level domain adaptation technique with adversarial training is applied to reduce the domain difference between the realistic and the synthetic data. Experiments demonstrate that our proposed method achieves state-of-the-art performance on several visual recognition datasets.
37

Imran, Abdullah-Al-Zubaer, Chao Huang, Hui Tang, Wei Fan, Yuan Xiao, Dingjun Hao, Zhen Qian, and Demetri Terzopoulos. "Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13815–16. http://dx.doi.org/10.1609/aaai.v34i10.7179.

Abstract:
To tackle the problem of limited annotated data, semi-supervised learning is attracting attention as an alternative to fully supervised models. Moreover, optimizing a multiple-task model to learn “multiple contexts” can provide better generalizability compared to single-task models. We propose a novel semi-supervised multiple-task model leveraging self-supervision and adversarial training—namely, self-supervised, semi-supervised, multi-context learning (S4MCL)—and apply it to two crucial medical imaging tasks, classification and segmentation. Our experiments on spine X-rays reveal that the S4MCL model significantly outperforms semi-supervised single-task, semi-supervised multi-context, and fully-supervised single-task models, even with a 50% reduction of classification and segmentation labels.
38

Luo, Dezhao, Chang Liu, Yu Zhou, Dongbao Yang, Can Ma, Qixiang Ye, and Weiping Wang. "Video Cloze Procedure for Self-Supervised Spatio-Temporal Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11701–8. http://dx.doi.org/10.1609/aaai.v34i07.6840.

Abstract:
We propose a novel self-supervised method, referred to as Video Cloze Procedure (VCP), to learn rich spatial-temporal representations. VCP first generates “blanks” by withholding video clips and then creates “options” by applying spatio-temporal operations on the withheld clips. Finally, it fills the blanks with “options” and learns representations by predicting the categories of operations applied on the clips. VCP can act as either a proxy task or a target task in self-supervised learning. As a proxy task, it converts rich self-supervised representations into video clip operations (options), which enhances the flexibility and reduces the complexity of representation learning. As a target task, it can assess learned representation models in a uniform and interpretable manner. With VCP, we train spatial-temporal representation models (3D-CNNs) and apply such models on action recognition and video retrieval tasks. Experiments on commonly used benchmarks show that the trained models outperform the state-of-the-art self-supervised models with significant margins.
39

Livieris, Ioannis, Andreas Kanavos, Vassilis Tampakas, and Panagiotis Pintelas. "An Auto-Adjustable Semi-Supervised Self-Training Algorithm". Algorithms 11, no. 9 (September 14, 2018): 139. http://dx.doi.org/10.3390/a11090139.

Abstract:
Semi-supervised learning algorithms have become a topic of significant research as an alternative to traditional classification methods which exhibit remarkable performance over labeled data but lack the ability to be applied on large amounts of unlabeled data. In this work, we propose a new semi-supervised learning algorithm that dynamically selects the most promising learner for a classification problem from a pool of classifiers based on a self-training philosophy. Our experimental results illustrate that the proposed algorithm outperforms its component semi-supervised learning algorithms in terms of accuracy, leading to more efficient, stable and robust predictive models.
40

Sun, Ke, Zhouchen Lin, and Zhanxing Zhu. "Multi-Stage Self-Supervised Learning for Graph Convolutional Networks on Graphs with Few Labeled Nodes". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5892–99. http://dx.doi.org/10.1609/aaai.v34i04.6048.

Abstract:
Graph Convolutional Networks (GCNs) play a crucial role in graph learning tasks; however, learning graph embeddings with few supervised signals remains a difficult problem. In this paper, we propose a novel training algorithm for Graph Convolutional Networks, called the Multi-Stage Self-Supervised (M3S) Training Algorithm, which combines a self-supervised learning approach and focuses on improving the generalization performance of GCNs on graphs with few labeled nodes. First, a Multi-Stage Training Framework is provided as the basis of the M3S training method. We then leverage the DeepCluster technique, a popular form of self-supervised learning, and design a corresponding aligning mechanism on the embedding space to refine the Multi-Stage Training Framework, resulting in the M3S Training Algorithm. Finally, extensive experimental results verify the superior performance of our algorithm on graphs with few labeled nodes under different label rates compared with other state-of-the-art approaches.
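A toy version of the multi-stage idea, with the DeepCluster alignment step omitted and simple label propagation standing in for a trained GCN, might look like this (all names, the propagation rule, and the toy graph are illustrative assumptions):

```python
import numpy as np

def multi_stage_self_training(A, labels, stages=3, k=2):
    """Toy multi-stage self-training on a graph.
    A: adjacency matrix; labels: -1 for unlabeled nodes, else class id.
    Each stage, the k most confident unlabeled nodes are pseudo-labeled."""
    n_classes = labels.max() + 1
    labels = labels.copy()
    # Symmetrically normalized adjacency with self-loops, as in GCNs.
    A_hat = A + np.eye(len(A))
    d = A_hat.sum(1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    for _ in range(stages):
        Y = np.zeros((len(A), n_classes))
        known = labels >= 0
        Y[known, labels[known]] = 1.0
        scores = A_norm @ A_norm @ Y          # two propagation steps
        conf = scores.max(1)
        conf[known] = -np.inf                 # only consider unlabeled nodes
        for i in np.argsort(conf)[::-1][:k]:  # add the k most confident nodes
            if conf[i] > 0:
                labels[i] = scores[i].argmax()
    return labels

# Two 3-node cliques joined by one bridge edge; one labeled node per cluster.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = np.array([0, -1, -1, -1, -1, 1])
out = multi_stage_self_training(A, labels)
```

The stages matter: nodes adjacent to labeled nodes are pseudo-labeled first, and their labels then support the more distant nodes in later stages.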
41

Sarukkai, Ramesh R. "Supervised Networks That Self-Organize Class Outputs." Neural Computation 9, no. 3 (March 1, 1997): 637–48. http://dx.doi.org/10.1162/neco.1997.9.3.637.

Abstract:
Supervised neural network learning algorithms have proved very successful at solving a variety of learning problems; however, they suffer from the common requirement of explicit output labels. This article shows that pattern classification can be achieved in a multilayered feedforward neural network without explicit output labels, by a process of supervised self-organization. The class projection is achieved by optimizing appropriate within-class uniformity and between-class discernibility criteria. The mapping function and the class labels are developed together iteratively using the derived self-organizing backpropagation algorithm. The ability of the self-organizing network to generalize to unseen data is also evaluated experimentally on real data sets and compares favorably with traditional labeled supervision of neural networks. In addition, interesting features emerge from the proposed self-organizing supervision that are absent in conventional approaches.
42

Li, Haifeng, Tian Zhang, and Lin Ma. "Confirmation Based Self-Learning Algorithm in LVCSR's Semi-supervised Incremental Learning." Procedia Engineering 29 (2012): 754–59. http://dx.doi.org/10.1016/j.proeng.2012.01.036.

43

Yin, Chunwu, and Zhanbo Chen. "Developing Sustainable Classification of Diseases via Deep Learning and Semi-Supervised Learning." Healthcare 8, no. 3 (August 24, 2020): 291. http://dx.doi.org/10.3390/healthcare8030291.

Abstract:
Disease classification based on machine learning has become a crucial research topic in the fields of genetics and molecular biology. Generally, disease classification involves a supervised learning style; i.e., it requires a large number of labelled samples to achieve good classification performance. In the majority of cases, however, labelled samples are hard to obtain, so the amount of training data is limited. At the same time, many unclassified (unlabelled) sequences have been deposited in public databases, which can help the training procedure. This approach is called semi-supervised learning and is very useful in many applications. Self-training can be implemented by moving from high- to low-confidence samples, preventing noisy samples from undermining the robustness of semi-supervised learning during training. The deep forest method, with the hyperparameter settings used in this paper, can achieve excellent performance. Therefore, in this work we propose a novel model combining deep learning and semi-supervised learning with self-training to improve disease classification performance; it utilizes unlabelled samples through an update mechanism designed to increase the number of high-confidence pseudo-labelled samples. The experimental results show that our proposed model achieves good performance in disease classification and disease-causing gene identification.
44

Mohseni, Sina, Mandar Pitale, JBS Yadawa, and Zhangyang Wang. "Self-Supervised Learning for Generalizable Out-of-Distribution Detection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5216–23. http://dx.doi.org/10.1609/aaai.v34i04.5966.

Abstract:
The real-world deployment of Deep Neural Networks (DNNs) in safety-critical applications such as autonomous vehicles must address a variety of DNN vulnerabilities, one of which is detecting and rejecting out-of-distribution outliers that might result in unpredictable fatal errors. We propose a new technique relying on self-supervision for generalizable out-of-distribution (OOD) feature learning and for rejecting such samples at inference time. Our technique does not require prior knowledge of the distribution of targeted OOD samples and incurs no extra overhead compared to other methods. We perform multiple image classification experiments and find that our technique performs favorably against state-of-the-art OOD detection methods. Interestingly, our method also reduces in-distribution classification risk by rejecting samples near the boundaries of the training set distribution.
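As a rough illustration of rejecting OOD samples at inference time, the sketch below uses a plain maximum-softmax-confidence score with a threshold calibrated on in-distribution data. The paper's self-supervised scoring is more sophisticated, so treat this as a generic baseline under made-up synthetic logits, not the authors' method:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def reject_ood(logits, threshold):
    """Flag samples whose maximum softmax confidence falls below the
    threshold as out-of-distribution (True = reject)."""
    return softmax(logits).max(axis=1) < threshold

rng = np.random.default_rng(0)
# Fake in-distribution validation logits: each sample has one boosted class.
id_logits = rng.normal(0.0, 1.0, (1000, 10))
id_logits[np.arange(1000), rng.integers(10, size=1000)] += 5.0
# Calibrate so that ~95% of in-distribution samples are accepted.
threshold = np.quantile(softmax(id_logits).max(axis=1), 0.05)

ood_logits = rng.normal(0.0, 1.0, (100, 10))  # no confident class anywhere
rejected = reject_ood(ood_logits, threshold)
```

Calibrating the threshold on held-out in-distribution data fixes the false-rejection rate in advance, which matters in safety-critical settings where both missed outliers and spurious rejections carry cost.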
45

Sheng, Kekai, Weiming Dong, Menglei Chai, Guohui Wang, Peng Zhou, Feiyue Huang, Bao-Gang Hu, Rongrong Ji, and Chongyang Ma. "Revisiting Image Aesthetic Assessment via Self-Supervised Feature Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5709–16. http://dx.doi.org/10.1609/aaai.v34i04.6026.

Abstract:
Visual aesthetic assessment has been an active research field for decades. Although the latest methods have achieved promising performance on benchmark datasets, they typically rely on a large number of manual annotations, including both aesthetic labels and related image attributes. In this paper, we revisit the problem of image aesthetic assessment from the self-supervised feature learning perspective. Our motivation is that a suitable feature representation for image aesthetic assessment should be able to distinguish different expert-designed image manipulations, which are closely related to negative aesthetic effects. To this end, we design two novel pretext tasks to identify the types and parameters of editing operations applied to synthetic instances. The features from our pretext tasks are then adapted to a one-layer linear classifier to evaluate performance in terms of binary aesthetic classification. We conduct extensive quantitative experiments on three benchmark datasets and demonstrate that our approach can faithfully extract aesthetics-aware features and outperform alternative pretext schemes. Moreover, we achieve results comparable to state-of-the-art supervised methods that use 10 million labels from ImageNet.
46

Wu, Guile, Xiatian Zhu, and Shaogang Gong. "Tracklet Self-Supervised Learning for Unsupervised Person Re-Identification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12362–69. http://dx.doi.org/10.1609/aaai.v34i07.6921.

Abstract:
Existing unsupervised person re-identification (re-id) methods mainly focus on cross-domain adaptation or one-shot learning. Although they are more scalable than their supervised counterparts, relying on a relevant labelled source domain or on one labelled tracklet per person for initialisation still restricts their scalability in real-world deployments. To alleviate these problems, some recent studies develop unsupervised tracklet association and bottom-up image clustering methods, but they still rely on explicit camera annotation or merely use suboptimal global clustering. In this work, we formulate a novel tracklet self-supervised learning (TSSL) method, capable of learning directly from abundant unlabelled tracklet data, to optimise a feature embedding space for both video and image unsupervised re-id. This is achieved by designing a comprehensive unsupervised learning objective that accounts for tracklet frame coherence, tracklet neighbourhood compactness, and tracklet cluster structure in a unified formulation. As a purely unsupervised re-id model, TSSL is end-to-end trainable in the absence of source data annotation, person identity labels, and camera prior knowledge. Extensive experiments demonstrate the superiority of TSSL over a wide variety of state-of-the-art alternative methods on four large-scale person re-id benchmarks: Market-1501, DukeMTMC-ReID, MARS, and DukeMTMC-VideoReID.
47

Ridge, Barry, Aleš Leonardis, Aleš Ude, Miha Deniša, and Danijel Skočaj. "Self-Supervised Online Learning of Basic Object Push Affordances." International Journal of Advanced Robotic Systems 12, no. 3 (January 2015): 24. http://dx.doi.org/10.5772/59654.

48

Yan, Xiang, Syed Zulqarnain Gilani, Mingtao Feng, Liang Zhang, Hanlin Qin, and Ajmal Mian. "Self-Supervised Learning to Detect Key Frames in Videos." Sensors 20, no. 23 (December 4, 2020): 6941. http://dx.doi.org/10.3390/s20236941.

Abstract:
Detecting key frames in videos is a common problem in many applications such as video classification, action recognition, and video summarization. These tasks can be performed more efficiently using only a handful of key frames rather than the full video. Existing key frame detection approaches are mostly designed for supervised learning and require manual labelling of key frames in a large corpus of training data. Labelling requires human annotators from different backgrounds to annotate key frames in videos, which is not only expensive and time-consuming but also prone to subjective errors and inconsistencies between labelers. To overcome these problems, we propose an automatic self-supervised method for detecting key frames in a video. Our method comprises a two-stream ConvNet and a novel automatic annotation architecture able to reliably annotate key frames in a video for self-supervised learning of the ConvNet. The proposed ConvNet learns deep appearance and motion features to detect frames that are unique. The trained network is then able to detect key frames in test videos. Extensive experiments on the UCF101 human action and VSUMM video summarization datasets demonstrate the effectiveness of our proposed method.
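A crude stand-in for the automatic annotation idea scores frames by raw frame-to-frame change instead of the paper's learned two-stream appearance and motion features; the function, scoring rule, and toy video below are purely illustrative:

```python
import numpy as np

def key_frames(video, k=3):
    """Score each frame by how much it differs from its predecessor and
    return the indices of the k highest-scoring frames, sorted by time."""
    diffs = np.abs(np.diff(video, axis=0)).mean(axis=(1, 2))
    scores = np.concatenate([[0.0], diffs])   # the first frame gets score 0
    return sorted(int(i) for i in np.argsort(scores)[::-1][:k])

# Toy video: 20 static frames with abrupt scene changes at frames 5 and 12.
video = np.zeros((20, 8, 8))
video[5:12] = 1.0
video[12:] = 2.0
print(key_frames(video, k=2))  # → [5, 12]
```

Frames at the two scene boundaries receive the only nonzero novelty scores, so they are selected; in the paper, the self-supervised annotation serves the same role of picking distinctive frames, but with learned features that are far more robust than pixel differences.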
49

Olley, P., and A. K. Kochhar. "Self-supervised learning for an operational knowledge-based system." Computer Integrated Manufacturing Systems 11, no. 4 (October 1998): 297–308. http://dx.doi.org/10.1016/s0951-5240(98)00028-7.

50

Sangineto, Enver, Moin Nabi, Dubravko Culibrk, and Nicu Sebe. "Self Paced Deep Learning for Weakly Supervised Object Detection." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 3 (March 1, 2019): 712–25. http://dx.doi.org/10.1109/tpami.2018.2804907.
