Journal articles on the topic 'Deep supervised learning'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Deep supervised learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kim, Taeheon, Jaewon Hur, and Youkyung Han. "Very High-Resolution Satellite Image Registration Based on Self-supervised Deep Learning." Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography 41, no. 4 (August 31, 2023): 217–25. http://dx.doi.org/10.7848/ksgpc.2023.41.4.217.

2

AlZuhair, Mona Suliman, Mohamed Maher Ben Ismail, and Ouiem Bchir. "Soft Semi-Supervised Deep Learning-Based Clustering." Applied Sciences 13, no. 17 (August 27, 2023): 9673. http://dx.doi.org/10.3390/app13179673.

Abstract:
Semi-supervised clustering typically relies on both labeled and unlabeled data to guide the learning process towards the optimal data partition and to prevent falling into local minima. However, researchers’ efforts made to improve existing semi-supervised clustering approaches are relatively scarce compared to the contributions made to enhance the state-of-the-art fully unsupervised clustering approaches. In this paper, we propose a novel semi-supervised deep clustering approach, named Soft Constrained Deep Clustering (SC-DEC), that aims to address the limitations exhibited by existing semi-supervised clustering approaches. Specifically, the proposed approach leverages a deep neural network architecture and generates fuzzy membership degrees that better reflect the true partition of the data. In particular, the proposed approach uses side-information and formulates it as a set of soft pairwise constraints to supervise the machine learning process. This supervision information is expressed using rather relaxed constraints named “should-link” constraints. Such constraints determine whether the pairs of data instances should be assigned to the same or different cluster(s). In fact, the clustering task was formulated as an optimization problem via the minimization of a novel objective function. Moreover, the proposed approach’s performance was assessed via extensive experiments using benchmark datasets. Furthermore, the proposed approach was compared to relevant state-of-the-art clustering algorithms, and the obtained results demonstrate the impact of using minimal previous knowledge about the data in improving the overall clustering performance.
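The "should-link" idea above can be sketched as a toy penalty term on fuzzy membership degrees (hypothetical data and a plain softmax membership; this is an illustration of the constraint, not the paper's SC-DEC objective):

```python
import math

def softmax(scores):
    """Convert raw cluster scores to fuzzy membership degrees that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def should_link_penalty(memberships, pairs):
    """Penalty that grows when 'should-link' pairs receive dissimilar
    memberships: 1 - <u_i, u_j> per pair, averaged over the pairs."""
    total = 0.0
    for i, j in pairs:
        overlap = sum(a * b for a, b in zip(memberships[i], memberships[j]))
        total += 1.0 - overlap
    return total / len(pairs)

# Three points, two clusters; points 0 and 1 are believed to belong together.
memberships = [softmax([2.0, 0.0]),   # mostly cluster 0
               softmax([1.5, 0.5]),   # mostly cluster 0
               softmax([0.0, 2.0])]   # mostly cluster 1
good = should_link_penalty(memberships, [(0, 1)])  # agreeing pair
bad = should_link_penalty(memberships, [(0, 2)])   # disagreeing pair
print(good < bad)
```

Minimizing such a term alongside a clustering objective nudges should-link pairs toward the same cluster without hard constraints.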
3

Wei, Xiang, Xiaotao Wei, Xiangyuan Kong, Siyang Lu, Weiwei Xing, and Wei Lu. "FMixCutMatch for semi-supervised deep learning." Neural Networks 133 (January 2021): 166–76. http://dx.doi.org/10.1016/j.neunet.2020.10.018.

4

Zhou, Shusen, Hailin Zou, Chanjuan Liu, Mujun Zang, Zhiwang Zhang, and Jun Yue. "Deep extractive networks for supervised learning." Optik 127, no. 20 (October 2016): 9008–19. http://dx.doi.org/10.1016/j.ijleo.2016.07.007.

5

Fong, A. C. M., and G. Hong. "Boosted Supervised Intensional Learning Supported by Unsupervised Learning." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 98–102. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1020.

Abstract:
Traditionally, supervised machine learning (ML) algorithms rely heavily on large sets of annotated data. This is especially true for deep learning (DL) neural networks, which need huge annotated data sets for good performance. However, large volumes of annotated data are not always readily available. In addition, some of the best performing ML and DL algorithms lack explainability – it is often difficult even for domain experts to interpret the results. This is an important consideration especially in safety-critical applications, such as AI-assisted medical endeavors, in which a DL’s failure mode is not well understood. This lack of explainability also increases the risk of malicious attacks by adversarial actors because these actions can become obscured in the decision-making process that lacks transparency. This paper describes an intensional learning approach which uses boosting to enhance prediction performance while minimizing reliance on availability of annotated data. The intensional information is derived from an unsupervised learning preprocessing step involving clustering. Preliminary evaluation on the MNIST data set has shown encouraging results. Specifically, using the proposed approach, it is now possible to achieve a similar accuracy result to extensional learning alone while using only a small fraction of the original training data set.
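The pipeline the abstract describes, clustering first and then labelling clusters from a handful of annotated points, can be sketched on toy one-dimensional data (a minimal k-means and a hypothetical labelling rule, not the authors' MNIST setup):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on scalars: assign to nearest centroid, recompute."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[idx].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two well-separated blobs on the line; only two points carry labels.
points = [0.1, 0.2, 0.3, 0.4, 5.1, 5.2, 5.3, 5.4]
labeled = {0.2: "low", 5.3: "high"}

centroids = kmeans(points, k=2)

def predict(p):
    # Intensional step: a cluster inherits the label of the labelled
    # point nearest to its centroid, so most data needs no annotation.
    c = min(centroids, key=lambda m: (p - m) ** 2)
    return labeled[min(labeled, key=lambda q: (q - c) ** 2)]

print(predict(0.35), predict(5.05))
```

With well-separated clusters, two labels suffice to classify all eight points, which is the "small fraction of the training data" effect the abstract reports.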
6

Hu, Yu, and Hongmin Cai. "Hypergraph-Supervised Deep Subspace Clustering." Mathematics 9, no. 24 (December 15, 2021): 3259. http://dx.doi.org/10.3390/math9243259.

Abstract:
Auto-encoder (AE)-based deep subspace clustering (DSC) methods aim to partition high-dimensional data into underlying clusters, where each cluster corresponds to a subspace. As a standard module in current AE-based DSC, the self-reconstruction cost plays an essential role in regularizing the feature learning. However, the self-reconstruction adversely affects the discriminative feature learning of AE, thereby hampering the downstream subspace clustering. To address this issue, we propose a hypergraph-supervised reconstruction to replace the self-reconstruction. Specifically, instead of enforcing the decoder in the AE to merely reconstruct samples themselves, the hypergraph-supervised reconstruction encourages reconstructing samples according to their high-order neighborhood relations. By the back-propagation training, the hypergraph-supervised reconstruction cost enables the deep AE to capture the high-order structure information among samples, facilitating the discriminative feature learning and, thus, alleviating the adverse effect of the self-reconstruction cost. Compared to current DSC methods, relying on the self-reconstruction, our method has achieved consistent performance improvement on benchmark high-dimensional datasets.
7

Fu, Zheren, Yan Li, Zhendong Mao, Quan Wang, and Yongdong Zhang. "Deep Metric Learning with Self-Supervised Ranking." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1370–78. http://dx.doi.org/10.1609/aaai.v35i2.16226.

Abstract:
Deep metric learning aims to learn a deep embedding space in which similar objects are pushed together and dissimilar objects are pulled apart. Existing approaches typically use inter-class characteristics, e.g. class-level information or instance-level similarity, to obtain the semantic relevance of data points and achieve a large margin between different classes in the embedding space. However, intra-class characteristics, e.g. the local manifold structure or relative relationships within the same class, are usually overlooked in the learning process. Hence the data structure cannot be fully exploited, the output embeddings are limited for retrieval, and, more importantly, retrieval results lack a good ranking. This paper presents a novel self-supervised ranking auxiliary framework, which captures intra-class as well as inter-class characteristics for better metric learning. Our method defines specific transform functions to simulate local intra-class structure changes in the initial image domain, and formulates a self-supervised learning procedure to fully exploit this property and preserve it in the embedding space. Extensive experiments on three standard benchmarks show that our method significantly outperforms the state-of-the-art methods on both retrieval and ranking performance, by 2%-4%.
8

Dutta, Ujjal Kr, Mehrtash Harandi, and C. Chandra Shekhar. "Semi-Supervised Metric Learning: A Deep Resurrection." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7279–87. http://dx.doi.org/10.1609/aaai.v35i8.16894.

Abstract:
Distance Metric Learning (DML) seeks to learn a discriminative embedding where similar examples are closer, and dissimilar examples are apart. In this paper, we address the problem of Semi-Supervised DML (SSDML) that tries to learn a metric using a few labeled examples, and abundantly available unlabeled examples. SSDML is important because it is infeasible to manually annotate all the examples present in a large dataset. Surprisingly, with the exception of a few classical approaches that learn a linear Mahalanobis metric, SSDML has not been studied in the recent years, and lacks approaches in the deep SSDML scenario. In this paper, we address this challenging problem, and revamp SSDML with respect to deep learning. In particular, we propose a stochastic, graph-based approach that first propagates the affinities between the pairs of examples from labeled data, to that of the unlabeled pairs. The propagated affinities are used to mine triplet based constraints for metric learning. We impose orthogonality constraint on the metric parameters, as it leads to a better performance by avoiding a model collapse.
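The affinity-propagation step can be illustrated on toy data (hypothetical scalar features and a single propagation hop; the paper's stochastic graph-based scheme is more elaborate):

```python
import math

# "u" is unlabeled; affinities between labeled pairs come from labels
# (same class -> 1, different -> 0) and are propagated to "u" through
# feature similarity. Assumes at most one endpoint of a pair is unlabeled.
feats = {"a": 0.0, "b": 0.1, "c": 5.0, "u": 0.2}
labels = {"a": 0, "b": 0, "c": 1}

def sim(x, y):
    return math.exp(-abs(feats[x] - feats[y]))

def affinity(x, y):
    if x in labels and y in labels:
        return 1.0 if labels[x] == labels[y] else 0.0
    anchor = x if x in labels else y          # the labeled endpoint
    other = y if anchor is x else x           # the unlabeled endpoint
    # Propagation: weight each labeled anchor's pair-affinity by how
    # similar the unlabeled point is to that anchor.
    w = {z: sim(other, z) for z in labels}
    total = sum(w.values())
    return sum(w[z] * affinity(anchor, z) for z in labels) / total

# Mine a triplet (anchor="u", positive, negative) from the affinities.
cands = ["a", "b", "c"]
pos = max(cands, key=lambda z: affinity("u", z))
neg = min(cands, key=lambda z: affinity("u", z))
print(pos, neg)
```

The mined (anchor, positive, negative) triplets are then what a metric-learning loss would consume.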
9

Bharati, Aparna, Richa Singh, Mayank Vatsa, and Kevin W. Bowyer. "Detecting Facial Retouching Using Supervised Deep Learning." IEEE Transactions on Information Forensics and Security 11, no. 9 (September 2016): 1903–13. http://dx.doi.org/10.1109/tifs.2016.2561898.

10

Caron, Mathilde. "Self-supervised learning of deep visual representations." Bulletin 1024, no. 21 (April 2023): 171–72. http://dx.doi.org/10.48556/sif.1024.21.171.

11

Qin, Shanshan, Nayantara Mudur, and Cengiz Pehlevan. "Contrastive Similarity Matching for Supervised Learning." Neural Computation 33, no. 5 (April 13, 2021): 1300–1328. http://dx.doi.org/10.1162/neco_a_01374.

Abstract:
We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections and neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but with significant differences from others in how a contrastive function is constructed.
12

Alzahrani, Theiab, Baidaa Al-Bander, and Waleed Al-Nuaimy. "Deep Learning Models for Automatic Makeup Detection." AI 2, no. 4 (October 14, 2021): 497–511. http://dx.doi.org/10.3390/ai2040031.

Abstract:
Makeup can disguise facial features, which results in degradation in the performance of many facial-related analysis systems, including face recognition, facial landmark characterisation, aesthetic quantification and automated age estimation methods. Thus, facial makeup is likely to directly affect several real-life applications such as cosmetology and virtual cosmetics recommendation systems, security and access control, and social interaction. In this work, we conduct a comparative study and design automated facial makeup detection systems leveraging multiple learning schemes from a single unconstrained photograph. We have investigated and studied the efficacy of deep learning models for makeup detection incorporating the use of transfer learning strategy with semi-supervised learning using labelled and unlabelled data. First, during the supervised learning, the VGG16 convolution neural network, pre-trained on a large dataset, is fine-tuned on makeup labelled data. Secondly, two unsupervised learning methods, which are self-learning and convolutional auto-encoder, are trained on unlabelled data and then incorporated with supervised learning during semi-supervised learning. Comprehensive experiments and comparative analysis have been conducted on 2479 labelled images and 446 unlabelled images collected from six challenging makeup datasets. The obtained results reveal that the convolutional auto-encoder merged with supervised learning gives the best makeup detection performance achieving an accuracy of 88.33% and area under ROC curve of 95.15%. The promising results obtained from conducted experiments reveal and reflect the efficiency of combining different learning strategies by harnessing labelled and unlabelled data. It would also be advantageous to the beauty industry to develop such computational intelligence methods.
13

Wu, Haiping, Khimya Khetarpal, and Doina Precup. "Self-Supervised Attention-Aware Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10311–19. http://dx.doi.org/10.1609/aaai.v35i12.17235.

Abstract:
Visual saliency has emerged as a major visualization tool for interpreting deep reinforcement learning (RL) agents. However, much of the existing research uses it as an analyzing tool rather than an inductive bias for policy learning. In this work, we use visual attention as an inductive bias for RL agents. We propose a novel self-supervised attention learning approach which can 1. learn to select regions of interest without explicit annotations, and 2. act as a plug for existing deep RL methods to improve the learning performance. We empirically show that the self-supervised attention-aware deep RL methods outperform the baselines in the context of both the rate of convergence and performance. Furthermore, the proposed self-supervised attention is not tied with specific policies, nor restricted to a specific scene. We posit that the proposed approach is a general self-supervised attention module for multi-task learning and transfer learning, and empirically validate the generalization ability of the proposed method. Finally, we show that our method learns meaningful object keypoints highlighting improvements both qualitatively and quantitatively.
14

Gupta, Jaya, Sunil Pathak, and Gireesh Kumar. "Deep Learning (CNN) and Transfer Learning: A Review." Journal of Physics: Conference Series 2273, no. 1 (May 1, 2022): 012029. http://dx.doi.org/10.1088/1742-6596/2273/1/012029.

Abstract:
Deep Learning is a machine learning area that has recently been used in a variety of industries. Unsupervised, semi-supervised, and supervised learning are only a few of the strategies that have been developed to accommodate different types of learning. A number of experiments showed that deep learning systems fared better than traditional ones when it came to image processing, computer vision, and pattern recognition. Several real-world applications and hierarchical systems have utilised transfer learning and deep learning algorithms for pattern recognition and classification tasks. Real-world machine learning settings, on the other hand, often do not support this assumption, since training data can be difficult or expensive to obtain, and there is a constant need to build high-performance learners that can work with data from a variety of sources. The objective of this paper is to use deep learning to uncover higher-level representational features, to clearly explain transfer learning, to present current solutions, and to evaluate applications in diverse areas of transfer learning as well as deep learning.
16

Gupta, Ashwani, and Utpal Sharma. "Deep Learning-Based Aspect Term Extraction for Sentiment Analysis in Hindi." Indian Journal Of Science And Technology 17, no. 7 (February 15, 2024): 625–34. http://dx.doi.org/10.17485/ijst/v17i7.2766.

Abstract:
Objectives: Aspect terms play a vital role in finalizing the sentiment of a given review. This experimental study aims to improve the aspect term extraction mechanism for Hindi language reviews. Methods: We trained and evaluated a deep learning-based supervised model for aspect term extraction. All experiments are performed on a well-accepted Hindi dataset. A BiLSTM-based attention technique is employed to improve the extraction results. Findings: Our results show better F-scores than many existing supervised methods for aspect term extraction, with an outstanding accuracy of 91.27% and an F-score of 43.16 compared to other reported results. Novelty: The proposed architecture and the achieved results are a foundational resource for future studies and endeavours in the field. Keywords: Sentiment analysis, Aspect based sentiment analysis, Aspect term extraction, Deep Learning, BiLSTM, Indian language, Hindi
17

Kim, Chayoung. "Deep Q-Learning Network with Bayesian-Based Supervised Expert Learning." Symmetry 14, no. 10 (October 13, 2022): 2134. http://dx.doi.org/10.3390/sym14102134.

Abstract:
Deep reinforcement learning (DRL) algorithms interact with the environment and have achieved considerable success in several decision-making problems. However, DRL requires a significant amount of data before it can achieve adequate performance, which might limit its applicability when DRL agents must learn in a real-world environment. Therefore, some algorithms combine DRL agents with supervised learning to leverage previous additional knowledge; some have integrated a deep Q-learning network with a behavioral cloning model that can exploit supervised learning as prior learning. The algorithm proposed in this study is also based on these methods and is intended to update the loss function of the existing technique into a Bayesian approach. The supervised loss function used in existing algorithms and the loss function based on the Bayesian method proposed in this study differ in how prior knowledge is utilized; without prior knowledge, the loss reduces to the symmetric cross entropy. In experiments on various OpenAI Gym environments, such as Cart-Pole and MountainCar, the learning convergence performance was improved. In particular, the proposed method can be applied to achieve fairly stable learning during the early stage, when learning in a sparse environment is uncertain.
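The general shape of combining a temporal-difference loss with a supervised expert term can be sketched for one transition (a plain cross-entropy stand-in, not the paper's Bayesian loss; all values below are hypothetical):

```python
import math

def td_loss(q, r, q_next, gamma=0.99):
    """Squared temporal-difference error for one transition,
    assuming action 0 was the one taken."""
    target = r + gamma * max(q_next)
    return (target - q[0]) ** 2

def expert_ce_loss(q, expert_action):
    """Cross entropy between softmax(Q) and the expert's action:
    the supervised term that guides early training."""
    exps = [math.exp(v) for v in q]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[expert_action])

q = [1.0, 0.2]          # Q-values in the current state
q_next = [0.5, 0.9]     # Q-values in the next state
# Total loss: RL term plus a weighted supervised expert term.
loss = td_loss(q, r=1.0, q_next=q_next) + 0.5 * expert_ce_loss(q, expert_action=0)
print(round(loss, 4))
```

The supervised weight (0.5 here) is typically decayed as the agent gathers its own experience.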
18

Lin, Yi-Nan, Tsang-Yen Hsieh, Cheng-Ying Yang, Victor RL Shen, Tony Tong-Ying Juang, and Wen-Hao Chen. "Deep Petri nets of unsupervised and supervised learning." Measurement and Control 53, no. 7-8 (June 9, 2020): 1267–77. http://dx.doi.org/10.1177/0020294020923375.

Abstract:
Artificial intelligence is one of the hottest research topics in computer science. In general, when deep learning needs to be performed, the most intuitive and common implementation method is a neural network. But neural networks have two shortcomings. First, they are not easy to understand: when it comes to implementation, a lot of related research effort is often required. Second, their structure is complex: to achieve fully defined connections between nodes in a complete learning structure, the overall structure becomes complicated, and it is hard for developers to track the parameter changes inside. Therefore, the goal of this article is to provide a more streamlined method for performing deep learning. A modified high-level fuzzy Petri net, called a deep Petri net, is used to perform deep learning, in an attempt to offer a simple and easy structure, to track parameter changes, and to run faster than a deep neural network. The experimental results have shown that the deep Petri net performs better than the deep neural network.
19

Yin, Chunwu, and Zhanbo Chen. "Developing Sustainable Classification of Diseases via Deep Learning and Semi-Supervised Learning." Healthcare 8, no. 3 (August 24, 2020): 291. http://dx.doi.org/10.3390/healthcare8030291.

Abstract:
Disease classification based on machine learning has become a crucial research topic in the fields of genetics and molecular biology. Generally, disease classification involves a supervised learning style; i.e., it requires a large number of labelled samples to achieve good classification performance. However, in the majority of the cases, labelled samples are hard to obtain, so the amount of training data is limited. Fortunately, many unclassified (unlabelled) sequences have been deposited in public databases, which may help the training procedure. This method is called semi-supervised learning and is very useful in many applications. Self-training can be implemented using high- to low-confidence samples to prevent noisy samples from affecting the robustness of semi-supervised learning in the training process. The deep forest method with the hyperparameter settings used in this paper can achieve excellent performance. Therefore, in this work, we propose a novel combined deep learning model and semi-supervised learning with self-training approach to improve the performance in disease classification, which utilizes unlabelled samples to update a mechanism designed to increase the number of high-confidence pseudo-labelled samples. The experimental results show that our proposed model can achieve good performance in disease classification and disease-causing gene identification.
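The high-to-low-confidence self-training loop reads roughly as follows (a toy nearest-centroid learner on one-dimensional data with a hypothetical confidence margin; the paper uses a deep forest):

```python
def centroid(vals):
    return sum(vals) / len(vals)

def self_train(labeled, unlabeled, rounds=5, threshold=1.0):
    """Each round, pseudo-label only the unlabeled points the model is
    confident about (distance margin between the two nearest classes at
    least `threshold`) and fold them into the training set."""
    labeled = dict(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = {}
        for x, y in labeled.items():
            cents.setdefault(y, []).append(x)
        cents = {y: centroid(v) for y, v in cents.items()}
        confident = []
        for x in pool:
            d = sorted((abs(x - c), y) for y, c in cents.items())
            margin = d[1][0] - d[0][0]      # gap between best two classes
            if margin >= threshold:
                confident.append((x, d[0][1]))
        for x, y in confident:
            labeled[x] = y
            pool.remove(x)
    return labeled

labeled = {0.0: "healthy", 10.0: "disease"}
unlabeled = [1.0, 2.0, 4.9, 8.0, 9.0]
result = self_train(labeled, unlabeled)
print(result[1.0], result[9.0])
```

Note that the ambiguous point 4.9 is never pseudo-labelled, which is exactly the noise-filtering behaviour the abstract attributes to confidence-ordered self-training.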
20

Chong, De Wei, Kenny, and Abel Yang. "Photometric Redshift Analysis using Supervised Learning Algorithms and Deep Learning." EPJ Web of Conferences 206 (2019): 09006. http://dx.doi.org/10.1051/epjconf/201920609006.

Abstract:
We present a catalogue of galaxy photometric redshifts for the Sloan Digital Sky Survey (SDSS) Data Release 12. We use various supervised learning algorithms to calculate redshifts using photometric attributes on a spectroscopic training set. Two training sets are analysed in this paper. The first training set consists of 995,498 galaxies with redshifts up to z ≈ 0.8. On the first training set, we achieve a cost function value of 0.00501 and a root mean squared error of 0.0707 using the XGBoost algorithm. We achieved an outlier rate of 2.1%, and 86.81%, 95.83%, and 97.90% of our data points lie within one, two, and three standard deviations of the mean, respectively. The second training set consists of 163,140 galaxies with redshifts up to z ≈ 0.2 and is merged with the Galaxy Zoo 2 full catalog. We also experimented with convolutional neural networks to predict five morphological features (Smooth, Features/Disk, Star, Edge-on, Spiral). We achieve a root mean squared error of 0.117 when validated against an unseen dataset with over 200 epochs. Morphological features from the Galaxy Zoo, trained together with photometric features, are found to consistently improve the accuracy of photometric redshifts.
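The reported metrics are straightforward to compute; a sketch on toy numbers (the outlier convention here, |Δz|/(1+z) above a fixed cut, is a common photometric-redshift choice and may differ from the paper's exact definition):

```python
import math

def rmse(true, pred):
    """Root mean squared error between true and predicted redshifts."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(true, pred)) / len(true))

def outlier_rate(true, pred, limit=0.15):
    """Fraction of galaxies whose normalised redshift error
    |z_pred - z_true| / (1 + z_true) exceeds `limit`."""
    bad = sum(1 for t, p in zip(true, pred)
              if abs(p - t) / (1 + t) > limit)
    return bad / len(true)

z_true = [0.10, 0.20, 0.30, 0.40, 0.50]
z_pred = [0.12, 0.18, 0.31, 0.80, 0.49]   # one catastrophic miss at 0.40
print(round(rmse(z_true, z_pred), 4), outlier_rate(z_true, z_pred))
```

A single catastrophic prediction dominates the RMSE while the outlier rate counts it just once, which is why photometric-redshift papers report both.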
21

Chen, Chong, Ying Liu, Maneesh Kumar, Jian Qin, and Yunxia Ren. "Energy consumption modelling using deep learning embedded semi-supervised learning." Computers & Industrial Engineering 135 (September 2019): 757–65. http://dx.doi.org/10.1016/j.cie.2019.06.052.

22

Le, Linh, Ying Xie, and Vijay V. Raghavan. "KNN Loss and Deep KNN." Fundamenta Informaticae 182, no. 2 (September 30, 2021): 95–110. http://dx.doi.org/10.3233/fi-2021-2068.

Abstract:
The k Nearest Neighbor (KNN) algorithm has been widely applied in various supervised learning tasks due to its simplicity and effectiveness. However, the quality of KNN decision making is directly affected by the quality of the neighborhoods in the modeling space. Efforts have been made to map data to a better feature space either implicitly with kernel functions, or explicitly through learning linear or nonlinear transformations. However, all these methods use pre-determined distance or similarity functions, which may limit their learning capacity. In this paper, we present two loss functions, namely KNN Loss and Fuzzy KNN Loss, to quantify the quality of neighborhoods formed by KNN with respect to supervised learning, such that minimizing the loss function on the training data leads to maximizing KNN decision accuracy on the training data. We further present a deep learning strategy that is able to learn, by minimizing KNN loss, pairwise similarities of data that implicitly maps data to a feature space where the quality of KNN neighborhoods is optimized. Experimental results show that this deep learning strategy (denoted as Deep KNN) outperforms state-of-the-art supervised learning methods on multiple benchmark data sets.
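A crude, non-differentiable version of the neighbourhood-quality idea behind KNN Loss can be sketched on toy one-dimensional embeddings (the paper's loss is constructed to be usable for gradient-based training; this only measures the quantity it targets):

```python
def knn_loss(points, labels, k=2):
    """1 minus the average fraction of each point's k nearest
    neighbours that share its label: 0 when every neighbourhood is
    pure, approaching 1 as neighbourhoods degrade."""
    n = len(points)
    total = 0.0
    for i in range(n):
        neigh = sorted((abs(points[i] - points[j]), j)
                       for j in range(n) if j != i)[:k]
        same = sum(1 for _, j in neigh if labels[j] == labels[i])
        total += same / k
    return 1.0 - total / n

# Same labels, two embeddings: one groups classes, one interleaves them.
good_embed = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
bad_embed = [0.0, 5.0, 0.1, 5.1, 0.2, 5.2]
labels =    ["a", "a", "a", "b", "b", "b"]
print(knn_loss(good_embed, labels), knn_loss(bad_embed, labels))
```

Minimizing such a quantity over a learned mapping is what drives the Deep KNN strategy the abstract describes: the embedding is optimized so that KNN decisions on the training data become maximally accurate.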
23

Guo, Yuejun, Orhan Ermis, Qiang Tang, Hoang Trang, and Alexandre De Oliveira. "An Empirical Study of Deep Learning-Based SS7 Attack Detection." Information 14, no. 9 (September 16, 2023): 509. http://dx.doi.org/10.3390/info14090509.

Abstract:
Signalling protocols are responsible for fundamental tasks such as initiating and terminating communication and identifying the state of the communication in telecommunication core networks. Signalling System No. 7 (SS7), Diameter, and GPRS Tunneling Protocol (GTP) are the main protocols used in 2G to 4G, while 5G uses standard Internet protocols for its signalling. Despite their distinct features, and especially their security guarantees, they are most vulnerable to attacks in roaming scenarios: the attacks that target the location update function call for subscribers who are located in a visiting network. The literature tells us that rule-based detection mechanisms are ineffective against such attacks, while the hope lies in deep learning (DL)-based solutions. In this paper, we provide a large-scale empirical study of state-of-the-art DL models, including eight supervised and five semi-supervised, to detect attacks in the roaming scenario. Our experiments use a real-world dataset and a simulated dataset for SS7, and they can be straightforwardly carried out for other signalling protocols upon the availability of corresponding datasets. The results show that semi-supervised DL models generally outperform supervised ones since they leverage both labeled and unlabeled data for training. Nevertheless, the ensemble-based supervised model NODE outperforms others in its category and some in the semi-supervised category. Among all, the semi-supervised model PReNet performs the best regarding the Recall and F1 metrics when all unlabeled data are used for training, and it is also the most stable one. Our experiment also shows that the performances of different semi-supervised models could differ a lot regarding the size of used unlabeled data in training.
24

Nafea, Ahmed Adil, Saeed Amer Alameri, Russel R. Majeed, Meaad Ali Khalaf, and Mohammed M. AL-Ani. "A Short Review on Supervised Machine Learning and Deep Learning Techniques in Computer Vision." Babylonian Journal of Machine Learning 2024 (February 11, 2024): 48–55. http://dx.doi.org/10.58496/bjml/2024/004.

Abstract:
In recent years, computer vision has shown important advances, mainly through the application of supervised machine learning (ML) and deep learning (DL) techniques. The objective of this review is to provide a brief overview of the current state of supervised ML and DL techniques, especially on computer vision tasks. This study focuses on the main ideas, advantages, and applications of DL in computer vision and highlights their main concepts and advantages. It also discusses the strengths, limitations, and impact of supervised ML and DL techniques in computer vision.
25

Liu, MengYang, MingJun Li, and XiaoYang Zhang. "The Application of the Unsupervised Migration Method Based on Deep Learning Model in the Marketing Oriented Allocation of High Level Accounting Talents." Computational Intelligence and Neuroscience 2022 (June 6, 2022): 1–10. http://dx.doi.org/10.1155/2022/5653942.

Abstract:
Deep learning is a branch of machine learning that uses neural networks to mimic the behaviour of the human brain. Various types of models are used in deep learning technology. This article will look at two important models and especially concentrate on unsupervised learning methodology. The two important models are as follows: the supervised and unsupervised models. The main difference is the method of training that they undergo. Supervised models are provided with training on a particular dataset and its outcome. In the case of unsupervised models, only input data is given, and there is no set outcome from which they can learn. The predicting/forecasting column is not present in an unsupervised model, unlike in the supervised model. Supervised models use regression to predict continuous quantities and classification to predict discrete class labels; unsupervised models use clustering to group similar models and association learning to find associations between items. Unsupervised migration is a combination of the unsupervised learning method and migration. In unsupervised learning, there is no need to supervise the models. Migration is an effective tool in processing and imaging data. Unsupervised learning allows the model to work independently to discover patterns and information that were previously undetected. It mainly works on unlabeled data. Unsupervised learning can achieve more complex processing tasks when compared to supervised learning. The unsupervised learning method is more unpredictable when compared with other types of learning methods. Some of the popular unsupervised learning algorithms include k-means clustering, hierarchical clustering, the Apriori algorithm, anomaly detection, association mining, neural networks, etc. In this research article, we implement this particular deep learning model in the marketing oriented asset allocation of high level accounting talents. When the proposed unsupervised migration algorithm was compared to the existing Fractional Hausdorff Grey Model, it was discovered that the proposed system provided 99.12% accuracy by the high level accounting talented candidate in market-oriented asset allocation.
APA, Harvard, Vancouver, ISO, and other styles
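The abstract above names k-means clustering among the popular unsupervised algorithms. As a minimal, hypothetical sketch of that idea in plain Python (unrelated to the paper's actual implementation), k-means alternates between assigning each point to its nearest centroid and recomputing centroids from the assignments:

```python
def kmeans(points, k, iters=100):
    """Minimal k-means on tuples of floats: assign each point to its
    nearest centroid, recompute centroids as cluster means, repeat."""
    # Simple deterministic initialization: evenly spaced points.
    step = max(1, len(points) // k)
    centroids = [points[i * step] for i in range(k)]
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: mean of each cluster (keep old centroid if empty).
        new = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters

# Two obvious groups of unlabeled 2-D points.
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centroids, clusters = kmeans(data, k=2)
```

On the toy data above, the two recovered centroids sit at roughly (0.1, 0.1) and (5.0, 5.0), matching the two groups, with no labels ever supplied.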
27

Shwartz Ziv, Ravid, and Yann LeCun. "To Compress or Not to Compress—Self-Supervised Learning and Information Theory: A Review." Entropy 26, no. 3 (March 12, 2024): 252. http://dx.doi.org/10.3390/e26030252.

Full text
Abstract:
Deep neural networks excel in supervised learning tasks but are constrained by the need for extensive labeled data. Self-supervised learning emerges as a promising alternative, allowing models to learn without explicit labels. Information theory has shaped deep neural networks, particularly the information bottleneck principle. This principle optimizes the trade-off between compression and preserving relevant information, providing a foundation for efficient network design in supervised contexts. However, its precise role and adaptation in self-supervised learning remain unclear. In this work, we scrutinize various self-supervised learning approaches from an information-theoretic perspective, introducing a unified framework that encapsulates the self-supervised information-theoretic learning problem. This framework includes multiple encoders and decoders, suggesting that all existing work on self-supervised learning can be seen as specific instances. We aim to unify these approaches to understand their underlying principles better and address the main challenge: many works present different frameworks with differing theories that may seem contradictory. By weaving existing research into a cohesive narrative, we delve into contemporary self-supervised methodologies, spotlight potential research areas, and highlight inherent challenges. Moreover, we discuss how to estimate information-theoretic quantities and their associated empirical problems. Overall, this paper provides a comprehensive review of the intersection of information theory, self-supervised learning, and deep neural networks, aiming for a better understanding through our proposed unified approach.
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Guo-Hua, and Jianxin Wu. "Repetitive Reprediction Deep Decipher for Semi-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6170–77. http://dx.doi.org/10.1609/aaai.v34i04.6082.

Full text
Abstract:
Most recent semi-supervised deep learning (deep SSL) methods use a similar paradigm: use network predictions to update pseudo-labels and use pseudo-labels to update network parameters iteratively. However, they lack theoretical support and cannot explain why predictions are good candidates for pseudo-labels. In this paper, we propose a principled end-to-end framework named deep decipher (D2) for SSL. Within the D2 framework, we prove that pseudo-labels are related to network predictions by an exponential link function, which gives theoretical support for using predictions as pseudo-labels. Furthermore, we demonstrate that updating pseudo-labels by network predictions will make them uncertain. To mitigate this problem, we propose a training strategy called repetitive reprediction (R2). Finally, the proposed R2-D2 method is tested on the large-scale ImageNet dataset and outperforms state-of-the-art methods by 5 percentage points.
APA, Harvard, Vancouver, ISO, and other styles
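The entry above builds on the pseudo-label paradigm: a model's own predictions on unlabeled data become training targets for the next round. A schematic, hypothetical sketch of one such iteration (not the R2-D2 method itself), using a trivial nearest-centroid "model" in place of a deep network so the loop runs without any framework:

```python
def centroid_fit(xs, ys):
    """'Train' a nearest-centroid classifier: one 1-D centroid per class."""
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(y, []).append(x)
    return {y: sum(v) / len(v) for y, v in groups.items()}

def centroid_predict(model, x):
    """Predict the class with the nearest centroid; confidence is the
    margin between the two nearest centroids (a crude proxy)."""
    dists = sorted((abs(x - c), y) for y, c in model.items())
    return dists[0][1], dists[1][0] - dists[0][0]

# Tiny 1-D example: two classes clustered around 0 and 10.
labeled_x, labeled_y = [0.0, 1.0, 9.0, 10.0], [0, 0, 1, 1]
unlabeled_x = [0.5, 9.5, 5.2]

model = centroid_fit(labeled_x, labeled_y)

# Pseudo-label step: predict on unlabeled data, keep only confident
# predictions, then retrain on labeled + pseudo-labeled data.
pseudo = [(x, *centroid_predict(model, x)) for x in unlabeled_x]
keep = [(x, y) for x, y, conf in pseudo if conf > 2.0]
model = centroid_fit(labeled_x + [x for x, _ in keep],
                     labeled_y + [y for _, y in keep])
```

The ambiguous point at 5.2 is filtered out by the confidence threshold, illustrating why deep SSL methods typically gate pseudo-labels rather than trust every prediction.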
29

Augustine, Tanya N. "Weakly-supervised deep learning models in computational pathology." eBioMedicine 81 (July 2022): 104117. http://dx.doi.org/10.1016/j.ebiom.2022.104117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kang, Xudong, Binbin Zhuo, and Puhong Duan. "Semi-supervised deep learning for hyperspectral image classification." Remote Sensing Letters 10, no. 4 (January 3, 2019): 353–62. http://dx.doi.org/10.1080/2150704x.2018.1557787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Augusta, Carolyn, Rob Deardon, and Graham Taylor. "Deep learning for supervised classification of spatial epidemics." Spatial and Spatio-temporal Epidemiology 29 (June 2019): 187–98. http://dx.doi.org/10.1016/j.sste.2018.08.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Zeng, Zeng, Yang Xulei, Yu Qiyun, Yao Meng, and Zhang Le. "SeSe-Net: Self-Supervised deep learning for segmentation." Pattern Recognition Letters 128 (December 2019): 23–29. http://dx.doi.org/10.1016/j.patrec.2019.08.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Ito, Ryo, Ken Nakae, Junichi Hata, Hideyuki Okano, and Shin Ishii. "Semi-supervised deep learning of brain tissue segmentation." Neural Networks 116 (August 2019): 25–34. http://dx.doi.org/10.1016/j.neunet.2019.03.014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Tang, Xin, Fang Guo, Jianbing Shen, and Tianyuan Du. "Facial landmark detection by semi-supervised deep learning." Neurocomputing 297 (July 2018): 22–32. http://dx.doi.org/10.1016/j.neucom.2018.01.080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Zhun, ByungSoo Ko, and Ho-Jin Choi. "Naive semi-supervised deep learning using pseudo-label." Peer-to-Peer Networking and Applications 12, no. 5 (December 10, 2018): 1358–68. http://dx.doi.org/10.1007/s12083-018-0702-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Xiang, Xuezhi, Mingliang Zhai, Rongfang Zhang, Yulong Qiao, and Abdulmotaleb El Saddik. "Deep Optical Flow Supervised Learning With Prior Assumptions." IEEE Access 6 (2018): 43222–32. http://dx.doi.org/10.1109/access.2018.2863233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Hu, Yaxian, Senlin Luo, Longfei Han, Limin Pan, and Tiemei Zhang. "Deep supervised learning with mixture of neural networks." Artificial Intelligence in Medicine 102 (January 2020): 101764. http://dx.doi.org/10.1016/j.artmed.2019.101764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lingyi, Jiang, Zheng Yifeng, Chen Che, Li Guohe, and Zhang Wenjie. "Review of optimization methods for supervised deep learning." Journal of Image and Graphics 28, no. 4 (2023): 963–83. http://dx.doi.org/10.11834/jig.211139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Hu, Peng, Liangli Zhen, Xi Peng, Hongyuan Zhu, Jie Lin, Xu Wang, and Dezhong Peng. "Deep Supervised Multi-View Learning With Graph Priors." IEEE Transactions on Image Processing 33 (2024): 123–33. http://dx.doi.org/10.1109/tip.2023.3335825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Weikang, Xiang, Zhou Quan, Cui Jingcheng, Mo Zhiyi, Wu Xiaofu, Ou Weihua, Wang Jingdong, and Liu Wenyu. "Weakly supervised semantic segmentation based on deep learning." Journal of Image and Graphics 29, no. 5 (2024): 1146–68. http://dx.doi.org/10.11834/jig.230628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Aversa, Rossella, Piero Coronica, Cristiano De Nobili, and Stefano Cozzini. "Deep Learning, Feature Learning, and Clustering Analysis for SEM Image Classification." Data Intelligence 2, no. 4 (October 2020): 513–28. http://dx.doi.org/10.1162/dint_a_00062.

Full text
Abstract:
In this paper, we report upon our recent work aimed at improving and adapting machine learning algorithms to automatically classify nanoscience images acquired by the Scanning Electron Microscope (SEM). This is done by coupling supervised and unsupervised learning approaches. We first investigate supervised learning on a ten-category data set of images and compare the performance of the different models in terms of training accuracy. Then, we reduce the dimensionality of the features through autoencoders to perform unsupervised learning on a subset of images in a selected range of scales (from 1 μm to 2 μm). Finally, we compare different clustering methods to uncover intrinsic structures in the images.
APA, Harvard, Vancouver, ISO, and other styles
42

Epstein, Sean C., Timothy J. P. Bray, Margaret Hall-Craggs, and Hui Zhang. "Choice of training label matters: how to best use deep learning for quantitative MRI parameter estimation." Machine Learning for Biomedical Imaging 2, January 2024 (January 23, 2024): 586–610. http://dx.doi.org/10.59275/j.melba.2024-geb5.

Full text
Abstract:
Deep learning (DL) is gaining popularity as a parameter estimation method for quantitative MRI. A range of competing implementations have been proposed, relying on either supervised or self-supervised learning. Self-supervised approaches, sometimes referred to as unsupervised, have been loosely based on auto-encoders, whereas supervised methods have, to date, been trained on groundtruth labels. These two learning paradigms have been shown to have distinct strengths. Notably, self-supervised approaches offer lower-bias parameter estimates than their supervised alternatives. This result is counterintuitive – incorporating prior knowledge with supervised labels should, in theory, lead to improved accuracy. In this work, we show that this apparent limitation of supervised approaches stems from the naïve choice of groundtruth training labels. By training on labels which are deliberately not groundtruth, we show that the low-bias parameter estimation previously associated with self-supervised methods can be replicated – and improved on – within a supervised learning framework. This approach sets the stage for a single, unifying, deep learning parameter estimation framework, based on supervised learning, where trade-offs between bias and variance are made by careful adjustment of the training labels.
APA, Harvard, Vancouver, ISO, and other styles
43

Prashant Krishnan, V., S. Rajarajeswari, Venkat Krishnamohan, Vivek Chandra Sheel, and R. Deepak. "Music Generation Using Deep Learning Techniques." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 3983–87. http://dx.doi.org/10.1166/jctn.2020.9003.

Full text
Abstract:
This paper primarily aims to compare two deep learning techniques in the task of learning musical styles and generating novel musical content. Long Short Term Memory (LSTM), a supervised learning algorithm is used, which is a variation of the Recurrent Neural Network (RNN), frequently used for sequential data. Another technique explored is Generative Adversarial Networks (GAN), an unsupervised approach which is used to learn a distribution of a particular style, and novelly combine components to create sequences. The representation of data from the MIDI files as chord and note embedding are essential to the performance of the models. This type of embedding in the network helps it to discover structural patterns in the samples. Through the study, it is seen how a supervised learning technique performs better than the unsupervised one. A study helped in obtaining a Mean Opinion Score (MOS), which was used as an indicator of the comparative quality and performance of the respective techniques.
APA, Harvard, Vancouver, ISO, and other styles
44

Zheng, Huan, Tongyao Pang, and Hui Ji. "Unsupervised Deep Video Denoising with Untrained Network." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3651–59. http://dx.doi.org/10.1609/aaai.v37i3.25476.

Full text
Abstract:
Deep learning has become a prominent tool for video denoising. However, most existing deep video denoising methods require supervised training using noise-free videos. Collecting noise-free videos can be costly and challenging in many applications. Therefore, this paper aims to develop an unsupervised deep learning method for video denoising that only uses a single test noisy video for training. To achieve this, an unsupervised loss function is presented that provides an unbiased estimator of its supervised counterpart defined on noise-free video. Additionally, a temporal attention mechanism is proposed to exploit redundancy among frames. The experiments on video denoising demonstrate that the proposed unsupervised method outperforms existing unsupervised methods and remains competitive against recent supervised deep learning methods.
APA, Harvard, Vancouver, ISO, and other styles
45

Song, Jingkuan, Lianli Gao, Fuhao Zou, Yan Yan, and Nicu Sebe. "Deep and fast: Deep learning hashing with semi-supervised graph construction." Image and Vision Computing 55 (November 2016): 101–8. http://dx.doi.org/10.1016/j.imavis.2016.02.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Vanyan, Ani, and Hrant Khachatrian. "Deep Semi-Supervised Image Classification Algorithms: a Survey." JUCS - Journal of Universal Computer Science 27, no. 12 (December 28, 2021): 1390–407. http://dx.doi.org/10.3897/jucs.77029.

Full text
Abstract:
Semi-supervised learning is a branch of machine learning focused on improving the performance of models when labeled data is scarce but there is access to a large number of unlabeled examples. Over the past five years there has been remarkable progress in designing algorithms that achieve reasonable image classification accuracy with access to the labels for only 0.1% of the samples. In this survey, we describe most of the recently proposed deep semi-supervised learning algorithms for image classification and identify the main trends of research in the field. Next, we compare several components of the algorithms, discuss the challenges of reproducing the results in this area, and highlight recently proposed applications of the methods originally developed for semi-supervised learning.
APA, Harvard, Vancouver, ISO, and other styles
47

Tekleselassie, Hailye. "A Deep Learning Approach for DDoS Attack Detection Using Supervised Learning." MATEC Web of Conferences 348 (2021): 01012. http://dx.doi.org/10.1051/matecconf/202134801012.

Full text
Abstract:
This research presents a novel combined learning method that develops a DDoS detection model with the expandable and flexible properties of deep learning, advancing current practice in DDoS detection. Deep learning is combined with knowledge-graph classification: the deep learning algorithm is used to develop the classifier model, while the knowledge-graph system makes the model expandable and flexible. The approach is analytically verified on the CICIDS2017 dataset of 53,127 instances using ten-fold cross-validation, and experimental outcomes indicate a performance of 99.97%. Notably, knowledge-rich learning for DDoS detection serves as a basic building block of DDoS detection and prevention methods, so security professionals are advised to integrate such DDoS detection into their networks.
APA, Harvard, Vancouver, ISO, and other styles
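The abstract above reports ten-fold cross-validation on CICIDS2017. As a generic, hypothetical illustration of how k-fold splitting partitions a dataset (unrelated to the paper's actual pipeline), each fold serves once as the test set while the remaining folds form the training set:

```python
def k_fold_indices(n, k):
    """Partition indices 0..n-1 into k folds; yield (train, test) index
    lists, with each fold used exactly once as the test set."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# 10-fold split of a 50-sample dataset: every sample lands in exactly
# one test fold, and train/test never overlap within a split.
splits = list(k_fold_indices(50, 10))
```

Here folds are interleaved for simplicity; in practice, samples are usually shuffled (or stratified by class) before splitting.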
48

Adke, Shrinidhi, Changying Li, Khaled M. Rasheed, and Frederick W. Maier. "Supervised and Weakly Supervised Deep Learning for Segmentation and Counting of Cotton Bolls Using Proximal Imagery." Sensors 22, no. 10 (May 12, 2022): 3688. http://dx.doi.org/10.3390/s22103688.

Full text
Abstract:
The total boll count from a plant is one of the most important phenotypic traits for cotton breeding and is also an important factor for growers to estimate the final yield. With the recent advances in deep learning, many supervised learning approaches have been implemented to perform phenotypic trait measurement from images for various crops, but few studies have been conducted to count cotton bolls from field images. Supervised learning models require a vast number of annotated images for training, which has become a bottleneck for machine learning model development. The goal of this study is to develop both fully supervised and weakly supervised deep learning models to segment and count cotton bolls from proximal imagery. A total of 290 RGB images of cotton plants from both potted (indoor and outdoor) and in-field settings were taken by consumer-grade cameras, and the raw images were divided into 4350 image tiles for further model training and testing. Two supervised models (Mask R-CNN and S-Count) and two weakly supervised approaches (WS-Count and CountSeg) were compared in terms of boll count accuracy and annotation costs. The results revealed that the weakly supervised counting approaches performed well, with RMSE values of 1.826 and 1.284 for WS-Count and CountSeg, respectively, whereas the fully supervised models achieved RMSE values of 1.181 and 1.175 for S-Count and Mask R-CNN, respectively, when the number of bolls in an image patch was less than 10. In terms of data annotation costs, the weakly supervised approaches were at least 10 times more cost-efficient than the supervised approach for boll counting. In the future, the deep learning models developed in this study can be extended to other plant organs, such as main stalks, nodes, and primary and secondary branches. Both the supervised and weakly supervised deep learning models for boll counting with low-cost RGB images can be used by cotton breeders, physiologists, and growers alike to improve crop breeding and yield estimation.
APA, Harvard, Vancouver, ISO, and other styles
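The entry above compares counting models by RMSE. As a small illustration with made-up counts (not the paper's data), RMSE is simply the square root of the mean squared error between predicted and true values:

```python
import math

def rmse(pred, true):
    """Root-mean-square error between predicted and true counts."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

# Hypothetical per-image boll counts for four image patches.
predicted = [8, 5, 11, 3]
actual = [9, 5, 10, 5]
err = rmse(predicted, actual)  # sqrt((1 + 0 + 1 + 4) / 4) = sqrt(1.5)
```

An RMSE near 1.2, as reported for the supervised models, thus corresponds to being off by roughly one boll per patch on average.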
49

Li, Ji, Yuesong Nan, and Hui Ji. "Un-supervised learning for blind image deconvolution via Monte-Carlo sampling." Inverse Problems 38, no. 3 (February 11, 2022): 035012. http://dx.doi.org/10.1088/1361-6420/ac4ede.

Full text
Abstract:
Deep learning has been a powerful tool for solving many inverse imaging problems. The majority of existing deep-learning-based solutions are supervised on an external dataset with many blurred/latent image pairs. Recently, there has been an increasing interest in developing dataset-free deep learning methods for image recovery without any prerequisite of an external training dataset, including blind deconvolution. This paper aims at developing an un-supervised learning method for blind image deconvolution, which does not require any training samples yet provides very competitive performance. Based on the re-parametrization of the latent image using a deep network with random weights, this paper proposed to approximate the maximum a posteriori estimator of the blur kernel using the Monte-Carlo (MC) sampling method. The MC sampling is efficiently implemented by using dropout and a random noise layer, which does not require a conjugate model as traditional variational inference does. Extensive experiments on popular benchmark datasets for blind image deconvolution showed that the proposed method not only outperformed existing non-learning methods, but also noticeably outperformed existing deep learning methods, including both supervised and un-supervised ones.
APA, Harvard, Vancouver, ISO, and other styles
50

Nisha, C. M., and N. Thangarasu. "Deep learning algorithms and their relevance: A review." International Journal of Data Informatics and Intelligent Computing 2, no. 4 (December 9, 2023): 1–10. http://dx.doi.org/10.59461/ijdiic.v2i4.78.

Full text
Abstract:
Nowadays, the most revolutionary area in computer science is deep learning algorithms and models. This paper discusses deep learning and various supervised, unsupervised, and reinforcement learning models. An overview of the artificial neural network (ANN), convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM), self-organizing map (SOM), restricted Boltzmann machine (RBM), deep belief network (DBN), generative adversarial network (GAN), autoencoders, gated recurrent unit (GRU), and bidirectional LSTM is provided. Various deep learning application areas are also discussed. The most trending ChatGPT, which can understand natural language and respond to needs in various ways, uses supervised and reinforcement learning techniques. Additionally, the limitations of deep learning are discussed. This paper provides a snapshot of deep learning.
APA, Harvard, Vancouver, ISO, and other styles