Journal articles on the topic "Unsupervised deep neural networks"

Follow this link to see other types of publications on the topic: Unsupervised deep neural networks.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for your research on the topic "Unsupervised deep neural networks".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse journal articles from a wide variety of scientific fields and compile a correct bibliography.

1

Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation". International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.

2

Guo, Wenqi, Weixiong Zhang, Zheng Zhang, Ping Tang, and Shichen Gao. "Deep Temporal Iterative Clustering for Satellite Image Time Series Land Cover Analysis". Remote Sensing 14, no. 15 (July 29, 2022): 3635. http://dx.doi.org/10.3390/rs14153635.

Abstract:
The extensive amount of Satellite Image Time Series (SITS) data brings new opportunities and challenges for land cover analysis. Many supervised machine learning methods have been applied to SITS, but labeled SITS samples are time-consuming and labor-intensive to acquire, so it is necessary to analyze SITS data with unsupervised learning methods. In this paper, we propose a new unsupervised learning method named Deep Temporal Iterative Clustering (DTIC) to deal with SITS data. The proposed method jointly learns a neural network's parameters and the cluster assignments of the resulting features: a standard clustering algorithm, K-means, iteratively clusters the features produced by the feature extraction network, and the subsequent assignments are then used as supervision to update the network's weights. We apply DTIC to the unsupervised training of neural networks on two SITS datasets. Experimental results demonstrate that DTIC outperforms the state-of-the-art K-means clustering algorithm, which shows that the proposed approach provides a novel idea for unsupervised training on SITS data.
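The alternation described in this abstract (extract features, cluster them with K-means, treat the assignments as pseudo-labels, update the extractor) can be sketched in plain NumPy. This is a toy illustration under invented shapes and learning rates, not the paper's implementation: a tanh projection stands in for the feature-extraction network, and the pseudo-label update is a crude substitute for backpropagation through a classifier head.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=10):
    # plain k-means; here it only serves to produce pseudo-labels
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# toy "SITS pixels": 200 samples, 12 time steps each
X = rng.normal(size=(200, 12))
W = rng.normal(size=(12, 4))              # stand-in feature extractor

for _ in range(3):                        # DTIC-style alternation
    feats = np.tanh(X @ W)                # 1) extract features
    labels, centers = kmeans(feats, 3)    # 2) cluster the features
    # 3) nudge the extractor so each feature moves toward its assigned
    #    centroid (gradient of 0.5*||tanh(XW) - center||^2 w.r.t. W)
    grad = (feats - centers[labels]) * (1 - feats ** 2)
    W -= 0.01 * X.T @ grad / len(X)
```

The loop terminates with one cluster label per sample; in the paper the same alternation runs over a deep network instead of this linear-tanh stand-in.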
3

Xu, Jianqiao, Zhaolu Zuo, Danchao Wu, Bing Li, Xiaoni Li, and Deyi Kong. "Bearing Defect Detection with Unsupervised Neural Networks". Shock and Vibration 2021 (August 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/9544809.

Abstract:
Bearings always suffer from surface defects, such as scratches, black spots, and pits. These surface defects have great effects on the quality and service life of bearings, so defect detection has always been the focus of bearing quality control. Deep learning has been successfully applied to object detection due to its excellent performance. However, it is difficult to realize automatic detection of bearing surface defects with data-driven deep learning because few samples of bearing defects are available on the actual production line. A sample preprocessing algorithm based on the normalized symmetry of the bearing is therefore adopted to greatly increase the number of samples. Two different convolutional neural networks, supervised and unsupervised, are tested separately for bearing defect detection. The first experiment adopts supervised networks, with ResNet selected as the supervised architecture. The result shows that the AUC of the model is 0.8567, which is too low for practical use; moreover, the positive and negative samples must be labelled manually. To improve the AUC of the model and the flexibility of sample labelling, a new unsupervised neural network based on autoencoder networks is proposed: gradients of the unlabeled data are used as labels, and autoencoder networks are created with U-Net to predict the output. In the second experiment, the positive samples of the supervised experiment are used as the training set, and the unsupervised network achieves an AUC of 0.9721. This is higher than in the first experiment, but the positive samples must still be selected. To overcome this shortcoming, the dataset of the third experiment is the same as in the supervised experiment, with all positive and negative samples mixed together, so there is no need to label the samples. This experiment yields an AUC of 0.9623. Although this is slightly lower than in the second experiment, it is high enough for practical use. The experimental results demonstrate the feasibility and superiority of the proposed unsupervised networks.
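The core idea of scoring defects by how badly an autoencoder trained only on normal data reconstructs them can be sketched with a linear (PCA-style) autoencoder in NumPy. All data, dimensions, and thresholds below are invented for illustration; the paper itself uses a U-Net autoencoder on bearing images.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for bearing patches: "normal" samples lie near a
# 2-D subspace of R^8; defective samples do not share that structure.
basis = rng.normal(size=(2, 8))
normal = rng.normal(size=(300, 2)) @ basis + 0.05 * rng.normal(size=(300, 8))
defect = 3.0 * rng.normal(size=(50, 8))

mu = normal.mean(0)
# Linear "autoencoder" fitted on normal data only: encoder and decoder
# are the top-2 principal axes of the normal samples.
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
code = Vt[:2]

def anomaly_score(x):
    # anomaly score = reconstruction error through the 2-D bottleneck
    rec = (x - mu) @ code.T @ code + mu
    return ((x - rec) ** 2).sum(-1)

# Defects reconstruct much worse than normal samples, so thresholding
# the score separates the two classes without any defect labels.
```

In the unsupervised setting of the paper's third experiment, the training set may even contain some mixed-in defects; reconstruction-error scoring still works as long as defects are rare.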
4

Feng, Yu, and Hui Sun. "Basketball Footwork and Application Supported by Deep Learning Unsupervised Transfer Method". International Journal of Information Technology and Web Engineering 18, no. 1 (December 1, 2023): 1–17. http://dx.doi.org/10.4018/ijitwe.334365.

Abstract:
The combination of traditional basketball footwork mobile teaching and AI will become a hot spot in basketball footwork research. This article uses a deep learning (DL) unsupervised transfer method: convolutional neural networks extract source- and target-domain samples for transfer learning, features are extracted from the data, and the impending action of a basketball player is predicted. Meanwhile, the unsupervised human action transfer method is studied to provide new ideas for modeling basketball footwork action series data. Finally, the theoretical framework of DL unsupervised transfer learning is reviewed, and its principle is explored and applied to the teaching of basketball footwork. The results show that convolutional neural networks can predict players' movement trajectories, and that unsupervised training using network data dramatically increases the variety of actions during training. The classification accuracy of the transfer learning method is high, and it can be applied to the different basketball footwork used at the corresponding stages of play.
5

Sun, Yanan, Gary G. Yen, and Zhang Yi. "Evolving Unsupervised Deep Neural Networks for Learning Meaningful Representations". IEEE Transactions on Evolutionary Computation 23, no. 1 (February 2019): 89–103. http://dx.doi.org/10.1109/tevc.2018.2808689.

6

Shi, Yu, Cien Fan, Lian Zou, Caixia Sun, and Yifeng Liu. "Unsupervised Adversarial Defense through Tandem Deep Image Priors". Electronics 9, no. 11 (November 19, 2020): 1957. http://dx.doi.org/10.3390/electronics9111957.

Abstract:
Deep neural networks are vulnerable to adversarial examples, which are synthesized by adding imperceptible perturbations to the original image yet fool the classifier into producing wrong predictions. This paper proposes an image restoration approach which provides a strong defense mechanism against adversarial attacks. We show that the unsupervised image restoration framework deep image prior can effectively eliminate the influence of adversarial perturbations. The proposed method uses multiple deep image prior networks, called tandem deep image priors, to recover the original image from the adversarial example. Tandem deep image priors contain two deep image prior networks: the first captures the main information of the image, and the second recovers the original image based on the prior information provided by the first. The proposed method reduces the number of iterations originally required by the deep image prior network and does not require adjusting the classifier or pre-training, and it can be combined with other defensive methods. Our experiments show that the proposed method surprisingly achieves higher classification accuracy on ImageNet against a wide variety of adversarial attacks than previous state-of-the-art defense methods.
7

Thakur, Amey. "Generative Adversarial Networks". International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2307–25. http://dx.doi.org/10.22214/ijraset.2021.37723.

Abstract:
Deep learning's breakthrough in the field of artificial intelligence has resulted in the creation of a slew of deep learning models. One of these is the Generative Adversarial Network, which has only recently emerged. The goal of GAN is to use unsupervised learning to analyse the distribution of data and create more accurate results. The GAN allows the learning of deep representations in the absence of substantial labelled training information. Computer vision, language and video processing, and image synthesis are just a few of the applications that might benefit from these representations. The purpose of this research is to get the reader conversant with the GAN framework as well as to provide the background information on Generative Adversarial Networks, including the structure of both the generator and discriminator, as well as the various GAN variants along with their respective architectures. Applications of GANs are also discussed with examples.

Keywords: Generative Adversarial Networks (GANs), Generator, Discriminator, Supervised and Unsupervised Learning, Discriminative and Generative Modelling, Backpropagation, Loss Functions, Machine Learning, Deep Learning, Neural Networks, Convolutional Neural Network (CNN), Deep Convolutional GAN (DCGAN), Conditional GAN (cGAN), Information Maximizing GAN (InfoGAN), Stacked GAN (StackGAN), Pix2Pix, Wasserstein GAN (WGAN), Progressive Growing GAN (ProGAN), BigGAN, StyleGAN, CycleGAN, Super-Resolution GAN (SRGAN), Image Synthesis, Image-to-Image Translation.
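The generator/discriminator game this survey describes can be shown at its smallest possible scale: a 1-D GAN with hand-derived gradients, where the generator is an affine map and the discriminator a logistic unit. Everything here (the data distribution, parameterization, and learning rates) is a made-up illustration of the adversarial objective, not any GAN from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# real data ~ N(4, 1); generator G(z) = a*z + b on noise z ~ N(0, 1);
# discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.05

for _ in range(500):
    real = 4.0 + rng.normal(size=32)
    z = rng.normal(size=32)
    fake = a * z + b

    # discriminator ascent on log D(real) + log(1 - D(fake))
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - dr) * real - df * fake).mean()
    c += lr * ((1 - dr) - df).mean()

    # generator ascent on the non-saturating objective log D(G(z))
    df = sigmoid(w * fake + c)
    a += lr * ((1 - df) * w * z).mean()
    b += lr * ((1 - df) * w).mean()
```

After training, the generator's offset `b` has drifted from 0 toward the real data mean, because the discriminator's gradient pushes generated samples toward regions it classifies as real.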
8

Ferles, Christos, Yannis Papanikolaou, Stylianos P. Savaidis, and Stelios A. Mitilineos. "Deep Self-Organizing Map of Convolutional Layers for Clustering and Visualizing Image Data". Machine Learning and Knowledge Extraction 3, no. 4 (November 14, 2021): 879–99. http://dx.doi.org/10.3390/make3040044.

Abstract:
The self-organizing convolutional map (SOCOM) hybridizes convolutional neural networks, self-organizing maps, and gradient backpropagation optimization into a novel integrated unsupervised deep learning model. SOCOM structurally combines, architecturally stacks, and algorithmically fuses its deep/unsupervised learning components. The higher-level representations produced by its underlying convolutional deep architecture are embedded in its topologically ordered neural map output. The ensuing unsupervised clustering and visualization operations reflect the model’s degree of synergy between its building blocks and synopsize its range of applications. Clustering results are reported on the STL-10 benchmark dataset coupled with the devised neural map visualizations. The series of conducted experiments utilize a deep VGG-based SOCOM model.
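The self-organizing-map half of SOCOM can be sketched on its own: a 1-D map of units fitted to 2-D points with the classic best-matching-unit update and a shrinking neighborhood. The convolutional feature extractor that SOCOM stacks underneath is omitted, and all sizes and schedules below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

X = rng.uniform(size=(500, 2))             # toy 2-D "features"
n_units = 10
Wm = rng.uniform(size=(n_units, 2))        # map unit weight vectors
grid = np.arange(n_units)                  # unit coordinates on the 1-D map

for t in range(2000):
    x = X[rng.integers(len(X))]
    bmu = ((Wm - x) ** 2).sum(1).argmin()  # best-matching unit
    sigma = 3.0 * (1 - t / 2000) + 0.5     # shrinking neighborhood width
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
    Wm += 0.1 * h[:, None] * (x - Wm)      # pull BMU and neighbors toward x
```

Because each update is a convex combination of the old weight and a data point, the trained weights stay inside the data's unit square, and neighboring units on the map end up close in data space, which is what gives the topologically ordered visualization the abstract refers to.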
9

Zhuang, Chengxu, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L. K. Yamins. "Unsupervised neural network models of the ventral visual stream". Proceedings of the National Academy of Sciences 118, no. 3 (January 11, 2021): e2014196118. http://dx.doi.org/10.1073/pnas.2014196118.

Abstract:
Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today’s best supervised methods and that the mapping of these neural network models’ hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.
10

Lin, Baihan. "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers". Entropy 24, no. 1 (December 28, 2021): 59. http://dx.doi.org/10.3390/e24010059.

Abstract:
Inspired by the adaptation phenomenon of neuronal firing, we propose the regularity normalization (RN) as an unsupervised attention mechanism (UAM) which computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, the regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced and non-stationary input distributions in image classification, classic control, procedurally-generated reinforcement learning, generative modeling, handwriting generation and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks, tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
11

Abiyev, Rahib H., and Mohammad Khaleel Sallam Ma’aitah. "Deep Convolutional Neural Networks for Chest Diseases Detection". Journal of Healthcare Engineering 2018 (August 1, 2018): 1–11. http://dx.doi.org/10.1155/2018/4168538.

Abstract:
Chest diseases are very serious health problems in people's lives. These diseases include chronic obstructive pulmonary disease, pneumonia, asthma, tuberculosis, and lung diseases. The timely diagnosis of chest diseases is very important, and many methods have been developed for this purpose. In this paper, we demonstrate the feasibility of classifying chest pathologies in chest X-rays using conventional and deep learning approaches. Convolutional neural networks (CNNs) are presented for the diagnosis of chest diseases, and the architecture of the CNN and its design principle are described. For comparison, backpropagation neural networks (BPNNs) with supervised learning and competitive neural networks (CpNNs) with unsupervised learning are also constructed for diagnosing chest diseases. All the considered networks, CNN, BPNN, and CpNN, are trained and tested on the same chest X-ray database, and the performance of each network is discussed. Comparative results in terms of accuracy, error rate, and training time between the networks are presented.
12

Browne, David, Michael Giering, and Steven Prestwich. "PulseNetOne: Fast Unsupervised Pruning of Convolutional Neural Networks for Remote Sensing". Remote Sensing 12, no. 7 (March 29, 2020): 1092. http://dx.doi.org/10.3390/rs12071092.

Abstract:
Scene classification is an important aspect of image/video understanding and segmentation. However, remote-sensing scene classification is a challenging image recognition task, partly due to the limited training data, which causes deep-learning Convolutional Neural Networks (CNNs) to overfit. Another difficulty is that images often have very different scales and orientation (viewing angle). Yet another is that the resulting networks may be very large, again making them prone to overfitting and unsuitable for deployment on memory- and energy-limited devices. We propose an efficient deep-learning approach to tackle these problems. We use transfer learning to compensate for the lack of data, and data augmentation to tackle varying scale and orientation. To reduce network size, we use a novel unsupervised learning approach based on k-means clustering, applied to all parts of the network: most network reduction methods use computationally expensive supervised learning methods, and apply only to the convolutional or fully connected layers, but not both. In experiments, we set new standards in classification accuracy on four remote-sensing and two scene-recognition image datasets.
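The unsupervised k-means-based reduction idea can be sketched for a single convolutional layer: flatten the layer's filters, cluster them, and keep one representative filter per cluster. The filter counts and shapes below are made up for illustration, and this is only the spirit of the approach, not PulseNetOne itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# a conv layer with 64 filters of shape 3x3x3 (out-channels first)
filters = rng.normal(size=(64, 3, 3, 3))
flat = filters.reshape(64, -1)

k = 16                                          # target filter count
centers = flat[rng.choice(64, k, replace=False)].copy()
for _ in range(20):                             # plain k-means on filters
    d = ((flat[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = d.argmin(1)
    for j in range(k):
        if (labels == j).any():
            centers[j] = flat[labels == j].mean(0)

# keep the actual filter closest to each centroid; prune the rest
d = ((flat[:, None, :] - centers[None]) ** 2).sum(-1)
keep = np.unique([int(d[:, j].argmin()) for j in range(k)])
pruned = filters[keep]
```

Keeping real filters (rather than centroids) preserves weights the network was actually trained with; after pruning, the following layer's input channels would be sliced to match `keep`.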
13

Yi, Cheng. "Application of Convolutional Networks in Clothing Design from the Perspective of Deep Learning". Scientific Programming 2022 (September 27, 2022): 1–8. http://dx.doi.org/10.1155/2022/6173981.

Abstract:
A convolutional neural network (CNN) is a machine learning method under supervised learning. It not only has the advantages of high fault tolerance and self-learning ability of other traditional neural networks but also has the advantages of weight sharing, automatic feature extraction, and the combination of the input image and network. It avoids the process of data reconstruction and feature extraction in traditional recognition algorithms. For example, as an unsupervised generation model, the convolutional confidence network (CCN) generated by the combination of convolutional neural network and confidence network has been successfully applied to face feature extraction.
14

Ghosh, Saheb, Sathis Kumar B, and Kathir Deivanai. "DETECTION OF WHALES USING DEEP LEARNING METHODS AND NEURAL NETWORKS". Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (April 1, 2017): 489. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.20767.

Abstract:
Deep learning methods are a powerful machine learning technique mostly used in artificial neural networks for pattern recognition. This project identifies whales in an underwater bioacoustics network using an efficient algorithm and data model, so that the locations of the whales can be sent to ships travelling in the same region in order to avoid collisions with the whales and to disturb their natural habitat as little as possible. This paper shows an application of unsupervised machine learning techniques with the help of a deep belief network and a manual feature extraction model for better results.
15

Solomon, Enoch, Abraham Woubie, and Krzysztof J. Cios. "UFace: An Unsupervised Deep Learning Face Verification System". Electronics 11, no. 23 (November 26, 2022): 3909. http://dx.doi.org/10.3390/electronics11233909.

Abstract:
Deep convolutional neural networks are often used for image verification but require large amounts of labeled training data, which are not always available. To address this problem, an unsupervised deep learning face verification system, called UFace, is proposed here. It starts by selecting from large unlabeled data the k most similar and k most dissimilar images to a given face image and uses them for training. UFace is implemented using methods of the autoencoder and Siamese network; the latter is used in all comparisons as its performance is better. Unlike in typical deep neural network training, UFace computes the loss function k times for similar images and k times for dissimilar images for each input image. UFace’s performance is evaluated using four benchmark face verification datasets: Labeled Faces in the Wild (LFW), YouTube Faces (YTF), Cross-age LFW (CALFW) and Celebrities in Frontal Profile in the Wild (CFP-FP). UFace with the Siamese network achieved accuracies of 99.40%, 96.04%, 95.12% and 97.89%, respectively, on the four datasets. These results are comparable with the state-of-the-art methods, such as ArcFace, GroupFace and MegaFace. The biggest advantage of UFace is that it uses much less training data and does not require labeled data.
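The data-selection step the abstract describes (pick the k most similar and k most dissimilar images for each anchor, then compute the loss k times per side) can be sketched with cosine similarity over embeddings. The embeddings here are random stand-ins for network outputs, and the logistic-style loss is an invented illustration, not UFace's exact objective.

```python
import numpy as np

rng = np.random.default_rng(5)

# a pool of 1000 unit-norm "face embeddings" plus one anchor
pool = rng.normal(size=(1000, 128))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)
anchor = pool[0]
others = pool[1:]

k = 8
sims = others @ anchor                     # cosine similarity to the anchor
order = sims.argsort()
similar = others[order[-k:]]               # k most similar -> pseudo-positives
dissimilar = others[order[:k]]             # k most dissimilar -> pseudo-negatives

# the loss is evaluated k times per side, pulling positives toward the
# anchor and pushing negatives away (similarity mapped to (0, 1) first)
pos = np.clip((similar @ anchor + 1) / 2, 1e-9, 1 - 1e-9)
neg = np.clip((dissimilar @ anchor + 1) / 2, 1e-9, 1 - 1e-9)
loss = -np.log(pos).sum() - np.log(1 - neg).sum()
```

In the real system these pseudo-pairs supervise an autoencoder or Siamese network, so no identity labels are ever needed.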
16

Altuntas, Volkan. "NodeVector: A Novel Network Node Vectorization with Graph Analysis and Deep Learning". Applied Sciences 14, no. 2 (January 16, 2024): 775. http://dx.doi.org/10.3390/app14020775.

Abstract:
Network node embedding captures structural and relational information of nodes in the network and allows us to use machine learning algorithms for various prediction tasks on network data that have an inherently complex and disordered structure. Network node embedding should preserve as much information as possible about important network properties, such as network structure and node properties, while representing nodes as numerical vectors in a lower-dimensional space than the original higher-dimensional space. Superior node embedding algorithms are a powerful tool for machine learning with effective and efficient node representation. Recent research in representation learning has led to significant advances in automating features through unsupervised learning, inspired by advances in natural language processing. Here, we seek to improve the representation quality of node embeddings with a new node vectorization technique that uses network analysis to overcome network-based information loss. In this study, we introduce the NodeVector algorithm, which combines network analysis and neural networks to transfer information from the target network to the node embedding. As a proof of concept, our experiments performed on different categories of network datasets showed that our method achieves better results than its competitors for target networks. This is the first study to produce node representations by unsupervised learning using the combination of network analysis and neural networks while taking the network data structure into account. Based on experimental results, the use of network analysis, complex initial node representation, balanced negative sampling, and neural networks has a positive effect on the representation quality of network node embedding.
17

Ma, Chao, Yun Gu, Chen Gong, Jie Yang, and Deying Feng. "Unsupervised Video Hashing via Deep Neural Network". Neural Processing Letters 47, no. 3 (March 17, 2018): 877–90. http://dx.doi.org/10.1007/s11063-018-9812-x.

18

Naidu, D. J. Samatha, and T. Mahammad Rafi. "HANDWRITTEN CHARACTER RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS". International Journal of Computer Science and Mobile Computing 10, no. 8 (August 30, 2021): 41–45. http://dx.doi.org/10.47760/ijcsmc.2021.v10i08.007.

Abstract:
Handwritten character recognition is one of the active areas of research in which deep neural networks are utilized. It is a challenging task for several reasons. The primary reason is that different people have different styles of handwriting; the secondary reason is that there are many characters, such as capital letters, small letters, and special symbols. In existing work, handwritten character recognition systems have been designed using fuzzy logic and implemented on VLSI (very-large-scale integration) structures. To recognize Tamil characters, neural networks with the Kohonen self-organizing map (SOM), an unsupervised neural network, have been used. The proposed system is an image-segmentation-based handwritten character recognition system. The convolutional neural network is the current state of the art in neural networks and has wide application in fields such as image and video recognition. The system easily recognizes English text, letters, and digits, using OpenCV for image processing and TensorFlow for training the neural network. To develop this concept, an innovative method for offline handwritten character detection using deep neural networks is proposed in the Python programming language.
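The building blocks of the CNN this abstract relies on (convolution, ReLU, max pooling) can be shown in a minimal hand-rolled forward pass. The 28x28 patch size and random kernel are invented for illustration; a real system would use TensorFlow layers as the paper does.

```python
import numpy as np

rng = np.random.default_rng(6)

def conv2d(img, kernel):
    # "valid" 2-D convolution of a single-channel image with one kernel
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def maxpool2x2(x):
    # non-overlapping 2x2 max pooling (trailing odd row/col dropped)
    H, W = (d // 2 * 2 for d in x.shape)
    x = x[:H, :W]
    return x.reshape(H // 2, 2, W // 2, 2).max((1, 3))

img = rng.uniform(size=(28, 28))       # a segmented character patch
kernel = rng.normal(size=(3, 3))       # one learned filter (random here)
feat = maxpool2x2(np.maximum(conv2d(img, kernel), 0.0))  # conv -> ReLU -> pool
```

A 28x28 input with a 3x3 kernel gives a 26x26 map, which pools down to 13x13; stacking several such filter banks and a dense softmax layer yields the character classifier described above.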
19

Hu, Ruiqi, Shirui Pan, Guodong Long, Qinghua Lu, Liming Zhu, and Jing Jiang. "Going Deep: Graph Convolutional Ladder-Shape Networks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2838–45. http://dx.doi.org/10.1609/aaai.v34i03.5673.

Abstract:
Neighborhood aggregation algorithms like spectral graph convolutional networks (GCNs) formulate graph convolutions as a symmetric Laplacian smoothing operation to aggregate the feature information of one node with that of its neighbors. While they have achieved great success in semi-supervised node classification on graphs, current approaches suffer from the over-smoothing problem when the depth of the neural networks increases, which always leads to a noticeable degradation of performance. To solve this problem, we present graph convolutional ladder-shape networks (GCLN), a novel graph neural network architecture that transmits messages from shallow layers to deeper layers to overcome the over-smoothing problem and dramatically extend the scale of the neural networks with improved performance. We have validated the effectiveness of proposed GCLN at a node-wise level with a semi-supervised task (node classification) and an unsupervised task (node clustering), and at a graph-wise level with graph classification by applying a differentiable pooling operation. The proposed GCLN outperforms original GCNs, deep GCNs and other state-of-the-art GCN-based models for all three tasks, which were designed from various perspectives on six real-world benchmark data sets.
20

Huang, Qiuyuan, Li Deng, Dapeng Wu, Chang Liu, and Xiaodong He. "Attentive Tensor Product Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1344–51. http://dx.doi.org/10.1609/aaai.v33i01.33011344.

Abstract:
This paper proposes a novel neural architecture — Attentive Tensor Product Learning (ATPL) — to represent grammatical structures of natural language in deep learning models. ATPL exploits Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, to integrate deep learning with explicit natural language structures and rules. The key ideas of ATPL are: 1) unsupervised learning of role-unbinding vectors of words via the TPR-based deep neural network; 2) the use of attention modules to compute TPR; and 3) the integration of TPR with typical deep learning architectures including long short-term memory and feedforward neural networks. The novelty of our approach lies in its ability to extract the grammatical structure of a sentence by using role-unbinding vectors, which are obtained in an unsupervised manner. Our ATPL approach is applied to 1) image captioning, 2) part of speech (POS) tagging, and 3) constituency parsing of a natural language sentence. The experimental results demonstrate the effectiveness of the proposed approach in all these three natural language processing tasks.
21

Huang, Jiabo, Qi Dong, Shaogang Gong, and Xiatian Zhu. "Unsupervised Deep Learning via Affinity Diffusion". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11029–36. http://dx.doi.org/10.1609/aaai.v34i07.6757.

Abstract:
Convolutional neural networks (CNNs) have achieved unprecedented success in a variety of computer vision tasks. However, they usually rely on supervised model learning with the need for massive labelled training data, limiting dramatically their usability and deployability in real-world scenarios without any labelling budget. In this work, we introduce a general-purpose unsupervised deep learning approach to deriving discriminative feature representations. It is based on self-discovering semantically consistent groups of unlabelled training samples with the same class concepts through a progressive affinity diffusion process. Extensive experiments on object image classification and clustering show the performance superiority of the proposed method over the state-of-the-art unsupervised learning models using six common image recognition benchmarks including MNIST, SVHN, STL10, CIFAR10, CIFAR100 and ImageNet.
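The affinity-diffusion idea of progressively connecting samples that share a class concept can be sketched in NumPy: build a pairwise affinity matrix over features, row-normalize it into a transition matrix, and take a few diffusion steps. Two Gaussian blobs stand in for CNN features; all bandwidths and step counts are invented, and the paper's actual progressive diffusion scheme is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(7)

# two well-separated "semantic groups" of 30 samples each in R^5
a = rng.normal(size=(30, 5)) + 4.0
b = rng.normal(size=(30, 5)) - 4.0
X = np.vstack([a, b])

# Gaussian affinity from pairwise squared distances
d = ((X[:, None, :] - X[None]) ** 2).sum(-1)
A = np.exp(-d / d.mean())
np.fill_diagonal(A, 0.0)

P = A / A.sum(1, keepdims=True)        # row-stochastic transition matrix
P4 = np.linalg.matrix_power(P, 4)      # 4-step affinity diffusion

# after diffusion, within-group affinity dominates cross-group affinity,
# so groups of semantically consistent samples can be self-discovered
within = P4[:30, :30].mean()
across = P4[:30, 30:].mean()
```

Thresholding or clustering the diffused affinities yields the pseudo-groups that then serve as training targets for the feature network.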
22

Tyshchenko, Vitalii. "ANALYSIS OF TRAINING METHODS AND NEURAL NETWORK TOOLS FOR FAKE NEWS DETECTION". Cybersecurity: Education, Science, Technique 4, no. 20 (2023): 20–34. http://dx.doi.org/10.28925/2663-4023.2023.20.2034.

Abstract:
This article analyses various training methods and neural network tools for fake news detection. Approaches to fake news detection based on textual, visual and mixed data are considered, as well as the use of different types of neural networks, such as recurrent neural networks, convolutional neural networks, deep neural networks, generative adversarial networks and others. Also considered are supervised and unsupervised learning methods such as autoencoding neural networks and deep variational autoencoding neural networks. Based on the analysed studies, attention is drawn to the problems associated with limitations in the volume and quality of data, as well as the lack of efficiency of tools for detecting complex types of fakes. The author analyses neural network-based applications and tools and draws conclusions about their effectiveness and suitability for different types of data and fake detection tasks. The study found that machine and deep learning models, as well as adversarial learning methods and special tools for detecting fake media, are effective in detecting fakes. However, the effectiveness and accuracy of these methods and tools can be affected by factors such as data quality, methods used for training and evaluation, and the complexity of the fake media being detected. Based on the analysis of training methods and neural network characteristics, the advantages and disadvantages of fake news detection are identified. Ongoing research and development in this area is crucial to improve the accuracy and reliability of these methods and tools for fake news detection.
23

Heo, Seongmin, and Jay H. Lee. "Statistical Process Monitoring of the Tennessee Eastman Process Using Parallel Autoassociative Neural Networks and a Large Dataset". Processes 7, no. 7 (July 1, 2019): 411. http://dx.doi.org/10.3390/pr7070411.

Texto completo da fonte
Resumo:
In this article, the statistical process monitoring problem of the Tennessee Eastman process is considered using deep learning techniques. This work is motivated by three limitations of the existing works on this problem. First, although deep learning has been used for process monitoring extensively, in the majority of the existing works the neural networks were trained in a supervised manner, assuming that normal/fault labels were available. However, this is not always the case in real applications. Thus, in this work, autoassociative neural networks, which are trained in an unsupervised fashion, are used. Another limitation is that the typical dataset used for monitoring the Tennessee Eastman process comprises only a small number of data samples, which can be highly limiting for deep learning. The dataset used in this work is 500 times larger than the typically-used dataset and is large enough for deep learning. Lastly, an alternative neural network architecture, called parallel autoassociative neural networks, is proposed to decouple the training of different principal components. The proposed architecture is expected to address the co-adaptation issue of fully-connected autoassociative neural networks. An extensive case study is designed and performed to evaluate the effects of the following neural network settings: neural network size, type of regularization, training objective function, and training epoch. The results are compared with those obtained using linear principal component analysis, and the advantages and limitations of the parallel autoassociative neural networks are illustrated.
Estilos ABNT, Harvard, Vancouver, APA, etc.
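The core mechanism above, an autoassociative network trained only on normal operating data that flags faults through reconstruction error, can be sketched in a few lines. The following is an illustrative toy (a tied-weight linear autoencoder with one latent unit, equivalent to one principal component), not the paper's parallel architecture; the data and names are invented.

```python
# Toy autoassociative network for process monitoring: train on "normal"
# 2-D data lying along the direction (1, 1), then flag samples with a
# large reconstruction error as faults.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

normal_data = [(1.0, 1.0), (2.0, 2.0), (-1.0, -1.1),
               (0.5, 0.6), (1.5, 1.4), (-2.0, -1.9)]

# Tied encoder/decoder weight vector w: code s = w.x, reconstruction s*w.
w = [1.0, 0.1]
lr = 0.005
for _ in range(2000):
    for x in normal_data:
        s = dot(w, x)                                # 1-unit latent code
        r = [xi - s * wi for xi, wi in zip(x, w)]    # residual x - s*w
        rw = dot(r, w)
        # Gradient of ||x - (w.x) w||^2 with respect to w.
        grad = [-2.0 * (xi * rw + s * ri) for xi, ri in zip(x, r)]
        w = [wi - lr * g for wi, g in zip(w, grad)]

def recon_error(x):
    s = dot(w, x)
    return sum((xi - s * wi) ** 2 for xi, wi in zip(x, w))

err_normal = recon_error((1.0, 1.0))   # on the learned manifold
err_fault = recon_error((1.0, -1.0))   # off the manifold -> flagged
```

The monitoring statistic is simply `recon_error`; a threshold calibrated on normal data separates the two cases.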
24

Cao, Yanpeng, Dayan Guan, Weilin Huang, Jiangxin Yang, Yanlong Cao e Yu Qiao. "Pedestrian detection with unsupervised multispectral feature learning using deep neural networks". Information Fusion 46 (março de 2019): 206–17. http://dx.doi.org/10.1016/j.inffus.2018.06.005.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Zhang, Pengfei, e Xiaoming Ju. "Adversarial Sample Detection with Gaussian Mixture Conditional Generative Adversarial Networks". Mathematical Problems in Engineering 2021 (13 de setembro de 2021): 1–18. http://dx.doi.org/10.1155/2021/8268249.

Texto completo da fonte
Resumo:
It is important to detect adversarial samples in the physical world that are far away from the training data distribution. Some adversarial samples can make a machine learning model generate a highly overconfident distribution in the testing stage. Thus, we proposed a mechanism for detecting adversarial samples based on semisupervised generative adversarial networks (GANs) with an encoder-decoder structure; this mechanism can be applied to any pretrained neural network without changing the network’s structure. The semisupervised GANs also give us insight into the behavior of adversarial samples and their flow through the layers of a deep neural network. In the supervised scenario, the latent feature of the semisupervised GAN and the target network’s logit information are used as the input of an external support vector machine classifier to detect the adversarial samples. In the unsupervised scenario, we first proposed a one-class classifier based on the semisupervised Gaussian mixture conditional generative adversarial network (GM-CGAN) to fit the joint feature information of the normal data, and then used a discriminator network to detect normal data and adversarial samples. In both supervised and unsupervised scenarios, experimental results show that our method outperforms the latest methods.
Estilos ABNT, Harvard, Vancouver, APA, etc.
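The unsupervised scenario above rests on a general principle: fit the distribution of normal features and flag samples that are too unlikely under it. As a heavily simplified stand-in for the GM-CGAN (a toy 1-D Gaussian instead of a mixture model fit by a conditional GAN; all numbers invented):

```python
import math

# Toy one-class detector: fit a 1-D Gaussian to features of "normal"
# samples and flag inputs whose negative log-likelihood is too high.

normal_feats = [0.9, 1.0, 1.1, 1.05, 0.95, 1.0, 1.02, 0.98]

mu = sum(normal_feats) / len(normal_feats)
var = sum((f - mu) ** 2 for f in normal_feats) / len(normal_feats)

def nll(x):
    """Negative log-likelihood under the fitted Gaussian."""
    return 0.5 * math.log(2 * math.pi * var) + (x - mu) ** 2 / (2 * var)

# Threshold: worst normal sample plus a small margin.
threshold = max(nll(f) for f in normal_feats) + 1.0

def is_adversarial(x):
    return nll(x) > threshold

flag_normal = is_adversarial(1.0)    # in-distribution feature
flag_attack = is_adversarial(3.0)    # far outside normal statistics
```

The paper's contribution is precisely a much richer density model of the joint features; the thresholding logic is the same.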
26

Khodayar, Mahdi, e Jacob Regan. "Deep Neural Networks in Power Systems: A Review". Energies 16, n.º 12 (17 de junho de 2023): 4773. http://dx.doi.org/10.3390/en16124773.

Texto completo da fonte
Resumo:
Identifying statistical trends for a wide range of practical power system applications, including sustainable energy forecasting, demand response, energy decomposition, and state estimation, is regarded as a significant task given the rapid expansion of power system measurements in terms of scale and complexity. In the last decade, deep learning has arisen as a new kind of artificial intelligence technique that expresses power grid datasets via an extensive hypothesis space, resulting in an outstanding performance in comparison with the majority of recent algorithms. This paper investigates the theoretical benefits of deep data representation in the study of power networks. We examine deep learning techniques described and deployed in a variety of supervised, unsupervised, and reinforcement learning scenarios. We explore different scenarios in which discriminative deep frameworks, such as Stacked Autoencoder networks and Convolution Networks, and generative deep architectures, including Deep Belief Networks and Variational Autoencoders, solve problems. This study’s empirical and theoretical evaluation of deep learning encourages long-term studies on improving this modern category of methods to accomplish substantial advancements in the future of electrical systems.
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Le Roux, Nicolas, e Yoshua Bengio. "Deep Belief Networks Are Compact Universal Approximators". Neural Computation 22, n.º 8 (agosto de 2010): 2192–207. http://dx.doi.org/10.1162/neco.2010.08-09-1081.

Texto completo da fonte
Resumo:
Deep belief networks (DBN) are generative models with many layers of hidden causal variables, recently introduced by Hinton, Osindero, and Teh (2006), along with a greedy layer-wise unsupervised learning algorithm. Building on Le Roux and Bengio (2008) and Sutskever and Hinton (2008), we show that deep but narrow generative networks do not require more parameters than shallow ones to achieve universal approximation. Exploiting the proof technique, we prove that deep but narrow feedforward neural networks with sigmoidal units can represent any Boolean expression.
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Zhu, Yi, Xinke Zhou e Xindong Wu. "Unsupervised Domain Adaptation via Stacked Convolutional Autoencoder". Applied Sciences 13, n.º 1 (29 de dezembro de 2022): 481. http://dx.doi.org/10.3390/app13010481.

Texto completo da fonte
Resumo:
Unsupervised domain adaptation involves knowledge transfer from a labeled source domain to unlabeled target domains to assist target learning tasks. A critical aspect of unsupervised domain adaptation is the learning of more transferable and distinct feature representations from different domains. Although previous investigations using, for example, CNN-based and autoencoder-based methods have produced remarkable results in domain adaptation, two main problems remain with these methods. The first is a training problem for deep neural networks: some optimization methods are ineffective when applied to unsupervised deep networks for domain adaptation tasks. The second problem is that redundancy in image data results in performance degradation in feature learning for domain adaptation. To address these problems, in this paper we propose an unsupervised domain adaptation method with a stacked convolutional sparse autoencoder, which performs layer-wise projection of the original data to obtain higher-level representations for unsupervised domain adaptation. More specifically, in a convolutional neural network, lower layers generate more discriminative features whose kernels are learned via a sparse autoencoder. A reconstruction independent component analysis optimization algorithm was introduced to perform individual component analysis on the input data. Experiments undertaken demonstrated superior classification performance, up to 89.3% accuracy, compared to several state-of-the-art domain adaptation methods, such as SSRLDA and TLMRA.
Estilos ABNT, Harvard, Vancouver, APA, etc.
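The sparsity constraint in sparse autoencoders like the one above is commonly imposed with a KL-divergence penalty that pushes each hidden unit's mean activation toward a small target level. A minimal sketch of that penalty term (the standard formulation, not necessarily the paper's exact loss):

```python
import math

def kl_sparsity(rho, rho_hat):
    """KL divergence between a Bernoulli(rho) target sparsity level and
    the observed mean activation rho_hat of one hidden unit."""
    return (rho * math.log(rho / rho_hat)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))

def sparsity_penalty(rho, mean_activations):
    """Total penalty over hidden units; mean_activations[j] is the mean
    activation of unit j over a batch."""
    return sum(kl_sparsity(rho, a) for a in mean_activations)

# A unit whose mean activation matches the 5% target contributes ~0;
# a unit active half the time is penalized.
p_match = sparsity_penalty(0.05, [0.05, 0.05])
p_dense = sparsity_penalty(0.05, [0.5, 0.5])
```

Adding this penalty (weighted by a hyperparameter) to the reconstruction loss is what makes the learned kernels sparse and, empirically, more discriminative.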
29

Lin, Yi-Nan, Tsang-Yen Hsieh, Cheng-Ying Yang, Victor RL Shen, Tony Tong-Ying Juang e Wen-Hao Chen. "Deep Petri nets of unsupervised and supervised learning". Measurement and Control 53, n.º 7-8 (9 de junho de 2020): 1267–77. http://dx.doi.org/10.1177/0020294020923375.

Texto completo da fonte
Resumo:
Artificial intelligence is one of the hottest research topics in computer science. In general, the most intuitive way to perform deep learning is to use a neural network, but neural networks have two shortcomings. First, they are not easy to understand: implementing one often requires substantial related research effort. Second, their structure is complex: achieving fully defined connections between nodes makes the overall structure complicated, and it is hard for developers to track the parameter changes inside. Therefore, the goal of this article is to provide a more streamlined method for performing deep learning. A modified high-level fuzzy Petri net, called a deep Petri net, is used to perform deep learning, in an attempt to offer a simple structure in which parameter changes can be tracked, with faster speed than a deep neural network. The experimental results have shown that the deep Petri net performs better than the deep neural network.
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Sewani, Harshini, e Rasha Kashef. "An Autoencoder-Based Deep Learning Classifier for Efficient Diagnosis of Autism". Children 7, n.º 10 (14 de outubro de 2020): 182. http://dx.doi.org/10.3390/children7100182.

Texto completo da fonte
Resumo:
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by a lack of social communication and social interaction. Autism is a mental disorder investigated by social and computational intelligence scientists utilizing advanced technologies, such as machine learning models, to enhance clinicians’ ability to provide robust diagnosis and prognosis of autism. However, with dynamic changes in autism behaviour patterns, the quality and accuracy of these models have become a great challenge for clinical practitioners. We applied deep neural network learning to a large brain image dataset obtained from ABIDE (autism brain imaging data exchange) to provide an efficient diagnosis of ASD, especially for children. Our deep learning model combines unsupervised neural network learning, an autoencoder, and supervised deep learning using convolutional neural networks. Our proposed algorithm outperforms individual-based classifiers as measured by various validation and assessment measures. Experimental results indicate that the autoencoder combined with the convolutional neural networks provides the best performance, achieving 84.05% accuracy and an Area under the Curve (AUC) value of 0.78.
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Ajay, P., B. Nagaraj, R. Arun Kumar, Ruihang Huang e P. Ananthi. "Unsupervised Hyperspectral Microscopic Image Segmentation Using Deep Embedded Clustering Algorithm". Scanning 2022 (6 de junho de 2022): 1–9. http://dx.doi.org/10.1155/2022/1200860.

Texto completo da fonte
Resumo:
Hyperspectral microscopy is used in biology and mineralogy, and unsupervised deep neural networks can denoise SRS images: for hyperspectral resolution enhancement and denoising, a single hyperspectral image is enough to train the unsupervised method. An intuitive chemical species map for a lithium ore sample is produced using k-means clustering. Many researchers are now interested in biosignals, but uncertainty limits the algorithms’ capacity to extract further information from these signals. Even though AI systems can solve puzzles, they remain limited. Deep learning is used when classical machine learning is inefficient, but supervised deep learning requires a large labeled dataset, and careful parameter selection is needed to prevent over- or underfitting. Unsupervised learning, performed here by a clustering algorithm, is used to overcome the challenges outlined above. To accomplish this, two processing steps were used: (1) nonlinear deep learning networks transform the data into a latent feature space (Z), and (2) the latent features are clustered, with the Kullback–Leibler divergence used to test the convergence of the objective function. This article explores novel research on hyperspectral microscopic images using deep learning and effective unsupervised learning.
Estilos ABNT, Harvard, Vancouver, APA, etc.
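The clustering step that deep embedded clustering alternates with feature learning is ordinary k-means applied to the latent features. A minimal Lloyd's-algorithm sketch on invented 2-D "latent" points (not the paper's pipeline, which also refines the encoder against a KL objective):

```python
def kmeans(points, k, iters=10):
    """Plain Lloyd's algorithm; deterministic init from the first k points."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        for i, p in enumerate(points):
            d2 = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            labels[i] = d2.index(min(d2))
        # Update step: move each centroid to the mean of its members.
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

# Two well-separated blobs standing in for encoder outputs.
latent = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
          (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centroids, labels = kmeans(latent, k=2)
```

In DEC-style methods (and the DTIC approach this page lists), the cluster assignments produced here become pseudo-labels that supervise the next round of feature-network training.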
32

Mamun, Abdullah Al, Em Poh Ping, Jakir Hossen, Anik Tahabilder e Busrat Jahan. "A Comprehensive Review on Lane Marking Detection Using Deep Neural Networks". Sensors 22, n.º 19 (10 de outubro de 2022): 7682. http://dx.doi.org/10.3390/s22197682.

Texto completo da fonte
Resumo:
Lane marking recognition is one of the most crucial features for automotive vehicles, as it is one of the most fundamental requirements of all the autonomy features of Advanced Driver Assistance Systems (ADAS). Researchers have recently made promising improvements in the application of Lane Marking Detection (LMD). This research article reviews lane marking detection, mainly using deep learning techniques. The paper initially discusses lane marking detection approaches using deep neural networks and conventional techniques. Lane marking detection frameworks can be categorized into single-stage and two-stage architectures. The paper elaborates on the network architectures and the loss functions used to improve performance in each category. The network architectures are divided into object detection, classification, and segmentation, and each is discussed, including their contributions and limitations. There is also a brief indication of how the networks can be simplified and optimized. Additionally, comparative performance results of five existing techniques, with a visualization of their final outputs, are elaborated. Finally, this review concludes by pointing to particular challenges in lane marking detection, such as generalization problems and computational complexity, and briefly indicates future directions for solving these issues, for instance, efficient neural networks, meta-learning, and unsupervised learning.
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Vélez, Paulina, Manuel Miranda, Carmen Serrano e Begoña Acha. "Does a Previous Segmentation Improve the Automatic Detection of Basal Cell Carcinoma Using Deep Neural Networks?" Applied Sciences 12, n.º 4 (17 de fevereiro de 2022): 2092. http://dx.doi.org/10.3390/app12042092.

Texto completo da fonte
Resumo:
Basal Cell Carcinoma (BCC) is the most frequent skin cancer, and its increasing incidence is producing a high overload in dermatology services. It is therefore desirable to help physicians detect it early. Thus, in this paper, we propose a tool for the detection of BCC to provide prioritization in teledermatology consultations. Firstly, we analyze whether a previous segmentation of the lesion improves the subsequent classification of the lesion. Secondly, we analyze three deep neural networks and ensemble architectures to distinguish between BCC and nevus, and between BCC and other skin lesions. The best segmentation results are obtained with a SegNet deep neural network. A 98% accuracy for distinguishing BCC from nevus and a 95% accuracy classifying BCC vs. all lesions have been obtained. The proposed algorithm outperforms the winner of the ISIC 2019 challenge in almost all the metrics. Finally, we can conclude that when deep neural networks are used to classify, a previous segmentation of the lesion does not improve the classification results. Likewise, the ensemble of different neural network configurations improves the classification performance compared with individual neural network classifiers. Regarding the segmentation step, supervised deep learning-based methods outperform unsupervised ones.
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Chu, Lei, Hao Pan e Wenping Wang. "Unsupervised Shape Completion via Deep Prior in the Neural Tangent Kernel Perspective". ACM Transactions on Graphics 40, n.º 3 (4 de julho de 2021): 1–17. http://dx.doi.org/10.1145/3459234.

Texto completo da fonte
Resumo:
We present a novel approach for completing and reconstructing 3D shapes from incomplete scanned data by using deep neural networks. Rather than being trained on supervised completion tasks and applied on a testing shape, the network is optimized from scratch on the single testing shape to fully adapt to the shape and complete the missing data using contextual guidance from the known regions. The ability to complete missing data by an untrained neural network is usually referred to as the deep prior. In this article, we interpret the deep prior from a neural tangent kernel (NTK) perspective and show that the completed shape patches by the trained CNN are naturally similar to existing patches, as they are proximate in the kernel feature space induced by NTK. The interpretation allows us to design more efficient network structures and learning mechanisms for the shape completion and reconstruction task. Being more aware of structural regularities than both traditional and other unsupervised learning-based reconstruction methods, our approach completes large missing regions with plausible shapes and complements supervised learning-based methods that use database priors by requiring no extra training dataset and showing flexible adaptation to a particular shape instance.
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Li, Yibing, Sitong Zhang, Xiang Li e Fang Ye. "Remote Sensing Image Classification with Few Labeled Data Using Semisupervised Learning". Wireless Communications and Mobile Computing 2023 (20 de abril de 2023): 1–11. http://dx.doi.org/10.1155/2023/7724264.

Texto completo da fonte
Resumo:
Synthetic aperture radar (SAR) as an imaging radar is capable of high-resolution remote sensing, independent of flight altitude, and independent of weather. Traditional SAR ship image classification tends to extract features manually. It relies too much on expert experience and is sensitive to the scale of SAR images. Recently, with the development of deep learning, deep neural networks such as convolutional neural networks are widely used to complete feature extraction and classification tasks, which improves algorithm accuracy and normalization capabilities to a large extent. However, deep learning requires a large number of labeled samples, and the vast bulk of SAR images are unlabeled. Therefore, the classification accuracy of deep neural networks is limited. To tackle the problem, we propose a semisupervised learning-based SAR image classification method considering that only few labeled images are available. The proposed method can train the classification model using both labeled and unlabeled samples. Moreover, we improve the unsupervised data augmentation (UDA) strategy by designing a symmetric function for unsupervised loss calculation. Experiments are carried out on the OpenSARShip dataset, and results show that the proposed method reaches a much higher accuracy than the original UDA.
Estilos ABNT, Harvard, Vancouver, APA, etc.
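The paper's improvement to unsupervised data augmentation (UDA) is a symmetric function for the unsupervised loss. As an illustrative guess at the shape of such a function (not the authors' exact definition), a symmetrized KL divergence between the model's predictions on a SAR chip and on its augmented view:

```python
import math

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def symmetric_consistency_loss(p, q):
    """Symmetric in its two arguments, unlike the one-sided KL(p||q)
    used as the consistency term in plain UDA."""
    return 0.5 * (kl(p, q) + kl(q, p))

pred_orig = [0.7, 0.2, 0.1]   # class probabilities on the original image
pred_aug  = [0.6, 0.3, 0.1]   # class probabilities on its augmented view

loss = symmetric_consistency_loss(pred_orig, pred_aug)
```

On unlabeled images this loss pushes the two predictions together regardless of which view is treated as the "target", which is the point of making the function symmetric.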
36

Zhu, Yong, Yongwei Tao e Zequn Li. "Short-circuit Current-based Parametrically Identification for Doubly Fed Induction Generator". Advances in Engineering Technology Research 9, n.º 1 (27 de dezembro de 2023): 133. http://dx.doi.org/10.56028/aetr.9.1.133.2024.

Texto completo da fonte
Resumo:
Recently, deep learning has provided a new opportunity to achieve high precision and real-time parameter identification of the doubly-fed induction generator (DFIG) in the event of short-circuit fault. However, deep learning algorithms based on data training are facing the challenge of relying on a large amount of training data and poor generalization performance. In order to improve these shortcomings, we embed the forward calculation model of three-phase short-circuit current (SCC) into the neural network, and propose an unsupervised neural network which can realize high-precision parameter identification. The network only needs to convert the short circuit current curve into a two-dimensional gray level map to complete the precise training of the network without real labels, which effectively improves the fitting ability of the network for inverse problems. The simulation results show that the proposed method can achieve high precision identification of DFIG parameters both within and outside the domain, and verify the high precision identification and generalization ability of unsupervised networks.
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Prashant Krishnan, V., S. Rajarajeswari, Venkat Krishnamohan, Vivek Chandra Sheel e R. Deepak. "Music Generation Using Deep Learning Techniques". Journal of Computational and Theoretical Nanoscience 17, n.º 9 (1 de julho de 2020): 3983–87. http://dx.doi.org/10.1166/jctn.2020.9003.

Texto completo da fonte
Resumo:
This paper primarily aims to compare two deep learning techniques in the task of learning musical styles and generating novel musical content. Long Short Term Memory (LSTM), a supervised learning algorithm is used, which is a variation of the Recurrent Neural Network (RNN), frequently used for sequential data. Another technique explored is Generative Adversarial Networks (GAN), an unsupervised approach which is used to learn a distribution of a particular style, and novelly combine components to create sequences. The representation of data from the MIDI files as chord and note embedding are essential to the performance of the models. This type of embedding in the network helps it to discover structural patterns in the samples. Through the study, it is seen how a supervised learning technique performs better than the unsupervised one. A study helped in obtaining a Mean Opinion Score (MOS), which was used as an indicator of the comparative quality and performance of the respective techniques.
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Zhu, Yancheng, Qiwei Wu e Jianzi Liu. "A Comparative Study of Contrastive Learning-Based Few-Shot Unsupervised Algorithms for Efficient Deep Learning". Journal of Physics: Conference Series 2560, n.º 1 (1 de agosto de 2023): 012048. http://dx.doi.org/10.1088/1742-6596/2560/1/012048.

Texto completo da fonte
Resumo:
The rapid development of computer vision algorithms, particularly target detection algorithms based on deep neural networks, has led to significant advancements in various fields. However, recent research on target detection algorithms has shifted towards small-sample scenarios with a long-tailed distribution of categories, where achieving high-accuracy target detection with limited data has become a crucial research topic. Deep neural networks often face several challenges when dealing with limited data, including convergence problems, overfitting, and poor generalization performance. In such scenarios, categories with little data are easily overwhelmed by the negative gradients generated by other categories during network training, which affects the final detection results. To address this issue, an unsupervised contrastive learning algorithm has been proposed that can achieve accurate results on a small number of reference datasets without requiring large amounts of data. One recently proposed contrastive learning algorithm is PixelCL, which incorporates image depth information during pretraining to improve results. The results show that it can obtain more accurate outcomes through contrastive learning with limited sample data. These findings highlight the potential of contrastive learning algorithms in addressing negative gradient issues and improving the performance of deep neural networks in target detection tasks with limited data.
Estilos ABNT, Harvard, Vancouver, APA, etc.
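At the core of the contrastive methods compared above is the InfoNCE-style objective: pull an anchor embedding toward its positive view and away from negatives. A minimal sketch with invented embeddings (not PixelCL itself, which operates on pixel-level features):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.5):
    """InfoNCE: negative log-softmax of the positive pair's similarity
    among all candidate pairs."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    denom = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[0]) / denom)

anchor = [1.0, 0.0]
negatives = [[0.0, 1.0], [-1.0, 0.0]]

loss_aligned = info_nce(anchor, [0.9, 0.1], negatives)     # views agree
loss_misaligned = info_nce(anchor, [0.1, 0.9], negatives)  # views drifted
```

Minimizing this loss is what lets the network learn useful representations from unlabeled data before the few labeled detection samples are used.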
39

Yang, Geunbo, Wongyu Lee, Youjung Seo, Choongseop Lee, Woojoon Seok, Jongkil Park, Donggyu Sim e Cheolsoo Park. "Unsupervised Spiking Neural Network with Dynamic Learning of Inhibitory Neurons". Sensors 23, n.º 16 (17 de agosto de 2023): 7232. http://dx.doi.org/10.3390/s23167232.

Texto completo da fonte
Resumo:
A spiking neural network (SNN) is a type of artificial neural network that operates based on discrete spikes to process timing information, similar to the manner in which the human brain processes real-world problems. In this paper, we propose a new spiking neural network (SNN) based on conventional, biologically plausible paradigms, such as the leaky integrate-and-fire model, spike timing-dependent plasticity, and the adaptive spiking threshold, by suggesting new biological models; that is, dynamic inhibition weight change, a synaptic wiring method, and Bayesian inference. The proposed network is designed for image recognition tasks, which are frequently used to evaluate the performance of conventional deep neural networks. To manifest the bio-realistic neural architecture, the learning is unsupervised, and the inhibition weight is dynamically changed; this, in turn, affects the synaptic wiring method based on Hebbian learning and the neuronal population. In the inference phase, Bayesian inference successfully classifies the input digits by counting the spikes from the responding neurons. The experimental results demonstrate that the proposed biological model ensures a performance improvement compared with other biologically plausible SNN models.
Estilos ABNT, Harvard, Vancouver, APA, etc.
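A hedged sketch of the leaky integrate-and-fire dynamics at the heart of such SNNs (a fixed threshold and Euler integration here; the paper's model additionally uses an adaptive threshold, STDP, and dynamic inhibition):

```python
def lif_spikes(inputs, tau=10.0, v_th=1.0, dt=1.0):
    """Euler-integrated leaky integrate-and-fire neuron.
    inputs: injected current per time step; returns a 0/1 spike train."""
    v, spikes = 0.0, []
    for current in inputs:
        v += dt * (-v / tau + current)   # leak toward rest, integrate input
        if v >= v_th:
            spikes.append(1)
            v = 0.0                      # reset membrane after a spike
        else:
            spikes.append(0)
    return spikes

driven = lif_spikes([0.2] * 50)   # steady drive: potential repeatedly crosses threshold
silent = lif_spikes([0.0] * 50)   # no input: the neuron never fires
```

With tau = 10 and current 0.2, the membrane settles toward 2.0, crossing the threshold of 1.0 roughly every seven steps; classification in such networks then reduces to counting spikes from the responding neurons.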
40

Soydaner, Derya. "A Comparison of Optimization Algorithms for Deep Learning". International Journal of Pattern Recognition and Artificial Intelligence 34, n.º 13 (30 de abril de 2020): 2052013. http://dx.doi.org/10.1142/s0218001420520138.

Texto completo da fonte
Resumo:
In recent years, we have witnessed the rise of deep learning. Deep neural networks have proved their success in many areas. However, the optimization of these networks has become more difficult as neural networks go deeper and datasets become bigger. Therefore, more advanced optimization algorithms have been proposed over the past years. In this study, widely used optimization algorithms for deep learning are examined in detail. To this end, these algorithms, called adaptive gradient methods, are implemented for both supervised and unsupervised tasks. The behavior of the algorithms during training and the results on four image datasets, namely MNIST, CIFAR-10, Kaggle Flowers, and Labeled Faces in the Wild, are compared by pointing out their differences against basic optimization algorithms.
Estilos ABNT, Harvard, Vancouver, APA, etc.
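Among the adaptive gradient methods such a comparison covers, Adam is the usual baseline. A self-contained scalar sketch minimizing a toy quadratic (illustrative only, not the paper's experimental setup):

```python
import math

def adam_minimize(grad, x0, lr=0.05, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=5000):
    """Adam: step sizes derived from bias-corrected first and second
    moment estimates of the gradient (single scalar parameter here)."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first moment (mean)
        v = beta2 * v + (1 - beta2) * g * g      # second moment (variance)
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_star = adam_minimize(lambda x: 2 * (x - 3), x0=0.0)
```

The same update applies per parameter in a deep network; the division by the second-moment estimate is what makes the effective step size adaptive.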
41

Polanski, Jaroslaw. "Unsupervised Learning in Drug Design from Self-Organization to Deep Chemistry". International Journal of Molecular Sciences 23, n.º 5 (3 de março de 2022): 2797. http://dx.doi.org/10.3390/ijms23052797.

Texto completo da fonte
Resumo:
The availability of computers has brought novel prospects in drug design. Neural networks (NN) were an early tool that cheminformatics tested for converting data into drugs. However, the initial interest faded for almost two decades. The recent success of Deep Learning (DL) has inspired a renaissance of neural networks for their potential application in deep chemistry. DL targets direct data analysis without any human intervention. Although back-propagation NN is the main algorithm in the DL that is currently being used, unsupervised learning can be even more efficient. We review self-organizing maps (SOM) in mapping molecular representations from the 1990s to the current deep chemistry. We discovered the enormous efficiency of SOM not only for features that could be expected by humans, but also for those that are not trivial to human chemists. We reviewed the DL projects in the current literature, especially unsupervised architectures. DL appears to be efficient in pattern recognition (Deep Face) or chess (Deep Blue). However, an efficient deep chemistry is still a matter for the future. This is because the availability of measured property data in chemistry is still limited.
Estilos ABNT, Harvard, Vancouver, APA, etc.
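The self-organizing map reviewed above has a compact update rule: each sample pulls its best-matching unit, and more weakly that unit's grid neighbors, toward itself. A toy 1-D map over 2-D data (invented data and parameters, only to illustrate the mechanism, not any of the reviewed chemistry applications):

```python
import math

def train_som(data, n_nodes=4, lr=0.5, sigma=1.0, epochs=50):
    """1-D SOM: node weights start on the diagonal of [0,1]^2; each
    sample updates the best-matching unit (BMU) and its neighbors with
    a Gaussian neighborhood function over grid distance."""
    weights = [[(j + 1) / (n_nodes + 1)] * 2 for j in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            d2 = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in weights]
            bmu = d2.index(min(d2))
            for j, w in enumerate(weights):
                h = math.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
                weights[j] = [wi + lr * h * (xi - wi) for xi, wi in zip(x, w)]
    return weights

# Two clusters standing in for two groups of molecular representations;
# opposite ends of the map specialize to opposite clusters.
data = [(0.0, 0.0), (1.0, 1.0)] * 10
weights = train_som(data)
```

The neighborhood term `h` is what gives SOMs their topology-preserving mapping: nodes adjacent on the grid end up adjacent in data space.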
42

Wani, M. Arif, e Saduf Afzal. "Optimization of deep network models through fine tuning". International Journal of Intelligent Computing and Cybernetics 11, n.º 3 (13 de agosto de 2018): 386–403. http://dx.doi.org/10.1108/ijicc-06-2017-0070.

Texto completo da fonte
Resumo:
Purpose: Many strategies have been put forward for training deep network models; however, stacking several layers of non-linearities typically results in poor propagation of gradients and activations. The purpose of this paper is to explore a two-step strategy in which an initial deep learning model is first obtained by unsupervised learning and then optimized by fine tuning. A number of fine tuning algorithms are explored in this work for optimizing deep learning models, including a new algorithm in which the Backpropagation with adaptive gain algorithm is integrated with the Dropout technique, and the authors evaluate its performance in the fine tuning of the pretrained deep network.
Design/methodology/approach: The parameters of deep neural networks are first learnt using greedy layer-wise unsupervised pretraining. The proposed technique is then used to perform supervised fine tuning of the deep neural network model. An extensive experimental study is performed to evaluate the proposed fine tuning technique on three benchmark data sets: USPS, Gisette and MNIST. The authors have tested the approach on varying-size data sets, which include randomly chosen training samples of size 20, 50, 70 and 100 percent of the original data set.
Findings: Through the extensive experimental study, it is concluded that the two-step strategy and the proposed fine tuning technique yield significantly promising results in the optimization of deep network models.
Originality/value: This paper proposes employing several algorithms for fine tuning of deep network models. A new approach that integrates the adaptive gain Backpropagation (BP) algorithm with the Dropout technique is proposed for fine tuning of deep networks. Evaluation and comparison of the various algorithms proposed for fine tuning on three benchmark data sets is presented in the paper.
Estilos ABNT, Harvard, Vancouver, APA, etc.
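The Dropout half of the fine-tuning combination above can be sketched as inverted dropout: randomly zero activations during training and rescale the survivors so the expected activation is unchanged. This is the standard technique, not the authors' integrated adaptive-gain algorithm; the example values are invented.

```python
import random

def dropout(activations, p, seed=None):
    """Inverted dropout: zero each activation with probability p and
    scale survivors by 1/(1-p) so the expected value is preserved."""
    if p == 0.0:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

hidden = [1.0] * 1000
dropped = dropout(hidden, p=0.5, seed=0)          # training-time pass
zeros = sum(1 for a in dropped if a == 0.0)       # roughly half zeroed
```

Because of the 1/(1-p) rescaling, no change is needed at inference time: the layer is simply used with p = 0.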
43

Liu, MengYang, MingJun Li e XiaoYang Zhang. "The Application of the Unsupervised Migration Method Based on Deep Learning Model in the Marketing Oriented Allocation of High Level Accounting Talents". Computational Intelligence and Neuroscience 2022 (6 de junho de 2022): 1–10. http://dx.doi.org/10.1155/2022/5653942.

Texto completo da fonte
Resumo:
Deep learning is a branch of machine learning that uses neural networks to mimic the behaviour of the human brain. Various types of models are used in deep learning technology. This article will look at two important models and especially concentrate on unsupervised learning methodology. The two important models are as follows: the supervised and unsupervised models. The main difference is the method of training that they undergo. Supervised models are provided with training on a particular dataset and its outcome. In the case of unsupervised models, only input data is given, and there is no set outcome from which they can learn. The predicting/forecasting column is not present in an unsupervised model, unlike in the supervised model. Supervised models use regression to predict continuous quantities and classification to predict discrete class labels; unsupervised models use clustering to group similar data points and association learning to find associations between items. Unsupervised migration is a combination of the unsupervised learning method and migration. In unsupervised learning, there is no need to supervise the models. Migration is an effective tool in processing and imaging data. Unsupervised learning allows the model to work independently to discover patterns and information that were previously undetected. It mainly works on unlabeled data. Unsupervised learning can achieve more complex processing tasks when compared to supervised learning. The unsupervised learning method is more unpredictable when compared with other types of learning methods. Some of the popular unsupervised learning algorithms include k-means clustering, hierarchical clustering, the Apriori algorithm, anomaly detection, association mining, neural networks, etc. In this research article, we implement this particular deep learning model in the marketing oriented asset allocation of high level accounting talents. When the proposed unsupervised migration algorithm was compared to the existing Fractional Hausdorff Grey Model, it was discovered that the proposed system achieved 99.12% accuracy for high-level accounting talent in market-oriented asset allocation.
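The abstract names k-means among the popular unsupervised algorithms. As a generic illustration of that algorithm only (the article's own migration model is not public), a minimal k-means can be sketched as:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated blobs; k-means should recover the split.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels, centroids = kmeans(X, k=2)
```

On well-separated data like this, the assignment step partitions the blobs cleanly after a few iterations.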
ABNT, Harvard, Vancouver, APA, etc. styles
45

Cheerla, Anika, and Olivier Gevaert. "Deep learning with multimodal representation for pancancer prognosis prediction". Bioinformatics 35, no. 14 (July 2019): i446–i454. http://dx.doi.org/10.1093/bioinformatics/btz342.

Full text source
Abstract:
Motivation: Estimating the future course of patients with cancer lesions is invaluable to physicians; however, current clinical methods fail to effectively use the vast amount of multimodal data that is available for cancer patients. To tackle this problem, we constructed a multimodal neural network-based model to predict the survival of patients for 20 different cancer types using clinical data, mRNA expression data, microRNA expression data and histopathology whole slide images (WSIs). We developed an unsupervised encoder to compress these four data modalities into a single feature vector for each patient, handling missing data through a resilient, multimodal dropout method. Encoding methods were tailored to each data type—using deep highway networks to extract features from clinical and genomic data, and convolutional neural networks to extract features from WSIs. Results: We used pancancer data to train these feature encodings and predict single cancer and pancancer overall survival, achieving a C-index of 0.78 overall. This work shows that it is possible to build a pancancer model for prognosis that also predicts prognosis in single cancer sites. Furthermore, our model handles multiple data modalities, efficiently analyzes WSIs and represents patient multimodal data flexibly into an unsupervised, informative representation. We thus present a powerful automated tool to accurately determine prognosis, a key step towards personalized treatment for cancer patients. Availability and implementation: https://github.com/gevaertlab/MultimodalPrognosis
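The multimodal dropout the abstract mentions (zeroing whole modalities during training so the model tolerates missing data) can be sketched generically; the rescaling rule below is an assumption for illustration, not the paper's exact recipe:

```python
import numpy as np

def multimodal_dropout(modalities, p_drop=0.25, rng=None):
    """Zero out entire modality vectors at random (never all of them),
    rescaling survivors so the expected total magnitude stays stable."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(len(modalities)) >= p_drop
    if not keep.any():                      # always retain at least one modality
        keep[rng.integers(len(modalities))] = True
    scale = len(modalities) / keep.sum()
    return [m * scale if k else np.zeros_like(m)
            for m, k in zip(modalities, keep)]

# Four hypothetical per-patient feature vectors: clinical, mRNA, miRNA, WSI.
feats = [np.ones(8), np.ones(8), np.ones(8), np.ones(8)]
dropped = multimodal_dropout(feats)
```

Training with such dropout forces the downstream encoder not to rely on any single modality, which is what makes missing data at test time survivable.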
ABNT, Harvard, Vancouver, APA, etc. styles
46

Zaveri, Zainab, Dhruv Gosain, and Arul Prakash M. "Optical Compute Engine Using Deep CNN". International Journal of Engineering & Technology 7, no. 2.24 (April 25, 2018): 541. http://dx.doi.org/10.14419/ijet.v7i2.24.12157.

Full text source
Abstract:
We present an optical compute engine with an implementation of deep CNNs. CNNs are designed in an organized and hierarchical manner, with their convolutional and subsampling layers alternating, so the intricacy of the data per layer escalates as we traverse the layered structure; this gives more efficient results when dealing with complex data sets and computations. CNNs are realised in a distinctive way and differ from other neural networks in how their convolutional and subsampling layers are organised. DCNNs give us very proficient results when it comes to image classification tasks. Recently, we have understood that generalization is more important than the neural network's depth for more optimised image classification. Our feature extractors are learned in an unsupervised way, hence the results get more precise after every backpropagation and error correction.
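The alternating convolution/subsampling structure described above can be illustrated with one NumPy stage (a toy sketch, not the paper's optical engine):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as CNN layers compute it)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def maxpool2d(x, size=2):
    """Non-overlapping max pooling: the 'subsampling' layer."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
grad = np.array([[-1.0, 1.0]])                        # horizontal-gradient kernel
feat = maxpool2d(np.maximum(conv2d(img, grad), 0.0))  # conv -> ReLU -> pool
```

Each conv/pool pair shrinks the spatial grid while increasing the abstraction of what each remaining value summarizes, which is the "intricacy escalates per layer" behaviour the abstract describes.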
ABNT, Harvard, Vancouver, APA, etc. styles
47

Li, Jinlong, Xiaochen Yuan, Jinfeng Li, Guoheng Huang, Ping Li, and Li Feng. "CD-SDN: Unsupervised Sensitivity Disparity Networks for Hyper-Spectral Image Change Detection". Remote Sensing 14, no. 19 (September 26, 2022): 4806. http://dx.doi.org/10.3390/rs14194806.

Full text source
Abstract:
Deep neural networks (DNNs) can be affected by the regression level of learning frameworks and by challenging changes caused by external factors, which greatly restricts their deep expressiveness. Inspired by fine-tuned DNNs with sensitivity disparity to the pixels of two states, in this paper we propose a novel change detection scheme served by sensitivity disparity networks (CD-SDN). The CD-SDN is proposed for detecting changes in bi-temporal hyper-spectral images captured by the AVIRIS and HYPERION sensors over time. In the CD-SDN, two deep learning frameworks, the unchanged sensitivity network (USNet) and the changed sensitivity network (CSNet), are utilized as the dominant part for the generation of the binary argument map (BAM) and the high assurance map (HAM). Then two approaches, arithmetic mean and argument learning, are employed to re-estimate the changes of the BAM. Finally, the detected results are merged with the HAM to obtain the final binary change maps (BCMs). Experiments are performed on three real-world hyperspectral image datasets, and the results indicate the good universality and adaptability of the proposed scheme, as well as its superiority over other existing state-of-the-art algorithms.
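The fusion step the abstract outlines (arithmetic-mean re-estimation of two detectors' scores, then merging with the high assurance map) can be sketched on toy data; the 0.5 threshold and the map values are assumptions for illustration:

```python
import numpy as np

# Soft change scores in [0, 1] from two hypothetical detectors.
map_a = np.array([[0.9, 0.2], [0.6, 0.1]])
map_b = np.array([[0.8, 0.4], [0.2, 0.0]])

bam = (map_a + map_b) / 2 > 0.5               # arithmetic-mean re-estimation
ham = np.array([[1, 0], [0, 0]], dtype=bool)  # pixels detected with high assurance
bcm = bam | ham                               # final binary change map
```

Averaging suppresses pixels only one detector fired on, while the union with the high-assurance map guarantees confidently detected changes survive.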
ABNT, Harvard, Vancouver, APA, etc. styles
48

Zhu, Chang-Hao, and Jie Zhang. "Developing Soft Sensors for Polymer Melt Index in an Industrial Polymerization Process Using Deep Belief Networks". International Journal of Automation and Computing 17, no. 1 (November 5, 2019): 44–54. http://dx.doi.org/10.1007/s11633-019-1203-x.

Full text source
Abstract:
This paper presents the development of soft sensors for polymer melt index in an industrial polymerization process using a deep belief network (DBN). The important quality variable melt index of polypropylene is hard to measure in industrial processes. The lack of online measurement instruments becomes a problem in polymer quality control. One effective solution is to use soft sensors to estimate the quality variables from process data. In recent years, deep learning has achieved many successful applications in image classification and speech recognition. The DBN, as one novel technique, has strong generalization capability to model complex dynamic processes due to its deep architecture, and it can meet the demand of modelling accuracy when applied to actual processes. Compared to conventional neural networks, the training of a DBN contains a supervised training phase and an unsupervised training phase. To mine the valuable information from process data, a DBN can be trained on process data without existing labels in the unsupervised training phase to improve the performance of estimation. Selection of the DBN structure is investigated in the paper. The modelling results achieved by DBN and feedforward neural networks are compared, and it is shown that the DBN models give very accurate estimations of the polymer melt index.
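The unsupervised phase of DBN training is built from restricted Boltzmann machines trained with contrastive divergence, whose hidden activations then feed a supervised regressor. A minimal single-RBM sketch (a generic illustration, not the paper's model) looks like this:

```python
import numpy as np

class RBM:
    """Minimal binary RBM trained with one-step contrastive divergence (CD-1),
    the unsupervised pre-training building block of a DBN."""
    def __init__(self, n_vis, n_hid, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.normal(0, 0.1, (n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible bias
        self.c = np.zeros(n_hid)   # hidden bias

    def _sig(self, x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden(self, v):
        return self._sig(v @ self.W + self.c)

    def cd1(self, v0, lr=0.1):
        h0 = self.hidden(v0)
        h_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self._sig(h_sample @ self.W.T + self.b)   # reconstruction
        h1 = self.hidden(v1)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (h0 - h1).mean(axis=0)

# Unsupervised phase: learn features from unlabeled process data...
X = (np.random.default_rng(1).random((200, 6)) > 0.5).astype(float)
rbm = RBM(6, 3)
for _ in range(20):
    rbm.cd1(X)
# ...then the hidden activations would feed a supervised melt-index regressor.
features = rbm.hidden(X)
```

The point of the unsupervised phase is exactly what the abstract states: labels are not needed to learn the feature layer, only for the final fine-tuning.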
ABNT, Harvard, Vancouver, APA, etc. styles
49

Hoernle, Nick, Rafael Michael Karampatsis, Vaishak Belle, and Kobi Gal. "MultiplexNet: Towards Fully Satisfied Logical Constraints in Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5700–5709. http://dx.doi.org/10.1609/aaai.v36i5.20512.

Full text source
Abstract:
We propose a novel way to incorporate expert knowledge into the training of deep neural networks. Many approaches encode domain constraints directly into the network architecture, requiring non-trivial or domain-specific engineering. In contrast, our approach, called MultiplexNet, represents domain knowledge as a quantifier-free logical formula in disjunctive normal form (DNF) which is easy to encode and to elicit from human experts. It introduces a latent Categorical variable that learns to choose which constraint term optimizes the error function of the network and it compiles the constraints directly into the output of existing learning algorithms. We demonstrate the efficacy of this approach empirically on several classical deep learning tasks, such as density estimation and classification in both supervised and unsupervised settings where prior knowledge about the domains was expressed as logical constraints. Our results show that the MultiplexNet approach learned to approximate unknown distributions well, often requiring fewer data samples than the alternative approaches. In some cases, MultiplexNet finds better solutions than the baselines; or solutions that could not be achieved with the alternative approaches. Our contribution is in encoding domain knowledge in a way that facilitates inference. We specifically focus on quantifier-free logical formulae that are specified over the output domain of a network. We show that this approach is both efficient and general; and critically, our approach guarantees 100% constraint satisfaction in a network's output.
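The core mechanism (a DNF formula over the output, with one term chosen per example) can be sketched on a scalar output. MultiplexNet learns the choice with a latent Categorical variable; the hard arg-min below is a simplification for illustration:

```python
import numpy as np

# Domain knowledge as a DNF over a scalar output: (0 <= y <= 1) OR (2 <= y <= 3).
terms = [(0.0, 1.0), (2.0, 3.0)]

def constrain(y_raw):
    """Project the raw network output into each DNF term and keep the
    cheapest projection, so the returned value always satisfies a term."""
    candidates = [float(np.clip(y_raw, lo, hi)) for lo, hi in terms]
    costs = [abs(c - y_raw) for c in candidates]
    return candidates[int(np.argmin(costs))]
```

Because every candidate is clipped into some disjunct, the output satisfies the formula by construction, mirroring the 100% constraint-satisfaction guarantee the abstract claims.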
ABNT, Harvard, Vancouver, APA, etc. styles
50

Li, Xuelong, Zhenghang Yuan, and Qi Wang. "Unsupervised Deep Noise Modeling for Hyperspectral Image Change Detection". Remote Sensing 11, no. 3 (January 28, 2019): 258. http://dx.doi.org/10.3390/rs11030258.

Full text source
Abstract:
Hyperspectral image (HSI) change detection plays an important role in remote sensing applications, and considerable research has focused on improving change detection performance. However, the high dimension of hyperspectral data makes it hard to extract discriminative features for hyperspectral processing tasks. Though deep convolutional neural networks (CNN) have a superior capability for high-level semantic feature learning, it is difficult to employ CNN for change detection tasks: as the ground truth map is usually reserved for the evaluation of change detection algorithms, it cannot be directly used for supervised learning. In order to better extract discriminative CNN features, a novel noise modeling-based unsupervised fully convolutional network (FCN) framework is presented for HSI change detection in this paper. Specifically, the proposed method utilizes the change detection maps of existing unsupervised change detection methods to train the deep CNN, and then removes the noise during the end-to-end training process. The main contributions of this paper are threefold: (1) a new end-to-end FCN-based deep network architecture for HSI change detection is presented with powerful learning features; (2) an unsupervised noise modeling method is introduced for the robust training of the proposed deep network; (3) experimental results on three datasets confirm the effectiveness of the proposed method.
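The first step the abstract describes, generating training maps from an existing unsupervised detector, can be sketched with a change-vector-analysis style pseudo-labeler (a generic stand-in, not the paper's noise model):

```python
import numpy as np

def pseudo_labels(img_t1, img_t2, q=0.9):
    """CVA-style pseudo-labels: per-pixel spectral difference magnitude,
    thresholded at a quantile. These noisy labels would then supervise the
    FCN, whose training is expected to average the label noise out."""
    mag = np.linalg.norm(img_t1 - img_t2, axis=-1)
    return (mag > np.quantile(mag, q)).astype(np.uint8)

# Two tiny bi-temporal images (H, W, bands) with one changed pixel.
t1 = np.zeros((4, 4, 3))
t2 = np.zeros((4, 4, 3))
t2[0, 0] = 1.0
labels = pseudo_labels(t1, t2)
```

The quantile `q` controls how aggressive the pseudo-labeling is; a high value keeps only the strongest spectral changes as positives.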
ABNT, Harvard, Vancouver, APA, etc. styles