Academic literature on the topic 'Unsupervised deep neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Unsupervised deep neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Unsupervised deep neural networks"

1

Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Guo, Wenqi, Weixiong Zhang, Zheng Zhang, Ping Tang, and Shichen Gao. "Deep Temporal Iterative Clustering for Satellite Image Time Series Land Cover Analysis." Remote Sensing 14, no. 15 (July 29, 2022): 3635. http://dx.doi.org/10.3390/rs14153635.

Full text
Abstract:
The extensive amount of Satellite Image Time Series (SITS) data brings new opportunities and challenges for land cover analysis. Many supervised machine learning methods have been applied in SITS, but the labeled SITS samples are time- and effort-consuming to acquire. It is necessary to analyze SITS data with an unsupervised learning method. In this paper, we propose a new unsupervised learning method named Deep Temporal Iterative Clustering (DTIC) to deal with SITS data. The proposed method jointly learns a neural network’s parameters and the resulting features’ cluster assignments, which uses a standard clustering algorithm, K-means, to iteratively cluster the features produced by the feature extraction network and then uses the subsequent assignments as supervision to update the network’s weights. We apply DTIC to the unsupervised training of neural networks on both SITS datasets. Experimental results demonstrate that DTIC outperforms the state-of-the-art K-means clustering algorithm, which proves that the proposed approach successfully provides a novel idea for unsupervised training of SITS data.
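The alternating procedure this abstract describes (cluster deep features with K-means, then reuse the assignments as pseudo-labels to update the network) can be illustrated with a short sketch. The code below is not the authors' DTIC implementation; the tiny encoder, random stand-in data, and hyperparameters are assumptions chosen only to make the loop runnable.

```python
# Hedged sketch of an iterative "cluster-then-classify" loop in the spirit of
# DTIC/DeepCluster: K-means assignments on learned features serve as pseudo-labels.
# The tiny encoder, random data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

n_samples, n_timesteps, n_bands, n_clusters = 512, 10, 4, 8
x = torch.randn(n_samples, n_timesteps * n_bands)   # stand-in for flattened SITS pixels

encoder = nn.Sequential(nn.Linear(n_timesteps * n_bands, 64), nn.ReLU(), nn.Linear(64, 32))
classifier = nn.Linear(32, n_clusters)
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    # 1) Cluster the current features to obtain pseudo-labels.
    with torch.no_grad():
        feats = encoder(x).numpy()
    pseudo = torch.as_tensor(KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats),
                             dtype=torch.long)

    # 2) Use the assignments as supervision to update the network.
    for _ in range(10):
        opt.zero_grad()
        logits = classifier(encoder(x))
        loss = loss_fn(logits, pseudo)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: pseudo-label loss {loss.item():.3f}")
```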
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Jianqiao, Zhaolu Zuo, Danchao Wu, Bing Li, Xiaoni Li, and Deyi Kong. "Bearing Defect Detection with Unsupervised Neural Networks." Shock and Vibration 2021 (August 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/9544809.

Full text
Abstract:
Bearings always suffer from surface defects, such as scratches, black spots, and pits. Those surface defects have great effects on the quality and service life of bearings. Therefore, the defect detection of the bearing has always been the focus of bearing quality control. Deep learning has been successfully applied to object detection due to its excellent performance. However, it is difficult to realize automatic detection of bearing surface defects based on data-driven deep learning due to the scarcity of bearing defect samples on the actual production line. A sample preprocessing algorithm based on the normalized sample symmetry of bearings is adopted to greatly increase the number of samples. Two different convolutional neural networks, supervised networks and unsupervised networks, are tested separately for bearing defect detection. The first experiment adopts the supervised networks, and ResNet neural networks are selected as the supervised networks in this experiment. The experiment result shows that the AUC of the model is 0.8567, which is too low for actual use. Also, the positive and negative samples have to be labelled manually. To improve the AUC of the model and the flexibility of sample labelling, a new unsupervised neural network based on autoencoder networks is proposed. Gradients of the unlabeled data are used as labels, and autoencoder networks are created with U-net to predict the output. In the second experiment, positive samples of the supervised experiment are used as the training set. The experiment with the unsupervised neural networks shows that the AUC of the model is 0.9721. In this experiment, the AUC is higher than in the first experiment, but the positive samples must be selected. To overcome this shortcoming, the dataset of the third experiment is the same as in the supervised experiment, where all the positive and negative samples are mixed together, which means that there is no need to label the samples. This experiment shows that the AUC of the model is 0.9623. Although the AUC is slightly lower than that of the second experiment, it is high enough for actual use. The experiment results demonstrate the feasibility and superiority of the proposed unsupervised networks.
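The general idea of unsupervised, reconstruction-based defect scoring evaluated with AUC can be sketched as follows. This is a simplification, not the paper's method: the paper trains a U-Net-style autoencoder with gradient-based targets, whereas the sketch uses a small dense autoencoder on synthetic vectors and scores anomalies by plain reconstruction error.

```python
# Hedged sketch of autoencoder-based defect detection: train on (mostly) normal
# samples, then flag inputs whose reconstruction error is high. The small dense
# autoencoder, synthetic data, and reconstruction targets are placeholders; the
# paper itself uses a U-Net-style network with gradient-based labels.
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(400, 64)).astype(np.float32)    # "good" bearings
defect = rng.normal(0.8, 1.5, size=(40, 64)).astype(np.float32)     # "defective" bearings

ae = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 64))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x_train = torch.from_numpy(normal)

for _ in range(200):                      # fit the autoencoder to normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(x_train), x_train)
    loss.backward()
    opt.step()

with torch.no_grad():
    x_test = torch.from_numpy(np.vstack([normal, defect]))
    errors = ((ae(x_test) - x_test) ** 2).mean(dim=1).numpy()       # anomaly score
labels = np.concatenate([np.zeros(len(normal)), np.ones(len(defect))])
print("AUC:", roc_auc_score(labels, errors))
```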
APA, Harvard, Vancouver, ISO, and other styles
4

Feng, Yu, and Hui Sun. "Basketball Footwork and Application Supported by Deep Learning Unsupervised Transfer Method." International Journal of Information Technology and Web Engineering 18, no. 1 (December 1, 2023): 1–17. http://dx.doi.org/10.4018/ijitwe.334365.

Full text
Abstract:
The combination of traditional basketball footwork mobile teaching and AI will become a hot spot in basketball footwork research. This article uses a deep learning (DL) unsupervised transfer method: convolutional neural networks extract source- and target-domain samples for transfer learning. Feature extraction is performed on the data, and the impending action of a basketball player is predicted. Meanwhile, the unsupervised human action transfer method is studied to provide new ideas for modeling basketball footwork action series data. Finally, the theoretical framework of DL unsupervised transfer learning is reviewed, and its principle is explored and applied in the teaching of basketball footwork. The results show that convolutional neural networks can predict players' movement trajectories and that unsupervised training using network data dramatically increases the variety of actions during training. The classification accuracy of the transfer learning method is high, and it can be applied to the different types of basketball footwork at the corresponding stages of play.
APA, Harvard, Vancouver, ISO, and other styles
5

Sun, Yanan, Gary G. Yen, and Zhang Yi. "Evolving Unsupervised Deep Neural Networks for Learning Meaningful Representations." IEEE Transactions on Evolutionary Computation 23, no. 1 (February 2019): 89–103. http://dx.doi.org/10.1109/tevc.2018.2808689.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shi, Yu, Cien Fan, Lian Zou, Caixia Sun, and Yifeng Liu. "Unsupervised Adversarial Defense through Tandem Deep Image Priors." Electronics 9, no. 11 (November 19, 2020): 1957. http://dx.doi.org/10.3390/electronics9111957.

Full text
Abstract:
Deep neural networks are vulnerable to adversarial examples, which are synthesized by adding imperceptible perturbations to the original image yet can fool the classifier into producing wrong predictions. This paper proposes an image restoration approach that provides a strong defense mechanism against adversarial attacks. We show that the unsupervised image restoration framework, deep image prior, can effectively eliminate the influence of adversarial perturbations. The proposed method uses multiple deep image prior networks, called tandem deep image priors, to recover the original image from an adversarial example. Tandem deep image priors contain two deep image prior networks. The first network captures the main information of images, and the second network recovers the original image based on the prior information provided by the first network. The proposed method reduces the number of iterations originally required by a deep image prior network and does not require adjusting the classifier or pre-training. It can be combined with other defensive methods. Our experiments show that the proposed method surprisingly achieves higher classification accuracy on ImageNet against a wide variety of adversarial attacks than previous state-of-the-art defense methods.
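The single deep-image-prior building block that the tandem approach is built from can be sketched in a few lines: a randomly initialized network is fit to reproduce one (possibly perturbed) image from fixed noise, and optimization is stopped early so fine-grained perturbations are not yet reproduced. The architecture, image, and step count below are assumptions; this is not the paper's two-network tandem setup.

```python
# Hedged, minimal deep-image-prior sketch: fit a randomly initialized ConvNet to a
# single target image from a fixed random code, stopping early so that the network
# reproduces image structure before high-frequency perturbations.
import torch
import torch.nn as nn

target = torch.rand(1, 3, 64, 64)          # stand-in for an adversarial image
z = torch.randn(1, 32, 64, 64)             # fixed random input code

net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(300):                     # early stopping acts as the regularizer
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(z), target)
    loss.backward()
    opt.step()

restored = net(z).detach()                  # image to hand to the downstream classifier
print("final reconstruction loss:", loss.item())
```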
APA, Harvard, Vancouver, ISO, and other styles
7

Thakur, Amey. "Generative Adversarial Networks." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2307–25. http://dx.doi.org/10.22214/ijraset.2021.37723.

Full text
Abstract:
Deep learning's breakthrough in the field of artificial intelligence has resulted in the creation of a slew of deep learning models. One of these is the Generative Adversarial Network, which has only recently emerged. The goal of GAN is to use unsupervised learning to analyse the distribution of data and create more accurate results. The GAN allows the learning of deep representations in the absence of substantial labelled training information. Computer vision, language and video processing, and image synthesis are just a few of the applications that might benefit from these representations. The purpose of this research is to get the reader conversant with the GAN framework as well as to provide the background information on Generative Adversarial Networks, including the structure of both the generator and discriminator, as well as the various GAN variants along with their respective architectures. Applications of GANs are also discussed with examples.
Keywords: Generative Adversarial Networks (GANs), Generator, Discriminator, Supervised and Unsupervised Learning, Discriminative and Generative Modelling, Backpropagation, Loss Functions, Machine Learning, Deep Learning, Neural Networks, Convolutional Neural Network (CNN), Deep Convolutional GAN (DCGAN), Conditional GAN (cGAN), Information Maximizing GAN (InfoGAN), Stacked GAN (StackGAN), Pix2Pix, Wasserstein GAN (WGAN), Progressive Growing GAN (ProGAN), BigGAN, StyleGAN, CycleGAN, Super-Resolution GAN (SRGAN), Image Synthesis, Image-to-Image Translation.
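The generator/discriminator game surveyed in this review can be illustrated with a minimal toy sketch: the generator learns to match a simple 1-D Gaussian while the discriminator learns to tell real from generated samples. Architectures, data, and hyperparameters are illustrative assumptions, not any particular GAN variant from the paper.

```python
# Hedged sketch of the basic adversarial game: D learns real vs. fake, G learns to
# fool D, here on a toy 1-D Gaussian instead of images.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # samples from the "true" distribution
    fake = G(torch.randn(64, 8))

    # Discriminator update: real -> 1, fake -> 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: fool the discriminator into predicting 1 for fakes.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())
```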
APA, Harvard, Vancouver, ISO, and other styles
8

Ferles, Christos, Yannis Papanikolaou, Stylianos P. Savaidis, and Stelios A. Mitilineos. "Deep Self-Organizing Map of Convolutional Layers for Clustering and Visualizing Image Data." Machine Learning and Knowledge Extraction 3, no. 4 (November 14, 2021): 879–99. http://dx.doi.org/10.3390/make3040044.

Full text
Abstract:
The self-organizing convolutional map (SOCOM) hybridizes convolutional neural networks, self-organizing maps, and gradient backpropagation optimization into a novel integrated unsupervised deep learning model. SOCOM structurally combines, architecturally stacks, and algorithmically fuses its deep/unsupervised learning components. The higher-level representations produced by its underlying convolutional deep architecture are embedded in its topologically ordered neural map output. The ensuing unsupervised clustering and visualization operations reflect the model’s degree of synergy between its building blocks and synopsize its range of applications. Clustering results are reported on the STL-10 benchmark dataset coupled with the devised neural map visualizations. The series of conducted experiments utilize a deep VGG-based SOCOM model.
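The self-organizing-map component that SOCOM builds on can be sketched with the classic winner-take-all update. In SOCOM the inputs are deep convolutional features and the whole model is trained jointly with backpropagation; in the sketch below, random vectors stand in for those features so that only the map update is shown.

```python
# Hedged sketch of a 2-D self-organizing map trained on feature vectors; random
# vectors stand in for CNN features, and the joint CNN+SOM training of SOCOM is
# deliberately omitted to keep the map update readable.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))           # stand-in for CNN feature vectors
grid_h, grid_w, dim = 8, 8, features.shape[1]
weights = rng.normal(size=(grid_h, grid_w, dim))
ys, xs = np.indices((grid_h, grid_w))

for epoch in range(10):
    lr = 0.5 * (1 - epoch / 10)                   # decaying learning rate
    sigma = 3.0 * (1 - epoch / 10) + 0.5          # decaying neighborhood radius
    for v in features:
        # Best-matching unit = node whose weight vector is closest to the input.
        dists = np.linalg.norm(weights - v, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # A Gaussian neighborhood pulls nearby nodes toward the input as well.
        grid_dist2 = (ys - bmu[0]) ** 2 + (xs - bmu[1]) ** 2
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))[:, :, None]
        weights += lr * h * (v - weights)

# Each feature vector can now be assigned to (clustered by) its best-matching unit.
assignments = [np.unravel_index(np.argmin(np.linalg.norm(weights - v, axis=2)), (grid_h, grid_w))
               for v in features[:5]]
print(assignments)
```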
APA, Harvard, Vancouver, ISO, and other styles
9

Zhuang, Chengxu, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L. K. Yamins. "Unsupervised neural network models of the ventral visual stream." Proceedings of the National Academy of Sciences 118, no. 3 (January 11, 2021): e2014196118. http://dx.doi.org/10.1073/pnas.2014196118.

Full text
Abstract:
Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today’s best supervised methods and that the mapping of these neural network models’ hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.
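The "deep unsupervised contrastive embedding methods" referenced here belong to a family whose core objective can be sketched with a SimCLR-style NT-Xent loss: two augmented views of the same input are pulled together in embedding space while other batch members act as negatives. The encoder, the jitter "augmentation," and the data below are toy assumptions, not the specific models evaluated in the paper.

```python
# Hedged sketch of a contrastive-embedding objective (SimCLR-style NT-Xent loss).
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Normalized-temperature cross-entropy over a batch of paired embeddings."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # 2N x D
    sim = z @ z.t() / temperature                              # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))                 # ignore self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])  # positive indices
    return F.cross_entropy(sim, targets)

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(128, 32)                                       # unlabeled inputs

for step in range(100):
    view1 = x + 0.1 * torch.randn_like(x)                      # toy "augmentations"
    view2 = x + 0.1 * torch.randn_like(x)
    opt.zero_grad()
    loss = nt_xent(encoder(view1), encoder(view2))
    loss.backward()
    opt.step()
print("contrastive loss:", loss.item())
```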
APA, Harvard, Vancouver, ISO, and other styles
10

Lin, Baihan. "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers." Entropy 24, no. 1 (December 28, 2021): 59. http://dx.doi.org/10.3390/e24010059.

Full text
Abstract:
Inspired by the adaptation phenomenon of neuronal firing, we propose the regularity normalization (RN) as an unsupervised attention mechanism (UAM) which computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, the regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced and non-stationary input distributions in image classification, classic control, procedurally-generated reinforcement learning, generative modeling, handwriting generation and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks by tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Unsupervised deep neural networks"

1

Donati, Lorenzo. "Domain Adaptation through Deep Neural Networks for Health Informatics." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14888/.

Full text
Abstract:
The PreventIT project is an EU Horizon 2020 project aimed at preventing early functional decline at younger old age. The analysis of causal links between risk factors and functional decline has been made possible by the cooperation of several research institutes' studies. However, since each research institute collects and delivers different kinds of data in different formats, so far the analysis has been assisted by expert geriatricians whose role is to detect the best candidates among hundreds of fields and offer a semantic interpretation of the values. This manual data harmonization approach is very common in both scientific and industrial environments. In this thesis project an alternative method for parsing heterogeneous data is proposed. Since all the datasets represent semantically related data, being all made from longitudinal studies on aging-related metrics, it is possible to train an artificial neural network to perform an automatic domain adaptation. To achieve this goal, a Stacked Denoising Autoencoder has been implemented and trained to extract a domain-invariant representation of the data. Then, from this high-level representation, multiple classifiers have been trained to validate the model and ultimately to predict the probability of functional decline of the patient. This innovative approach to the domain adaptation process can provide an easy and fast solution to many research fields that now rely on human interaction to analyze the semantic data model and perform cross-dataset analysis. Functional decline classifiers show a great improvement in their performance when trained on the domain-invariant features extracted by the Stacked Denoising Autoencoder. Furthermore, this project applies multiple deep neural network classifiers on top of the Stacked Denoising Autoencoder representation, achieving excellent results for the prediction of functional decline in a real case study that involves two different datasets.
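The greedy, layer-wise pretraining of a stacked denoising autoencoder described in this abstract can be sketched as follows. The layer sizes, masking-corruption level, and random stand-in "records" are assumptions for illustration only; this is not the thesis's PreventIT model or data.

```python
# Hedged sketch of greedy layer-wise training of a stacked denoising autoencoder:
# each layer is trained to reconstruct its clean input from a corrupted version,
# then its codes become the input to the next layer.
import torch
import torch.nn as nn

x = torch.randn(256, 40)                       # stand-in for harmonized tabular features
layer_sizes = [40, 32, 16]
encoders = []

inputs = x
for d_in, d_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    enc = nn.Linear(d_in, d_out)
    dec = nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):
        noisy = inputs * (torch.rand_like(inputs) > 0.2).float()   # masking corruption
        opt.zero_grad()
        recon = dec(torch.relu(enc(noisy)))
        loss = nn.functional.mse_loss(recon, inputs)               # reconstruct the clean input
        loss.backward()
        opt.step()
    encoders.append(enc)
    with torch.no_grad():
        inputs = torch.relu(enc(inputs))        # feed codes to the next layer

# The stacked encoder now yields a shared representation on which downstream
# classifiers (e.g. for functional decline) could be trained.
stacked = nn.Sequential(*[m for e in encoders for m in (e, nn.ReLU())])
print(stacked(x).shape)
```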
APA, Harvard, Vancouver, ISO, and other styles
2

Ahn, Euijoon. "Unsupervised Deep Feature Learning for Medical Image Analysis." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23002.

Full text
Abstract:
The availability of annotated image datasets and recent advances in supervised deep learning methods are enabling the end-to-end derivation of representative image features that can impact a variety of image analysis problems. These supervised methods use prior knowledge derived from labelled training data and approaches, for example, convolutional neural networks (CNNs) have produced impressive results in natural (photographic) image classification. CNNs learn image features in a hierarchical fashion. Each deeper layer of the network learns a representation of the image data that is higher level and semantically more meaningful. However, the accuracy and robustness of image features with supervised CNNs are dependent on the availability of large-scale labelled training data. In medical imaging, these large labelled datasets are scarce mainly due to the complexity of manual annotation and inter- and intra-observer variability in label assignment. The concept of ‘transfer learning’ – the adoption of image features from different domains, e.g., image features learned from natural photographic images – was introduced to address the lack of large amounts of labelled medical image data. These image features, however, are often generic and do not perform well in specific medical image analysis problems. An alternative approach was to optimise these features by retraining the generic features using a relatively small set of labelled medical images. This ‘fine-tuning’ approach, however, is not able to match the overall accuracy of learning image features directly from large collections of data that are specifically related to the problem at hand. An alternative approach is to use unsupervised feature learning algorithms to build features from unlabelled data, which then allows unannotated image archives to be used. Many unsupervised feature learning algorithms such as sparse coding (SC), auto-encoder (AE) and Restricted Boltzmann Machines (RBMs), however, have often been limited to learning low-level features such as lines and edges. In an attempt to address these limitations, in this thesis, we present several new unsupervised deep learning methods to learn semantic high-level features from unlabelled medical images to address the challenge of learning representative visual features in medical image analysis. We present two methods to derive non-linear and non-parametric models, which are crucial to unsupervised feature learning algorithms; one method embeds a kernel learning within CNNs while the other couples clustering with CNNs. We then further improved the quality of image features using domain adaptation methods (DAs) that learn representations that are invariant to domains with different data distributions. We present a deep unsupervised feature extractor to transform the feature maps from the pre-trained CNN on natural images to a set of non-redundant and relevant medical image features. Our feature extractor preserves meaningful generic features from the pre-trained domain and learns specific local features that are more representative of the medical image data. We conducted extensive experiments on 4 public datasets which have diverse visual characteristics of medical images including X-ray, dermoscopic and CT images. Our results show that our methods had better accuracy when compared to other conventional unsupervised methods and competitive accuracy to methods that used state-of-the-art supervised CNNs. 
Our findings suggest that our methods could scale to many different transfer learning or domain adaptation approaches where they have none or small sets of labelled data.
APA, Harvard, Vancouver, ISO, and other styles
3

Cherti, Mehdi. "Deep generative neural networks for novelty generation : a foundational framework, metrics and experiments." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS029/document.

Full text
Abstract:
Des avancées significatives sur les réseaux de neurones profonds ont récemment permis le développement de technologies importantes comme les voitures autonomes et les assistants personnels intelligents basés sur la commande vocale. La plupart des succès en apprentissage profond concernent la prédiction, alors que les percées initiales viennent des modèles génératifs. Actuellement, même s'il existe des outils puissants dans la littérature des modèles génératifs basés sur les réseaux profonds, ces techniques sont essentiellement utilisées pour la prédiction ou pour générer des objets connus (i.e., des images de haute qualité qui appartiennent à des classes connues) : un objet généré qui est à priori inconnu est considéré comme une erreur (Salimans et al., 2016) ou comme un objet fallacieux (Bengio et al., 2013b). En d'autres termes, quand la prédiction est considérée comme le seul objectif possible, la nouveauté est vue comme une erreur - que les chercheurs ont essayé d'éliminer au maximum. Cette thèse défends le point de vue que, plutôt que d'éliminer ces nouveautés, on devrait les étudier et étudier le potentiel génératif des réseaux neuronaux pour créer de la nouveauté utile - particulièrement sachant l'importance économique et sociétale de la création d'objets nouveaux dans les sociétés contemporaines. Cette thèse a pour objectif d'étudier la génération de la nouveauté et sa relation avec les modèles de connaissance produits par les réseaux neurones profonds génératifs. Notre première contribution est la démonstration de l'importance des représentations et leur impact sur le type de nouveautés qui peuvent être générées : une conséquence clé est qu'un agent créatif a besoin de re-représenter les objets connus et utiliser cette représentation pour générer des objets nouveaux. Ensuite, on démontre que les fonctions objectives traditionnelles utilisées dans la théorie de l'apprentissage statistique, comme le maximum de vraisemblance, ne sont pas nécessairement les plus adaptées pour étudier la génération de nouveauté. On propose plusieurs alternatives à un niveau conceptuel. Un deuxième résultat clé est la confirmation que les modèles actuels - qui utilisent les fonctions objectives traditionnelles - peuvent en effet générer des objets inconnus. Cela montre que même si les fonctions objectives comme le maximum de vraisemblance s'efforcent à éliminer la nouveauté, les implémentations en pratique échouent à le faire. A travers une série d'expérimentations, on étudie le comportement de ces modèles ainsi que les objets qu'ils génèrent. En particulier, on propose une nouvelle tâche et des métriques pour la sélection de bons modèles génératifs pour la génération de la nouveauté. Finalement, la thèse conclue avec une série d'expérimentations qui clarifie les caractéristiques des modèles qui génèrent de la nouveauté. Les expériences montrent que la sparsité, le niveaux du niveau de corruption et la restriction de la capacité des modèles tuent la nouveauté et que les modèles qui arrivent à reconnaître des objets nouveaux arrivent généralement aussi à générer de la nouveauté
In recent years, significant advances made in deep neural networks enabled the creation of groundbreaking technologies such as self-driving cars and voice-enabled personal assistants. Almost all successes of deep neural networks are about prediction, whereas the initial breakthroughs came from generative models. Today, although we have very powerful deep generative modeling techniques, these techniques are essentially being used for prediction or for generating known objects (i.e., good quality images of known classes): any generated object that is a priori unknown is considered as a failure mode (Salimans et al., 2016) or as spurious (Bengio et al., 2013b). In other words, when prediction seems to be the only possible objective, novelty is seen as an error that researchers have been trying hard to eliminate. This thesis defends the point of view that, instead of trying to eliminate these novelties, we should study them and the generative potential of deep nets to create useful novelty, especially given the economic and societal importance of creating new objects in contemporary societies. The thesis sets out to study novelty generation in relationship with data-driven knowledge models produced by deep generative neural networks. Our first key contribution is the clarification of the importance of representations and their impact on the kind of novelties that can be generated: a key consequence is that a creative agent might need to rerepresent known objects to access various kinds of novelty. We then demonstrate that traditional objective functions of statistical learning theory, such as maximum likelihood, are not necessarily the best theoretical framework for studying novelty generation. We propose several other alternatives at the conceptual level. A second key result is the confirmation that current models, with traditional objective functions, can indeed generate unknown objects. This also shows that even though objectives like maximum likelihood are designed to eliminate novelty, practical implementations do generate novelty. Through a series of experiments, we study the behavior of these models and the novelty they generate. In particular, we propose a new task setup and metrics for selecting good generative models. Finally, the thesis concludes with a series of experiments clarifying the characteristics of models that can exhibit novelty. Experiments show that sparsity, noise level, and restricting the capacity of the net eliminates novelty and that models that are better at recognizing novelty are also good at generating novelty
APA, Harvard, Vancouver, ISO, and other styles
4

Kilinc, Ismail Ozsel. "Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7415.

Full text
Abstract:
Machine learning has been immensely successful in supervised learning with outstanding examples in major industrial applications such as voice and image recognition. Following these developments, the most recent research has now begun to focus primarily on algorithms which can exploit very large sets of unlabeled examples to reduce the amount of manually labeled data required for existing models to perform well. In this dissertation, we propose graph-based latent embedding/annotation/representation learning techniques in neural networks tailored for semi-supervised and unsupervised learning problems. Specifically, we propose a novel regularization technique called Graph-based Activity Regularization (GAR) and a novel output layer modification called Auto-clustering Output Layer (ACOL) which can be used separately or collaboratively to develop scalable and efficient learning frameworks for semi-supervised and unsupervised settings. First, singularly using the GAR technique, we develop a framework providing an effective and scalable graph-based solution for semi-supervised settings in which there exists a large number of observations but a small subset with ground-truth labels. The proposed approach is natural for the classification framework on neural networks as it requires no additional task calculating the reconstruction error (as in autoencoder based methods) or implementing zero-sum game mechanism (as in adversarial training based methods). We demonstrate that GAR effectively and accurately propagates the available labels to unlabeled examples. Our results show comparable performance with state-of-the-art generative approaches for this setting using an easier-to-train framework. Second, we explore a different type of semi-supervised setting where a coarse level of labeling is available for all the observations but the model has to learn a fine, deeper level of latent annotations for each one. Problems in this setting are likely to be encountered in many domains such as text categorization, protein function prediction, image classification as well as in exploratory scientific studies such as medical and genomics research. We consider this setting as simultaneously performed supervised classification (per the available coarse labels) and unsupervised clustering (within each one of the coarse labels) and propose a novel framework combining GAR with ACOL, which enables the network to perform concurrent classification and clustering. We demonstrate how the coarse label supervision impacts performance and the classification task actually helps propagate useful clustering information between sub-classes. Comparative tests on the most popular image datasets rigorously demonstrate the effectiveness and competitiveness of the proposed approach. The third and final setup builds on the prior framework to unlock fully unsupervised learning where we propose to substitute real, yet unavailable, parent- class information with pseudo class labels. In this novel unsupervised clustering approach the network can exploit hidden information indirectly introduced through a pseudo classification objective. We train an ACOL network through this pseudo supervision together with unsupervised objective based on GAR and ultimately obtain a k-means friendly latent representation. Furthermore, we demonstrate how the chosen transformation type impacts performance and helps propagate the latent information that is useful in revealing unknown clusters. 
Our results show state-of-the-art performance for unsupervised clustering tasks on MNIST, SVHN and USPS datasets with the highest accuracies reported to date in the literature.
APA, Harvard, Vancouver, ISO, and other styles
5

McClintick, Kyle W. "Training Data Generation Framework For Machine-Learning Based Classifiers." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1276.

Full text
Abstract:
In this thesis, we propose a new framework for the generation of training data for machine learning techniques used for classification in communications applications. Machine learning-based signal classifiers do not generalize well when training data does not describe the underlying probability distribution of real signals. The simplest way to accomplish statistical similarity between training and testing data is to synthesize training data passed through a permutation of plausible forms of noise. To accomplish this, a framework is proposed that implements arbitrary channel conditions and baseband signals. A dataset generated using the framework is considered, and is shown to be appropriately sized by having $11\%$ lower entropy than state-of-the-art datasets. Furthermore, unsupervised domain adaptation can allow for powerful generalized training via deep feature transforms on unlabeled evaluation-time signals. A novel Deep Reconstruction-Classification Network (DRCN) application is introduced, which attempts to maintain near-peak signal classification accuracy despite dataset bias, or perturbations on testing data unforeseen in training. Together, feature transforms and diverse training data generated from the proposed framework, teaching a range of plausible noise, can train a deep neural net to classify signals well in many real-world scenarios despite unforeseen perturbations.
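The core idea of synthesizing training signals under a range of plausible channel conditions can be sketched briefly. The sketch below covers only additive white Gaussian noise at several SNRs over a QPSK stand-in signal; the modulation choice and SNR grid are assumptions and the thesis's full framework (and its DRCN component) is not reproduced here.

```python
# Hedged sketch: generate labeled baseband symbols and pass them through a set of
# plausible channel impairments (here only AWGN at several SNRs) so that a signal
# classifier sees statistics closer to real captures.
import numpy as np

rng = np.random.default_rng(0)

def qpsk_symbols(n):
    bits = rng.integers(0, 2, size=(n, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

def add_awgn(signal, snr_db):
    snr = 10 ** (snr_db / 10)
    noise_power = np.mean(np.abs(signal) ** 2) / snr
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(signal.shape)
                                        + 1j * rng.standard_normal(signal.shape))
    return signal + noise

dataset, labels = [], []
for snr_db in [0, 5, 10, 15, 20]:              # permutation of channel conditions
    for _ in range(100):
        dataset.append(add_awgn(qpsk_symbols(128), snr_db))
        labels.append("qpsk")
dataset = np.stack(dataset)                     # (500, 128) complex training examples
print(dataset.shape, len(labels))
```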
APA, Harvard, Vancouver, ISO, and other styles
6

Boschini, Matteo. "Unsupervised Learning of Scene Flow." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16226/.

Full text
Abstract:
As Computer Vision-powered autonomous systems are increasingly deployed to solve problems in the wild, the case is made for developing visual understanding methods that are robust and flexible. One of the most challenging tasks for this purpose is given by the extraction of scene flow, that is the dense three-dimensional vector field that associates each world point with its corresponding position in the next observed frame, hence describing its three-dimensional motion entirely. The recent addition of a limited amount of ground truth scene flow information to the popular KITTI dataset prompted a renewed interest in the study of techniques for scene flow inference, although the proposed solutions in literature mostly rely on computation-intensive techniques and are characterised by execution times that are not suited for real-time application. In the wake of the recent widespread adoption of Deep Learning techniques to Computer Vision tasks and in light of the convenience of Unsupervised Learning for scenarios in which ground truth collection is difficult and time-consuming, this thesis work proposes the first neural network architecture to be trained in end-to-end fashion for unsupervised scene flow regression from monocular visual data, called Pantaflow. The proposed solution is much faster than currently available state-of-the-art methods and therefore represents a step towards the achievement of real-time scene flow inference.
APA, Harvard, Vancouver, ISO, and other styles
7

Kalinicheva, Ekaterina. "Unsupervised satellite image time series analysis using deep learning techniques." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS335.

Full text
Abstract:
Cette thèse présente un ensemble d'algorithmes non-supervisés pour l'analyse générique de séries temporelles d'images satellites (STIS). Nos algorithmes exploitent des méthodes de machine learning et, notamment, les réseaux de neurones afin de détecter les différentes entités spatio-temporelles et leurs changements éventuels dans le temps. Nous visons à identifier trois types de comportement temporel : les zones sans changements, les changements saisonniers, les changements non triviaux (changements permanents comme les constructions, la rotation des cultures agricoles, etc).Par conséquent, nous proposons deux frameworks : pour la détection et le clustering des changements non-triviaux et pour le clustering des changements saisonniers et des zones sans changements. Le premier framework est composé de deux étapes : la détection de changements bi-temporels et leur interprétation dans le contexte multi-temporel avec une approche basée graphes. La détection de changements bi-temporels est faite pour chaque couple d’images consécutives et basée sur la transformation des features avec les autoencodeurs (AEs). A l’étape suivante, les changements à différentes dates qui appartiennent à la même zone géographique forment les graphes d’évolution qui sont par la suite clusterisés avec un modèle AE de réseaux de neurones récurrents. Le deuxième framework présente le clustering basé objets de STIS. Premièrement, la STIS est encodée en image unique avec un AE convolutif 3D multi-vue. Dans un deuxième temps, nous faisons la segmentation en deux étapes en utilisant à la fois l’image encodée et la STIS. Finalement, les segments obtenus sont clusterisés avec leurs descripteurs encodés
This thesis presents a set of unsupervised algorithms for satellite image time series (SITS) analysis. Our methods exploit machine learning algorithms and, in particular, neural networks to detect different spatio-temporal entities and their eventual changes in the time.In our thesis, we aim to identify three different types of temporal behavior: no change areas, seasonal changes (vegetation and other phenomena that have seasonal recurrence) and non-trivial changes (permanent changes such as constructions or demolishment, crop rotation, etc). Therefore, we propose two frameworks: one for detection and clustering of non-trivial changes and another for clustering of “stable” areas (seasonal changes and no change areas). The first framework is composed of two steps which are bi-temporal change detection and the interpretation of detected changes in a multi-temporal context with graph-based approaches. The bi-temporal change detection is performed for each pair of consecutive images of the SITS and is based on feature translation with autoencoders (AEs). At the next step, the changes from different timestamps that belong to the same geographic area form evolution change graphs. The graphs are then clustered using a recurrent neural networks AE model to identify different types of change behavior. For the second framework, we propose an approach for object-based SITS clustering. First, we encode SITS with a multi-view 3D convolutional AE in a single image. Second, we perform a two steps SITS segmentation using the encoded SITS and original images. Finally, the obtained segments are clustered exploiting their encoded descriptors
APA, Harvard, Vancouver, ISO, and other styles
8

Yuan, Xiao. "Graph neural networks for spatial gene expression analysis of the developing human heart." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-427330.

Full text
Abstract:
Single-cell RNA sequencing and in situ sequencing were combined in a recent study of the developing human heart to explore the transcriptional landscape at three developmental stages. However, the method used in the study to create the spatial cellular maps has some limitations. It relies on image segmentation of the nuclei and cell types defined in advance by single-cell sequencing. In this study, we applied a new unsupervised approach based on graph neural networks on the in situ sequencing data of the human heart to find spatial gene expression patterns and detect novel cell and sub-cell types. In this thesis, we first introduce some relevant background knowledge about the sequencing techniques that generate our data, machine learning in single-cell analysis, and deep learning on graphs. We have explored several graph neural network models and algorithms to learn embeddings for spatial gene expression. Dimensionality reduction and cluster analysis were performed on the embeddings for visualization and identification of biologically functional domains. Based on the cluster gene expression profiles, locations of the clusters in the heart sections, and comparison with cell types defined in the previous study, the results of our experiments demonstrate that graph neural networks can learn meaningful representations of spatial gene expression in the human heart. We hope further validations of our clustering results could give new insights into cell development and differentiation processes of the human heart.
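The pipeline described here (graph built from spatial locations, node embeddings from a graph neural network, then clustering) can be outlined with a minimal sketch. A real model would learn its weights end to end, for example with PyTorch Geometric; below, a random spatial graph, random expression counts, and a fixed (untrained) propagation layer stand in so the sketch stays self-contained.

```python
# Hedged sketch of the message-passing step underlying graph convolutional networks,
# followed by K-means clustering of the resulting node embeddings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_nodes, n_genes = 200, 50
features = rng.poisson(2.0, size=(n_nodes, n_genes)).astype(float)   # spot x gene counts

# Build a k-nearest-neighbour graph from random spatial coordinates.
coords = rng.uniform(size=(n_nodes, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
adj = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in np.argsort(d[i])[1:7]:            # 6 nearest neighbours
        adj[i, j] = adj[j, i] = 1.0

# Symmetrically normalized propagation: A_hat = D^-1/2 (A + I) D^-1/2.
a_hat = adj + np.eye(n_nodes)
d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
a_hat = d_inv_sqrt @ a_hat @ d_inv_sqrt

w = rng.normal(size=(n_genes, 16))             # fixed (untrained) projection weights
embeddings = np.maximum(a_hat @ (a_hat @ features @ w), 0)   # two propagation steps + ReLU

clusters = KMeans(n_clusters=8, n_init=10).fit_predict(embeddings)
print(np.bincount(clusters))                   # cluster sizes, i.e. candidate spatial domains
```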
APA, Harvard, Vancouver, ISO, and other styles
9

VENTURA, FRANCESCO. "Explaining black-box deep neural models' predictions, behaviors, and performances through the unsupervised mining of their inner knowledge." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2912972.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Yingzhen. "Approximate inference : new visions." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277549.

Full text
Abstract:
Nowadays machine learning (especially deep learning) techniques are being incorporated to many intelligent systems affecting the quality of human life. The ultimate purpose of these systems is to perform automated decision making, and in order to achieve this, predictive systems need to return estimates of their confidence. Powered by the rules of probability, Bayesian inference is the gold standard method to perform coherent reasoning under uncertainty. It is generally believed that intelligent systems following the Bayesian approach can better incorporate uncertainty information for reliable decision making, and be less vulnerable to attacks such as data poisoning. Critically, the success of Bayesian methods in practice, including the recent resurgence of Bayesian deep learning, relies on fast and accurate approximate Bayesian inference applied to probabilistic models. These approximate inference methods perform (approximate) Bayesian reasoning at a relatively low cost in terms of time and memory, thus allowing the principles of Bayesian modelling to be applied to many practical settings. However, more work needs to be done to scale approximate Bayesian inference methods to big systems such as deep neural networks and large-scale dataset such as ImageNet. In this thesis we develop new algorithms towards addressing the open challenges in approximate inference. In the first part of the thesis we develop two new approximate inference algorithms, by drawing inspiration from the well known expectation propagation and message passing algorithms. Both approaches provide a unifying view of existing variational methods from different algorithmic perspectives. We also demonstrate that they lead to better calibrated inference results for complex models such as neural network classifiers and deep generative models, and scale to large datasets containing hundreds of thousands of data-points. In the second theme of the thesis we propose a new research direction for approximate inference: developing algorithms for fitting posterior approximations of arbitrary form, by rethinking the fundamental principles of Bayesian computation and the necessity of algorithmic constraints in traditional inference schemes. We specify four algorithmic options for the development of such new generation approximate inference methods, with one of them further investigated and applied to Bayesian deep learning tasks.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Unsupervised deep neural networks"

1

E, Hinton Geoffrey, and Sejnowski Terrence J, eds. Unsupervised learning: Foundations of neural computation. Cambridge, Mass: MIT Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Baruque, Bruno. Fusion methods for unsupervised learning ensembles. Berlin: Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Moolayil, Jojo. Learn Keras for Deep Neural Networks. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [New York, N.Y.?]: [publisher not identified], 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Supervised and unsupervised pattern recognition: Feature extraction and computational intelligence. Boca Raton, Fla: CRC Press, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Unsupervised deep neural networks"

1

Song, Zeyang, Xi Wu, Mengwen Yuan, and Huajin Tang. "An Unsupervised Spiking Deep Neural Network for Object Recognition." In Advances in Neural Networks – ISNN 2019, 361–70. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22808-8_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Deshwal, Deepti, and Pardeep Sangwan. "A Comprehensive Study of Deep Neural Networks for Unsupervised Deep Learning." In Artificial Intelligence for Sustainable Development: Theory, Practice and Future Applications, 101–26. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-51920-9_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhou, Jianchao, Xiaoou Chen, and Deshun Yang. "Multimodel Music Emotion Recognition Using Unsupervised Deep Neural Networks." In Lecture Notes in Electrical Engineering, 27–39. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8707-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yan, Ruqiang, and Zhibin Zhao. "Unsupervised Deep Transfer Learning for Intelligent Fault Diagnosis." In Deep Neural Networks-Enabled Intelligent Fault Diagnosis of Mechanical Systems, 109–36. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003474463-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dreher, Kris K., Leonardo Ayala, Melanie Schellenberg, Marco Hübner, Jan-Hinrich Nölke, Tim J. Adler, Silvia Seidlitz, et al. "Unsupervised Domain Transfer with Conditional Invertible Neural Networks." In Lecture Notes in Computer Science, 770–80. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43907-0_73.

Full text
Abstract:
Synthetic medical image generation has evolved as a key technique for neural network training and validation. A core challenge, however, remains in the domain gap between simulations and real data. While deep learning-based domain transfer using Cycle Generative Adversarial Networks and similar architectures has led to substantial progress in the field, there are use cases in which state-of-the-art approaches still fail to generate training images that produce convincing results on relevant downstream tasks. Here, we address this issue with a domain transfer approach based on conditional invertible neural networks (cINNs). As a particular advantage, our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood training. To showcase our method’s generic applicability, we apply it to two spectral imaging modalities at different scales, namely hyperspectral imaging (pixel-level) and photoacoustic tomography (image-level). According to comprehensive experiments, our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks (binary and multi-class). cINN-based domain transfer could thus evolve as an important method for realistic synthetic data generation in the field of spectral imaging and beyond. The code is available at https://github.com/IMSY-DKFZ/UDT-cINN.
APA, Harvard, Vancouver, ISO, and other styles
6

Das, Debasmit, and C. S. George Lee. "Graph Matching and Pseudo-Label Guided Deep Unsupervised Domain Adaptation." In Artificial Neural Networks and Machine Learning – ICANN 2018, 342–52. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01424-7_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Slama, Dirk. "Artificial Intelligence 101." In The Digital Playbook, 11–17. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-88221-1_2.

Full text
Abstract:
This chapter provides an Artificial Intelligence 101, including a basic overview, a summary of Supervised, Unsupervised and Reinforcement Learning, as well as Deep Learning and Artificial Neural Networks (Fig. 2.1).
APA, Harvard, Vancouver, ISO, and other styles
8

Zamora-Martínez, Francisco, Javier Muñoz-Almaraz, and Juan Pardo. "Integration of Unsupervised and Supervised Criteria for Deep Neural Networks Training." In Artificial Neural Networks and Machine Learning – ICANN 2016, 55–62. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44781-0_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lin, Xianghong, and Pangao Du. "Spike-Train Level Unsupervised Learning Algorithm for Deep Spiking Belief Networks." In Artificial Neural Networks and Machine Learning – ICANN 2020, 634–45. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61616-8_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liang, Yu, Yi Yang, Furao Shen, Jinxi Zhao, and Tao Zhu. "An Incremental Deep Learning Network for On-line Unsupervised Feature Extraction." In Neural Information Processing, 383–92. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70096-0_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Unsupervised deep neural networks"

1

Cerisara, Christophe, Paul Caillon, and Guillaume Le Berre. "Unsupervised Post-Tuning of Deep Neural Networks." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9534198.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sato, Kazuki, Kenta Hama, Takashi Matsubara, and Kuniaki Uehara. "Predictable Uncertainty-Aware Unsupervised Deep Anomaly Segmentation." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Xie, Ying, Linh Le, and Jie Hao. "Unsupervised deep kernel for high dimensional data." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7965868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Braga, Pedro. "Backpropagating the Unsupervised Error of Self-Organizing Maps to Deep Neural Networks." In LatinX in AI at Neural Information Processing Systems Conference 2019. Journal of LatinX in AI Research, 2019. http://dx.doi.org/10.52591/lxai2019120818.

Full text
Abstract:
Previous research has shown the potential that Deep Neural Networks have in building representations that are useful not only for the task that the network was trained for but also for correlated tasks that take data from similar input distributions. For instance, recent works showed that representations built by a Convolutional Neural Network (CNN) are better than the state-of-art handcrafted features used for object classification and demonstrated that the representations learned by Google LeNet could be used for the task of Unsupervised Visual Object Recognition (UVOC), achieving about 75-90% of agreement with labels assigned by humans in an unseen dataset, when fed as input to a SOM-based clustering method. In this work, we propose an approach that combines SOM with Deep Learning in a synergic way to allow dealing with complex data structures, such as images and sound, by backpropagating the unsupervised error through layers of neurons.
APA, Harvard, Vancouver, ISO, and other styles
5

JUNGES, RAFAEL, ZAHRA RASTIN, LUCA LOMAZZI, MARCO GIGLIO, and FRANCESCO CADINI. "DAMAGE LOCALIZATION FRAMEWORKS BASED ON UNSUPERVISED DEEP LEARNING NEURAL NETWORKS." In Structural Health Monitoring 2023. Destech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/36889.

Full text
Abstract:
In recent years ultrasonic-guided waves (UGWs) have been successfully employed in structural health monitoring (SHM) for damage localization due to their high sensitivity to changes in the mechanical properties of the medium they travel through. Lamb waves (LW) are a particular type of UGW that can be generated by piezoelectric transducers placed on thin-walled structures, such as vehicles in general (terrestrial, naval, and aeronautical), and present characteristics that are favorable to SHM. Damage localization using LWs has been commonly accomplished through tomographic algorithms. However, these methods have unresolved issues such as artifacts generation in damage probability maps and a strong reliance on sensor network configuration for signal acquisition. As a solution, data-driven approaches based on supervised machine learning have been suggested. These methods have demonstrated good performance. However, for reliable results, they require large, labeled datasets, meaning that acquisitions must be performed before and after the structure is damaged. These datasets, especially data from the damaged state, are generally not available for real-life structures, given the cost and complexity to experimentally replicate certain damages. Unsupervised machine learning methods might be a solution to this problem, given that the neural network is trained using data acquired from the un-damaged structure only. To this date, no fully unsupervised damage localization frameworks have been proposed. Hence, in this work, two unsupervised data-driven methods are presented to process LWs to localize damage. Specifically, convolutional auto-associative neural networks (CAANNs) and generative adversarial networks (GANs). Both methods process diagnostic signals without requiring any prior feature extraction. After all signals are processed, a damage probability map is generated. The performance of both methods is tested using an experimental dataset of LW acquisitions using a set of piezoelectric transducers on a full-scale composite wing. Results showed that the proposed methods have good damage localization accuracy.
APA, Harvard, Vancouver, ISO, and other styles
6

Feng, Guanchao, J. Gerald Quirk, and Petar M. Djuric. "Supervised and Unsupervised Learning of Fetal Heart Rate Tracings with Deep Gaussian Processes." In 2018 14th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2018. http://dx.doi.org/10.1109/neurel.2018.8586992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yu, Chaohui, Jindong Wang, Yiqiang Chen, and Zijing Wu. "Accelerating Deep Unsupervised Domain Adaptation with Transfer Channel Pruning." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8851810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Dong, Miaomiao Cheng, Chen Min, and Liping Jing. "Unsupervised Deep Imputed Hashing for Partial Cross-modal Retrieval." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tian, Qiangxing, Jinxin Liu, Guanchu Wang, and Donglin Wang. "Unsupervised Discovery of Transitional Skills for Deep Reinforcement Learning." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Qian, Fanlin Meng, and Toby P. Breckon. "On Fine-Tuned Deep Features for Unsupervised Domain Adaptation." In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191262.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Unsupervised deep neural networks"

1

Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.

Full text
Abstract:
We present Any-Precision Deep Neural Networks (Any-Precision DNNs), which are trained with a new method that empowers learned DNNs to be flexible in any numerical precision during inference. The same model in runtime can be flexibly and directly set to different bit-width, by truncating the least significant bits, to support dynamic speed and accuracy trade-off. When all layers are set to low-bits, we show that the model achieved accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world applications, where in practice trade-offs between model accuracy and runtime efficiency are often sought. Previous literature presents solutions to train models at each individual fixed efficiency/accuracy trade-off point. But how to produce a model flexible in runtime precision is largely unexplored. When the demand of efficiency/accuracy trade-off varies from time to time or even dynamically changes in runtime, it is infeasible to re-train models accordingly, and the storage budget may forbid keeping multiple models. Our proposed framework achieves this flexibility without performance degradation. More importantly, we demonstrate that this achievement is agnostic to model architectures. We experimentally validated our method with different deep network backbones (AlexNet-small, Resnet-20, Resnet-50) on different datasets (SVHN, Cifar-10, ImageNet) and observed consistent results.
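The inference-time idea of setting a model to a lower bit-width by truncating least significant bits can be sketched on a single weight tensor. This shows post-hoc truncation only, under assumed fixed-point scaling; it is not the report's any-precision training procedure.

```python
# Hedged sketch: quantize a weight tensor to a fixed-point grid, then drop the least
# significant bits to emulate evaluating one stored model at several precisions.
import numpy as np

def truncate_to_bits(weights, max_bits=8, keep_bits=4):
    """Quantize to max_bits fixed-point levels, then zero out the low-order bits."""
    scale = np.max(np.abs(weights))
    levels = 2 ** (max_bits - 1) - 1
    q = np.round(weights / scale * levels).astype(np.int32)      # max_bits fixed point
    q = (q >> (max_bits - keep_bits)) << (max_bits - keep_bits)  # truncate low bits
    return q.astype(np.float32) / levels * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)
for bits in (8, 4, 2):
    w_q = truncate_to_bits(w, max_bits=8, keep_bits=bits)
    err = np.mean((w - w_q) ** 2)
    print(f"{bits}-bit weights, mean squared quantization error {err:.6f}")
```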
APA, Harvard, Vancouver, ISO, and other styles
2

Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, May 2023. http://dx.doi.org/10.3289/sw_2_2023.

Full text
Abstract:
The Automated and Integrated Seafloor Classification Workflow (AI-SCW) is a semi-automated underwater image processing pipeline that has been customized for use in classifying the seafloor into semantic habitat categories. The current implementation has been tested against a sequence of underwater images collected by the Ocean Floor Observation System (OFOS), in the Clarion-Clipperton Zone of the Pacific Ocean. Despite this, the workflow could also be applied to images acquired by other platforms such as an Autonomous Underwater Vehicle (AUV), or Remotely Operated Vehicle (ROV). The modules in AI-SCW have been implemented using the python programming language, specifically using libraries such as scikit-image for image processing, scikit-learn for machine learning and dimensionality reduction, keras for computer vision with deep learning, and matplotlib for generating visualizations. Therefore, AI-SCW modularized implementation allows users to accomplish a variety of underwater computer vision tasks, which include: detecting laser points from the underwater images for use in scale determination; performing contrast enhancement and color normalization to improve the visual quality of the images; semi-automated generation of annotations to be used downstream during supervised classification; training a convolutional neural network (Inception v3) using the generated annotations to semantically classify each image into one of pre-defined seafloor habitat categories; evaluating sampling strategies for generation of balanced training images to be used for fitting an unsupervised k-means classifier; and visualization of classification results in both feature space view and in map view geospatial co-ordinates. Thus, the workflow is useful for a quick but objective generation of image-based seafloor habitat maps to support monitoring of remote benthic ecosystems.
APA, Harvard, Vancouver, ISO, and other styles
3

Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), April 2023. http://dx.doi.org/10.2172/1984848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lin, Youzuo. Physics-guided Machine Learning: from Supervised Deep Networks to Unsupervised Lightweight Models. Office of Scientific and Technical Information (OSTI), August 2023. http://dx.doi.org/10.2172/1994110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chavez, Wesley. An Exploration of Linear Classifiers for Unsupervised Spiking Neural Networks with Event-Driven Data. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Armstrong, Derek Elswick, and Joseph Gabriel Gorka. Using Deep Neural Networks to Extract Fireball Parameters from Infrared Spectral Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1623398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Thulasidasan, Sunil, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah E. Michalak. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1525811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ellis, John, Attila Cangi, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-temperature Kohn-Sham Density Functional Theory\ with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1677521.

Full text
APA, Harvard, Vancouver, ISO, and other styles