Academic literature on the topic 'Deep Unsupervised Learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep Unsupervised Learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Deep Unsupervised Learning"

1

Zhao, Tingting, Zifeng Wang, Aria Masoomi, and Jennifer Dy. "Deep Bayesian Unsupervised Lifelong Learning." Neural Networks 149 (May 2022): 95–106. http://dx.doi.org/10.1016/j.neunet.2022.02.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fong, A. C. M., and G. Hong. "Boosted Supervised Intensional Learning Supported by Unsupervised Learning." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 98–102. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1020.

Full text
Abstract:
Traditionally, supervised machine learning (ML) algorithms rely heavily on large sets of annotated data. This is especially true for deep learning (DL) neural networks, which need huge annotated data sets for good performance. However, large volumes of annotated data are not always readily available. In addition, some of the best performing ML and DL algorithms lack explainability – it is often difficult even for domain experts to interpret the results. This is an important consideration especially in safety-critical applications, such as AI-assisted medical endeavors, in which a DL's failure mode is not well understood. This lack of explainability also increases the risk of malicious attacks by adversarial actors, because such attacks can become obscured in a decision-making process that lacks transparency. This paper describes an intensional learning approach which uses boosting to enhance prediction performance while minimizing reliance on the availability of annotated data. The intensional information is derived from an unsupervised learning preprocessing step involving clustering. Preliminary evaluation on the MNIST data set has shown encouraging results. Specifically, using the proposed approach, it is now possible to achieve a similar accuracy result to extensional learning alone while using only a small fraction of the original training data set.
APA, Harvard, Vancouver, ISO, and other styles
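The pipeline the abstract describes — an unsupervised clustering step whose output supplies 'intensional' features to a boosted classifier trained on only a small labeled fraction — can be sketched roughly as follows. The dataset (scikit-learn's small digits set standing in for MNIST), the cluster count, and AdaBoost as the boosting algorithm are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Unsupervised preprocessing: cluster ALL data (no labels needed) and use
# the cluster-distance representation as extra "intensional" features.
km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X)
X_aug = np.hstack([X, km.transform(X)])  # raw pixels + distances to centroids

# Supervised boosting on only a small labeled fraction (5% of the data).
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, train_size=0.05,
                                          stratify=y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"accuracy with 5% labels: {acc:.3f}")
```

The cluster-distance columns act as the unsupervised 'intensional' information; how much they help depends on the data and the cluster count.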
4

Huang, Jiabo, Qi Dong, Shaogang Gong, and Xiatian Zhu. "Unsupervised Deep Learning via Affinity Diffusion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11029–36. http://dx.doi.org/10.1609/aaai.v34i07.6757.

Full text
Abstract:
Convolutional neural networks (CNNs) have achieved unprecedented success in a variety of computer vision tasks. However, they usually rely on supervised model learning with the need for massive labelled training data, limiting dramatically their usability and deployability in real-world scenarios without any labelling budget. In this work, we introduce a general-purpose unsupervised deep learning approach to deriving discriminative feature representations. It is based on self-discovering semantically consistent groups of unlabelled training samples with the same class concepts through a progressive affinity diffusion process. Extensive experiments on object image classification and clustering show the performance superiority of the proposed method over the state-of-the-art unsupervised learning models using six common image recognition benchmarks including MNIST, SVHN, STL10, CIFAR10, CIFAR100 and ImageNet.
APA, Harvard, Vancouver, ISO, and other styles
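The 'progressive affinity diffusion' the abstract refers to can be illustrated with a toy affinity matrix: multiplying the row-normalised affinity matrix by itself propagates similarity along paths, so samples that are only transitively similar acquire a direct affinity while unrelated groups remain disconnected. This is a deliberately minimal caricature, not the paper's actual algorithm:

```python
import numpy as np

# Toy affinity matrix over 5 samples: 0-1 and 1-2 are similar (a chain),
# 3-4 form a separate pair. There is no direct 0-2 edge.
A = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
], dtype=float)

# Row-normalise so each diffusion step is a random-walk transition.
P = A / A.sum(axis=1, keepdims=True)

# One diffusion step: two-hop affinities appear (0 reaches 2 via 1).
P2 = P @ P
print(P2[0, 2] > 0)   # True: transitive affinity discovered
print(P2[0, 3] > 0)   # False: separate groups stay separate
```

Repeating the step (and re-thresholding) progressively merges samples of the same underlying class into one group, which is the grouping signal the paper trains on.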
5

Sanakoyeu, Artsiom, Miguel A. Bautista, and Björn Ommer. "Deep Unsupervised Learning of Visual Similarities." Pattern Recognition 78 (June 2018): 331–43. http://dx.doi.org/10.1016/j.patcog.2018.01.036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yousefi-Azar, Mahmood, and Len Hamey. "Text Summarization Using Unsupervised Deep Learning." Expert Systems with Applications 68 (February 2017): 93–105. http://dx.doi.org/10.1016/j.eswa.2016.10.017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Dong, Chengjian Sun, Chenyang Yang, and Lajos Hanzo. "Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning." IEEE Network 34, no. 4 (July 2020): 270–77. http://dx.doi.org/10.1109/mnet.001.1900517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Xuejun, Jiyang Gai, Zhili Ma, Jinxiong Zhao, Hongzhong Ma, Fucun He, and Tao Ju. "Exploring Unsupervised Learning with Clustering and Deep Autoencoder to Detect DDoS Attack." 電腦學刊 33, no. 4 (August 2022): 29–44. http://dx.doi.org/10.53106/199115992022083304003.

Full text
Abstract:
With the proliferation of services available on the Internet, network attacks have become one of the serious issues. The distributed denial of service (DDoS) attack is such a devastating attack, which poses an enormous threat to network communication and applications and easily disrupts services. To defend against DDoS attacks effectively, this paper proposes a novel DDoS attack detection method that trains detection models in an unsupervised learning manner using preprocessed and unlabeled normal network traffic data, which can not only avoid the impact of unbalanced training data on the detection model performance but also detect unknown attacks. Specifically, the proposed method first uses the Balanced Iterative Reducing and Clustering Using Hierarchies (BIRCH) algorithm to pre-cluster the normal network traffic data, and then explores an autoencoder (AE) to build the detection model in an unsupervised manner based on the cluster subsets. In order to verify the performance of our method, we perform experiments on the benchmark network intrusion detection datasets KDDCUP99 and UNSWNB15. The results show that, compared with state-of-the-art DDoS detection models that used supervised and unsupervised learning, our proposed method achieves better performance in terms of detection accuracy rate and false positive rate (FPR).
APA, Harvard, Vancouver, ISO, and other styles
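The two-stage detector this abstract outlines — BIRCH pre-clustering of unlabeled normal traffic, then one autoencoder per cluster subset, with attacks flagged by reconstruction error — can be sketched on synthetic 'flow features'. The feature distributions, cluster count, autoencoder size, and 99th-percentile threshold are all assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-ins for preprocessed flow features: "normal" traffic is low-rate,
# the simulated DDoS flood is high-rate and bursty (synthetic assumption).
normal = rng.normal(loc=[1.0, 0.5, 0.2], scale=0.1, size=(600, 3))
attack = rng.normal(loc=[5.0, 3.0, 2.5], scale=0.3, size=(60, 3))

# Step 1 (BIRCH): pre-cluster the unlabeled normal traffic into subsets.
birch = Birch(n_clusters=3).fit(normal)
subset = birch.predict(normal)

# Step 2 (autoencoder): one small reconstruction model per cluster subset,
# trained to map normal samples back onto themselves.
models = {c: MLPRegressor(hidden_layer_sizes=(2,), max_iter=2000,
                          random_state=0).fit(normal[subset == c],
                                              normal[subset == c])
          for c in np.unique(subset)}

def score(x):
    # Anomaly score = best (lowest) reconstruction error over the models.
    errs = [np.mean((m.predict(x) - x) ** 2, axis=1) for m in models.values()]
    return np.min(errs, axis=0)

thr = np.percentile(score(normal), 99)   # threshold set from normal data only
print((score(attack) > thr).mean())       # fraction of attacks detected
```

Because the threshold is fitted on normal traffic alone, the detector needs no attack labels and can, in principle, flag previously unseen attack types.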
9

Kim, Seonghyeon, Sunjin Jung, Kwanggyoon Seo, Roger Blanco i Ribera, and Junyong Noh. "Deep Learning‐Based Unsupervised Human Facial Retargeting." Computer Graphics Forum 40, no. 7 (October 2021): 45–55. http://dx.doi.org/10.1111/cgf.14400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Changsheng, Rongqing Li, Ye Yuan, Guoren Wang, and Dong Xu. "Deep Unsupervised Active Learning via Matrix Sketching." IEEE Transactions on Image Processing 30 (2021): 9280–93. http://dx.doi.org/10.1109/tip.2021.3124317.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Deep Unsupervised Learning"

1

Drexler, Jennifer Fox. "Deep unsupervised learning from speech." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105696.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 87-92).
Automatic speech recognition (ASR) systems have become hugely successful in recent years - we have become accustomed to speech interfaces across all kinds of devices. However, despite the huge impact ASR has had on the way we interact with technology, it is out of reach for a significant portion of the world's population. This is because these systems rely on a variety of manually-generated resources - like transcripts and pronunciation dictionaries - that can be both expensive and difficult to acquire. In this thesis, we explore techniques for learning about speech directly from speech, with no manually generated transcriptions. Such techniques have the potential to revolutionize speech technologies for the vast majority of the world's population. The cognitive science and computer science communities have both been investing increasing time and resources into exploring this problem. However, a full unsupervised speech recognition system is a hugely complicated undertaking and is still a long ways away. As in previous work, we focus on the lower-level tasks which will underlie an eventual unsupervised speech recognizer. We specifically focus on two tasks: developing linguistically meaningful representations of speech and segmenting speech into phonetic units. This thesis approaches these tasks from a new direction: deep learning. While modern deep learning methods have their roots in ideas from the 1960s and even earlier, deep learning techniques have recently seen a resurgence, thanks to huge increases in computational power and new efficient learning algorithms. Deep learning algorithms have been instrumental in the recent progress of traditional supervised speech recognition; here, we extend that work to unsupervised learning from speech.
APA, Harvard, Vancouver, ISO, and other styles
2

Ahn, Euijoon. "Unsupervised Deep Feature Learning for Medical Image Analysis." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23002.

Full text
Abstract:
The availability of annotated image datasets and recent advances in supervised deep learning methods are enabling the end-to-end derivation of representative image features that can impact a variety of image analysis problems. These supervised methods use prior knowledge derived from labelled training data and approaches, for example, convolutional neural networks (CNNs) have produced impressive results in natural (photographic) image classification. CNNs learn image features in a hierarchical fashion. Each deeper layer of the network learns a representation of the image data that is higher level and semantically more meaningful. However, the accuracy and robustness of image features with supervised CNNs are dependent on the availability of large-scale labelled training data. In medical imaging, these large labelled datasets are scarce mainly due to the complexity of manual annotation and inter- and intra-observer variability in label assignment. The concept of ‘transfer learning’ – the adoption of image features from different domains, e.g., image features learned from natural photographic images – was introduced to address the lack of large amounts of labelled medical image data. These image features, however, are often generic and do not perform well in specific medical image analysis problems. An alternative approach was to optimise these features by retraining the generic features using a relatively small set of labelled medical images. This ‘fine-tuning’ approach, however, is not able to match the overall accuracy of learning image features directly from large collections of data that are specifically related to the problem at hand. An alternative approach is to use unsupervised feature learning algorithms to build features from unlabelled data, which then allows unannotated image archives to be used. 
Many unsupervised feature learning algorithms such as sparse coding (SC), auto-encoder (AE) and Restricted Boltzmann Machines (RBMs), however, have often been limited to learning low-level features such as lines and edges. In an attempt to address these limitations, in this thesis, we present several new unsupervised deep learning methods to learn semantic high-level features from unlabelled medical images to address the challenge of learning representative visual features in medical image analysis. We present two methods to derive non-linear and non-parametric models, which are crucial to unsupervised feature learning algorithms; one method embeds a kernel learning within CNNs while the other couples clustering with CNNs. We then further improved the quality of image features using domain adaptation methods (DAs) that learn representations that are invariant to domains with different data distributions. We present a deep unsupervised feature extractor to transform the feature maps from the pre-trained CNN on natural images to a set of non-redundant and relevant medical image features. Our feature extractor preserves meaningful generic features from the pre-trained domain and learns specific local features that are more representative of the medical image data. We conducted extensive experiments on 4 public datasets which have diverse visual characteristics of medical images including X-ray, dermoscopic and CT images. Our results show that our methods had better accuracy when compared to other conventional unsupervised methods and competitive accuracy to methods that used state-of-the-art supervised CNNs. Our findings suggest that our methods could scale to many different transfer learning or domain adaptation approaches where they have none or small sets of labelled data.
APA, Harvard, Vancouver, ISO, and other styles
3

Caron, Mathilde. "Unsupervised Representation Learning with Clustering in Deep Convolutional Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227926.

Full text
Abstract:
This master thesis tackles the problem of unsupervised learning of visual representations with deep Convolutional Neural Networks (CNN). Closing the gap between unsupervised and supervised representation learning is one of the main current challenges in image recognition. We propose a novel and simple way of training CNNs on fully unlabeled datasets. Our method jointly optimizes a grouping of the representations and trains a CNN using the groups as supervision. We evaluate the models trained with our method on standard transfer learning experiments from the literature. We find that our method outperforms all self-supervised and unsupervised state-of-the-art approaches. More importantly, our method outperforms those methods even when the unsupervised training set is not ImageNet but an arbitrary subset of images from Flickr.
APA, Harvard, Vancouver, ISO, and other styles
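The alternating scheme this abstract describes — cluster the current representations, then train the network with the cluster assignments as supervision — can be caricatured with scikit-learn, using an MLP hidden layer in place of a deep CNN. The dataset, network size, and number of alternations are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, _ = load_digits(return_X_y=True)  # true labels are loaded but never used
X = X / 16.0

def hidden(mlp, X):
    # Activations of the first hidden layer: relu(X W + b).
    return np.maximum(0.0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# Initial pseudo-labels: k-means directly on the raw pixels.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

for _ in range(3):
    # "Supervised" step: fit the network to the current pseudo-labels.
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200,
                        random_state=0).fit(X, labels)
    # Representation step: re-cluster the learned hidden features.
    feats = hidden(mlp, X)
    labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(feats)
```

Each round, the groups supervise the network and the network's features redefine the groups; the thesis's method applies the same loop to a convolutional network at ImageNet scale.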
4

Manjunatha, Bharadwaj Sandhya. "Land Cover Quantification using Autoencoder based Unsupervised Deep Learning." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99861.

Full text
Abstract:
This work aims to develop a deep learning model for land cover quantification through hyperspectral unmixing using an unsupervised autoencoder. Land cover identification and classification is instrumental in urban planning, environmental monitoring and land management. With the technological advancements in remote sensing, hyperspectral imagery which captures high resolution images of the earth's surface across hundreds of wavelength bands, is becoming increasingly popular. The high spectral information in these images can be analyzed to identify the various target materials present in the image scene based on their unique reflectance patterns. An autoencoder is a deep learning model that can perform spectral unmixing by decomposing the complex image spectra into its constituent materials and estimating their abundance compositions. The advantage of using this technique for land cover quantification is that it is completely unsupervised and eliminates the need for labelled data which generally requires years of field survey and formulation of detailed maps. We evaluate the performance of the autoencoder on various synthetic and real hyperspectral images consisting of different land covers using similarity metrics and abundance maps. The scalability of the technique with respect to landscapes is assessed by evaluating its performance on hyperspectral images spanning across 100m x 100m, 200m x 200m, 1000m x 1000m, 4000m x 4000m and 5000m x 5000m regions. Finally, we analyze the performance of this technique by comparing it to several supervised learning methods like Support Vector Machine (SVM), Random Forest (RF) and multilayer perceptron using F1-score, Precision and Recall metrics and other unsupervised techniques like K-Means, N-Findr, and VCA using cosine similarity, mean square error and estimated abundances. 
The land cover classification obtained using this technique is compared to the existing United States National Land Cover Database (NLCD) classification standard.
Master of Science
This work aims to develop an automated deep learning model for identifying and estimating the composition of the different land covers in a region using hyperspectral remote sensing imagery. With the technological advancements in remote sensing, hyperspectral imagery which captures high resolution images of the earth's surface across hundreds of wavelength bands, is becoming increasingly popular. As every surface has a unique reflectance pattern, the high spectral information contained in these images can be analyzed to identify the various target materials present in the image scene. An autoencoder is a deep learning model that can perform spectral unmixing by decomposing the complex image spectra into its constituent materials and estimate their percent compositions. The advantage of this method in land cover quantification is that it is an unsupervised technique which does not require labelled data which generally requires years of field survey and formulation of detailed maps. The performance of this technique is evaluated on various synthetic and real hyperspectral datasets consisting of different land covers. We assess the scalability of the model by evaluating its performance on images of different sizes spanning over a few hundred square meters to thousands of square meters. Finally, we compare the performance of the autoencoder based approach with other supervised and unsupervised deep learning techniques and with the current land cover classification standard.
APA, Harvard, Vancouver, ISO, and other styles
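The unmixing autoencoder this abstract describes can be sketched in NumPy: the encoder maps a pixel spectrum to abundances constrained to the simplex via a softmax, and the decoder's weight matrix plays the role of the endmember spectra. The synthetic mixing model, layer sizes, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bands, n_end = 400, 20, 3

# Synthetic linear mixing: ground-truth endmember spectra and abundances.
E_true = rng.uniform(0.1, 1.0, size=(n_end, n_bands))
A_true = rng.dirichlet(np.ones(n_end), size=n_pix)   # rows sum to 1
X = A_true @ E_true + rng.normal(0, 0.01, (n_pix, n_bands))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Autoencoder: encoder = linear map + softmax (abundances on the simplex),
# decoder = linear layer whose weights act as the estimated endmember spectra.
W = rng.normal(0, 0.1, (n_bands, n_end))
E = rng.uniform(0.1, 1.0, (n_end, n_bands))
lr = 2.0
losses = []
for _ in range(500):
    A = softmax(X @ W)          # estimated abundances
    Xh = A @ E                  # reconstructed spectra
    R = Xh - X
    losses.append((R ** 2).mean())
    dXh = 2 * R / R.size
    dE = A.T @ dXh
    dA = dXh @ E.T
    dZ = A * (dA - (dA * A).sum(axis=1, keepdims=True))  # softmax backprop
    dW = X.T @ dZ
    W -= lr * dW
    E = np.clip(E - lr * dE, 0.0, None)  # keep spectra non-negative

print(f"loss {losses[0]:.4f} -> {losses[-1]:.4f}")
```

No abundance labels are used anywhere: the decoder weights converge toward the endmember spectra and the bottleneck toward the abundance maps, which is the unsupervised quantification the thesis evaluates.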
5

Martin, Damien W. "Fault detection in manufacturing equipment using unsupervised deep learning." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130698.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 87-90).
We investigate the use of unsupervised deep learning to create a general purpose automated fault detection system for manufacturing equipment. Unexpected equipment faults can be costly to manufacturing lines, but data driven fault detection systems often require a high level of application specific expertise to implement and continued human oversight. Collecting large labeled datasets to train such a system can also be challenging due to the sparse nature of faults. To address this, we focus on unsupervised deep learning approaches, and their ability to generalize across applications without changes to the hyper-parameters or architecture. Previous work has demonstrated the efficacy of autoencoders in unsupervised anomaly detection systems. In this work we propose a novel variant of the deep auto-encoding Gaussian mixture model, optimized for time series applications, and test its efficacy in detecting faults across a range of manufacturing equipment. It was tested against fault datasets from three milling machines, two plasma etchers, and one spinning ball bearing. In our tests, the model is able to detect over 80% of faults in all cases without the use of labeled data and without hyperparameter changes between applications. We also find that the model is capable of classifying different failure modes in some of our tests, and explore other ways the system can be used to provide useful diagnostic information. We present preliminary results from a continual learning variant of our fault detection architecture aimed at tackling the problem of system drift.
APA, Harvard, Vancouver, ISO, and other styles
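The thesis's deep auto-encoding Gaussian mixture idea — score a window by a density model fitted over its low-dimensional code together with its reconstruction error — can be approximated with PCA standing in for the deep autoencoder. The signal, window length, component counts, and threshold are all assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def windows(signal, w=32):
    return np.stack([signal[i:i + w] for i in range(0, len(signal) - w, w)])

# Healthy machine signal: a noisy periodic vibration (synthetic stand-in).
t = np.arange(20000)
healthy = np.sin(0.2 * t) + 0.1 * rng.normal(size=t.size)
X = windows(healthy)

# "Encoder": PCA code plus reconstruction error, in place of a deep AE.
pca = PCA(n_components=4).fit(X)
def energy_features(W):
    code = pca.transform(W)
    err = ((pca.inverse_transform(code) - W) ** 2).mean(axis=1, keepdims=True)
    return np.hstack([code, err])

# Gaussian mixture over [code, recon-error] ~ the DAGMM estimation network.
gmm = GaussianMixture(n_components=3, random_state=0).fit(energy_features(X))
thr = np.percentile(-gmm.score_samples(energy_features(X)), 99)

# A fault: the same periodicity, but with recurring amplitude bursts.
faulty = healthy.copy()
faulty[::50] += 3.0
scores = -gmm.score_samples(energy_features(windows(faulty)))
print((scores > thr).mean())   # fraction of faulty windows flagged
```

As in the thesis, no fault labels are needed: the threshold comes from healthy data alone, so the same recipe transfers across machines without per-application tuning.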
6

Liu, Dongnan. "Supervised and Unsupervised Deep Learning-based Biomedical Image Segmentation." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/24744.

Full text
Abstract:
Biomedical image analysis plays a crucial role in the development of healthcare, with a wide scope of applications including the disease diagnosis, clinical treatment, and future prognosis. Among various biomedical image analysis techniques, segmentation is an essential step, which aims at assigning each pixel with labels of interest on the category and instance. At the early stage, the segmentation results were obtained via manual annotation, which is time-consuming and error-prone. Over the past few decades, hand-craft feature based methods have been proposed to segment the biomedical images automatically. However, these methods heavily rely on prior knowledge, which limits their generalization ability on various biomedical images. With the recent advance of the deep learning technique, convolutional neural network (CNN) based methods have achieved state-of-the-art performance on various nature and biomedical image segmentation tasks. The great success of the CNN based segmentation methods results from the ability to learn contextual and local information from the high dimensional feature space. However, the biomedical image segmentation tasks are particularly challenging, due to the complicated background components, the high variability of object appearances, numerous overlapping objects, and ambiguous object boundaries. To this end, it is necessary to establish automated deep learning-based segmentation paradigms, which are capable of processing the complicated semantic and morphological relationships in various biomedical images. In this thesis, we propose novel deep learning-based methods for fully supervised and unsupervised biomedical image segmentation tasks. For the first part of the thesis, we introduce fully supervised deep learning-based segmentation methods on various biomedical image analysis scenarios. 
First, we design a panoptic structure paradigm for nuclei instance segmentation in histopathology images, and cell instance segmentation in fluorescence microscopy images. Traditional proposal-based and proposal-free instance segmentation methods are only capable of leveraging either global contextual or local instance information; our panoptic paradigm integrates both and therefore achieves better performance. Second, we propose a multi-level feature fusion architecture for semantic neuron membrane segmentation in electron microscopy (EM) images. Third, we propose a 3D anisotropic paradigm for brain tumor segmentation in magnetic resonance images, which enlarges the model's receptive field while maintaining memory efficiency. Although our fully supervised methods achieve competitive performance on several biomedical image segmentation tasks, they rely heavily on annotations of the training images. However, labeling pixel-level segmentation ground truth for biomedical images is expensive and labor-intensive. Consequently, exploring unsupervised segmentation methods that do not require annotations is an important topic in biomedical image analysis. In the second part of the thesis, we focus on unsupervised biomedical image segmentation methods. First, we propose a panoptic feature alignment paradigm for unsupervised nuclei instance segmentation in histopathology images, and mitochondria instance segmentation in EM images. To the best of our knowledge, ours is the first unsupervised deep learning-based method designed for a variety of biomedical image instance segmentation tasks. Second, we design a feature disentanglement architecture for unsupervised object recognition.
In addition to the unsupervised instance segmentation for the biomedical images, our method also achieves state-of-the-art performance on the unsupervised object detection for natural images, which further demonstrates its effectiveness and high generalization ability.
APA, Harvard, Vancouver, ISO, and other styles
7

Nasrin, Mst Shamima. "Pathological Image Analysis with Supervised and Unsupervised Deep Learning Approaches." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1620052562772676.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Xinheng. "A Deep Unsupervised Anomaly Detection Model for Automated Tumor Segmentation." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22502.

Full text
Abstract:
Much research has investigated computer-aided diagnosis (CAD) for automated tumor segmentation in various medical images, e.g., magnetic resonance (MR), computed tomography (CT) and positron-emission tomography (PET). Recent advances in automated tumor segmentation have been achieved by supervised deep learning (DL) methods trained on large labelled datasets that cover tumor variations. However, such training data are scarce due to the cost of the labeling process. With insufficient training data, supervised DL methods have difficulty generating effective feature representations for tumor segmentation. This thesis aims to develop an unsupervised DL method that exploits the large volumes of unlabeled data generated during the clinical process. Our assumption, following unsupervised anomaly detection (UAD), is that normal data have constrained anatomy and variations, while anomalies, i.e., tumors, usually differ from this normality with high diversity. We demonstrate our method for automated tumor segmentation on two different image modalities. Firstly, given the bilateral symmetry of normal human brains and the asymmetry introduced by brain tumors, we propose a symmetry-driven deep UAD model using a GAN to model the normal symmetric variations, thus segmenting tumors by their asymmetry. We evaluated our method on two benchmark datasets. Our results show that our method outperformed the state-of-the-art unsupervised brain tumor segmentation methods and achieved competitive performance against supervised segmentation methods. Secondly, we propose a multi-modal deep UAD model for PET-CT tumor segmentation. We model a manifold of normal variations shared across normal CT and PET pairs; this manifold represents the normal pairing and can be used to segment the anomalies. We evaluated our method on two PET-CT datasets, and the results show that we outperformed the state-of-the-art unsupervised methods, supervised methods and baseline fusion techniques.
APA, Harvard, Vancouver, ISO, and other styles
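The symmetry-driven idea in this abstract — normal brains are roughly left-right symmetric, so tumors betray themselves as asymmetry — reduces, in caricature, to comparing an image with its mirror. The synthetic image and threshold are assumptions; the thesis models the symmetric variation with a GAN rather than a raw pixel difference, which, as here, also flags the mirror of the lesion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "brain slice": a left-right symmetric background pattern
# plus noise, with a small bright lesion on one side only.
base = np.outer(np.hanning(64), np.hanning(64))
img = base + base[:, ::-1]          # symmetric by construction
img += 0.02 * rng.normal(size=img.shape)
img[20:26, 10:16] += 0.8            # the asymmetric anomaly ("tumor")

# Symmetry-driven anomaly map: compare each pixel with its mirror.
# Note the known artifact: the lesion's mirror location lights up too.
anomaly_map = np.abs(img - img[:, ::-1])
mask = anomaly_map > 0.4

print(mask[20:26, 10:16].mean())    # lesion region is flagged
print(mask[40:46, 40:46].mean())    # a normal region is not
```

A learned model of normal symmetric variation (the thesis's GAN) replaces the raw mirror difference precisely to suppress that mirror artifact and tolerate the mild asymmetries of healthy anatomy.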
9

Längkvist, Martin. "Modeling time-series with deep networks." Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-39415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dekhtiar, Jonathan. "Deep Learning and unsupervised learning to automate visual inspection in the manufacturing industry." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2513.

Full text
Abstract:
The exponential growth of computing needs and resources implies a growing need for the automation of industrial processes. This is particularly visible for automatic visual inspection on production lines. Although studied since 1970, automatic visual inspection still struggles to be applied at large scale and at low cost. The methods used depend greatly on the availability of domain experts, which inevitably increases costs and reduces the flexibility of the methods employed. Since 2012, advances in the field of deep neural networks (i.e., Deep Learning) have enabled much progress in this direction, in particular thanks to convolutional neural networks, which have achieved near-human performance in many areas of visual perception (e.g., object recognition and detection). This thesis proposes an unsupervised approach to meet the needs of automatic visual inspection. The method, called AnoAEGAN, combines adversarial learning with the estimation of a probability density function. These two complementary approaches make it possible to jointly estimate the pixel-by-pixel probability of a visual defect in an image. The model is trained from a very limited number of images (fewer than 1,000) without using expert knowledge to label the data beforehand. The method offers increased flexibility through a short model training time and great versatility, demonstrated on ten different tasks without any modification of the model. It should reduce development costs and the time needed to deploy in production, and can also be deployed alongside a supervised approach in order to benefit from the advantages of each.
APA, Harvard, Vancouver, ISO, and other styles
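The thesis abstract above describes estimating a pixel-by-pixel defect probability from a model trained only on normal images. As a hedged illustration (not the actual AnoAEGAN implementation, whose adversarial training and density estimation are not reproduced here), the sketch below maps per-pixel reconstruction residuals from a hypothetical autoencoder to defect probabilities, under the assumption that residuals on normal pixels have scale `sigma`:

```python
import numpy as np

def pixel_anomaly_probability(image, reconstruction, sigma=0.1):
    """Turn per-pixel reconstruction residuals into defect probabilities.

    `reconstruction` is assumed to come from a model trained only on
    defect-free images (in AnoAEGAN, an adversarially trained network);
    `sigma` is the assumed residual scale on normal pixels.
    """
    residual = np.abs(np.asarray(image, float) - np.asarray(reconstruction, float))
    z = residual / sigma                      # residual in units of the normal noise scale
    return 1.0 / (1.0 + np.exp(-(z - 3.0)))  # logistic squash: large residual -> prob near 1
```

On defect-free pixels the residual stays near zero and the probability stays small; a residual several `sigma` wide pushes it toward 1, producing the kind of pixel-by-pixel defect map the abstract describes.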

Books on the topic "Deep Unsupervised Learning"

1

Pal, Sujit, Amita Kapoor, Antonio Gulli, and François Chollet. Deep Learning with TensorFlow and Keras: Build and Deploy Supervised, Unsupervised, Deep, and Reinforcement Learning Models. Packt Publishing, Limited, 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bonaccorso, Giuseppe. Hands-On Unsupervised Learning with Python: Implement Machine Learning and Deep Learning Models Using Scikit-Learn, TensorFlow, and More. Packt Publishing, Limited, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Advanced Deep Learning with TensorFlow 2 and Keras: Apply DL, GANs, VAEs, Deep RL, Unsupervised Learning, Object Detection and Segmentation, and More, 2nd Edition. Packt Publishing, Limited, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Leordeanu, Marius. Unsupervised Learning in Space and Time: A Modern Approach for Computer Vision Using Graph-Based Techniques and Deep Neural Networks. Springer International Publishing AG, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Leordeanu, Marius. Unsupervised Learning in Space and Time: A Modern Approach for Computer Vision using Graph-based Techniques and Deep Neural Networks. Springer, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Deep Unsupervised Learning"

1

Jo, Taeho. "Unsupervised Learning." In Deep Learning Foundations, 57–81. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-32879-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tanaka, Akinori, Akio Tomiya, and Koji Hashimoto. "Unsupervised Deep Learning." In Deep Learning and Physics, 103–26. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-6108-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vermeulen, Andreas François. "Unsupervised Learning: Deep Learning." In Industrial Machine Learning, 225–41. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5316-8_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wani, M. Arif, Farooq Ahmad Bhat, Saduf Afzal, and Asif Iqbal Khan. "Unsupervised Deep Learning Architectures." In Studies in Big Data, 77–94. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6794-6_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ye, Jong Chul. "Generative Models and Unsupervised Learning." In Geometry of Deep Learning, 267–313. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6046-7_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lei, Chen. "Unsupervised Learning: Deep Generative Model." In Cognitive Intelligence and Robotics, 183–215. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-2233-5_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hu, Weihua, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. "Unsupervised Discrete Representation Learning." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 97–119. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wani, M. Arif, Farooq Ahmad Bhat, Saduf Afzal, and Asif Iqbal Khan. "Unsupervised Deep Learning in Character Recognition." In Studies in Big Data, 133–49. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6794-6_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Shufei, Kaizhu Huang, Rui Zhang, and Amir Hussain. "Improve Deep Learning with Unsupervised Objective." In Neural Information Processing, 720–28. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70087-8_74.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tripathy, Sushreeta, and Muskaan Tabasum. "Autoencoder: An Unsupervised Deep Learning Approach." In Emerging Technologies in Data Mining and Information Security, 261–67. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4052-1_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Deep Unsupervised Learning"

1

Maggu, Jyoti, and Angshul Majumdar. "Unsupervised Deep Transform Learning." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Changsheng, Handong Ma, Zhao Kang, Ye Yuan, Xiao-Yu Zhang, and Guoren Wang. "On Deep Unsupervised Active Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/364.

Full text
Abstract:
Unsupervised active learning has attracted increasing attention in recent years; its goal is to select representative samples, in an unsupervised setting, for human annotation. Most existing works are based on shallow linear models, assuming that each sample can be well approximated by the span (i.e., the set of all linear combinations) of certain selected samples, and then take these selected samples as representative ones to label. In practice, however, the data do not necessarily conform to linear models, and modeling the nonlinearity of the data often becomes the key to success. In this paper, we present a novel Deep neural network framework for Unsupervised Active Learning, called DUAL. DUAL explicitly learns a nonlinear embedding that maps each input into a latent space through an encoder-decoder architecture, and introduces a selection block to choose representative samples in the learnt latent space. In the selection block, DUAL simultaneously preserves the whole input patterns as well as the cluster structure of the data. Extensive experiments are performed on six publicly available datasets, and the results clearly demonstrate the efficacy of our method compared with state-of-the-art approaches.
APA, Harvard, Vancouver, ISO, and other styles
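The selection step described in the DUAL abstract, choosing representative samples in a learnt latent space, can be illustrated with a small sketch. The code below substitutes plain k-means over raw features for DUAL's learnt embedding and selection block; the function name and the farthest-point initialization are illustrative assumptions, not the paper's method:

```python
import numpy as np

def select_representatives(X, k, iters=20):
    """Pick k representative rows of X for annotation.

    Sketch assumption: plain k-means on the raw features stands in for
    DUAL's encoder-decoder latent space; each cluster's sample nearest
    to its centroid is returned as a representative.
    """
    X = np.asarray(X, float)
    # deterministic farthest-point initialization
    idx = [0]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[idx][None], axis=2), axis=1)
        idx.append(int(d.argmax()))
    centroids = X[idx].copy()
    for _ in range(iters):  # Lloyd's iterations
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # index of the sample nearest each centroid
    dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    return sorted(int(i) for i in set(dists.argmin(axis=0)))
```

On two well-separated clusters with k=2, this returns one index from each cluster, which is the behavior an unsupervised selection block aims for.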
3

Chen, Junjie, William K. Cheung, and Anran Wang. "Learning Deep Unsupervised Binary Codes for Image Retrieval." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/85.

Full text
Abstract:
Hashing is an efficient approximate nearest neighbor search method and has been widely adopted for large-scale multimedia retrieval. While supervised learning is more popular for data-dependent hashing, deep unsupervised hashing methods have recently been developed to learn non-linear transformations that convert multimedia inputs to binary codes. Most existing deep unsupervised hashing methods use a quadratic constraint to minimize the difference between the compact representations and the target binary codes, which inevitably causes severe information loss. In this paper, we propose a novel deep unsupervised hashing method called DeepQuan. The DeepQuan model utilizes a deep autoencoder network, where the encoder learns compact representations and the decoder preserves the manifold structure. In contrast to existing unsupervised methods, DeepQuan learns the binary codes by minimizing the quantization error through product quantization. Furthermore, a weighted triplet loss is proposed to avoid trivial solutions and poor generalization. Extensive experimental results on standard datasets show that the proposed DeepQuan model outperforms state-of-the-art unsupervised hashing methods for image retrieval tasks.
APA, Harvard, Vancouver, ISO, and other styles
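The product quantization step that DeepQuan's abstract refers to can be sketched independently of the deep model: split a vector into subvectors and quantize each against its own codebook. This is a generic PQ sketch, not DeepQuan's training procedure or codebook learning:

```python
import numpy as np

def pq_encode(x, codebooks):
    """Product quantization: split x into len(codebooks) equal subvectors
    and replace each with the index of its nearest codeword."""
    subs = np.split(np.asarray(x, float), len(codebooks))
    return [int(np.linalg.norm(cb - s, axis=1).argmin())
            for s, cb in zip(subs, codebooks)]

def pq_decode(codes, codebooks):
    """Reconstruct the quantized vector; its distance to the original
    is the quantization error that DeepQuan's training minimizes."""
    return np.concatenate([cb[c] for c, cb in zip(codes, codebooks)])
```

For example, with two 2-codeword codebooks `[[0,0],[1,1]]` and `[[0,0],[2,2]]`, the vector `[0.9, 1.1, 1.9, 2.1]` encodes to codes `[1, 1]` and reconstructs to `[1, 1, 2, 2]`, for a quantization error of 0.2.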
4

Xu, Yueyao. "Unsupervised Deep Learning for Text Steganalysis." In 2020 International Workshop on Electronic Communication and Artificial Intelligence (IWECAI). IEEE, 2020. http://dx.doi.org/10.1109/iwecai50956.2020.00030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Siddique, A. B., Samet Oymak, and Vagelis Hristidis. "Unsupervised Paraphrasing via Deep Reinforcement Learning." In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394486.3403231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sekmen, Ali, Ahmet Bugra Koku, Mustafa Parlaktuna, Ayad Abdul-Malek, and Nagendrababu Vanamala. "Unsupervised deep learning for subspace clustering." In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017. http://dx.doi.org/10.1109/bigdata.2017.8258156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Yixin, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, and Shirui Pan. "Towards Unsupervised Deep Graph Structure Learning." In WWW '22: The ACM Web Conference 2022. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3485447.3512186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tang, Hao, and Kun Zhan. "Unsupervised confident co-promoting: refinery for pseudo labels on unsupervised person re-identification." In International Conference on Cloud Computing, Performance Computing, and Deep Learning (CCPCDL 2022), edited by Sandeep Saxena. SPIE, 2022. http://dx.doi.org/10.1117/12.2640850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sumalvico, Maciej. "Unsupervised Learning of Morphology with Graph Sampling." In RANLP 2017 - Recent Advances in Natural Language Processing Meet Deep Learning. Incoma Ltd. Shoumen, Bulgaria, 2017. http://dx.doi.org/10.26615/978-954-452-049-6_093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhan, Xiaohang, Jiahao Xie, Ziwei Liu, Yew-Soon Ong, and Chen Change Loy. "Online Deep Clustering for Unsupervised Representation Learning." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00672.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Deep Unsupervised Learning"

1

Lin, Youzuo. Physics-guided Machine Learning: from Supervised Deep Networks to Unsupervised Lightweight Models. Office of Scientific and Technical Information (OSTI), August 2023. http://dx.doi.org/10.2172/1994110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tran, Anh, Theron Rodgers, and Timothy Wildey. Reification of latent microstructures: On supervised, unsupervised, and semi-supervised deep learning applications for microstructures in materials informatics. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1673174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, May 2023. http://dx.doi.org/10.3289/sw_2_2023.

Full text
Abstract:
The Automated and Integrated Seafloor Classification Workflow (AI-SCW) is a semi-automated underwater image processing pipeline that has been customized for classifying the seafloor into semantic habitat categories. The current implementation has been tested against a sequence of underwater images collected by the Ocean Floor Observation System (OFOS) in the Clarion-Clipperton Zone of the Pacific Ocean, but the workflow could also be applied to images acquired by other platforms such as an Autonomous Underwater Vehicle (AUV) or a Remotely Operated Vehicle (ROV). The modules in AI-SCW are implemented in the Python programming language, using libraries such as scikit-image for image processing, scikit-learn for machine learning and dimensionality reduction, keras for computer vision with deep learning, and matplotlib for generating visualizations. AI-SCW's modularized implementation allows users to accomplish a variety of underwater computer vision tasks, including: detecting laser points in the underwater images for scale determination; performing contrast enhancement and color normalization to improve the visual quality of the images; semi-automated generation of annotations to be used downstream during supervised classification; training a convolutional neural network (Inception v3) on the generated annotations to semantically classify each image into one of the pre-defined seafloor habitat categories; evaluating sampling strategies for generating balanced training images to fit an unsupervised k-means classifier; and visualizing classification results both in feature-space view and in map-view geospatial coordinates. The workflow is thus useful for quick but objective generation of image-based seafloor habitat maps to support monitoring of remote benthic ecosystems.
APA, Harvard, Vancouver, ISO, and other styles
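Among the AI-SCW tasks listed in the abstract, color normalization is the simplest to illustrate. The sketch below is a plain NumPy stand-in (the actual workflow uses scikit-image); standardizing each channel and then rescaling globally to [0, 1] is an assumption about the specific normalization used, not AI-SCW's exact module:

```python
import numpy as np

def normalize_colors(image):
    """Per-channel color normalization for an H x W x C image.

    Each channel is standardized to zero mean and unit variance, then
    the result is rescaled globally to [0, 1]. The real AI-SCW modules
    use scikit-image; this NumPy stand-in only illustrates the step.
    """
    img = np.asarray(image, float)
    mean = img.mean(axis=(0, 1), keepdims=True)        # per-channel mean
    std = img.std(axis=(0, 1), keepdims=True) + 1e-8   # avoid divide-by-zero
    z = (img - mean) / std
    lo, hi = z.min(), z.max()
    return (z - lo) / (hi - lo + 1e-8)
```

This kind of normalization removes per-image color casts (common in deep-sea imagery lit by vehicle-mounted lamps) before the images are fed to a classifier.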
