Scientific literature on the topic "Unsupervised deep neural networks"
Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles
Contents
Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Unsupervised deep neural networks".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Unsupervised deep neural networks"
Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.
Guo, Wenqi, Weixiong Zhang, Zheng Zhang, Ping Tang, and Shichen Gao. "Deep Temporal Iterative Clustering for Satellite Image Time Series Land Cover Analysis." Remote Sensing 14, no. 15 (July 29, 2022): 3635. http://dx.doi.org/10.3390/rs14153635.
Xu, Jianqiao, Zhaolu Zuo, Danchao Wu, Bing Li, Xiaoni Li, and Deyi Kong. "Bearing Defect Detection with Unsupervised Neural Networks." Shock and Vibration 2021 (August 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/9544809.
Feng, Yu, and Hui Sun. "Basketball Footwork and Application Supported by Deep Learning Unsupervised Transfer Method." International Journal of Information Technology and Web Engineering 18, no. 1 (December 1, 2023): 1–17. http://dx.doi.org/10.4018/ijitwe.334365.
Sun, Yanan, Gary G. Yen, and Zhang Yi. "Evolving Unsupervised Deep Neural Networks for Learning Meaningful Representations." IEEE Transactions on Evolutionary Computation 23, no. 1 (February 2019): 89–103. http://dx.doi.org/10.1109/tevc.2018.2808689.
Shi, Yu, Cien Fan, Lian Zou, Caixia Sun, and Yifeng Liu. "Unsupervised Adversarial Defense through Tandem Deep Image Priors." Electronics 9, no. 11 (November 19, 2020): 1957. http://dx.doi.org/10.3390/electronics9111957.
Thakur, Amey. "Generative Adversarial Networks." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2307–25. http://dx.doi.org/10.22214/ijraset.2021.37723.
Ferles, Christos, Yannis Papanikolaou, Stylianos P. Savaidis, and Stelios A. Mitilineos. "Deep Self-Organizing Map of Convolutional Layers for Clustering and Visualizing Image Data." Machine Learning and Knowledge Extraction 3, no. 4 (November 14, 2021): 879–99. http://dx.doi.org/10.3390/make3040044.
Zhuang, Chengxu, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L. K. Yamins. "Unsupervised neural network models of the ventral visual stream." Proceedings of the National Academy of Sciences 118, no. 3 (January 11, 2021): e2014196118. http://dx.doi.org/10.1073/pnas.2014196118.
Lin, Baihan. "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers." Entropy 24, no. 1 (December 28, 2021): 59. http://dx.doi.org/10.3390/e24010059.
Texte intégralThèses sur le sujet "Unsupervised deep neural networks"
Donati, Lorenzo. "Domain Adaptation through Deep Neural Networks for Health Informatics." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14888/.
Ahn, Euijoon. "Unsupervised Deep Feature Learning for Medical Image Analysis." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23002.
Cherti, Mehdi. "Deep generative neural networks for novelty generation: a foundational framework, metrics and experiments." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS029/document.
Texte intégralIn recent years, significant advances made in deep neural networks enabled the creation of groundbreaking technologies such as self-driving cars and voice-enabled personal assistants. Almost all successes of deep neural networks are about prediction, whereas the initial breakthroughs came from generative models. Today, although we have very powerful deep generative modeling techniques, these techniques are essentially being used for prediction or for generating known objects (i.e., good quality images of known classes): any generated object that is a priori unknown is considered as a failure mode (Salimans et al., 2016) or as spurious (Bengio et al., 2013b). In other words, when prediction seems to be the only possible objective, novelty is seen as an error that researchers have been trying hard to eliminate. This thesis defends the point of view that, instead of trying to eliminate these novelties, we should study them and the generative potential of deep nets to create useful novelty, especially given the economic and societal importance of creating new objects in contemporary societies. The thesis sets out to study novelty generation in relationship with data-driven knowledge models produced by deep generative neural networks. Our first key contribution is the clarification of the importance of representations and their impact on the kind of novelties that can be generated: a key consequence is that a creative agent might need to rerepresent known objects to access various kinds of novelty. We then demonstrate that traditional objective functions of statistical learning theory, such as maximum likelihood, are not necessarily the best theoretical framework for studying novelty generation. We propose several other alternatives at the conceptual level. A second key result is the confirmation that current models, with traditional objective functions, can indeed generate unknown objects. 
This also shows that even though objectives like maximum likelihood are designed to eliminate novelty, practical implementations do generate novelty. Through a series of experiments, we study the behavior of these models and the novelty they generate. In particular, we propose a new task setup and metrics for selecting good generative models. Finally, the thesis concludes with a series of experiments clarifying the characteristics of models that can exhibit novelty. Experiments show that sparsity, noise level, and restricting the capacity of the net eliminates novelty and that models that are better at recognizing novelty are also good at generating novelty
Kilinc, Ismail Ozsel. "Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7415.
McClintick, Kyle W. "Training Data Generation Framework For Machine-Learning Based Classifiers." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1276.
Boschini, Matteo. "Unsupervised Learning of Scene Flow." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16226/.
Kalinicheva, Ekaterina. "Unsupervised satellite image time series analysis using deep learning techniques." Electronic thesis or dissertation, Sorbonne université, 2020. http://www.theses.fr/2020SORUS335.
Texte intégralThis thesis presents a set of unsupervised algorithms for satellite image time series (SITS) analysis. Our methods exploit machine learning algorithms and, in particular, neural networks to detect different spatio-temporal entities and their eventual changes in the time.In our thesis, we aim to identify three different types of temporal behavior: no change areas, seasonal changes (vegetation and other phenomena that have seasonal recurrence) and non-trivial changes (permanent changes such as constructions or demolishment, crop rotation, etc). Therefore, we propose two frameworks: one for detection and clustering of non-trivial changes and another for clustering of “stable” areas (seasonal changes and no change areas). The first framework is composed of two steps which are bi-temporal change detection and the interpretation of detected changes in a multi-temporal context with graph-based approaches. The bi-temporal change detection is performed for each pair of consecutive images of the SITS and is based on feature translation with autoencoders (AEs). At the next step, the changes from different timestamps that belong to the same geographic area form evolution change graphs. The graphs are then clustered using a recurrent neural networks AE model to identify different types of change behavior. For the second framework, we propose an approach for object-based SITS clustering. First, we encode SITS with a multi-view 3D convolutional AE in a single image. Second, we perform a two steps SITS segmentation using the encoded SITS and original images. Finally, the obtained segments are clustered exploiting their encoded descriptors
Yuan, Xiao. "Graph neural networks for spatial gene expression analysis of the developing human heart." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-427330.
Ventura, Francesco. "Explaining black-box deep neural models' predictions, behaviors, and performances through the unsupervised mining of their inner knowledge." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2912972.
Li, Yingzhen. "Approximate inference: new visions." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277549.
Texte intégralLivres sur le sujet "Unsupervised deep neural networks"
Hinton, Geoffrey E., and Terrence J. Sejnowski, eds. Unsupervised learning: Foundations of neural computation. Cambridge, Mass.: MIT Press, 1999.
Baruque, Bruno. Fusion methods for unsupervised learning ensembles. Berlin: Springer, 2010.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.
Moolayil, Jojo. Learn Keras for Deep Neural Networks. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.
Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.
Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [New York, N.Y.?]: [publisher not identified], 2020.
Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.
Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.
Supervised and unsupervised pattern recognition: Feature extraction and computational intelligence. Boca Raton, Fla.: CRC Press, 2000.
Trouver le texte intégralChapitres de livres sur le sujet "Unsupervised deep neural networks"
Song, Zeyang, Xi Wu, Mengwen Yuan, and Huajin Tang. "An Unsupervised Spiking Deep Neural Network for Object Recognition." In Advances in Neural Networks – ISNN 2019, 361–70. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22808-8_36.
Deshwal, Deepti, and Pardeep Sangwan. "A Comprehensive Study of Deep Neural Networks for Unsupervised Deep Learning." In Artificial Intelligence for Sustainable Development: Theory, Practice and Future Applications, 101–26. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-51920-9_7.
Zhou, Jianchao, Xiaoou Chen, and Deshun Yang. "Multimodel Music Emotion Recognition Using Unsupervised Deep Neural Networks." In Lecture Notes in Electrical Engineering, 27–39. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8707-4_3.
Yan, Ruqiang, and Zhibin Zhao. "Unsupervised Deep Transfer Learning for Intelligent Fault Diagnosis." In Deep Neural Networks-Enabled Intelligent Fault Diagnosis of Mechanical Systems, 109–36. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003474463-9.
Dreher, Kris K., Leonardo Ayala, Melanie Schellenberg, Marco Hübner, Jan-Hinrich Nölke, Tim J. Adler, Silvia Seidlitz, et al. "Unsupervised Domain Transfer with Conditional Invertible Neural Networks." In Lecture Notes in Computer Science, 770–80. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43907-0_73.
Das, Debasmit, and C. S. George Lee. "Graph Matching and Pseudo-Label Guided Deep Unsupervised Domain Adaptation." In Artificial Neural Networks and Machine Learning – ICANN 2018, 342–52. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01424-7_34.
Slama, Dirk. "Artificial Intelligence 101." In The Digital Playbook, 11–17. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-88221-1_2.
Zamora-Martínez, Francisco, Javier Muñoz-Almaraz, and Juan Pardo. "Integration of Unsupervised and Supervised Criteria for Deep Neural Networks Training." In Artificial Neural Networks and Machine Learning – ICANN 2016, 55–62. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44781-0_7.
Lin, Xianghong, and Pangao Du. "Spike-Train Level Unsupervised Learning Algorithm for Deep Spiking Belief Networks." In Artificial Neural Networks and Machine Learning – ICANN 2020, 634–45. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61616-8_51.
Liang, Yu, Yi Yang, Furao Shen, Jinxi Zhao, and Tao Zhu. "An Incremental Deep Learning Network for On-line Unsupervised Feature Extraction." In Neural Information Processing, 383–92. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70096-0_40.
Texte intégralActes de conférences sur le sujet "Unsupervised deep neural networks"
Cerisara, Christophe, Paul Caillon, and Guillaume Le Berre. "Unsupervised Post-Tuning of Deep Neural Networks." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9534198.
Sato, Kazuki, Kenta Hama, Takashi Matsubara, and Kuniaki Uehara. "Predictable Uncertainty-Aware Unsupervised Deep Anomaly Segmentation." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852144.
Xie, Ying, Linh Le, and Jie Hao. "Unsupervised deep kernel for high dimensional data." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7965868.
Braga, Pedro. "Backpropagating the Unsupervised Error of Self-Organizing Maps to Deep Neural Networks." In LatinX in AI at Neural Information Processing Systems Conference 2019. Journal of LatinX in AI Research, 2019. http://dx.doi.org/10.52591/lxai2019120818.
Junges, Rafael, Zahra Rastin, Luca Lomazzi, Marco Giglio, and Francesco Cadini. "Damage Localization Frameworks Based on Unsupervised Deep Learning Neural Networks." In Structural Health Monitoring 2023. Destech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/36889.
Feng, Guanchao, J. Gerald Quirk, and Petar M. Djuric. "Supervised and Unsupervised Learning of Fetal Heart Rate Tracings with Deep Gaussian Processes." In 2018 14th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2018. http://dx.doi.org/10.1109/neurel.2018.8586992.
Yu, Chaohui, Jindong Wang, Yiqiang Chen, and Zijing Wu. "Accelerating Deep Unsupervised Domain Adaptation with Transfer Channel Pruning." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8851810.
Chen, Dong, Miaomiao Cheng, Chen Min, and Liping Jing. "Unsupervised Deep Imputed Hashing for Partial Cross-modal Retrieval." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206611.
Tian, Qiangxing, Jinxin Liu, Guanchu Wang, and Donglin Wang. "Unsupervised Discovery of Transitional Skills for Deep Reinforcement Learning." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533820.
Wang, Qian, Fanlin Meng, and Toby P. Breckon. "On Fine-Tuned Deep Features for Unsupervised Domain Adaptation." In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191262.
Texte intégralRapports d'organisations sur le sujet "Unsupervised deep neural networks"
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.
Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, May 2023. http://dx.doi.org/10.3289/sw_2_2023.
Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557202.
Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), April 2023. http://dx.doi.org/10.2172/1984848.
Lin, Youzuo. Physics-guided Machine Learning: from Supervised Deep Networks to Unsupervised Lightweight Models. Office of Scientific and Technical Information (OSTI), August 2023. http://dx.doi.org/10.2172/1994110.
Chavez, Wesley. An Exploration of Linear Classifiers for Unsupervised Spiking Neural Networks with Event-Driven Data. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6323.
Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.
Armstrong, Derek Elswick, and Joseph Gabriel Gorka. Using Deep Neural Networks to Extract Fireball Parameters from Infrared Spectral Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1623398.
Thulasidasan, Sunil, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah E. Michalak. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1525811.
Ellis, John, Attila Cangi, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1677521.