A selection of scholarly literature on the topic "Unsupervised deep neural networks"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Unsupervised deep neural networks".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.
Journal articles on the topic "Unsupervised deep neural networks"
Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.
Guo, Wenqi, Weixiong Zhang, Zheng Zhang, Ping Tang, and Shichen Gao. "Deep Temporal Iterative Clustering for Satellite Image Time Series Land Cover Analysis." Remote Sensing 14, no. 15 (July 29, 2022): 3635. http://dx.doi.org/10.3390/rs14153635.
Xu, Jianqiao, Zhaolu Zuo, Danchao Wu, Bing Li, Xiaoni Li, and Deyi Kong. "Bearing Defect Detection with Unsupervised Neural Networks." Shock and Vibration 2021 (August 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/9544809.
Feng, Yu, and Hui Sun. "Basketball Footwork and Application Supported by Deep Learning Unsupervised Transfer Method." International Journal of Information Technology and Web Engineering 18, no. 1 (December 1, 2023): 1–17. http://dx.doi.org/10.4018/ijitwe.334365.
Sun, Yanan, Gary G. Yen, and Zhang Yi. "Evolving Unsupervised Deep Neural Networks for Learning Meaningful Representations." IEEE Transactions on Evolutionary Computation 23, no. 1 (February 2019): 89–103. http://dx.doi.org/10.1109/tevc.2018.2808689.
Shi, Yu, Cien Fan, Lian Zou, Caixia Sun, and Yifeng Liu. "Unsupervised Adversarial Defense through Tandem Deep Image Priors." Electronics 9, no. 11 (November 19, 2020): 1957. http://dx.doi.org/10.3390/electronics9111957.
Thakur, Amey. "Generative Adversarial Networks." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2307–25. http://dx.doi.org/10.22214/ijraset.2021.37723.
Ferles, Christos, Yannis Papanikolaou, Stylianos P. Savaidis, and Stelios A. Mitilineos. "Deep Self-Organizing Map of Convolutional Layers for Clustering and Visualizing Image Data." Machine Learning and Knowledge Extraction 3, no. 4 (November 14, 2021): 879–99. http://dx.doi.org/10.3390/make3040044.
Zhuang, Chengxu, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L. K. Yamins. "Unsupervised neural network models of the ventral visual stream." Proceedings of the National Academy of Sciences 118, no. 3 (January 11, 2021): e2014196118. http://dx.doi.org/10.1073/pnas.2014196118.
Lin, Baihan. "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers." Entropy 24, no. 1 (December 28, 2021): 59. http://dx.doi.org/10.3390/e24010059.
Dissertations and theses on the topic "Unsupervised deep neural networks"
Donati, Lorenzo. "Domain Adaptation through Deep Neural Networks for Health Informatics." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14888/.
Ahn, Euijoon. "Unsupervised Deep Feature Learning for Medical Image Analysis." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23002.
Cherti, Mehdi. "Deep generative neural networks for novelty generation: a foundational framework, metrics and experiments." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS029/document.
Der volle Inhalt der QuelleIn recent years, significant advances made in deep neural networks enabled the creation of groundbreaking technologies such as self-driving cars and voice-enabled personal assistants. Almost all successes of deep neural networks are about prediction, whereas the initial breakthroughs came from generative models. Today, although we have very powerful deep generative modeling techniques, these techniques are essentially being used for prediction or for generating known objects (i.e., good quality images of known classes): any generated object that is a priori unknown is considered as a failure mode (Salimans et al., 2016) or as spurious (Bengio et al., 2013b). In other words, when prediction seems to be the only possible objective, novelty is seen as an error that researchers have been trying hard to eliminate. This thesis defends the point of view that, instead of trying to eliminate these novelties, we should study them and the generative potential of deep nets to create useful novelty, especially given the economic and societal importance of creating new objects in contemporary societies. The thesis sets out to study novelty generation in relationship with data-driven knowledge models produced by deep generative neural networks. Our first key contribution is the clarification of the importance of representations and their impact on the kind of novelties that can be generated: a key consequence is that a creative agent might need to rerepresent known objects to access various kinds of novelty. We then demonstrate that traditional objective functions of statistical learning theory, such as maximum likelihood, are not necessarily the best theoretical framework for studying novelty generation. We propose several other alternatives at the conceptual level. A second key result is the confirmation that current models, with traditional objective functions, can indeed generate unknown objects. 
This also shows that even though objectives like maximum likelihood are designed to eliminate novelty, practical implementations do generate novelty. Through a series of experiments, we study the behavior of these models and the novelty they generate. In particular, we propose a new task setup and metrics for selecting good generative models. Finally, the thesis concludes with a series of experiments clarifying the characteristics of models that can exhibit novelty. Experiments show that sparsity, noise level, and restricting the capacity of the net eliminates novelty and that models that are better at recognizing novelty are also good at generating novelty
Kilinc, Ismail Ozsel. "Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7415.
McClintick, Kyle W. "Training Data Generation Framework For Machine-Learning Based Classifiers." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1276.
Boschini, Matteo. "Unsupervised Learning of Scene Flow." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16226/.
Kalinicheva, Ekaterina. "Unsupervised satellite image time series analysis using deep learning techniques." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS335.
Der volle Inhalt der QuelleThis thesis presents a set of unsupervised algorithms for satellite image time series (SITS) analysis. Our methods exploit machine learning algorithms and, in particular, neural networks to detect different spatio-temporal entities and their eventual changes in the time.In our thesis, we aim to identify three different types of temporal behavior: no change areas, seasonal changes (vegetation and other phenomena that have seasonal recurrence) and non-trivial changes (permanent changes such as constructions or demolishment, crop rotation, etc). Therefore, we propose two frameworks: one for detection and clustering of non-trivial changes and another for clustering of “stable” areas (seasonal changes and no change areas). The first framework is composed of two steps which are bi-temporal change detection and the interpretation of detected changes in a multi-temporal context with graph-based approaches. The bi-temporal change detection is performed for each pair of consecutive images of the SITS and is based on feature translation with autoencoders (AEs). At the next step, the changes from different timestamps that belong to the same geographic area form evolution change graphs. The graphs are then clustered using a recurrent neural networks AE model to identify different types of change behavior. For the second framework, we propose an approach for object-based SITS clustering. First, we encode SITS with a multi-view 3D convolutional AE in a single image. Second, we perform a two steps SITS segmentation using the encoded SITS and original images. Finally, the obtained segments are clustered exploiting their encoded descriptors
Yuan, Xiao. "Graph neural networks for spatial gene expression analysis of the developing human heart." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-427330.
Ventura, Francesco. "Explaining black-box deep neural models' predictions, behaviors, and performances through the unsupervised mining of their inner knowledge." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2912972.
Li, Yingzhen. "Approximate inference: new visions." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277549.
Books on the topic "Unsupervised deep neural networks"
Hinton, Geoffrey E., and Terrence J. Sejnowski, eds. Unsupervised Learning: Foundations of Neural Computation. Cambridge, Mass.: MIT Press, 1999.
Baruque, Bruno. Fusion Methods for Unsupervised Learning Ensembles. Berlin: Springer, 2010.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.
Moolayil, Jojo. Learn Keras for Deep Neural Networks. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.
Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.
Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [New York, N.Y.?]: [publisher not identified], 2020.
Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.
Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.
Supervised and Unsupervised Pattern Recognition: Feature Extraction and Computational Intelligence. Boca Raton, Fla.: CRC Press, 2000.
Book chapters on the topic "Unsupervised deep neural networks"
Song, Zeyang, Xi Wu, Mengwen Yuan, and Huajin Tang. "An Unsupervised Spiking Deep Neural Network for Object Recognition." In Advances in Neural Networks – ISNN 2019, 361–70. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22808-8_36.
Deshwal, Deepti, and Pardeep Sangwan. "A Comprehensive Study of Deep Neural Networks for Unsupervised Deep Learning." In Artificial Intelligence for Sustainable Development: Theory, Practice and Future Applications, 101–26. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-51920-9_7.
Zhou, Jianchao, Xiaoou Chen, and Deshun Yang. "Multimodel Music Emotion Recognition Using Unsupervised Deep Neural Networks." In Lecture Notes in Electrical Engineering, 27–39. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8707-4_3.
Yan, Ruqiang, and Zhibin Zhao. "Unsupervised Deep Transfer Learning for Intelligent Fault Diagnosis." In Deep Neural Networks-Enabled Intelligent Fault Diagnosis of Mechanical Systems, 109–36. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003474463-9.
Dreher, Kris K., Leonardo Ayala, Melanie Schellenberg, Marco Hübner, Jan-Hinrich Nölke, Tim J. Adler, Silvia Seidlitz, et al. "Unsupervised Domain Transfer with Conditional Invertible Neural Networks." In Lecture Notes in Computer Science, 770–80. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43907-0_73.
Das, Debasmit, and C. S. George Lee. "Graph Matching and Pseudo-Label Guided Deep Unsupervised Domain Adaptation." In Artificial Neural Networks and Machine Learning – ICANN 2018, 342–52. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01424-7_34.
Slama, Dirk. "Artificial Intelligence 101." In The Digital Playbook, 11–17. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-88221-1_2.
Zamora-Martínez, Francisco, Javier Muñoz-Almaraz, and Juan Pardo. "Integration of Unsupervised and Supervised Criteria for Deep Neural Networks Training." In Artificial Neural Networks and Machine Learning – ICANN 2016, 55–62. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44781-0_7.
Lin, Xianghong, and Pangao Du. "Spike-Train Level Unsupervised Learning Algorithm for Deep Spiking Belief Networks." In Artificial Neural Networks and Machine Learning – ICANN 2020, 634–45. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61616-8_51.
Liang, Yu, Yi Yang, Furao Shen, Jinxi Zhao, and Tao Zhu. "An Incremental Deep Learning Network for On-line Unsupervised Feature Extraction." In Neural Information Processing, 383–92. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70096-0_40.
Conference papers on the topic "Unsupervised deep neural networks"
Cerisara, Christophe, Paul Caillon, and Guillaume Le Berre. "Unsupervised Post-Tuning of Deep Neural Networks." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9534198.
Sato, Kazuki, Kenta Hama, Takashi Matsubara, and Kuniaki Uehara. "Predictable Uncertainty-Aware Unsupervised Deep Anomaly Segmentation." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852144.
Xie, Ying, Linh Le, and Jie Hao. "Unsupervised deep kernel for high dimensional data." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7965868.
Braga, Pedro. "Backpropagating the Unsupervised Error of Self-Organizing Maps to Deep Neural Networks." In LatinX in AI at Neural Information Processing Systems Conference 2019. Journal of LatinX in AI Research, 2019. http://dx.doi.org/10.52591/lxai2019120818.
Junges, Rafael, Zahra Rastin, Luca Lomazzi, Marco Giglio, and Francesco Cadini. "Damage Localization Frameworks Based on Unsupervised Deep Learning Neural Networks." In Structural Health Monitoring 2023. DEStech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/36889.
Feng, Guanchao, J. Gerald Quirk, and Petar M. Djuric. "Supervised and Unsupervised Learning of Fetal Heart Rate Tracings with Deep Gaussian Processes." In 2018 14th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2018. http://dx.doi.org/10.1109/neurel.2018.8586992.
Yu, Chaohui, Jindong Wang, Yiqiang Chen, and Zijing Wu. "Accelerating Deep Unsupervised Domain Adaptation with Transfer Channel Pruning." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8851810.
Chen, Dong, Miaomiao Cheng, Chen Min, and Liping Jing. "Unsupervised Deep Imputed Hashing for Partial Cross-modal Retrieval." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206611.
Tian, Qiangxing, Jinxin Liu, Guanchu Wang, and Donglin Wang. "Unsupervised Discovery of Transitional Skills for Deep Reinforcement Learning." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533820.
Wang, Qian, Fanlin Meng, and Toby P. Breckon. "On Fine-Tuned Deep Features for Unsupervised Domain Adaptation." In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191262.
Reports of organizations on the topic "Unsupervised deep neural networks"
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.
Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, May 2023. http://dx.doi.org/10.3289/sw_2_2023.
Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557202.
Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), April 2023. http://dx.doi.org/10.2172/1984848.
Lin, Youzuo. Physics-guided Machine Learning: from Supervised Deep Networks to Unsupervised Lightweight Models. Office of Scientific and Technical Information (OSTI), August 2023. http://dx.doi.org/10.2172/1994110.
Chavez, Wesley. An Exploration of Linear Classifiers for Unsupervised Spiking Neural Networks with Event-Driven Data. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6323.
Talathi, S. S. Deep Recurrent Neural Networks for Seizure Detection and Early Seizure Detection Systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.
Armstrong, Derek Elswick, and Joseph Gabriel Gorka. Using Deep Neural Networks to Extract Fireball Parameters from Infrared Spectral Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1623398.
Thulasidasan, Sunil, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah E. Michalak. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1525811.
Ellis, John, Attila Cangi, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1677521.