Table of Contents
Selection of scholarly literature on the topic "Self-supervised learning"
Create a reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Self-supervised learning".
Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in its metadata.
Journal articles on the topic "Self-supervised learning"
Zhao, Qingyu, Zixuan Liu, Ehsan Adeli, and Kilian M. Pohl. "Longitudinal self-supervised learning." Medical Image Analysis 71 (July 2021): 102051. http://dx.doi.org/10.1016/j.media.2021.102051.
Wang, Fei, and Changshui Zhang. "Robust self-tuning semi-supervised learning." Neurocomputing 70, no. 16-18 (October 2007): 2931–39. http://dx.doi.org/10.1016/j.neucom.2006.11.004.
Hrycej, Tomas. "Supporting supervised learning by self-organization." Neurocomputing 4, no. 1-2 (February 1992): 17–30. http://dx.doi.org/10.1016/0925-2312(92)90040-v.
Shin, Sungho, Jongwon Kim, Yeonguk Yu, Seongju Lee, and Kyoobin Lee. "Self-Supervised Transfer Learning from Natural Images for Sound Classification." Applied Sciences 11, no. 7 (March 29, 2021): 3043. http://dx.doi.org/10.3390/app11073043.
Liu, Yuanyuan, and Qianqian Liu. "Research on Self-Supervised Comparative Learning for Computer Vision." Journal of Electronic Research and Application 5, no. 3 (August 17, 2021): 5–17. http://dx.doi.org/10.26689/jera.v5i3.2320.
Jaiswal, Ashish, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, and Fillia Makedon. "A Survey on Contrastive Self-Supervised Learning." Technologies 9, no. 1 (December 28, 2020): 2. http://dx.doi.org/10.3390/technologies9010002.
ITO, Seiya, Naoshi KANEKO, and Kazuhiko SUMI. "Self-Supervised Learning for Multi-View Stereo." Journal of the Japan Society for Precision Engineering 86, no. 12 (December 5, 2020): 1042–50. http://dx.doi.org/10.2493/jjspe.86.1042.
Tenorio, M. F., and W. T. Lee. "Self-organizing network for optimum supervised learning." IEEE Transactions on Neural Networks 1, no. 1 (March 1990): 100–110. http://dx.doi.org/10.1109/72.80209.
Florence, Peter, Lucas Manuelli, and Russ Tedrake. "Self-Supervised Correspondence in Visuomotor Policy Learning." IEEE Robotics and Automation Letters 5, no. 2 (April 2020): 492–99. http://dx.doi.org/10.1109/lra.2019.2956365.
Liu, Chicheng, Libin Song, Jiwen Zhang, Ken Chen, and Jing Xu. "Self-Supervised Learning for Specified Latent Representation." IEEE Transactions on Fuzzy Systems 28, no. 1 (January 2020): 47–59. http://dx.doi.org/10.1109/tfuzz.2019.2904237.
Dissertations on the topic "Self-supervised learning"
Vančo, Timotej. "Self-supervised učení v aplikacích počítačového vidění" [Self-supervised learning in computer vision applications]. Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442510.
Khan, Umair. "Self-supervised deep learning approaches to speaker recognition." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671496.
Der volle Inhalt der QuelleLos avances recientes en Deep Learning (DL) para el reconocimiento del hablante están mejorado el rendimiento de los sistemas tradicionales basados en i-vectors. En el reconocimiento de locutor basado en i-vectors, la distancia coseno y el análisis discriminante lineal probabilístico (PLDA) son las dos técnicas más usadas de puntuación. La primera no es supervisada, pero la segunda necesita datos etiquetados por el hablante, que no son siempre fácilmente accesibles en la práctica. Esto crea una gran brecha de rendimiento entre estas dos técnicas de puntuación. La pregunta es: ¿cómo llenar esta brecha de rendimiento sin usar etiquetas del hablante en los datos de background? En esta tesis, el problema anterior se ha abordado utilizando técnicas de DL sin utilizar y/o limitar el uso de datos etiquetados. Se han realizado tres propuestas basadas en DL. En la primera, se propone una representación vectorial de voz basada en la máquina de Boltzmann restringida (RBM) para las tareas de agrupación de hablantes y seguimiento de hablantes en programas de televisión. Los experimentos en la base de datos AGORA, muestran que en agrupación de hablantes los vectores RBM suponen una mejora relativa del 12%. Y, por otro lado, en seguimiento del hablante, los vectores RBM,utilizados solo en la etapa de identificación del hablante, muestran una mejora relativa del 11% (coseno) y 7% (PLDA). En la segunda, se utiliza DL para aumentar el poder discriminativo de los i-vectors en la verificación del hablante. Se ha propuesto el uso del autocodificador de varias formas. En primer lugar, se utiliza un autocodificador como preentrenamiento de una red neuronal profunda (DNN) utilizando una gran cantidad de datos de background sin etiquetar, para posteriormente entrenar un clasificador DNN utilizando un conjunto reducido de datos etiquetados. En segundo lugar, se entrena un autocodificador para transformar i-vectors en una nueva representación para aumentar el poder discriminativo de los i-vectors. El entrenamiento se lleva a cabo en base a los i-vectors vecinos más cercanos, que se eligen de forma no supervisada. La evaluación se ha realizado con la base de datos VoxCeleb-1. Los resultados muestran que usando el primer sistema obtenemos una mejora relativa del 21% sobre i-vectors, mientras que usando el segundo sistema, se obtiene una mejora relativa del 42%. Además, si utilizamos los datos de background en la etapa de prueba, se obtiene una mejora relativa del 53%. En la tercera, entrenamos un sistema auto-supervisado de verificación de locutor de principio a fin. Utilizamos impostores junto con los vecinos más cercanos para formar pares cliente/impostor sin supervisión. La arquitectura se basa en un codificador de red neuronal convolucional (CNN) que se entrena como una red siamesa con dos ramas. Además, se entrena otra red con tres ramas utilizando la función de pérdida triplete para extraer embeddings de locutores. Los resultados muestran que tanto el sistema de principio a fin como los embeddings de locutores, a pesar de no estar supervisados, tienen un rendimiento comparable a una referencia supervisada. Cada uno de los enfoques propuestos tienen sus pros y sus contras. El mejor resultado se obtuvo utilizando el autocodificador con el vecino más cercano, con la desventaja de que necesita los i-vectors de background en el test. 
El uso del preentrenamiento del autocodificador para DNN no tiene este problema, pero es un enfoque semi-supervisado, es decir, requiere etiquetas de hablantes solo de una parte pequeña de los datos de background. La tercera propuesta no tienes estas dos limitaciones y funciona de manera razonable. Es un en
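For readers who want a concrete picture of the third approach summarized above (a CNN encoder trained with a triplet loss on client/impostor pairs selected without speaker labels), the following minimal Python sketch illustrates one training step. The encoder architecture, feature shapes, margin, and learning rate are illustrative assumptions, not the implementation from the thesis.

import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Small CNN encoder producing L2-normalized speaker embeddings (assumed architecture)."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, emb_dim)

    def forward(self, x):                       # x: (batch, 1, mel_bins, frames)
        h = self.conv(x).flatten(1)
        return nn.functional.normalize(self.fc(h), dim=1)

encoder = SpeakerEncoder()
triplet = nn.TripletMarginLoss(margin=0.3)      # margin chosen arbitrarily for this sketch
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# anchor utterances, "positives" picked as unsupervised nearest neighbours,
# and "negatives" drawn from assumed impostor utterances (random tensors here)
anchor = torch.randn(8, 1, 40, 200)
positive = torch.randn(8, 1, 40, 200)
negative = torch.randn(8, 1, 40, 200)

loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
optimizer.step()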
Korecki, John Nicholas. "Semi-Supervised Self-Learning on Imbalanced Data Sets." Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1686.
Govindarajan, Hariprasath. "Self-Supervised Representation Learning for Content Based Image Retrieval." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166223.
Zangeneh Kamali, Fereidoon. "Self-supervised learning of camera egomotion using epipolar geometry." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286286.
Der volle Inhalt der QuelleVisuell odometri är en av de vanligast förekommande teknikerna för positionering av autonoma agenter utrustade med kameror. Flera senare arbeten inom detta område har på olika sätt försökt utnyttja kapaciteten hos djupa neurala nätverk för att förbättra prestandan hos lösningar baserade på visuell odometri. Ett av dessa tillvägagångssätt består i att använda en inlärningsbaserad lösning för att härleda kamerans rörelse utifrån en sekvens av bilder. Gemensamt för de flesta senare lösningar är en självövervakande träningsstrategi som minimerar det uppfattade fotometriska fel som uppskattas genom att syntetisera synvinkeln utifrån givna bildsekvenser. Eftersom detta fel är en funktion av den estimerade kamerarörelsen motsvarar minimering av felet att nätverket lär sig uppskatta kamerarörelsen. Denna inlärning kräver dock även information om djupet i bilderna, vilket fås genom att introducera ett nätverk specifikt för estimering av djup. Detta innebär att för uppskattning av kamerans rörelse krävs inlärning av ytterligare en uppsättning parametrar vilka inte används i den slutgiltiga uppskattningen. I detta arbete föreslår vi en ny inlärningsstrategi baserad på epipolär geometri, vilket inte beror på djupskattningar. Empirisk utvärdering av vår metod visar att dess resultat är jämförbara med tidigare metoder som använder explicita djupskattningar för träning.
Sharma, Vivek [author], and R. [academic supervisor] Stiefelhagen. "Self-supervised Face Representation Learning / Vivek Sharma ; Betreuer: R. Stiefelhagen." Karlsruhe: KIT-Bibliothek, 2020. http://d-nb.info/1212512545/34.
Coen, Michael Harlan. "Multimodal dynamics: self-supervised learning in perceptual and motor systems." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34022.
Der volle Inhalt der QuelleThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 178-192).
This thesis presents a self-supervised framework for perceptual and motor learning based upon correlations in different sensory modalities. The brain and cognitive sciences have gathered an enormous body of neurological and phenomenological evidence in the past half century demonstrating the extraordinary degree of interaction between sensory modalities during the course of ordinary perception. We develop a framework for creating artificial perceptual systems that draws on these findings, where the primary architectural motif is the cross-modal transmission of perceptual information to enhance each sensory channel individually. We present self-supervised algorithms for learning perceptual grounding, intersensory influence, and sensorimotor coordination, which derive training signals from internal cross-modal correlations rather than from external supervision. Our goal is to create systems that develop by interacting with the world around them, inspired by development in animals. We demonstrate this framework with: (1) a system that learns the number and structure of vowels in American English by simultaneously watching and listening to someone speak. The system then cross-modally clusters the correlated auditory and visual data.
It has no advance linguistic knowledge and receives no information outside of its sensory channels. This work is the first unsupervised acquisition of phonetic structure of which we are aware, outside of that done by human infants. (2) a system that learns to sing like a zebra finch, following the developmental stages of a juvenile zebra finch. It first learns the song of an adult male and then listens to its own initially nascent attempts at mimicry through an articulatory synthesizer. In acquiring the birdsong to which it was initially exposed, this system demonstrates self-supervised sensorimotor learning. It also demonstrates afferent and efferent equivalence: the system learns motor maps with the same computational framework used for learning sensory maps.
by Michael Harlan Coen.
Ph.D.
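As a loose illustration of the cross-modal idea described in the abstract above (learning from correlations between simultaneously observed sensory channels rather than from external labels), the toy sketch below clusters two synthetic modalities separately and then links their clusters through co-occurrence. It is a simplification for illustration only, not the algorithm developed in the thesis.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
audio = rng.normal(size=(500, 12))      # stand-in for per-frame audio features
video = rng.normal(size=(500, 8))       # stand-in for per-frame lip-shape features

audio_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(audio)
video_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(video)

# co-occurrence counts: how often audio cluster i coincides with video cluster j
co = np.zeros((5, 5), dtype=int)
for a, v in zip(audio_labels, video_labels):
    co[a, v] += 1

# associate each audio cluster with its most frequently co-occurring video cluster,
# i.e. a correspondence derived purely from internal cross-modal correlations
correspondence = co.argmax(axis=1)
print(co)
print(correspondence)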
Nyströmer, Carl. "Musical Instrument Activity Detection using Self-Supervised Learning and Domain Adaptation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280810.
Der volle Inhalt der QuelleI och med de ständigt växande media- och musikkatalogerna krävs verktyg för att söka och navigera i dessa. För mer komplexa sökförfrågningar så behövs det metadata, men att manuellt annotera de enorma mängderna av ny data är omöjligt. I denna uppsats undersöks automatisk annotering utav instrumentsaktivitet inom musik, med ett fokus på bristen av annoterad data för modellerna för instrumentaktivitetsigenkänning. Två metoder för att komma runt bristen på data föreslås och undersöks. Den första metoden bygger på självövervakad inlärning baserad på automatisk annotering och slumpartad mixning av olika instrumentspår. Den andra metoden använder domänadaption genom att träna modeller på samplade MIDI-filer för detektering av instrument i inspelad musik. Metoden med självövervakning gav bättre resultat än baseline och pekar på att djupinlärningsmodeller kan lära sig instrumentigenkänning trots att ljudmixarna saknar musikalisk struktur. Domänadaptionsmodellerna som endast var tränade på samplad MIDI-data presterade sämre än baseline, men att använda MIDI-data tillsammans med data från inspelad musik gav förbättrade resultat. En hybridmodell som kombinerade både självövervakad inlärning och domänadaption genom att använda både samplad MIDI-data och inspelad musik gav de bästa resultaten totalt.
Nett, Ryan. "Dataset and Evaluation of Self-Supervised Learning for Panoramic Depth Estimation." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2234.
Baleia, José Rodrigo Ferreira. "Haptic robot-environment interaction for self-supervised learning in ground mobility." Master's thesis, Faculdade de Ciências e Tecnologia, 2014. http://hdl.handle.net/10362/12475.
Der volle Inhalt der QuelleThis dissertation presents a system for haptic interaction and self-supervised learning mechanisms to ascertain navigation affordances from depth cues. A simple pan-tilt telescopic arm and a structured light sensor, both fitted to the robot’s body frame, provide the required haptic and depth sensory feedback. The system aims at incrementally develop the ability to assess the cost of navigating in natural environments. For this purpose the robot learns a mapping between the appearance of objects, given sensory data provided by the sensor, and their bendability, perceived by the pan-tilt telescopic arm. The object descriptor, representing the object in memory and used for comparisons with other objects, is rich for a robust comparison and simple enough to allow for fast computations. The output of the memory learning mechanism allied with the haptic interaction point evaluation prioritize interaction points to increase the confidence on the interaction and correctly identifying obstacles, reducing the risk of the robot getting stuck or damaged. If the system concludes that the object is traversable, the environment change detection system allows the robot to overcome it. A set of field trials show the ability of the robot to progressively learn which elements of environment are traversable.
Books on the topic "Self-supervised learning"
Munro, Paul. Self-supervised learning of concepts by single units and "weakly local" representations. Pittsburgh, PA: School of Library and Information Science, University of Pittsburgh, 1988.
Munro, Paul. Self-supervised learning of concepts by single units and "weakly local" representations. School of Library and Information Science, University of Pittsburgh, 1988.
Book chapters on the topic "Self-supervised learning"
Nedelkoski, Sasho, Jasmin Bogatinovski, Alexander Acker, Jorge Cardoso, and Odej Kao. "Self-supervised Log Parsing." In Machine Learning and Knowledge Discovery in Databases: Applied Data Science Track, 122–38. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67667-4_8.
Jawed, Shayan, Josif Grabocka, and Lars Schmidt-Thieme. "Self-supervised Learning for Semi-supervised Time Series Classification." In Advances in Knowledge Discovery and Data Mining, 499–511. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47426-3_39.
Jamaludin, Amir, Timor Kadir, and Andrew Zisserman. "Self-supervised Learning for Spinal MRIs." In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, 294–302. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67558-9_34.
Liu, Fengbei, Yu Tian, Filipe R. Cordeiro, Vasileios Belagiannis, Ian Reid, and Gustavo Carneiro. "Self-supervised Mean Teacher for Semi-supervised Chest X-Ray Classification." In Machine Learning in Medical Imaging, 426–36. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87589-3_44.
Si, Chenyang, Xuecheng Nie, Wei Wang, Liang Wang, Tieniu Tan, and Jiashi Feng. "Adversarial Self-supervised Learning for Semi-supervised 3D Action Recognition." In Computer Vision – ECCV 2020, 35–51. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58571-6_3.
Zhang, Ruifei, Sishuo Liu, Yizhou Yu, and Guanbin Li. "Self-supervised Correction Learning for Semi-supervised Biomedical Image Segmentation." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, 134–44. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87196-3_13.
Valvano, Gabriele, Andrea Leo, and Sotirios A. Tsaftaris. "Self-supervised Multi-scale Consistency for Weakly Supervised Segmentation Learning." In Domain Adaptation and Representation Transfer, and Affordable Healthcare and AI for Resource Diverse Global Health, 14–24. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87722-4_2.
Feng, Ruibin, Zongwei Zhou, Michael B. Gotway, and Jianming Liang. "Parts2Whole: Self-supervised Contrastive Learning via Reconstruction." In Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning, 85–95. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60548-3_9.
Cervera, Enrique, and Angel P. Pobil. "Multiple self-organizing maps for supervised learning." In Lecture Notes in Computer Science, 345–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-59497-3_195.
Karlos, Stamatis, Nikos Fazakis, Sotiris Kotsiantis, and Kyriakos Sgarbas. "Self-Train LogitBoost for Semi-supervised Learning." In Engineering Applications of Neural Networks, 139–48. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23983-5_14.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Self-supervised learninig"
An, Yuexuan, Hui Xue, Xingyu Zhao, and Lu Zhang. "Conditional Self-Supervised Learning for Few-Shot Classification." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/295.
Beyer, Lucas, Xiaohua Zhai, Avital Oliver, and Alexander Kolesnikov. "S4L: Self-Supervised Semi-Supervised Learning." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00156.
Basaj, Dominika, Witold Oleszkiewicz, Igor Sieradzki, Michał Górszczak, Barbara Rychalska, Tomasz Trzcinski, and Bartosz Zieliński. "Explaining Self-Supervised Image Representations with Visual Probing." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/82.
Song, Jinwoo, and Young B. Moon. "Infill Defective Detection System Augmented by Semi-Supervised Learning." In ASME 2020 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/imece2020-23249.
Wu, Jiawei, Xin Wang, and William Yang Wang. "Self-Supervised Dialogue Learning." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1375.
Li, Pengyong, Jun Wang, Ziliang Li, Yixuan Qiao, Xianggen Liu, Fei Ma, Peng Gao, Sen Song, and Guotong Xie. "Pairwise Half-graph Discrimination: A Simple Graph-level Self-supervised Strategy for Pre-training Graph Neural Networks." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/371.
Hu, Yazhe, and Tomonari Furukawa. "A Self-Supervised Learning Technique for Road Defects Detection Based on Monocular Three-Dimensional Reconstruction." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-98135.
Shao, Shuai, Lei Xing, Wei Yu, Rui Xu, Yan-Jiang Wang, and Bao-Di Liu. "SSDL: Self-Supervised Dictionary Learning." In 2021 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2021. http://dx.doi.org/10.1109/icme51207.2021.9428336.
Kamimura, Ryotaro. "Self-enhancement learning: Self-supervised and target-creating learning." In 2009 International Joint Conference on Neural Networks (IJCNN 2009 - Atlanta). IEEE, 2009. http://dx.doi.org/10.1109/ijcnn.2009.5178677.
Cho, Hyunsoo, Jinseok Seol, and Sang-goo Lee. "Masked Contrastive Learning for Anomaly Detection." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/198.