Academic literature on the topic 'Self-supervised learning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Self-supervised learning.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Self-supervised learning"
Zhao, Qingyu, Zixuan Liu, Ehsan Adeli, and Kilian M. Pohl. "Longitudinal self-supervised learning." Medical Image Analysis 71 (July 2021): 102051. http://dx.doi.org/10.1016/j.media.2021.102051.
Wang, Fei, and Changshui Zhang. "Robust self-tuning semi-supervised learning." Neurocomputing 70, no. 16-18 (October 2007): 2931–39. http://dx.doi.org/10.1016/j.neucom.2006.11.004.
Hrycej, Tomas. "Supporting supervised learning by self-organization." Neurocomputing 4, no. 1-2 (February 1992): 17–30. http://dx.doi.org/10.1016/0925-2312(92)90040-v.
Shin, Sungho, Jongwon Kim, Yeonguk Yu, Seongju Lee, and Kyoobin Lee. "Self-Supervised Transfer Learning from Natural Images for Sound Classification." Applied Sciences 11, no. 7 (March 29, 2021): 3043. http://dx.doi.org/10.3390/app11073043.
Liu, Yuanyuan, and Qianqian Liu. "Research on Self-Supervised Comparative Learning for Computer Vision." Journal of Electronic Research and Application 5, no. 3 (August 17, 2021): 5–17. http://dx.doi.org/10.26689/jera.v5i3.2320.
Jaiswal, Ashish, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, and Fillia Makedon. "A Survey on Contrastive Self-Supervised Learning." Technologies 9, no. 1 (December 28, 2020): 2. http://dx.doi.org/10.3390/technologies9010002.
Ito, Seiya, Naoshi Kaneko, and Kazuhiko Sumi. "Self-Supervised Learning for Multi-View Stereo." Journal of the Japan Society for Precision Engineering 86, no. 12 (December 5, 2020): 1042–50. http://dx.doi.org/10.2493/jjspe.86.1042.
Tenorio, M. F., and W. T. Lee. "Self-organizing network for optimum supervised learning." IEEE Transactions on Neural Networks 1, no. 1 (March 1990): 100–110. http://dx.doi.org/10.1109/72.80209.
Florence, Peter, Lucas Manuelli, and Russ Tedrake. "Self-Supervised Correspondence in Visuomotor Policy Learning." IEEE Robotics and Automation Letters 5, no. 2 (April 2020): 492–99. http://dx.doi.org/10.1109/lra.2019.2956365.
Liu, Chicheng, Libin Song, Jiwen Zhang, Ken Chen, and Jing Xu. "Self-Supervised Learning for Specified Latent Representation." IEEE Transactions on Fuzzy Systems 28, no. 1 (January 2020): 47–59. http://dx.doi.org/10.1109/tfuzz.2019.2904237.
Dissertations / Theses on the topic "Self-supervised learning"
Vančo, Timotej. "Self-supervised učení v aplikacích počítačového vidění" [Self-supervised learning in computer vision applications]. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442510.
Khan, Umair. "Self-supervised deep learning approaches to speaker recognition." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671496.
Full textLos avances recientes en Deep Learning (DL) para el reconocimiento del hablante están mejorado el rendimiento de los sistemas tradicionales basados en i-vectors. En el reconocimiento de locutor basado en i-vectors, la distancia coseno y el análisis discriminante lineal probabilístico (PLDA) son las dos técnicas más usadas de puntuación. La primera no es supervisada, pero la segunda necesita datos etiquetados por el hablante, que no son siempre fácilmente accesibles en la práctica. Esto crea una gran brecha de rendimiento entre estas dos técnicas de puntuación. La pregunta es: ¿cómo llenar esta brecha de rendimiento sin usar etiquetas del hablante en los datos de background? En esta tesis, el problema anterior se ha abordado utilizando técnicas de DL sin utilizar y/o limitar el uso de datos etiquetados. Se han realizado tres propuestas basadas en DL. En la primera, se propone una representación vectorial de voz basada en la máquina de Boltzmann restringida (RBM) para las tareas de agrupación de hablantes y seguimiento de hablantes en programas de televisión. Los experimentos en la base de datos AGORA, muestran que en agrupación de hablantes los vectores RBM suponen una mejora relativa del 12%. Y, por otro lado, en seguimiento del hablante, los vectores RBM,utilizados solo en la etapa de identificación del hablante, muestran una mejora relativa del 11% (coseno) y 7% (PLDA). En la segunda, se utiliza DL para aumentar el poder discriminativo de los i-vectors en la verificación del hablante. Se ha propuesto el uso del autocodificador de varias formas. En primer lugar, se utiliza un autocodificador como preentrenamiento de una red neuronal profunda (DNN) utilizando una gran cantidad de datos de background sin etiquetar, para posteriormente entrenar un clasificador DNN utilizando un conjunto reducido de datos etiquetados. En segundo lugar, se entrena un autocodificador para transformar i-vectors en una nueva representación para aumentar el poder discriminativo de los i-vectors. El entrenamiento se lleva a cabo en base a los i-vectors vecinos más cercanos, que se eligen de forma no supervisada. La evaluación se ha realizado con la base de datos VoxCeleb-1. Los resultados muestran que usando el primer sistema obtenemos una mejora relativa del 21% sobre i-vectors, mientras que usando el segundo sistema, se obtiene una mejora relativa del 42%. Además, si utilizamos los datos de background en la etapa de prueba, se obtiene una mejora relativa del 53%. En la tercera, entrenamos un sistema auto-supervisado de verificación de locutor de principio a fin. Utilizamos impostores junto con los vecinos más cercanos para formar pares cliente/impostor sin supervisión. La arquitectura se basa en un codificador de red neuronal convolucional (CNN) que se entrena como una red siamesa con dos ramas. Además, se entrena otra red con tres ramas utilizando la función de pérdida triplete para extraer embeddings de locutores. Los resultados muestran que tanto el sistema de principio a fin como los embeddings de locutores, a pesar de no estar supervisados, tienen un rendimiento comparable a una referencia supervisada. Cada uno de los enfoques propuestos tienen sus pros y sus contras. El mejor resultado se obtuvo utilizando el autocodificador con el vecino más cercano, con la desventaja de que necesita los i-vectors de background en el test. 
El uso del preentrenamiento del autocodificador para DNN no tiene este problema, pero es un enfoque semi-supervisado, es decir, requiere etiquetas de hablantes solo de una parte pequeña de los datos de background. La tercera propuesta no tienes estas dos limitaciones y funciona de manera razonable. Es un en
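The abstract above describes an end-to-end system in which client/impostor pairs are formed without speaker labels and embeddings are trained with a triplet loss. As a rough illustration of that idea only (not the thesis's code; the input shapes, hyper-parameters, and the nearest-neighbor pairing rule are assumptions), a sketch in PyTorch might look like this:

```python
# Minimal sketch (assumptions, not the thesis implementation): a CNN encoder
# trained with a triplet loss, where positives/negatives are chosen without
# speaker labels via nearest-/farthest-neighbour search in embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    def __init__(self, n_mels=40, emb_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # temporal pooling
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, x):                         # x: (batch, n_mels, frames)
        h = self.conv(x).squeeze(-1)
        return F.normalize(self.proj(h), dim=-1)  # unit-length embeddings

def unsupervised_triplets(emb):
    """Anchor i, positive = nearest neighbour, negative = farthest sample."""
    d = torch.cdist(emb, emb)                     # pairwise distances
    d.fill_diagonal_(float("inf"))
    pos = d.argmin(dim=1)                         # assumed same speaker
    neg = d.masked_fill(torch.isinf(d), -1).argmax(dim=1)  # assumed impostor
    return pos, neg

encoder = SpeakerEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.3)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

batch = torch.randn(32, 40, 200)                  # stand-in for log-mel features
emb = encoder(batch)
pos, neg = unsupervised_triplets(emb.detach())    # pair selection is not backpropagated through
opt.zero_grad()
loss = loss_fn(emb, emb[pos], emb[neg])
loss.backward()
opt.step()
```

Treating each utterance's nearest neighbor as a same-speaker positive and a distant sample as an impostor mirrors, at a very coarse level, the unsupervised pairing strategy the abstract outlines.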
Korecki, John Nicholas. "Semi-Supervised Self-Learning on Imbalanced Data Sets." Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1686.
Govindarajan, Hariprasath. "Self-Supervised Representation Learning for Content Based Image Retrieval." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166223.
Zangeneh, Kamali Fereidoon. "Self-supervised learning of camera egomotion using epipolar geometry." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286286.
Full textVisuell odometri är en av de vanligast förekommande teknikerna för positionering av autonoma agenter utrustade med kameror. Flera senare arbeten inom detta område har på olika sätt försökt utnyttja kapaciteten hos djupa neurala nätverk för att förbättra prestandan hos lösningar baserade på visuell odometri. Ett av dessa tillvägagångssätt består i att använda en inlärningsbaserad lösning för att härleda kamerans rörelse utifrån en sekvens av bilder. Gemensamt för de flesta senare lösningar är en självövervakande träningsstrategi som minimerar det uppfattade fotometriska fel som uppskattas genom att syntetisera synvinkeln utifrån givna bildsekvenser. Eftersom detta fel är en funktion av den estimerade kamerarörelsen motsvarar minimering av felet att nätverket lär sig uppskatta kamerarörelsen. Denna inlärning kräver dock även information om djupet i bilderna, vilket fås genom att introducera ett nätverk specifikt för estimering av djup. Detta innebär att för uppskattning av kamerans rörelse krävs inlärning av ytterligare en uppsättning parametrar vilka inte används i den slutgiltiga uppskattningen. I detta arbete föreslår vi en ny inlärningsstrategi baserad på epipolär geometri, vilket inte beror på djupskattningar. Empirisk utvärdering av vår metod visar att dess resultat är jämförbara med tidigare metoder som använder explicita djupskattningar för träning.
Sharma, Vivek [Verfasser], and R. [Akademischer Betreuer] Stiefelhagen. "Self-supervised Face Representation Learning / Vivek Sharma ; Betreuer: R. Stiefelhagen." Karlsruhe : KIT-Bibliothek, 2020. http://d-nb.info/1212512545/34.
Coen, Michael Harlan. "Multimodal dynamics : self-supervised learning in perceptual and motor systems." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34022.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 178-192).
This thesis presents a self-supervised framework for perceptual and motor learning based upon correlations in different sensory modalities. The brain and cognitive sciences have gathered an enormous body of neurological and phenomenological evidence in the past half century demonstrating the extraordinary degree of interaction between sensory modalities during the course of ordinary perception. We develop a framework for creating artificial perceptual systems that draws on these findings, where the primary architectural motif is the cross-modal transmission of perceptual information to enhance each sensory channel individually. We present self-supervised algorithms for learning perceptual grounding, intersensory influence, and sensorimotor coordination, which derive training signals from internal cross-modal correlations rather than from external supervision. Our goal is to create systems that develop by interacting with the world around them, inspired by development in animals. We demonstrate this framework with: (1) a system that learns the number and structure of vowels in American English by simultaneously watching and listening to someone speak. The system then cross-modally clusters the correlated auditory and visual data.
It has no advance linguistic knowledge and receives no information outside of its sensory channels. This work is the first unsupervised acquisition of phonetic structure of which we are aware, outside of that done by human infants. (2) a system that learns to sing like a zebra finch, following the developmental stages of a juvenile zebra finch. It first learns the song of an adult male and then listens to its own initially nascent attempts at mimicry through an articulatory synthesizer. In acquiring the birdsong to which it was initially exposed, this system demonstrates self-supervised sensorimotor learning. It also demonstrates afferent and efferent equivalence - the system learns motor maps with the same computational framework used for learning sensory maps.
by Michael Harlan Coen.
Ph.D.
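The Coen thesis learns perceptual structure from cross-modal correlations alone. The sketch below is purely illustrative and does not reproduce the thesis's algorithm: as a stand-in for the idea, paired audio and visual features are projected into a shared correlated space with CCA and clustered there, so no external labels are involved; the number of clusters is fixed here, whereas the thesis learns it from the data.

```python
# Illustrative stand-in (not Coen's algorithm): cluster structure driven by
# cross-modal correlation between paired audio and visual feature vectors.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 300
latent = rng.integers(0, 3, size=n)                  # hidden "vowel" identity
audio = rng.normal(size=(n, 12)) + latent[:, None]   # e.g. formant-like features
visual = rng.normal(size=(n, 8)) + latent[:, None]   # e.g. lip-shape features

cca = CCA(n_components=2)
audio_c, visual_c = cca.fit_transform(audio, visual)
shared = np.hstack([audio_c, visual_c])              # correlated joint space

# Number of clusters is assumed known here for simplicity.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(shared)
print(np.bincount(clusters))
```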
Nyströmer, Carl. "Musical Instrument Activity Detection using Self-Supervised Learning and Domain Adaptation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280810.
Full textI och med de ständigt växande media- och musikkatalogerna krävs verktyg för att söka och navigera i dessa. För mer komplexa sökförfrågningar så behövs det metadata, men att manuellt annotera de enorma mängderna av ny data är omöjligt. I denna uppsats undersöks automatisk annotering utav instrumentsaktivitet inom musik, med ett fokus på bristen av annoterad data för modellerna för instrumentaktivitetsigenkänning. Två metoder för att komma runt bristen på data föreslås och undersöks. Den första metoden bygger på självövervakad inlärning baserad på automatisk annotering och slumpartad mixning av olika instrumentspår. Den andra metoden använder domänadaption genom att träna modeller på samplade MIDI-filer för detektering av instrument i inspelad musik. Metoden med självövervakning gav bättre resultat än baseline och pekar på att djupinlärningsmodeller kan lära sig instrumentigenkänning trots att ljudmixarna saknar musikalisk struktur. Domänadaptionsmodellerna som endast var tränade på samplad MIDI-data presterade sämre än baseline, men att använda MIDI-data tillsammans med data från inspelad musik gav förbättrade resultat. En hybridmodell som kombinerade både självövervakad inlärning och domänadaption genom att använda både samplad MIDI-data och inspelad musik gav de bästa resultaten totalt.
Nett, Ryan. "Dataset and Evaluation of Self-Supervised Learning for Panoramic Depth Estimation." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2234.
Baleia, José Rodrigo Ferreira. "Haptic robot-environment interaction for self-supervised learning in ground mobility." Master's thesis, Faculdade de Ciências e Tecnologia, 2014. http://hdl.handle.net/10362/12475.
This dissertation presents a system for haptic interaction and self-supervised learning mechanisms to ascertain navigation affordances from depth cues. A simple pan-tilt telescopic arm and a structured light sensor, both fitted to the robot's body frame, provide the required haptic and depth sensory feedback. The system aims to incrementally develop the ability to assess the cost of navigating in natural environments. For this purpose the robot learns a mapping between the appearance of objects, given sensory data provided by the sensor, and their bendability, perceived by the pan-tilt telescopic arm. The object descriptor, representing the object in memory and used for comparisons with other objects, is rich enough for robust comparison yet simple enough to allow fast computation. The output of the memory learning mechanism, combined with the evaluation of haptic interaction points, prioritizes interaction points to increase confidence in the interaction and to correctly identify obstacles, reducing the risk of the robot getting stuck or damaged. If the system concludes that the object is traversable, the environment change detection system allows the robot to overcome it. A set of field trials shows the ability of the robot to progressively learn which elements of the environment are traversable.
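The abstract describes a loop in which the robot's own haptic probe supplies the training labels: depth-based appearance descriptors are associated with the bendability measured by the telescopic arm, and new objects are probed only when the memory is not confident. A schematic sketch of such a loop, with hypothetical names and a simple nearest-neighbour memory standing in for the dissertation's actual descriptor and learning mechanism:

```python
# Hypothetical sketch of self-supervised traversability learning: the label
# for each stored training pair comes from the robot's haptic probe, not from
# a human annotator.
import numpy as np

class TraversabilityMemory:
    def __init__(self, max_neighbour_dist=0.5):
        self.descriptors, self.bendability = [], []
        self.max_neighbour_dist = max_neighbour_dist

    def predict(self, descriptor):
        """Return (estimate, confident?) from the nearest stored object."""
        if not self.descriptors:
            return None, False
        d = np.linalg.norm(np.array(self.descriptors) - descriptor, axis=1)
        i = int(d.argmin())
        return self.bendability[i], bool(d[i] < self.max_neighbour_dist)

    def add(self, descriptor, measured_bendability):
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.bendability.append(float(measured_bendability))

def probe_with_arm(descriptor):
    """Placeholder for the pan-tilt arm interaction that measures bendability."""
    return np.random.default_rng().uniform(0.0, 1.0)

memory = TraversabilityMemory()
new_object = np.array([0.2, 0.7, 0.1])            # stand-in depth-based descriptor
estimate, confident = memory.predict(new_object)
if not confident:                                  # uncertain: interact and learn
    measured = probe_with_arm(new_object)
    memory.add(new_object, measured)
    estimate = measured
print("bendability estimate:", estimate)
```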
Books on the topic "Self-supervised learning"
Munro, Paul. Self-supervised learning of concepts by single units and "weakly local" representations. Pittsburgh, PA: School of Library and Information Science, University of Pittsburgh, 1988.
Munro, Paul. Self-supervised learning of concepts by single units and "weakly local" representations. School of Library and Information Science, University of Pittsburgh, 1988.
Book chapters on the topic "Self-supervised learning"
Nedelkoski, Sasho, Jasmin Bogatinovski, Alexander Acker, Jorge Cardoso, and Odej Kao. "Self-supervised Log Parsing." In Machine Learning and Knowledge Discovery in Databases: Applied Data Science Track, 122–38. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67667-4_8.
Jawed, Shayan, Josif Grabocka, and Lars Schmidt-Thieme. "Self-supervised Learning for Semi-supervised Time Series Classification." In Advances in Knowledge Discovery and Data Mining, 499–511. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47426-3_39.
Jamaludin, Amir, Timor Kadir, and Andrew Zisserman. "Self-supervised Learning for Spinal MRIs." In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, 294–302. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67558-9_34.
Liu, Fengbei, Yu Tian, Filipe R. Cordeiro, Vasileios Belagiannis, Ian Reid, and Gustavo Carneiro. "Self-supervised Mean Teacher for Semi-supervised Chest X-Ray Classification." In Machine Learning in Medical Imaging, 426–36. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87589-3_44.
Si, Chenyang, Xuecheng Nie, Wei Wang, Liang Wang, Tieniu Tan, and Jiashi Feng. "Adversarial Self-supervised Learning for Semi-supervised 3D Action Recognition." In Computer Vision – ECCV 2020, 35–51. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58571-6_3.
Zhang, Ruifei, Sishuo Liu, Yizhou Yu, and Guanbin Li. "Self-supervised Correction Learning for Semi-supervised Biomedical Image Segmentation." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, 134–44. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87196-3_13.
Valvano, Gabriele, Andrea Leo, and Sotirios A. Tsaftaris. "Self-supervised Multi-scale Consistency for Weakly Supervised Segmentation Learning." In Domain Adaptation and Representation Transfer, and Affordable Healthcare and AI for Resource Diverse Global Health, 14–24. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87722-4_2.
Feng, Ruibin, Zongwei Zhou, Michael B. Gotway, and Jianming Liang. "Parts2Whole: Self-supervised Contrastive Learning via Reconstruction." In Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning, 85–95. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60548-3_9.
Cervera, Enrique, and Angel P. Pobil. "Multiple self-organizing maps for supervised learning." In Lecture Notes in Computer Science, 345–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-59497-3_195.
Karlos, Stamatis, Nikos Fazakis, Sotiris Kotsiantis, and Kyriakos Sgarbas. "Self-Train LogitBoost for Semi-supervised Learning." In Engineering Applications of Neural Networks, 139–48. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23983-5_14.
Conference papers on the topic "Self-supervised learning"
An, Yuexuan, Hui Xue, Xingyu Zhao, and Lu Zhang. "Conditional Self-Supervised Learning for Few-Shot Classification." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/295.
Beyer, Lucas, Xiaohua Zhai, Avital Oliver, and Alexander Kolesnikov. "S4L: Self-Supervised Semi-Supervised Learning." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00156.
Basaj, Dominika, Witold Oleszkiewicz, Igor Sieradzki, Michał Górszczak, Barbara Rychalska, Tomasz Trzcinski, and Bartosz Zieliński. "Explaining Self-Supervised Image Representations with Visual Probing." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/82.
Song, Jinwoo, and Young B. Moon. "Infill Defective Detection System Augmented by Semi-Supervised Learning." In ASME 2020 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/imece2020-23249.
Wu, Jiawei, Xin Wang, and William Yang Wang. "Self-Supervised Dialogue Learning." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1375.
Li, Pengyong, Jun Wang, Ziliang Li, Yixuan Qiao, Xianggen Liu, Fei Ma, Peng Gao, Sen Song, and Guotong Xie. "Pairwise Half-graph Discrimination: A Simple Graph-level Self-supervised Strategy for Pre-training Graph Neural Networks." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/371.
Hu, Yazhe, and Tomonari Furukawa. "A Self-Supervised Learning Technique for Road Defects Detection Based on Monocular Three-Dimensional Reconstruction." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-98135.
Shao, Shuai, Lei Xing, Wei Yu, Rui Xu, Yan-Jiang Wang, and Bao-Di Liu. "SSDL: Self-Supervised Dictionary Learning." In 2021 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2021. http://dx.doi.org/10.1109/icme51207.2021.9428336.
Kamimura, Ryotaro. "Self-enhancement learning: Self-supervised and target-creating learning." In 2009 International Joint Conference on Neural Networks (IJCNN 2009 - Atlanta). IEEE, 2009. http://dx.doi.org/10.1109/ijcnn.2009.5178677.
Cho, Hyunsoo, Jinseok Seol, and Sang-goo Lee. "Masked Contrastive Learning for Anomaly Detection." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/198.