Scientific literature on the topic "Self-Supervised models"
Create a correct reference in APA, MLA, Chicago, Harvard, and various other citation styles
Consult the topical lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Self-Supervised models."
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Self-Supervised models"
Anton, Jonah, Liam Castelli, Mun Fai Chan, Mathilde Outters, Wan Hee Tang, Venus Cheung, Pancham Shukla, Rahee Walambe, and Ketan Kotecha. "How Well Do Self-Supervised Models Transfer to Medical Imaging?" Journal of Imaging 8, no. 12 (December 1, 2022): 320. http://dx.doi.org/10.3390/jimaging8120320.
Gatopoulos, Ioannis, and Jakub M. Tomczak. "Self-Supervised Variational Auto-Encoders." Entropy 23, no. 6 (June 14, 2021): 747. http://dx.doi.org/10.3390/e23060747.
Zhang, Ronghua, Yuanyuan Wang, Fangyuan Liu, Changzheng Liu, Yaping Song, and Baohua Yu. "S2NMF: Information Self-Enhancement Self-Supervised Nonnegative Matrix Factorization for Recommendation." Wireless Communications and Mobile Computing 2022 (August 30, 2022): 1–10. http://dx.doi.org/10.1155/2022/4748858.
Dang, Thanh-Vu, JinYoung Kim, Gwang-Hyun Yu, Ji Yong Kim, Young Hwan Park, and ChilWoo Lee. "Korean Text to Gloss: Self-Supervised Learning Approach." Korean Institute of Smart Media 12, no. 1 (February 28, 2023): 32–46. http://dx.doi.org/10.30693/smj.2023.12.1.32.
Risojević, V., and V. Stojnić. "Do We Still Need ImageNet Pre-Training in Remote Sensing Scene Classification?" International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 31, 2022): 1399–406. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-1399-2022.
Imran, Abdullah-Al-Zubaer, Chao Huang, Hui Tang, Wei Fan, Yuan Xiao, Dingjun Hao, Zhen Qian, and Demetri Terzopoulos. "Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13815–16. http://dx.doi.org/10.1609/aaai.v34i10.7179.
Zhou, Meng, Zechen Li, and Pengtao Xie. "Self-Supervised Regularization for Text Classification." Transactions of the Association for Computational Linguistics 9 (2021): 641–56. http://dx.doi.org/10.1162/tacl_a_00389.
Gong, Yuan, Cheng-I. Lai, Yu-An Chung, and James Glass. "SSAST: Self-Supervised Audio Spectrogram Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10699–709. http://dx.doi.org/10.1609/aaai.v36i10.21315.
Chen, Xuehao, Jin Zhou, Yuehui Chen, Shiyuan Han, Yingxu Wang, Tao Du, Cheng Yang, and Bowen Liu. "Self-Supervised Clustering Models Based on BYOL Network Structure." Electronics 12, no. 23 (November 21, 2023): 4723. http://dx.doi.org/10.3390/electronics12234723.
Luo, Dezhao, Chang Liu, Yu Zhou, Dongbao Yang, Can Ma, Qixiang Ye, and Weiping Wang. "Video Cloze Procedure for Self-Supervised Spatio-Temporal Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11701–8. http://dx.doi.org/10.1609/aaai.v34i07.6840.
Theses on the topic "Self-Supervised models"
Rossi, Alex. "Self-supervised information retrieval: a novel approach based on Deep Metric Learning and Neural Language Models." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Douzon, Thibault. "Language models for document understanding." Electronic thesis or dissertation, INSA Lyon, 2023. http://www.theses.fr/2023ISAL0075.
Every day, countless documents are received and processed by companies worldwide. In an effort to reduce the cost of processing each document, the largest companies have resorted to document automation technologies. In an ideal world, a document can be automatically processed without any human intervention: its content is read, and information is extracted and forwarded to the relevant service. The state of the art has evolved quickly in recent decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architecture for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training are able to perform document understanding tasks with high performance. We show that, when used as token classifiers for information extraction, transformers learn the task far more efficiently than recurrent networks: they need only a small proportion of the training data to come close to maximum performance. This highlights the importance of self-supervised pre-training for subsequent fine-tuning. In the following part, we design specialized pre-training tasks to better prepare the model for specific data distributions such as business documents. By acknowledging the specificities of business documents, such as their table structure and their over-representation of numeric figures, we are able to target specific skills useful for the model in its future tasks. We show that these new tasks improve the model's downstream performance, even with small models. Using this pre-training approach, we are able to reach the performance of significantly bigger models without any additional cost during fine-tuning or inference. Finally, in the last part, we address one drawback of the transformer architecture: its computational cost on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, because of how they approximate the attention computation, efficient models suffer from a small but significant performance drop on short sequences compared to classical architectures. This motivates the use of different models depending on the input length and enables concatenating multimodal inputs into a single sequence.
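As a concrete illustration of the token-classification setup described in this abstract, the sketch below fine-tunes a generic pre-trained transformer as a token classifier with the Hugging Face transformers library. It is a minimal, hypothetical example: the checkpoint name, the BIO label set, and the toy input are assumptions, not the models or data used in the thesis.

```python
# Hypothetical sketch: a pre-trained transformer used as a token classifier
# for document information extraction. Checkpoint and labels are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-INVOICE_NO", "I-INVOICE_NO", "B-TOTAL", "I-TOTAL"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

# One toy fine-tuning step: every token receives a label and the model is
# trained with cross-entropy over the per-token label predictions.
enc = tokenizer("Invoice 1042, total 99.50 EUR", return_tensors="pt")
token_labels = torch.zeros_like(enc["input_ids"])  # all "O" in this sketch
out = model(**enc, labels=token_labels)
out.loss.backward()  # gradients flow into the pre-trained weights
print(f"token-classification loss: {out.loss.item():.3f}")
```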
Lin, Lyu. "Transformer-based Model for Molecular Property Prediction with Self-Supervised Transfer Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284682.
Molecular property prediction has a wide range of applications in the chemical industry. Powerful methods for predicting molecular properties can facilitate scientific experiments and production processes. The approach in this work is to use transfer learning to predict molecular properties. The project is divided into two parts. The first part focuses on developing and pre-training a model. The model consists of Transformer layers with attention mechanisms and is pre-trained by recovering masked edges in molecular graphs from large amounts of unlabeled data. Afterwards, the performance of the pre-trained model is evaluated on a range of molecular property prediction tasks, which confirms the benefit of transfer learning. The results show that, after self-supervised pre-training, the model generalizes very well: it can be fine-tuned at little time cost and performs well on specialized tasks. The experiments also demonstrate the effectiveness of transfer learning: the pre-trained model not only shortens task-specific training but also achieves better performance and avoids overfitting caused by insufficient data in molecular property prediction tasks.
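The masked-edge pre-training objective mentioned in this abstract can be illustrated with a small PyTorch toy: a tiny Transformer encoder embeds atom features, and pairwise scores between atom embeddings are trained to recover bond indicators on a random subset of atom pairs. The encoder size and the random "molecule" below are fabricated assumptions, not the thesis implementation.

```python
# Toy sketch of masked-edge self-supervised pre-training on a molecular
# graph. A real implementation would loop over a large unlabeled corpus.
import torch
import torch.nn as nn

n_atoms, d = 8, 16
atom_feats = torch.randn(1, n_atoms, d)                # placeholder atom features
bonds = (torch.rand(n_atoms, n_atoms) > 0.7).float()   # placeholder bond matrix
bonds = torch.triu(bonds, 1)
bonds = bonds + bonds.T                                # symmetric, no self-bonds

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2,
)
optim = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Sample ~15% of atom pairs as "masked" edges and train the encoder so that
# pairwise scores of its atom embeddings recover the hidden bond indicators.
mask = torch.rand(n_atoms, n_atoms) < 0.15
h = encoder(atom_feats)[0]                             # (n_atoms, d) embeddings
edge_logits = h @ h.T                                  # pairwise edge scores
loss = nn.functional.binary_cross_entropy_with_logits(
    edge_logits[mask], bonds[mask]
)
loss.backward()
optim.step()
print(f"masked-edge reconstruction loss: {loss.item():.3f}")
```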
Ter-Hovhannisyan, Vardges. "Unsupervised and semi-supervised training methods for eukaryotic gene prediction." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26645.
Committee Chair: Mark Borodovsky; Committee Member: Jung H. Choi; Committee Member: King Jordan; Committee Member: Leonid Bunimovich; Committee Member: Yury Chernoff. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés." Electronic thesis or dissertation, Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
In this thesis, spoken language understanding (SLU) is studied in the context of goal-oriented telephone dialogues (hotel booking, for example). Historically, SLU was performed through a cascade of systems: a first system would transcribe the speech into words, and a natural language understanding system would link those words to a semantic annotation. The development of deep neural methods has led to the emergence of end-to-end architectures, where the understanding task is performed by a single system applied directly to the speech signal to extract the semantic annotation. Recently, so-called self-supervised learning (SSL) pre-trained models have brought new advances in natural language processing (NLP). Learned in a generic way on very large datasets, they can then be adapted for other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models. However, neither architecture, pipeline or end-to-end, is perfect. In this thesis, we study both architectures and propose hybrid versions that attempt to combine the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluate different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture.
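To make the pipeline-versus-end-to-end distinction concrete, the sketch below wires up a cascade with off-the-shelf components from the transformers library. The checkpoint names are illustrative assumptions, and the text classifier is a generic stand-in for a real semantic annotator; this does not reproduce the thesis systems.

```python
# Hypothetical cascade (pipeline) SLU sketch: stage 1 transcribes speech into
# words, stage 2 maps the words to labels. An end-to-end system would replace
# both stages with a single model applied directly to the audio signal.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")  # placeholder ASR model
nlu = pipeline("text-classification")                # generic NLU stand-in

transcript = asr("booking_request.wav")["text"]      # placeholder audio file
semantics = nlu(transcript)
print(transcript, semantics)
```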
Cavallucci, Martina. "Speech Recognition per l'italiano: Sviluppo e Sperimentazione di Soluzioni Neurali con Language Model." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.
Ammari, Ahmad N. "Transforming user data into user value by novel mining techniques for extraction of web content, structure and usage patterns: the development and evaluation of new Web mining methods that enhance information retrieval and improve the understanding of users' Web behavior in websites and social blogs." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5269.
Martínez Brito, Izacar Jesús. "Quantitative structure fate relationships for multimedia environmental analysis." Doctoral thesis, Universitat Rovira i Virgili, 2010. http://hdl.handle.net/10803/8590.
The physicochemical properties of a broad spectrum of chemical pollutants are unknown. This thesis analyzes the possibility of evaluating the environmental distribution of compounds using supervised learning algorithms fed with molecular descriptors, instead of multimedia environmental models fed with properties estimated by QSARs. Dimensionless mass fractions, in logarithmic units, of 468 compounds were compared between: a) SimpleBox 3, a level III model, propagating random property values within the statistical distributions of recommended QSARs; and b) support vector regressions (SVRs) acting as quantitative structure-fate relationships (QSFRs), relating mass fractions to molecular weights and constituent counts (atoms, bonds, functional groups, and rings) for the training compounds. The best predictions were obtained for test and validation compounds correctly located within the applicability domain of the QSFRs, as evidenced by low MAE values and high q2 values (in air, MAE≤0.54 and q2≥0.92; in water, MAE≤0.27 and q2≥0.92).
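The QSFR idea in this abstract, a support vector regression mapping simple molecular descriptors to log-scale environmental mass fractions, can be sketched with scikit-learn. The descriptors and targets below are random placeholders standing in for the 468-compound dataset, so this is an assumption-laden illustration rather than the thesis pipeline.

```python
# Hypothetical QSFR sketch: support vector regression (SVR) relating
# molecular descriptors (molecular weight, atom/bond/group/ring counts)
# to a log mass fraction in one environmental compartment.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(468, 5))   # columns: MW, atoms, bonds, groups, rings
y = rng.normal(size=468)         # log10 mass fraction in air (placeholder)

qsfr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
qsfr.fit(X[:350], y[:350])       # fit on the "training compounds"

pred = qsfr.predict(X[350:])     # predict for held-out "test compounds"
mae = np.mean(np.abs(pred - y[350:]))
print(f"MAE on held-out compounds: {mae:.2f}")
```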
"Robots that Anticipate Pain: Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models." Master's thesis in Computer Science, 2017. http://hdl.handle.net/2286/R.I.44032.
Books on the topic "Self-Supervised models"
Munro, Paul. Self-supervised learning of concepts by single units and "weakly local" representations. Pittsburgh, PA: School of Library and Information Science, University of Pittsburgh, 1988.
Sawarkar, Kunal, and Dheeraj Arremsetty. Deep Learning with PyTorch Lightning: Build and Train High-Performance Artificial Intelligence and Self-Supervised Models Using Python. Packt Publishing, 2021.
Book chapters on the topic "Self-Supervised models"
Kordík, Pavel, and Jan Černý. "Self-organization of Supervised Models." In Studies in Computational Intelligence, 179–223. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20980-2_6.
Korkmaz, Yilmaz, Tolga Cukur, and Vishal M. Patel. "Self-supervised MRI Reconstruction with Unrolled Diffusion Models." In Lecture Notes in Computer Science, 491–501. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43999-5_47.
Pantazi, Xanthoula Eirini, Dimitrios Moshou, Abdul Mounem Mouazen, Boyan Kuang, and Thomas Alexandridis. "Application of Supervised Self Organising Models for Wheat Yield Prediction." In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 556–65. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-662-44654-6_55.
Su, Jingyu, Chuanhao Li, Chenchen Jing, and Yuwei Wu. "A Self-supervised Strategy for the Robustness of VQA Models." In IFIP Advances in Information and Communication Technology, 290–98. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-03948-5_23.
Vasylechko, Serge, Onur Afacan, and Sila Kurugol. "Self Supervised Denoising Diffusion Probabilistic Models for Abdominal DW-MRI." In Computational Diffusion MRI, 80–91. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-47292-3_8.
Wagner, Fabian, Mareike Thies, Laura Pfaff, Oliver Aust, Sabrina Pechmann, Daniela Weidner, Noah Maul, et al. "Abstract: Self-supervised CT Dual Domain Denoising using Low-parameter Models." In Bildverarbeitung für die Medizin 2024, 159. Wiesbaden: Springer Fachmedien Wiesbaden, 2024. http://dx.doi.org/10.1007/978-3-658-44037-4_48.
Lin, Yankai, Ning Ding, Zhiyuan Liu, and Maosong Sun. "Pre-trained Models for Representation Learning." In Representation Learning for Natural Language Processing, 127–67. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_5.
Gao, Yuting, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, and Chunhua Shen. "DisCo: Remedying Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning." In Lecture Notes in Computer Science, 237–53. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19809-0_14.
Lúčny, Andrej, Kristína Malinovská, and Igor Farkaš. "Robot at the Mirror: Learning to Imitate via Associating Self-supervised Models." In Artificial Neural Networks and Machine Learning – ICANN 2023, 471–82. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44207-0_39.
Fessant, F., P. Aknin, L. Oukhellou, and S. Midenet. "Comparison of Supervised Self-Organizing Maps Using Euclidian or Mahalanobis Distance in Classification Context." In Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, 637–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45720-8_76.
Conference papers on the topic "Self-Supervised models"
Fini, Enrico, Victor G. Turrisi Da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, and Julien Mairal. "Self-Supervised Models are Continual Learners." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00940.
Pu, Jie, Yuguang Yang, Ruirui Li, Oguz Elibol, and Jasha Droppo. "Scaling Effect of Self-Supervised Speech Models." In Interspeech 2021. ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-1935.
Strgar, Luke, and David Harwath. "Phoneme Segmentation Using Self-Supervised Speech Models." In 2022 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2023. http://dx.doi.org/10.1109/slt54892.2023.10022827.
Ericsson, Linus, Henry Gouk, and Timothy M. Hospedales. "How Well Do Self-Supervised Models Transfer?" In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00537.
Cho, Gyusang, and Chan-Hyun Youn. "Supervised vs. Self-supervised Pre-trained Models for Hand Pose Estimation." In 2022 13th International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2022. http://dx.doi.org/10.1109/ictc55196.2022.9953011.
Kurumbudel, Prashanth Ram. "Deep Self-Supervised Learning Models for Automotive Systems." In Symposium on International Automotive Technology. Warrendale, PA: SAE International, 2021. http://dx.doi.org/10.4271/2021-26-0129.
Rosenberg, C., M. Hebert, and H. Schneiderman. "Semi-Supervised Self-Training of Object Detection Models." In 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05). IEEE, 2005. http://dx.doi.org/10.1109/acvmot.2005.107.
Tseng, Wei-Cheng, Wei-Tsung Kao, and Hung-yi Lee. "Membership Inference Attacks Against Self-supervised Speech Models." In Interspeech 2022. ISCA, 2022. http://dx.doi.org/10.21437/interspeech.2022-11245.
Wei, Fangyin, Rohan Chabra, Lingni Ma, Christoph Lassner, Michael Zollhoefer, Szymon Rusinkiewicz, Chris Sweeney, Richard Newcombe, and Mira Slavcheva. "Self-supervised Neural Articulated Shape and Appearance Models." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01536.
Xu, Puyang, Damianos Karakos, and Sanjeev Khudanpur. "Self-supervised Discriminative Training of Statistical Language Models." In 2009 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU). IEEE, 2009. http://dx.doi.org/10.1109/asru.2009.5373401.