Selected scientific literature on the topic "Self-Supervised models"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Self-Supervised models".
Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is present in the metadata.
Journal articles on the topic "Self-Supervised models"
Anton, Jonah, Liam Castelli, Mun Fai Chan, Mathilde Outters, Wan Hee Tang, Venus Cheung, Pancham Shukla, Rahee Walambe, and Ketan Kotecha. "How Well Do Self-Supervised Models Transfer to Medical Imaging?" Journal of Imaging 8, no. 12 (December 1, 2022): 320. http://dx.doi.org/10.3390/jimaging8120320.
Gatopoulos, Ioannis, and Jakub M. Tomczak. "Self-Supervised Variational Auto-Encoders". Entropy 23, no. 6 (June 14, 2021): 747. http://dx.doi.org/10.3390/e23060747.
Zhang, Ronghua, Yuanyuan Wang, Fangyuan Liu, Changzheng Liu, Yaping Song, and Baohua Yu. "S2NMF: Information Self-Enhancement Self-Supervised Nonnegative Matrix Factorization for Recommendation". Wireless Communications and Mobile Computing 2022 (August 30, 2022): 1–10. http://dx.doi.org/10.1155/2022/4748858.
Dang, Thanh-Vu, JinYoung Kim, Gwang-Hyun Yu, Ji Yong Kim, Young Hwan Park, and ChilWoo Lee. "Korean Text to Gloss: Self-Supervised Learning approach". Korean Institute of Smart Media 12, no. 1 (February 28, 2023): 32–46. http://dx.doi.org/10.30693/smj.2023.12.1.32.
Risojević, V., and V. Stojnić. "DO WE STILL NEED IMAGENET PRE-TRAINING IN REMOTE SENSING SCENE CLASSIFICATION?" International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 31, 2022): 1399–406. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-1399-2022.
Imran, Abdullah-Al-Zubaer, Chao Huang, Hui Tang, Wei Fan, Yuan Xiao, Dingjun Hao, Zhen Qian, and Demetri Terzopoulos. "Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13815–16. http://dx.doi.org/10.1609/aaai.v34i10.7179.
Zhou, Meng, Zechen Li, and Pengtao Xie. "Self-supervised Regularization for Text Classification". Transactions of the Association for Computational Linguistics 9 (2021): 641–56. http://dx.doi.org/10.1162/tacl_a_00389.
Gong, Yuan, Cheng-I. Lai, Yu-An Chung, and James Glass. "SSAST: Self-Supervised Audio Spectrogram Transformer". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10699–709. http://dx.doi.org/10.1609/aaai.v36i10.21315.
Chen, Xuehao, Jin Zhou, Yuehui Chen, Shiyuan Han, Yingxu Wang, Tao Du, Cheng Yang, and Bowen Liu. "Self-Supervised Clustering Models Based on BYOL Network Structure". Electronics 12, no. 23 (November 21, 2023): 4723. http://dx.doi.org/10.3390/electronics12234723.
Luo, Dezhao, Chang Liu, Yu Zhou, Dongbao Yang, Can Ma, Qixiang Ye, and Weiping Wang. "Video Cloze Procedure for Self-Supervised Spatio-Temporal Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11701–8. http://dx.doi.org/10.1609/aaai.v34i07.6840.
Theses on the topic "Self-Supervised models"
Rossi, Alex. "Self-supervised information retrieval: a novel approach based on Deep Metric Learning and Neural Language Models". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Douzon, Thibault. "Language models for document understanding". Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.
Every day, countless documents are received and processed by companies worldwide. In an effort to reduce the cost of processing each document, the largest companies have turned to document automation technologies. In an ideal world, a document can be processed without any human intervention: its content is read, and the relevant information is extracted and forwarded to the appropriate service. The state of the art has evolved quickly over the last decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architectures for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training are able to perform document understanding tasks with high performance. We show that, when used as token classifiers for information extraction, transformers learn the task far more efficiently than recurrent networks: they need only a small fraction of the training data to approach maximum performance. This highlights the importance of self-supervised pre-training for subsequent fine-tuning. In the second part, we design specialized pre-training tasks that better prepare the model for specific data distributions such as business documents. By taking into account the specificities of business documents, such as their table structure and their over-representation of numeric figures, we can target skills that will be useful to the model in its downstream tasks. We show that these new tasks improve the model's downstream performance, even for small models. With this pre-training approach, we reach the performance of significantly bigger models without any additional cost during fine-tuning or inference. Finally, in the last part, we address one drawback of the transformer architecture: its computational cost on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, because of how they approximate the attention computation, efficient models suffer from a small but significant performance drop on short sequences compared to classical architectures. This motivates using different models depending on the input length, and enables concatenating multimodal inputs into a single sequence.
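The self-supervised pre-training this abstract repeatedly credits is, in the BERT family of models, a masked-token objective: hide some tokens and train the model to recover them. A minimal sketch of the data-preparation step follows; the toy token sequence, mask symbol, and masking rate are illustrative assumptions, not the thesis's actual setup:

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Replace a random fraction of tokens with a mask symbol and record the
    originals as prediction targets (BERT-style masked token modeling)."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok          # the model must recover this token
            masked.append(mask_token)
        else:
            masked.append(tok)
    return masked, targets

# A toy "business document" line, tokenized naively on whitespace.
tokens = "invoice total due 1 200 EUR payable within 30 days".split()
masked, targets = mask_tokens(tokens, mask_prob=0.3)
```

During pre-training, the model sees `masked` and is scored on how well it predicts the entries of `targets`; no human labels are needed, which is what makes the objective self-supervised.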
Lin, Lyu. "Transformer-based Model for Molecular Property Prediction with Self-Supervised Transfer Learning". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284682.
Prediction of molecular properties has a wide range of applications in the chemical industry. Powerful methods for predicting molecular properties can accelerate scientific experiments and production processes. The approach in this work is to use transfer learning to predict the properties of molecules. The project is divided into two parts. The first part focuses on developing and pre-training a model. The model consists of Transformer layers with attention mechanisms and is pre-trained by recovering masked edges in molecular graphs from large-scale unannotated data. Afterwards, the performance of the pre-trained model is evaluated on a variety of molecular property prediction tasks, confirming the benefit of transfer learning. The results show that, after self-supervised pre-training, the model generalizes very well. It can be fine-tuned at little time cost and performs well on specialized tasks. The effectiveness of transfer learning is also demonstrated in the experiments: the pre-trained model not only shortens task-specific training time but also achieves better performance and avoids overfitting caused by insufficient data in molecular property prediction tasks.
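The masked-edge pre-training task summarized above can be illustrated with a toy graph encoding. The edge-list representation, bond labels, and `mask_edges` helper below are invented for this sketch and are not taken from the thesis:

```python
import random

# A toy molecular graph: each edge is (atom_i, atom_j, bond_type).
toy_molecule = [(0, 1, "single"), (1, 2, "single"), (2, 3, "double")]

def mask_edges(edges, mask_frac=0.5, seed=1):
    """Hide the bond type of a fraction of edges; the pre-training task is
    to predict the hidden types from the rest of the graph."""
    rng = random.Random(seed)
    k = max(1, int(len(edges) * mask_frac))
    hidden = set(rng.sample(range(len(edges)), k))
    corrupted = [(i, j, "[MASK]" if n in hidden else t)
                 for n, (i, j, t) in enumerate(edges)]
    labels = {n: edges[n][2] for n in hidden}   # ground truth to recover
    return corrupted, labels

corrupted, labels = mask_edges(toy_molecule)
```

As in masked language modeling, the supervision signal (`labels`) comes from the data itself, so arbitrarily large unannotated molecule collections can be used for pre-training.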
Ter-Hovhannisyan, Vardges. "Unsupervised and semi-supervised training methods for eukaryotic gene prediction". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26645.
Committee Chair: Mark Borodovsky; Committee Member: Jung H. Choi; Committee Member: King Jordan; Committee Member: Leonid Bunimovich; Committee Member: Yury Chernoff. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés". Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
In this thesis, spoken language understanding (SLU) is studied in the context of goal-oriented telephone dialogues (hotel booking, for example). Historically, SLU was performed by a cascade of systems: a first system transcribed the speech into words, and a natural language understanding system linked those words to a semantic annotation. The development of deep neural methods has led to the emergence of end-to-end architectures, in which the understanding task is performed by a single system applied directly to the speech signal to extract the semantic annotation. Recently, so-called self-supervised learning (SSL) pre-trained models have brought new advances in natural language processing (NLP). Trained in a generic way on very large datasets, they can then be adapted to other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models. However, neither architecture, pipeline or end-to-end, is perfect. In this thesis, we study these architectures and propose hybrid versions that attempt to combine the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluate different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture.
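The cascade architecture that the abstract contrasts with end-to-end systems is simply a composition of an ASR step and an NLU step. A deliberately toy sketch of that structure follows; the transcripts, slot patterns, and function names are all hypothetical stand-ins, not the thesis's models:

```python
import re

def asr(audio_id):
    """Stand-in for a speech recognizer: maps an utterance id to a transcript."""
    transcripts = {"utt1": "book a room for two nights in Le Mans"}
    return transcripts[audio_id]

def nlu(text):
    """Stand-in for the understanding module: extracts semantic slots."""
    slots = {}
    m = re.search(r"for (\w+) nights", text)
    if m:
        slots["nights"] = m.group(1)
    m = re.search(r"in ([A-Z][\w ]*)$", text)
    if m:
        slots["city"] = m.group(1)
    return slots

def pipeline_slu(audio_id):
    # Cascade: any error made by asr() propagates into nlu() -- the weakness
    # that motivates the end-to-end and hybrid architectures studied here.
    return nlu(asr(audio_id))

result = pipeline_slu("utt1")
```

An end-to-end system would replace the `nlu(asr(...))` composition with a single model mapping the speech signal directly to the slot annotation, avoiding the intermediate transcript.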
Cavallucci, Martina. "Speech Recognition per l'italiano: Sviluppo e Sperimentazione di Soluzioni Neurali con Language Model". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.
Ammari, Ahmad N. "Transforming user data into user value by novel mining techniques for extraction of web content, structure and usage patterns: the development and evaluation of new Web mining methods that enhance information retrieval and improve the understanding of users' Web behavior in websites and social blogs". Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5269.
Testo completoMartínez, Brito Izacar Jesús. "Quantitative structure fate relationships for multimedia environmental analysis". Doctoral thesis, Universitat Rovira i Virgili, 2010. http://hdl.handle.net/10803/8590.
The physicochemical properties of a wide range of chemical pollutants are unknown. This thesis analyzes the possibility of assessing the environmental distribution of compounds using supervised learning algorithms fed with molecular descriptors, instead of multimedia environmental models fed with properties estimated by QSARs. Dimensionless mass fractions, in logarithmic units, of 468 compounds were compared between: a) SimpleBox 3, a level III model, propagating random property values within the statistical distributions of recommended QSARs; and b) support vector regressions (SVRs) acting as quantitative structure-fate relationships (QSFRs), relating mass fractions to molecular weights and constituent counts (atoms, bonds, functional groups, and rings) for training compounds. The best predictions were obtained for test and validation compounds correctly located within the applicability domain of the QSFRs, as evidenced by low MAE values and high q2 values (in air, MAE≤0.54 and q2≥0.92; in water, MAE≤0.27 and q2≥0.92).
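The QSFR inputs described above are cheap constituent counts rather than measured properties. A hypothetical sketch of extracting such a descriptor vector from a molecular formula follows; the formula parsing and the small atomic-weight table are illustrative assumptions, not the thesis's descriptor set:

```python
import re

# Illustrative atomic weights for a handful of elements.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007, "Cl": 35.45}

def descriptor_vector(formula):
    """Count each element in a Hill-style formula (e.g. 'C6H5Cl') and derive
    the molecular weight -- the kind of cheap inputs a QSFR model can use."""
    counts = {}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    weight = sum(ATOMIC_WEIGHTS[e] * n for e, n in counts.items())
    return counts, weight

counts, mw = descriptor_vector("C6H5Cl")  # chlorobenzene
```

A regressor (an SVR in the thesis) would then map such count vectors to the predicted environmental mass fractions, bypassing the need for measured or QSAR-estimated physicochemical properties.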
"Robots that Anticipate Pain: Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models". Master's thesis, 2017. http://hdl.handle.net/2286/R.I.44032.
Master's thesis, Computer Science, 2017.
Books on the topic "Self-Supervised models"
Munro, Paul. Self-supervised learning of concepts by single units and "weakly local" representations. Pittsburgh, PA: School of Library and Information Science, University of Pittsburgh, 1988.
Cerca il testo completoSawarkar, Kunal, e Dheeraj Arremsetty. Deep Learning with PyTorch Lightning: Build and Train High-Performance Artificial Intelligence and Self-Supervised Models Using Python. Packt Publishing, Limited, 2021.
Book chapters on the topic "Self-Supervised models"
Kordík, Pavel, and Jan Černý. "Self-organization of Supervised Models". In Studies in Computational Intelligence, 179–223. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20980-2_6.
Korkmaz, Yilmaz, Tolga Cukur, and Vishal M. Patel. "Self-supervised MRI Reconstruction with Unrolled Diffusion Models". In Lecture Notes in Computer Science, 491–501. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43999-5_47.
Pantazi, Xanthoula Eirini, Dimitrios Moshou, Abdul Mounem Mouazen, Boyan Kuang, and Thomas Alexandridis. "Application of Supervised Self Organising Models for Wheat Yield Prediction". In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 556–65. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-662-44654-6_55.
Su, Jingyu, Chuanhao Li, Chenchen Jing, and Yuwei Wu. "A Self-supervised Strategy for the Robustness of VQA Models". In IFIP Advances in Information and Communication Technology, 290–98. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-03948-5_23.
Vasylechko, Serge, Onur Afacan, and Sila Kurugol. "Self Supervised Denoising Diffusion Probabilistic Models for Abdominal DW-MRI". In Computational Diffusion MRI, 80–91. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-47292-3_8.
Wagner, Fabian, Mareike Thies, Laura Pfaff, Oliver Aust, Sabrina Pechmann, Daniela Weidner, Noah Maul, et al. "Abstract: Self-supervised CT Dual Domain Denoising using Low-parameter Models". In Bildverarbeitung für die Medizin 2024, 159. Wiesbaden: Springer Fachmedien Wiesbaden, 2024. http://dx.doi.org/10.1007/978-3-658-44037-4_48.
Lin, Yankai, Ning Ding, Zhiyuan Liu, and Maosong Sun. "Pre-trained Models for Representation Learning". In Representation Learning for Natural Language Processing, 127–67. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_5.
Gao, Yuting, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, and Chunhua Shen. "DisCo: Remedying Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning". In Lecture Notes in Computer Science, 237–53. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19809-0_14.
Lúčny, Andrej, Kristína Malinovská, and Igor Farkaš. "Robot at the Mirror: Learning to Imitate via Associating Self-supervised Models". In Artificial Neural Networks and Machine Learning – ICANN 2023, 471–82. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44207-0_39.
Fessant, F., P. Aknin, L. Oukhellou, and S. Midenet. "Comparison of Supervised Self-Organizing Maps Using Euclidian or Mahalanobis Distance in Classification Context". In Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, 637–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45720-8_76.
Conference papers on the topic "Self-Supervised models"
Fini, Enrico, Victor G. Turrisi Da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, and Julien Mairal. "Self-Supervised Models are Continual Learners". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00940.
Pu, Jie, Yuguang Yang, Ruirui Li, Oguz Elibol, and Jasha Droppo. "Scaling Effect of Self-Supervised Speech Models". In Interspeech 2021. ISCA: ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-1935.
Strgar, Luke, and David Harwath. "Phoneme Segmentation Using Self-Supervised Speech Models". In 2022 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2023. http://dx.doi.org/10.1109/slt54892.2023.10022827.
Ericsson, Linus, Henry Gouk, and Timothy M. Hospedales. "How Well Do Self-Supervised Models Transfer?" In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00537.
Cho, Gyusang, and Chan-Hyun Youn. "Supervised vs. Self-supervised Pre-trained models for Hand Pose Estimation". In 2022 13th International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2022. http://dx.doi.org/10.1109/ictc55196.2022.9953011.
Kurumbudel, Prashanth Ram. "Deep Self-Supervised Learning Models for Automotive Systems". In Symposium on International Automotive Technology. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2021. http://dx.doi.org/10.4271/2021-26-0129.
Rosenberg, C., M. Hebert, and H. Schneiderman. "Semi-Supervised Self-Training of Object Detection Models". In 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05). IEEE, 2005. http://dx.doi.org/10.1109/acvmot.2005.107.
Tseng, Wei-Cheng, Wei-Tsung Kao, and Hung-yi Lee. "Membership Inference Attacks Against Self-supervised Speech Models". In Interspeech 2022. ISCA: ISCA, 2022. http://dx.doi.org/10.21437/interspeech.2022-11245.
Wei, Fangyin, Rohan Chabra, Lingni Ma, Christoph Lassner, Michael Zollhoefer, Szymon Rusinkiewicz, Chris Sweeney, Richard Newcombe, and Mira Slavcheva. "Self-supervised Neural Articulated Shape and Appearance Models". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01536.
Xu, Puyang, Damianos Karakos, and Sanjeev Khudanpur. "Self-supervised discriminative training of statistical language models". In 2009 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU). IEEE, 2009. http://dx.doi.org/10.1109/asru.2009.5373401.