Selected scientific literature on the topic "Self-Supervised models"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Contents
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Self-Supervised models".
Next to each source in the list of references, there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.
Journal articles on the topic "Self-Supervised models"
Anton, Jonah, Liam Castelli, Mun Fai Chan, Mathilde Outters, Wan Hee Tang, Venus Cheung, Pancham Shukla, Rahee Walambe, and Ketan Kotecha. "How Well Do Self-Supervised Models Transfer to Medical Imaging?" Journal of Imaging 8, no. 12 (December 1, 2022): 320. http://dx.doi.org/10.3390/jimaging8120320.
Gatopoulos, Ioannis, and Jakub M. Tomczak. "Self-Supervised Variational Auto-Encoders." Entropy 23, no. 6 (June 14, 2021): 747. http://dx.doi.org/10.3390/e23060747.
Zhang, Ronghua, Yuanyuan Wang, Fangyuan Liu, Changzheng Liu, Yaping Song, and Baohua Yu. "S2NMF: Information Self-Enhancement Self-Supervised Nonnegative Matrix Factorization for Recommendation." Wireless Communications and Mobile Computing 2022 (August 30, 2022): 1–10. http://dx.doi.org/10.1155/2022/4748858.
Dang, Thanh-Vu, JinYoung Kim, Gwang-Hyun Yu, Ji Yong Kim, Young Hwan Park, and ChilWoo Lee. "Korean Text to Gloss: Self-Supervised Learning approach." Korean Institute of Smart Media 12, no. 1 (February 28, 2023): 32–46. http://dx.doi.org/10.30693/smj.2023.12.1.32.
Risojević, V., and V. Stojnić. "DO WE STILL NEED IMAGENET PRE-TRAINING IN REMOTE SENSING SCENE CLASSIFICATION?" International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 31, 2022): 1399–406. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-1399-2022.
Imran, Abdullah-Al-Zubaer, Chao Huang, Hui Tang, Wei Fan, Yuan Xiao, Dingjun Hao, Zhen Qian, and Demetri Terzopoulos. "Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13815–16. http://dx.doi.org/10.1609/aaai.v34i10.7179.
Zhou, Meng, Zechen Li, and Pengtao Xie. "Self-supervised Regularization for Text Classification." Transactions of the Association for Computational Linguistics 9 (2021): 641–56. http://dx.doi.org/10.1162/tacl_a_00389.
Gong, Yuan, Cheng-I. Lai, Yu-An Chung, and James Glass. "SSAST: Self-Supervised Audio Spectrogram Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10699–709. http://dx.doi.org/10.1609/aaai.v36i10.21315.
Chen, Xuehao, Jin Zhou, Yuehui Chen, Shiyuan Han, Yingxu Wang, Tao Du, Cheng Yang, and Bowen Liu. "Self-Supervised Clustering Models Based on BYOL Network Structure." Electronics 12, no. 23 (November 21, 2023): 4723. http://dx.doi.org/10.3390/electronics12234723.
Luo, Dezhao, Chang Liu, Yu Zhou, Dongbao Yang, Can Ma, Qixiang Ye, and Weiping Wang. "Video Cloze Procedure for Self-Supervised Spatio-Temporal Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11701–8. http://dx.doi.org/10.1609/aaai.v34i07.6840.
Theses / dissertations on the topic "Self-Supervised models"
Rossi, Alex. "Self-supervised information retrieval: a novel approach based on Deep Metric Learning and Neural Language Models". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Douzon, Thibault. "Language models for document understanding." Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.
Every day, countless documents are received and processed by companies worldwide. To reduce the cost of processing each document, the largest companies have turned to document automation technologies. In an ideal world, a document can be processed without any human intervention: its content is read, and the relevant information is extracted and forwarded to the appropriate service. The state of the art has evolved quickly over the last decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architectures for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training perform document understanding tasks with high accuracy. We show that, when used as token classifiers for information extraction, transformers learn the task far more efficiently than recurrent networks, needing only a small fraction of the training data to approach maximum performance. This highlights the importance of self-supervised pre-training for subsequent fine-tuning. In the second part, we design specialized pre-training tasks that better prepare the model for specific data distributions, such as business documents. By accounting for the specificities of business documents, such as their table structure and their over-representation of numeric figures, we can target skills that will be useful to the model in its downstream tasks. We show that these new tasks improve the model's downstream performance, even with small models. With this pre-training approach, we reach the performance of significantly bigger models without any additional cost during fine-tuning or inference. Finally, in the last part, we address a drawback of the transformer architecture: its computational cost on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, because of how they approximate the attention computation, efficient models suffer a small but significant performance drop on short sequences compared with classical architectures. This motivates the use of different models depending on the input length and enables concatenating multimodal inputs into a single sequence.
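The token-classification setup described in this abstract can be sketched independently of any particular model: a transformer assigns a label to each document token, and contiguous labels are then aggregated into extracted fields. The sketch below illustrates only that aggregation step; the BIO label scheme, the field names, and the toy invoice tokens are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch of information extraction via token classification. In a real system
# the labels would come from a pre-trained transformer; here they are given.

def aggregate_fields(tokens, labels):
    """Group tokens that share a BIO-style label into extracted fields."""
    fields = {}
    current_label, current_tokens = None, []
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):  # beginning of a new field
            if current_label:
                fields.setdefault(current_label, []).append(" ".join(current_tokens))
            current_label, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_label == label[2:]:
            current_tokens.append(token)  # continuation of the current field
        else:  # "O" (outside) or an inconsistent tag ends the current field
            if current_label:
                fields.setdefault(current_label, []).append(" ".join(current_tokens))
            current_label, current_tokens = None, []
    if current_label:
        fields.setdefault(current_label, []).append(" ".join(current_tokens))
    return fields

# Toy invoice tokens with labels a token classifier might emit.
tokens = ["Invoice", "No", "1234", "Total", "42.50", "EUR"]
labels = ["O", "O", "B-INVOICE_ID", "O", "B-TOTAL", "I-TOTAL"]
print(aggregate_fields(tokens, labels))
# {'INVOICE_ID': ['1234'], 'TOTAL': ['42.50 EUR']}
```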
Lin, Lyu. "Transformer-based Model for Molecular Property Prediction with Self-Supervised Transfer Learning". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284682.
Prediction of molecular properties has a wide range of applications in the chemical industry. Powerful methods for predicting molecular properties can accelerate scientific experiments and production processes. The approach of this work is to use transfer learning to predict the properties of molecules. The project is divided into two parts. The first part focuses on developing and pre-training a model. The model consists of Transformer layers with attention mechanisms and is pre-trained by recovering masked edges in molecular graphs from large-scale unannotated data. The performance of the pre-trained model is then evaluated on a range of molecular property prediction tasks, which confirms the benefit of transfer learning. The results show that after self-supervised pre-training the model generalizes very well. It can be fine-tuned at little time cost and performs well on specialized tasks. The effectiveness of transfer learning is also demonstrated in the experiments. The pre-trained model not only shortens task-specific training time but also achieves better performance and avoids overfitting due to insufficient data in molecular property prediction tasks.
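The pre-training objective summarized in this abstract (recovering masked edges in molecular graphs) can be sketched as a data-corruption step: a fraction of bond labels is hidden, and the model's self-supervised task is to predict them back. The graph encoding, the masking ratio, and the `[MASK]` token below are illustrative assumptions, not the thesis's implementation.

```python
import random

def mask_edges(bonds, ratio, seed=0):
    """Hide the bond type of a fraction of edges; the self-supervised
    pre-training task is to predict the hidden types back."""
    rng = random.Random(seed)
    n_masked = max(1, int(len(bonds) * ratio))
    masked_keys = rng.sample(sorted(bonds), n_masked)
    corrupted = {k: ("[MASK]" if k in masked_keys else t) for k, t in bonds.items()}
    targets = {k: bonds[k] for k in masked_keys}  # labels for the recovery loss
    return corrupted, targets

# Toy molecular graph: edges keyed by atom-index pairs, values are bond types.
bonds = {(0, 1): "single", (1, 2): "single", (2, 3): "double"}
corrupted, targets = mask_edges(bonds, ratio=0.34)
print(len(targets))  # 1 edge masked out of 3
```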
Ter-Hovhannisyan, Vardges. "Unsupervised and semi-supervised training methods for eukaryotic gene prediction". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26645.
Committee Chair: Mark Borodovky; Committee Member: Jung H. Choi; Committee Member: King Jordan; Committee Member: Leonid Bunimovich; Committee Member: Yury Chernoff. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés". Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
In this thesis, spoken language understanding (SLU) is studied in the context of goal-oriented telephone dialogues (hotel booking, for example). Historically, SLU was performed by a cascade of systems: a first system transcribed the speech into words, and a natural language understanding system then linked those words to a semantic annotation. The development of deep neural methods led to the emergence of end-to-end architectures, in which a single system, applied directly to the speech signal, performs the understanding task and extracts the semantic annotation. Recently, so-called self-supervised learning (SSL) pre-trained models have brought new advances to natural language processing (NLP). Trained generically on very large datasets, they can then be adapted to other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models. However, neither architecture, pipeline or end-to-end, is perfect. In this thesis, we study both architectures and propose hybrid versions that attempt to combine the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluated different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture.
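The contrast this abstract draws between pipeline and end-to-end SLU can be made concrete with mock components: the pipeline composes a transcription stage with an understanding stage, while the end-to-end system maps the signal directly to the annotation. The function names, the lookup-table "models", and the semantic-frame format are invented for illustration and are not those of the thesis.

```python
# Mock SLU components contrasting the two architectures discussed above.

def asr(speech):
    """Pipeline stage 1: transcribe speech to words (mocked as a lookup)."""
    return {"audio:book-room": "book a room for two nights"}[speech]

def nlu(words):
    """Pipeline stage 2: map words to a semantic annotation (mocked)."""
    frame = {"intent": "hotel_booking"}
    if "two nights" in words:
        frame["nights"] = 2
    return frame

def pipeline_slu(speech):
    # Cascade: any error made by asr() propagates into nlu().
    return nlu(asr(speech))

def end_to_end_slu(speech):
    # A single system maps the signal directly to the annotation, with no
    # intermediate transcript (mocked as a direct lookup).
    return {"audio:book-room": {"intent": "hotel_booking", "nights": 2}}[speech]

print(pipeline_slu("audio:book-room") == end_to_end_slu("audio:book-room"))  # True
```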
Cavallucci, Martina. "Speech Recognition per l'italiano: Sviluppo e Sperimentazione di Soluzioni Neurali con Language Model". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.
Ammari, Ahmad N. "Transforming user data into user value by novel mining techniques for extraction of web content, structure and usage patterns. The Development and Evaluation of New Web Mining Methods that enhance Information Retrieval and improve the Understanding of User's Web Behavior in Websites and Social Blogs." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5269.
Ammari, Ahmad N. "Transforming user data into user value by novel mining techniques for extraction of web content, structure and usage patterns : the development and evaluation of new Web mining methods that enhance information retrieval and improve the understanding of users' Web behavior in websites and social blogs." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5269.
Martínez Brito, Izacar Jesús. "Quantitative structure fate relationships for multimedia environmental analysis." Doctoral thesis, Universitat Rovira i Virgili, 2010. http://hdl.handle.net/10803/8590.
The physicochemical properties of a wide spectrum of chemical pollutants are unknown. This thesis analyzes the possibility of assessing the environmental distribution of compounds using supervised learning algorithms fed with molecular descriptors, instead of multimedia environmental models fed with properties estimated by QSARs. Dimensionless mass fractions, in logarithmic units, of 468 compounds were compared between: a) SimpleBox 3, a level III model, propagating random property values within the statistical distributions of recommended QSARs; and b) support vector regressions (SVRs) acting as quantitative structure-fate relationships (QSFRs), relating mass fractions to molecular weights and constituent counts (atoms, bonds, functional groups, and rings) for training compounds. The best predictions were obtained for test and validation compounds correctly located within the applicability domain of the QSFRs, as evidenced by low MAE values and high q2 values (in air, MAE≤0.54 and q2≥0.92; in water, MAE≤0.27 and q2≥0.92).
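The QSFR idea described in this abstract, regressing an environmental quantity directly on simple structural counts, can be sketched minimally with a one-variable least-squares fit standing in for the thesis's support vector regressions; the descriptor (atom count) and all the numbers below are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (a stand-in for an SVR)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

# Invented training data: atom counts vs. log mass fraction in air.
atom_counts = [4, 8, 12, 16]
log_fractions = [-1.0, -2.0, -3.0, -4.0]
a, b = fit_line(atom_counts, log_fractions)

# Predict the log mass fraction for an unseen 10-atom compound.
print(round(a * 10 + b, 2))  # -2.5
```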
"Robots that Anticipate Pain: Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models". Master's thesis, 2017. http://hdl.handle.net/2286/R.I.44032.
Master's thesis, Computer Science, 2017.
Books on the topic "Self-Supervised models"
Munro, Paul. Self-supervised learning of concepts by single units and "weakly local" representations. Pittsburgh, PA: School of Library and Information Science, University of Pittsburgh, 1988.
Munro, Paul. Self-supervised learning of concepts by single units and "weakly local" representations. School of Library and Information Science, University of Pittsburgh, 1988.
Sawarkar, Kunal, and Dheeraj Arremsetty. Deep Learning with PyTorch Lightning: Build and Train High-Performance Artificial Intelligence and Self-Supervised Models Using Python. Packt Publishing, Limited, 2021.
Book chapters on the topic "Self-Supervised models"
Kordík, Pavel, and Jan Černý. "Self-organization of Supervised Models." In Studies in Computational Intelligence, 179–223. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20980-2_6.
Korkmaz, Yilmaz, Tolga Cukur, and Vishal M. Patel. "Self-supervised MRI Reconstruction with Unrolled Diffusion Models." In Lecture Notes in Computer Science, 491–501. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43999-5_47.
Pantazi, Xanthoula Eirini, Dimitrios Moshou, Abdul Mounem Mouazen, Boyan Kuang, and Thomas Alexandridis. "Application of Supervised Self Organising Models for Wheat Yield Prediction." In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 556–65. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-662-44654-6_55.
Su, Jingyu, Chuanhao Li, Chenchen Jing, and Yuwei Wu. "A Self-supervised Strategy for the Robustness of VQA Models." In IFIP Advances in Information and Communication Technology, 290–98. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-03948-5_23.
Vasylechko, Serge, Onur Afacan, and Sila Kurugol. "Self Supervised Denoising Diffusion Probabilistic Models for Abdominal DW-MRI." In Computational Diffusion MRI, 80–91. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-47292-3_8.
Wagner, Fabian, Mareike Thies, Laura Pfaff, Oliver Aust, Sabrina Pechmann, Daniela Weidner, Noah Maul, et al. "Abstract: Self-supervised CT Dual Domain Denoising using Low-parameter Models." In Bildverarbeitung für die Medizin 2024, 159. Wiesbaden: Springer Fachmedien Wiesbaden, 2024. http://dx.doi.org/10.1007/978-3-658-44037-4_48.
Lin, Yankai, Ning Ding, Zhiyuan Liu, and Maosong Sun. "Pre-trained Models for Representation Learning." In Representation Learning for Natural Language Processing, 127–67. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_5.
Gao, Yuting, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, and Chunhua Shen. "DisCo: Remedying Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning." In Lecture Notes in Computer Science, 237–53. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19809-0_14.
Lúčny, Andrej, Kristína Malinovská, and Igor Farkaš. "Robot at the Mirror: Learning to Imitate via Associating Self-supervised Models." In Artificial Neural Networks and Machine Learning – ICANN 2023, 471–82. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44207-0_39.
Fessant, F., P. Aknin, L. Oukhellou, and S. Midenet. "Comparison of Supervised Self-Organizing Maps Using Euclidian or Mahalanobis Distance in Classification Context." In Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, 637–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45720-8_76.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Self-Supervised models"
Fini, Enrico, Victor G. Turrisi Da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, and Julien Mairal. "Self-Supervised Models are Continual Learners." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00940.
Pu, Jie, Yuguang Yang, Ruirui Li, Oguz Elibol, and Jasha Droppo. "Scaling Effect of Self-Supervised Speech Models." In Interspeech 2021. ISCA: ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-1935.
Strgar, Luke, and David Harwath. "Phoneme Segmentation Using Self-Supervised Speech Models." In 2022 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2023. http://dx.doi.org/10.1109/slt54892.2023.10022827.
Ericsson, Linus, Henry Gouk, and Timothy M. Hospedales. "How Well Do Self-Supervised Models Transfer?" In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00537.
Cho, Gyusang, and Chan-Hyun Youn. "Supervised vs. Self-supervised Pre-trained models for Hand Pose Estimation." In 2022 13th International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2022. http://dx.doi.org/10.1109/ictc55196.2022.9953011.
Kurumbudel, Prashanth Ram. "Deep Self-Supervised Learning Models for Automotive Systems." In Symposium on International Automotive Technology. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2021. http://dx.doi.org/10.4271/2021-26-0129.
Rosenberg, C., M. Hebert, and H. Schneiderman. "Semi-Supervised Self-Training of Object Detection Models." In 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05). IEEE, 2005. http://dx.doi.org/10.1109/acvmot.2005.107.
Tseng, Wei-Cheng, Wei-Tsung Kao, and Hung-yi Lee. "Membership Inference Attacks Against Self-supervised Speech Models." In Interspeech 2022. ISCA: ISCA, 2022. http://dx.doi.org/10.21437/interspeech.2022-11245.
Wei, Fangyin, Rohan Chabra, Lingni Ma, Christoph Lassner, Michael Zollhoefer, Szymon Rusinkiewicz, Chris Sweeney, Richard Newcombe, and Mira Slavcheva. "Self-supervised Neural Articulated Shape and Appearance Models." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01536.
Xu, Puyang, Damianos Karakos, and Sanjeev Khudanpur. "Self-supervised discriminative training of statistical language models." In 2009 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU). IEEE, 2009. http://dx.doi.org/10.1109/asru.2009.5373401.