Scientific literature on the topic "Self-supervised learning (artificial intelligence)"
Create a correct reference in APA, MLA, Chicago, Harvard, and various other styles
Contents
Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Self-supervised learning (artificial intelligence)".
Next to each source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.
Journal articles on the topic "Self-supervised learning (artificial intelligence)"
Neghawi, Elie, and Yan Liu. "Enhancing Self-Supervised Learning through Explainable Artificial Intelligence Mechanisms: A Computational Analysis." Big Data and Cognitive Computing 8, no. 6 (June 3, 2024): 58. http://dx.doi.org/10.3390/bdcc8060058.
Chan, Jason, Irena Koprinska, and Josiah Poon. "Semi-Supervised Classification Using Bridging." International Journal on Artificial Intelligence Tools 17, no. 03 (June 2008): 415–31. http://dx.doi.org/10.1142/s0218213008003972.
Kobayashi, Yuya, Masahiro Suzuki, and Yutaka Matsuo. "Scene Interpretation Method using Transformer and Self-supervised Learning." Transactions of the Japanese Society for Artificial Intelligence 37, no. 2 (March 1, 2022): I—L75_1–17. http://dx.doi.org/10.1527/tjsai.37-2_i-l75.
Hrycej, Tomas. "Supporting supervised learning by self-organization." Neurocomputing 4, no. 1-2 (February 1992): 17–30. http://dx.doi.org/10.1016/0925-2312(92)90040-v.
Wang, Fei, and Changshui Zhang. "Robust self-tuning semi-supervised learning." Neurocomputing 70, no. 16-18 (October 2007): 2931–39. http://dx.doi.org/10.1016/j.neucom.2006.11.004.
Biscione, Valerio, and Jeffrey S. Bowers. "Learning online visual invariances for novel objects via supervised and self-supervised training." Neural Networks 150 (June 2022): 222–36. http://dx.doi.org/10.1016/j.neunet.2022.02.017.
Ma, Jun, Yakun Wen, and Liming Yang. "Lagrangian supervised and semi-supervised extreme learning machine." Applied Intelligence 49, no. 2 (August 25, 2018): 303–18. http://dx.doi.org/10.1007/s10489-018-1273-4.
Che, Feihu, Guohua Yang, Dawei Zhang, Jianhua Tao, and Tong Liu. "Self-supervised graph representation learning via bootstrapping." Neurocomputing 456 (October 2021): 88–96. http://dx.doi.org/10.1016/j.neucom.2021.03.123.
Gu, Nannan, Pengying Fan, Mingyu Fan, and Di Wang. "Structure regularized self-paced learning for robust semi-supervised pattern classification." Neural Computing and Applications 31, no. 10 (April 19, 2018): 6559–74. http://dx.doi.org/10.1007/s00521-018-3478-1.
Saravana Kumar, N. M. "Implementation of Artificial Intelligence in Imparting Education and Evaluating Student Performance." Journal of Artificial Intelligence and Capsule Networks 01, no. 01 (September 2, 2019): 1–9. http://dx.doi.org/10.36548/jaicn.2019.1.001.
Dissertations and theses on the topic "Self-supervised learning (artificial intelligence)"
Denize, Julien. "Self-supervised representation learning and applications to image and video analysis." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR37.
In this thesis, we develop approaches for self-supervised learning for image and video analysis. Self-supervised representation learning makes it possible to pretrain neural networks to learn general concepts without labels, so that they can then specialize in downstream tasks faster and with fewer annotations. We present three contributions to self-supervised image and video representation learning. First, we introduce the theoretical paradigm of soft contrastive learning and its practical implementation, Similarity Contrastive Estimation (SCE), which connects contrastive and relational learning for image representation. Second, SCE is extended to global temporal video representation learning. Lastly, we propose COMEDIAN, a pipeline for local-temporal video representation learning for transformers. These contributions achieved state-of-the-art results on multiple benchmarks and led to several published academic and technical contributions.
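Contrastive objectives such as the one SCE builds on reward two augmented views of the same sample for scoring higher than views of other samples. A minimal pure-Python sketch of the standard InfoNCE baseline loss (not the thesis's soft-contrastive variant; the vectors below are purely illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: negative log-softmax score of the positive among all candidates."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Two "views" of the same sample (similar vectors) should give a low loss;
# pairing the anchor with an unrelated vector should give a high loss.
anchor = [1.0, 0.0]
positive = [0.9, 0.1]
negatives = [[0.0, 1.0], [-1.0, 0.2]]
loss_good = info_nce(anchor, positive, negatives)
loss_bad = info_nce(anchor, [0.0, 1.0], [positive, [-1.0, 0.2]])
print(loss_good < loss_bad)  # True: the aligned pair yields the lower loss
```

Pretraining minimizes this loss over many image pairs, so the encoder learns label-free representations before any downstream fine-tuning.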
Nett, Ryan. "Dataset and Evaluation of Self-Supervised Learning for Panoramic Depth Estimation." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2234.
Stanescu, Ana. "Semi-supervised learning for biological sequence classification." Diss., Kansas State University, 2015. http://hdl.handle.net/2097/35810.
Texte intégralDepartment of Computing and Information Sciences
Doina Caragea
Successful advances in biochemical technologies have led to inexpensive, time-efficient production of massive volumes of data, such as DNA and protein sequences. As a result, numerous computational methods for genome annotation have emerged, including machine learning and statistical analysis approaches that practically and efficiently analyze and interpret data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data in order to build quality classifiers. The process of labeling data can be expensive and time consuming, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on semi-supervised learning approaches for biological sequence classification. Although an attractive concept, semi-supervised learning does not invariably work as intended. Since the assumptions made by learning algorithms cannot be easily verified without considerable domain knowledge or data exploration, semi-supervised learning is not always "safe" to use. Advantageous utilization of the unlabeled data is problem dependent, and more research is needed to identify algorithms that can be used to increase the effectiveness of semi-supervised learning, in general, and for bioinformatics problems, in particular. At a high level, we aim to identify semi-supervised algorithms and data representations that can be used to learn effective classifiers for genome annotation tasks such as cassette exon identification, splice site identification, and protein localization. In addition, one specific challenge that we address is the "data imbalance" problem, which is prevalent in many domains, including bioinformatics.
The data imbalance phenomenon arises when one of the classes to be predicted is underrepresented in the data because instances belonging to that class are rare (noteworthy cases) or difficult to obtain. Ironically, minority classes are typically the most important to learn, because they may be associated with special cases, as in the case of splice site prediction. We propose two main techniques to deal with the data imbalance problem, namely a technique based on "dynamic balancing" (augmenting the originally labeled data only with positive instances during the semi-supervised iterations of the algorithms) and another technique based on ensemble approaches. The results show that with limited amounts of labeled data, semi-supervised approaches can successfully leverage the unlabeled data, thereby surpassing their completely supervised counterparts. A type of semi-supervised learning, known as "transductive" learning, aims to classify the unlabeled data without generalizing to new, previously not encountered instances. Theoretically, this aspect makes transductive learning particularly suitable for the task of genome annotation, in which an entirely sequenced genome is typically available, sometimes accompanied by limited annotation. We study and evaluate various transductive approaches (such as transductive support vector machines and graph based approaches) and sequence representations for the problem of cassette exon identification. The results obtained demonstrate the effectiveness of transductive algorithms in sequence annotation tasks.
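The "dynamic balancing" idea can be illustrated with a toy self-training loop: at each iteration, a simple classifier pseudo-labels the unlabeled pool, and only the predicted positives (the minority class) are added back to the labeled set. Everything below — the nearest-centroid classifier and the 2-D points — is a hypothetical sketch, not the thesis's actual algorithm or data:

```python
def centroid(points):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(xs) / len(xs) for xs in zip(*points)]

def dist(u, v):
    """Euclidean distance between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def predict(x, pos_c, neg_c):
    """Nearest-centroid rule: 1 if x is closer to the positive centroid."""
    return 1 if dist(x, pos_c) < dist(x, neg_c) else 0

def self_train(labeled, unlabeled, rounds=3):
    """Self-training with dynamic balancing: each round, pseudo-label the
    pool and move only the predicted *positives* into the labeled set."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        pos = [x for x, y in labeled if y == 1]
        neg = [x for x, y in labeled if y == 0]
        pos_c, neg_c = centroid(pos), centroid(neg)
        newly = [x for x in pool if predict(x, pos_c, neg_c) == 1]
        labeled += [(x, 1) for x in newly]   # augment with positives only
        pool = [x for x in pool if x not in newly]
    return labeled

labeled = [([0.0, 0.0], 0), ([4.0, 4.0], 1)]
unlabeled = [[3.5, 3.9], [0.2, 0.1], [4.2, 3.8]]
grown = self_train(labeled, unlabeled)
print(len(grown))  # 4: the two pool points near (4, 4) were added as positives
```

Restricting the augmentation to positives keeps the minority class from being drowned out as pseudo-labels accumulate across iterations.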
Abou-Moustafa, Karim. "Metric learning revisited: new approaches for supervised and unsupervised metric learning with analysis and algorithms." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106370.
In this thesis, I propose two algorithms for learning the metric dX: the first for supervised learning, and the second for unsupervised learning, as well as for supervised and semi-supervised learning. In particular, I propose algorithms that take into account the structure and geometry of X on the one hand, and the characteristics of real-world data sets on the other. If dimensionality reduction is also sought, then under certain mild assumptions on the topology of X, and based on the a priori information available, one can learn an embedding of X into a low-dimensional Euclidean space Rp0, p0 << p, in which the Euclidean distance better reveals the similarities between the elements of X and their clusters. Thus, as a by-product, one simultaneously obtains dimensionality reduction and metric learning. For supervised learning, I propose PARDA, or Pareto discriminant analysis, for discriminant linear dimensionality reduction. PARDA is based on a multi-objective optimization mechanism that simultaneously optimizes several, possibly conflicting, objective functions. This allows PARDA to adapt to the class topology in a lower-dimensional space, and it naturally handles the class-masking problem associated with the Fisher discriminant in multi-class settings. As a result, PARDA achieves better classification results than modern discriminant dimensionality-reduction techniques. For unsupervised learning, I propose an algorithmic framework, denoted ??, that encapsulates spectral learning algorithms to form a metric learning algorithm. The framework ?? captures the local structure and local density information of each point in a data set, and therefore carries all the information about the varying sample density in the input space. The structure of ?? induces two distance metrics on its elements: the Bhattacharyya-Riemann metric dBR and the Jeffreys-Riemann metric dJR. Both metrics reorganize the proximities between the points of X based on the local structure and density around each point. Consequently, combining the metric space (??, dBR) or (??, dJR) with spectral clustering and Euclidean embedding algorithms yields significant improvements in clustering accuracies and error rates on a wide variety of clustering and classification tasks.
Halpern, Yonatan. "Semi-Supervised Learning for Electronic Phenotyping in Support of Precision Medicine." Thesis, New York University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10192124.
Medical informatics plays an important role in precision medicine, delivering the right information to the right person, at the right time. With the introduction and widespread adoption of electronic medical records, in the United States and world-wide, there is now a tremendous amount of health data available for analysis.
Electronic record phenotyping refers to the task of determining, from an electronic medical record entry, a concise descriptor of the patient, comprising their medical history, current problems, presentation, etc. In inferring such a phenotype descriptor from the record, a computer, in a sense, "understands" the relevant parts of the record. These phenotypes can then be used in downstream applications such as cohort selection for retrospective studies, real-time clinical decision support, contextual displays, intelligent search, and precise alerting mechanisms.
We are faced with three main challenges:
First, the unstructured and incomplete nature of the data recorded in the electronic medical records requires special attention. Relevant information can be missing or written in an obscure way that the computer does not understand.
Second, the scale of the data makes it important to develop efficient methods at all steps of the machine learning pipeline, including data collection and labeling, model learning and inference.
Third, large parts of medicine are well understood by health professionals. How do we combine the expert knowledge of specialists with the statistical insights from the electronic medical record?
Probabilistic graphical models such as Bayesian networks provide a useful abstraction for quantifying uncertainty and describing complex dependencies in data. Although significant progress has been made over the last decade on approximate inference algorithms and structure learning from complete data, learning models with incomplete data remains one of machine learning’s most challenging problems. How can we model the effects of latent variables that are not directly observed?
The first part of the thesis presents two different structural conditions under which learning with latent variables is computationally tractable. The first is the "anchored" condition, where every latent variable has at least one child that is not shared by any other parent. The second is the "singly-coupled" condition, where every latent variable is connected to at least three children that satisfy conditional independence (possibly after transforming the data).
Variables that satisfy these conditions can be specified by an expert without requiring that the entire structure or its parameters be specified, allowing for effective use of human expertise and making room for statistical learning to do some of the heavy lifting. For both the anchored and singly-coupled conditions, practical algorithms are presented.
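The "anchored" condition has a simple operational reading: in a latent-to-observed bipartite structure, each latent variable must have at least one child that no other latent variable shares. A small sketch of that check (the disease/symptom structure here is hypothetical, purely for illustration):

```python
def is_anchored(children):
    """Check the anchored condition for a latent-to-observed structure.

    `children` maps each latent variable to the set of observed variables
    it points to. The structure is anchored if every latent variable has
    at least one child that none of the other latent variables share.
    """
    for latent in children:
        shared = set()
        for other in children:
            if other != latent:
                shared |= children[other]
        if not (children[latent] - shared):  # no exclusive child -> not anchored
            return False
    return True

# "fever" anchors flu and "sneeze" anchors cold, so this structure is anchored.
print(is_anchored({"flu": {"fever", "cough"}, "cold": {"sneeze", "cough"}}))  # True
# Here the only child is shared, so neither latent variable has an anchor.
print(is_anchored({"flu": {"cough"}, "cold": {"cough"}}))                      # False
```

An expert only needs to name such anchors, not the full graph or its parameters, which is what lets statistical learning fill in the rest.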
The second part of the thesis describes real-life applications using the anchored condition for electronic phenotyping. A human-in-the-loop learning system and a functioning emergency informatics system for real-time extraction of important clinical variables are described and evaluated.
The algorithms and discussion presented here were developed for the purpose of improving healthcare, but are much more widely applicable, dealing with the very basic questions of identifiability and learning models with latent variables - a problem that lies at the very heart of the natural and social sciences.
Taylor, Farrell R. "Evaluation of Supervised Machine Learning for Classifying Video Traffic." NSUWorks, 2016. http://nsuworks.nova.edu/gscis_etd/972.
Coursey, Kino High. "An Approach Towards Self-Supervised Classification Using Cyc." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5470/.
Livi, Federico. "Supervised Learning with Graph Structured Data for Transprecision Computing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19714/.
Rossi, Alex. "Self-supervised information retrieval: a novel approach based on Deep Metric Learning and Neural Language Models." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Stroulia, Eleni. "Failure-driven learning as model-based self-redesign." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/8291.
Books on the topic "Self-supervised learning (artificial intelligence)"
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Kanerva, Pentti. The organization of an autonomous learning system. Moffett Field, CA: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1988.
Ekici, Berk. Towards self-sufficient high-rises: Performance optimisation using artificial intelligence. Delft: BK Books, 2022.
He, Haibo. Self-adaptive systems for machine intelligence. Hoboken, NJ: Wiley-Interscience, 2011.
Najim, K. Learning automata: Theory and applications. Oxford, U.K.: Pergamon, 1994.
Wang, Huaiqing. Manufacturing intelligence for industrial engineering: Methods for system self-organization, learning, and adaptation. Hershey, PA: Engineering Science Reference, 2010.
Zhou, Zude. Manufacturing intelligence for industrial engineering: Methods for system self-organization, learning, and adaptation. Hershey, PA: Engineering Science Reference, 2010.
Klimenko, A. V. Osnovy estestvennogo intellekta: Rekurrentnai͡a︡ teorii͡a︡ samoorganizat͡s︡ii: versii͡a︡ 3. Rostov-na-Donu: Izd-vo Rostovskogo universiteta, 1994.
Book chapters on the topic "Self-supervised learning (artificial intelligence)"
Kim, Haesik. "Supervised Learning." In Artificial Intelligence for 6G, 87–182. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95041-5_4.
Talukdar, Jyotismita, Thipendra P. Singh, and Basanta Barman. "Supervised Learning." In Artificial Intelligence in Healthcare Industry, 51–86. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3157-6_4.
Liu, Dongxin, and Tarek Abdelzaher. "Self-Supervised Learning from Unlabeled IoT Data." In Artificial Intelligence for Edge Computing, 27–110. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-40787-1_2.
Ye, Linwei, and Zhenhua Wang. "Self-supervised Meta Auxiliary Learning for Actor and Action Video Segmentation from Natural Language." In Artificial Intelligence, 317–28. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-8850-1_26.
Long, Jiefeng, Chun Li, and Lin Shang. "Few-Shot Crowd Counting via Self-supervised Learning." In PRICAI 2021: Trends in Artificial Intelligence, 379–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89370-5_28.
Siriborvornratanakul, Thitirat. "Reducing Human Annotation Effort Using Self-supervised Learning for Image Segmentation." In Artificial Intelligence in HCI, 436–45. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60606-9_26.
Slama, Dirk. "Artificial Intelligence 101." In The Digital Playbook, 11–17. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-88221-1_2.
Yang, Yu, Fang Wan, Qixiang Ye, and Xiangyang Ji. "Weakly Supervised Learning of Instance Segmentation with Confidence Feedback." In Artificial Intelligence, 392–403. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20497-5_32.
Chen, Zhiyuan, and Bing Liu. "Lifelong Supervised Learning." In Synthesis Lectures on Artificial Intelligence and Machine Learning, 27–51. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-01575-5_3.
Mosalam, Khalid M., and Yuqing Gao. "Semi-Supervised Learning." In Artificial Intelligence in Vision-Based Structural Health Monitoring, 279–305. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52407-3_10.
Conference papers on the topic "Self-supervised learning (artificial intelligence)"
An, Yuexuan, Hui Xue, Xingyu Zhao, and Lu Zhang. "Conditional Self-Supervised Learning for Few-Shot Classification." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/295.
Liang, Yudong, Bin Wang, Wangmeng Zuo, Jiaying Liu, and Wenqi Ren. "Self-supervised Learning and Adaptation for Single Image Dehazing." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/159.
Shen, Jiahao. "Self-supervised boundary offline reinforcement learning." In International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2023), edited by Harris Wu and Haiwu Li. SPIE, 2024. http://dx.doi.org/10.1117/12.3026355.
Ismail-Fawaz, Ali, Maxime Devanne, Jonathan Weber, and Germain Forestier. "Enhancing Time Series Classification with Self-Supervised Learning." In 15th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2023. http://dx.doi.org/10.5220/0011611300003393.
Tang, Yixin, Hua Cheng, Yiquan Fang, and Yiming Pan. "In-Batch Negatives' Enhanced Self-Supervised Learning." In 2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2022. http://dx.doi.org/10.1109/ictai56018.2022.00031.
Wicaksono, R. Satrio Hariomurti, Ali Akbar Septiandri, and Ade Jamal. "Human Embryo Classification Using Self-Supervised Learning." In 2021 2nd International Conference on Artificial Intelligence and Data Sciences (AiDAS). IEEE, 2021. http://dx.doi.org/10.1109/aidas53897.2021.9574328.
Khan, Adnan, Sarah AlBarri, and Muhammad Arslan Manzoor. "Contrastive Self-Supervised Learning: A Survey on Different Architectures." In 2022 2nd International Conference on Artificial Intelligence (ICAI). IEEE, 2022. http://dx.doi.org/10.1109/icai55435.2022.9773725.
Basaj, Dominika, Witold Oleszkiewicz, Igor Sieradzki, Michał Górszczak, Barbara Rychalska, Tomasz Trzcinski, and Bartosz Zieliński. "Explaining Self-Supervised Image Representations with Visual Probing." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/82.
Bhattacharjee, Amrita, Mansooreh Karami, and Huan Liu. "Text Transformations in Contrastive Self-Supervised Learning: A Review." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/757.
Yang, XiaoYu, and CaiFeng Zhou. "Self-supervised learning-based waste classification model." In 3rd International Conference on Artificial Intelligence, Automation, and High-Performance Computing (AIAHPC 2023), edited by Dimitrios A. Karras and Simon X. Yang. SPIE, 2023. http://dx.doi.org/10.1117/12.2684730.
Organizational reports on the topic "Self-supervised learning (artificial intelligence)"
Alexander, Serena, Bo Yang, Owen Hussey, and Derek Hicks. Examining the Externalities of Highway Capacity Expansions in California: An Analysis of Land Use and Land Cover (LULC) Using Remote Sensing Technology. Mineta Transportation Institute, November 2023. http://dx.doi.org/10.31979/mti.2023.2251.
Kulhandjian, Hovannes. AI-Based Bridge and Road Inspection Framework Using Drones. Mineta Transportation Institute, November 2023. http://dx.doi.org/10.31979/mti.2023.2226.