A selection of scholarly literature on the topic "Self-supervised learning (artificial intelligence)"

Format a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Self-supervised learning (artificial intelligence)".

Next to each source in the list of references, there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, where these are available in the metadata.

Journal articles on the topic "Self-supervised learning (artificial intelligence)":

1

Neghawi, Elie, and Yan Liu. "Enhancing Self-Supervised Learning through Explainable Artificial Intelligence Mechanisms: A Computational Analysis." Big Data and Cognitive Computing 8, no. 6 (June 3, 2024): 58. http://dx.doi.org/10.3390/bdcc8060058.

Abstract:
Self-supervised learning continues to drive advancements in machine learning. However, the absence of unified computational processes for benchmarking and evaluation remains a challenge. This study conducts a comprehensive analysis of state-of-the-art self-supervised learning algorithms, emphasizing their underlying mechanisms and computational intricacies. Building upon this analysis, we introduce a unified model-agnostic computation (UMAC) process, tailored to complement modern self-supervised learning algorithms. UMAC serves as a model-agnostic and global explainable artificial intelligence (XAI) methodology that is capable of systematically integrating and enhancing state-of-the-art algorithms. Through UMAC, we identify key computational mechanisms and craft a unified framework for self-supervised learning evaluation. Leveraging UMAC, we integrate an XAI methodology to enhance transparency and interpretability. Our systematic approach yields a 17.12% improvement in training time complexity and a 13.1% improvement in testing time complexity. Notably, improvements are observed in augmentation, encoder architecture, and auxiliary components within the network classifier. These findings underscore the importance of structured computational processes in enhancing model efficiency and fortifying algorithmic transparency in self-supervised learning, paving the way for more interpretable and efficient AI models.
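As a rough illustration of the computational stages the authors profile (augmentation, encoder, and auxiliary projection components), the sketch below times each stage of a SimCLR-style forward pass. The stage breakdown, toy encoder, and dummy batch are assumptions for illustration only and do not reproduce the UMAC procedure.

```python
# Hypothetical sketch: timing the main stages of a SimCLR-style self-supervised
# forward pass (augmentation, encoder, projection head). Models and data are
# illustrative placeholders, not the paper's UMAC process.
import time
import torch
import torch.nn as nn
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(96),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
])
encoder = nn.Sequential(            # small CNN stand-in for a ResNet backbone
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
projector = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

images = torch.rand(32, 3, 96, 96)  # dummy batch
timings = {}

t0 = time.perf_counter()
views = torch.stack([augment(img) for img in images])
timings["augmentation"] = time.perf_counter() - t0

t0 = time.perf_counter()
features = encoder(views)
timings["encoder"] = time.perf_counter() - t0

t0 = time.perf_counter()
embeddings = projector(features)
timings["projection_head"] = time.perf_counter() - t0

for stage, seconds in timings.items():
    print(f"{stage}: {seconds * 1e3:.1f} ms")
```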
2

CHAN, JASON, IRENA KOPRINSKA, and JOSIAH POON. "SEMI-SUPERVISED CLASSIFICATION USING BRIDGING." International Journal on Artificial Intelligence Tools 17, no. 03 (June 2008): 415–31. http://dx.doi.org/10.1142/s0218213008003972.

Abstract:
Traditional supervised classification algorithms require a large number of labelled examples to perform accurately. Semi-supervised classification algorithms attempt to overcome this major limitation by also using unlabelled examples. Unlabelled examples have also been used to improve nearest neighbour text classification in a method called bridging. In this paper, we propose the use of bridging in a semi-supervised setting. We introduce a new bridging algorithm that can be used as a base classifier in most semi-supervised approaches. We empirically show that the classification performance of two semi-supervised algorithms, self-learning and co-training, improves with the use of our new bridging algorithm in comparison to using the standard classifier, JRipper. We propose a similarity metric for short texts and also study the performance of self-learning with a number of instance selection heuristics.
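For readers unfamiliar with the self-learning (self-training) setup evaluated here, the sketch below shows a generic loop with a confidence-threshold instance-selection heuristic. The base classifier, threshold, and synthetic data are assumptions; the paper's bridging algorithm is not reproduced.

```python
# Minimal self-training sketch with a confidence-based instance-selection
# heuristic. LogisticRegression and the 0.95 threshold are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=600, n_features=20, random_state=0)
y_train = y_true.copy()
labeled = np.zeros(len(y_train), dtype=bool)
labeled[:40] = True                          # start from 40 labelled examples

clf = LogisticRegression(max_iter=1000)
for _ in range(5):                           # a few self-training rounds
    clf.fit(X[labeled], y_train[labeled])
    probs = clf.predict_proba(X[~labeled])
    confident = probs.max(axis=1) >= 0.95    # instance-selection heuristic
    if not confident.any():
        break
    idx = np.flatnonzero(~labeled)[confident]
    y_train[idx] = clf.predict(X[idx])       # accept the classifier's pseudo-labels
    labeled[idx] = True

print("labelled pool grew to", int(labeled.sum()), "examples")
if (~labeled).any():
    print("accuracy on still-unlabelled data:",
          clf.score(X[~labeled], y_true[~labeled]))
```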
3

KOBAYASHI, Yuya, Masahiro SUZUKI, and Yutaka MATSUO. "Scene Interpretation Method using Transformer and Self-supervised Learning." Transactions of the Japanese Society for Artificial Intelligence 37, no. 2 (March 1, 2022): I-L75_1–17. http://dx.doi.org/10.1527/tjsai.37-2_i-l75.

4

Hrycej, Tomas. "Supporting supervised learning by self-organization." Neurocomputing 4, no. 1-2 (February 1992): 17–30. http://dx.doi.org/10.1016/0925-2312(92)90040-v.

5

Wang, Fei, and Changshui Zhang. "Robust self-tuning semi-supervised learning." Neurocomputing 70, no. 16-18 (October 2007): 2931–39. http://dx.doi.org/10.1016/j.neucom.2006.11.004.

6

Biscione, Valerio, and Jeffrey S. Bowers. "Learning online visual invariances for novel objects via supervised and self-supervised training." Neural Networks 150 (June 2022): 222–36. http://dx.doi.org/10.1016/j.neunet.2022.02.017.

7

Ma, Jun, Yakun Wen, and Liming Yang. "Lagrangian supervised and semi-supervised extreme learning machine." Applied Intelligence 49, no. 2 (August 25, 2018): 303–18. http://dx.doi.org/10.1007/s10489-018-1273-4.

8

Che, Feihu, Guohua Yang, Dawei Zhang, Jianhua Tao, and Tong Liu. "Self-supervised graph representation learning via bootstrapping." Neurocomputing 456 (October 2021): 88–96. http://dx.doi.org/10.1016/j.neucom.2021.03.123.

9

Gu, Nannan, Pengying Fan, Mingyu Fan, and Di Wang. "Structure regularized self-paced learning for robust semi-supervised pattern classification." Neural Computing and Applications 31, no. 10 (April 19, 2018): 6559–74. http://dx.doi.org/10.1007/s00521-018-3478-1.

10

Saravana Kumar, N. M. "IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE IN IMPARTING EDUCATION AND EVALUATING STUDENT PERFORMANCE." Journal of Artificial Intelligence and Capsule Networks 01, no. 01 (September 2, 2019): 1–9. http://dx.doi.org/10.36548/jaicn.2019.1.001.

Abstract:
Simulation of human intelligence process is made possible with the help of artificial intelligence. The learning, reasoning and self-correction properties are made possible in computer systems. Along with AI, other technologies are combined effectively in order to create remarkable applications. We apply the changing role of AI and its techniques in new educational paradigms to create a personalised teaching-learning environment. Features like recognition, pattern matching, decision making, reasoning, problem solving and so on are applied along with knowledge based system and supervised machine learning for a complete learning and assessment process.

Dissertations on the topic "Self-supervised learning (artificial intelligence)":

1

Denize, Julien. "Self-supervised representation learning and applications to image and video analysis." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR37.

Abstract:
In this thesis, we develop approaches to self-supervised learning for image and video analysis. Self-supervised representation learning makes it possible to pretrain neural networks on general concepts without labels before specializing them in downstream tasks faster and with few annotations. We present three contributions to self-supervised image and video representation learning. First, we introduce the theoretical paradigm of soft contrastive learning and its practical implementation, Similarity Contrastive Estimation (SCE), which connects contrastive and relational learning for image representation. Second, SCE is extended to global temporal video representation learning. Lastly, we propose COMEDIAN, a pipeline for local-temporal video representation learning with transformers. These contributions achieved state-of-the-art results on multiple benchmarks and led to several published academic and technical contributions.
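As background for the contrastive side of SCE, the sketch below implements the standard InfoNCE-style objective over two augmented views of the same batch. The soft, relational targets that distinguish SCE are not reproduced, and the embeddings here are random placeholders.

```python
# Sketch of the plain InfoNCE-style contrastive loss that soft contrastive
# methods such as SCE build on; the relational (soft) targets are omitted.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(256, 128), torch.randn(256, 128)   # placeholder embeddings
print(float(info_nce(z1, z2)))
```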
2

Nett, Ryan. "Dataset and Evaluation of Self-Supervised Learning for Panoramic Depth Estimation." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2234.

Abstract:
Depth detection is a very common computer vision problem. It shows up primarily in robotics, automation, or 3D visualization domains, as it is essential for converting images to point clouds. One of the poster child applications is self driving cars. Currently, the best methods for depth detection are either very expensive, like LIDAR, or require precise calibration, like stereo cameras. These costs have given rise to attempts to detect depth from a monocular camera (a single camera). While this is possible, it is harder than LIDAR or stereo methods since depth can't be measured from monocular images, it has to be inferred. A good example is covering one eye: you still have some idea how far away things are, but it's not exact. Neural networks are a natural fit for this. Here, we build on previous neural network methods by applying a recent state of the art model to panoramic images in addition to pinhole ones and performing a comparative evaluation. First, we create a simulated depth detection dataset that lends itself to panoramic comparisons and contains pre-made cylindrical and spherical panoramas. We then modify monodepth2 to support cylindrical and cubemap panoramas, incorporating current best practices for depth detection on those panorama types, and evaluate its performance for each type of image using our dataset. We also consider the resources used in training and other qualitative factors.
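As a minimal illustration of why depth maps matter here (the abstract notes they are essential for converting images to point clouds), the sketch below back-projects a pinhole depth map into 3D points using assumed camera intrinsics. Panoramic (cylindrical or spherical) projections require a different unprojection and are not shown.

```python
# Back-projecting a pinhole depth map to a point cloud. The intrinsics and the
# random depth map are placeholders for real calibration data and predictions.
import numpy as np

H, W = 240, 320
fx = fy = 200.0                 # assumed focal lengths (pixels)
cx, cy = W / 2, H / 2           # assumed principal point
depth = np.random.uniform(1.0, 10.0, size=(H, W))   # dummy depth map (metres)

u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - cx) / fx * depth
y = (v - cy) / fy * depth
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
print(points.shape)             # (H*W, 3) point cloud
```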
3

Stanescu, Ana. "Semi-supervised learning for biological sequence classification." Diss., Kansas State University, 2015. http://hdl.handle.net/2097/35810.

Abstract:
Doctor of Philosophy, Department of Computing and Information Sciences, Doina Caragea
Successful advances in biochemical technologies have led to inexpensive, time-efficient production of massive volumes of data, DNA and protein sequences. As a result, numerous computational methods for genome annotation have emerged, including machine learning and statistical analysis approaches that practically and efficiently analyze and interpret data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data in order to build quality classifiers. The process of labeling data can be expensive and time consuming, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on semi-supervised learning approaches for biological sequence classification. Although an attractive concept, semi-supervised learning does not invariably work as intended. Since the assumptions made by learning algorithms cannot be easily verified without considerable domain knowledge or data exploration, semi-supervised learning is not always "safe" to use. Advantageous utilization of the unlabeled data is problem dependent, and more research is needed to identify algorithms that can be used to increase the effectiveness of semi-supervised learning, in general, and for bioinformatics problems, in particular. At a high level, we aim to identify semi-supervised algorithms and data representations that can be used to learn effective classifiers for genome annotation tasks such as cassette exon identification, splice site identification, and protein localization. In addition, one specific challenge that we address is the "data imbalance" problem, which is prevalent in many domains, including bioinformatics. The data imbalance phenomenon arises when one of the classes to be predicted is underrepresented in the data because instances belonging to that class are rare (noteworthy cases) or difficult to obtain. Ironically, minority classes are typically the most important to learn, because they may be associated with special cases, as in the case of splice site prediction. We propose two main techniques to deal with the data imbalance problem, namely a technique based on "dynamic balancing" (augmenting the originally labeled data only with positive instances during the semi-supervised iterations of the algorithms) and another technique based on ensemble approaches. The results show that with limited amounts of labeled data, semisupervised approaches can successfully leverage the unlabeled data, thereby surpassing their completely supervised counterparts. A type of semi-supervised learning, known as "transductive" learning aims to classify the unlabeled data without generalizing to new, previously not encountered instances. Theoretically, this aspect makes transductive learning particularly suitable for the task of genome annotation, in which an entirely sequenced genome is typically available, sometimes accompanied by limited annotation. We study and evaluate various transductive approaches (such as transductive support vector machines and graph based approaches) and sequence representations for the problems of cassette exon identification. The results obtained demonstrate the effectiveness of transductive algorithms in sequence annotation tasks.
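As a small illustration of the transductive, graph-based family of methods discussed above, the sketch below runs scikit-learn's LabelSpreading on a synthetic imbalanced dataset with most labels hidden. The data, kernel, and parameters are placeholders rather than the thesis's DNA/protein sequence representations.

```python
# Graph-based transductive baseline (label spreading) on synthetic, imbalanced
# data; unlabelled points are marked with -1 and classified in-place.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

X, y = make_classification(n_samples=500, n_features=20, weights=[0.9, 0.1],
                           random_state=0)      # imbalanced classes
y_partial = y.copy()
rng = np.random.default_rng(0)
unlabeled = rng.random(len(y)) < 0.9            # hide 90% of the labels
y_partial[unlabeled] = -1                       # -1 marks unlabelled points

model = LabelSpreading(kernel="knn", n_neighbors=10)
model.fit(X, y_partial)
acc = (model.transduction_[unlabeled] == y[unlabeled]).mean()
print(f"transductive accuracy on unlabelled points: {acc:.2f}")
```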
4

Abou-Moustafa, Karim. "Metric learning revisited: new approaches for supervised and unsupervised metric learning with analysis and algorithms." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106370.

Abstract:
In machine learning one is usually given a data set of real high dimensional vectors X, based on which it is desired to select a hypothesis θ from the space of hypotheses Θ using a learning algorithm. An immediate assumption that is usually imposed on X is that it is a subset from the very general embedding space Rp which makes the Euclidean distance ∥•∥2 to become the default metric for the elements of X. Since various learning algorithms assume that the input space is Rp with its endowed metric ∥•∥2 as a (dis)similarity measure, it follows that selecting hypothesis θ becomes intrinsically tied to the Euclidean distance. Metric learning is the problem of selecting a specific metric dX from a certain family of metrics D based on the properties of the elements in the set X. Under some performance measure, the metric dX is expected to perform better on X than any other metric d 2 D. If the learning algorithm replaces the very general metric ∥•∥2 with the metric dX , then selecting hypothesis θ will be tied to the more specific metric dX which carries all the information on the properties of the elements in X. In this thesis I propose two algorithms for learning the metric dX ; the first for supervised learning settings, and the second for unsupervised, as well as for supervised and semi-supervised settings. In particular, I propose algorithms that take into consideration the structure and geometry of X on one hand, and the characteristics of real world data sets on the other. However, if we are also seeking dimensionality reduction, then under some mild assumptions on the topology of X, and based on the available a priori information, one can learn an embedding for X into a low dimensional Euclidean space Rp0, p0 << p, where the Euclidean distance better reveals the similarities between the elements of X and their groupings (clusters). That is, as a by-product, we obtain dimensionality reduction together with metric learning. In the supervised setting, I propose PARDA, or Pareto discriminant analysis for discriminative linear dimensionality reduction. PARDA is based on the machinery of multi-objective optimization; simultaneously optimizing multiple, possibly conflicting, objective functions. This allows PARDA to adapt to the class topology in the lower dimensional space, and naturally handles the class masking problem that is inherent in Fisher's discriminant analysis framework for multiclass problems. As a result, PARDA yields significantly better classification results when compared with modern techniques for discriminative dimensionality reduction. In the unsupervised setting, I propose an algorithmic framework, denoted by ?? (note the different notation), that encapsulates spectral manifold learning algorithms and gears them for metric learning. The framework ?? captures the local structure and the local density information from each point in a data set, and hence it carries all the information on the varying sample density in the input space. The structure of ?? induces two distance metrics for its elements, the Bhattacharyya-Riemann metric dBR and the Jeffreys-Riemann metric dJR. Both metrics reorganize the proximity between the points in X based on the local structure and density around each point. As a result, when combining the metric space (??, dBR) or (??, dJR) with spectral clustering and Euclidean embedding, they yield significant improvements in clustering accuracies and error rates for a large variety of clustering and classification tasks.
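For context, a common supervised metric-learning baseline learns a linear map L so that distances are computed as d(x, y) = ||L x - L y||_2, a Mahalanobis-type metric. The sketch below uses scikit-learn's Neighborhood Components Analysis as one such baseline; it is not the PARDA or spectral framework proposed in the thesis.

```python
# Supervised metric learning with a learned linear transformation (NCA),
# followed by k-NN in the transformed space. Dataset and parameters are
# illustrative; this is a standard baseline, not the thesis's algorithms.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nca_knn = make_pipeline(
    NeighborhoodComponentsAnalysis(n_components=2, random_state=0),
    KNeighborsClassifier(n_neighbors=3),
)
nca_knn.fit(X_tr, y_tr)
print("k-NN accuracy under the learned metric:", nca_knn.score(X_te, y_te))
```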
5

Halpern, Yonatan. "Semi-Supervised Learning for Electronic Phenotyping in Support of Precision Medicine." Thesis, New York University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10192124.

Abstract:

Medical informatics plays an important role in precision medicine, delivering the right information to the right person, at the right time. With the introduction and widespread adoption of electronic medical records, in the United States and world-wide, there is now a tremendous amount of health data available for analysis.

Electronic record phenotyping refers to the task of determining, from an electronic medical record entry, a concise descriptor of the patient, comprising their medical history, current problems, presentation, etc. In inferring such a phenotype descriptor from the record, a computer, in a sense, "understands" the relevant parts of the record. These phenotypes can then be used in downstream applications such as cohort selection for retrospective studies, real-time clinical decision support, contextual displays, intelligent search, and precise alerting mechanisms.

We are faced with three main challenges:

First, the unstructured and incomplete nature of the data recorded in the electronic medical records requires special attention. Relevant information can be missing or written in an obscure way that the computer does not understand.

Second, the scale of the data makes it important to develop efficient methods at all steps of the machine learning pipeline, including data collection and labeling, model learning and inference.

Third, large parts of medicine are well understood by health professionals. How do we combine the expert knowledge of specialists with the statistical insights from the electronic medical record?

Probabilistic graphical models such as Bayesian networks provide a useful abstraction for quantifying uncertainty and describing complex dependencies in data. Although significant progress has been made over the last decade on approximate inference algorithms and structure learning from complete data, learning models with incomplete data remains one of machine learning’s most challenging problems. How can we model the effects of latent variables that are not directly observed?

The first part of the thesis presents two different structural conditions under which learning with latent variables is computationally tractable. The first is the "anchored" condition, where every latent variable has at least one child that is not shared by any other parent. The second is the "singly-coupled" condition, where every latent variable is connected to at least three children that satisfy conditional independence (possibly after transforming the data).

Variables that satisfy these conditions can be specified by an expert without requiring that the entire structure or its parameters be specified, allowing for effective use of human expertise and making room for statistical learning to do some of the heavy lifting. For both the anchored and singly-coupled conditions, practical algorithms are presented.

The second part of the thesis describes real-life applications using the anchored condition for electronic phenotyping. A human-in-the-loop learning system and a functioning emergency informatics system for real-time extraction of important clinical variables are described and evaluated.

The algorithms and discussion presented here were developed for the purpose of improving healthcare, but are much more widely applicable, dealing with the very basic questions of identifiability and learning models with latent variables - a problem that lies at the very heart of the natural and social sciences.

6

Taylor, Farrell R. "Evaluation of Supervised Machine Learning for Classifying Video Traffic." NSUWorks, 2016. http://nsuworks.nova.edu/gscis_etd/972.

Abstract:
Operational deployment of machine learning based classifiers in real-world networks has become an important area of research to support automated real-time quality of service decisions by Internet service providers (ISPs) and more generally, network administrators. As the Internet has evolved, multimedia applications, such as voice over Internet protocol (VoIP), gaming, and video streaming, have become commonplace. These traffic types are sensitive to network perturbations, e.g. jitter and delay. Automated quality of service (QoS) capabilities offer a degree of relief by prioritizing network traffic without human intervention; however, they rely on the integration of real-time traffic classification to identify applications. Accordingly, researchers have begun to explore various techniques to incorporate into real-world networks. One method that shows promise is the use of machine learning techniques trained on sub-flows – a small number of consecutive packets selected from different phases of the full application flow. Generally, research on machine learning classifiers was based on statistics derived from full traffic flows, which can limit their effectiveness (recall and precision) if partial data captures are encountered by the classifier. In real-world networks, partial data captures can be caused by unscheduled restarts/reboots of the classifier or data capture capabilities, network interruptions, or application errors. Research on the use of machine learning algorithms trained on sub-flows to classify VoIP and gaming traffic has shown promise, even when partial data captures are encountered. This research extends that work by applying machine learning algorithms trained on multiple sub-flows to classification of video streaming traffic. Results from this research indicate that sub-flow classifiers have much higher and more consistent recall and precision than full flow classifiers when applied to video traffic. Moreover, the application of ensemble methods, specifically Bagging and adaptive boosting (AdaBoost) further improves recall and precision for sub-flow classifiers. Findings indicate sub-flow classifiers based on AdaBoost in combination with the C4.5 algorithm exhibited the best performance with the most consistent results for classification of video streaming traffic.
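The ensemble configuration described above can be sketched with scikit-learn, using AdaBoost over shallow decision trees (scikit-learn's CART trees stand in for C4.5). The sub-flow features below are synthetic placeholders for real packet-level statistics.

```python
# AdaBoost over shallow decision trees on toy "sub-flow" features. The feature
# columns and labels are synthetic stand-ins for real traffic statistics.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_subflows = 400
# toy sub-flow statistics: mean/std packet size, mean inter-arrival time, byte rate
X = rng.normal(size=(n_subflows, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=n_subflows) > 0).astype(int)

# `estimator=` requires scikit-learn >= 1.2 (older releases use `base_estimator=`)
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                         n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```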
7

Coursey, Kino High. "An Approach Towards Self-Supervised Classification Using Cyc." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5470/.

Abstract:
Due to the long duration required to perform manual knowledge entry by human knowledge engineers it is desirable to find methods to automatically acquire knowledge about the world by accessing online information. In this work I examine using the Cyc ontology to guide the creation of Naïve Bayes classifiers to provide knowledge about items described in Wikipedia articles. Given an initial set of Wikipedia articles the system uses the ontology to create positive and negative training sets for the classifiers in each category. The order in which classifiers are generated and used to test articles is also guided by the ontology. The research conducted shows that a system can be created that utilizes statistical text classification methods to extract information from an ad-hoc generated information source like Wikipedia for use in a formal semantic ontology like Cyc. Benefits and limitations of the system are discussed along with future work.
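The statistical text-classification component can be illustrated with a bag-of-words Naive Bayes model as below. The tiny corpus and category names are invented, and the Cyc-guided construction of positive and negative training sets is not reproduced.

```python
# Naive Bayes text classification over bag-of-words counts; the documents and
# labels are made-up placeholders for ontology-selected Wikipedia articles.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "The river flows through the valley into the sea",
    "The mountain range rises above the plateau",
    "The senator proposed a new bill in parliament",
    "The election results were announced by the government",
]
train_labels = ["Geography", "Geography", "Politics", "Politics"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)
print(model.predict(["The parliament debated the bill"]))
```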
8

Livi, Federico. "Supervised Learning with Graph Structured Data for Transprecision Computing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19714/.

Abstract:
In the era of the Internet of Things, Big Data, and Industry 4.0, the growing demand for resources and tools to process the vast amount of data and information available at any moment has drawn attention to problems, no longer negligible, concerning energy consumption and the costs that derive from it. This is the so-called power wall: the physical difficulty for machines to sustain the power consumption required to process ever larger volumes of data and to execute increasingly sophisticated tasks. Among the techniques that have emerged in recent years to contain this problem is the so-called Transprecision Computing, an approach that aims to improve energy consumption at the expense of precision. Indeed, by reducing the number of precision bits in floating-point operations, it is possible to obtain greater energy efficiency, but also a non-linear decrease in computational precision. Depending on the application domain, this tradeoff can lead to significant improvements, but it remains difficult to find the optimal precision for every variable while respecting an upper bound on the error. In the literature, this problem is therefore addressed using heuristics and methodologies that directly involve optimization and machine learning models. In this thesis, we seek to further improve these approaches by introducing new machine learning models that also exploit the analysis of complex relationships among variables. To this end, we also examine techniques that work directly on graph-structured data, through the study of more complex neural networks, the so-called graph convolutional networks.
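A toy illustration of the precision/accuracy trade-off behind transprecision computing: the same reduction is computed at three floating-point precisions. Actual energy savings are hardware-dependent and are not measured here; fewer mantissa bits simply serve as a proxy for cheaper arithmetic.

```python
# Comparing the error of the same summation at float64/float32/float16;
# illustrative only, no energy measurement is performed.
import numpy as np

x = np.random.default_rng(0).uniform(size=50_000)
reference = x.astype(np.float64).sum()

for dtype in (np.float64, np.float32, np.float16):
    approx = x.astype(dtype).sum(dtype=dtype)
    rel_err = abs(float(approx) - reference) / reference
    print(f"{np.dtype(dtype).name}: relative error = {rel_err:.2e}")
```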
9

Rossi, Alex. "Self-supervised information retrieval: a novel approach based on Deep Metric Learning and Neural Language Models." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
Most of the existing open-source search engines, utilize keyword or tf-idf based techniques to find relevant documents and web pages relative to an input query. Although these methods, with the help of a page rank or knowledge graphs, proved to be effective in some cases, they often fail to retrieve relevant instances for more complicated queries that would require a semantic understanding to be exploited. In this Thesis, a self-supervised information retrieval system based on transformers is employed to build a semantic search engine over the library of Gruppo Maggioli company. Semantic search or search with meaning can refer to an understanding of the query, instead of simply finding words matches and, in general, it represents knowledge in a way suitable for retrieval. We chose to investigate a new self-supervised strategy to handle the training of unlabeled data based on the creation of pairs of ’artificial’ queries and the respective positive passages. We claim that by removing the reliance on labeled data, we may use the large volume of unlabeled material on the web without being limited to languages or domains where labeled data is abundant.
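One common way to build such 'artificial' query-passage pairs, in the spirit of the Inverse Cloze Task, is sketched below: a sentence drawn from a passage acts as the pseudo-query and the remaining sentences form its positive passage. The exact generation strategy used in the thesis may differ.

```python
# Creating pseudo (query, positive passage) pairs from unlabeled documents,
# Inverse-Cloze-Task style; an assumption about the pair-generation strategy.
import random

def make_pseudo_pairs(documents, seed=0):
    rng = random.Random(seed)
    pairs = []
    for doc in documents:
        sentences = [s.strip() for s in doc.split(".") if s.strip()]
        if len(sentences) < 2:
            continue
        i = rng.randrange(len(sentences))
        query = sentences[i]
        passage = ". ".join(s for j, s in enumerate(sentences) if j != i)
        pairs.append((query, passage))      # (pseudo-query, positive passage)
    return pairs

docs = ["Semantic search interprets the meaning of a query. "
        "It goes beyond exact keyword matching. "
        "Dense retrievers encode queries and passages into vectors."]
for q, p in make_pseudo_pairs(docs):
    print("QUERY:", q, "\nPASSAGE:", p)
```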
10

Stroulia, Eleni. "Failure-driven learning as model-based self-redesign." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/8291.


Books on the topic "Self-supervised learning (artificial intelligence)":

1

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

2

Kanerva, Pentti. The organization of an autonomous learning system. Moffett Field, CA: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1988.

3

Ekici, Berk. Towards self-sufficient high-rises: Performance optimisation using artificial intelligence. Delft: BK Books, 2022.

4

He, Haibo. Self-adaptive systems for machine intelligence. Hoboken, N.J: Wiley-Interscience, 2011.

5

Najim, K. Learning automata: Theory and applications. Oxford, OX, U.K: Pergamon, 1994.

6

Wang, Huaiqing. Manufacturing intelligence for industrial engineering: Methods for system self-organization, learning, and adaptation. Hershey PA: Engineering Science Reference, 2010.

7

Zhou, Zude. Manufacturing intelligence for industrial engineering: Methods for system self-organization, learning, and adaptation. Hershey PA: Engineering Science Reference, 2010.

8

Zhou, Zude. Manufacturing intelligence for industrial engineering: Methods for system self-organization, learning, and adaptation. Hershey PA: Engineering Science Reference, 2010.

9

Zhou, Zude. Manufacturing intelligence for industrial engineering: Methods for system self-organization, learning, and adaptation. Hershey, PA: Engineering Science Reference, 2010.

10

Klimenko, A. V. Osnovy estestvennogo intellekta: Rekurrentnai͡a︡ teorii͡a︡ samoorganizat͡s︡ii : versii͡a︡ 3. Rostov-na-Donu: Izd-vo Rostovskogo universiteta, 1994.


Book chapters on the topic "Self-supervised learning (artificial intelligence)":

1

Kim, Haesik. "Supervised Learning." In Artificial Intelligence for 6G, 87–182. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95041-5_4.

2

Talukdar, Jyotismita, Thipendra P. Singh, and Basanta Barman. "Supervised Learning." In Artificial Intelligence in Healthcare Industry, 51–86. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3157-6_4.

3

Liu, Dongxin, and Tarek Abdelzaher. "Self-Supervised Learning from Unlabeled IoT Data." In Artificial Intelligence for Edge Computing, 27–110. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-40787-1_2.

4

Ye, Linwei, and Zhenhua Wang. "Self-supervised Meta Auxiliary Learning for Actor and Action Video Segmentation from Natural Language." In Artificial Intelligence, 317–28. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-8850-1_26.

5

Long, Jiefeng, Chun Li, and Lin Shang. "Few-Shot Crowd Counting via Self-supervised Learning." In PRICAI 2021: Trends in Artificial Intelligence, 379–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89370-5_28.

6

Siriborvornratanakul, Thitirat. "Reducing Human Annotation Effort Using Self-supervised Learning for Image Segmentation." In Artificial Intelligence in HCI, 436–45. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60606-9_26.

7

Slama, Dirk. "Artificial Intelligence 101." In The Digital Playbook, 11–17. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-88221-1_2.

Abstract:
This chapter provides an Artificial Intelligence 101, including a basic overview, a summary of Supervised, Unsupervised and Reinforcement Learning, as well as Deep Learning and Artificial Neural Networks (Fig. 2.1).
8

Yang, Yu, Fang Wan, Qixiang Ye, and Xiangyang Ji. "Weakly Supervised Learning of Instance Segmentation with Confidence Feedback." In Artificial Intelligence, 392–403. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20497-5_32.

9

Chen, Zhiyuan, and Bing Liu. "Lifelong Supervised Learning." In Synthesis Lectures on Artificial Intelligence and Machine Learning, 27–51. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-01575-5_3.

10

Mosalam, Khalid M., and Yuqing Gao. "Semi-Supervised Learning." In Artificial Intelligence in Vision-Based Structural Health Monitoring, 279–305. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52407-3_10.


Conference papers on the topic "Self-supervised learning (artificial intelligence)":

1

An, Yuexuan, Hui Xue, Xingyu Zhao, and Lu Zhang. "Conditional Self-Supervised Learning for Few-Shot Classification." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/295.

Abstract:
How to learn a transferable feature representation from limited examples is a key challenge for few-shot classification. Self-supervision as an auxiliary task to the main supervised few-shot task is considered to be a conceivable way to solve the problem since self-supervision can provide additional structural information easily ignored by the main task. However, learning a good representation by traditional self-supervised methods is usually dependent on large training samples. In few-shot scenarios, due to the lack of sufficient samples, these self-supervised methods might learn a biased representation, which more likely leads to the wrong guidance for the main tasks and finally causes the performance degradation. In this paper, we propose conditional self-supervised learning (CSS) to use auxiliary information to guide the representation learning of self-supervised tasks. Specifically, CSS leverages supervised information as prior knowledge to shape and improve the learning feature manifold of self-supervision without auxiliary unlabeled data, so as to reduce representation bias and mine more effective semantic information. Moreover, CSS exploits more meaningful information through supervised and the improved self-supervised learning respectively and integrates the information into a unified distribution, which can further enrich and broaden the original representation. Extensive experiments demonstrate that our proposed method without any fine-tuning can achieve a significant accuracy improvement on the few-shot classification scenarios compared to the state-of-the-art few-shot learning methods.
2

Liang, Yudong, Bin Wang, Wangmeng Zuo, Jiaying Liu, and Wenqi Ren. "Self-supervised Learning and Adaptation for Single Image Dehazing." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/159.

Abstract:
Existing deep image dehazing methods usually depend on supervised learning with a large number of hazy-clean image pairs which are expensive or difficult to collect. Moreover, dehazing performance of the learned model may deteriorate significantly when the training hazy-clean image pairs are insufficient and are different from real hazy images in applications. In this paper, we show that exploiting large scale training set and adapting to real hazy images are two critical issues in learning effective deep dehazing models. Under the depth guidance estimated by a well-trained depth estimation network, we leverage the conventional atmospheric scattering model to generate massive hazy-clean image pairs for the self-supervised pre-training of dehazing network. Furthermore, self-supervised adaptation is presented to adapt pre-trained network to real hazy images. Learning without forgetting strategy is also deployed in self-supervised adaptation by combining self-supervision and model adaptation via contrastive learning. Experiments show that our proposed method performs favorably against the state-of-the-art methods, and is quite efficient, i.e., handling a 4K image in 23 ms. The codes are available at https://github.com/DongLiangSXU/SLAdehazing.
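The conventional atmospheric scattering model the authors use to synthesize hazy-clean pairs, I = J·t + A·(1 − t) with transmission t = exp(−β·d), can be sketched as follows. The clean image and depth map below are random placeholders for real images and estimated depth.

```python
# Synthesising a hazy image from a clean image and a depth map via the
# atmospheric scattering model; inputs are random placeholders.
import numpy as np

H, W = 120, 160
clean = np.random.rand(H, W, 3)                    # J: clean image in [0, 1]
depth = np.random.uniform(1.0, 10.0, size=(H, W))  # estimated scene depth
beta, A = 0.3, 0.9                                 # scattering coefficient, airlight

t = np.exp(-beta * depth)[..., None]               # transmission map
hazy = clean * t + A * (1.0 - t)                   # I: synthetic hazy image
print(hazy.min(), hazy.max())
```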
3

Shen, Jiahao. "Self-supervised boundary offline reinforcement learning." In International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2023), edited by Harris Wu and Haiwu Li. SPIE, 2024. http://dx.doi.org/10.1117/12.3026355.

4

Ismail-Fawaz, Ali, Maxime Devanne, Jonathan Weber, and Germain Forestier. "Enhancing Time Series Classification with Self-Supervised Learning." In 15th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2023. http://dx.doi.org/10.5220/0011611300003393.

5

Tang, Yixin, Hua Cheng, Yiquan Fang, and Yiming Pan. "In-Batch Negatives' Enhanced Self-Supervised Learning." In 2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2022. http://dx.doi.org/10.1109/ictai56018.2022.00031.

6

Wicaksono, R. Satrio Hariomurti, Ali Akbar Septiandri, and Ade Jamal. "Human Embryo Classification Using Self-Supervised Learning." In 2021 2nd International Conference on Artificial Intelligence and Data Sciences (AiDAS). IEEE, 2021. http://dx.doi.org/10.1109/aidas53897.2021.9574328.

7

Khan, Adnan, Sarah AlBarri, and Muhammad Arslan Manzoor. "Contrastive Self-Supervised Learning: A Survey on Different Architectures." In 2022 2nd International Conference on Artificial Intelligence (ICAI). IEEE, 2022. http://dx.doi.org/10.1109/icai55435.2022.9773725.

8

Basaj, Dominika, Witold Oleszkiewicz, Igor Sieradzki, Michał Górszczak, Barbara Rychalska, Tomasz Trzcinski, and Bartosz Zieliński. "Explaining Self-Supervised Image Representations with Visual Probing." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/82.

Abstract:
Recently introduced self-supervised methods for image representation learning provide on par or superior results to their fully supervised competitors, yet the corresponding efforts to explain the self-supervised approaches lag behind. Motivated by this observation, we introduce a novel visual probing framework for explaining the self-supervised models by leveraging probing tasks employed previously in natural language processing. The probing tasks require knowledge about semantic relationships between image parts. Hence, we propose a systematic approach to obtain analogs of natural language in vision, such as visual words, context, and taxonomy. We show the effectiveness and applicability of those analogs in the context of explaining self-supervised representations. Our key findings emphasize that relations between language and vision can serve as an effective yet intuitive tool for discovering how machine learning models work, independently of data modality. Our work opens a plethora of research pathways towards more explainable and transparent AI.
9

Bhattacharjee, Amrita, Mansooreh Karami, and Huan Liu. "Text Transformations in Contrastive Self-Supervised Learning: A Review." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/757.

Abstract:
Contrastive self-supervised learning has become a prominent technique in representation learning. The main step in these methods is to contrast semantically similar and dissimilar pairs of samples. However, in the domain of Natural Language Processing (NLP), the augmentation methods used in creating similar pairs with regard to contrastive learning (CL) assumptions are challenging. This is because, even simply modifying a word in the input might change the semantic meaning of the sentence, and hence, would violate the distributional hypothesis. In this review paper, we formalize the contrastive learning framework, emphasize the considerations that need to be addressed in the data transformation step, and review the state-of-the-art methods and evaluations for contrastive representation learning in NLP. Finally, we describe some challenges and potential directions for learning better text representations using contrastive methods.
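Two simple text transformations of the kind surveyed in this review are sketched below for building positive pairs. As the authors stress, even small edits can change sentence semantics, so these are illustrative examples rather than recommended defaults.

```python
# Two toy text augmentations for constructing contrastive positive pairs:
# random token dropout and local token shuffling. Illustrative only; neither
# is guaranteed to preserve sentence meaning.
import random

def random_token_dropout(sentence, p=0.1, seed=0):
    rng = random.Random(seed)
    tokens = sentence.split()
    kept = [t for t in tokens if rng.random() > p] or tokens[:1]
    return " ".join(kept)

def local_token_shuffle(sentence, window=3, seed=0):
    rng = random.Random(seed)
    tokens = sentence.split()
    for start in range(0, len(tokens), window):
        chunk = tokens[start:start + window]
        rng.shuffle(chunk)
        tokens[start:start + window] = chunk
    return " ".join(tokens)

s = "contrastive learning builds representations by comparing similar and dissimilar pairs"
print(random_token_dropout(s))
print(local_token_shuffle(s))
```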
10

Yang, XiaoYu, and CaiFeng Zhou. "Self-supervised learning-based waste classification model." In 3rd International Conference on Artificial Intelligence, Automation, and High-Performance Computing (AIAHPC2023), edited by Dimitrios A. Karras and Simon X. Yang. SPIE, 2023. http://dx.doi.org/10.1117/12.2684730.


Reports of organizations on the topic "Self-supervised learning (artificial intelligence)":

1

Alexander, Serena, Bo Yang, Owen Hussey, and Derek Hicks. Examining the Externalities of Highway Capacity Expansions in California: An Analysis of Land Use and Land Cover (LULC) Using Remote Sensing Technology. Mineta Transportation Institute, November 2023. http://dx.doi.org/10.31979/mti.2023.2251.

2

Kulhandjian, Hovannes. AI-Based Bridge and Road Inspection Framework Using Drones. Mineta Transportation Institute, November 2023. http://dx.doi.org/10.31979/mti.2023.2226.

Abstract:
There are over 590,000 bridges dispersed across the roadway network that stretches across the United States alone. Each bridge with a length of 20 feet or greater must be inspected at least once every 24 months, according to the Federal Highway Act (FHWA) of 1968. This research developed an artificial intelligence (AI)-based framework for bridge and road inspection using drones with multiple sensors collecting capabilities. It is not sufficient to conduct inspections of bridges and roads using cameras alone, so the research team utilized an infrared (IR) camera along with a high-resolution optical camera. In many instances, the IR camera can provide more details to the interior structural damages of a bridge or a road surface than an optical camera, which is more suitable for inspecting damages on the surface of a bridge or a road. In addition, the drone inspection system is equipped with a minicomputer that runs Machine Learning algorithms. These algorithms enable autonomous drone navigation, image capture of the bridge or road structure, and analysis of the images. Whenever any damage is detected, the location coordinates are saved. Thus, the drone can self-operate and carry out the inspection process using advanced AI algorithms developed by the research team. The experimental results reveal the system can detect potholes with an average accuracy of 84.62% using the visible light camera and 95.12% using a thermal camera. This developed bridge and road inspection framework can save time, money, and lives by automating and having drones conduct major inspection operations in place of humans.
