Academic literature on the topic 'Metric learning paradigm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Metric learning paradigm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Metric learning paradigm"

1. Brockmeier, Austin J., John S. Choi, Evan G. Kriminger, Joseph T. Francis, and Jose C. Principe. "Neural Decoding with Kernel-Based Metric Learning." Neural Computation 26, no. 6 (2014): 1080–107. http://dx.doi.org/10.1162/neco_a_00591.

Abstract:
In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus—exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metric
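The learned-metric idea this abstract describes can be made concrete with a minimal sketch. The following is not the authors' kernel-based method, only an illustration of a distance parameterized by a matrix M (a Mahalanobis form), the quantity that metric-learning methods tune for a task:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y).

    M must be symmetric positive semi-definite; M = I recovers the
    squared Euclidean distance. Learning M from data is the core of
    many metric-learning methods.
    """
    d = x - y
    return float(d @ M @ d)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# With M = I this is the plain squared Euclidean distance: 2.0.
assert np.isclose(mahalanobis(x, y, np.eye(2)), 2.0)

# A non-identity M reweights directions, changing which responses
# count as "similar" -- exactly what a decoding task can tune.
M = np.array([[2.0, 0.0], [0.0, 0.5]])
print(mahalanobis(x, y, M))  # 2.5
```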
2. Saha, Soumadeep, Utpal Garain, Arijit Ukil, Arpan Pal, and Sundeep Khandelwal. "MedTric: A clinically applicable metric for evaluation of multi-label computational diagnostic systems." PLOS ONE 18, no. 8 (2023): e0283895. http://dx.doi.org/10.1371/journal.pone.0283895.

Abstract:
When judging the quality of a computational system for a pathological screening task, several factors seem to be important, like sensitivity, specificity, accuracy, etc. With machine learning based approaches showing promise in the multi-label paradigm, they are being widely adopted to diagnostics and digital therapeutics. Metrics are usually borrowed from machine learning literature, and the current consensus is to report results on a diverse set of metrics. It is infeasible to compare efficacy of computational systems which have been evaluated on different sets of metrics. From a diagnostic
3. Gong, Xiuwen, Dong Yuan, and Wei Bao. "Online Metric Learning for Multi-Label Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 4012–19. http://dx.doi.org/10.1609/aaai.v34i04.5818.

Abstract:
Existing research into online multi-label classification, such as online sequential multi-label extreme learning machine (OSML-ELM) and stochastic gradient descent (SGD), has achieved promising performance. However, these works lack an analysis of loss function and do not consider label dependency. Accordingly, to fill the current research gap, we propose a novel online metric learning paradigm for multi-label classification. More specifically, we first project instances and labels into a lower dimension for comparison, then leverage the large margin principle to learn a metric with an efficie
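The large-margin online update the abstract mentions can be sketched generically. This is not the paper's algorithm (which also projects labels); it is a hypothetical instance-only sketch of one online large-margin step on a learned linear projection L:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(scale=0.1, size=(2, 4))  # projection to a lower dimension

def dist(x, y):
    """Squared distance in the projected space."""
    z = L @ (x - y)
    return float(z @ z)

def hinge_update(x, pos, neg, margin=1.0, lr=0.01):
    """One online step: pull x toward a same-label example `pos` and
    push it from a different-label example `neg` whenever the margin
    d(x, neg) - d(x, pos) >= margin is violated. Generic sketch only."""
    global L
    loss = margin + dist(x, pos) - dist(x, neg)
    if loss > 0:
        dp, dn = x - pos, x - neg
        # Gradient of the hinge loss with respect to L.
        grad = 2 * (np.outer(L @ dp, dp) - np.outer(L @ dn, dn))
        L = L - lr * grad
    return max(loss, 0.0)

x   = np.array([1.0, 0.0, 0.0, 0.0])
pos = np.array([0.9, 0.1, 0.0, 0.0])
neg = np.array([0.0, 0.0, 1.0, 0.0])
losses = [hinge_update(x, pos, neg) for _ in range(200)]
# Margin violations shrink as the metric adapts to the triplet.
```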
4. Qiu, Wei. "Based on Semi-Supervised Clustering with the Boost Similarity Metric Method for Face Retrieval." Applied Mechanics and Materials 543-547 (March 2014): 2720–23. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.2720.

Abstract:
The focus of this paper is on metric learning, with particular interest in incorporating side information to make it semi-supervised. The study is primarily motivated by an application: face-image clustering. The paper introduces metric learning and semi-supervised clustering via a boosted similarity metric learning method, which adapts the underlying similarity metric used by the clustering algorithm. We propose a novel idea of learning from historical relevance-feedback log data and adopt a new paradigm called the Boost the Similarity Metric Method for Face Retrieval. Experimental results demonst
5. Xiao, Qiao, Khuan Lee, Siti Aisah Mokhtar, et al. "Deep Learning-Based ECG Arrhythmia Classification: A Systematic Review." Applied Sciences 13, no. 8 (2023): 4964. http://dx.doi.org/10.3390/app13084964.

Abstract:
Deep learning (DL) has been introduced in automatic heart-abnormality classification using ECG signals, while its application in practical medical procedures is limited. A systematic review is performed from perspectives of the ECG database, preprocessing, DL methodology, evaluation paradigm, performance metric, and code availability to identify research trends, challenges, and opportunities for DL-based ECG arrhythmia classification. Specifically, 368 studies meeting the eligibility criteria are included. A total of 223 (61%) studies use MIT-BIH Arrhythmia Database to design DL models. A tota
6. Niu, Gang, Bo Dai, Makoto Yamada, and Masashi Sugiyama. "Information-Theoretic Semi-Supervised Metric Learning via Entropy Regularization." Neural Computation 26, no. 8 (2014): 1717–62. http://dx.doi.org/10.1162/neco_a_00614.

Abstract:
We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsup
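The ingredients named in this abstract, a probability parameterized by a Mahalanobis distance and its entropy, can be sketched briefly. The exact parameterization below is an assumption for illustration, not SERAPH's actual model:

```python
import numpy as np

def pair_prob(x, y, M, b=1.0):
    """P(same label | x, y): a Bernoulli whose parameter decreases with
    the Mahalanobis distance d_M(x, y). The sigmoid form and bias b are
    illustrative assumptions, not the paper's exact parameterization."""
    d = x - y
    return float(1.0 / (1.0 + np.exp(d @ M @ d - b)))

def bernoulli_entropy(p):
    """Entropy (in nats) of a Bernoulli(p) variable."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-(p * np.log(p) + (1 - p) * np.log(1 - p)))

x, y = np.array([0.0, 0.0]), np.array([1.0, 1.0])
p = pair_prob(x, y, np.eye(2))  # d_M^2 = 2 > b, so p < 0.5
h = bernoulli_entropy(p)
# Entropy regularization trades off such terms: per the abstract, the
# entropy is maximized on labeled pairs and minimized (confidence
# encouraged) on unlabeled pairs when learning M.
```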
7. Wilde, Henry, Vincent Knight, and Jonathan Gillard. "Evolutionary dataset optimisation: learning algorithm quality through evolution." Applied Intelligence 50, no. 4 (2019): 1172–91. http://dx.doi.org/10.1007/s10489-019-01592-4.

Abstract:
In this paper we propose a novel method for learning how algorithms perform. Classically, algorithms are compared on a finite number of existing (or newly simulated) benchmark datasets based on some fixed metrics. The algorithm(s) with the smallest value of this metric are chosen to be the ‘best performing’. We offer a new approach to flip this paradigm. We instead aim to gain a richer picture of the performance of an algorithm by generating artificial data through genetic evolution, the purpose of which is to create populations of datasets for which a particular algorithm performs wel
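The flipped paradigm, evolving datasets on which a given algorithm scores well, can be sketched with a toy genetic loop. This is a hypothetical illustration, not the authors' framework; the "algorithm" here is a trivial predict-the-mean model whose fitness is negative variance:

```python
import random

random.seed(0)

def fitness(dataset):
    # Stand-in performance metric: how well a trivial "predict the
    # mean" model fits the dataset (negative sum of squared deviations).
    # Any algorithm/metric pair could be substituted here.
    m = sum(dataset) / len(dataset)
    return -sum((v - m) ** 2 for v in dataset)

def evolve(pop_size=20, length=8, generations=30):
    """Evolve a population of 1-D datasets toward high fitness."""
    pop = [[random.uniform(-1, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = [[v + random.gauss(0, 0.05)  # Gaussian mutation
                     for v in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
# Datasets drift toward low variance, where the mean model excels.
```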
8. Zhukov, Alexey, Jenny Benois-Pineau, and Romain Giot. "Evaluation of Explanation Methods of AI - CNNs in Image Classification Tasks with Reference-based and No-reference Metrics." Advances in Artificial Intelligence and Machine Learning 03, no. 01 (2023): 620–46. http://dx.doi.org/10.54364/aaiml.2023.1143.

Abstract:
The most popular methods in the AI-machine learning paradigm are mainly black boxes, which is why explanation of AI decisions is of urgent importance. Although dedicated explanation tools have been massively developed, the evaluation of their quality remains an open research question. In this paper, we generalize the methodologies of evaluation of post-hoc explainers of CNNs’ decisions in visual classification tasks with reference and no-reference based metrics. We apply them on our previously developed explainers (FEM, MLFEM), and popular Grad-CAM. The reference-based metrics are Pearson correlation coe
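A reference-based metric of the kind this abstract names can be sketched in a few lines: Pearson correlation between a flattened explanation heatmap and a reference importance map. This illustrates the metric family only, not the authors' full evaluation protocol:

```python
import numpy as np

def pearson_metric(explanation, reference):
    """Reference-based score: Pearson correlation between a flattened
    explanation heatmap and a reference importance map (e.g., one
    derived from human attention). Illustrative sketch only."""
    a = explanation.ravel().astype(float)
    b = reference.ravel().astype(float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

expl = np.array([[0.9, 0.1],
                 [0.2, 0.0]])
ref  = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
score = pearson_metric(expl, ref)
# Close to 1: the explanation highlights the same region as the reference.
```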
9. Pinto, Danna, Anat Prior, and Elana Zion Golumbic. "Assessing the Sensitivity of EEG-Based Frequency-Tagging as a Metric for Statistical Learning." Neurobiology of Language 3, no. 2 (2022): 214–34. http://dx.doi.org/10.1162/nol_a_00061.

Abstract:
Statistical learning (SL) is hypothesized to play an important role in language development. However, the measures typically used to assess SL, particularly at the level of individual participants, are largely indirect and have low sensitivity. Recently, a neural metric based on frequency-tagging has been proposed as an alternative measure for studying SL. We tested the sensitivity of frequency-tagging measures for studying SL in individual participants in an artificial language paradigm, using non-invasive electroencephalograph (EEG) recordings of neural activity in humans. Important
10. Gomoluch, Paweł, Dalal Alrajeh, and Alessandra Russo. "Learning Classical Planning Strategies with Policy Gradient." Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 637–45. http://dx.doi.org/10.1609/icaps.v29i1.3531.

Abstract:
A common paradigm in classical planning is heuristic forward search. Forward search planners often rely on simple best-first search which remains fixed throughout the search process. In this paper, we introduce a novel search framework capable of alternating between several forward search approaches while solving a particular planning problem. Selection of the approach is performed using a trainable stochastic policy, mapping the state of the search to a probability distribution over the approaches. This enables using policy gradient to learn search strategies tailored to a specific distributi
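The trainable stochastic policy described above can be sketched generically: a softmax over search approaches given features of the search state, updated by policy gradient. This is a hypothetical sketch, not the authors' planner; the approach names and feature vector are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
APPROACHES = ["greedy_best_first", "local_search", "random_walk"]  # invented
theta = np.zeros((3, 4))  # one weight row per approach, 4 state features

def policy(features):
    """Softmax distribution over search approaches for a search state."""
    logits = theta @ features
    e = np.exp(logits - logits.max())
    return e / e.sum()

def reinforce_step(features, action, reward, lr=0.1):
    """One REINFORCE update: raise the log-probability of the sampled
    approach in proportion to the reward it obtained."""
    global theta
    probs = policy(features)
    grad = -np.outer(probs, features)   # gradient of log-softmax ...
    grad[action] += features            # ... for the chosen action
    theta = theta + lr * reward * grad

features = np.array([1.0, 0.5, 0.0, 0.2])
probs = policy(features)
action = int(rng.choice(len(APPROACHES), p=probs))
reinforce_step(features, action, reward=1.0)
# A positive reward makes the chosen approach more likely in this state.
```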