Academic literature on the topic 'Metric learning paradigm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Metric learning paradigm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Metric learning paradigm"

1. Brockmeier, Austin J., John S. Choi, Evan G. Kriminger, Joseph T. Francis, and Jose C. Principe. "Neural Decoding with Kernel-Based Metric Learning." Neural Computation 26, no. 6 (June 2014): 1080–107. http://dx.doi.org/10.1162/neco_a_00591.

Abstract:
In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus—exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.
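
For readers unfamiliar with centered alignment, the kernel-based dependence measure the authors optimize, a minimal sketch follows (illustrative only: the function names and usage are assumptions, not the paper's implementation):

```python
import numpy as np

def center_kernel(K):
    # Double-center a kernel (Gram) matrix: H K H with H = I - ones/n.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def centered_alignment(K, L):
    # Centered kernel alignment: close to 1 when the neural-response kernel K
    # agrees with the label kernel L, the criterion the metric is tuned to maximize.
    Kc, Lc = center_kernel(K), center_kernel(L)
    return float(np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc)))
```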

2. Saha, Soumadeep, Utpal Garain, Arijit Ukil, Arpan Pal, and Sundeep Khandelwal. "MedTric: A clinically applicable metric for evaluation of multi-label computational diagnostic systems." PLOS ONE 18, no. 8 (August 10, 2023): e0283895. http://dx.doi.org/10.1371/journal.pone.0283895.

Abstract:
When judging the quality of a computational system for a pathological screening task, several factors seem to be important, like sensitivity, specificity, accuracy, etc. With machine learning based approaches showing promise in the multi-label paradigm, they are being widely adopted to diagnostics and digital therapeutics. Metrics are usually borrowed from machine learning literature, and the current consensus is to report results on a diverse set of metrics. It is infeasible to compare efficacy of computational systems which have been evaluated on different sets of metrics. From a diagnostic utility standpoint, the current metrics themselves are far from perfect, often biased by prevalence of negative samples or other statistical factors and importantly, they are designed to evaluate general purpose machine learning tasks. In this paper we outline the various parameters that are important in constructing a clinical metric aligned with diagnostic practice, and demonstrate their incompatibility with existing metrics. We propose a new metric, MedTric that takes into account several factors that are of clinical importance. MedTric is built from the ground up keeping in mind the unique context of computational diagnostics and the principle of risk minimization, penalizing missed diagnosis more harshly than over-diagnosis. MedTric is a unified metric for medical or pathological screening system evaluation. We compare this metric against other widely used metrics and demonstrate how our system outperforms them in key areas of medical relevance.
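
The abstract's core design principle, penalizing missed diagnoses more harshly than over-diagnosis, can be illustrated with a toy asymmetric score (to be clear, this is not the published MedTric formula, only a sketch of the risk-minimization idea; the names and penalty weight are assumptions):

```python
import numpy as np

def asymmetric_score(y_true, y_pred, miss_penalty=2.0):
    # Toy multi-label score in which false negatives (missed diagnoses)
    # cost miss_penalty times as much as false positives (over-diagnosis).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fp + miss_penalty * fn + 1e-12)
```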

3. Gong, Xiuwen, Dong Yuan, and Wei Bao. "Online Metric Learning for Multi-Label Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4012–19. http://dx.doi.org/10.1609/aaai.v34i04.5818.

Abstract:
Existing research into online multi-label classification, such as online sequential multi-label extreme learning machine (OSML-ELM) and stochastic gradient descent (SGD), has achieved promising performance. However, these works lack an analysis of loss function and do not consider label dependency. Accordingly, to fill the current research gap, we propose a novel online metric learning paradigm for multi-label classification. More specifically, we first project instances and labels into a lower dimension for comparison, then leverage the large margin principle to learn a metric with an efficient optimization algorithm. Moreover, we provide theoretical analysis on the upper bound of the cumulative loss for our method. Comprehensive experiments on a number of benchmark multi-label datasets validate our theoretical approach and illustrate that our proposed online metric learning (OML) algorithm outperforms state-of-the-art methods.
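
A rough sketch of the ingredients named here, projecting instances and labels into a shared low-dimensional space and applying a large-margin hinge update, might look as follows (dimensions, names and the update rule are assumptions for illustration, not the authors' OML algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y, k = 100, 20, 10                  # instance dim, label dim, shared dim
U = rng.normal(scale=0.1, size=(k, d_x))   # instance projection
V = rng.normal(scale=0.1, size=(k, d_y))   # label projection

def online_step(x, y_pos, y_neg, lr=0.01, margin=1.0):
    # One online large-margin update: pull the projected instance toward a
    # relevant label embedding (y_pos) and away from an irrelevant one (y_neg).
    global U, V
    zx, zp, zn = U @ x, V @ y_pos, V @ y_neg
    loss = margin + np.sum((zx - zp) ** 2) - np.sum((zx - zn) ** 2)
    if loss > 0:  # the hinge is active only when the margin is violated
        U -= lr * 2 * np.outer(zn - zp, x)
        V -= lr * 2 * (np.outer(zx - zn, y_neg) - np.outer(zx - zp, y_pos))
    return max(loss, 0.0)
```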

4. Qiu, Wei. "Based on Semi-Supervised Clustering with the Boost Similarity Metric Method for Face Retrieval." Applied Mechanics and Materials 543-547 (March 2014): 2720–23. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.2720.

Abstract:
The focus of this paper is on metric learning, with particular interest in incorporating side information to make it semi-supervised. The study is primarily motivated by an application: face-image clustering. The paper introduces metric learning and semi-supervised clustering, and boosts the similarity metric so as to adapt the underlying similarity measure used by the clustering algorithm. We propose a novel idea of learning from historical relevance-feedback log data and adopt a new paradigm, called the Boost Similarity Metric Method, for face retrieval. Experimental results demonstrate that the unified approach produces better clusters than both individual approaches as well as previously proposed semi-supervised clustering algorithms. The paper closes with a discussion of experiments on face-image clustering.

5. Xiao, Qiao, Khuan Lee, Siti Aisah Mokhtar, Iskasymar Ismail, Ahmad Luqman bin Md Pauzi, Qiuxia Zhang, and Poh Ying Lim. "Deep Learning-Based ECG Arrhythmia Classification: A Systematic Review." Applied Sciences 13, no. 8 (April 14, 2023): 4964. http://dx.doi.org/10.3390/app13084964.

Abstract:
Deep learning (DL) has been introduced in automatic heart-abnormality classification using ECG signals, while its application in practical medical procedures is limited. A systematic review is performed from perspectives of the ECG database, preprocessing, DL methodology, evaluation paradigm, performance metric, and code availability to identify research trends, challenges, and opportunities for DL-based ECG arrhythmia classification. Specifically, 368 studies meeting the eligibility criteria are included. A total of 223 (61%) studies use MIT-BIH Arrhythmia Database to design DL models. A total of 138 (38%) studies considered removing noise or artifacts in ECG signals, and 102 (28%) studies performed data augmentation to extend the minority arrhythmia categories. Convolutional neural networks are the dominant models (58.7%, 216) used in the reviewed studies while growing studies have integrated multiple DL structures in recent years. A total of 319 (86.7%) and 38 (10.3%) studies explicitly mention their evaluation paradigms, i.e., intra- and inter-patient paradigms, respectively, where notable performance degradation is observed in the inter-patient paradigm. Compared to the overall accuracy, the average F1 score, sensitivity, and precision are significantly lower in the selected studies. To implement the DL-based ECG classification in real clinical scenarios, leveraging diverse ECG databases, designing advanced denoising and data augmentation techniques, integrating novel DL models, and deeper investigation in the inter-patient paradigm could be future research opportunities.
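
The intra- versus inter-patient distinction the review highlights comes down to how heartbeats are split between training and testing; a minimal sketch of the stricter inter-patient split (toy data and names, for illustration only):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Toy stand-ins: beat features, arrhythmia labels, and the source patient per beat.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
y = rng.integers(0, 5, size=1000)
patient_ids = rng.integers(0, 40, size=1000)

# Inter-patient paradigm: no patient contributes beats to both train and test,
# which is why reported performance usually drops relative to intra-patient splits.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))
```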

6. Niu, Gang, Bo Dai, Makoto Yamada, and Masashi Sugiyama. "Information-Theoretic Semi-Supervised Metric Learning via Entropy Regularization." Neural Computation 26, no. 8 (August 2014): 1717–62. http://dx.doi.org/10.1162/neco_a_00614.

Abstract:
We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.
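
For orientation, the central objects here are the Mahalanobis distance and an entropy-regularized, trace-norm-penalized objective, roughly of this shape (a schematic rendering following the abstract, not the paper's exact notation):

```latex
d_M(x, x')^2 = (x - x')^\top M (x - x'), \quad M \succeq 0, \qquad
\max_{M \succeq 0}\; H\!\left(p_M ; \mathcal{D}_{\mathrm{labeled}}\right)
 \;-\; \gamma\, H\!\left(p_M ; \mathcal{D}_{\mathrm{unlabeled}}\right)
 \;-\; \lambda\, \lVert M \rVert_{\mathrm{tr}}
```

The trace-norm term is what encourages the low-dimensional projections mentioned above.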

7. Wilde, Henry, Vincent Knight, and Jonathan Gillard. "Evolutionary dataset optimisation: learning algorithm quality through evolution." Applied Intelligence 50, no. 4 (December 27, 2019): 1172–91. http://dx.doi.org/10.1007/s10489-019-01592-4.

Abstract:
In this paper we propose a novel method for learning how algorithms perform. Classically, algorithms are compared on a finite number of existing (or newly simulated) benchmark datasets based on some fixed metrics. The algorithm(s) with the smallest value of such a metric are chosen to be the ‘best performing’. We offer a new approach to flip this paradigm. We instead aim to gain a richer picture of the performance of an algorithm by generating artificial data through genetic evolution, the purpose of which is to create populations of datasets for which a particular algorithm performs well on a given metric. These datasets can be studied so as to learn what attributes lead to a particular progression of a given algorithm. Following a detailed description of the algorithm as well as a brief description of an open source implementation, a case study in clustering is presented. This case study demonstrates the performance and nuances of the method, which we call Evolutionary Dataset Optimisation. In this study, a number of known properties of datasets preferable for the clustering algorithms k-means and DBSCAN are realised in the generated datasets.
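
The flipped paradigm described here, evolving datasets on which an algorithm scores well, can be sketched in a few lines (a toy loop under assumed choices: k-means fitness via the silhouette score, truncation selection, Gaussian mutation):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

def fitness(data):
    # How "well" k-means does on this dataset (higher is better).
    labels = KMeans(n_clusters=3, n_init=5, random_state=0).fit_predict(data)
    return silhouette_score(data, labels)

population = [rng.normal(size=(60, 2)) for _ in range(20)]
for generation in range(30):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]                                        # truncation selection
    children = [p + rng.normal(scale=0.1, size=p.shape) for p in parents]
    population = parents + children                              # elitism + mutation
# The surviving datasets can then be inspected for attributes k-means prefers.
```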

8. Zhukov, Alexey, Jenny Benois-Pineau, and Romain Giot. "Evaluation of Explanation Methods of AI - CNNs in Image Classification Tasks with Reference-based and No-reference Metrics." Advances in Artificial Intelligence and Machine Learning 03, no. 01 (2023): 620–46. http://dx.doi.org/10.54364/aaiml.2023.1143.

Abstract:
The most popular methods in the AI-machine learning paradigm are mainly black boxes. This is why the explanation of AI decisions is a matter of urgency. Although dedicated explanation tools have been massively developed, the evaluation of their quality remains an open research question. In this paper, we generalize the methodologies of evaluation of post-hoc explainers of CNNs' decisions in visual classification tasks with reference-based and no-reference metrics. We apply them to our previously developed explainers (FEM, MLFEM) and to the popular Grad-CAM. The reference-based metrics are the Pearson correlation coefficient and Similarity, computed between the explanation map and its ground truth represented by a Gaze Fixation Density Map obtained with a psycho-visual experiment. As a no-reference metric, we use the stability metric proposed by Alvarez-Melis and Jaakkola. We study its behaviour and consensus with reference-based metrics, and show that in the case of several kinds of degradation of input images, this metric is in agreement with reference-based ones. Therefore, it can be used for evaluation of the quality of explainers when the ground truth is not available.
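
The reference-based evaluation reduces to comparing two spatial maps; a minimal sketch of the Pearson part (assumed names; the maps must share a shape):

```python
import numpy as np

def pearson_similarity(explanation_map, gaze_density_map):
    # Reference-based evaluation: correlate an explanation map with its
    # ground truth, a Gaze Fixation Density Map of the same shape.
    e = explanation_map.ravel().astype(float)
    g = gaze_density_map.ravel().astype(float)
    return float(np.corrcoef(e, g)[0, 1])
```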

9. Pinto, Danna, Anat Prior, and Elana Zion Golumbic. "Assessing the Sensitivity of EEG-Based Frequency-Tagging as a Metric for Statistical Learning." Neurobiology of Language 3, no. 2 (2022): 214–34. http://dx.doi.org/10.1162/nol_a_00061.

Abstract:
Statistical learning (SL) is hypothesized to play an important role in language development. However, the measures typically used to assess SL, particularly at the level of individual participants, are largely indirect and have low sensitivity. Recently, a neural metric based on frequency-tagging has been proposed as an alternative measure for studying SL. We tested the sensitivity of frequency-tagging measures for studying SL in individual participants in an artificial language paradigm, using non-invasive electroencephalograph (EEG) recordings of neural activity in humans. Importantly, we used carefully constructed controls to address potential acoustic confounds of the frequency-tagging approach, and compared the sensitivity of EEG-based metrics to both explicit and implicit behavioral tests of SL. Group-level results confirm that frequency-tagging can provide a robust indication of SL for an artificial language, above and beyond potential acoustic confounds. However, this metric had very low sensitivity at the level of individual participants, with significant effects found only in 30% of participants. Comparison of the neural metric to previously established behavioral measures for assessing SL showed a significant yet weak correspondence with performance on an implicit task, which was above-chance in 70% of participants, but no correspondence with the more common explicit 2-alternative forced-choice task, where performance did not exceed chance-level. Given the proposed ubiquitous nature of SL, our results highlight some of the operational and methodological challenges of obtaining robust metrics for assessing SL, as well as the potential confounds that should be taken into account when using the frequency-tagging approach in EEG studies.

10. Gomoluch, Paweł, Dalal Alrajeh, and Alessandra Russo. "Learning Classical Planning Strategies with Policy Gradient." Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 637–45. http://dx.doi.org/10.1609/icaps.v29i1.3531.

Abstract:
A common paradigm in classical planning is heuristic forward search. Forward search planners often rely on simple best-first search which remains fixed throughout the search process. In this paper, we introduce a novel search framework capable of alternating between several forward search approaches while solving a particular planning problem. Selection of the approach is performed using a trainable stochastic policy, mapping the state of the search to a probability distribution over the approaches. This enables using policy gradient to learn search strategies tailored to a specific distribution of planning problems and a selected performance metric, e.g. the IPC score. We instantiate the framework by constructing a policy space consisting of five search approaches and a two-dimensional representation of the planner’s state. Then, we train the system on randomly generated problems from five IPC domains using three different performance metrics. Our experimental results show that the learner is able to discover domain-specific search strategies, improving the planner’s performance relative to the baselines of plain best-first search and a uniform policy.
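
The trainable stochastic policy over search approaches is a standard policy-gradient setup; a minimal REINFORCE-style sketch (toy names: the five approaches, two state features and the IPC-score reward are abstracted into generic placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_approaches = 2, 5
theta = np.zeros((n_approaches, n_features))   # linear softmax policy weights

def policy(state):
    logits = theta @ state
    p = np.exp(logits - logits.max())
    return p / p.sum()

def reinforce_update(state, action, reward, lr=0.1):
    # REINFORCE: raise the log-probability of the sampled search approach
    # in proportion to the episode reward (e.g. an IPC-style score).
    global theta
    p = policy(state)
    grad = -np.outer(p, state)   # d log pi / d theta for a linear softmax
    grad[action] += state
    theta += lr * reward * grad
```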

Dissertations / Theses on the topic "Metric learning paradigm"

1. Berry, Chadwick Alan. "The fidelity of long-term memory for perceptual magnitudes, symbolic vs. metric learning paradigms." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29183.pdf.


Books on the topic "Metric learning paradigm"

1. The fidelity of long-term memory for perceptual magnitudes: Symbolic vs. metric learning paradigms. Ottawa: National Library of Canada = Bibliothèque nationale du Canada, 1999.


Book chapters on the topic "Metric learning paradigm"

1. Biehl, Michael, Barbara Hammer, Petra Schneider, and Thomas Villmann. "Metric Learning for Prototype-Based Classification." In Innovations in Neural Information Paradigms and Applications, 183–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04003-0_8.


2. Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structures (e.g. neural networks) with post-processing to calculate feature importance. In this paper, a comprehensive overview of predictive models with varying intrinsic complexity are measured based on explainability with model-agnostic quantitative evaluation metrics. To this end, explainability is designed as a symbiosis between interpretability and faithfulness and thereby allowing to compare inherently created explanations (e.g. decision tree rules) with post-hoc explainability techniques (e.g. Shapley values) on top of AI models. Moreover, two improved versions of the logistic regression model capable of capturing non-linear interactions and both inherently generating their own explanations are proposed in the OOPPM context. These models are benchmarked with two common state-of-the-art models with post-hoc explanation techniques in the explainability-performance space.

3. Barbalet, Thomas S. "Noble Ape’s Cognitive Simulation." In Machine Learning, 1839–55. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch709.

Abstract:
Inspired by observing bacterial growth in agar and by the transfer of information through simple agar simulations, the cognitive simulation of Noble Ape (originally developed in 1996) has defined itself as both a philosophical simulation tool and a processor metric. The Noble Ape cognitive simulation was originally developed based on diverse philosophical texts and in methodological objection to the neural network paradigm of artificial intelligence. This chapter explores the movement from biological observation to agar simulation through information transfer into a coherent cognitive simulation. The cognitive simulation had to be tuned to produce meaningful results. The cognitive simulation was adopted as processor metrics for tuning performance. This “brain cycles per second” metric was first used by Apple in 2003 and then Intel in 2005. Through this development, both the legacy of primitive agar information-transfer and the use of this as a cognitive simulation method raised novel computational and philosophical issues.

4. Anand, Poonam, and Starr Ackley. "Equitable Assessment and Evaluation of Young Language Learners." In Advances in Early Childhood and K-12 Education, 84–107. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-6487-5.ch005.

Abstract:
This chapter discusses major contributions in research and professional assessment development and reviews key classifications in young language learner assessment (YLLA). Using the five-level metric (close, immediate, proximal, distal, and remote) by Ruiz-Primo et al., the authors classify assessments as curriculum aligned or non-aligned. Inequalities limiting access to learning and to opportunities for achievement (economic status, pre-primary education, digital environment) are linked to the five metrics. They review international examinations for YLLs (Cambridge, TOEFL, Pearson) and measure their alignment with an interactive and performative-enacted curriculum. Recommendations are given for separating external assessments as local or international in washback phenomena, for the inclusion of national assessment specialists in the research paradigm, and for greater attention to language assessment literacy in teacher training. The authors predict that increases in distance and digital learning will determine future forms of YLLA and exacerbate existing inequities.

5. Markowitz, John C. "Interpersonal Psychotherapy." In In the Aftermath of the Pandemic, 18–39. Oxford University Press, 2021. http://dx.doi.org/10.1093/med-psych/9780197554500.003.0004.

Abstract:
This chapter introduces the basic principles, structure, and techniques of IPT: a brief treatment manual for the (tele-)clinician. IPT is a time-limited, affect- and life event–based psychotherapy that helps patients master a life crisis, often by mobilizing social support and learning to use feelings to understand and manage interpersonal encounters. The basic paradigm is that feelings and symptoms arise in an interpersonal context: feeling and situation are connected. The IPT framework includes making a diagnosis, taking an interpersonal inventory to explore the patient’s relationships and current life crises, giving patients the no-fault, medical model “sick role,” setting a time limit, and providing a formulation linking the patient’s diagnosis to a focal problem area on which the therapy will thereafter focus. The problem areas are grief (complicated bereavement following the death of a significant other), role dispute (a struggle with someone), or a role transition (any major life change). The chapter also stresses the importance of affect tolerance and building a treatment alliance and provides a Social Rhythm Metric and a Covid Behavioral Checklist.

6. Catal, Cagatay, and Soumya Banerjee. "Application of Artificial Immune Systems Paradigm for Developing Software Fault Prediction Models." In Machine Learning, 371–87. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch302.

Abstract:
Artificial Immune Systems, a biologically inspired computing paradigm like Artificial Neural Networks, Genetic Algorithms, and Swarm Intelligence, embody the principles and advantages of vertebrate immune systems. They have been applied to solve several complex problems in different areas such as data mining, computer security, robotics, aircraft control, scheduling, optimization, and pattern recognition. There is increasing interest in the use of this paradigm, and it is widely used in conjunction with other methods such as Artificial Neural Networks, Swarm Intelligence and Fuzzy Logic. In this chapter, we demonstrate the procedure for applying this paradigm and bio-inspired algorithm to developing software fault prediction models. The task of fault prediction is to identify the modules that are likely to contain faults in the next release of a large software system. Software metrics and fault data belonging to a previous software version are used to build the model. Fault-prone modules of the next release are predicted by using this model and current software metrics. From a machine learning perspective, this type of modeling approach is called supervised learning. A sample fault dataset is used to show the elaborated working of Artificial Immune Recognition Systems (AIRS).
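
The paradigm in the last sentences, train on a previous release's metrics and fault labels, then score the current release, is easy to sketch. The chapter uses AIRS, which has no standard library implementation, so a random forest stands in below purely for illustration (all data and names are toy assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins: static code metrics (e.g. LOC, complexity) per module of a
# previous release, with known fault labels from that release.
rng = np.random.default_rng(0)
prev_metrics = rng.normal(size=(200, 5))
prev_faulty = (prev_metrics[:, 0] + rng.normal(size=200) > 0.5).astype(int)

# Supervised paradigm: fit on the previous version, predict for the next one.
model = RandomForestClassifier(random_state=0).fit(prev_metrics, prev_faulty)
next_metrics = rng.normal(size=(50, 5))
fault_prone_modules = model.predict(next_metrics)
```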

7. Ivanov, Bogdan, Victorița Trif, and Ana Trif. "Assessment and Paradigms." In Analyzing Paradigms Used in Education and Educational Psychology, 121–43. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1427-6.ch006.

Abstract:
This chapter analyzes the assessment literature linked with 21st century paradigms. The study aims to examine the narratives on assessment (How can the metabolism of assessment be illustrated?) and to present the dissonances related to the subject topic (How do you measure educational results?). The qualitative investigation of the rhetoric on assessment is connected to the variety of educational challenges from the real life of schools: it is the shift from the traditional tools to contemporary technology. The shift from atomistic to holistic perspective presses for rethinking paradigms. This implies changing educational paradigms from being taught to learning on your own with guidance, from providing instruction to effective teaching, from teaching to producing learning. To conclude, this chapter argues for a network of paradigms connected to the multiple metrics of success translated into a different learning environment.

8. Madan, Shipra, Tapan Kumar Gandhi, and Santanu Chaudhury. "Bone age assessment using metric learning on small dataset of hand radiographs." In Advanced Machine Vision Paradigms for Medical Image Analysis, 259–71. Elsevier, 2021. http://dx.doi.org/10.1016/b978-0-12-819295-5.00010-x.


9. Suganthi, J., B. Nagarajan, and S. Muhtumari. "Network Anomaly Detection Using Hybrid Deep Learning Technique." In Advances in Parallel Computing Algorithms, Tools and Paradigms. IOS Press, 2022. http://dx.doi.org/10.3233/apc220014.

Abstract:
Deep learning based intrusion detection systems have acquired prominence in digital protection frameworks. The fundamental role of such a system is to protect the ICT infrastructure through intrusion detection (IDS). Intelligent solutions are essential to manage the complexity of, and to identify, new attack types. Intelligent frameworks such as deep learning and machine learning have been widely adopted for their ability to handle intricate and layered data. An IDS faces various types of known and unknown attacks, and there is room to improve detection when deployed in real-case scenarios. This paper therefore proposes a hybrid deep learning technique that combines a convolutional neural network model with a long short-term memory model to improve performance in recognizing anomalous packets in the network. Experimentation has been carried out with the NSL-KDD dataset, and the performance is compared with traditional machine and deep learning models in terms of common metrics such as accuracy, sensitivity and specificity.
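
The CNN-plus-LSTM hybrid described here can be laid out in a few lines of Keras (a generic sketch under an assumed 41-feature NSL-KDD-style input, not the authors' exact architecture or hyperparameters):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

# Treat the 41 NSL-KDD features as a one-channel 1-D sequence (an assumption).
model = Sequential([
    Conv1D(32, kernel_size=3, activation="relu", input_shape=(41, 1)),
    MaxPooling1D(pool_size=2),       # CNN block: local feature patterns
    LSTM(64),                        # LSTM block: longer-range dependencies
    Dense(1, activation="sigmoid"),  # normal vs. anomalous traffic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```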

10. Adriaans, Pieter. "A Computational Theory of Meaning." In Advances in Info-Metrics, 32–78. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190636685.003.0002.

Abstract:
A computational theory of meaning tries to understand the phenomenon of meaning in terms of computation. Here we give an analysis in the context of Kolmogorov complexity. This theory measures the complexity of a data set in terms of the length of the smallest program that generates the data set on a universal computer. As a natural extension, the set of all programs that produce a data set on a computer can be interpreted as the set of meanings of the data set. We give an analysis of the Kolmogorov structure function and some other attempts to formulate a mathematical theory of meaning in terms of two-part optimal model selection. We show that such theories will always be context dependent: the invariance conditions that make Kolmogorov complexity a valid theory of measurement fail for this more general notion of meaning. One cause is the notion of polysemy: one data set (i.e., a string of symbols) can have different programs with no mutual information that compresses it. Another cause is the existence of recursive bijections between ℕ and ℕ² for which the two-part code is always more efficient. This generates vacuous optimal two-part codes. We introduce a formal framework to study such contexts in the form of a theory that generalizes the concept of Turing machines to learning agents that have a memory and have access to each other’s functions in terms of a possible world semantics. In such a framework, the notions of randomness and informativeness become agent dependent. We show that such a rich framework explains many of the anomalies of the correct theory of algorithmic complexity. It also provides perspectives for, among other things, the study of cognitive and social processes. Finally, we sketch some application paradigms of the theory.
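
For reference, the central quantity is Kolmogorov complexity, with the two-part (model plus data-given-model) code mentioned above satisfying, in standard textbook form:

```latex
K_U(x) = \min\{\, |p| \;:\; U(p) = x \,\}, \qquad
K(x) \;\le\; \underbrace{K(M)}_{\text{model part}}
\;+\; \underbrace{K(x \mid M)}_{\text{data-to-model part}} \;+\; O(1)
```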

Conference papers on the topic "Metric learning paradigm"

1. Gao, Qiang, Xiaohan Wang, Chaoran Liu, Goce Trajcevski, Li Huang, and Fan Zhou. "Open Anomalous Trajectory Recognition via Probabilistic Metric Learning." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/233.

Abstract:
Typically, trajectories considered anomalous are the ones deviating from usual (e.g., traffic-dictated) driving patterns. However, this closed-set context fails to recognize the unknown anomalous trajectories, resulting in an insufficient self-motivated learning paradigm. In this study, we investigate the novel Anomalous Trajectory Recognition problem in an Open-world scenario (ATRO) and introduce a novel probabilistic Metric learning model, namely ATROM, to address it. Specifically, ATROM can detect the presence of unknown anomalous behavior in addition to identifying known behavior. It has a Mutual Interaction Distillation that uses contrastive metric learning to explore the interactive semantics regarding the diverse behavioral intents and a Probabilistic Trajectory Embedding that forces the trajectories with distinct behaviors to follow different Gaussian priors. More importantly, ATROM offers a probabilistic metric rule to discriminate between known and unknown behavioral patterns by taking advantage of the approximation of multiple priors. Experimental results on two large-scale trajectory datasets demonstrate the superiority of ATROM in addressing both known and unknown anomalous patterns.

2. Yonghe, Chu, Hongfei Lin, Liang Yang, Yufeng Diao, Shaowu Zhang, and Fan Xiaochao. "Refining Word Representations by Manifold Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/749.

Abstract:
Pre-trained distributed word representations have been proven useful in various natural language processing (NLP) tasks. However, the effect of words' geometric structure on word representations has not been carefully studied yet. Existing word representation methods underestimate the similarity of words whose distances are close in Euclidean space, while overestimating that of words at a much greater distance. In this paper, we propose a word vector refinement model to correct the pre-trained word embedding, which brings the similarity of words in Euclidean space closer to word semantics by using manifold learning. This approach is theoretically founded in the metric recovery paradigm. Our word representations have been evaluated on a variety of lexical-level intrinsic tasks (semantic relatedness, semantic similarity), and the experimental results show that the proposed model outperforms several popular word representation approaches.
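
One common manifold-learning route to such refinement is to replace raw Euclidean distances with geodesic distances along a nearest-neighbor graph; the sketch below uses Isomap as a stand-in (the paper's own refinement model differs in detail, and the data here are toy stand-ins for pre-trained vectors):

```python
import numpy as np
from sklearn.manifold import Isomap

# Toy stand-ins for pre-trained word vectors (one row per word).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 50))

# Geodesic distances along a k-NN graph replace raw Euclidean distances,
# and the words are re-embedded in a lower-dimensional space.
refined = Isomap(n_neighbors=10, n_components=30).fit_transform(embeddings)
```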

3. Xue, Wanqi, Youzhi Zhang, Shuxin Li, Xinrun Wang, Bo An, and Chai Kiat Yeo. "Solving Large-Scale Extensive-Form Network Security Games via Neural Fictitious Self-Play." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/511.

Abstract:
Securing networked infrastructures is important in the real world. The problem of deploying security resources to protect against an attacker in networked domains can be modeled as Network Security Games (NSGs). Unfortunately, existing approaches, including the deep learning-based approaches, are inefficient to solve large-scale extensive-form NSGs. In this paper, we propose a novel learning paradigm, NSG-NFSP, to solve large-scale extensive-form NSGs based on Neural Fictitious Self-Play (NFSP). Our main contributions include: i) reforming the best response (BR) policy network in NFSP to be a mapping from action-state pair to action-value, to make the calculation of BR possible in NSGs; ii) converting the average policy network of an NFSP agent into a metric-based classifier, helping the agent to assign distributions only on legal actions rather than all actions; iii) enabling NFSP with high-level actions, which can benefit training efficiency and stability in NSGs; and iv) leveraging information contained in graphs of NSGs by learning efficient graph node embeddings. Our algorithm significantly outperforms state-of-the-art algorithms in both scalability and solution quality.

4. Hayes, Tyler L., Ronald Kemker, Nathan D. Cahill, and Christopher Kanan. "New Metrics and Experimental Paradigms for Continual Learning." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2018. http://dx.doi.org/10.1109/cvprw.2018.00273.


5. Asedegbega, Jerome, Oladayo Ayinde, and Alexander Nwakanma. "Application of Machine Learning for Reservoir Facies Classification in Port Field, Offshore Niger Delta." In SPE Nigeria Annual International Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/207163-ms.

Abstract:
Several computer-aided techniques have been developed in the recent past to improve the interpretational accuracy of subsurface geology. This paradigm shift has provided tremendous success in a variety of machine learning application domains and helps in better feasibility studies in reservoir evaluation using multiple classification techniques. Facies classification is an essential subsurface exploration task, as sedimentary facies reflect the associated physical, chemical, and biological conditions that a formation unit experienced during sedimentation. This study employed formation samples for facies classification using Machine Learning (ML) techniques and classified different facies from well logs in seven (7) wells of the PORT Field, Offshore Niger Delta. Six wells were concatenated during data preparation and trained using supervised ML algorithms before validating the models by blind testing on one well log to predict discrete facies groups. The analysis started with data preparation and examination, where various features of the available well data were conditioned. For the model building and performance, support vector machine, random forest, decision tree, extra tree, neural network (multilayer perceptron), k-nearest neighbor and logistic regression models were built after dividing the data sets into training, test, and blind-test well data. Metric scores for the blind-test well, estimated for the various models using the Jaccard index and F1-score, indicated 0.73 and 0.82 for support vector machine, 0.38 and 0.54 for random forest, 0.78 and 0.83 for extra tree, 0.91 and 0.95 for k-nearest neighbor, 0.41 and 0.56 for decision tree, 0.63 and 0.74 for logistic regression, and 0.55 and 0.68 for neural network, respectively. The efficiency of ML techniques in enhancing prediction accuracy and decreasing procedure time, and their approach toward the data, make them desirable to recommend for subsurface facies classification analysis.
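
The two reported metrics are standard and easy to reproduce; a minimal sketch with toy labels in place of the paper's blind-test well (names assumed):

```python
import numpy as np
from sklearn.metrics import f1_score, jaccard_score

# Toy stand-ins for true and predicted facies classes in a blind-test well.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=300)
y_pred = rng.integers(0, 4, size=300)

# Jaccard index and F1-score, averaged across facies classes.
print("Jaccard:", jaccard_score(y_true, y_pred, average="weighted"))
print("F1:", f1_score(y_true, y_pred, average="weighted"))
```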

6. Pribeanu, Costin, and Vincentas Lamanauskas. "Usefulness of Facebook for Students: Analysis of University Profile Differences from a Multidimensional Perspective." In eLSE 2016. Carol I National Defence University Publishing House, 2016. http://dx.doi.org/10.12753/2066-026x-16-170.

Abstract:
The change of paradigm from learner-centered to social learning requires considering various activities such as active participation, information and content sharing, collaboration, and debate. The use of social networking websites takes a lot of time from students' university life, thus challenging educators to look for ways to identify and exploit their educational potential. The increasing popularity of Facebook among university students is raising several research questions regarding its usefulness for education. Therefore, a systematic study on Facebook use by university students was started in 2014 in the framework of cooperation between researchers from ICI Bucharest (Romania) and Siauliai University (Lithuania). The objective of this paper is twofold: (a) to measure the usefulness of Facebook for university students along three dimensions: social usefulness, information usefulness and collaboration usefulness; and (b) to analyze group differences from a multidimensional perspective. In order to compare the perceptions of students, two samples with different university profiles have been utilized. A multidimensional model for the perceived usefulness has been conceptualized and empirically validated on each sample using structural equation modelling. An analysis of invariance across profiles provides evidence for the configural, metric, and scalar invariance of the evaluation instrument, thus enabling comparison between groups at the construct level. The university profile comparison shows mean differences regarding the global factor and two of its dimensions: social usefulness and information usefulness. The results of this work have several implications for researchers and practitioners. First, it contributes to a better understanding of the educational potential of Facebook. Second, it contributes a reliable evaluation instrument that performs well across groups. Third, the multidimensional perspective enables analyses on two levels (global factor and each dimension).

7. Dos Santos, Fernando Pereira, and Moacir Antonelli Ponti. "Features transfer learning for image and video recognition tasks." In Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/sibgrapi.est.2020.12980.

Abstract:
Feature transfer learning aims to reuse knowledge previously acquired on some source dataset and apply it to another target dataset and/or task. A requirement for the transfer of knowledge is the quality of the feature spaces obtained, for which deep learning methods are widely applied, since they provide discriminative and general descriptors. In this context, the main questions include: what to transfer; how to transfer; and when to transfer. We address these questions through distinct learning paradigms, transfer learning techniques, and several datasets and tasks. Our contributions are: an analysis of multiple descriptors contained in supervised deep networks; a new generalization metric that can be applied to any model and evaluation system; and a new architecture with a loss function for semi-supervised deep networks, in which all available data contribute to the learning.

8. Albeanu, Grigore, and Marin Vlada. "Neutrosophic Approaches in E-Learning Assessment." In eLSE 2014. Editura Universitatii Nationale de Aparare "Carol I", 2014. http://dx.doi.org/10.12753/2066-026x-14-208.

Abstract:
Due to the large number of variables necessary to evaluate e-learning, and their imprecision or uncertainty, a new approach based on neutrosophy [4] can provide a better interpretation of assessment results. The following classes of variables will be modelled by neutrosophic entities (sets, numbers, logics, probabilities, statistics): learner variables, including learner attitude and motivation; learning environment variables, including real, augmented or virtual environments; contextual variables, including informal, formal, or adult education; technology variables, including classic and new paradigms like mobile and cloud/fog environments; and pedagogic variables, including methodologies, examination, and certification. Every variable is defined under both crisp and neutrosophic views. E-learning metrics are reviewed and extended to support neutrosophic computing models using [1]. Finally, considerations on implementing an automated tool for e-assessment of e-learning under classic and neutrosophic approaches are presented. The work uses Smarandache's development of neutrosophy [4], neutrosophic computing models [1], and e-learning metrics and assessment procedures available in the literature [5] or developed by the authors [2, 3]. References: 1. G. Albeanu, Neutrosophic computational models, Analele Universitatii Spiru Haret, Seria Matematica-Informatica, 2(2013). 2. G. Albeanu, e-Learning metrics, Proceedings of ICVL, 2007: http://www.cniv.ro/2007/disc2/icvl/documente/pdf/invited/invited2.pdf 3. G. Albeanu, Quality indicators and metrics for capability and maturity in e_learning, http://adlunap.ro/eLSE_publications/papers/2007/lucrare_21.pdf 4. F. Smarandache, Neutrosophy, http://www.gallup.unm.edu/~smarandache/neutrosophy.htm 5. Graham Attwell (ed.), Evaluating E-Learning: A Guide to the Evaluation of E-Learning, Evaluate Europe Handbook Series Volume 2, Leonardo da Vinci Programme, 2006.

9. Gimenez, Paulo Jose de Alcantara, Marcelo De Oliveira Costa Machado, Cleber Pinelli Pinelli, and Sean Wolfgand Matsui Siqueira. "Investigating the learning perspective of Searching as Learning, a review of the state of the art." In Simpósio Brasileiro de Informática na Educação. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/cbie.sbie.2020.302.

Abstract:
Current search engines are not designed to facilitate learning, as they do not lead the user to develop more complex skills. Searching as Learning (SAL) emerged as a research area from the intersection of information search and learning technologies in order to advance the study of searching as a learning process. However, we wonder how learning theories and approaches have been explored in SAL. Through a systematic review of the literature, we identified 65 papers that report SAL solutions. We analyzed them, seeking to answer (i) which learning theories, approaches and methods support searching as a learning process, and (ii) what metrics, procedures, or treatments were used to measure learning during the searching process. We uncover the learning perspective in the SAL literature, discussing the learning paradigms, the mechanisms influencing the learning process, the search session design for learning, and the knowledge gain measurement strategies.

10. Zheng, Meng, Srikrishna Karanam, Terrence Chen, Richard J. Radke, and Ziyan Wu. "Visual Similarity Attention." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/241.

Abstract:
While there has been substantial progress in learning suitable distance metrics, these techniques in general lack transparency and decision reasoning, i.e., explaining why the input set of images is similar or dissimilar. In this work, we solve this key problem by proposing the first method to generate generic visual similarity explanations with gradient-based attention. We demonstrate that our technique is agnostic to the specific similarity model type, e.g., we show applicability to Siamese, triplet, and quadruplet models. Furthermore, we make our proposed similarity attention a principled part of the learning process, resulting in a new paradigm for learning similarity functions. We demonstrate that our learning mechanism results in more generalizable, as well as explainable, similarity models. Finally, we demonstrate the generality of our framework by means of experiments on a variety of tasks, including image retrieval, person re-identification, and low-shot semantic segmentation.
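
Gradient-based attention for a similarity score follows the familiar Grad-CAM recipe, with the score replacing a class logit; the sketch below is generic, not the authors' exact formulation (random tensors stand in for images, and the backbone choice is an assumption):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats = {}
model.layer4.register_forward_hook(lambda mod, inp, out: feats.update(fmap=out))

x1, x2 = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
e1 = model(x1)
fmap1 = feats["fmap"]              # last conv feature map for image 1
e2 = model(x2)
similarity = -torch.norm(e1 - e2)  # higher means "more similar"

# Channel weights: spatially pooled gradients of the similarity score with
# respect to the feature map, then a weighted, rectified sum over channels.
grads = torch.autograd.grad(similarity, fmap1)[0]
weights = grads.mean(dim=(2, 3), keepdim=True)
attention = F.relu((weights * fmap1).sum(dim=1))  # saliency over image 1
```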

Reports on the topic "Metric learning paradigm"

1. Perdigão, Rui A. P. Information physics and quantum space technologies for natural hazard sensing, modelling and prediction. Meteoceanics, September 2021. http://dx.doi.org/10.46337/210930.

Abstract:
Disruptive socio-natural transformations and climatic change, where system invariants and symmetries break down, defy the traditional complexity paradigms such as machine learning and artificial intelligence. In order to overcome this, we introduced non-ergodic Information Physics, bringing physical meaning to inferential metrics, and a coevolving flexibility to the metrics of information transfer, resulting in new methods for causal discovery and attribution. With this in hand, we develop novel dynamic models and analysis algorithms natively built for quantum information technological platforms, expediting complex system computations and rigour. Moreover, we introduce novel quantum sensing technologies in our Meteoceanics satellite constellation, providing unprecedented spatiotemporal coverage, resolution and lead, whilst using exclusively sustainable materials and processes across the value chain. Our technologies bring out novel information physical fingerprints of extreme events, with recently proven records in capturing early warning signs for extreme hydro-meteorologic events and seismic events, and do so with unprecedented quantum-grade resolution, robustness, security, speed and fidelity in sensing, processing and communication. Our advances, from Earth to Space, further provide crucial predictive edge and added value to early warning systems of natural hazards and long-term predictions supporting climatic security and action.