Selected scholarly literature on the topic "Kernel mean embedding"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Kernel mean embedding".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online when it is available in the record's metadata.

Journal articles on the topic "Kernel mean embedding"

1. Jorgensen, Palle E. T., Myung-Sin Song, and James Tian. "Conditional mean embedding and optimal feature selection via positive definite kernels". Opuscula Mathematica 44, no. 1 (2024): 79–103. http://dx.doi.org/10.7494/opmath.2024.44.1.79.

Abstract:
Motivated by applications, we consider new operator-theoretic approaches to conditional mean embedding (CME). Our present results combine a spectral analysis-based optimization scheme with the use of kernels, stochastic processes, and constructive learning algorithms. For initially given non-linear data, we consider optimization-based feature selections. This entails the use of convex sets of kernels in a construction of optimal feature selection via regression algorithms from learning models. Thus, with initial inputs of training data (for a suitable learning algorithm), each choice of a kernel K in turn yields a variety of Hilbert spaces and realizations of features. A novel aspect of our work is the inclusion of a secondary optimization process over a specified convex set of positive definite kernels, resulting in the determination of "optimal" feature representations.
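
For context, the conditional mean embedding in this entry has a standard finite-sample estimator that reduces to kernel ridge regression weights. Below is a minimal illustrative sketch of that estimator, not code from the paper; the Gaussian kernel, bandwidth, and regularization constant are assumptions made here.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def cme_weights(X, x_query, lam=1e-3, sigma=1.0):
    """Weights beta(x) of the empirical conditional mean embedding:
    mu_{Y|X=x} is estimated by sum_i beta_i(x) k(y_i, .), with
    beta(x) = (K_XX + n lam I)^{-1} k_X(x) (kernel ridge form)."""
    n = len(X)
    K = rbf(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), rbf(X, x_query, sigma))

# Conditional expectations follow by weighting the training responses:
# E[f(Y) | X = x] ~ beta(x) . f(y_1..n), here with f = identity.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 1))
Y = np.sin(3 * X) + 0.05 * rng.normal(size=(300, 1))
beta = cme_weights(X, np.array([[0.5]]), sigma=0.25)
print(float(beta[:, 0] @ Y[:, 0]))  # roughly sin(1.5) ~ 0.997, up to smoothing bias
```
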
2. Muandet, Krikamol, Kenji Fukumizu, Bharath Sriperumbudur, and Bernhard Schölkopf. "Kernel Mean Embedding of Distributions: A Review and Beyond". Foundations and Trends® in Machine Learning 10, no. 1-2 (2017): 1–141. http://dx.doi.org/10.1561/2200000060.

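The review's central object, the empirical mean embedding mu_P = (1/n) sum_i k(x_i, .), yields a plug-in estimate of the squared maximum mean discrepancy (MMD) between two samples. A minimal sketch, assuming a Gaussian kernel (function names are illustrative):

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Plug-in (V-statistic) estimate of the squared MMD:
    ||mu_P - mu_Q||^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')]."""
    return (rbf(X, X, sigma).mean()
            - 2 * rbf(X, Y, sigma).mean()
            + rbf(Y, Y, sigma).mean())

rng = np.random.default_rng(1)
P1 = rng.normal(0, 1, size=(400, 2))
P2 = rng.normal(0, 1, size=(400, 2))
Q = rng.normal(0.5, 1, size=(400, 2))
print(mmd2(P1, P2), mmd2(P1, Q))  # near zero vs. clearly positive
```
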
3. Van Hauwermeiren, Daan, Michiel Stock, Thomas De Beer, and Ingmar Nopens. "Predicting Pharmaceutical Particle Size Distributions Using Kernel Mean Embedding". Pharmaceutics 12, no. 3 (March 16, 2020): 271. http://dx.doi.org/10.3390/pharmaceutics12030271.

Abstract:
In the pharmaceutical industry, the transition to continuous manufacturing of solid dosage forms is adopted by more and more companies. For these continuous processes, high-quality process models are needed. In pharmaceutical wet granulation, a unit operation in the ConsiGma™-25 continuous powder-to-tablet system (GEA Pharma systems, Collette, Wommelgem, Belgium), the product under study presents itself as a collection of particles that differ in shape and size. The measurement of this collection results in a particle size distribution. However, the theoretical basis to describe the physical phenomena leading to changes in this particle size distribution is lacking. It is essential to understand how the particle size distribution changes as a function of the unit operation’s process settings, as it has a profound effect on the behavior of the fluid bed dryer. Therefore, we suggest a data-driven modeling framework that links the machine settings of the wet granulation unit operation and the output distribution of granules. We do this without making any assumptions on the nature of the distributions under study. A simulation of the granule size distribution could act as a soft sensor when in-line measurements are challenging to perform. The method of this work is a two-step procedure: first, the measured distributions are transformed into a high-dimensional feature space, where the relation between the machine settings and the distributions can be learnt. Second, the inverse transformation is performed, allowing an interpretation of the results in the original measurement space. Further, a comparison is made with previous work, which employs a more mechanistic framework for describing the granules. A reliable prediction of the granule size is vital in the assurance of quality in the production line, and is needed in the assessment of upstream (feeding) and downstream (drying, milling, and tableting) issues. Now that a validated data-driven framework for predicting pharmaceutical particle size distributions is available, it can be applied in settings such as model-based experimental design and, due to its fast computation, there is potential in real-time model predictive control.
4. Xu, Bi-Cun, Kai Ming Ting, and Yuan Jiang. "Isolation Graph Kernel". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10487–95. http://dx.doi.org/10.1609/aaai.v35i12.17255.

Abstract:
A recent Wasserstein Weisfeiler-Lehman (WWL) Graph Kernel has a distinctive feature: representing the distribution of Weisfeiler-Lehman (WL)-embedded node vectors of a graph in a histogram that enables a dissimilarity measurement of two graphs using Wasserstein distance. It has been shown to produce better classification accuracy than other graph kernels that do not employ such a distribution and Wasserstein distance. This paper introduces an alternative called Isolation Graph Kernel (IGK) that measures the similarity between two attributed graphs. IGK is unique in two aspects among existing graph kernels. First, it is the first graph kernel which employs a distributional kernel in the framework of kernel mean embedding. This avoids the need to use the computationally expensive Wasserstein distance. Second, it is the first graph kernel that incorporates the distribution of attributed nodes (ignoring the edges) in a dataset of graphs. We reveal that this distributional information, extracted in the form of a feature map of Isolation Kernel, is crucial in building an efficient and effective graph kernel. We show that IGK is better than WWL in terms of classification accuracy, and it runs orders of magnitude faster on large datasets when used in the context of SVM classification.
5. Rustamov, Raif M., and James T. Klosowski. "Kernel mean embedding based hypothesis tests for comparing spatial point patterns". Spatial Statistics 38 (August 2020): 100459. http://dx.doi.org/10.1016/j.spasta.2020.100459.

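A generic form of such mean-embedding-based tests is a permutation test on the MMD statistic. The sketch below is a standard construction assuming a Gaussian kernel; it does not reproduce the paper's point-pattern-specific kernels:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian kernel matrix, as in the earlier sketch
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Plug-in estimate of the squared MMD between the two samples
    return rbf(X, X, sigma).mean() - 2 * rbf(X, Y, sigma).mean() + rbf(Y, Y, sigma).mean()

def mmd_permutation_test(X, Y, n_perm=500, sigma=1.0, seed=0):
    """p-value of the observed MMD^2 under random relabelings of the
    pooled sample; the null hypothesis is that both point sets come
    from the same distribution."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    n = len(X)
    observed = mmd2(X, Y, sigma)
    exceed = sum(
        mmd2(Z[p[:n]], Z[p[n:]], sigma) >= observed
        for p in (rng.permutation(len(Z)) for _ in range(n_perm))
    )
    return (exceed + 1) / (n_perm + 1)  # add-one correction keeps p > 0
```
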
6. Hou, Boya, Sina Sanjari, Nathan Dahlin, and Subhonmesh Bose. "Compressed Decentralized Learning of Conditional Mean Embedding Operators in Reproducing Kernel Hilbert Spaces". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7902–9. http://dx.doi.org/10.1609/aaai.v37i7.25956.

Abstract:
Conditional mean embedding (CME) operators encode conditional probability densities within a Reproducing Kernel Hilbert Space (RKHS). In this paper, we present a decentralized algorithm for a collection of agents to cooperatively approximate a CME over a network. Communication constraints prevent the agents from sending all data to their neighbors; we only allow sparse representations of covariance operators to be exchanged among agents, compositions of which define the CME. Using a coherence-based compression scheme, we present a consensus-type algorithm that preserves the average of the approximations of the covariance operators across the network. We theoretically prove that the iterative dynamics in the RKHS are stable. We then empirically study our algorithm to estimate CMEs to learn spectra of Koopman operators for Markovian dynamical systems and to execute approximate value iteration for Markov decision processes (MDPs).
7. Segera, Davies, Mwangi Mbuthia, and Abraham Nyete. "Particle Swarm Optimized Hybrid Kernel-Based Multiclass Support Vector Machine for Microarray Cancer Data Analysis". BioMed Research International 2019 (December 16, 2019): 1–11. http://dx.doi.org/10.1155/2019/4085725.

Abstract:
Determining an optimal decision model is an important but difficult combinatorial task in imbalanced microarray-based cancer classification. Though the multiclass support vector machine (MCSVM) has already made an important contribution in this field, its performance solely depends on three aspects: the penalty factor C, the type of kernel, and its parameters. To improve the performance of this classifier in microarray-based cancer analysis, this paper proposes the PSO-PCA-LGP-MCSVM model, which is based on particle swarm optimization (PSO), principal component analysis (PCA), and the multiclass support vector machine (MCSVM). The MCSVM is based on a hybrid kernel, i.e., linear-Gaussian-polynomial (LGP), that combines the advantages of three standard kernels (linear, Gaussian, and polynomial) in a novel manner, where the linear kernel is linearly combined with the Gaussian kernel embedding the polynomial kernel. Further, this paper proves that the LGP kernel satisfies the conditions of a valid kernel. To reveal the effectiveness of our model, several experiments were conducted and the results compared between our model and three other single-kernel-based models, namely, PSO-PCA-L-MCSVM (utilizing a linear kernel), PSO-PCA-G-MCSVM (utilizing a Gaussian kernel), and PSO-PCA-P-MCSVM (utilizing a polynomial kernel). For the comparison, two binary and two multiclass imbalanced standard microarray datasets were used. Experimental results in terms of three extended assessment metrics (F-score, G-mean, and Accuracy) reveal the superior global feature extraction, prediction, and learning abilities of this model compared with the three single-kernel-based models.
8. Wang, Yufan, Zijing Wang, Kai Ming Ting, and Yuanyi Shang. "A Principled Distributional Approach to Trajectory Similarity Measurement and its Application to Anomaly Detection". Journal of Artificial Intelligence Research 79 (March 13, 2024): 865–93. http://dx.doi.org/10.1613/jair.1.15849.

Abstract:
This paper aims to solve two enduring challenges in existing trajectory similarity measures: computational inefficiency and the absence of the ‘uniqueness’ property that should be guaranteed in a distance function: dist(X, Y) = 0 if and only if X = Y, where X and Y are two trajectories. In this work, we present a novel approach utilizing a distributional kernel for trajectory representation and similarity measurement, based on the kernel mean embedding framework. This is the first time a distributional kernel has been used for trajectory representation and similarity measurement. Our method does not rely on point-to-point distances, which are used in most existing distances for trajectories. Unlike prevalent learning and deep learning approaches, our method requires no learning. We show the generality of this new approach in anomalous trajectory and sub-trajectory detection. We identify that the distributional kernel has (i) a data-dependent property and the ‘uniqueness’ property, which are the key factors that lead to its superior task-specific performance, and (ii) a runtime orders of magnitude faster than that of existing distance measures.
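
The distributional treatment described above has a compact core: each trajectory is viewed as a bag of points, and similarity is the inner product of the two empirical mean embeddings. The sketch below substitutes a Gaussian kernel for the paper's data-dependent Isolation Kernel, so it illustrates the framework rather than the authors' exact method:

```python
import numpy as np

def traj_similarity(X, Y, sigma=1.0):
    """<mu_X, mu_Y> = mean_{i,j} k(x_i, y_j): each trajectory is a bag
    of points; no point-to-point alignment or learning is required."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)).mean()

# One possible anomaly score in this spirit (our paraphrase, not the
# paper's formula): flag trajectories with low similarity to the
# distribution of all points in the dataset.
```
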
9. Brandman, David M., Michael C. Burkhart, Jessica Kelemen, Brian Franco, Matthew T. Harrison, and Leigh R. Hochberg. "Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia using Gaussian Process Regression". Neural Computation 30, no. 11 (November 2018): 2986–3008. http://dx.doi.org/10.1162/neco_a_01129.

Abstract:
Intracortical brain computer interfaces can enable individuals with paralysis to control external devices through voluntarily modulated brain activity. Decoding quality has been previously shown to degrade with signal nonstationarities—specifically, the changes in the statistics of the data between training and testing data sets. This includes changes to the neural tuning profiles and baseline shifts in firing rates of recorded neurons, as well as nonphysiological noise. While progress has been made toward providing long-term user control via decoder recalibration, relatively little work has been dedicated to making the decoding algorithm more resilient to signal nonstationarities. Here, we describe how principled kernel selection with gaussian process regression can be used within a Bayesian filtering framework to mitigate the effects of commonly encountered nonstationarities. Given a supervised training set of (neural features, intention to move in a direction)-pairs, we use gaussian process regression to predict the intention given the neural data. We apply kernel embedding for each neural feature with the standard radial basis function. The multiple kernels are then summed together across each neural dimension, which allows the kernel to effectively ignore large differences that occur only in a single feature. The summed kernel is used for real-time predictions of the posterior mean and variance under a gaussian process framework. The predictions are then filtered using the discriminative Kalman filter to produce an estimate of the neural intention given the history of neural data. We refer to the multiple kernel approach combined with the discriminative Kalman filter as the MK-DKF. We found that the MK-DKF decoder was more resilient to nonstationarities frequently encountered in real-world settings yet provided similar performance to the currently used Kalman decoder. These results demonstrate a method by which neural decoding can be made more resistant to nonstationarities.
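
The abstract's key construction, one RBF kernel per neural feature summed across dimensions, fits in a few lines. A minimal sketch (bandwidth and names are assumptions made here; the discriminative Kalman filter stage is omitted):

```python
import numpy as np

def summed_rbf(A, B, sigma=1.0):
    """Additive kernel k(x, x') = sum_j exp(-(x_j - x'_j)^2 / (2 sigma^2)).
    A nonstationarity that corrupts a single feature perturbs only one
    additive term, so the kernel degrades gracefully; this is the
    robustness property the summed-kernel decoder relies on."""
    d2 = (A[:, None, :] - B[None, :, :]) ** 2         # per-feature squared distances
    return np.exp(-d2 / (2 * sigma**2)).sum(axis=-1)  # sum over feature dimensions
```
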
10. Ali, Sarwan, and Murray Patterson. "Improving ISOMAP Efficiency with RKS: A Comparative Study with t-Distributed Stochastic Neighbor Embedding on Protein Sequences". J 6, no. 4 (October 31, 2023): 579–91. http://dx.doi.org/10.3390/j6040038.

Abstract:
Data visualization plays a crucial role in gaining insights from high-dimensional datasets. ISOMAP is a popular algorithm that maps high-dimensional data into a lower-dimensional space while preserving the underlying geometric structure. However, ISOMAP can be computationally expensive, especially for large datasets, due to the computation of the pairwise distances between data points. The motivation behind this study is to improve efficiency by leveraging an approximate method, which is based on random kitchen sinks (RKS). This approach provides a faster way to compute the kernel matrix. Using RKS significantly reduces the computational complexity of ISOMAP while still obtaining a meaningful low-dimensional representation of the data. We compare the performance of the approximate ISOMAP approach using RKS with the traditional t-SNE algorithm. The comparison involves computing the distance matrix using the original high-dimensional data and the low-dimensional data computed from both t-SNE and ISOMAP. The quality of the low-dimensional embeddings is measured using several metrics, including mean squared error (MSE), mean absolute error (MAE), and explained variance score (EVS). Additionally, the runtime of each algorithm is recorded to assess its computational efficiency. The comparison is conducted on a set of protein sequences, used in many bioinformatics tasks. We use three different embedding methods based on k-mers, minimizers, and position weight matrix (PWM) to capture various aspects of the underlying structure and the relationships between the protein sequences. By comparing different embeddings and by evaluating the effectiveness of the approximate ISOMAP approach using RKS and comparing it against t-SNE, we provide insights on the efficacy of our proposed approach. Our goal is to retain the quality of the low-dimensional embeddings while improving the computational performance.
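
Random kitchen sinks (RKS), the approximation this study leverages, replace the exact kernel matrix with an explicit random feature map (Rahimi and Recht). A minimal sketch for the Gaussian kernel (feature count and bandwidth are illustrative):

```python
import numpy as np

def rks_features(X, n_features=256, sigma=1.0, seed=0):
    """Random Fourier features z(X) such that z(x) @ z(y) approximates
    exp(-||x - y||^2 / (2 sigma^2)); frequencies are drawn from the
    Gaussian kernel's spectral density N(0, sigma^-2 I)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# The n x n kernel matrix is then approximated by a product of thin
# matrices, in O(n D) memory instead of O(n^2):
#   Z = rks_features(X); K_approx = Z @ Z.T
```
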

Theses / dissertations on the topic "Kernel mean embedding"

1. Hsu, Yuan-Shuo Kelvin. "Bayesian Perspectives on Conditional Kernel Mean Embeddings: Hyperparameter Learning and Probabilistic Inference". Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24309.

Abstract:
This thesis presents the narrative of a particular journey towards discovering and developing Bayesian perspectives on conditional kernel mean embeddings. It is motivated by the desire and need to learn flexible and richer representations of conditional distributions for probabilistic inference in various contexts. While conditional kernel mean embeddings are able to achieve such representations, it is unclear how their hyperparameters can be learned for probabilistic inference in various settings. These hyperparameters govern the space of possible representations, and critically influence the degree of inference accuracy. At its core, this thesis argues for the notion that Bayesian perspectives lead to principled ways of formulating frameworks that provide a holistic treatment of model, learning, and inference. The story begins by emulating required properties of Bayesian frameworks via learning theoretic bounds. This is carried through the lens of a probabilistic multiclass setting, resulting in the multiclass conditional embedding framework. Through establishing convergence to multiclass probabilities and deriving learning theoretic and Rademacher complexity bounds, the framework arrives at an expected risk bound whose realizations exhibit desirable properties for hyperparameter learning such as the ever-crucial balance between data-fit error and model complexity, emulating marginal likelihoods. The probabilistic nature of this bound enables batch learning for scalability, and the generality of the model allows various model architectures to be used and learned end-to-end. The narrative unfolds into forming approximate Bayesian inference frameworks directly for the likelihood-free Bayesian inference problem, leading to the kernel embedding likelihood-free inference framework. The core motivator centers on the natural suitability of conditional kernel mean embeddings to forming surrogate probabilistic models. By leveraging the likelihood-free Bayesian inference problem structure, surrogate models for both hyperparameter learning and posterior inference are developed. Finally, the journey concludes with a Bayesian regression framework that aligns the learning and inference to both the problem and the model. This begins with a careful formulation of the conditional mean and the novel deconditional mean problem, thereby revealing the novel deconditional mean embeddings as core elements of the wider kernel mean embedding framework. They can further be established as a nonparametric Bayes' rule with applications towards Bayesian inference. Crucially, by introducing the task transformed regression problem, they can be extended to the novel task transformed Gaussian processes as their Bayesian form, whose marginal likelihood can be used to learn hyperparameters in various forms and contexts. The perspectives and frameworks developed in this thesis shed light on creative ways conditional kernel mean embeddings can be learned and applied in existing problem domains, and further inspire elegant solutions in novel problem settings.
2. Muandet, Krikamol. "From Points to Probability Measures: Statistical Learning on Distributions with Kernel Mean Embedding". PhD thesis, supervised by Bernhard Schölkopf. Tübingen: Universitätsbibliothek Tübingen, 2015. http://d-nb.info/1163664804/34.

3. Fermanian, Jean-Baptiste. "High dimensional multiple means estimation and testing with applications to machine learning". Doctoral thesis, Université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM035.

Abstract:
In this thesis, we study the influence of high dimension in testing and estimation problems. We analyze the dimension dependence of the separation rate of a closeness test and of the quadratic risk of multiple vector estimation. We complement existing results by studying these dependencies in the case of non-isotropic distributions. For such distributions, the role of dimension is played by notions of effective dimension defined from the covariance of the distributions. This framework covers infinite-dimensional data such as the kernel mean embedding, a machine learning tool we seek to estimate. Using this analysis, we construct methods for simultaneously estimating the mean vectors of different distributions from independent samples of each. These estimators perform better, theoretically and practically, than the empirical mean in unfavorable situations where the (effective) dimension is large. These methods make explicit or implicit use of the relative ease of testing compared with estimation. They are based on the construction of estimators of distances and moments of the covariance, for which we provide non-asymptotic concentration bounds. Particular attention is given to the study of bounded data, for which a specific analysis is required. Our methods are accompanied by a minimax analysis justifying their optimality. In a final part, we propose an interpretation of the attention mechanism used in Transformer neural networks as a multiple vector estimation problem. In a simplified framework, this mechanism shares similar ideas with our approaches, and we highlight its denoising effect in high dimension.
4. Chen, Tian Qi. "Deep kernel mean embeddings for generative modeling and feedforward style transfer". Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62668.

Abstract:
The generation of data has traditionally been specified using hand-crafted algorithms. However, oftentimes the exact generative process is unknown while only a limited number of samples are observed. One such case is generating images that look visually similar to an exemplar image or as if coming from a distribution of images. We look into learning the generating process by constructing a similarity function that measures how close the generated image is to the target image. We discuss a framework in which the similarity function is specified by a pre-trained neural network without fine-tuning, as is the case for neural texture synthesis, and a framework where the similarity function is learned along with the generative process in an adversarial setting, as is the case for generative adversarial networks. The main point of discussion is the combined use of neural networks and maximum mean discrepancy as a versatile similarity function. Additionally, we describe an improvement to state-of-the-art style transfer that allows faster computations while maintaining generality of the generating process. The proposed objective has desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video. We use 80,000 natural images and 80,000 paintings to train a procedure for artistic style transfer that is efficient but also allows arbitrary content and style images.

Books on the topic "Kernel mean embedding"

1. Muandet, Krikamol, Kenji Fukumizu, Bharath Kumar Sriperumbudur VanGeepuram, and Bernhard Schölkopf. Kernel Mean Embedding of Distributions: A Review and Beyond. Now Publishers, 2017.

2. Sriperumbudur, Bharath K. Kernel Mean Embedding of Distributions: A Review and Beyond. 2017.


Book chapters on the topic "Kernel mean embedding"

1. Fukumizu, Kenji. "Nonparametric Bayesian Inference with Kernel Mean Embedding". In Modern Methodology and Applications in Spatial-Temporal Modeling, 1–24. Tokyo: Springer Japan, 2015. http://dx.doi.org/10.1007/978-4-431-55339-7_1.

2. Wickstrøm, Kristoffer, J. Emmanuel Johnson, Sigurd Løkse, Gustau Camps-Valls, Karl Øyvind Mikalsen, Michael Kampffmeyer, and Robert Jenssen. "The Kernelized Taylor Diagram". In Communications in Computer and Information Science, 125–31. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17030-0_10.

Abstract:
This paper presents the kernelized Taylor diagram, a graphical framework for visualizing similarities between data populations. The kernelized Taylor diagram builds on the widely used Taylor diagram, which is used to visualize similarities between populations. However, the Taylor diagram has several limitations, such as not capturing non-linear relationships and sensitivity to outliers. To address such limitations, we propose the kernelized Taylor diagram. Our proposed kernelized Taylor diagram is capable of visualizing similarities between populations with minimal assumptions on the data distributions. The kernelized Taylor diagram relates the maximum mean discrepancy and the kernel mean embedding in a single diagram, a construction that, to the best of our knowledge, has not been devised prior to this work. We believe that the kernelized Taylor diagram can be a valuable tool in data visualization.
3. Hsu, Kelvin, Richard Nock, and Fabio Ramos. "Hyperparameter Learning for Conditional Kernel Mean Embeddings with Rademacher Complexity Bounds". In Machine Learning and Knowledge Discovery in Databases, 227–42. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-10928-8_14.

4. Xie, Yi, Zhi-Hao Tan, Yuan Jiang, and Zhi-Hua Zhou. "Identifying Helpful Learnwares Without Examining the Whole Market". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230585.

Abstract:
The learnware paradigm aims to construct a market of numerous well-performing machine learning models, which enables users to leverage these models to accomplish specific tasks without having to build models from scratch. Each learnware in the market is a model associated with a specification, representing the model’s utility and enabling it to be identified according to future users’ requirements. In the learnware paradigm, due to the vast and ever-increasing number of models in the market, a significant challenge is to identify helpful learnwares efficiently for a specific user task without leaking data privacy. However, existing identification methods require examining the whole market, which is computationally unaffordable in a large market. In this paper, we propose a new framework for identifying helpful learnwares without examining the whole market. Specifically, using the Reduced Kernel Mean Embedding (RKME) specification, we derive a novel learnware scoring criterion for assessing the helpfulness of a learnware, based on which we design an anchor-based framework to identify helpful learnwares by examining only a small portion of learnwares in the market. Theoretical analyses are provided for both the criterion and the anchor-based method. Empirical studies on a market containing thousands of learnwares from real-world datasets confirm the effectiveness of our proposed approach.

Conference papers on the topic "Kernel mean embedding"

1. Luo, Mingjie, Jie Zhou, and Qingke Zou. "Multisensor Estimation Fusion Based on Kernel Mean Embedding". In 2024 27th International Conference on Information Fusion (FUSION), 1–7. IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706487.

2. Guan, Zengda, and Juan Zhang. "Quantitative Associative Classification Based on Kernel Mean Embedding". In CSAI 2020: 2020 4th International Conference on Computer Science and Artificial Intelligence. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3445815.3445827.

3. Tang, Shuhao, Hao Tian, Xiaofeng Cao, and Wei Ye. "Deep Hierarchical Graph Alignment Kernels". In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/549.

Abstract:
Typical R-convolution graph kernels invoke the kernel functions that decompose graphs into non-isomorphic substructures and compare them. However, overlooking implicit similarities and topological position information between those substructures limits their performances. In this paper, we introduce Deep Hierarchical Graph Alignment Kernels (DHGAK) to resolve this problem. Specifically, the relational substructures are hierarchically aligned to cluster distributions in their deep embedding space. The substructures belonging to the same cluster are assigned the same feature map in the Reproducing Kernel Hilbert Space (RKHS), where graph feature maps are derived by kernel mean embedding. Theoretical analysis guarantees that DHGAK is positive semi-definite and has linear separability in the RKHS. Comparison with state-of-the-art graph kernels on various benchmark datasets demonstrates the effectiveness and efficiency of DHGAK. The code is available on GitHub (https://github.com/EWesternRa/DHGAK).
4. Ding, Xiao, Bibo Cai, Ting Liu, and Qiankun Shi. "Domain Adaptation via Tree Kernel Based Maximum Mean Discrepancy for User Consumption Intention Identification". In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/560.

Abstract:
Identifying user consumption intention from social media is of great interest to downstream applications. Since such a task is domain-dependent, deep neural networks have been applied to learn transferable features for adapting models from a source domain to a target domain. A basic idea to solve this problem is reducing the distribution difference between the source domain and the target domain such that the transfer error can be bounded. However, the feature transferability drops dramatically in higher layers of deep neural networks with increasing domain discrepancy. Hence, previous work has had to use a small amount of annotated target-domain data to train domain-specific layers. In this paper, we propose a deep transfer learning framework for consumption intention identification, to reduce the data bias and enhance the transferability in domain-specific layers. In our framework, the representation of the domain-specific layer is mapped to a reproducing kernel Hilbert space, where the mean embeddings of different domain distributions can be explicitly matched. By using an optimal tree kernel method for measuring the mean embedding matching, the domain discrepancy can be effectively reduced. The framework can learn transferable features in a completely unsupervised manner with statistical guarantees. Experimental results on five different domain datasets show that our approach dramatically outperforms state-of-the-art baselines, and it is general enough to be applied to more scenarios. The source code and datasets can be found at http://ir.hit.edu.cn/~xding/index_english.htm.
5. Zhu, Jia-Jie, Wittawat Jitkrittum, Moritz Diehl, and Bernhard Schölkopf. "Worst-Case Risk Quantification under Distributional Ambiguity using Kernel Mean Embedding in Moment Problem". In 2020 59th IEEE Conference on Decision and Control (CDC). IEEE, 2020. http://dx.doi.org/10.1109/cdc42340.2020.9303938.

6. Romao, Licio, Ashish R. Hota, and Alessandro Abate. "Distributionally Robust Optimal and Safe Control of Stochastic Systems via Kernel Conditional Mean Embedding". In 2023 62nd IEEE Conference on Decision and Control (CDC). IEEE, 2023. http://dx.doi.org/10.1109/cdc49753.2023.10383997.

7. Liu, Qiao, and Hui Xue. "Adversarial Spectral Kernel Matching for Unsupervised Time Series Domain Adaptation". In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/378.

Abstract:
Unsupervised domain adaptation (UDA) has received increasing attention since it does not require labels in the target domain. Most existing UDA methods learn domain-invariant features by minimizing a discrepancy distance computed by a certain metric between domains. However, these discrepancy-based methods cannot be robustly applied to unsupervised time series domain adaptation (UTSDA). That is because discrepancy metrics in these methods contain only low-order and local statistics, which have limited expressiveness for time series distributions and therefore result in failure of domain matching. Actually, real-world time series always have non-local distributions, i.e., with non-stationary and non-monotonic statistics. In this paper, we propose an Adversarial Spectral Kernel Matching (AdvSKM) method, where a hybrid spectral kernel network is specifically designed as the inner kernel to reform the Maximum Mean Discrepancy (MMD) metric for UTSDA. The hybrid spectral kernel network can precisely characterize non-stationary and non-monotonic statistics in time series distributions. Embedding the hybrid spectral kernel network into MMD not only guarantees a precise discrepancy metric but also benefits domain matching. Besides, the differentiable architecture of the spectral kernel network enables adversarial kernel learning, which brings more discriminative expressiveness for discrepancy matching. The results of extensive experiments on several real-world UTSDA tasks verify the effectiveness of our proposed method.
8. Tan, Peng, Zhi-Hao Tan, Yuan Jiang, and Zhi-Hua Zhou. "Handling Learnwares Developed from Heterogeneous Feature Spaces without Auxiliary Data". In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/471.

Abstract:
The learnware paradigm proposed by Zhou [2016] is devoted to constructing a market of numerous well-performing models, enabling users to solve problems by reusing existing efforts rather than starting from scratch. A learnware comprises a trained model and a specification which enables the model to be adequately identified according to the user's requirement. Previous studies concentrated on the homogeneous case where models share the same feature space based on the Reduced Kernel Mean Embedding (RKME) specification. However, in real-world scenarios, models are typically constructed from different feature spaces. If such a scenario can be handled by the market, all models built for a particular task, even with different feature spaces, can be identified and reused for a new user task. Generally, this problem would be easier if there were additional auxiliary data connecting different feature spaces; however, obtaining such data in reality is challenging. In this paper, we present a general framework for accommodating heterogeneous learnwares without requiring additional auxiliary data. The key idea is to utilize the submitted RKME specifications to establish the relationship between different feature spaces. Additionally, we give a matrix factorization-based implementation and propose the overall procedure for constructing and exploiting the heterogeneous learnware market. Experiments on real-world tasks validate the efficacy of our method.
9. Shan, Siyuan, Vishal Athreya Baskaran, Haidong Yi, Jolene Ranek, Natalie Stanley, and Junier B. Oliva. "Transparent single-cell set classification with kernel mean embeddings". In BCB '22: 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3535508.3545538.

10. Elgohary, Ahmed, Ahmed K. Farahat, Mohamed S. Kamel, and Fakhri Karray. "Embed and Conquer: Scalable Embeddings for Kernel k-Means on MapReduce". In Proceedings of the 2014 SIAM International Conference on Data Mining. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2014. http://dx.doi.org/10.1137/1.9781611973440.49.
