Academic literature on the topic 'Kernel'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kernel.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Kernel"

1

Yang, Tianbao, Mehrdad Mahdavi, Rong Jin, Jinfeng Yi, and Steven Hoi. "Online Kernel Selection: Algorithms and Evaluations." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1197–203. http://dx.doi.org/10.1609/aaai.v26i1.8298.

Abstract:
Kernel methods have been successfully applied to many machine learning problems. Nevertheless, since the performance of kernel methods depends heavily on the type of kernels being used, identifying good kernels among a set of given kernels is important to the success of kernel methods. A straightforward approach to address this problem is cross-validation by training a separate classifier for each kernel and choosing the best kernel classifier out of them. Another approach is Multiple Kernel Learning (MKL), which aims to learn a single kernel classifier from an optimal combination of multiple kernels. However, both approaches suffer from a high computational cost in computing the full kernel matrices and in training, especially when the number of kernels or the number of training examples is very large. In this paper, we tackle this problem by proposing an efficient online kernel selection algorithm. It incrementally learns a weight for each kernel classifier. The weight for each kernel classifier can help us to select a good kernel among a set of given kernels. The proposed approach is efficient in that (i) it is an online approach and therefore avoids computing all the full kernel matrices before training; (ii) it only updates a single kernel classifier each time by a sampling technique and therefore saves time on updating kernel classifiers with poor performance; (iii) it has a theoretically guaranteed performance compared to the best kernel predictor. Empirical studies on image classification tasks demonstrate the effectiveness of the proposed approach for selecting a good kernel among a set of kernels.
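The weight-per-kernel idea is easy to sketch. The following is a minimal illustration in the spirit of the abstract, not the authors' exact algorithm: it keeps one kernel perceptron per candidate kernel, samples a single classifier to update at each round, and multiplicatively down-weights kernels that err (the update rule, learning rate, and kernel choices here are assumptions).

```python
import numpy as np

def gaussian(sigma):
    return lambda a, b: np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))

class OnlineKernelSelector:
    def __init__(self, kernels, eta=0.1, seed=0):
        self.kernels = kernels
        self.eta = eta
        self.rng = np.random.default_rng(seed)
        self.weights = np.ones(len(kernels))      # one weight per kernel
        self.support = [[] for _ in kernels]      # (x, y) pairs per classifier

    def _score(self, i, x):
        return sum(y * self.kernels[i](xs, x) for xs, y in self.support[i])

    def predict(self, x):
        probs = self.weights / self.weights.sum()
        i = self.rng.choice(len(self.kernels), p=probs)
        return 1.0 if self._score(i, x) >= 0 else -1.0

    def update(self, x, y):
        probs = self.weights / self.weights.sum()
        i = self.rng.choice(len(self.kernels), p=probs)  # update one classifier only
        if y * self._score(i, x) <= 0:                   # mistake-driven update
            self.support[i].append((x, y))
            self.weights[i] *= np.exp(-self.eta)         # penalize the erring kernel

selector = OnlineKernelSelector([gaussian(0.5), gaussian(1.0), gaussian(2.0)])
```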
2

Wang, Peiyan, and Dongfeng Cai. "Multiple kernel learning by empirical target kernel." International Journal of Wavelets, Multiresolution and Information Processing 18, no. 02 (September 24, 2019): 1950058. http://dx.doi.org/10.1142/s0219691319500589.

Abstract:
Multiple kernel learning (MKL) aims at learning an optimal combination of base kernels with which an appropriate hypothesis is determined on the training data. MKL owes its flexibility to automated kernel learning, and also reflects the fact that typical learning problems often involve multiple and heterogeneous data sources. The target kernel is one of the most important components of many MKL methods, which find the kernel weights by maximizing the similarity or alignment between the weighted kernel and the target kernel. Existing target kernels are defined in a global manner, which (1) assigns the same target value to closer and farther sample pairs, inappropriately neglecting the variation among samples; and (2) is independent of the training data and hard to approximate by base kernels. As a result, maximizing similarity to a global target kernel can leave the pre-specified kernels less effectively utilized, reducing classification performance. In this paper, instead of defining a global target kernel, a localized target kernel is calculated for each sample pair from the training data, which is flexible and handles sample variations well. A new target kernel, named the empirical target kernel, is proposed to implement this idea, and three corresponding algorithms are designed to utilize it efficiently. Experiments are conducted on four challenging MKL problems. The results show that our algorithms outperform other methods, verifying the effectiveness and superiority of the proposed approach.
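For reference, the alignment such methods maximize is usually the Frobenius-normalized inner product between the weighted Gram matrix and the target Gram matrix. A minimal sketch, using the classic global target y yᵀ that the paper argues against, since the localized empirical target requires details beyond the abstract:

```python
import numpy as np

def alignment(K1, K2):
    """Kernel alignment <K1, K2>_F / (||K1||_F * ||K2||_F)."""
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

def weighted_kernel(base_kernels, mu):
    """Convex combination of base Gram matrices."""
    return sum(m * K for m, K in zip(mu, base_kernels))

y = np.array([1.0, -1.0, 1.0, -1.0])
K_target = np.outer(y, y)                     # global target kernel
bases = [np.eye(4), np.ones((4, 4))]          # toy base kernels
print(alignment(weighted_kernel(bases, [0.7, 0.3]), K_target))
```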
3

Xu, Lixiang, Yuanyan Tang, Bin Luo, Lixin Cui, Xiu Chen, and Jin Xiao. "A combined Weisfeiler–Lehman graph kernel for structured data." International Journal of Wavelets, Multiresolution and Information Processing 16, no. 05 (September 2018): 1850039. http://dx.doi.org/10.1142/s021969131850039x.

Abstract:
Different graph kernels may correspond to different notions of similarity or may use information coming from multiple sources. In this paper, we develop a common method to construct a combined graph kernel (CGK) from a family of graph kernels. We define three kinds of CGK. The first, the weighted combined graph kernel, is parametric. The second, the accuracy-ratio-weighted combined graph kernel, is non-parametric. The third, the product combined graph kernel, is also non-parametric. All three definitions can be used to construct a CGK from a family of graph kernels. In this paper, the family is instantiated with kernels based on the Weisfeiler–Lehman (WL) sequence of graphs, including a highly efficient subtree kernel, an edge kernel, and a shortest-path kernel. Experiments demonstrate that our CGK based on WL graph kernels outperforms the corresponding single WL graph kernels on several classification benchmark data sets.
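Because nonnegative sums and element-wise products of valid kernel matrices are again valid kernels, the three CGK variants can be sketched directly on precomputed WL kernel matrices. The weighting below is a simplified stand-in for the parametric and accuracy-ratio schemes in the paper:

```python
import numpy as np

def combined_graph_kernel(kernel_mats, weights=None, mode="sum"):
    """Combine precomputed graph kernel matrices into one CGK."""
    if mode == "prod":                      # product CGK (non-parametric)
        K = np.ones_like(kernel_mats[0])
        for Km in kernel_mats:
            K = K * Km
        return K
    if weights is None:                     # e.g. accuracy ratios per kernel
        weights = np.ones(len(kernel_mats))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * Km for wi, Km in zip(w, kernel_mats))
```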
4

Mihaylova, Dasha, Aneta Popova, Ivayla Dincheva, and Svetla Pandova. "HS-SPME-GC–MS Profiling of Volatile Organic Compounds and Polar and Lipid Metabolites of the “Stendesto” Plum–Apricot Kernel with Reference to Its Parents." Horticulturae 10, no. 3 (March 7, 2024): 257. http://dx.doi.org/10.3390/horticulturae10030257.

Abstract:
Plum–apricot hybrids are the successful backcrosses of plums and apricots. Plums and apricots are well known and preferred by consumers because of their distinct sensory and beneficial health properties. However, kernel consumption remains limited even though kernels are easily accessible. The “Stendesto” hybrid originates from the “Modesto” apricot and the “Stanley” plum. Kernel metabolites exhibited quantitative differences in the compounds identified by gas chromatography–mass spectrometry (GC–MS) analysis and HS-SPME profiling. The results revealed a total of 55 different compounds. Phenolic acids, hydrocarbons, organic acids, fatty acids, sugar acids and alcohols, mono- and disaccharides, as well as amino acids were identified in the studied kernels. The hybrid kernel generally inherited all the metabolites present in the parental kernels. Volatile organic compounds (VOCs) were also investigated: thirty-five compounds identified as aldehydes, alcohols, ketones, furans, acids, esters, and alkanes were present in the studied samples. With respect to VOCs, the hybrid kernel more closely resembled the plum, bearing in mind that alkanes were identified only in the apricot kernel. The objective of this study was to investigate the volatile composition and metabolic profile of the first Bulgarian plum–apricot hybrid kernels and to provide comparable data for both parents. With the aid of principal component analysis (PCA) and hierarchical cluster analysis (HCA), the results were differentiated and clustered in terms of the metabolites present in the plum–apricot hybrid kernels with reference to their parental lines. This study is the first to provide information about the metabolic profile of variety-defined kernels, and it is also a pioneering study in the comprehensive evaluation of fruit hybrids.
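As a rough sketch of the chemometrics step mentioned at the end of the abstract, PCA ordination and Ward hierarchical clustering of a metabolite-abundance table might look as follows (the matrix shape and sample layout here are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical table: 9 kernel samples (3 replicates each of "Stendesto",
# "Modesto", "Stanley") x 55 identified compounds.
X = np.random.rand(9, 55)

scores = PCA(n_components=2).fit_transform(X)               # PCA ordination
groups = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
print(scores.shape, groups)                                 # (9, 2), 3 clusters
```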
5

Wang, Ke, Ligang Cheng, and Bin Yong. "Spectral-Similarity-Based Kernel of SVM for Hyperspectral Image Classification." Remote Sensing 12, no. 13 (July 6, 2020): 2154. http://dx.doi.org/10.3390/rs12132154.

Abstract:
Spectral similarity measures can be regarded as potential metrics for kernel functions, and can be used to generate spectral-similarity-based kernels. However, spectral-similarity-based kernels have not received significant attention from researchers. In this paper, we propose two novel spectral-similarity-based kernels based on the spectral angle mapper (SAM) and spectral information divergence (SID) combined with the radial basis function (RBF) kernel: the power spectral angle mapper RBF (Power-SAM-RBF) and normalized spectral information divergence-based RBF (Normalized-SID-RBF) kernels. First, we prove these spectral-similarity-based kernels to be Mercer kernels. Second, we analyze their efficiency in terms of local and global kernels. Finally, we consider three hyperspectral datasets to analyze the effectiveness of the proposed spectral-similarity-based kernels. Experimental results demonstrate that the Power-SAM-RBF and SAM-RBF kernels can obtain impressive performance, particularly the Power-SAM-RBF kernel. For example, when the ratio of the training set is 20%, the kappa coefficient of the Power-SAM-RBF kernel (0.8561) is 1.61%, 1.32%, and 1.23% higher than that of the RBF kernel on the Indian Pines, University of Pavia, and Salinas Valley datasets, respectively. We present three conclusions. First, the superiority of the Power-SAM-RBF kernel compared to other kernels is evident. Second, the Power-SAM-RBF kernel can provide outstanding performance when the similarity between spectral signatures in the same hyperspectral dataset is either extremely high or extremely low. Third, the Power-SAM-RBF kernel provides even greater benefits compared to other commonly used kernels as the size of the training set increases. In future work, multiple kernels combined with the spectral-similarity-based kernel are expected to provide better hyperspectral classification.
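A sketch of the construction: compute the spectral angle between two signatures and plug it, raised to a power, into an RBF form. The exact parameterization of the paper's Power-SAM-RBF kernel may differ; the form below is an assumption for illustration:

```python
import numpy as np

def sam(x, y, eps=1e-12):
    """Spectral angle mapper: angle between two spectral signatures."""
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)
    return np.arccos(np.clip(c, -1.0, 1.0))

def power_sam_rbf(x, y, gamma=1.0, p=2.0):
    # Assumed form: RBF applied to the powered spectral angle.
    return np.exp(-gamma * sam(x, y) ** p)

a, b = np.random.rand(200), np.random.rand(200)   # two toy spectra
print(power_sam_rbf(a, b))
```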
6

Yang, Bo. "Multiple Kernel Feature Fusion Using Kernel Fisher Method." Applied Mechanics and Materials 333-335 (July 2013): 1406–9. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.1406.

Abstract:
Different from existing multiple kernel methods, which mainly work in an implicit kernel space, we propose a novel multiple kernel method in the empirical kernel mapping space. In this space, the combination of kernels can be treated as the weighted fusion of empirical kernel mapping samples. Based on this fact, we develop a multiple kernel Fisher method to realize multiple kernel classification in the empirical kernel mapping space. The experiments here illustrate that the proposed multiple kernel Fisher method is feasible and effective.
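The empirical kernel map behind this construction can be made explicit: factor the training Gram matrix as K = VΛVᵀ, then map a point x, represented by its kernel column k_x = [k(x₁,x), …, k(xₙ,x)]ᵀ, to Λ^(−1/2)Vᵀ k_x; inner products of mapped training points reproduce K exactly. A minimal sketch:

```python
import numpy as np

def empirical_kernel_map(K, tol=1e-10):
    """Return phi(k_x) = diag(lam)^(-1/2) V^T k_x for K = V diag(lam) V^T."""
    lam, V = np.linalg.eigh(K)
    keep = lam > tol                  # drop numerically null directions
    lam, V = lam[keep], V[:, keep]
    W = V / np.sqrt(lam)              # columns scaled by lam^(-1/2)
    return lambda k_x: W.T @ k_x

X = np.random.randn(20, 3)
K = X @ X.T                           # linear kernel for the check
phi = empirical_kernel_map(K)
print(np.allclose(phi(K[:, 0]) @ phi(K[:, 1]), K[0, 1]))   # True
```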
7

Dioşan, Laura, Alexandrina Rogozan, and Jean-Pierre Pecuchet. "Learning SVM with Complex Multiple Kernels Evolved by Genetic Programming." International Journal on Artificial Intelligence Tools 19, no. 05 (October 2010): 647–77. http://dx.doi.org/10.1142/s0218213010000352.

Abstract:
Classic kernel-based classifiers use only a single kernel, but real-world applications have emphasized the need to consider a combination of kernels, also known as a multiple kernel (MK), in order to boost classification accuracy by adapting better to the characteristics of the data. Our purpose is to automatically design a complex multiple kernel by evolutionary means. To achieve this, we propose a hybrid model that combines a Genetic Programming (GP) algorithm and a kernel-based Support Vector Machine (SVM) classifier. In our model, each GP chromosome is a tree that encodes the mathematical expression of a multiple kernel. The evolutionary search for the optimal MK is guided by the fitness (or efficiency) of each candidate MK. The complex multiple kernels evolved in this manner (eCMKs) are compared to several classic simple kernels (SKs), to a convex linear multiple kernel (cLMK), and to an evolutionary linear multiple kernel (eLMK) on several real-world data sets from the UCI repository. The numerical experiments show that SVMs involving the evolutionary complex multiple kernels perform better than those with classic simple kernels. Moreover, on the considered data sets, the new multiple kernels outperform both the cLMK and the eLMK linear multiple kernels. These results emphasize that the SVM algorithm requires a combination of kernels more complex than a linear one in order to boost its performance.
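The representation at the heart of this approach, a multiple kernel encoded as an expression tree over base kernels, is easy to sketch with closures. The particular composition below is made up and the GP search itself is omitted; closure of kernels under addition, multiplication, and positive scaling keeps the result a valid kernel:

```python
import numpy as np

lin  = lambda x, y: float(np.dot(x, y))
rbf  = lambda g: (lambda x, y: float(np.exp(-g * np.linalg.norm(x - y) ** 2)))
poly = lambda d: (lambda x, y: float((np.dot(x, y) + 1) ** d))

add   = lambda k1, k2: (lambda x, y: k1(x, y) + k2(x, y))
mul   = lambda k1, k2: (lambda x, y: k1(x, y) * k2(x, y))
scale = lambda c, k:   (lambda x, y: c * k(x, y))

# One GP "chromosome": (0.4 * rbf(0.5) + 0.6 * poly(2)) * lin
eCMK = mul(add(scale(0.4, rbf(0.5)), scale(0.6, poly(2))), lin)
print(eCMK(np.ones(3), np.zeros(3) + 0.5))
```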
8

Hwang, Jeongsik, and Sadaaki Miyamoto. "Kernel Functions Derived from Fuzzy Clustering and Their Application to Kernel Fuzzy c-Means." Journal of Advanced Computational Intelligence and Intelligent Informatics 15, no. 1 (January 20, 2011): 90–94. http://dx.doi.org/10.20965/jaciii.2011.p0090.

Abstract:
Among the kernel functions widely used in data analysis, for example in support vector machines, the Gaussian kernel is the most common. This kernel arises in entropy-based fuzzy c-means clustering. There is reason, however, to check whether other types of functions used in fuzzy c-means are also kernels. Using completely monotone functions, we show that they can be kernels if a regularization constant proposed by Ichihashi is introduced. We also show how these kernel functions are applied to kernel-based fuzzy c-means clustering, where they outperform the Gaussian kernel in a typical example.
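One way to experiment with the claim is to build Gram matrices K_ij = f(‖x_i − x_j‖²) from candidate completely monotone functions and check positive semi-definiteness numerically. The function (1 + t)^(−λ) below is a standard completely monotone example, with λ > 0 playing the role of a constant; the exact functions and regularization constant from the paper are not reproduced here:

```python
import numpy as np

def gram(X, f):
    """K_ij = f(||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return f(sq)

def is_psd(K, tol=1e-8):
    return bool(np.all(np.linalg.eigvalsh(K) >= -tol))

X = np.random.randn(30, 2)
print(is_psd(gram(X, lambda t: np.exp(-t))))          # Gaussian: always PSD
print(is_psd(gram(X, lambda t: (1 + t) ** (-1.0))))   # completely monotone example
```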
9

Smith, Michael W., Becky S. Cheary, and Becky L. Carroll. "The Occurrence of Pecan Kernel Necrosis." HortScience 42, no. 6 (October 2007): 1351–56. http://dx.doi.org/10.21273/hortsci.42.6.1351.

Abstract:
Pecan [Carya illinoinensis (Wangenh.) C. Koch] kernels (cotyledons) of ‘Pawnee’ displayed a consistent malady not described previously, designated “kernel necrosis.” The most severe form of the problem was blackened, necrotic tissue engulfing the basal one-half to one-third of the kernel. The mildest form was darkened tissue in the dorsal groove at the basal end of the kernel. The problem was first observable during the gel stage of kernel development. No symptoms of kernel necrosis were visible on the shuck (involucre). Kernel necrosis was more prominent on ‘Pawnee’, ‘Choctaw’, and ‘Oklahoma’ than on other cultivars observed. At maturity, nuts with kernel necrosis had a larger volume than nuts with normal kernels. There were few differences in elemental concentrations between normal kernels from a severely affected orchard and those from an orchard with little kernel necrosis, and none of the differences appeared to be associated with this disorder. ‘Pawnee’ kernels with necrosis had more phosphorus, zinc, and manganese than normal kernels. Basal segments of necrotic kernels had more boron and acetic acid-extractable and water-soluble calcium than distal segments or normal kernels. The higher elemental concentrations in basal segments of necrotic kernels did not appear sufficient to cause tissue damage. Soil from the orchard with severe kernel necrosis had unusually high concentrations of nitrate, expressed as nitrogen (NO3-N), in the soil profile. Groundwater used for irrigation was contaminated with 34 mg·L−1 NO3-N. An experiment on ‘Pawnee’ evaluated three nitrogen (N) rates (0; 0.8 g·cm−2 cross-sectional trunk area applied in March; and 1.6 g + 1.6 g + 1.2 g·cm−2 cross-sectional trunk area applied during the second week in March, the first week in June, and the first week in September, respectively) on the incidence of kernel necrosis, leaf N concentration, soil NO3 concentration, yield, nut quality, and growth over 5 years. Leaf N was affected by treatment only once during the study. Nitrates accumulated in the soil, increasing 24% in 3 years even when no supplemental N was applied apart from the contaminated irrigation water. Kernel necrosis was either unaffected by N treatment or, in 1 year, was highest without supplemental N application. Tree yield, kernel quality, and growth were unaffected by N treatment. Yield fluctuations among years were apparent, demonstrating that an abundant N supply did not prevent alternate bearing. Kernel necrosis was a severe problem in one orchard and was identified in several orchards at low frequencies. The cause of kernel necrosis remains unknown.
10

Chen, Cuiling, Zhijun Hu, Hongbin Xiao, Junbo Ma, and Zhi Li. "One-Step Clustering with Adaptively Local Kernels and a Neighborhood Kernel." Mathematics 11, no. 18 (September 17, 2023): 3950. http://dx.doi.org/10.3390/math11183950.

Abstract:
Among multiple kernel clustering (MKC) methods, some adopt a neighborhood kernel as the optimal kernel, and some use local base kernels to generate an optimal kernel. However, these two ideas have not been combined to leverage their complementary advantages, which limits the quality of the optimal kernel. Furthermore, most existing MKC methods require a two-step strategy: first learn an indicator matrix, then execute clustering. This does not guarantee the optimality of the final results. To overcome these drawbacks, we propose one-step clustering with adaptively local kernels and a neighborhood kernel (OSC-ALK-ONK), in which the two ideas are combined to produce an optimal kernel. In particular, the neighborhood kernel improves the expressive capability of the optimal kernel and enlarges its search range, while local base kernels avoid redundancy among base kernels and promote their variety. Accordingly, the quality of the optimal kernel is enhanced. Further, a soft block diagonal (BD) regularizer is utilized to encourage the indicator matrix to be BD, which helps to obtain explicit clustering results directly and achieve one-step clustering, overcoming the disadvantage of the two-step strategy. Extensive experiments on eight data sets and comparisons with six clustering methods show that OSC-ALK-ONK is effective.

Dissertations / Theses on the topic "Kernel"

1

Shanmugam, Bala Priyadarshini. "Investigation of kernels for the reproducing kernel particle method." Birmingham, Ala. : University of Alabama at Birmingham, 2009. https://www.mhsl.uab.edu/dt/2009m/shanmugam.pdf.

2

Walls, Jacob. "Kernel." Thesis, University of Oregon, 2015. http://hdl.handle.net/1794/19203.

Abstract:
Kernel is a fifteen-minute work for wind ensemble. Its unifying strands of rhythm, melody, and harmony are spun out of simple four-note tone clusters which undergo changes in contour, intervallic inversion, register, texture, and harmonic environment. These four notes make up the "kernel" of the work, a word used by Breton to refer to the indestructible element of darkness prior to all creative invention, as well as a term used in computer science to refer to the crucial element of a system that, if it should fail, does so loudly.
3

George, Sharath. "Usermode kernel: running the kernel in userspace in VM environments." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2858.

Abstract:
In many virtual machine deployments today, virtual machine instances are created to support a single application. Traditional operating systems provide an extensive framework for protecting one process from another. In such deployments, this protection layer becomes an additional source of overhead, as isolation between services is provided at the operating system level and each instance of an operating system supports only one service. This makes the operating system the equivalent of a process from the traditional operating system perspective. Isolation between these operating systems, and indirectly between the services they support, is ensured by the virtual machine monitor. In these scenarios, the process protection provided by the operating system becomes redundant and a source of additional overhead. We propose a new model for these scenarios, with operating systems that bypass this redundant protection offered by traditional operating systems. We prototyped such an operating system by executing parts of the operating system in the same protection ring as user applications. This gives processes more power and direct access to kernel memory, bypassing the need to copy data between user and kernel space as required when the traditional ring protection is enforced. This saves the system-call trap overhead and allows application programmers to call kernel functions directly, exposing the rich kernel library. It does not compromise security of the other virtual machines running on the same physical machine, as they are protected by the VMM. We illustrate the design and implementation of such a system with the Xen hypervisor and the XenoLinux kernel.
4

Guo, Lisong. "Boost the Reliability of the Linux Kernel : Debugging kernel oopses." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066378/document.

Abstract:
When a failure occurs in the Linux kernel, the kernel emits an error report called a “kernel oops,” summarizing the execution context of the failure. Kernel oopses describe real Linux errors, and thus can help prioritize debugging efforts and motivate the design of tools to improve the reliability of Linux code. Nevertheless, the information is only meaningful if it is representative and can be interpreted correctly. In this thesis, we study a collection of kernel oopses over a period of 8 months from a repository maintained by Red Hat. We consider the overall features of the data, the degree to which the data reflects other information about Linux, and the interpretation of features that may be relevant to reliability. We find that the data correlates well with other information about Linux, but that it suffers from duplicate and missing information. We furthermore identify some potential pitfalls in studying features such as the sources of common faults and common failing applications. Furthermore, a kernel oops provides valuable first-hand information for a Linux kernel maintainer to conduct postmortem debugging, since it logs the status of the Linux kernel at the time of a crash. However, debugging based on only the information in a kernel oops is difficult. To help developers with debugging, we devised a solution to derive the offending line from a kernel oops, i.e., the line of source code that incurs the crash. For this, we propose a novel algorithm based on approximate sequence matching, as used in bioinformatics, to automatically pinpoint the offending line based on information about nearby machine-code instructions, as found in a kernel oops. Our algorithm achieves 92% accuracy compared to 26% for the traditional approach of using only the oops instruction pointer. We integrated the solution into a tool named OOPSA, which relieves some of the burden of kernel oops debugging for developers.
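A toy analogue of the matching step, using Python's difflib in place of the bioinformatics-style alignment developed in the thesis: slide the short instruction snippet recorded in the oops over a longer listing and keep the best-scoring window (the instruction strings below are invented):

```python
import difflib

def locate(snippet, listing):
    """Best approximate match of snippet inside listing."""
    best, best_score = None, -1.0
    n = len(snippet)
    for i in range(len(listing) - n + 1):
        score = difflib.SequenceMatcher(None, snippet, listing[i:i + n]).ratio()
        if score > best_score:
            best, best_score = i, score
    return best, best_score

listing = ["mov", "add", "cmp", "jne", "mov", "call", "test", "jz"]
snippet = ["cmp", "jne", "mov", "call"]
print(locate(snippet, listing))   # -> (2, 1.0)
```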
5

Guo, Lisong. "Boost the Reliability of the Linux Kernel : Debugging kernel oopses." Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066378.

6

Mika, Sebastian. "Kernel Fisher discriminants." [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=967125413.

7

Sun, Fangzheng. "Kernel Coherence Encoders." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/252.

Abstract:
In this thesis, we introduce a novel model based on the idea of autoencoders. Different from a classic autoencoder, which reconstructs its own inputs through a neural network, our model is closer to Kernel Canonical Correlation Analysis (KCCA) and reconstructs input data from another data set, where the two data sets are assumed to have some, perhaps non-linear, dependence. Our model extends traditional KCCA in that the non-linearity of the data is learned by optimizing a kernel function with a neural network. In one of the novelties of this thesis, we do not optimize our kernel based on a prediction-error metric, as is classical in autoencoders. Rather, we optimize our kernel to maximize the "coherence" of the underlying low-dimensional hidden layers. This makes our method faithful to the classic interpretation of linear Canonical Correlation Analysis (CCA). As far as we are aware, our method, which we call a Kernel Coherence Encoder (KCE), is the only extant approach that uses the flexibility of a neural network while maintaining the theoretical properties of classic KCCA. In another novelty of our approach, we leverage a modified version of classic coherence that is far more stable in the presence of high-dimensional data, addressing computational and robustness issues in the implementation of coherence-based deep-learning KCCA.
8

Karlsson, Viktor, and Erik Rosvall. "Extreme Kernel Machine." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211566.

Abstract:
The purpose of this report is to examine the combination of an Extreme Learning Machine (ELM) with the kernel method. Kernels lie at the core of Support Vector Machines' success in classifying non-linearly separable datasets. The hypothesis is that by combining an ELM with a kernel, we will utilize features in the ELM space that are otherwise unused. The report is intended as a proof of concept for the idea of using kernel methods in an ELM setting. This is done by running the new algorithm against five image datasets for a classification-accuracy and time-complexity analysis. Results show that our extended ELM algorithm, which we have named the Extreme Kernel Machine (EKM), improves classification accuracy for some datasets compared to the regularized ELM, in the best scenarios by around three percentage points. We found that the choice of kernel type and parameter values had a great effect on classification performance. The kernel does, however, add computational complexity, but where that is not a concern, EKM has an advantage. This tradeoff might give EKM a place between other neural networks and regular ELMs.
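For orientation, the standard kernel-ELM formulation that such work builds on replaces the random hidden layer with a Gram matrix and solves one regularized linear system, β = (K + I/C)⁻¹ T. The sketch below follows that textbook formulation; the details of EKM itself may differ:

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

class KernelELM:
    def __init__(self, gamma=1.0, C=10.0):
        self.gamma, self.C = gamma, C

    def fit(self, X, T):                 # T: one-hot targets, shape (n, classes)
        self.X = X
        K = rbf_gram(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, Xnew):             # argmax over class scores
        return (rbf_gram(Xnew, self.X, self.gamma) @ self.beta).argmax(axis=1)

X, T = np.random.randn(100, 5), np.eye(3)[np.random.randint(0, 3, 100)]
print(KernelELM().fit(X, T).predict(X[:5]))
```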
9

Bhujwalla, Yusuf. "Nonlinear System Identification with Kernels : Applications of Derivatives in Reproducing Kernel Hilbert Spaces." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0315/document.

Abstract:
This thesis focuses exclusively on the application of kernel-based nonparametric methods to nonlinear identification problems. As for other nonlinear methods, two key questions in kernel-based identification are how to define a nonlinear model (kernel selection) and how to tune the complexity of the model (regularisation). The principal contribution of this thesis is the presentation and investigation of two optimisation criteria (one existing in the literature and one novel proposition) for structural approximation and complexity tuning in kernel-based nonlinear system identification. Both methods are based on the idea of incorporating feature-based complexity constraints into the optimisation criterion, by penalising derivatives of functions. Essentially, such methods offer the user flexibility in the definition of a kernel function and the choice of regularisation term, which opens new possibilities with respect to how nonlinear models can be estimated in practice. Both methods bear strong links with other methods from the literature, which are examined in detail in Chapters 2 and 3 and form the basis of the subsequent developments of the thesis. While analogies are made with parallel frameworks, the discussion is rooted in the framework of Reproducing Kernel Hilbert Spaces (RKHS). Using RKHS methods allows analysis of the presented methods from both a theoretical and a practical point of view. Furthermore, the methods developed are applied to several identification case studies, comprising both simulation and real-data examples, notably: structural detection in static nonlinear systems; controlling smoothness in LPV models; complexity tuning using structural penalties in NARX systems; and internet traffic modelling using kernel methods.
10

Karim, Khan Shahid. "Abstract Kernel Management Environment." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1806.

Abstract:

The Kerngen module in MATLAB can be used to optimize a filter with respect to an ideal filter, taking into consideration the weighting function and the spatial mask. Being able to perform these optimizations remotely from a standard web browser over a TCP/IP network connection would be of interest. This master's thesis covers the project of building such a system, along with an attempt to graphically display three-dimensional filters and to save the optimized filter in XML format. This includes defining an appropriate DTD for the representation of the filter. The result is a working system, with a server and client written in the programming language Pike.


Books on the topic "Kernel"

1

Sands, Krista, ed. Kernel. Windhoek, Namibia: Namibia Scientific Society, 1999.

2

Owhadi, Houman, Clint Scovel, and Gene Ryan Yoo. Kernel Mode Decomposition and the Programming of Kernels. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82171-5.

3

Wand, M. P., and M. C. Jones. Kernel Smoothing. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4899-4493-1.

4

Jones, M. C., ed. Kernel smoothing. London: Chapman & Hall, 1995.

5

Salzman, Peter Jay. The Linux Kernel Module Programming Guide; Kernel 2.6. London: SoHo Books, 2009.

6

Larkins, B., ed. Maize kernel development. Wallingford: CABI, 2017. http://dx.doi.org/10.1079/9781786391216.0000.

7

Hirukawa, Masayuki. Asymmetric Kernel Smoothing. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-5466-2.

8

Rosen, Rami. Linux Kernel Networking. Berkeley, CA: Apress, 2014. http://dx.doi.org/10.1007/978-1-4302-6197-1.

9

Love, Robert. Linux Kernel Development. Indianapolis, Ind: Sams, 2004.

10

Love, Robert. Linux Kernel Development. Upper Saddle River: Pearson Education, 2005.


Book chapters on the topic "Kernel"

1

Reinders, James, Ben Ashbaugh, James Brodman, Michael Kinsner, John Pennycook, and Xinmin Tian. "Defining Kernels." In Data Parallel C++, 241–58. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5574-2_10.

Abstract:
Thus far in this book, our code examples have represented kernels using C++ lambda expressions. Lambda expressions are a concise and convenient way to represent a kernel right where it is used, but they are not the only way to represent a kernel in SYCL. In this chapter, we will explore various ways to define kernels in detail, helping us to choose a kernel form that is most natural for our C++ coding needs.
2

Reinders, James, Ben Ashbaugh, James Brodman, Michael Kinsner, John Pennycook, and Xinmin Tian. "Programming for CPUs." In Data Parallel C++, 417–49. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9691-2_16.

Abstract:
As kernel programming is generalized, it is important to understand how the kernel programming style affects the mapping of our code to a CPU. Chapter 16 describes tips and techniques to keep in mind when we are writing and optimizing parallel kernels for a CPU.
3

Abe, Shigeo. "Kernel-Based Methods." In Support Vector Machines for Pattern Classification, 305–29. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84996-098-4_6.

4

Babar, Yogesh. "Kernel." In Hands-on Booting, 183–205. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5890-3_4.

5

Weik, Martin H. "kernel." In Computer Science and Communications Dictionary, 855. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_9765.

6

Gooch, Jan W. "Kernel." In Encyclopedic Dictionary of Polymers, 410. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-6247-8_6637.

7

Montesinos López, Osval Antonio, Abelardo Montesinos López, and Jose Crossa. "Reproducing Kernel Hilbert Spaces Regression and Classification Methods." In Multivariate Statistical Machine Learning Methods for Genomic Prediction, 251–336. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89010-0_8.

Abstract:
The fundamentals of Reproducing Kernel Hilbert Spaces (RKHS) regression methods are described in this chapter. We first point out the virtues of RKHS regression methods and why these methods are gaining a lot of acceptance in statistical machine learning. Key elements for the construction of RKHS regression methods are provided, the kernel trick is explained in some detail, and the main kernel functions for building kernels are provided. This chapter explains some loss functions under a fixed model framework with examples of Gaussian, binary, and categorical response variables. We illustrate the use of mixed models with kernels by providing examples for continuous response variables. Practical issues for tuning the kernels are illustrated. We expand the RKHS regression methods under a Bayesian framework with practical examples applied to continuous and categorical response variables and by including in the predictor the main effects of environments, genotypes, and the genotype × environment interaction. We show examples of multi-trait RKHS regression methods for continuous response variables. Finally, some practical issues of kernel compression methods are provided, which are important for reducing the computational cost of implementing conventional RKHS methods.
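The estimator at the core of such chapters can be stated compactly: by the representer theorem, the RKHS (kernel ridge) regression fit is f(x) = Σᵢ αᵢ k(xᵢ, x) with α = (K + nλI)⁻¹ y. A minimal sketch; the Gaussian kernel and parameter values below are arbitrary choices:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def fit_krr(X, y, lam=1e-2, gamma=0.5):
    n = len(y)
    alpha = np.linalg.solve(rbf(X, X, gamma) + n * lam * np.eye(n), y)
    return lambda Xnew: rbf(Xnew, X, gamma) @ alpha   # f(x) = sum_i alpha_i k(x_i, x)

X = np.linspace(0, 1, 40)[:, None]
y = np.sin(4 * X[:, 0]) + 0.1 * np.random.randn(40)
f = fit_krr(X, y)
print(f(np.array([[0.5]])))
```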
8

Wand, M. P., and M. C. Jones. "Kernel regression." In Kernel Smoothing, 114–45. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4899-4493-1_5.

9

Wand, M. P., and M. C. Jones. "Introduction." In Kernel Smoothing, 1–9. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4899-4493-1_1.

10

Wand, M. P., and M. C. Jones. "Univariate kernel density estimation." In Kernel Smoothing, 10–57. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4899-4493-1_2.

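The estimator this chapter treats is the classic univariate kernel density estimate, f̂(x) = (1/(nh)) Σᵢ K((x − Xᵢ)/h). A minimal Gaussian-kernel sketch (the bandwidth here is chosen arbitrarily, not by the plug-in rules the book develops):

```python
import numpy as np

def kde(grid, data, h):
    """Univariate KDE with the Gaussian kernel."""
    u = (grid[:, None] - data[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return K.mean(axis=1) / h            # (1/(n*h)) * sum_i K(u_i)

data = np.random.randn(200)
print(kde(np.linspace(-3, 3, 7), data, h=0.4))
```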

Conference papers on the topic "Kernel"

1

Zhang, Xiao, and Shizhong Liao. "Hypothesis Sketching for Online Kernel Selection in Continuous Kernel Space." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/346.

Abstract:
Online kernel selection in continuous kernel space is more complex than that in discrete kernel set. But existing online kernel selection approaches for continuous kernel spaces have linear computational complexities at each round with respect to the current number of rounds and lack sublinear regret guarantees due to the continuously many candidate kernels. To address these issues, we propose a novel hypothesis sketching approach to online kernel selection in continuous kernel space, which has constant computational complexities at each round and enjoys a sublinear regret bound. The main idea of the proposed hypothesis sketching approach is to maintain the orthogonality of the basis functions and the prediction accuracy of the hypothesis sketches in a time-varying reproducing kernel Hilbert space. We first present an efficient dependency condition to maintain the basis functions of the hypothesis sketches under a computational budget. Then we update the weights and the optimal kernels by minimizing the instantaneous loss of the hypothesis sketches using the online gradient descent with a compensation strategy. We prove that the proposed hypothesis sketching approach enjoys a regret bound of order O(√T) for online kernel selection in continuous kernel space, which is optimal for convex loss functions, where T is the number of rounds, and reduces the computational complexities at each round from linear to constant with respect to the number of rounds. Experimental results demonstrate that the proposed hypothesis sketching approach significantly improves the efficiency of online kernel selection in continuous kernel space while retaining comparable predictive accuracies.
2

Xue, Hui, Yu Song, and Hai-Ming Xu. "Multiple Indefinite Kernel Learning for Feature Selection." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/448.

Abstract:
Multiple kernel learning for feature selection (MKL-FS) utilizes kernels to explore complex properties of features and performs better in embedded methods. However, the kernels in MKL-FS are generally limited to be positive definite. In fact, indefinite kernels often emerge in actual applications and can achieve better empirical performance. But due to the non-convexity of indefinite kernels, existing MKL-FS methods are usually inapplicable, and the corresponding research is relatively scarce. In this paper, we propose a novel multiple indefinite kernel feature selection method (MIK-FS) based on the primal framework of the indefinite kernel support vector machine (IKSVM), which applies an indefinite base kernel for each feature and then exerts an l1-norm constraint on kernel combination coefficients to select features automatically. A two-stage algorithm is further presented to optimize the coefficients of IKSVM and the kernel combination alternately. In the algorithm, we reformulate the non-convex optimization problem of the primal IKSVM as a difference of convex functions (DC) programming and transform the non-convex problem into a convex one with the affine minorization approximation. Experiments on real-world datasets demonstrate that MIK-FS is superior to some related state-of-the-art methods in both feature selection and classification performance.
3

Zhang, Xiao, and Shizhong Liao. "Online Kernel Selection via Incremental Sketched Kernel Alignment." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/433.

Abstract:
In contrast to offline kernel selection, online kernel selection must rise to new challenges: passing over the training set once, selecting optimal kernels and updating hypotheses at each round, enjoying a sublinear regret bound for online kernel learning, and requiring a constant maintenance time complexity at each round along with an efficient overall time complexity integrated with online kernel learning. However, most existing online kernel selection approaches cannot meet these challenges. To address this issue, we propose a novel online kernel selection approach via the incremental sketched kernel alignment criterion, which meets all of the new challenges. We first define the incremental sketched kernel alignment (ISKA) criterion, which estimates the kernel alignment and can be computed incrementally and efficiently. When applying the proposed ISKA criterion to online kernel selection, we adopt the subclass coherence to maintain the hypothesis space, select the optimal kernel at each round using the median of the ISKA criterion estimates, and update the hypothesis following the online gradient descent method. We prove that the ISKA criterion is an unbiased estimate of the maximum mean discrepancy, enjoys the optimal logarithmic regret bound for online kernel learning, and has a constant maintenance time complexity at each round and a logarithmic overall time complexity integrated with online kernel learning. Empirical studies demonstrate that the proposed online kernel selection approach is computationally efficient while maintaining comparable accuracy for online kernel learning.
4

Wang, Rong, Jitao Lu, Yihang Lu, Feiping Nie, and Xuelong Li. "Discrete Multiple Kernel k-means." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/428.

Abstract:
The multiple kernel k-means (MKKM) algorithm and its variants utilize complementary information from different kernels, achieving better performance than kernel k-means (KKM). However, the optimization procedures of previous works all comprise two stages, learning the continuous relaxed label matrix and then obtaining the discrete one through extra discretization procedures. Such a two-stage strategy gives rise to a mismatch problem and severe information loss. To address this problem, we elaborate a novel Discrete Multiple Kernel k-means (DMKKM) model solved by an optimization algorithm that directly obtains the cluster indicator matrix without subsequent discretization procedures. Moreover, DMKKM can strictly measure the correlations among kernels, which is capable of enhancing kernel fusion by reducing redundancy and improving diversity. Furthermore, DMKKM is parameter-free, avoiding intractable hyperparameter tuning, which makes it feasible in practical applications. Extensive experiments illustrate the effectiveness and superiority of the proposed model.
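Any kernel k-means, single- or multiple-kernel, rests on the kernel-trick distance d²(i, C) = K_ii − (2/|C|) Σ_{j∈C} K_ij + (1/|C|²) Σ_{j,l∈C} K_jl. The sketch below is plain Lloyd-style kernel k-means on a precomputed Gram matrix; DMKKM's discrete optimization and kernel-correlation terms are not reproduced. A multiple-kernel variant would first set K = Σ_m w_m K_m:

```python
import numpy as np

def kernel_kmeans(K, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(k, size=n)
    for _ in range(iters):
        D = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size:                       # kernel-trick distance to mean of c
                D[:, c] = (np.diag(K)
                           - 2 * K[:, idx].mean(axis=1)
                           + K[np.ix_(idx, idx)].mean())
        new = D.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```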
5

Gallego-Mejia, Joseph, and Fabio Gonzalez. "Robust Estimation in Reproducing Kernel Hilbert Space." In LatinX in AI at Neural Information Processing Systems Conference 2019. Journal of LatinX in AI Research, 2019. http://dx.doi.org/10.52591/lxai2019120829.

Abstract:
Our work shows that estimating the mean in a feature space induced by certain kinds of kernels is the same as performing robust mean estimation with an M-estimator in the original problem space. In particular, we show that calculating the average in a feature space induced by a Gaussian kernel is equivalent to performing robust mean estimation with the Welsch M-estimator. In addition, a new framework is proposed and used to build four new robust kernels: Tukey's, Andrews', Huber's, and Cauchy's robust kernels. The new robust kernels, combined with a kernel matrix factorization clustering algorithm, were compared to state-of-the-art algorithms in clustering tasks. The results show that some of the new robust kernels perform on a par with state-of-the-art algorithms.
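The Welsch-Gaussian link in the abstract has a concrete computational face: the pre-image of the feature-space mean under a Gaussian kernel is the fixed point of an iteratively reweighted average in input space. A sketch (the Cauchy form below is the standard Cauchy kernel; the Tukey, Andrews, and Huber constructions from the paper are omitted):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):    # corresponds to the Welsch M-estimator
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def cauchy_kernel(x, y, sigma=1.0):      # corresponds to the Cauchy M-estimator
    return 1.0 / (1.0 + np.sum((x - y) ** 2) / sigma ** 2)

def robust_mean(X, sigma=1.0, iters=20):
    mu = X.mean(axis=0)
    for _ in range(iters):               # iteratively reweighted averaging
        w = np.array([gaussian_kernel(x, mu, sigma) for x in X])
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
    return mu

X = np.vstack([np.random.randn(50, 2), np.array([[25.0, 25.0]])])  # one outlier
print(robust_mean(X))                    # barely moved by the outlier
```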
6

Guevara, Jorge, Roberto Hirata, and Stephane Canu. "Support Fuzzy-Set Machines: From Kernels on Fuzzy Sets to Machine Learning Applications." In LatinX in AI at Neural Information Processing Systems Conference 2018. Journal of LatinX in AI Research, 2018. http://dx.doi.org/10.52591/lxai2018120324.

Abstract:
This work introduces a new class of kernel machines: support fuzzy-set machines. These machines can be used to solve machine learning tasks, such as classification, regression, or clustering, on data with point-wise uncertainty. We advocate the use of fuzzy sets for modeling the uncertainty around the vicinity of observations, and for incorporating those uncertainties into the learning machine. Support fuzzy-set machines are defined by kernel Gram (covariance) matrices given by kernels on fuzzy sets, which are a special kind of real-valued (kernel) function, k : X × X → R, whose domain X is the set of all fuzzy sets. Under fuzzy-set modeling, such kernels can be used to estimate covariance matrices for observations with point-wise uncertainty and to estimate similarity measures for that kind of data. Previous research has in fact shown improved performance in learning machines when the information given by the neighborhood around observations is considered; see, for example, local learning [Bottou and Vapnik, 1992], vicinal kernels [Vapnik, 1995], vicinal risk minimization [Chapelle et al., 2001], and the RBF network. More recent approaches consider kernel machines defined by kernels on probability measures [Muandet et al., 2012]. Thanks to the representer theorem of kernel methods, support fuzzy-set machines learn f = Σ_i α_i k(X_i, ·) in a high-dimensional Hilbert space called a reproducing kernel Hilbert space, where the support fuzzy sets are those fuzzy sets whose corresponding α_i > 0. Several positive definite kernels on fuzzy sets can be used for training the proposed kernel machines, for example: the cross product kernel on fuzzy sets [Guevara et al., 2017], the intersection kernel on fuzzy sets [Guevara et al., 2014], the non-singleton kernel on fuzzy sets [Guevara et al., 2013], and distance-based kernels on fuzzy sets [Guevara et al., 2015].
7

Huusari, Riikka, and Hachem Kadri. "Entangled Kernels." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/358.

Abstract:
We consider the problem of operator-valued kernel learning and investigate the possibility of going beyond the well-known separable kernels. Borrowing tools and concepts from the field of quantum computing, such as partial trace and entanglement, we propose a new view on operator-valued kernels and define a general family of kernels that encompasses previously known operator-valued kernels, including separable and transformable kernels. Within this framework, we introduce another novel class of operator-valued kernels called entangled kernels that are not separable. We propose an efficient two-step algorithm for this framework, where the entangled kernel is learned based on a novel extension of kernel alignment to operator-valued kernels. The utility of the algorithm is illustrated on both artificial and real data.
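For orientation, the separable baseline that entangled kernels generalize factors as K(x, y) = k(x, y)·T, a scalar kernel times a fixed PSD matrix coupling the outputs; entangled kernels are precisely operator-valued kernels that do not factor this way. A minimal sketch with an assumed coupling matrix:

```python
import numpy as np

def scalar_rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def separable_ovk(x, y, T, gamma=1.0):
    """Separable operator-valued kernel: K(x, y) = k(x, y) * T."""
    return scalar_rbf(x, y, gamma) * T

T = np.array([[1.0, 0.3],
              [0.3, 1.0]])               # PSD output-coupling matrix (assumed)
print(separable_ovk(np.zeros(3), np.ones(3), T))
```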
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Yueqing, Xinwang Liu, Yong Dou, and Rongchun Li. "Multiple Kernel Clustering Framework with Improved Kernels." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/418.

Full text
Abstract:
Multiple kernel clustering (MKC) algorithms have been successfully applied in various applications. However, these successes depend largely on the quality of the pre-defined base kernels, which cannot be guaranteed in practice; this may adversely affect clustering performance. To address this issue, we propose a simple yet effective framework to adaptively improve the quality of these base kernels. Under our framework, we instantiate three MKC algorithms based on the widely used multiple kernel $k$-means clustering (MKKM), MKKM with matrix-induced regularization (MKKM-MR), and co-regularized multi-view spectral clustering (CRSC). We then design the corresponding algorithms, with proven convergence, to solve the resultant optimization problems. To the best of our knowledge, our framework fills the gap between kernel adaptation and the clustering procedure for the first time in the literature and is readily extendable. Extensive experiments have been conducted on 7 MKC benchmarks. As shown, our algorithms consistently and significantly improve the performance of the base MKC algorithms, indicating the effectiveness of the proposed framework. Meanwhile, our framework outperforms the compared methods in the presence of imperfect kernels.
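For context, here is a hedged sketch of the standard MKKM inner loop that this framework builds on, not the proposed kernel-improvement step itself. It assumes the usual quadratic combination of base kernels with the closed-form weight update mu_m ∝ 1 / tr(K_m (I − H Hᵀ)); all names are illustrative.

```python
import numpy as np

def mkkm_step(kernels, H):
    """One alternating step of multiple kernel k-means (MKKM-style).

    Given base Gram matrices and the current relaxed partition H
    (top-k eigenvectors, orthonormal columns), update the kernel
    weights in closed form, then refresh H from the combined kernel.
    """
    n = H.shape[0]
    P = np.eye(n) - H @ H.T                     # residual projector
    cost = np.array([np.trace(K @ P) for K in kernels])
    mu = (1.0 / cost) / np.sum(1.0 / cost)      # closed-form weights
    K_comb = sum(m ** 2 * K for m, K in zip(mu, kernels))
    vals, vecs = np.linalg.eigh(K_comb)
    return mu, vecs[:, -H.shape[1]:]            # top-k eigenvectors

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
sq = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
kernels = [np.exp(-g * sq) for g in (0.1, 1.0, 10.0)]  # three base kernels
H = np.linalg.eigh(sum(kernels))[1][:, -2:]            # init: uniform sum
for _ in range(5):
    mu, H = mkkm_step(kernels, H)
print(mu)
```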
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Zhi-Xuan, Cheng Jin, Tian-Jing Zhang, Xiao Wu, and Liang-Jian Deng. "SpanConv: A New Convolution via Spanning Kernel Space for Lightweight Pansharpening." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/118.

Full text
Abstract:
Standard convolution operations can effectively perform feature extraction and representation, but they incur a high computational cost, largely because the original convolution kernel is generated to match the channel dimension of the feature map, which causes unnecessary redundancy. In this paper, we focus on kernel generation and present an interpretable span strategy, named SpanConv, for the effective construction of the kernel space. Specifically, we first learn two single-channel navigated kernels as bases, then extend the two kernels by learnable coefficients, and finally span the two sets of kernels by their linear combination to construct the so-called SpanKernel. The proposed SpanConv is realized by replacing the plain convolution kernel with the SpanKernel. To verify the effectiveness of SpanConv, we design a simple network with SpanConv. Experiments demonstrate that the proposed network significantly reduces the number of parameters compared with benchmark networks for remote sensing pansharpening, while achieving competitive performance and excellent generalization. Code is available at https://github.com/zhi-xuan-chen/IJCAI-2022 SpanConv.
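The span construction itself is easy to illustrate. The sketch below is a hedged, framework-free rendering of the SpanKernel idea as described in the abstract, not the authors' code: two single-channel base kernels are extended across channels by learnable coefficient tensors, so roughly kh·kw·2 + 2·c_out·c_in parameters replace the c_out·c_in·kh·kw of a plain convolution kernel. All names are illustrative.

```python
import numpy as np

def span_kernel(k1, k2, alpha, beta):
    """Build a SpanKernel-style weight tensor by spanning two bases.

    k1, k2: single-channel base kernels, shape (kh, kw).
    alpha, beta: learnable coefficients, shape (c_out, c_in).
    Every (out, in) slice of the result is a linear combination of
    k1 and k2, i.e. a point in the 2-D space they span.
    """
    return alpha[..., None, None] * k1 + beta[..., None, None] * k2

rng = np.random.default_rng(0)
k1, k2 = rng.normal(size=(2, 3, 3))   # two 3x3 single-channel bases
alpha = rng.normal(size=(8, 4))       # c_out=8, c_in=4 coefficients
beta = rng.normal(size=(8, 4))
W = span_kernel(k1, k2, alpha, beta)
print(W.shape)  # (8, 4, 3, 3) -- drop-in shape for a plain conv kernel
```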
APA, Harvard, Vancouver, ISO, and other styles
10

Kriege, Nils M., Christopher Morris, Anja Rey, and Christian Sohler. "A Property Testing Framework for the Theoretical Expressivity of Graph Kernels." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/325.

Full text
Abstract:
Graph kernels are applied heavily for the classification of structured data. However, their expressivity is assessed almost exclusively through experimental studies, and there is no theoretical justification for why one kernel is in general preferable to another. We introduce a theoretical framework for investigating the expressive power of graph kernels, inspired by concepts from the area of property testing. We introduce the notion of distinguishability of a graph property by a graph kernel. For several established graph kernels we show that they cannot distinguish essential graph properties. To overcome this, we consider a kernel based on k-disc frequencies. We show that this efficiently computable kernel can distinguish fundamental graph properties. Finally, we obtain learning guarantees for nearest-neighbor classifiers in our framework.
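To make the k-disc frequency idea tangible: the k-disc of a vertex is the subgraph induced by its radius-k neighborhood, and the kernel compares how often each disc type occurs. The sketch below is a practical approximation, bucketing discs by their Weisfeiler-Lehman graph hash rather than by exact isomorphism type, so it is a proxy for the paper's construction, not the paper's kernel.

```python
import networkx as nx
from collections import Counter

def k_disc_frequency(G, k=1):
    """Frequency distribution of (approximate) k-disc types in G.

    Each vertex's radius-k ego graph is bucketed by its
    Weisfeiler-Lehman graph hash -- a practical, if inexact,
    stand-in for isomorphism-type counting.
    """
    counts = Counter(
        nx.weisfeiler_lehman_graph_hash(nx.ego_graph(G, v, radius=k))
        for v in G
    )
    n = G.number_of_nodes()
    return {h: c / n for h, c in counts.items()}

def k_disc_kernel(G1, G2, k=1):
    """Inner product of the two k-disc frequency vectors."""
    f1, f2 = k_disc_frequency(G1, k), k_disc_frequency(G2, k)
    return sum(f1[h] * f2.get(h, 0.0) for h in f1)

# A cycle and a path differ in their 1-disc profiles (path endpoints).
print(k_disc_kernel(nx.cycle_graph(10), nx.path_graph(10), k=1))
```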
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Kernel"

1

Bamberger, Judy, Currie Colket, Robert Firth, Daniel Klein, and Roger Van Scoy. Kernel Facilities Definition, Kernel Version 3.0. Fort Belvoir, VA: Defense Technical Information Center, December 1989. http://dx.doi.org/10.21236/ada228027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bamberger, Judy, Currie Colket, Robert Firth, Daniel Klein, and Roger Van Scoy. Kernel Facilities Definition. Fort Belvoir, VA: Defense Technical Information Center, July 1988. http://dx.doi.org/10.21236/ada532236.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Smith, Richard J., and Paulo Parente. Kernel block bootstrap. The IFS, July 2018. http://dx.doi.org/10.1920/wp.cem.2018.4818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bamberger, Judy, Timothy Coddington, Currie Colket, Robert Firth, and Daniel Klein. Kernel Architecture Manual. Fort Belvoir, VA: Defense Technical Information Center, December 1989. http://dx.doi.org/10.21236/ada219295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Marchette, David J., Carey E. Priebe, George W. Rogers, and Jeffrey L. Solka. Filtered Kernel Density Estimation. Fort Belvoir, VA: Defense Technical Information Center, October 1994. http://dx.doi.org/10.21236/ada288293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Marchette, David J., Carey E. Priebe, George W. Rogers, and Jefferey L. Solka. Filtered Kernel Density Estimation. Fort Belvoir, VA: Defense Technical Information Center, October 1994. http://dx.doi.org/10.21236/ada290438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bamberger, Judy, Timothy Coddington, Robert Firth, Daniel Klein, and Dave Stinchcomb. DARK (Distributed Ada Real-Time Kernel) Porting and Extension Guide Kernel Version 3.0. Fort Belvoir, VA: Defense Technical Information Center, December 1989. http://dx.doi.org/10.21236/ada219291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

MARTIN, SHAWN B. Kernel Near Principal Component Analysis. Office of Scientific and Technical Information (OSTI), July 2002. http://dx.doi.org/10.2172/810934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bamberger, Judy, Tim Coddington, Robert Firth, Daniel Klein, and David Stinchcomb. Kernel User's Manual Version 1.0. Fort Belvoir, VA: Defense Technical Information Center, February 1989. http://dx.doi.org/10.21236/ada207414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bamberger, Judy, Currie Colket, Robert Firth, Daniel Klein, and Roger Van Scoy. Distributed Ada Real-Time Kernel. Fort Belvoir, VA: Defense Technical Information Center, August 1988. http://dx.doi.org/10.21236/ada199482.

Full text
APA, Harvard, Vancouver, ISO, and other styles