Dissertations / Theses on the topic 'Kernel'

To see the other types of publications on this topic, follow the link: Kernel.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Kernel.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Shanmugam, Bala Priyadarshini. "Investigation of kernels for the reproducing kernel particle method." Birmingham, Ala. : University of Alabama at Birmingham, 2009. https://www.mhsl.uab.edu/dt/2009m/shanmugam.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Walls, Jacob. "Kernel." Thesis, University of Oregon, 2015. http://hdl.handle.net/1794/19203.

Full text
Abstract:
Kernel is a fifteen-minute work for wind ensemble. Its unifying strands of rhythm, melody, and harmony are spun out of simple four-note tone clusters which undergo changes in contour, intervallic inversion, register, texture, and harmonic environment. These four notes make up the "kernel" of the work, a word used by Breton to refer to the indestructible element of darkness prior to all creative invention, as well as a term used in computer science to refer to the crucial element of a system that, if it should fail, does so loudly.
APA, Harvard, Vancouver, ISO, and other styles
3

George, Sharath. "Usermode kernel : running the kernel in userspace in VM environments." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2858.

Full text
Abstract:
In many instances of virtual machine deployments today, virtual machine instances are created to support a single application. Traditional operating systems provide an extensive framework for protecting one process from another. In such deployments, this protection layer becomes an additional source of overhead, as isolation between services is provided at the operating system level and each instance of an operating system supports only one service. This makes the operating system the equivalent of a process from the traditional operating system perspective. Isolation between these operating systems, and indirectly the services they support, is ensured by the virtual machine monitor in these deployments. In these scenarios the process protection provided by the operating system becomes redundant and a source of additional overhead. We propose a new model for these scenarios, with operating systems that bypass this redundant protection offered by traditional operating systems. We prototyped such an operating system by executing parts of the operating system in the same protection ring as user applications. This gives processes more power and access to kernel memory, bypassing the need to copy data from user to kernel space and vice versa as is required when the traditional ring protection layer is enforced. This allows us to save the system-call trap overhead and allows application programmers to directly call kernel functions, exposing the rich kernel library. This does not compromise security of the other virtual machines running on the same physical machine, as they are protected by the VMM. We illustrate the design and implementation of such a system with the Xen hypervisor and the XenoLinux kernel.
APA, Harvard, Vancouver, ISO, and other styles
4

Guo, Lisong. "Boost the Reliability of the Linux Kernel : Debugging kernel oopses." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066378/document.

Full text
Abstract:
Lorsqu'une erreur survient dans le noyau Linux, celui-ci émet un rapport d’erreur appelé "kernel oops" contenant le contexte d’exécution de cette erreur. Les kernel oops décrivent des erreurs réelles de Linux, permettent de classer les efforts de débogage par ordre de priorité et de motiver la conception d’outils permettant d'améliorer la fiabilité du code de Linux. Néanmoins, les informations contenues dans un kernel oops n’ont de sens que si elles sont représentatives et qu'elles peuvent être interprétées correctement. Dans cette thèse, nous étudions une collection de kernel oops provenant d'un dépôt maintenu par Red Hat sur une période de huit mois. Nous considérons l’ensemble des caractéristiques de ces données, dans quelle mesure ces données reflètent d’autres informations à propos de Linux et l’interprétation des caractéristiques pouvant être pertinentes pour la fiabilité de Linux. Nous constatons que ces données sont bien corrélées à d’autres informations à propos de Linux, cependant, elles souffrent parfois de problèmes de duplication et de manque d’informations. Nous identifions également quelques pièges potentiels lors de l'étude des fonctionnalités, telles que les causes d'erreurs fréquentes et les causes d'applications défaillant fréquemment. En outre, un kernel oops fournit des informations précieuses et de première main pour un mainteneur du noyau Linux lui permettant d'effectuer le débogage post-mortem car il enregistre l’état du noyau Linux au moment du crash. Cependant, le débogage sur la seule base des informations contenues dans un kernel oops est difficile. Pour aider les développeurs avec le débogage, nous avons conçu une solution afin d'obtenir la ligne fautive à partir d’un kernel oops, i.e., la ligne du code source qui provoque l'erreur. Pour cela, nous proposons un nouvel algorithme basé sur la correspondance de séquences approximative utilisé dans le domaine de bioinformatique. Cet algorithme permet de localiser automatiquement la ligne fautive en se basant sur le code machine à proximité de celle-ci et inclus dans un kernel oops. Notre algorithme atteint 92% de précision comparé à 26 % pour l’approche traditionnelle utilisant le débogueur gdb. Nous avons intégré notre solution dans un outil nommé OOPSA qui peut ainsi alléger le fardeau pour les développeurs lors du débogage de kernel oops
When a failure occurs in the Linux kernel, the kernel emits an error report called “kernel oops”, summarizing the execution context of the failure. Kernel oopses describe real Linux errors, and thus can help prioritize debugging efforts and motivate the design of tools to improve the reliability of Linux code. Nevertheless, the information is only meaningful if it is representative and can be interpreted correctly. In this thesis, we study a collection of kernel oopses over a period of 8 months from a repository that is maintained by Red Hat. We consider the overall features of the data, the degree to which the data reflects other information about Linux, and the interpretation of features that may be relevant to reliability. We find that the data correlates well with other information about Linux, but that it suffers from duplicate and missing information. We furthermore identify some potential pitfalls in studying features such as the sources of common faults and common failing applications. Furthermore, a kernel oops provides valuable first-hand information for a Linux kernel maintainer to conduct postmortem debugging, since it logs the status of the Linux kernel at the time of a crash. However, debugging based on only the information in a kernel oops is difficult. To help developers with debugging, we devised a solution to derive the offending line from a kernel oops, i.e., the line of source code that incurs the crash. For this, we propose a novel algorithm based on approximate sequence matching, as used in bioinformatics, to automatically pinpoint the offending line based on information about nearby machine-code instructions, as found in a kernel oops. Our algorithm achieves 92% accuracy compared to 26% for the traditional approach of using only the oops instruction pointer. We integrated the solution into a tool named OOPSA, which can relieve some of the burden on developers when debugging kernel oopses.
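For readers unfamiliar with the idea, the following toy sketch (not the OOPSA implementation; the function names and byte strings are invented for illustration) shows how an approximate match of the machine-code snippet embedded in an oops against a compiled function's code bytes can recover the offset of the trapping instruction, from which a source line can then be looked up in the debug line table.

```python
import difflib

def locate(oops_bytes, function_bytes):
    # Approximate sequence matching: find the longest common run of bytes and
    # use its alignment to estimate where the oops snippet sits in the function.
    m = difflib.SequenceMatcher(None, function_bytes, oops_bytes, autojunk=False)
    i, j, size = m.find_longest_match(0, len(function_bytes), 0, len(oops_bytes))
    return i - j            # estimated offset of the snippet inside the function

# Hypothetical byte strings: the oops snippet differs from the on-disk code by a
# relocated call target, so an exact search would fail but an approximate one works.
func = bytes.fromhex("554889e54883ec10c745fc00000000e805000000")
oops = bytes.fromhex("4883ec10c745fc00000000e8ff")
print(locate(oops, func))   # 4: the snippet starts 4 bytes into the function
```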
APA, Harvard, Vancouver, ISO, and other styles
5

Guo, Lisong. "Boost the Reliability of the Linux Kernel : Debugging kernel oopses." Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066378.

Full text
Abstract:
Lorsqu'une erreur survient dans le noyau Linux, celui-ci émet un rapport d’erreur appelé "kernel oops" contenant le contexte d’exécution de cette erreur. Les kernel oops décrivent des erreurs réelles de Linux, permettent de classer les efforts de débogage par ordre de priorité et de motiver la conception d’outils permettant d'améliorer la fiabilité du code de Linux. Néanmoins, les informations contenues dans un kernel oops n’ont de sens que si elles sont représentatives et qu'elles peuvent être interprétées correctement. Dans cette thèse, nous étudions une collection de kernel oops provenant d'un dépôt maintenu par Red Hat sur une période de huit mois. Nous considérons l’ensemble des caractéristiques de ces données, dans quelle mesure ces données reflètent d’autres informations à propos de Linux et l’interprétation des caractéristiques pouvant être pertinentes pour la fiabilité de Linux. Nous constatons que ces données sont bien corrélées à d’autres informations à propos de Linux, cependant, elles souffrent parfois de problèmes de duplication et de manque d’informations. Nous identifions également quelques pièges potentiels lors de l'étude des fonctionnalités, telles que les causes d'erreurs fréquentes et les causes d'applications défaillant fréquemment. En outre, un kernel oops fournit des informations précieuses et de première main pour un mainteneur du noyau Linux lui permettant d'effectuer le débogage post-mortem car il enregistre l’état du noyau Linux au moment du crash. Cependant, le débogage sur la seule base des informations contenues dans un kernel oops est difficile. Pour aider les développeurs avec le débogage, nous avons conçu une solution afin d'obtenir la ligne fautive à partir d’un kernel oops, i.e., la ligne du code source qui provoque l'erreur. Pour cela, nous proposons un nouvel algorithme basé sur la correspondance de séquences approximative utilisé dans le domaine de bioinformatique. Cet algorithme permet de localiser automatiquement la ligne fautive en se basant sur le code machine à proximité de celle-ci et inclus dans un kernel oops. Notre algorithme atteint 92% de précision comparé à 26 % pour l’approche traditionnelle utilisant le débogueur gdb. Nous avons intégré notre solution dans un outil nommé OOPSA qui peut ainsi alléger le fardeau pour les développeurs lors du débogage de kernel oops
When a failure occurs in the Linux kernel, the kernel emits an error report called “kernel oops”, summarizing the execution context of the failure. Kernel oopses describe real Linux errors, and thus can help prioritize debugging efforts and motivate the design of tools to improve the reliability of Linux code. Nevertheless, the information is only meaningful if it is representative and can be interpreted correctly. In this thesis, we study a collection of kernel oopses over a period of 8 months from a repository that is maintained by Red Hat. We consider the overall features of the data, the degree to which the data reflects other information about Linux, and the interpretation of features that may be relevant to reliability. We find that the data correlates well with other information about Linux, but that it suffers from duplicate and missing information. We furthermore identify some potential pitfalls in studying features such as the sources of common faults and common failing applications. Furthermore, a kernel oops provides valuable first-hand information for a Linux kernel maintainer to conduct postmortem debugging, since it logs the status of the Linux kernel at the time of a crash. However, debugging based on only the information in a kernel oops is difficult. To help developers with debugging, we devised a solution to derive the offending line from a kernel oops, i.e., the line of source code that incurs the crash. For this, we propose a novel algorithm based on approximate sequence matching, as used in bioinformatics, to automatically pinpoint the offending line based on information about nearby machine-code instructions, as found in a kernel oops. Our algorithm achieves 92% accuracy compared to 26% for the traditional approach of using only the oops instruction pointer. We integrated the solution into a tool named OOPSA, which can relieve some of the burden on developers when debugging kernel oopses.
APA, Harvard, Vancouver, ISO, and other styles
6

Mika, Sebastian. "Kernel Fisher discriminants." [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=967125413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sun, Fangzheng. "Kernel Coherence Encoders." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/252.

Full text
Abstract:
In this thesis, we introduce a novel model based on the idea of autoencoders. Different from a classic autoencoder, which reconstructs its own inputs through a neural network, our model is closer to Kernel Canonical Correlation Analysis (KCCA) and reconstructs input data from another data set, where these two data sets should have some, perhaps non-linear, dependence. Our model extends traditional KCCA in that the non-linearity of the data is learned by optimizing a kernel function with a neural network. In one of the novelties of this thesis, we do not optimize our kernel based on some prediction-error metric, as is classical for autoencoders. Rather, we optimize our kernel to maximize the "coherence" of the underlying low-dimensional hidden layers. This idea makes our method faithful to the classic interpretation of linear Canonical Correlation Analysis (CCA). As far as we are aware, our method, which we call a Kernel Coherence Encoder (KCE), is the only extant approach that uses the flexibility of a neural network while maintaining the theoretical properties of classic KCCA. In another one of the novelties of our approach, we leverage a modified version of classic coherence, which is far more stable in the presence of high-dimensional data, to address computational and robustness issues in the implementation of a coherence-based deep learning KCCA.
APA, Harvard, Vancouver, ISO, and other styles
8

Karlsson, Viktor, and Erik Rosvall. "Extreme Kernel Machine." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211566.

Full text
Abstract:
The purpose of this report is to examine the combination of an Extreme Learning Machine (ELM) with the kernel method. Kernels lie at the core of the success of Support Vector Machines in classifying non-linearly separable datasets. The hypothesis is that by combining an ELM with a kernel we will utilize features in the ELM space that are otherwise unused. The report is intended as a proof of concept for the idea of using kernel methods in an ELM setting. This is done by running the new algorithm against five image datasets for a classification-accuracy and time-complexity analysis. Results show that our extended ELM algorithm, which we have named the Extreme Kernel Machine (EKM), improves classification accuracy for some datasets compared to the regularised ELM, in the best scenarios by around three percentage points. We found that the choice of kernel type and parameter values had a great effect on classification performance. The implementation of the kernel does, however, add computational complexity, but where that is not a concern EKM does have an advantage. This trade-off might give EKM a place between other neural networks and regular ELMs.
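As a rough illustration of the idea (a sketch only; the EKM of the thesis, its kernel choices and its parameters may differ), one can map inputs through a random ELM hidden layer and then apply an RBF kernel in that hidden space, solving a ridge-regularised system for the output weights:

```python
import numpy as np

def elm_features(X, n_hidden=200, seed=0):
    # Random ELM hidden layer: fixed random weights and biases, tanh activation.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(X @ W + b)

def rbf(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train(X, Y_onehot, C=10.0):
    # Kernel applied in the ELM space; output weights from a ridge-regularised solve.
    H = elm_features(X)
    alpha = np.linalg.solve(rbf(H, H) + np.eye(len(X)) / C, Y_onehot)
    return H, alpha

def classify(H_train, alpha, X_new):
    return (rbf(elm_features(X_new), H_train) @ alpha).argmax(axis=1)

# Usage (hypothetical data): H, a = train(X_tr, np.eye(k)[y_tr]); y_pred = classify(H, a, X_te)
```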
APA, Harvard, Vancouver, ISO, and other styles
9

Bhujwalla, Yusuf. "Nonlinear System Identification with Kernels : Applications of Derivatives in Reproducing Kernel Hilbert Spaces." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0315/document.

Full text
Abstract:
Cette thèse se concentrera exclusivement sur l’application de méthodes non paramétriques basées sur le noyau à des problèmes d’identification non-linéaires. Comme pour les autres méthodes non-linéaires, deux questions clés dans l’identification basée sur le noyau sont les questions de comment définir un modèle non-linéaire (sélection du noyau) et comment ajuster la complexité du modèle (régularisation). La contribution principale de cette thèse est la présentation et l’étude de deux critères d’optimisation (un existant dans la littérature et une nouvelle proposition) pour l’approximation structurale et l’accord de complexité dans l’identification de systèmes non-linéaires basés sur le noyau. Les deux méthodes sont basées sur l’idée d’intégrer des contraintes de complexité basées sur des caractéristiques dans le critère d’optimisation, en pénalisant les dérivées de fonctions. Essentiellement, de telles méthodes offrent à l’utilisateur une certaine souplesse dans la définition d’une fonction noyau et dans le choix du terme de régularisation, ce qui ouvre de nouvelles possibilités quant à la facon dont les modèles non-linéaires peuvent être estimés dans la pratique. Les deux méthodes ont des liens étroits avec d’autres méthodes de la littérature, qui seront examinées en détail dans les chapitres 2 et 3 et formeront la base des développements ultérieurs de la thèse. Alors que l’analogie sera faite avec des cadres parallèles, la discussion sera ancrée dans le cadre de Reproducing Kernel Hilbert Spaces (RKHS). L’utilisation des méthodes RKHS permettra d’analyser les méthodes présentées d’un point de vue à la fois théorique et pratique. De plus, les méthodes développées seront appliquées à plusieurs «études de cas» d’identification, comprenant à la fois des exemples de simulation et de données réelles, notamment : • Détection structurelle dans les systèmes statiques non-linéaires. • Contrôle de la fluidité dans les modèles LPV. • Ajustement de la complexité à l’aide de pénalités structurelles dans les systèmes NARX. • Modelisation de trafic internet par l’utilisation des méthodes à noyau
This thesis will focus exclusively on the application of kernel-based nonparametric methods to nonlinear identification problems. As for other nonlinear methods, two key questions in kernel-based identification are the questions of how to define a nonlinear model (kernel selection) and how to tune the complexity of the model (regularisation). The following chapter will discuss how these questions are usually dealt with in the literature. The principal contribution of this thesis is the presentation and investigation of two optimisation criteria (one existing in the literature and one novel proposition) for structural approximation and complexity tuning in kernel-based nonlinear system identification. Both methods are based on the idea of incorporating feature-based complexity constraints into the optimisation criterion, by penalising derivatives of functions. Essentially, such methods offer the user flexibility in the definition of a kernel function and the choice of regularisation term, which opens new possibilities with respect to how nonlinear models can be estimated in practice. Both methods bear strong links with other methods from the literature, which will be examined in detail in Chapters 2 and 3 and will form the basis of the subsequent developments of the thesis. Whilst analogy will be made with parallel frameworks, the discussion will be rooted in the framework of Reproducing Kernel Hilbert Spaces (RKHS). Using RKHS methods will allow analysis of the methods presented from both a theoretical and a practical point of view. Furthermore, the methods developed will be applied to several identification ‘case studies’, comprising both simulation and real-data examples, notably: • Structural detection in static nonlinear systems. • Controlling smoothness in LPV models. • Complexity tuning using structural penalties in NARX systems. • Internet traffic modelling using kernel methods
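To make the idea concrete, one plausible form of such a derivative-penalised criterion (a sketch only; the exact functionals and norms used in the thesis may differ) is:

```latex
\hat{f} \;=\; \arg\min_{f \in \mathcal{H}}\;
  \sum_{t=1}^{N} \bigl(y_t - f(x_t)\bigr)^2
  \;+\; \lambda \,\|f\|_{\mathcal{H}}^{2}
  \;+\; \mu \,\Bigl\|\tfrac{\partial f}{\partial x}\Bigr\|_{2}^{2},
```

where the RKHS-norm term plays the usual regularisation role and the derivative term penalises feature-based complexity such as non-smoothness of the estimated function.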
APA, Harvard, Vancouver, ISO, and other styles
10

Karim, Khan Shahid. "Abstract Kernel Management Environment." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1806.

Full text
Abstract:

The Kerngen module in MATLAB can be used to optimize a filter with regard to an ideal filter, while taking into consideration the weighting function and the spatial mask. Being able to perform these optimizations remotely from a standard web browser over a TCP/IP network connection would be of interest. This master's thesis covers the development of such a system, along with an attempt to graphically display three-dimensional filters and to save the optimized filter in XML format. It includes defining an appropriate DTD for the representation of the filter. The result is a working system, with a server and client written in the programming language Pike.

APA, Harvard, Vancouver, ISO, and other styles
11

Jin, Bo. "Evolutionary Granular Kernel Machines." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/15.

Full text
Abstract:
Kernel machines such as Support Vector Machines (SVMs) have been widely used in various data mining applications with good generalization properties. The performance of SVMs for solving nonlinear problems is highly affected by kernel functions. The complexity of SVM training is mainly related to the size of the training dataset. How to design a powerful kernel, how to speed up SVM training and how to train SVMs with millions of examples are still challenging problems in SVM research. For these important problems, powerful and flexible kernel trees called Evolutionary Granular Kernel Trees (EGKTs) are designed to incorporate prior domain knowledge. A Granular Kernel Tree Structure Evolving System (GKTSES) is developed to evolve the structures of Granular Kernel Trees (GKTs) without prior knowledge. A voting scheme is also proposed to reduce the prediction deviation of GKTSES. To speed up EGKTs optimization, a master-slave parallel model is implemented. To help SVMs tackle large-scale data mining, a Minimum Enclosing Ball (MEB) based data reduction method is presented, and a new MEB-SVM algorithm is designed. All these kernel methods are designed based on Granular Computing (GrC). In general, Evolutionary Granular Kernel Machines (EGKMs) are investigated to optimize kernels effectively, speed up training greatly and mine huge amounts of data efficiently.
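To make the MEB idea concrete, here is a minimal sketch (illustrative only, not the thesis's MEB-SVM algorithm) of the classic Badoiu-Clarkson core-set iteration for an approximate minimum enclosing ball, the primitive on which MEB-based data reduction builds:

```python
import numpy as np

def approx_meb(X, n_iter=200):
    # Approximate minimum enclosing ball: repeatedly step the centre towards
    # the farthest point with a shrinking step size (core-set iteration).
    c = X[0].astype(float)
    for t in range(1, n_iter + 1):
        far = X[((X - c) ** 2).sum(axis=1).argmax()]
        c += (far - c) / (t + 1)
    r = np.sqrt(((X - c) ** 2).sum(axis=1).max())
    return c, r

X = np.random.default_rng(1).normal(size=(1000, 5))   # hypothetical data
centre, radius = approx_meb(X)
print(radius)   # radius of the (approximate) enclosing ball
```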
APA, Harvard, Vancouver, ISO, and other styles
12

Andersson, Björn. "Contributions to Kernel Equating." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-234618.

Full text
Abstract:
The statistical practice of equating is needed when scores on different versions of the same standardized test are to be compared. This thesis constitutes four contributions to the observed-score equating framework kernel equating. Paper I introduces the open source R package kequate which enables the equating of observed scores using the kernel method of test equating in all common equating designs. The package is designed for ease of use and integrates well with other packages. The equating methods non-equivalent groups with covariates and item response theory observed-score kernel equating are currently not available in any other software package. In paper II an alternative bandwidth selection method for the kernel method of test equating is proposed. The new method is designed for usage with non-smooth data such as when using the observed data directly, without pre-smoothing. In previously used bandwidth selection methods, the variability from the bandwidth selection was disregarded when calculating the asymptotic standard errors. Here, the bandwidth selection is accounted for and updated asymptotic standard error derivations are provided. Item response theory observed-score kernel equating for the non-equivalent groups with anchor test design is introduced in paper III. Multivariate observed-score kernel equating functions are defined and their asymptotic covariance matrices are derived. An empirical example in the form of a standardized achievement test is used and the item response theory methods are compared to previously used log-linear methods. In paper IV, Wald tests for equating differences in item response theory observed-score kernel equating are conducted using the results from paper III. Simulations are performed to evaluate the empirical significance level and power under different settings, showing that the Wald test is more powerful than the Hommel multiple hypothesis testing method. Data from a psychometric licensure test and a standardized achievement test are used to exemplify the hypothesis testing procedure. The results show that using the Wald test can provide different conclusions to using the Hommel procedure.
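As a simplified sketch of the continuization step that kernel equating is named after (hypothetical scores and probabilities; the full method, e.g. as implemented in kequate, additionally rescales so that the mean and variance of the discrete distribution are preserved and selects the bandwidth by a penalty criterion), a discrete score distribution can be smoothed with a Gaussian kernel:

```python
import numpy as np
from scipy.stats import norm

def continuized_cdf(x, scores, probs, h):
    # Gaussian-kernel smoothing of a discrete score distribution into a continuous CDF.
    return float(sum(r * norm.cdf((x - s) / h) for s, r in zip(scores, probs)))

scores = np.arange(0, 11)            # hypothetical 0-10 point test form
probs = np.full(11, 1 / 11)          # hypothetical uniform score probabilities
print(continuized_cdf(5.0, scores, probs, h=0.6))   # 0.5 by symmetry of this toy example
```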
APA, Harvard, Vancouver, ISO, and other styles
13

Dhanjal, Charanpal. "Sparse Kernel feature extraction." Thesis, University of Southampton, 2008. https://eprints.soton.ac.uk/64875/.

Full text
Abstract:
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks, since it can decrease accuracy, make it harder to understand the learned model and increase computational and memory requirements. One approach to this problem is to extract appropriate features. General approaches such as Principal Components Analysis (PCA) are successful for a variety of applications, however they can be improved upon by targeting feature extraction towards more specific problems. More recent work has been more focused and considers sparser formulations which potentially have improved generalisation. However, sparsity is not always efficiently implemented and frequently requires complex optimisation routines. Furthermore, one often does not have a direct control on the sparsity of the solution. In this thesis, we address some of these problems, first by proposing a general framework for feature extraction which possesses a number of useful properties. The framework is based on Partial Least Squares (PLS), and one can choose a user defined criterion to compute projection directions. It draws together a number of existing results and provides additional insights into several popular feature extraction methods. More specific feature extraction is considered for three objectives: matrix approximation, supervised feature extraction and learning the semantics of two-viewed data. Computational and memory efficiency is prioritised, as well as sparsity in a direct manner and simple implementations. For the matrix approximation case, an analysis of different orthogonalisation methods is presented in terms of the optimal choice of projection direction. The analysis results in a new derivation for Kernel Feature Analysis (KFA) and the formation of two novel matrix approximation methods based on PLS. In the supervised case, we apply the general feature extraction framework to derive two new methods based on maximising covariance and alignment respectively. Finally, we outline a novel sparse variant of Kernel Canonical Correlation Analysis (KCCA) which approximates a cardinality constrained optimisation. This method, as well as a variant which performs feature selection in one view, is applied to an enzyme function prediction case study.
APA, Harvard, Vancouver, ISO, and other styles
14

Rademeyer, Estian. "Bayesian kernel density estimation." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/64692.

Full text
Abstract:
This dissertation investigates the performance of two-class classification on credit-scoring data sets with low default ratios. The standard two-class parametric Gaussian and naive Bayes (NB) classifiers, as well as the non-parametric Parzen classifiers, are extended, using Bayes' rule, to include either a class-imbalance or a Bernoulli prior. This is done with the aim of addressing the low-default-probability problem. Furthermore, the performance of Parzen classification with Silverman and Minimum Leave-one-out Entropy (MLE) Gaussian kernel bandwidth estimation is also investigated. It is shown that the non-parametric Parzen classifiers yield superior classification power. However, there is a desire for these non-parametric classifiers to possess predictive power such as that exhibited by the odds ratio found in logistic regression (LR). The dissertation therefore dedicates a section to, amongst other things, the study of the paper entitled "Model-Free Objective Bayesian Prediction" (Bernardo 1999). Since this approach to Bayesian kernel density estimation is only developed for the univariate and the uncorrelated multivariate case, the section develops a theoretical multivariate approach to Bayesian kernel density estimation. This approach is theoretically capable of handling both correlated and uncorrelated features in data. This is done through the assumption of a multivariate Gaussian kernel function and the use of an inverse Wishart prior.
Dissertation (MSc)--University of Pretoria, 2017.
The financial assistance of the National Research Foundation (NRF) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at, are those of the authors and are not necessarily to be attributed to the NRF.
Statistics
MSc
Unrestricted
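For context with the abstract above, a minimal two-class Parzen-window classifier with a Gaussian kernel and one common form of Silverman's rule-of-thumb bandwidth might look as follows (a sketch under simplifying assumptions, not the dissertation's implementation; the class-imbalance prior enters through prior_default):

```python
import numpy as np

def silverman_bandwidth(X):
    # One common multivariate rule of thumb: h = (4/(d+2))^(1/(d+4)) * n^(-1/(d+4)) * sigma
    n, d = X.shape
    return (4 / (d + 2)) ** (1 / (d + 4)) * n ** (-1 / (d + 4)) * X.std(axis=0).mean()

def parzen_log_density(x, X, h):
    # Gaussian-kernel density estimate at x; the (2*pi)^(-d/2) factor is omitted
    # because it is identical for both classes and cancels in the comparison.
    d2 = ((X - x) ** 2).sum(axis=1)
    return np.log(np.mean(np.exp(-d2 / (2 * h ** 2)))) - X.shape[1] * np.log(h)

def classify(x, X_good, X_default, prior_default=0.05):
    # Bayes' rule with a class prior reflecting the low default ratio.
    s0 = parzen_log_density(x, X_good, silverman_bandwidth(X_good)) + np.log(1 - prior_default)
    s1 = parzen_log_density(x, X_default, silverman_bandwidth(X_default)) + np.log(prior_default)
    return int(s1 > s0)     # 1 = predicted default
```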
APA, Harvard, Vancouver, ISO, and other styles
15

Pevný, Tomáš. "Kernel methods in steganalysis." Diss., Online access via UMI, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
16

Corrigan, Andrew. "Kernel-based meshless methods." Fairfax, VA : George Mason University, 2009. http://hdl.handle.net/1920/4585.

Full text
Abstract:
Thesis (Ph.D.)--George Mason University, 2009.
Vita: p. 108. Thesis co-directors: John Wallin, Thomas Wanner. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computational Science and Informatics. Title from PDF t.p. (viewed Oct. 12, 2009). Includes bibliographical references (p. 102-107). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
17

Ho, Ka-Lung. "Kernel eigenvoice speaker adaptation /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20HOK.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 56-61). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
18

Reichenbach, Stephen Edward. "Small-kernel image restoration." W&M ScholarWorks, 1989. https://scholarworks.wm.edu/etd/1539623783.

Full text
Abstract:
The goal of image restoration is to remove degradations that are introduced during image acquisition and display. Although image restoration is a difficult task that requires considerable computation, in many applications the processing must be performed significantly faster than is possible with traditional algorithms implemented on conventional serial architectures. As demonstrated in this dissertation, digital image restoration can be efficiently implemented by convolving an image with a small kernel. Small-kernel convolution is a local operation that requires relatively little processing and can be easily implemented in parallel. A small-kernel technique must compromise effectiveness for efficiency, but if the kernel values are well-chosen, small-kernel restoration can be very effective. This dissertation develops a small-kernel image restoration algorithm that minimizes expected mean-square restoration error. The derivation of the mean-square-optimal small kernel parallels that of the Wiener filter, but accounts for explicit spatial constraints on the kernel. This development is thorough and rigorous, but conceptually straightforward: the mean-square-optimal kernel is conditioned only on a comprehensive end-to-end model of the imaging process and spatial constraints on the kernel. The end-to-end digital imaging system model accounts for the scene, acquisition blur, sampling, noise, and display reconstruction. The determination of kernel values is directly conditioned on the specific size and shape of the kernel. Experiments presented in this dissertation demonstrate that small-kernel image restoration requires significantly less computation than a state-of-the-art implementation of the Wiener filter, yet the optimal small kernel yields comparable restored images. The mean-square-optimal small-kernel algorithm and most other image restoration algorithms require a characterization of the image acquisition device (i.e., an estimate of the device's point spread function or optical transfer function). This dissertation describes an original method for accurately determining this characterization. The method extends the traditional knife-edge technique to explicitly deal with fundamental sampled-system considerations of aliasing and sample/scene phase. Results for both simulated and real imaging systems demonstrate the accuracy of the method.
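As a toy illustration of the small-kernel idea (illustrative only; the dissertation derives the kernel analytically from an end-to-end imaging model rather than fitting it to example images), one can fit a small restoration kernel by least squares and apply it as a cheap local operation:

```python
import numpy as np
from scipy.ndimage import correlate

def fit_small_kernel(degraded, reference, size=5, step=4):
    # Least-squares fit of a size x size kernel so that correlating the degraded
    # image with it approximates the reference image at a subsample of pixels.
    pad = size // 2
    rows, targets = [], []
    for i in range(pad, degraded.shape[0] - pad, step):
        for j in range(pad, degraded.shape[1] - pad, step):
            rows.append(degraded[i - pad:i + pad + 1, j - pad:j + pad + 1].ravel())
            targets.append(reference[i, j])
    k, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return k.reshape(size, size)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                                     # hypothetical scene
blurred = correlate(sharp, np.full((3, 3), 1 / 9), mode='nearest') \
          + 0.01 * rng.standard_normal((64, 64))                 # blur + noise
kernel = fit_small_kernel(blurred, sharp)
restored = correlate(blurred, kernel, mode='nearest')            # local, parallel-friendly
print(np.abs(blurred - sharp).mean(), np.abs(restored - sharp).mean())  # restoration should reduce the error
```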
APA, Harvard, Vancouver, ISO, and other styles
19

Haasdonk, Bernard [Verfasser]. "Transformation Knowledge in Pattern Analysis with Kernel Methods: Distance and Integration Kernels / Bernard Haasdonk." Aachen : Shaker, 2006. http://nbn-resolving.de/urn:nbn:de:bsz:25-opus-23769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ansary, B. M. Saif. "High Performance Inter-kernel Communication and Networking in a Replicated-kernel Operating System." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/78338.

Full text
Abstract:
Modern computer hardware platforms are moving towards high core-count and heterogeneous Instruction Set Architecture (ISA) processors to achieve improved performance, as single-core performance has reached its limit. These trends put the current monolithic SMP operating system (OS) under scrutiny in terms of scalability and portability. Proper pairing of computing workloads with computing resources has become increasingly arduous with traditional software architecture. One of the most promising emerging operating system architectures is the multi-kernel. Multi-kernels not only address scalability issues, but also inherently support heterogeneity. Furthermore, they provide an easy way to properly map computing workloads to the correct type of processing resource in the presence of heterogeneity. Multi-kernels do so by partitioning the resources, running independent kernel instances, and co-operating amongst themselves to present a unified view of the system to the application. Popcorn is one of the most prominent multi-kernels today; it is unique in that it runs multiple Linux instances on different cores or groups of cores and provides a unified view of the system, i.e., a Single System Image (SSI). This thesis presents four contributions. First, it introduces a filesystem for Popcorn, a vital component for providing an SSI. Popcorn supports thread/process migration, which requires migration of file descriptors, a capability not provided by traditional filesystems or popular distributed file systems; this work proposes a scalable messaging-based file-descriptor migration and consistency protocol for Popcorn. Second, multi-kernel OSs rely heavily on a fast, low-latency messaging layer to be scalable. Messaging is even more important in heterogeneous systems where different types of cores sit on different islands with no shared memory. Thus, another contribution proposes a fast, low-latency messaging layer to enable communication among heterogeneous processor islands for heterogeneous Popcorn. With advances in networking technology, the newest Ethernet technologies support up to 40 Gbps of bandwidth, but due to scalability issues in monolithic kernels, the number of connections served per second does not scale with this increase in speed. Therefore, the third and fourth contributions address this problem with Snap Bean, a virtual network device, and Angel, an opportunistic load balancer for Popcorn's network system. With the messaging layer, Popcorn achieves over a 30% performance benefit over OpenCL and the Intel offloading technique (LEO), and with NetPopcorn we achieve 7 to 8 times better performance than vanilla Linux and 2 to 5 times better than the state-of-the-art Affinity-Accept.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
21

Ceau, Alban. "Kernel. Application et potentiels scientifiques de l’interférométrie pleine pupille : Analyse statistique des observables kernel." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4035.

Full text
Abstract:
Les observations à haute résolution du ciel sont assurées par deux grandes techniques : l'imagerie et l'interférométrie. L'imagerie consiste à estimer la distribution spatiale d'intensité d'une source, en formant une image de cette distribution d'intensité sur une plaque photosensible, anciennement chimiquement (plaque photographique), mais désormais exclusivement électronique, sous la forme de détecteur. L'imagerie est limitée par la qualité des images produites, qui peut être approximée par la taille de l'image que forme un point source sur le détecteur. Plus cette image est petite, plus la résolution est importante. La deuxième technique, l'interférométrie consiste à exploiter les propriétés ondulatoires de la lumière pour former non pas une image, mais des franges d'interférence, qui encodent dans leur position et leur contraste des informations sur la structure spatiale de l'objet observé.Bien que l'interférométrie et l'imagerie soient deux techniques différentes, et que leurs spécialistes tendent à former des communautés distinctes, les phénomènes à l'œuvre lors de la formation d'une image d'une part, et d’une figure d'interférence d'autre part sont fondamentalement les mêmes. Cela permet, sous certaines conditions, d'exploiter des techniques interférométriques sur des images. Une de ces techniques permet de former clôtures de phases, des observables robustes aux défauts optiques à partir d'observations interférométriques. Si les défauts optiques sont assez faibles (avec des erreurs sur le chemin optique plus petites que la longueur d'onde), il est possible de former des observables analogues à ces clôtures de phase à partir d'images, appelées des kernel phases, ou noyaux de phase.Le régime dans lequel ces observables peuvent être extraites n'a été rendu accessible que récemment, avec le lancement des premiers télescopes spatiaux d'une part, et d'autre part l'émergence de l'optique adaptative, qui peut corriger en temps réel les défauts liés aux turbulences atmosphériques. Si les défauts sont assez faibles, les images sont dites "limitées par la diffraction“ : la réponse du télescope peut être considérée comme dominée par les effets de diffraction, qui dépendent de la géométrie de l'ouverture d'entrée, et les défauts optiques comme des perturbations de la diffraction.Dans ce régime, la structure de la perturbation peut être utilisée pour construire des observables qu'elle n'affecte pas. Ces observables ne sont toutefois pas robustes à toutes les erreurs. Dans ce cas, je me suis concentré sur la détection de binaires dans les kernel phases extraites à partir d'images, en utilisant des méthodes statistiques robustes. En théorie de la détection, la procédure la plus efficace pour détecter un signal dans des données bruitées est le rapport de vraisemblance. Ici, je propose trois tests, tous basés sur cette procédure optimale pour effectuer des détections systématiques de binaires dans des images. Ces procédures sont applicables aux kernels phases extraites à partir de n'importe quelle image.Les performances de ces procédures de détection sont ensuite prédites pour des observations de naines brunes de type Y avec le télescope spatial James Webb. Nous montrons que des détections de binaires sont possible à des contrastes pouvant atteindre 1000 à des séparations correspondant à la limite de diffraction, qui est communément admise comme la "limite de résolution“ d'un télescope formant des images. 
Ces performances font de l'interférométrie kernel une méthode performante pour la détection de binaires de faible intensité. Ces limites dépendent fortement du flux disponible, qui détermine l'erreur sur les valeurs de flux mesurées au niveau de chaque pixel, et, par extension les erreurs qui affectent les kernel phases
High-resolution observations of the sky are made using techniques that fall into two broad categories: imaging and interferometry. Imaging consists in estimating the spatial intensity distribution of a source by forming an image of this source on a photosensitive plate, historically using chemical processes (a photographic plate), but nowadays electronically, with detectors. Imaging is limited by the quality of the images, which can be approximated from the size of the image formed by a point source on the detector: the smaller this size, the higher the resolution. Interferometry, the second technique, consists in exploiting the wave properties of light to form interference fringes rather than an image. These fringes encode information on the spatial structure of the observed object in their position and contrast. Even though interferometry and imaging are two different techniques, and specialists of one or the other tend to form distinct communities, the phenomena that lead to the formation of an image on the one hand, and of an interference pattern on the other, are fundamentally the same. This enables, under some conditions, the use of techniques originally developed for interferometry data on images. One of these techniques constructs closure phases, observables that are robust to optical defects, from interferometric observations. If these optical defects are small enough (with optical path differences smaller than the wavelength), it is possible to form observables analogous to these closure phases from images, called kernel phases. The regime in which these observables can be extracted was only attained recently, with the launch of the first space telescopes and the rise of extreme adaptive optics, which can correct in real time the defects caused by atmospheric turbulence. If these defects are small enough, images are called "diffraction limited": the response of the telescope can be considered dominated by the effects of diffraction, which depend on the geometry of the entrance aperture, and optical defects can be described as perturbations of the diffraction. In this regime, the structure of the perturbation can be used to build observables it does not affect. These observables are, however, not robust to all errors. An imperfect model of the entrance aperture and the approximations necessary for their construction can lead to systematic errors. Noise in the image also propagates to the observables. To be able to analyze a measurement, it is necessary to know the errors that affect it and to propagate them to the final parameters deduced from these measurements. The use case we chose to evaluate these techniques is images of cold brown dwarfs produced by the James Webb Space Telescope (JWST), in order to predict the detection performance for companions around them. Currently, observation of these cold, Y-type dwarfs is hampered by their very weak luminosity and temperature, which make them very difficult to observe in the near infrared, the preferred domain of AO-corrected ground-based observatories. Thanks to its great sensitivity and stability, JWST will be able to observe these objects with the greatest precision achieved yet. This stability makes images produced by this telescope ideal candidates for kernel analysis. The performance of these detection procedures is then predicted for images of cold brown dwarfs produced by JWST.
For these images, we show that binary detections are possible at contrasts of up to 1000 at separations corresponding to the diffraction limit, often considered to be the resolution limit of a telescope. These contrast detection limits make kernel interferometry a powerful method for the detection of low-flux binaries. The detection limits strongly depend on the available flux, which determines the error level at each pixel, and therefore the noise that affects the kernel phases.
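To make the construction concrete, here is a small numerical sketch (toy dimensions and a random matrix, not the thesis's pipeline): if, to first order, the measured Fourier phases are A times the unknown pupil-plane phase errors, then any operator K whose rows span the left null space of A yields observables K·phi that those errors cannot corrupt.

```python
import numpy as np

def kernel_operator(A, tol=1e-10):
    # Rows of K form a basis of the left null space of A, so K @ A = 0.
    U, s, Vt = np.linalg.svd(A)
    rank = int((s > tol * s.max()).sum())
    return U[:, rank:].T

A = np.random.default_rng(0).standard_normal((30, 12))  # toy transfer matrix: 30 baseline phases, 12 pupil phases
K = kernel_operator(A)
print(K.shape)                 # (18, 30): 30 - rank(A) kernel phases
print(np.allclose(K @ A, 0))   # True: the kernel phases K @ phi reject small pupil-plane phase errors
```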
APA, Harvard, Vancouver, ISO, and other styles
22

Liang, Zhiyu. "Eigen-analysis of kernel operators for nonlinear dimension reduction and discrimination." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1388676476.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Guardati, Emanuele. "Path integrals and heat kernel." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14608/.

Full text
Abstract:
The path integral was first introduced by Feynman in 1948. It constitutes a different approach to non-relativistic quantum mechanics, equivalent to the earlier formulations. In contrast to the purely mathematical approach of canonical quantization, Feynman favours a more intuitive interpretation, basing his work on a generalization of the double-slit experiment. Besides giving a deeper understanding of the well-known results of non-relativistic quantum mechanics, the use of the path integral also has the advantage of simplifying perturbative calculations. After showing the equivalence with the Schrödinger formulation, we discuss the classical limit and give examples of explicit calculations, determining the propagators of the free quantum particle and of the quantum harmonic oscillator. Finally, using the path integral, we study a perturbative solution of the heat equation.
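For context, the two closed-form kernels the abstract refers to, quoted here from standard textbook results rather than from the thesis itself, are the free-particle propagator obtained from the path integral and the one-dimensional heat kernel:

```latex
K_{\mathrm{free}}(x_b, t; x_a, 0) = \sqrt{\frac{m}{2\pi i \hbar t}}
   \exp\!\left(\frac{i\, m\, (x_b - x_a)^2}{2\hbar t}\right),
\qquad
K_{\mathrm{heat}}(x, y; t) = \frac{1}{\sqrt{4\pi t}}
   \exp\!\left(-\frac{(x-y)^2}{4t}\right),
```

where the heat kernel satisfies \(\partial_t K = \partial_x^2 K\) and is the natural starting point for the perturbative solution of the heat equation mentioned in the abstract.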
APA, Harvard, Vancouver, ISO, and other styles
24

Tullo, Alessandra. "Apprendimento automatico con metodo kernel." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23200/.

Full text
Abstract:
The aim of this work is the study of kernel methods in machine learning. Starting from the definition of reproducing kernel Hilbert spaces, kernel functions and kernel methods are examined. In particular, the kernel trick and the representer theorem are analysed. Finally, an example of a supervised machine-learning problem is given, the kernel linear regression problem, which is solved by means of the representer theorem.
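A minimal sketch of the kernel regression example mentioned above (illustrative only, with hypothetical data): by the representer theorem, the regularised least-squares minimiser in the RKHS has the form f(x) = sum_i alpha_i k(x_i, x), so fitting reduces to a finite linear solve.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, lam=1e-2, gamma=1.0):
    # Representer theorem: the infinite-dimensional problem collapses to
    # solving (K + lam*I) alpha = y on the training points.
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return rbf(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = fit(X, y)
print(predict(X, alpha, np.array([[0.5]])))   # close to sin(0.5) ~ 0.48
```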
APA, Harvard, Vancouver, ISO, and other styles
25

Cheung, Pak-Ming. "Kernel-based multiple-instance learning /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?COMP%202006%20CHEUNGP.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Brinker, Klaus. "Active learning with kernel machines." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=974403946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Prenov, B., and Nikolai Tarkhanov. "Kernel spikes of singular problems." Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2008/2619/.

Full text
Abstract:
Function spaces with asymptotics are a usual tool in the analysis on manifolds with singularities. The asymptotics are singular ingredients of the kernels of pseudodifferential operators in the calculus. They correspond to potentials supported by the singularities of the manifold, and in this form asymptotics can be treated already on smooth configurations. This paper is aimed at describing refined asymptotics in the Dirichlet problem in a ball. The beauty of explicit formulas highlights the structure of asymptotic expansions in the calculi on singular varieties.
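For orientation (a standard potential-theory fact quoted for context, not taken from the paper), the Dirichlet problem in the unit ball B in R^n is solved by the Poisson kernel:

```latex
u(x) = \int_{\partial B} P(x, y)\, f(y)\, d\sigma(y),
\qquad
P(x, y) = \frac{1 - |x|^2}{\omega_{n-1}\, |x - y|^{n}},
\quad x \in B,\ y \in \partial B,
```

where \(\omega_{n-1}\) is the surface area of the unit sphere; the refined asymptotics the abstract mentions concern expansions in this kind of Dirichlet problem.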
APA, Harvard, Vancouver, ISO, and other styles
28

Zhu, Feng. "Integrity-Based Kernel Malware Detection." FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1572.

Full text
Abstract:
Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot catch up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on the specific features of malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field-sensitivity, array-sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity from Linux kernel 2.4.32 and the Windows Research Kernel (WRK) with a very low false positive rate and a very low false negative rate. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiment, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all the unknown KQ requests that cannot be validated. We apply KQguard on WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training and very low false negatives against 125 real-world malware samples and nine synthetic attacks). In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware by its violation of these integrity properties during execution.
APA, Harvard, Vancouver, ISO, and other styles
29

Kellner, Jérémie. "Gaussian models and kernel methods." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10177/document.

Full text
Abstract:
Les méthodes à noyaux ont été beaucoup utilisées pour transformer un jeu de données initial en les envoyant dans un espace dit « à noyau » ou RKHS, pour ensuite appliquer une procédure statistique sur les données transformées. En particulier, cette approche a été envisagée dans la littérature pour tenter de rendre un modèle probabiliste donné plus juste dans l'espace à noyaux, qu'il s'agisse de mélanges de gaussiennes pour faire de la classification ou d'une simple gaussienne pour de la détection d'anomalie. Ainsi, cette thèse s'intéresse à la pertinence de tels modèles probabilistes dans ces espaces à noyaux. Dans un premier temps, nous nous concentrons sur une famille de noyaux paramétrée - la famille des noyaux radiaux gaussiens - et étudions d'un point de vue théorique la distribution d'une variable aléatoire projetée vers un RKHS correspondant. Nous établissons que la plupart des marginales d'une telle distribution est asymptotiquement proche d'un « scale-mixture » de gaussiennes - autrement dit une gaussienne avec une variance aléatoire - lorsque le paramètre du noyau tend vers l'infini. Une nouvelle méthode de détection d'anomalie utilisant ce résultat théorique est introduite.Dans un second temps, nous introduisons un test d'adéquation basé sur la Maximum Mean Discrepancy pour tester des modèles gaussiens dans un RKHS. En particulier, notre test utilise une procédure de bootstrap paramétrique rapide qui permet d'éviter de ré-estimer les paramètres de la distribution gaussienne à chaque réplication bootstrap
Kernel methods have been extensively used to transform initial datasets by mapping them into a so-called kernel space or RKHS, before applying some statistical procedure onto the transformed data. In particular, this kind of approach has been explored in the literature to try to make some prescribed probabilistic model more accurate in the RKHS, for instance Gaussian mixtures for classification or mere Gaussians for outlier detection. Therefore this thesis studies the relevancy of such models in kernel spaces. First, we focus on a family of parameterized kernels - Gaussian RBF kernels - and study theoretically the distribution of an embedded random variable in a corresponding RKHS. We prove that most marginals of such a distribution converge weakly to a so-called ''scale-mixture'' of Gaussians - basically a Gaussian with a random variance - when the parameter of the kernel tends to infinity. This result is used in practice to devise a new method for outlier detection. Second, we present a one-sample test for normality in an RKHS based on the Maximum Mean Discrepancy. In particular, our test uses a fast parametric bootstrap procedure which circumvents the need for re-estimating the Gaussian parameters for each bootstrap replication.
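A rough sketch of such a test (not the thesis's procedure; the kernel bandwidth, sample sizes and bootstrap scheme here are simplified assumptions): estimate the Gaussian parameters once, compute an MMD statistic between the data and a sample from the fitted Gaussian, and calibrate it with bootstrap samples drawn from that same fitted Gaussian without re-estimation.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy with an RBF kernel.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def mmd_normality_test(X, n_boot=200, gamma=1.0, seed=0):
    # H0: X is Gaussian. Parameters are estimated once; bootstrap samples are
    # drawn from the fitted Gaussian without re-estimation (the "fast" part).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)
    stat = rbf_mmd2(X, rng.multivariate_normal(mu, Sigma, n), gamma)
    null = np.array([rbf_mmd2(rng.multivariate_normal(mu, Sigma, n),
                              rng.multivariate_normal(mu, Sigma, n), gamma)
                     for _ in range(n_boot)])
    return stat, float((null >= stat).mean())   # statistic and bootstrap p-value

X = np.random.default_rng(1).standard_normal((100, 3)) ** 3   # heavy-tailed, non-Gaussian sample
print(mmd_normality_test(X))   # a small p-value would indicate departure from normality
```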
APA, Harvard, Vancouver, ISO, and other styles
30

Subhan, Fazli. "Multilevel sparse kernel-based interpolation." Thesis, University of Leicester, 2011. http://hdl.handle.net/2381/9894.

Full text
Abstract:
Radial basis functions (RBFs) have been successfully applied for the last four decades for fitting scattered data in R^d, due to their simple implementation for any d. However, RBF interpolation faces the challenge of keeping a balance between convergence performance and numerical stability. Moreover, to ensure good convergence rates in high dimensions, one has to deal with the difficulty of exponential growth of the degrees of freedom with respect to the dimension d of the interpolation problem. This limits the application of RBFs to a few thousand data sites and/or low dimensions in practice. In this work, we propose a hierarchical multilevel scheme, termed the sparse kernel-based interpolation (SKI) algorithm, for the solution of interpolation problems in high dimensions. The new scheme uses direction-wise multilevel decomposition of structured or mildly unstructured interpolation data sites in conjunction with the application of kernel-based interpolants with different scaling in each direction. The new SKI algorithm can be viewed as an extension of the idea of sparse grids/hyperbolic crosses to kernel-based functions. To achieve accelerated convergence, we propose a multilevel version of the SKI algorithm. The SKI and multilevel SKI (MLSKI) algorithms admit good reproduction properties: they are numerically stable and efficient for the reconstruction of large data in R^d, for d = 2, 3, 4, with several thousand data sites. SKI is generally superior to classical RBF methods in terms of complexity, run time, and convergence, at least for large data sets. The MLSKI algorithm accelerates the convergence of SKI and also generally converges faster than the classical multilevel RBF scheme.
APA, Harvard, Vancouver, ISO, and other styles
31

Friess, Thilo-Thomas. "Perceptrons in kernel feature spaces." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Xiao, Bai. "Heat kernel analysis on graphs." Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sun, Xinyuan. "Kernel Methods for Collaborative Filtering." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/135.

Full text
Abstract:
The goal of the thesis is to extend kernel methods to matrix factorization (MF) for collaborative filtering (CF). In the current literature, MF methods usually assume that the correlated data are distributed on a linear hyperplane, which is not always the case. The best-known member of the kernel-method family is the support vector machine (SVM), applied to linearly non-separable data. In this thesis, we apply kernel methods to MF, embedding the data into a possibly higher-dimensional space and conducting the factorization in that space. To improve kernelized matrix factorization, we apply multi-kernel learning methods to select optimal kernel functions from the candidates and introduce L2-norm regularization on the weight-learning process. In our empirical study, we conduct experiments on three real-world datasets. The results suggest that the proposed method can improve the accuracy of the prediction, surpassing state-of-the-art CF methods.
APA, Harvard, Vancouver, ISO, and other styles
34

Evgeniou, Theodoros K. (Theodoros Kostantinos) 1974. "Learning with kernel machine architectures." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86442.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 99-106).
by Theodoros K. Evgeniou.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
35

Naish-Guzman, Andrew Guillermo Peter. "Sparse and robust kernel methods." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Haine, Christopher. "Kernel optimization by layout restructuring." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0639/document.

Full text
Abstract:
Careful data layout design is crucial for achieving high performance, as today's processors waste a considerable amount of time stalled on memory transactions; in particular, spatial and temporal locality have to be optimized. However, data layout transformation is an area left largely unexplored by state-of-the-art compilers, due to the difficulty of evaluating the possible performance gains of transformations. Moreover, optimizing data layout is time-consuming and error-prone, and the transformations to consider are too numerous to be tried by hand in the hope of discovering a high-performance version. We propose to guide application programmers through data layout restructuring with extensive feedback, first by providing a comprehensive multidimensional description of the initial layout, built through analysis of memory traces collected from the application binary, aimed at pinpointing problematic strides at the instruction level, independently of the input language. We focus on layout transformations that are translatable to a C-like formalism, to aid user understanding, and we apply and assess them on two representative multithreaded real-life applications, a cardiac wave simulation and a lattice QCD simulation, with different inputs and parameters. The performance predicted for the different transformations matches hand-optimized layout code to within 5%.
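To make the locality issue concrete, here is a small NumPy sketch of the array-of-structures versus structure-of-arrays effect that such feedback aims to expose; the field names, padding, and sizes are invented for illustration and are not taken from the thesis.

import time
import numpy as np

n = 2_000_000
# AoS-like layout: one 64-byte record per element, so reading field 'x' strides memory.
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"), ("pad", "f8", (5,))])
# SoA-like layout: the 'x' values alone, stored contiguously.
soa_x = np.zeros(n, dtype="f8")

def bench(label, arr):
    t0 = time.perf_counter()
    for _ in range(20):
        s = arr.sum()          # touches only the selected field / array
    print(f"{label}: {time.perf_counter() - t0:.3f} s (checksum {s})")

bench("AoS field 'x' (64-byte stride)", aos["x"])
bench("SoA contiguous x array        ", soa_x)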
APA, Harvard, Vancouver, ISO, and other styles
37

Lundberg, Johannes. "Safe Kernel Programming with Rust." Thesis, KTH, Programvaruteknik och datorsystem, SCS, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233255.

Full text
Abstract:
Writing bug-free computer code is a challenging task in a low-level language like C. While C compilers are getting better at detecting possible bugs, they still have a long way to go. For application programming we have higher-level languages that abstract away the details of memory handling and concurrent programming. However, much of an operating system's source code is still written in C, and the kernel is written exclusively in C. How can we make writing kernel code safer? What performance penalties do we have to pay for writing safe code? In this thesis, we answer these questions using the Rust programming language. A Rust kernel programming interface is designed and implemented, and a network device driver is then ported to Rust. The Rust code is analyzed to determine its safety, and the two implementations are benchmarked for performance and compared to each other. It is shown that a kernel device driver can be written entirely in safe Rust code, although the interface layer requires some unsafe code. Measurements show unexpected minor performance improvements with Rust.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Xiaoyi. "Transfer Learning with Kernel Methods." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0005.

Full text
Abstract:
Transfer learning aims to take advantage of source data to help the learning task on related but different target data. This thesis contributes to homogeneous (source and target share the same representation space) and transductive (the target task is identical to the source task) transfer learning when no labeled target data are available. We relax the constraint of equal conditional label distributions often assumed in the literature, allowing increasingly general cases; our approach relies on finding transformations that make the source and target data similar. Firstly, a maximum likelihood based approach is proposed. Secondly, SVM is adapted to transfer learning with an additional constraint on the similarity of source and target data, measured by the Maximum Mean Discrepancy (MMD). Thirdly, kernel PCA is used to find a subspace, obtained from a nonlinear transformation of the source and target data, in which the distributions of the observations are as similar as possible, by minimizing the MMD in the RKHS. We further develop the KPCA-based approach so that a linear transformation in the input space is enough for a good and robust alignment in the RKHS. Experimental results show the effectiveness of our approaches.
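A minimal sketch of the Maximum Mean Discrepancy estimate that underlies this alignment idea: a standard biased (V-statistic) estimator with a Gaussian kernel, shown here as a generic illustration rather than the author's code.

import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Squared MMD between samples X (source) and Y (target) in an RBF RKHS.
    return rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean() - 2 * rbf(X, Y, gamma).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(300, 2))
tgt = rng.normal(0.5, 1.0, size=(300, 2))          # shifted target domain
print("MMD^2 before alignment:", mmd2(src, tgt))
print("MMD^2 after aligning means:", mmd2(src, tgt - tgt.mean(0) + src.mean(0)))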
APA, Harvard, Vancouver, ISO, and other styles
39

You, Di. "Model Selection in Kernel Methods." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1322581224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Sung, Iyue. "Importance sampling kernel density estimation /." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu1486398528559777.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bloehdorn, Stephan. "Kernel Methods for knowledge structures." [S.l. : s.n.], 2008. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000009223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hsiao, Roger Wend Huu. "Kernel eigenspace-based MLLR adaptation /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20HSIAO.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Strathmann, Heiko. "Kernel methods for Monte Carlo." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10040707/.

Full text
Abstract:
This thesis investigates the use of reproducing kernel Hilbert spaces (RKHS) in the context of Monte Carlo algorithms. The work proceeds in three main themes. Adaptive Monte Carlo proposals: We introduce and study two adaptive Markov chain Monte Carlo (MCMC) algorithms to sample from target distributions with non-linear support and intractable gradients. Our algorithms, generalisations of random walk Metropolis and Hamiltonian Monte Carlo, adaptively learn local covariance and gradient structure respectively, by modelling past samples in an RKHS. We further show how to embed these methods into the sequential Monte Carlo framework. Efficient and principled score estimation: We propose methods for fitting an RKHS exponential family model that work by fitting the gradient of the log density, the score, thus avoiding the need to compute a normalization constant. While the problem is of general interest, here we focus on its embedding into the adaptive MCMC context from above. We improve the computational efficiency of an earlier solution with two novel fast approximation schemes without guarantees, and a low-rank, Nyström-like solution. The latter retains the consistency and convergence rates of the exact solution, at lower computational cost. Goodness-of-fit testing: We propose a non-parametric statistical test for goodness-of-fit. The measure is a divergence constructed via Stein's method using functions from an RKHS. We derive a statistical test, both for i.i.d. and non-i.i.d. samples, and apply the test to quantifying convergence of approximate MCMC methods, statistical model criticism, and evaluating accuracy in non-parametric score estimation.
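As an illustration of the goodness-of-fit theme, a compact kernel Stein discrepancy computation against a standard normal target might look as follows; the Gaussian kernel, bandwidth, and V-statistic form are assumptions for this sketch, not the estimators developed in the thesis.

import numpy as np

def ksd2_vs_standard_normal(x, h=1.0):
    # Biased (V-statistic) squared kernel Stein discrepancy of a 1-D sample
    # against N(0, 1), using a Gaussian kernel of bandwidth h.
    score = -x                                   # d/dx log p(x) for the standard normal
    u = x[:, None] - x[None, :]
    k = np.exp(-u ** 2 / (2 * h ** 2))
    dkx = -u / h ** 2 * k                        # dk/dx
    dky = u / h ** 2 * k                         # dk/dy
    dkxy = (1.0 / h ** 2 - u ** 2 / h ** 4) * k  # d^2 k / dx dy
    kp = (score[:, None] * score[None, :] * k
          + score[:, None] * dky + score[None, :] * dkx + dkxy)
    return kp.mean()

rng = np.random.default_rng(1)
print("sample from N(0,1):", ksd2_vs_standard_normal(rng.normal(size=500)))
print("sample from N(1,1):", ksd2_vs_standard_normal(rng.normal(1.0, 1.0, size=500)))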
APA, Harvard, Vancouver, ISO, and other styles
44

Cagnin, Francesco <1991&gt. "LLDBagility: practical macOS kernel debugging." Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/16240.

Full text
Abstract:
The effectiveness of debugging software issues depends largely on the capabilities of the tools available to aid in the task. To debug the macOS kernel there is at present no real alternative to the basic debugger integrated in the kernel itself, intended to be used remotely from another machine through a full-fledged debugger such as LLDB. Due to design constraints and implementation choices, this approach has several drawbacks, such as the need to modify the system configuration, and the impossibility of setting hardware breakpoints or pausing the execution of the kernel from the debugger. The aim of this work was to improve the overall debugging experience for the macOS kernel. To this end we developed LLDBagility, a tool that enables kernel debugging via virtual machine introspection. LLDBagility connects LLDB to any unmodified macOS virtual machine running on a patched version of the VirtualBox hypervisor, allowing the debugger to fully control the machine without the guest system being aware of the process. This solution makes it possible to overcome all the limitations of the classic kernel debugging approach, and also to implement useful new features such as the ability to save and later restore the state of the machine directly from the debugger.
APA, Harvard, Vancouver, ISO, and other styles
45

Braun, Mikio Ludwig. "Spectral properties of the kernel matrix and their relation to kernel methods in machine learning." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=978607309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Xiao, Quanwu. "Learning with kernel based regularization schemes /." access full-text access abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-ma-b30082365f.pdf.

Full text
Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [73]-81)
APA, Harvard, Vancouver, ISO, and other styles
47

Ramon, Gurrea Elies. "Kernel approaches for complex phenotype prediction." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671296.

Full text
Abstract:
The relationship between phenotype and genotypic information is considerably intricate and complex. Machine learning (ML) methods have been used successfully for phenotype prediction in a great range of problems within genetics and genomics. However, biological data are usually structured and belong to 'non-standard' data types, which can pose a challenge to most ML methods. Among them, kernel methods provide a very versatile approach to handling different types of data and problems through a family of functions called kernels. The main goal of this PhD thesis is the development and evaluation of specific kernel approaches for phenotypic prediction, focusing on biological problems with structured data types or study designs. In the first part, we predict drug resistance from HIV mutated protein sequences (protease, reverse transcriptase and integrase). We propose two categorical kernel functions (Overlap and Jaccard) that take into account HIV data particularities, such as allele mixtures. The proposed kernels are coupled with Support Vector Machines (SVM) and compared against two well-known standard kernel functions (Linear and RBF) and two non-kernel methods: Random Forests (RF) and the multilayer perceptron neural network. We also include in the kernels a relative weight representing the importance of each protein residue with respect to drug resistance. Taking into account both the categorical nature of the data and the presence of mixtures consistently delivers better predictions. The weighting effect is higher for reverse transcriptase and integrase inhibitors, which may be related to the different mutational patterns of the three viral enzymes regarding drug resistance. In the second part, we extend the previous study to account for the fact that protein positions are not independent. Mutated sequences are modelled as graphs, with edges weighted by the Euclidean distance between residues, obtained from X-ray crystal structures. A kernel for graphs (the exponential random walk kernel) that integrates the Overlap and Jaccard kernels is then computed. Despite the potential advantages of this kernel for graphs, no improvement in predictive ability over the kernels of the first study is observed. In the third part, we propose a kernel framework to unify unsupervised and supervised microbiome analyses. We use the same kernel matrix to perform phenotype prediction via SVMs and visualization via kernel principal component analysis (kPCA). We define two kernels for compositional data (Aitchison-RBF and compositional linear) and discuss the transformation of beta-diversity measures into kernels. The compositional linear kernel also allows the retrieval of taxa importances (microbial signatures) from the SVM model. Spatially and temporally structured datasets are handled with multiple kernel learning and kernels for time series, respectively. We illustrate the kernel framework with three datasets: a single-point soil dataset, a human dataset with a spatial component, and a previously unpublished longitudinal dataset on pig production. The analyses of the three case studies include a comparison with the original reports (for the two former datasets), as well as a contrast with results from RF.
The kernel framework not only allows a holistic view of the data but also gives good results in each learning area. In unsupervised analyses, the main patterns detected in the original reports are preserved in the kPCA. In supervised analyses, SVM has better (or, in some cases, equivalent) performance than RF, while the microbial signatures are consistent with the original studies and previous literature.
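A hedged sketch of the compositional linear kernel idea described above, reused here for kernel PCA; the pseudocount, function names, and toy data are assumptions for illustration, not the thesis implementation.

import numpy as np

def clr(X, pseudocount=1e-6):
    # Centred log-ratio transform of a compositional matrix (samples x taxa).
    X = X + pseudocount
    X = X / X.sum(axis=1, keepdims=True)
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

def compositional_linear_kernel(X, Y=None, pseudocount=1e-6):
    A = clr(X, pseudocount)
    B = A if Y is None else clr(Y, pseudocount)
    return A @ B.T

def kernel_pca(K, n_components=2):
    # Project samples onto the top principal components in the kernel feature space.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    w, V = np.linalg.eigh(H @ K @ H)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20, size=(30, 50)).astype(float)   # toy microbiome counts
K = compositional_linear_kernel(counts)
print(kernel_pca(K)[:5])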
APA, Harvard, Vancouver, ISO, and other styles
48

Meyer, Jochen. "Renormierungsgruppen-Flussgleichungen im Heat-Kernel-Formalismus." [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=96196006X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kile, Håkon. "Bandwidth Selection in Kernel Density Estimation." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10015.

Full text
Abstract:

In kernel density estimation, the most crucial step is to select a proper bandwidth (smoothing parameter). There are two conceptually different approaches to this problem: a subjective and an objective approach. In this report, we consider only the objective approach, which is based on minimizing an error defined by an error criterion. The most common objective bandwidth selection method is to minimize some squared-error expression, but this method is not without its critics: it is said not to perform satisfactorily in the tail(s) of the density, and to put too much weight on observations close to the mode(s) of the density. An approach which minimizes an absolute-error expression is thought to be without these drawbacks. We provide a new explicit formula for the mean integrated absolute error. The optimal mean integrated absolute error bandwidth is compared to the optimal mean integrated squared error bandwidth, and we argue that these two bandwidths are essentially equal. In addition, we study data-driven bandwidth selection, and we propose a new data-driven bandwidth selector. Our new bandwidth selector shows promising behavior with respect to the visual error criterion, especially in cases of limited sample size.
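For orientation, a small kernel density estimation sketch with the common rule-of-thumb (Silverman) bandwidth; this is background only and does not reproduce the MIAE-based or data-driven selectors studied in the report.

import numpy as np

def silverman_bandwidth(x):
    # Rule-of-thumb bandwidth for a Gaussian kernel: 1.06 * sigma * n^(-1/5).
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def gaussian_kde(x, grid, h):
    # Evaluate the Gaussian-kernel density estimate of sample x on a grid.
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 0.5, 100)])
h = silverman_bandwidth(sample)
grid = np.linspace(-6, 6, 7)
print("h =", round(h, 3), "density:", np.round(gaussian_kde(sample, grid, h), 3))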

APA, Harvard, Vancouver, ISO, and other styles
50

Topaloglu, Mehmet Ersan. "Improving Kernel Performance For Network Sniffing." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1097856/index.pdf.

Full text
Abstract:
Sniffing is the computer-network equivalent of telephone tapping, and a sniffer is simply any software tool used for sniffing. The needs of modern networks exceed what a sniffer can handle today, because of high network traffic and load. Several efforts have been made to overcome this problem; although successful approaches exist, the problem is not completely solved. These efforts mainly include producing faster hardware, modifying NICs (network interface cards), modifying the kernel, or combinations of these, and most are either costly or lack available know-how. In this thesis, the problem is attacked by modifying the kernel and the NIC, with the aim of transferring the data captured from the network to the application as fast as possible. Snort [1], running on Linux, is used as a case study for performance comparison with the original system. A significant decrease in packet loss ratios is observed in the resulting system.
APA, Harvard, Vancouver, ISO, and other styles