Academic literature on the topic 'Kernel filtering'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kernel filtering.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of an academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Kernel filtering"

1

Maeda, Yoshihiro, Norishige Fukushima, and Hiroshi Matsuo. "Taxonomy of Vectorization Patterns of Programming for FIR Image Filters Using Kernel Subsampling and New One." Applied Sciences 8, no. 8 (July 26, 2018): 1235. http://dx.doi.org/10.3390/app8081235.

Abstract:
This study examines vectorized programming for finite impulse response image filtering. Finite impulse response image filtering occupies a fundamental place in image processing and has several approximated acceleration algorithms. However, no sophisticated acceleration method exists for parameter-adaptive filters or other complex filters; for such cases, simple subsampling with code optimization is the only solution. Although Moore's law still holds, increases in central processing unit frequency have stopped, and using ever more transistors is becoming prohibitively complex due to power and thermal constraints. Most central processing units have multi-core architectures, complicated cache memories, and short-vector processing units. This change has complicated vectorized programming. Therefore, we first organize vectorization patterns of vectorized programming to exploit the computing performance of central processing units by revisiting general finite impulse response filtering. Furthermore, we propose a new vectorization pattern and term it loop vectorization. Moreover, these vectorization patterns mesh well with the acceleration method of kernel subsampling for general finite impulse response filters. Experimental results reveal that the vectorization patterns are appropriate for general finite impulse response filtering. The new vectorization pattern with kernel subsampling is found to be effective for various filters, including Gaussian range filtering, bilateral filtering, adaptive Gaussian filtering, and their randomly-kernel-subsampled variants.
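The kernel-subsampling idea described above can be sketched in a few lines: evaluate only every `step`-th tap of a Gaussian FIR kernel and renormalise over the taps actually used. This is an illustrative pure-Python sketch (the paper's contribution is SIMD-level vectorization patterns, which are not reproduced here); the function name and border handling are our own assumptions.

```python
import math

def gaussian_fir(signal, radius, sigma, step=1):
    """1-D FIR Gaussian filter; step > 1 subsamples the kernel taps.

    With step=1 this is the full kernel; larger steps approximate the
    filter with proportionally fewer multiply-accumulates per sample.
    """
    out = []
    for i in range(len(signal)):
        acc, norm = 0.0, 0.0
        for k in range(-radius, radius + 1, step):   # subsampled taps
            j = min(max(i + k, 0), len(signal) - 1)  # clamp at borders
            w = math.exp(-(k * k) / (2.0 * sigma * sigma))
            acc += w * signal[j]
            norm += w
        out.append(acc / norm)  # renormalise over the taps actually used
    return out
```

Because the weights are renormalised, a constant signal passes through unchanged at any subsampling step.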
2

Nair, Pravin, and Kunal Narayan Chaudhury. "Fast High-Dimensional Kernel Filtering." IEEE Signal Processing Letters 26, no. 2 (February 2019): 377–81. http://dx.doi.org/10.1109/lsp.2019.2891879.

3

Douma, Huub, David Yingst, Ivan Vasconcelos, and Jeroen Tromp. "On the connection between artifact filtering in reverse-time migration and adjoint tomography." GEOPHYSICS 75, no. 6 (November 2010): S219–S223. http://dx.doi.org/10.1190/1.3505124.

Abstract:
Finite-frequency sensitivity kernels in seismic tomography define the volumes inside the earth that influence seismic waves as they traverse through it. It has recently been numerically observed that an image obtained using the impedance kernel is much less contaminated by low-frequency artifacts due to the presence of sharp wave-speed contrasts in the background model, than is an image obtained using reverse-time migration. In practical reverse-time migration, these artifacts are routinely heuristically dampened by Laplacian filtering of the image. Here we show analytically that, for an isotropic acoustic medium with constant density, away from sources and receivers and in a smooth background medium, Laplacian image filtering is identical to imaging with the impedance kernel. Therefore, when imaging is pushed toward using background models with sharp wave-speed contrasts, the impedance kernel image is less prone to develop low-frequency artifacts than is the reverse-time migration image, due to the implicit action of the Laplacian that amplifies the higher-frequency reflectors relative to the low-frequency artifacts. Thus, the heuristic Laplacian filtering commonly used in practical reverse-time migration is fundamentally rooted in adjoint tomography and, in particular, closely connected to the impedance kernel.
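The Laplacian filtering referred to above is the application of a discrete Laplacian to the migrated image; as a high-pass operation it suppresses smooth low-frequency artifacts relative to sharp reflectors. A minimal sketch of the standard 5-point stencil (our own illustrative code, not the paper's implementation):

```python
def laplacian(img):
    """5-point discrete Laplacian of a 2-D grid (list of lists).

    High-pass behaviour: constant regions map to 0, so smooth
    low-frequency artifacts are suppressed relative to sharp
    reflectors. Borders are left at zero for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                         + img[y][x + 1] - 4.0 * img[y][x])
    return out
```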
4

Huang, Di, Xishan Zhang, Rui Zhang, Tian Zhi, Deyuan He, Jiaming Guo, Chang Liu, et al. "DWM: A Decomposable Winograd Method for Convolution Acceleration." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4174–81. http://dx.doi.org/10.1609/aaai.v34i04.5838.

Abstract:
Winograd's minimal filtering algorithm has been widely used in Convolutional Neural Networks (CNNs) to reduce the number of multiplications for faster processing. However, it is only effective on convolutions with a 3x3 kernel and a stride of 1, because it suffers from significantly increased FLOPs and numerical accuracy problems for kernel sizes larger than 3x3, and it fails on convolutions with a stride larger than 1. In this paper, we propose a novel Decomposable Winograd Method (DWM), which extends the original Winograd's minimal filtering algorithm to a wide range of general convolutions. DWM decomposes kernels with a large size or a large stride into several small kernels with a stride of 1, to which the Winograd method can then be applied, so that DWM reduces the number of multiplications while keeping the numerical accuracy. It enables the fast exploration of larger kernel sizes and larger stride values in CNNs for high performance and accuracy, and even the potential for new CNNs. Compared with the original Winograd algorithm, the proposed DWM is able to support all kinds of convolutions with a speedup of ∼2, without affecting the numerical accuracy.
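The building block that DWM reduces general convolutions to is the classic stride-1 Winograd transform. As a concrete illustration, F(2,3) computes two outputs of a 3-tap filter with 4 multiplications instead of 6 (this standard transform is shown for reference; the decomposition scheme itself is the paper's contribution):

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap correlation of the
    4-sample tile d with filter g, using 4 multiplications instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2.0
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2.0
    m4 = (d1 - d3) * g2
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_f23(d, g):
    """Reference: direct sliding correlation (6 multiplications)."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]
```

The filter-side factors `(g0 + g1 + g2) / 2` and `(g0 - g1 + g2) / 2` can be precomputed once per kernel, which is where the savings come from in practice.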
5

Tang, Yijie, Guobing Qian, Wenqi Wu, and Ying-Ren Chien. "An Efficient Filtering Algorithm against Impulse Noise in Communication Systems." 網際網路技術學刊 24, no. 2 (March 2023): 357–62. http://dx.doi.org/10.53106/160792642023032402014.

Abstract:
The kernel adaptive filter (KAF), which processes data in the reproducing kernel Hilbert space (RKHS), can improve the performance of conventional adaptive filters in nonlinear systems. However, the presence of impulse noise can seriously degrade the performance of KAF. In this paper, we propose a kernel modified-sign least-mean-square (KMSLMS) algorithm to mitigate the impact of impulse noise in communication systems. Moreover, we apply the nearest-instance-centroid estimation (NICE) algorithm to reduce the computational complexity of our KMSLMS algorithm, yielding the NICE-KMSLMS algorithm. Finally, computer simulations were used to evaluate the effectiveness of our proposed method. Compared with the conventional kernel least-mean-square (KLMS) algorithm, our proposed method can improve the testing mean-squared error (MSE) by 2.32 dB and 7.39 dB for the nonlinear channel equalization and Mackey-Glass chaotic time series prediction problems, respectively. Furthermore, the testing MSE degradation caused by combining the NICE algorithm with our KMSLMS algorithm is negligible, while saving about 55% of the computational cost in terms of the required mean size.
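To illustrate the robustness mechanism, here is a minimal kernel LMS sketch in which the error is clipped to its sign before the update, which bounds the step taken on impulse-corrupted samples. This is our own simplified stand-in: the exact KMSLMS update and the NICE dictionary partitioning are specified in the paper, not here.

```python
import math

def gauss_k(x, y, sigma=1.0):
    """Gaussian kernel between two scalar inputs."""
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

class SignKLMS:
    """Kernel LMS with a sign-clipped error update (illustrative sketch).

    Clipping the error to its sign bounds the step taken on samples hit
    by impulse noise, the robustness idea behind sign-type kernel LMS
    algorithms (the paper's modified-sign update may differ in detail).
    """
    def __init__(self, eta=0.25, sigma=1.0):
        self.eta, self.sigma = eta, sigma
        self.centers, self.weights = [], []

    def predict(self, x):
        return sum(w * gauss_k(c, x, self.sigma)
                   for c, w in zip(self.centers, self.weights))

    def update(self, x, d):
        e = d - self.predict(x)
        s = (e > 0) - (e < 0)            # sign of the error: -1, 0, or 1
        self.centers.append(x)           # each sample adds a kernel center
        self.weights.append(self.eta * s)
        return e
```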
6

Cheng, Sheng-Wei, Yi-Ting Lin, and Yan-Tsung Peng. "A Fast Two-Stage Bilateral Filter Using Constant Time O(1) Histogram Generation." Sensors 22, no. 3 (January 25, 2022): 926. http://dx.doi.org/10.3390/s22030926.

Abstract:
Bilateral Filtering (BF) is an effective edge-preserving smoothing technique in image processing. However, an inherent problem of BF for image denoising is that it is challenging to differentiate image noise and details with the range kernel, thus often preserving both noise and edges in denoising. This letter proposes a novel Dual-Histogram BF (DHBF) method that exploits an edge-preserving noise-reduced guidance image to compute the range kernel, removing isolated noisy pixels for better denoising results. Furthermore, we approximate the spatial kernel using mean filtering based on column histogram construction to achieve constant-time filtering regardless of the kernel radius’ size and achieve better smoothing. Experimental results on multiple benchmark datasets for denoising show that the proposed DHBF outperforms other state-of-the-art BF methods.
7

Liu, Ning, and Thomas Schumacher. "Improved Denoising of Structural Vibration Data Employing Bilateral Filtering." Sensors 20, no. 5 (March 5, 2020): 1423. http://dx.doi.org/10.3390/s20051423.

Abstract:
With the continuous advancement of data acquisition and signal processing, sensors, and wireless communication, copious research work has been done using vibration response signals for structural damage detection. However, in actual projects, vibration signals are often subject to noise interference during acquisition and transmission, thereby reducing the accuracy of damage identification. In order to effectively remove the noise interference, bilateral filtering, a filtering method commonly used in the field of image processing for improving data signal-to-noise ratio was introduced. Based on the Gaussian filter, the method constructs a bilateral filtering kernel function by multiplying the spatial proximity Gaussian kernel function and the numerical similarity Gaussian kernel function and replaces the current data with the data obtained by weighting the neighborhood data, thereby implementing filtering. By processing the simulated data and experimental data, introducing a time-frequency analysis method and a method for calculating the time-frequency spectrum energy, the denoising abilities of median filtering, wavelet denoising and bilateral filtering were compared. The results show that the bilateral filtering method can better preserve the details of the effective signal while suppressing the noise interference and effectively improve the data quality for structural damage detection. The effectiveness and feasibility of the bilateral filtering method applied to the noise suppression of vibration signals is verified.
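The kernel construction described above, a spatial-proximity Gaussian multiplied by a value-similarity Gaussian, can be sketched directly for a 1-D signal. Names, parameters, and border handling are our own illustrative choices:

```python
import math

def bilateral_1d(x, radius=2, sigma_s=1.0, sigma_r=0.5):
    """Bilateral filter for a 1-D signal: each sample is replaced by a
    neighbourhood average weighted by a spatial Gaussian (proximity)
    times a range Gaussian (value similarity), so step edges survive
    while small fluctuations are smoothed."""
    out = []
    for i in range(len(x)):
        acc, norm = 0.0, 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), len(x) - 1)   # clamp at borders
            w = (math.exp(-(k * k) / (2 * sigma_s ** 2))
                 * math.exp(-((x[j] - x[i]) ** 2) / (2 * sigma_r ** 2)))
            acc += w * x[j]
            norm += w
        out.append(acc / norm)
    return out
```

With a tight range sigma, samples across an edge receive near-zero weight, which is exactly the edge-preserving behaviour the abstract describes.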
8

CHEN, Xiao-li, and Pei-yu LIU. "Word sequence kernel applied in spam-filtering." Journal of Computer Applications 31, no. 3 (May 18, 2011): 698–701. http://dx.doi.org/10.3724/sp.j.1087.2011.00698.

9

Nan, Shanghan, and Guobing Qian. "Univariate kernel sums correntropy for adaptive filtering." Applied Acoustics 184 (December 2021): 108316. http://dx.doi.org/10.1016/j.apacoust.2021.108316.

10

Sun, Zhonggui, Bo Han, Jie Li, Jin Zhang, and Xinbo Gao. "Weighted Guided Image Filtering With Steering Kernel." IEEE Transactions on Image Processing 29 (2020): 500–508. http://dx.doi.org/10.1109/tip.2019.2928631.


Dissertations / Theses on the topic "Kernel filtering"

1

Sun, Xinyuan. "Kernel Methods for Collaborative Filtering." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/135.

Abstract:
The goal of this thesis is to extend kernel methods to matrix factorization (MF) for collaborative filtering (CF). In the current literature, MF methods usually assume that the correlated data are distributed on a linear hyperplane, which is not always the case. The best-known member of the kernel-method family is the support vector machine (SVM), applied to linearly non-separable data. In this thesis, we apply kernel methods to MF, embedding the data into a possibly higher-dimensional space and conducting factorization in that space. To improve kernelized matrix factorization, we apply multi-kernel learning methods to select optimal kernel functions from the candidates and introduce L2-norm regularization on the weight-learning process. In our empirical study, we conduct experiments on three real-world datasets. The results suggest that the proposed method can improve prediction accuracy, surpassing state-of-the-art CF methods.
2

Kabbara, Jad. "Kernel adaptive filtering algorithms with improved tracking ability." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123272.

Abstract:
In recent years, there has been an increasing interest in kernel methods in areas such as machine learning and signal processing as these methods show strong performance in classification and regression problems. Interesting "kernelized" extensions of many well-known algorithms in artificial intelligence and signal processing have been presented, particularly, kernel versions of the popular online recursive least squares (RLS) adaptive algorithm, namely kernel RLS (KRLS). These algorithms have been receiving significant attention over the past decade in statistical estimation problems, among which those problems involving tracking time-varying systems. KRLS algorithms obtain a non-linear least squares (LS) regressor as a linear combination of kernel functions evaluated at the elements of a carefully chosen subset, called a dictionary, of the received input vectors. As such, the number of coefficients in that linear combination, i.e., the weights, is equal to the size of the dictionary. This coupling between the number of weights and the dictionary size introduces a trade-off. On one hand, a large dictionary would accurately capture the dynamics of the input-output relationship over time. On the other, it has a detrimental effect on the algorithm's ability to track changes in that relationship because having to adjust a large number of weights can significantly slow down adaptation. In this thesis, we present a new KRLS algorithm designed specifically for the tracking of time-varying systems. The key idea behind the proposed algorithm is to break the dependency of the number of weights on the dictionary size. 
In the proposed method, the number of weights K is fixed and independent of the dictionary size. In particular, we use a novel hybrid approach for the construction of the dictionary that employs the so-called surprise criterion for admitting data samples along with a simple pruning method ("remove-the-oldest") that imposes a hard limit on the dictionary size. Then, we propose to construct a K-sparse LS regressor tracking the relationship of the most recent training input-output pairs using the K dictionary elements that provide the best approximation of the output values. Identifying those dictionary elements is a combinatorial optimization problem with a prohibitive computational complexity. To overcome this, we extend the Subspace Pursuit (SP) algorithm, which, in essence, is a low-complexity method for obtaining LS solutions with a pre-specified sparsity level, to non-linear regression problems and introduce a kernel version of SP, which we call Kernel SP (KSP). The standard KRLS is used to recursively update the weights until a new dictionary element selection is triggered by the admission of a new input vector to the dictionary. Simulations show that the proposed algorithm outperforms existing KRLS-type algorithms in tracking time-varying systems and highly chaotic time series.
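The "remove-the-oldest" budget idea can be illustrated on a much simpler kernel LMS filter: keeping the dictionary in a fixed-length queue decouples the number of stored weights from the stream length. This sketch deliberately omits the thesis's surprise-based admission test and the KSP element selection, and all names are our own:

```python
import math
from collections import deque

class BudgetKLMS:
    """Kernel LMS with a hard dictionary budget and remove-the-oldest
    pruning: a simplified stand-in for the thesis's hybrid dictionary
    (which additionally uses a surprise-based admission test)."""
    def __init__(self, budget=10, eta=0.25, sigma=1.0):
        self.eta, self.sigma = eta, sigma
        self.dict = deque(maxlen=budget)   # (center, weight) pairs

    def predict(self, x):
        return sum(w * math.exp(-((c - x) ** 2) / (2 * self.sigma ** 2))
                   for c, w in self.dict)

    def update(self, x, d):
        e = d - self.predict(x)
        self.dict.append((x, self.eta * e))  # oldest pair drops out
        return e
```

Because `deque(maxlen=...)` discards the oldest entry on overflow, the number of weights stays bounded no matter how many samples arrive.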
3

Bilal, Tahir. "Content Based Packet Filtering In Linux Kernel Using Deterministic Finite Automata." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613710/index.pdf.

Abstract:
In this thesis, we present a content-based packet filtering architecture in Linux using deterministic finite automata and the iptables framework. New-generation firewalls and intrusion detection systems not only filter or inspect network packets according to their header fields but also take into account the content of the payload. These systems use a set of signatures, in the form of regular expressions or plain strings, to scan network packets. This scanning phase is a CPU-intensive task which may degrade network performance. Currently, the Linux kernel firewall scans network packets separately for each signature in the signature set provided by the user. This approach constitutes a considerable bottleneck to network performance. We implement a content-based packet filtering architecture and a multiple-string-matching extension for the Linux kernel firewall that matches all signatures at once, and show that we are able to filter network traffic while consuming constant bandwidth regardless of the number of signatures. Furthermore, we show that we can do packet filtering at multi-gigabit rates.
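Matching all signatures in one pass over the payload is what an Aho-Corasick automaton, a DFA built from the signature set, provides. The following is a user-space sketch of that idea, not the thesis's in-kernel iptables implementation:

```python
from collections import deque

def build_ac(patterns):
    """Build an Aho-Corasick automaton (goto, fail, output tables).
    All patterns are then matched in a single pass over the input."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                       # trie construction
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())              # root children: fail = 0
    while q:                                 # BFS to fill failure links
        s = q.popleft()
        for ch, t in goto[s].items():
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f][ch] if ch in goto[f] and goto[f][ch] != t else 0
            out[t] |= out[fail[t]]           # inherit suffix matches
            q.append(t)
    return goto, fail, out

def scan(payload, automaton):
    """Single pass over payload; returns the set of signatures found."""
    goto, fail, out = automaton
    s, hits = 0, set()
    for ch in payload:
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits |= out[s]
    return hits
```

The cost per scanned byte is independent of the number of signatures, which is the property the thesis exploits to get constant-bandwidth filtering.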
4

Polato, Mirko. "Definition and learning of logic-based kernels for categorical data, and application to collaborative filtering." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427260.

Abstract:
The continuous pursuit of better prediction quality has gradually led to the development of increasingly complex machine learning models, e.g., deep neural networks. Despite their great success in many domains, the black-box nature of these models makes them unsuitable for applications in which model understanding is at least as important as prediction accuracy, such as medical applications. On the other hand, more interpretable models, such as decision trees, are in general much less accurate. In this thesis, we try to merge the positive aspects of these two realities by injecting interpretable elements into complex methods. We focus on kernel methods, which have an elegant framework that decouples learning algorithms from data representations. In particular, the first main contribution of this thesis is the proposal of a new family of Boolean kernels, i.e., kernels defined on binary data, with the aim of creating interpretable feature spaces. Assuming binary input vectors, the core idea is to build embedding spaces in which the dimensions represent logical formulas (of a specific form) of the input variables. As a result, the solution of a kernel machine can be represented as a weighted sum of logical propositions, which allows human-readable rules to be extracted from it. Our framework provides a constructive and efficient way to calculate Boolean kernels of different forms (e.g., disjunctive, conjunctive, DNF, CNF). We show that on binary classification tasks over categorical datasets the proposed kernels achieve state-of-the-art performance. We also provide some theoretical properties about the expressiveness of such kernels. The second main contribution consists in the development of a new multiple kernel learning algorithm to automatically learn the best representation (avoiding the validation).
We start from a theoretical result which states that, under mild conditions, any dot-product kernel can be seen as a linear non-negative combination of Boolean conjunctive kernels. From this combination, our MKL algorithm learns non-parametrically the best combination of the conjunctive kernels. The algorithm is designed to optimize the radius-margin ratio of the combined kernel, which has been demonstrated to be an upper bound on the leave-one-out error. An extensive empirical evaluation, on several binary classification tasks, shows that our MKL technique is able to outperform state-of-the-art MKL approaches. A third contribution is the proposal of another kernel family for binary input data, which aims to overcome the limitations of the Boolean kernels. In this case the focus is not exclusively on interpretability, but also on expressivity. With this new framework, which we dubbed the propositional kernel framework, it is possible to build kernel functions able to create feature spaces containing almost any kind of logical proposition. Finally, the last contribution is the application of the Boolean kernels to recommender systems, specifically to top-N recommendation tasks. First, we propose a novel kernel-based collaborative filtering method, and we apply our Boolean kernels on top of it. Empirical results on several collaborative filtering datasets show how less expressive kernels can alleviate the sparsity issue, which is characteristic of this kind of application.
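As a concrete instance of such kernels, the conjunctive Boolean kernel of degree d counts the conjunctions of d variables that are active in both binary vectors, which reduces to a binomial coefficient of the inner product. A minimal sketch under that standard definition (the thesis's framework covers many more forms):

```python
from math import comb

def conjunctive_kernel(x, y, d):
    """Boolean conjunctive kernel of degree d on binary vectors:
    counts the conjunctions of d variables that are true in both x
    and y, i.e. C(<x, y>, d) where <x, y> is the number of shared
    active variables. For d = 1 this is the plain linear kernel."""
    shared = sum(a & b for a, b in zip(x, y))
    return comb(shared, d)
```

Each feature-space dimension corresponds to one logical AND of d input variables, which is what makes the resulting kernel machine's solution readable as a weighted sum of conjunctions.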
5

Fischer, Manfred M., and Peter Stumpner. "Income Distribution Dynamics and Cross-Region Convergence in Europe. Spatial filtering and novel stochastic kernel representations." WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/3969/1/SSRN%2Did981148.pdf.

Abstract:
This paper suggests an empirical framework for analysing income distribution dynamics and cross-region convergence in the European Union of 27 member states, 1995–2003. The framework lies in the research tradition that allows the state income space to be continuous, puts emphasis on both shape and intra-distribution dynamics, and uses stochastic kernels for studying transition dynamics and implied long-run behaviour. In this paper, stochastic kernels are described by conditional density functions, estimated by a product kernel estimator of conditional density and represented by means of novel visualisation tools. The technique of spatial filtering is used to account for spatial effects, in order to avoid misguided inferences and interpretations caused by the presence of spatial autocorrelation in the income distributions. The results reveal a slow catching-up of the poorest regions and a process of polarisation, with a small group of very rich regions shifting away from the rest of the cross-section. This is well evidenced by both the unfiltered and the filtered ergodic density views. Differences exist in detail, and these emphasise the importance of properly dealing with the spatial autocorrelation problem. (authors' abstract)
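The stochastic-kernel machinery can be illustrated by a product-kernel estimate of a conditional density f(y | x), the object used above to describe transition dynamics. A minimal sketch with Gaussian kernels and hand-picked bandwidths (the paper's estimator and visualisation tools are richer):

```python
import math

def gauss(u, h):
    """Gaussian kernel with bandwidth h."""
    return math.exp(-(u * u) / (2 * h * h)) / (h * math.sqrt(2 * math.pi))

def cond_density(y, x, data, hx=0.5, hy=0.5):
    """Product-kernel estimate of the conditional density f(y | x)
    from observed (x_i, y_i) pairs: the ratio of a joint kernel
    density estimate to a marginal one."""
    num = sum(gauss(x - xi, hx) * gauss(y - yi, hy) for xi, yi in data)
    den = sum(gauss(x - xi, hx) for xi, yi in data)
    return num / den
```

For income dynamics, x would be a region's (spatially filtered) relative income at the start of a period and y its income at the end; the estimated density describes where regions starting at x tend to move.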
6

Mahfouz, Sandy. "Kernel-based machine learning for tracking and environmental monitoring in wireless sensor networks." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0025/document.

Abstract:
This thesis focuses on the problems of localization and gas field monitoring using wireless sensor networks. First, we focus on the geolocalization of sensors and target tracking. Using the powers of the signals exchanged between sensors, we propose a localization method combining radio-location fingerprinting and kernel methods from statistical machine learning. Based on this localization method, we develop a target tracking method that enhances the estimated position of the target by combining it with acceleration information using the Kalman filter. We also provide a semi-parametric model that estimates the distances separating sensors based on the powers of the signals exchanged between them. This semi-parametric model is a combination of the well-known log-distance propagation model with a non-linear fluctuation term estimated within the framework of kernel methods. The target's position is estimated by incorporating acceleration information into the distances separating the target from the sensors, using either the Kalman filter or the particle filter. In another context, we study gas diffusion in wireless sensor networks, also using machine learning. We propose a method that allows the detection of multiple gas diffusions based on concentration measurements regularly collected from the studied region. The method then estimates the parameters of the multiple gas sources, including the sources' locations and their release rates.
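The fingerprinting-plus-kernel-methods idea can be sketched as kernel-weighted regression over a survey database: an observed received-signal-strength (RSS) vector is compared with stored fingerprints, and the surveyed positions are averaged with kernel weights. All names and parameters here are illustrative assumptions, not the thesis's algorithm:

```python
import math

def rbf(a, b, sigma=4.0):
    """Gaussian similarity between two RSS vectors (in dB)."""
    d2 = sum((u - v) ** 2 for u, v in zip(a, b))
    return math.exp(-d2 / (2 * sigma ** 2))

def locate(rss, fingerprints, sigma=4.0):
    """Kernel-regression fingerprinting sketch: estimate a position as
    the kernel-weighted average of surveyed positions, weighting each
    by the similarity of its stored RSS vector to the observed one.

    fingerprints: list of (rss_vector, (x, y)) survey entries."""
    wsum = xsum = ysum = 0.0
    for fp_rss, (px, py) in fingerprints:
        w = rbf(rss, fp_rss, sigma)
        wsum += w
        xsum += w * px
        ysum += w * py
    return xsum / wsum, ysum / wsum
```

A tracking layer, as in the thesis, would then feed these raw position estimates into a Kalman or particle filter together with acceleration data.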
APA, Harvard, Vancouver, ISO, and other styles
7

Vaerenbergh, Steven Van. "Kernel Methods for Nonlinear Identification, Equalization and Separation of Signals." Doctoral thesis, Universidad de Cantabria, 2010. http://hdl.handle.net/10803/10673.

Full text
Abstract:
In the last decade, kernel methods have become established techniques to perform nonlinear signal processing. Thanks to their foundation in the solid mathematical framework of reproducing kernel Hilbert spaces (RKHS), kernel methods yield convex optimization problems. In addition, they are universal nonlinear approximators and require only moderate computational complexity. These properties make them an attractive alternative to traditional nonlinear techniques such as Volterra series, polynomial filters and neural networks. This work aims to study the application of kernel methods to resolve nonlinear problems in signal processing and communications. Specifically, the problems treated in this thesis consist of the identification and equalization of nonlinear systems, both in supervised and blind scenarios, kernel adaptive filtering and nonlinear blind source separation. In a first contribution, a framework for identification and equalization of nonlinear Wiener and Hammerstein systems is designed, based on kernel canonical correlation analysis (KCCA). As a result of this study, various other related techniques are proposed, including two kernel recursive least squares (KRLS) algorithms with fixed memory size, and a KCCA-based blind equalization technique for Wiener systems that uses oversampling. The second part of this thesis treats two nonlinear blind decoding problems of sparse data, posed under conditions that do not permit the application of traditional clustering techniques. For these problems, which include the blind decoding of fast time-varying MIMO channels, a set of algorithms based on spectral clustering is designed. The effectiveness of the proposed techniques is demonstrated through various simulations.
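The fixed-memory flavor of kernel least squares mentioned in this abstract can be approximated, for illustration, by re-solving a regularized kernel system over a sliding window of the most recent samples. This is a sketch of the general idea only, not Van Vaerenbergh's KRLS algorithms; the class name, the window re-solve strategy, and all parameters are assumptions.

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets X and Y."""
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * sigma**2))

class SlidingWindowKRLS:
    """Fixed-memory kernel least squares: keep only the last m samples
    and re-solve the regularized kernel system on that window."""
    def __init__(self, m=50, sigma=1.0, lam=1e-2):
        self.m, self.sigma, self.lam = m, sigma, lam
        self.X, self.y = [], []
        self.alpha = None

    def update(self, x, d):
        self.X.append(np.asarray(x, float))
        self.y.append(float(d))
        if len(self.X) > self.m:          # fixed memory: drop the oldest sample
            self.X.pop(0)
            self.y.pop(0)
        X = np.stack(self.X)
        y = np.array(self.y)
        K = gauss_kernel(X, X, self.sigma)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(y)), y)

    def predict(self, x):
        X = np.stack(self.X)
        k = gauss_kernel(np.asarray(x, float)[None, :], X, self.sigma)[0]
        return float(k @ self.alpha)
```

A true KRLS recursion updates the inverse kernel matrix incrementally instead of re-solving; the window here just makes the fixed-memory behavior visible in a few lines.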
APA, Harvard, Vancouver, ISO, and other styles
8

Suutala, J. (Jaakko). "Learning discriminative models from structured multi-sensor data for human context recognition." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789514298493.

Full text
Abstract:
In this work, statistical machine learning and pattern recognition methods were developed and applied to sensor-based human context recognition. More precisely, we concentrated on an effective discriminative learning framework, where the input-output mapping is learned directly from a labeled dataset. Non-parametric discriminative classification and regression models based on kernel methods were applied. They include support vector machines (SVM) and Gaussian processes (GP), which play a central role in modern statistical machine learning. Based on these established models, we propose various extensions for handling structured data that usually arise in real-life applications, for example in the field of context-aware computing. We applied both SVM and GP techniques to handle data with multiple classes in a structured multi-sensor domain. Moreover, a framework for combining data from several sources in this setting was developed using multiple classifiers and fusion rules, with kernel methods as base classifiers. We developed two novel methods for handling sequential input and output data. For sequential time-series data, a novel kernel based on graphical presentation, called a weighted walk-based graph kernel (WWGK), is introduced. For sequential output labels, discriminative temporal smoothing (DTS) is proposed. Again, the proposed algorithms are modular, so different kernel classifiers can be used as base models. Finally, we propose a group of techniques based on Gaussian process regression (GPR) and particle filtering (PF) to learn to track multiple targets. We applied the proposed methodology to three different human-motion-based context recognition applications: person identification, person tracking, and activity recognition, where floor (pressure-sensitive and binary switch) and wearable acceleration sensors are used to measure human motion and gait during walking and other activities. 
Furthermore, we extracted a useful set of application-specific high-level features from raw sensor measurements in the time, frequency, and spatial domains. As a result, we developed practical extensions to kernel-based discriminative learning that handle many kinds of structured data in human context recognition.
APA, Harvard, Vancouver, ISO, and other styles
9

Verzotto, Davide. "Advanced Computational Methods for Massive Biological Sequence Analysis." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3426282.

Full text
Abstract:
With the advent of modern sequencing technologies, massive amounts of biological data, from protein sequences to entire genomes, are becoming increasingly available. This creates a need for the automatic analysis and classification of such a huge collection of data, in order to advance knowledge in the life sciences. Although many research efforts have been made to model this information mathematically, for example by finding patterns and similarities among protein or genome sequences, these approaches often lack structures that address specific biological issues. In this thesis, we present novel computational methods for three fundamental problems in molecular biology: the detection of remote evolutionary relationships among protein sequences, the identification of subtle biological signals in related genome or protein functional sites, and phylogeny reconstruction by means of whole-genome comparison. The main contribution is a systematic analysis of the patterns that affect these tasks, leading to the design of practical and efficient new pattern discovery tools. We introduce two advanced paradigms of pattern discovery and filtering based on the insight that functional and conserved biological motifs, or patterns, should lie in different sites of sequences. This enables space-conscious approaches that avoid counting the same patterns multiple times. The first paradigm, irredundant common motifs, concerns the discovery of patterns common to two sequences that have occurrences not covered by other patterns, where coverage is defined by means of specificity and extension. The second paradigm, underlying motifs, concerns the filtering of patterns, from a given set, that have occurrences not overlapping those of higher-priority patterns, where priority is defined by lexicographic properties of the patterns on the boundary between pattern matching and statistical analysis. 
We develop three practical methods directly based on these advanced paradigms. Experimental results indicate that we are able to identify subtle similarities among biological sequences, using the same type of information only once. In particular, we employ the irredundant common motifs and the statistics based on these patterns to solve the remote protein homology detection problem. Results show that our approach, called Irredundant Class, outperforms the state-of-the-art methods in a challenging benchmark for protein analysis. Afterwards, we establish how to compare and filter a large number of complex motifs (e.g., degenerate motifs) obtained from modern motif discovery tools, in order to identify subtle signals in different biological contexts. In this case we employ the notion of underlying motifs. Tests on large protein families indicate that we drastically reduce the number of motifs that scientists should manually inspect, further highlighting the actual functional motifs. Finally, we combine the two proposed paradigms to allow the comparison of whole genomes, and thus the construction of a novel and practical distance function. With our method, called Unic Subword Approach, we relate to each other the regions of two genome sequences by selecting conserved motifs during evolution. Experimental results show that our approach achieves better performance than other state-of-the-art methods in the whole-genome phylogeny reconstruction of viruses, prokaryotes, and unicellular eukaryotes, further identifying the major clades of these organisms.
APA, Harvard, Vancouver, ISO, and other styles
10

Hsiao, Ming-Yuen, and 蕭閔元. "Indoor Positioning With Distributed Kernel-Based Bayesian Filtering." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/3328rw.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
101 (2012)
In wireless sensor networks, several localization algorithms have been proposed for indoor positioning systems. However, the computational complexity of these schemes is high, which may make them unsuitable for implementation on sensor nodes. For example, the limited sensor capabilities force the particle filter to run with a very small set of samples, which results in high positioning errors. Hence, a novel sampling scheme may be required to improve the estimation accuracy of the particle filter method. In this thesis, support vector regression (SVR) is employed to suppress the estimation error, which enhances the reliability of the positioning system. Accordingly, we propose a Kernel-Based Particle Filtering (KBPF) algorithm, which consists of the following three steps: (1) Initial SVR Estimation; (2) Kernel-based Re-weighting; and (3) Estimation Refinement. The experimental results show that the proposed scheme can achieve good indoor positioning accuracy with a small number of samples, and that the performance of the proposed KBPF system using three beacons is comparable with that of the KLF system using four beacons.
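A kernel-based re-weighting step of the kind this abstract names can be illustrated by weighting position particles with a Gaussian kernel on the mismatch between each particle's beacon distances and the measured distances. This is a generic sketch, not the thesis's KBPF: the function name, the residual definition, and the kernel width are assumptions.

```python
import numpy as np

def kernel_reweight_estimate(particles, beacons, dists, sigma=1.0):
    """Weight candidate positions by a Gaussian kernel on the beacon-distance
    mismatch and return the weighted mean as the position estimate.

    particles: (n, 2) candidate positions
    beacons:   (b, 2) beacon positions
    dists:     (b,)   measured target-to-beacon distances
    """
    # each particle's distance to each beacon, shape (n, b)
    d = np.linalg.norm(particles[:, None, :] - beacons[None, :, :], axis=-1)
    resid = ((d - dists[None, :]) ** 2).sum(axis=1)   # squared mismatch
    w = np.exp(-resid / (2 * sigma**2))               # Gaussian kernel weights
    w /= w.sum()                                      # normalize to a distribution
    return w @ particles, w
```

Particles whose predicted beacon distances disagree with the measurements get exponentially small weight, so even a small particle set concentrates around the plausible position.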
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Kernel filtering"

1

Príncipe, J. C. Kernel adaptive filtering: A comprehensive introduction. Hoboken, N.J: Wiley, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Principe, José, Simon Haykin, and Weifeng Liu. Kernel Adaptive Filtering. Wiley & Sons, Incorporated, John, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Haykin, Simon, José C. Principe, and Weifeng Liu. Kernel Adaptive Filtering: A Comprehensive Introduction. Wiley & Sons, Incorporated, John, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Kernel filtering"

1

Ozeki, Kazuhiko. "Kernel Affine Projection Algorithm." In Theory of Affine Projection Algorithms for Adaptive Filtering, 165–85. Tokyo: Springer Japan, 2015. http://dx.doi.org/10.1007/978-4-431-55738-8_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Badong, Lin Li, Weifeng Liu, and José C. Príncipe. "Nonlinear Adaptive Filtering in Kernel Spaces." In Springer Handbook of Bio-/Neuroinformatics, 715–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-30574-0_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

García-Vega, S., A. M. Álvarez-Meza, and Germán Castellanos-Domínguez. "Estimation of Cyclostationary Codebooks for Kernel Adaptive Filtering." In Advanced Information Systems Engineering, 351–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-319-12568-8_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Tao, and Wu Huang. "Kernel Relative-prototype Spectral Filtering for Few-Shot Learning." In Lecture Notes in Computer Science, 541–57. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20044-1_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kang, Dong-Ho, and Rhee Man Kil. "Nonlinear Filtering Based on a Network with Gaussian Kernel Functions." In Neural Information Processing, 53–60. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26555-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Scardapane, Simone, Danilo Comminiello, Michele Scarpiniti, Raffaele Parisi, and Aurelio Uncini. "PM10 Forecasting Using Kernel Adaptive Filtering: An Italian Case Study." In Neural Nets and Surroundings, 93–100. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35467-0_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Blandon, J. S., C. K. Valencia, A. Alvarez, J. Echeverry, M. A. Alvarez, and A. Orozco. "Shape Classification Using Hilbert Space Embeddings and Kernel Adaptive Filtering." In Lecture Notes in Computer Science, 245–51. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93000-8_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Jing, Dong Hu, Biqiu Zhang, and Yuwei Pang. "Hierarchical Convolution Feature for Target Tracking with Kernel-Correlation Filtering." In Lecture Notes in Computer Science, 297–306. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-34120-6_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Barash, Danny, and Dorin Comaniciu. "A Common Viewpoint on Broad Kernel Filtering and Nonlinear Diffusion." In Scale Space Methods in Computer Vision, 683–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44935-3_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Coufal, David. "On Persistence of Convergence of Kernel Density Estimates in Particle Filtering." In Advances in Intelligent Systems and Computing, 339–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-18058-4_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Kernel filtering"

1

Wada, Tomoya, and Toshihisa Tanaka. "Doubly adaptive kernel filtering." In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Badong, Nanning Zheng, and Jose C. Principe. "Survival kernel with application to kernel adaptive filtering." In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6706866.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

De Luca, Patrick Medeiros, and Wemerson Delcio Parreira. "Simulação do comportamento estocástico do algoritmo KLMS com diferentes kernels." In Computer on the Beach. Itajaí: Universidade do Vale do Itajaí, 2020. http://dx.doi.org/10.14210/cotb.v11n1.p004-006.

Full text
Abstract:
The kernel least-mean-square (KLMS) algorithm is a popular algorithm in nonlinear adaptive filtering due to its simplicity and robustness. In kernel adaptive filtering, the statistics of the input to the linear filter depend on the kernel and its parameters. Moreover, practical implementations of system estimation require a finite non-linearity model order. In order to obtain finite-order models, many kernelized adaptive filters use a dictionary of kernel functions. Dictionary size also depends on the kernel and its parameters. Therefore, with different kernels KLMS may differ in the quality of the nonlinear system estimate, the time of convergence, and the accuracy. In order to analyze the performance of KLMS with different kernels, this paper proposes the use of Monte Carlo simulation of both the steady-state and the transient behavior of the KLMS algorithm using different types of kernel functions and Gaussian inputs.
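KLMS itself is compact enough to sketch: the filter output is a kernel expansion over past inputs, and each new sample is appended to the dictionary with coefficient eta times the prediction error. A minimal Gaussian-kernel version (the function name and parameter values are illustrative; the simulations in the paper compare several kernels, not just this one):

```python
import numpy as np

def klms(X, d, eta=0.5, sigma=1.0):
    """Kernel LMS with a Gaussian kernel; returns the a-priori errors."""
    centers, coeffs, errors = [], [], []
    for x, dn in zip(X, d):
        x = np.asarray(x, float)
        if centers:
            C = np.stack(centers)
            # filter output: kernel expansion over the stored dictionary
            k = np.exp(-((C - x) ** 2).sum(-1) / (2 * sigma**2))
            y = float(np.array(coeffs) @ k)
        else:
            y = 0.0
        e = dn - y                      # a-priori error
        centers.append(x)               # dictionary grows by one unit per sample;
        coeffs.append(eta * e)          # sparsification would bound it in practice
        errors.append(e)
    return np.array(errors)
```

Running this with different kernel widths `sigma` reproduces, in miniature, the kind of kernel-dependent convergence behavior the paper studies by Monte Carlo simulation.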
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Jin, Hua Qu, Badong Chen, and Wentao Ma. "Kernel robust mixed-norm adaptive filtering." In 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014. http://dx.doi.org/10.1109/ijcnn.2014.6889889.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Xia, Zhonghang, Wenke Zhang, Manghui Tu, and I.-Ling Yen. "Kernel-based Approaches for Collaborative Filtering." In 2010 International Conference on Machine Learning and Applications (ICMLA). IEEE, 2010. http://dx.doi.org/10.1109/icmla.2010.41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Kan, Badong Chen, and Jose C. Principe. "Kernel adaptive filtering with confidence intervals." In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6707045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rawat, Paresh, and Manish D. Sawale. "Gaussian kernel filtering for video stabilization." In 2017 International Conference on Recent Innovations in Signal processing and Embedded Systems (RISE). IEEE, 2017. http://dx.doi.org/10.1109/rise.2017.8378142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fernandez-Berni, J., R. Carmona-Galan, and A. Rodriguez-Vazquez. "Image filtering by reduced kernels exploiting kernel structure and focal-plane averaging." In 2011 European Conference on Circuit Theory and Design (ECCTD). IEEE, 2011. http://dx.doi.org/10.1109/ecctd.2011.6043324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Takeuchi, Airi, Masahiro Yukawa, and Klaus-Robert Muller. "A better metric in kernel adaptive filtering." In 2016 24th European Signal Processing Conference (EUSIPCO). IEEE, 2016. http://dx.doi.org/10.1109/eusipco.2016.7760514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Engholm, Rasmus, Henrik Karstoft, and Eva B. V. Jensen. "Adaptive kernel filtering used in video processing." In IS&T/SPIE Electronic Imaging, edited by Majid Rabbani and Robert L. Stevenson. SPIE, 2009. http://dx.doi.org/10.1117/12.808155.

Full text
APA, Harvard, Vancouver, ISO, and other styles