Academic literature on the topic 'Kernel-based model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kernel-based model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Kernel-based model"

1

Nishiyama, Yu, Motonobu Kanagawa, Arthur Gretton, and Kenji Fukumizu. "Model-based kernel sum rule: kernel Bayesian inference with probabilistic models." Machine Learning 109, no. 5 (January 2, 2020): 939–72. http://dx.doi.org/10.1007/s10994-019-05852-9.

Full text
Abstract:
Kernel Bayesian inference is a principled approach to nonparametric inference in probabilistic graphical models, where probabilistic relationships between variables are learned from data in a nonparametric manner. Various algorithms of kernel Bayesian inference have been developed by combining kernelized basic probabilistic operations such as the kernel sum rule and kernel Bayes’ rule. However, the current framework is fully nonparametric, and it does not allow a user to flexibly combine nonparametric and model-based inferences. This is inefficient when there are good probabilistic models (or simulation models) available for some parts of a graphical model; this is in particular true in scientific fields where “models” are the central topic of study. Our contribution in this paper is to introduce a novel approach, termed the model-based kernel sum rule (Mb-KSR), to combine a probabilistic model and kernel Bayesian inference. By combining the Mb-KSR with the existing kernelized probabilistic rules, one can develop various algorithms for hybrid (i.e., nonparametric and model-based) inferences. As an illustrative example, we consider Bayesian filtering in a state space model, where typically there exists an accurate probabilistic model for the state transition process. We propose a novel filtering method that combines model-based inference for the state transition process and data-driven, nonparametric inference for the observation generating process. We empirically validate our approach with synthetic and real-data experiments, the latter being the problem of vision-based mobile robot localization in robotics, which illustrates the effectiveness of the proposed hybrid approach.
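To make the Mb-KSR idea concrete, here is a minimal Python sketch of propagating a kernel mean embedding through a known transition model by sampling; the function names, the random-walk transition, and all parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Gaussian RBF Gram matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def model_based_sum_rule(states, weights, transition_sample, n_draws=10, rng=None):
    """Propagate an embedding m = sum_i w_i k(., x_i) through a known
    transition model p(x' | x) by Monte Carlo: each support point spawns
    n_draws samples, each carrying weight w_i / n_draws."""
    rng = np.random.default_rng(rng)
    new_pts, new_w = [], []
    for x, w in zip(states, weights):
        for _ in range(n_draws):
            new_pts.append(transition_sample(x, rng))
            new_w.append(w / n_draws)
    return np.array(new_pts), np.array(new_w)

# Toy state-transition model: random walk x' = x + N(0, 0.1^2).
pts = np.array([[0.0], [1.0], [2.0]])
w = np.array([0.2, 0.5, 0.3])
step = lambda x, rng: x + rng.normal(scale=0.1, size=x.shape)
pred_pts, pred_w = model_based_sum_rule(pts, w, step)

# Predictive mean embedding evaluated on a grid of test points.
grid = np.linspace(-1.0, 3.0, 5)[:, None]
print(rbf(grid, pred_pts) @ pred_w)
```

In a full hybrid filter, this model-based prediction step would alternate with a data-driven kernel Bayes update for the observation model.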
2

Zong, Xinlu, Chunzhi Wang, and Hui Xu. "Density-based Adaptive Wavelet Kernel SVM Model for P2P Traffic Classification." International Journal of Future Generation Communication and Networking 6, no. 6 (December 31, 2013): 25–36. http://dx.doi.org/10.14257/ijfgcn.2013.6.6.04.

Full text
3

Shim, Jooyong, and Changha Hwang. "Kernel-based orthogonal quantile regression model." Model Assisted Statistics and Applications 12, no. 3 (August 30, 2017): 217–26. http://dx.doi.org/10.3233/mas-170396.

Full text
4

Su, Zhi-gang, Pei-hong Wang, and Zhao-long Song. "Kernel based nonlinear fuzzy regression model." Engineering Applications of Artificial Intelligence 26, no. 2 (February 2013): 724–38. http://dx.doi.org/10.1016/j.engappai.2012.05.009.

Full text
5

Wang, Zhijie, Mohamed Ben Salah, Hong Zhang, and Nilanjan Ray. "Shape based appearance model for kernel tracking." Image and Vision Computing 30, no. 4-5 (May 2012): 332–44. http://dx.doi.org/10.1016/j.imavis.2012.03.003.

Full text
6

Ma, Xin, and Zhi-bin Liu. "The kernel-based nonlinear multivariate grey model." Applied Mathematical Modelling 56 (April 2018): 217–38. http://dx.doi.org/10.1016/j.apm.2017.12.010.

Full text
7

Lingyu, Liang, Wenqi Huang, Zhaojie Dong, Jiguang Zhao, Peng Li, Bingfang Lu, and Xinde Zhu. "Short-term power load forecasting based on combined kernel Gaussian process hybrid model." E3S Web of Conferences 256 (2021): 01009. http://dx.doi.org/10.1051/e3sconf/202125601009.

Full text
Abstract:
As one of the largest energy consumers in the world, our country draws a large proportion of its energy supply from electricity. Under the national policy of energy conservation and emission reduction, it is urgent to realize intelligent distribution and management of electricity through prediction. Because electricity load sequences are complex in nature, traditional models predict them poorly. Gaussian Process Mixing (GPM), a kernel-based machine learning model, offers high predictive accuracy, supports multi-modal prediction, and outputs confidence intervals. However, traditional GPM often uses a single kernel function, and the prediction effect is not optimal. This paper therefore combines a variety of existing kernels to build a new kernel and uses it for load sequence prediction. In the electricity load prediction experiments, the prediction characteristics of the load sequences are first analyzed; predictions are then made based on the optimal hybrid kernel function constructed in GPM and compared with traditional prediction models. The results show that GPM based on the hybrid kernel is superior not only to single-kernel GPM but also to traditional prediction models such as ridge regression, kernel regression, and GP.
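The hybrid-kernel idea is easy to reproduce with off-the-shelf tools. Below is a minimal scikit-learn sketch of a Gaussian process forecaster whose kernel sums a smooth trend component, a daily periodic component, and observation noise; the synthetic series and hyperparameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0.0, 14.0, 200)[:, None]                 # time in days
y = 0.3 * t.ravel() + np.sin(2 * np.pi * t.ravel()) + 0.1 * rng.normal(size=200)

# Hybrid kernel: RBF (trend) + periodic (daily cycle) + white noise.
kernel = (RBF(length_scale=5.0)
          + ExpSineSquared(length_scale=1.0, periodicity=1.0)
          + WhiteKernel(noise_level=0.01))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

mean, std = gp.predict(t, return_std=True)               # forecast + confidence band
```

The `return_std` output is what provides the confidence intervals the abstract highlights as an advantage of kernel-based GP models.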
8

Fan, Yanqin, and Qi Li. "CONSISTENT MODEL SPECIFICATION TESTS." Econometric Theory 16, no. 6 (December 2000): 1016–41. http://dx.doi.org/10.1017/s0266466600166083.

Full text
Abstract:
We point out the close relationship between the integrated conditional moment tests in Bierens (1982, Journal of Econometrics 20, 105–134) and Bierens and Ploberger (1997, Econometrica 65, 1129–1152) with the complex-valued exponential weight function and the kernel-based tests in Härdle and Mammen (1993, Annals of Statistics 21, 1926–1947), Li and Wang (1998, Journal of Econometrics 87, 145–165), and Zheng (1996, Journal of Econometrics 75, 263–289). It is well established that the integrated conditional moment tests of Bierens (1982) and Bierens and Ploberger (1997) are more powerful than kernel-based nonparametric tests against Pitman local alternatives. In this paper we analyze the power properties of the kernel-based tests and the integrated conditional moment tests for a sequence of “singular” local alternatives, and show that the kernel-based tests can be more powerful than the integrated conditional moment tests for these “singular” local alternatives. These results suggest that integrated conditional moment tests and kernel-based tests should be viewed as complements to each other. Results from a simulation study are in agreement with the theoretical results.
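For readers unfamiliar with the kernel-based side of this comparison, here is a simplified sketch in the spirit of Zheng's (1996) statistic: a U-statistic of kernel-weighted residual products from the parametric null fit. The bandwidth choice and the standardization are crude illustrations, not the paper's exact construction:

```python
import numpy as np

def kernel_spec_test(x, resid, h):
    """Simplified kernel-based specification statistic: large positive
    values indicate the residuals remain correlated with x, i.e. the
    parametric model is misspecified."""
    n = len(x)
    u = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    np.fill_diagonal(K, 0.0)                       # drop i == j terms
    ee = np.outer(resid, resid)
    V = (K * ee).sum() / (n * (n - 1) * h)
    s2 = 2.0 * (K**2 * ee**2).sum() / (n * (n - 1) * h)   # rough variance estimate
    return n * np.sqrt(h) * V / np.sqrt(s2)        # compare with N(0, 1) quantiles

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 300)
y = x**2 + 0.3 * rng.normal(size=300)
beta = np.polyfit(x, y, 1)                         # misspecified linear null model
print(kernel_spec_test(x, y - np.polyval(beta, x), h=0.4))
```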
9

Zhai, Yuejing, Zhouzheng Li, and Haizhong Liu. "Multi-Angle Fast Neural Tangent Kernel Classifier." Applied Sciences 12, no. 21 (October 26, 2022): 10876. http://dx.doi.org/10.3390/app122110876.

Full text
Abstract:
Multi-kernel learning methods are an essential class of kernel learning methods. However, most multi-kernel learning methods select only base kernel functions with shallow structures, which are weak for large-scale, uneven data. We propose two types of acceleration models from a multidimensional perspective of the data: a multi-kernel learning method based on the neural tangent kernel (NTK), in which the NTK kernel regressor is shown to be equivalent to an infinitely wide neural network predictor and the NTK with its deep structure is used as the base kernel function to enhance the learning ability of multi-kernel models; and a parallel computing kernel model based on data partitioning techniques. An RBF- and POLY-based multi-kernel model is also proposed. All models use historical memory-based PSO (HMPSO) for efficient search of the parameters within the model. Since the NTK has a multi-layer structure and thus significant computational complexity, using a Monotone Disjunctive Kernel (MDK) to store and train Boolean features in binary achieves a 15–60% training time compression of NTK models on different datasets while obtaining a 1–25% accuracy improvement.
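The infinite-width NTK the abstract refers to has a closed form for simple architectures. Here is a hedged sketch for a two-layer ReLU network with both layers trained (following the standard arc-cosine-kernel expressions; scaling constants can differ by convention), used as a base kernel in ridge regression:

```python
import numpy as np

def relu_ntk(X, Y):
    """NTK of a one-hidden-layer ReLU network in the infinite-width limit:
    Theta(x, y) = (x.y) * E[relu'(wx) relu'(wy)] + E[relu(wx) relu(wy)]."""
    nx = np.linalg.norm(X, axis=1)[:, None]
    ny = np.linalg.norm(Y, axis=1)[None, :]
    cos = np.clip(X @ Y.T / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos)
    k0 = (np.pi - theta) / (2 * np.pi)
    k1 = nx * ny * (np.sin(theta) + (np.pi - theta) * cos) / (2 * np.pi)
    return (X @ Y.T) * k0 + k1

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.sin(X[:, 0])

K = relu_ntk(X, X) + 1e-3 * np.eye(len(X))   # regularized kernel ridge system
alpha = np.linalg.solve(K, y)
y_hat = relu_ntk(X, X) @ alpha               # in-sample NTK-regression fit
```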
10

Segera, Davies, Mwangi Mbuthia, and Abraham Nyete. "Particle Swarm Optimized Hybrid Kernel-Based Multiclass Support Vector Machine for Microarray Cancer Data Analysis." BioMed Research International 2019 (December 16, 2019): 1–11. http://dx.doi.org/10.1155/2019/4085725.

Full text
Abstract:
Determining an optimal decision model is an important but difficult combinatorial task in imbalanced microarray-based cancer classification. Though the multiclass support vector machine (MCSVM) has already made an important contribution in this field, its performance depends solely on three aspects: the penalty factor C, the type of kernel, and its parameters. To improve the performance of this classifier in microarray-based cancer analysis, this paper proposes a PSO-PCA-LGP-MCSVM model that is based on particle swarm optimization (PSO), principal component analysis (PCA), and the multiclass support vector machine (MCSVM). The MCSVM is based on a hybrid kernel, i.e., linear-Gaussian-polynomial (LGP), that combines the advantages of three standard kernels (linear, Gaussian, and polynomial) in a novel manner, where the linear kernel is linearly combined with the Gaussian kernel embedding the polynomial kernel. Further, this paper proves that the LGP kernel satisfies the requirements of a valid kernel. To reveal the effectiveness of our model, several experiments were conducted and the obtained results compared between our model and three single-kernel-based models, namely, PSO-PCA-L-MCSVM (utilizing a linear kernel), PSO-PCA-G-MCSVM (utilizing a Gaussian kernel), and PSO-PCA-P-MCSVM (utilizing a polynomial kernel). For comparison, two binary and two multiclass imbalanced standard microarray datasets were used. Experimental results in terms of three extended assessment metrics (F-score, G-mean, and Accuracy) reveal the superior global feature extraction, prediction, and learning abilities of this model over the three single-kernel-based models.
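A simplified version of a linear-Gaussian-polynomial hybrid can be passed to scikit-learn's SVC as a callable Gram-matrix function. The convex combination below (a sum of valid kernels is itself valid) is an illustrative stand-in for the paper's exact LGP construction, and the fixed weights stand in for the PSO-optimized parameters:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel, polynomial_kernel

def lgp_kernel(X, Y, w=(0.4, 0.4, 0.2), gamma=0.5, degree=2):
    # Convex combination of linear, Gaussian, and polynomial kernels.
    return (w[0] * linear_kernel(X, Y)
            + w[1] * rbf_kernel(X, Y, gamma=gamma)
            + w[2] * polynomial_kernel(X, Y, degree=degree))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)          # toy nonlinear labels

clf = SVC(kernel=lgp_kernel, C=10.0).fit(X, y)   # SVC accepts a Gram callable
print(clf.score(X, y))
```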

Dissertations / Theses on the topic "Kernel-based model"

1

Bose, Aishwarya. "Effective web service discovery using a combination of a semantic model and a data mining technique." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/26425/1/Aishwarya_Bose_Thesis.pdf.

Full text
Abstract:
With the advent of Service Oriented Architecture, Web Services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirement of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user’s interest. By considering the semantic relationships of words used in describing the services as well as the use of input and output parameters can lead to accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements which the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content present in the Web service description language document, the support-based latent semantic kernel is constructed using an innovative concept of binning and merging on the large quantity of text documents covering diverse areas of domain of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an allpair shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase which is the system integration, integrates the results from the preceding two phases by using an original fusion algorithm in the fusion engine. Finally, the recommendation engine which is an integral part of the system integration phase makes the final recommendations including individual and composite Web services to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of the standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web services compositions are obtained by considering 10 to 15 Web services that are found in phase-I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase-I) and the link analysis (phase-II) in a systematic fashion. 
Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
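The latent semantic kernel at the heart of phase I can be sketched with a term-document matrix and truncated SVD; the toy corpus and query below are illustrative assumptions, not the thesis's binning-and-merging construction:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["book a flight and hotel", "weather forecast service",
        "hotel reservation service", "currency conversion service"]
tfidf = TfidfVectorizer().fit(docs)
svd = TruncatedSVD(n_components=2).fit(tfidf.transform(docs))  # latent space

def semantic_sim(a, b):
    # Cosine similarity of two texts in the latent semantic space.
    va = svd.transform(tfidf.transform([a]))
    vb = svd.transform(tfidf.transform([b]))
    return (va @ vb.T).item() / (np.linalg.norm(va) * np.linalg.norm(vb))

print(semantic_sim("hotel service", "hotel reservation service"))
```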
2

Bose, Aishwarya. "Effective web service discovery using a combination of a semantic model and a data mining technique." Queensland University of Technology, 2008. http://eprints.qut.edu.au/26425/.

Full text
3

Zhang, Lin. "Semiparametric Bayesian Kernel Survival Model for Highly Correlated High-Dimensional Data." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/95040.

Full text
Abstract:
We are living in an era in which many mysteries related to science, technologies and design can be answered by "learning" the huge amount of data accumulated over the past few decades. In the processes of those endeavors, highly-correlated high-dimensional data are frequently observed in many areas including predicting shelf life, controlling manufacturing processes, and identifying important pathways related to diseases. We define a "set" as a group of highly-correlated high-dimensional (HCHD) variables that possess a certain practical meaning or control a certain process, and define an "element" as one of the HCHD variables within a certain set. Such an elements-within-a-set structure is very complicated because: (i) the dimensions of elements in different sets can vary dramatically, ranging from two to hundreds or even thousands; (ii) the true relationships, including element-wise associations, set-wise interactions, and element-set interactions, are unknown; and (iii) the sample size (n) is usually much smaller than the dimension of the elements (p). The goal of this dissertation is to provide a systematic way to identify both the set effects and the element effects associated with survival outcomes from heterogeneous populations using Bayesian survival kernel models. By connecting kernel machines with semiparametric Bayesian hierarchical models, the proposed unified model frameworks can identify significant elements as well as sets regardless of mis-specifications of distributions or kernels. The proposed methods can potentially be applied to a vast range of fields to solve real-world problems.
Doctor of Philosophy
4

Garg, Aditie. "Designing Reactive Power Control Rules for Smart Inverters using Machine Learning." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83558.

Full text
Abstract:
Due to increasing penetration of solar power generation, distribution grids are facing a number of challenges. Frequent reverse active power flows can result in rapid fluctuations in voltage magnitudes. However, with the revised IEEE 1547 standard, smart inverters can actively control their reactive power injection to minimize voltage deviations and power losses in the grid. Reactive power control and globally optimal inverter coordination in real-time is computationally and communication-wise demanding, whereas the local Volt-VAR or Watt-VAR control rules are subpar for enhanced grid services. This thesis uses machine learning tools and poses reactive power control as a kernel-based regression task to learn policies and evaluate the reactive power injections in real-time. This novel approach performs inverter coordination through non-linear control policies centrally designed by the operator on a slower timescale using anticipated scenarios for load and generation. In real-time, the inverters feed locally and/or globally collected grid data to the customized control rules. The developed models are highly adjustable to the available computation and communication resources. The developed control scheme is tested on the IEEE 123-bus system and is seen to efficiently minimize losses and regulate voltage within the permissible limits.
Master of Science
5

Kim, Byung-Jun. "Semiparametric and Nonparametric Methods for Complex Data." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99155.

Full text
Abstract:
A variety of complex data has emerged in many research fields such as epidemiology, genomics, and analytical chemistry with the development of science, technologies, and design schemes over the past few decades. For example, in epidemiology, the matched case-crossover study design is used to investigate the association between the clustered binary outcomes of disease and a measurement error in a covariate within a certain period by stratifying subjects' conditions. In genomics, highly correlated and high-dimensional (HCHD) data are used to identify important genes and their interaction effects on diseases. In analytical chemistry, multiple time series data are generated to recognize complex patterns among multiple classes. Due to this great diversity, we encounter three problems in analyzing such complex data in this dissertation. We provide several contributions to semiparametric and nonparametric methods for dealing with the following problems: the first is to propose a method for testing the significance of a functional association under the matched study; the second is to develop a method to simultaneously identify important variables and build a network in HCHD data; the third is to propose a multi-class dynamic model for recognizing patterns in time-trend analysis. For the first topic, we propose a semiparametric omnibus test for the significance of a functional association between the clustered binary outcomes and covariates with measurement error, taking into account the effect modification of matching covariates. We develop a flexible omnibus test that does not require a specific alternative form of the hypothesis. The advantages of our omnibus test are demonstrated through simulation studies and 1-4 bidirectional matched data analyses from an epidemiology study. For the second topic, we propose a joint semiparametric kernel machine network approach that provides a connection between variable selection and network estimation. Our approach is a unified and integrated method that can simultaneously identify important variables and build a network among them. We develop our approach under a semiparametric kernel machine regression framework, which allows for the possibility that each variable might be nonlinear and is likely to interact with the others in a complicated way. We demonstrate our approach using simulation studies and a real application to genetic pathway analysis. Lastly, for the third project, we propose a Bayesian focal-area detection method for a multi-class dynamic model under a Bayesian hierarchical framework. Two-step Bayesian sequential procedures are developed to estimate patterns and detect focal intervals, which can be used for gas chromatography. We demonstrate the performance of our proposed method using a simulation study and a real application to gas chromatography on the Fast Odor Chromatographic Sniffer (FOX) system.
Doctor of Philosophy
A variety of complex data has emerged in many research fields such as epidemiology, genomics, and analytical chemistry with the development of science, technologies, and design schemes over the past few decades. For example, in epidemiology, the matched case-crossover study design is used to investigate the association between the clustered binary outcomes of disease and a measurement error in a covariate within a certain period by stratifying subjects' conditions. In genomics, highly correlated and high-dimensional (HCHD) data are used to identify important genes and their interaction effects on diseases. In analytical chemistry, multiple time series data are generated to recognize complex patterns among multiple classes. Due to this great diversity, we encounter three problems in analyzing the following three types of data: (1) matched case-crossover data, (2) HCHD data, and (3) time-series data. We contribute to the development of statistical methods to deal with such complex data. First, under the matched study, we discuss hypothesis testing to effectively determine the association between observed factors and the risk of the disease of interest. Because, in practice, we do not know the specific form of the association, it can be challenging to set a specific alternative hypothesis. Reflecting this reality, we consider the possibility that some observations are measured with errors. By considering these measurement errors, we develop a testing procedure under the matched case-crossover framework. This testing procedure has the flexibility to make inferences under various hypothesis settings. Second, we consider data where the number of variables is very large compared to the sample size and the variables are correlated with each other. In this case, our goal is to identify the variables important for the outcome among the large number of variables and to build their network. For example, identifying a few genes associated with diabetes among the whole genome can be used to develop biomarkers. With our proposed approach in the second project, we can identify differentially expressed and important genes and their network structure while accounting for the outcome. Lastly, we consider the scenario of patterns of interest changing over time, with application to gas chromatography. We propose an efficient detection method to effectively distinguish the patterns of multi-level subjects in time-trend analysis. Our proposed method gives valuable guidance for efficiently searching for distinguishable patterns, reducing the burden of examining all observations in the data.
6

Polajnar, Tamara. "Semantic models as metrics for kernel-based interaction identification." Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/2260/.

Full text
Abstract:
Automatic detection of protein-protein interactions (PPIs) in biomedical publications is vital for efficient biological research. It also presents a host of new challenges for pattern recognition methodologies, some of which will be addressed by the research in this thesis. Proteins are the principal method of communication within a cell; hence, this area of research is strongly motivated by the needs of biologists investigating sub-cellular functions of organisms, diseases, and treatments. These researchers rely on the collaborative efforts of the entire field and communicate through experimental results published in reviewed biomedical journals. The substantial number of interactions detected by automated large-scale PPI experiments, combined with the ease of access to the digitised publications, has increased the number of results made available each day. The ultimate aim of this research is to provide tools and mechanisms to aid biologists and database curators in locating relevant information. As part of this objective this thesis proposes, studies, and develops new methodologies that go some way to meeting this grand challenge. Pattern recognition methodologies are one approach that can be used to locate PPI sentences; however, most accurate pattern recognition methods require a set of labelled examples to train on. For this particular task, the collection and labelling of training data is highly expensive. On the other hand, the digital publications provide a plentiful source of unlabelled data. The unlabelled data is used, along with word cooccurrence models, to improve classification using Gaussian processes, a probabilistic alternative to the state-of-the-art support vector machines. This thesis presents and systematically assesses the novel methods of using the knowledge implicitly encoded in biomedical texts and shows an improvement on the current approaches to PPI sentence detection.
7

Lyubchyk, Leonid, Oleksy Galuza, and Galina Grinberg. "Ranking Model Real-Time Adaptation via Preference Learning Based on Dynamic Clustering." Thesis, ННК "IПСА" НТУУ "КПI iм. Iгоря Сiкорського", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/36819.

Full text
Abstract:
The proposed preference learning on clusters method makes it possible to fully realize the advantages of the kernel-based approach: the dimension of the model is determined by a pre-selected number of clusters, so its complexity does not grow with the number of observations. The resulting real-time preference function identification algorithm, driven by the training data stream, includes successive estimation of cluster parameters, updating of average cluster ranks, and recurrent kernel-based nonparametric estimation of the preference model.
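A minimal sketch of this idea, with k-means clusters updated from a stream and a Nadaraya-Watson estimate over cluster centres and average cluster ranks (the class design and parameters are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

class ClusterPreferenceModel:
    """Model size is fixed by the number of clusters, so complexity does
    not grow with the number of streamed observations."""
    def __init__(self, n_clusters=20, gamma=1.0):
        self.km = MiniBatchKMeans(n_clusters=n_clusters)
        self.gamma = gamma
        self.rank_sum = np.zeros(n_clusters)
        self.count = np.zeros(n_clusters)

    def partial_fit(self, X, ranks):
        self.km.partial_fit(X)                    # update cluster centres
        labels = self.km.predict(X)
        np.add.at(self.rank_sum, labels, ranks)   # running average-rank stats
        np.add.at(self.count, labels, 1)
        return self

    def predict(self, X):
        centres = self.km.cluster_centers_
        mean_rank = self.rank_sum / np.maximum(self.count, 1)
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        K = np.exp(-self.gamma * d2)
        return (K * mean_rank).sum(1) / K.sum(1)  # kernel-weighted rank estimate

# Stream usage: call model.partial_fit(X_batch, rank_batch) per batch,
# then model.predict(X_new) for real-time preference scores.
```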
8

Vlachos, Dimitrios. "Novel algorithms in wireless CDMA systems for estimation and kernel based equalization." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/7658.

Full text
Abstract:
A powerful technique is presented for joint blind channel estimation and carrier offset recovery in code-division multiple access (CDMA) communication systems. The new technique combines singular value decomposition (SVD) analysis with the carrier offset parameter. Current blind methods sustain a high computational complexity as they require the computation of a large SVD twice, and they are sensitive to accurate knowledge of the noise subspace rank. The proposed method overcomes both problems by computing the SVD only once. Extensive MATLAB simulations demonstrate the robustness of the proposed scheme; its performance is comparable to other existing SVD techniques at significantly lower computational cost (by as much as 70%) because it does not require knowledge of the rank of the noise subspace. A kernel-based equalization method for CDMA communication systems is also proposed, designed, and simulated in MATLAB, and it outperforms all other methods considered.
9

Buch, Armin [author], and Gerhard [academic supervisor] Jäger. "Linguistic Spaces : Kernel-based models of natural language / Armin Buch ; Betreuer: Gerhard Jäger." Tübingen : Universitätsbibliothek Tübingen, 2011. http://d-nb.info/1161803572/34.

Full text
10

Mahfouz, Sandy. "Kernel-based machine learning for tracking and environmental monitoring in wireless sensor networks." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0025/document.

Full text
Abstract:
This thesis focuses on the problems of localization and gas field monitoring using wireless sensor networks. First, we focus on the geolocalization of sensors and target tracking. Using the powers of the signals exchanged between sensors, we propose a localization method combining radio-location fingerprinting and kernel methods from statistical machine learning. Based on this localization method, we develop a target tracking method that enhances the estimated position of the target by combining it with acceleration information using the Kalman filter. We also provide a semi-parametric model that estimates the distances separating sensors based on the powers of the signals exchanged between them. This semi-parametric model is a combination of the well-known log-distance propagation model with a non-linear fluctuation term estimated within the framework of kernel methods. The target's position is estimated by incorporating acceleration information into the distances separating the target from the sensors, using either the Kalman filter or the particle filter. In another context, we study gas diffusions in wireless sensor networks, also using machine learning. We propose a method that allows the detection of multiple gas diffusions based on concentration measures regularly collected from the studied region. The method then estimates the parameters of the multiple gas sources, including the sources' locations and their release rates.
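The fingerprinting-plus-kernel-methods step can be illustrated with kernel ridge regression mapping received-signal-strength vectors to coordinates; the synthetic log-distance data below is an illustrative assumption, not the thesis's dataset:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)

def rssi(p):
    # Log-distance path loss from 4 anchors plus shadowing noise (dB).
    d = np.linalg.norm(anchors - p, axis=1)
    return -40.0 - 20.0 * np.log10(d + 0.1) + rng.normal(scale=2.0, size=4)

train_pos = rng.uniform(0, 10, size=(400, 2))             # fingerprint survey
train_rssi = np.array([rssi(p) for p in train_pos])

# Kernel ridge regression: RSSI fingerprint -> 2-D position.
model = KernelRidge(kernel="rbf", gamma=0.01, alpha=1.0).fit(train_rssi, train_pos)
print(model.predict(rssi(np.array([5.0, 5.0]))[None, :]))  # approx [5, 5]
```

In the thesis's tracking stage, such position estimates would then be smoothed with acceleration data through a Kalman or particle filter.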

Book chapters on the topic "Kernel-based model"

1

Chen, Bo, Hongwei Liu, and Zheng Bao. "General Kernel Optimization Model Based on Kernel Fisher Criterion." In Lecture Notes in Computer Science, 143–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11881070_24.

Full text
2

Zhang, Yuehua, Peng Zhang, and Yong Shi. "Kernel Based Regularized Multiple Criteria Linear Programming Model." In Lecture Notes in Computer Science, 625–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01973-9_70.

Full text
3

Travieso, Carlos M., Jesús B. Alonso, Jaime R. Ticay-Rivas, and Marcos del Pozo-Baños. "Apnea Detection Based on Hidden Markov Model Kernel." In Advances in Nonlinear Speech Processing, 71–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25020-0_10.

Full text
4

Fleischanderl, Gerhard, Thomas Havelka, Herwig Schreiner, Markus Stumptner, and Franz Wotawa. "DiKe - A Model-Based Diagnosis Kernel and Its Application." In KI 2001: Advances in Artificial Intelligence, 440–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45422-5_31.

Full text
5

Taylan, Pakize. "Kernel Based C-Bridge Estimator for Partially Nonlinear Model." In Operations Research, 2–20. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003324508-2.

Full text
6

Hernández-Torruco, José, Juana Canul-Reich, Juan Frausto-Solis, and Juan José Méndez-Castillo. "A Kernel-Based Predictive Model for Guillain-Barré Syndrome." In Advances in Artificial Intelligence and Its Applications, 270–81. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-27101-9_20.

Full text
7

Kokologiannakis, Michalis, and Viktor Vafeiadis. "GenMC: A Model Checker for Weak Memory Models." In Computer Aided Verification, 427–40. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_20.

Full text
Abstract:
GenMC is an LLVM-based, state-of-the-art stateless model checker for concurrent C/C++ programs. Its modular infrastructure allows it to support complex memory models, such as RC11 and IMM, and makes it easy to extend to support further axiomatic memory models. In this paper, we discuss the overall architecture of the tool and how it can be extended to support additional memory models, programming languages, and/or synchronization primitives. To demonstrate the point, we have extended the tool with support for the Linux kernel memory model (LKMM), synchronization barriers, POSIX I/O system calls, and better error detection capabilities.
8

Zhou, Yifei, and Conor Hayes. "Graph-Based Diffusion Method for Top-N Recommendation." In Communications in Computer and Information Science, 292–304. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26438-2_23.

Full text
Abstract:
Data that may be used for personalised recommendation purposes can intuitively be modelled as a graph. Users can be linked to item data; item data may be linked to item data. With such a model, the task of recommending new items to users or making new connections between items can be undertaken by algorithms designed to establish the relatedness between vertices in a graph. One such class of algorithm is based on the random walk, whereby a sequence of connected vertices is visited according to an underlying probability distribution and a determination of vertex relatedness is established. A diffusion kernel encodes such a process. This paper demonstrates several diffusion kernel approaches on a graph composed of user-item and item-item relationships. The approach presented in this paper, RecWalk*, consists of a user-item bipartite graph combined with an item-item graph, on which several diffusion kernels are applied and evaluated in terms of top-n recommendation. We conduct experiments on several datasets of the RecWalk* model using combinations of different item-item graph models and personalised diffusion kernels. We compare accuracy with some non-item recommender methods. We show that diffusion kernel approaches match or outperform state-of-the-art recommender approaches.
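A minimal example of the underlying diffusion-kernel computation on a toy item graph (not the RecWalk* construction itself): the kernel K = exp(-beta * L) scores vertex relatedness, which can then be sorted for top-n recommendation:

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a small undirected item-item graph (5 items).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

beta = 0.5                            # diffusion "temperature"
K = expm(-beta * L)                   # diffusion kernel over vertices

scores = K[0].copy()                  # relatedness of every item to item 0
scores[0] = -np.inf                   # exclude the item itself
print(np.argsort(-scores))            # items in recommendation order
```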
9

Dudek, Grzegorz. "Variable Selection in the Kernel Regression Based Short-Term Load Forecasting Model." In Artificial Intelligence and Soft Computing, 557–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29350-4_66.

Full text
10

Jing, Huiyun, Xin He, Qi Han, and Xiamu Niu. "A Saliency Detection Model Based on Local and Global Kernel Density Estimation." In Neural Information Processing, 164–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24955-6_20.

Full text

Conference papers on the topic "Kernel-based model"

1

Yu, Leiming, Xun Gong, Yifan Sun, Qianqian Fang, Norm Rubin, and David Kaeli. "Moka: Model-based concurrent kernel analysis." In 2017 IEEE International Symposium on Workload Characterization (IISWC). IEEE, 2017. http://dx.doi.org/10.1109/iiswc.2017.8167777.

Full text
2

Zhu, Qi, Yong Xu, JinRong Cui, ChangFeng Chen, JingHua Wang, XiangQian Wu, and YingNan Zhao. "A method for constructing simplified kernel model based on kernel-MSE." In 2009 Asia-Pacific Conference on Computational Intelligence and Industrial Applications (PACIIA 2009). IEEE, 2009. http://dx.doi.org/10.1109/paciia.2009.5406447.

Full text
3

Chen, Shan, Lingling Zhou, Rendong Ying, and Yi Ge. "Safe device driver model based on kernel-mode JVM." In the 3rd international workshop. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1408654.1408657.

Full text
4

Ghoshal, Debarshi Patanjali, Kumar Gopalakrishnan, and Hannah Michalska. "Kernel-based adaptive multiple model target tracking." In 2017 IEEE Conference on Control Technology and Applications (CCTA). IEEE, 2017. http://dx.doi.org/10.1109/ccta.2017.8062644.

Full text
5

Fang, Yudong, Zhenfei Zhan, Junqi Yang, Jun Lu, and Chong Chen. "A Mixed-Kernel-Based Support Vector Regression Model for Automotive Body Design Optimization." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-67669.

Full text
Abstract:
Finite element (FE) models are commonly used for automotive body design. However, even with the increasing speed of computers, FE-based simulation models are still too time-consuming when the models are complex. To improve computational efficiency, SVR, a promising approximate model, has been widely used as a surrogate of the FE model for crashworthiness design optimization. Generally, in traditional SVR, when dealing with nonlinear data, a projection based on a single kernel function cannot fully capture the data distribution characteristics. To eliminate the limitations of single-kernel SVR, a mixed-kernel-based SVR (MKSVR) is proposed in this research. The mixed kernel is constructed as a linear combination of the radial basis kernel function and the polynomial kernel function. The parameters of the mixed-kernel SVR are optimized through the particle swarm optimization algorithm. The proposed MKSVR is then applied to automotive body design optimization. The application of MKSVR is demonstrated on a vehicle design problem for weight reduction while satisfying safety constraints on X-direction acceleration and crush distance. A comparison study of SVR and MKSVR in this application indicates that MKSVR surpasses SVR in model accuracy.
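The mixed kernel can be prototyped directly in scikit-learn by passing a callable to SVR; here a simple grid search stands in for the paper's particle swarm optimization, and all data and weights are illustrative:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.model_selection import GridSearchCV

def make_mixed_kernel(w, gamma=0.5, degree=2):
    # Linear combination of RBF and polynomial kernels (still a valid kernel).
    return lambda X, Y: (w * rbf_kernel(X, Y, gamma=gamma)
                         + (1.0 - w) * polynomial_kernel(X, Y, degree=degree))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2 + 0.05 * rng.normal(size=200)

# Grid search over the mixing weight stands in for PSO here.
grid = {"kernel": [make_mixed_kernel(w) for w in (0.2, 0.5, 0.8)],
        "C": [1.0, 10.0]}
search = GridSearchCV(SVR(), grid, cv=3).fit(X, y)
print(search.best_score_)
```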
6

Langone, Rocco, Carlos Alzate, and Johan A. K. Suykens. "Modularity-based model selection for kernel spectral clustering." In 2011 International Joint Conference on Neural Networks (IJCNN 2011 - San Jose). IEEE, 2011. http://dx.doi.org/10.1109/ijcnn.2011.6033449.

Full text
7

Chen, Huanhuan, Fengzhen Tang, Peter Tino, and Xin Yao. "Model-based kernel for efficient time series analysis." In KDD' 13: The 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2013. http://dx.doi.org/10.1145/2487575.2487700.

Full text
8

Beckers, Thomas, Somil Bansal, Claire J. Tomlin, and Sandra Hirche. "Closed-loop Model Selection for Kernel-based Models using Bayesian Optimization." In 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE, 2019. http://dx.doi.org/10.1109/cdc40024.2019.9029690.

Full text
9

Jadhav, Dattatray V., and Raghunath S. Holambe. "Multiresolution based Kernel Fisher Discriminant Model for Face Recognition." In Fourth International Conference on Information Technology (ITNG'07). IEEE, 2007. http://dx.doi.org/10.1109/itng.2007.131.

Full text
10

Janakiram, Dharanipragada, Hemang Mehta, and S. J. Balaji. "Dhara: A Service Abstraction-Based OS Kernel Design Model." In 2012 17th International Conference on Engineering of Complex Computer Systems (ICECCS). IEEE, 2012. http://dx.doi.org/10.1109/iceccs20050.2012.6299208.

Full text

Reports on the topic "Kernel-based model"

1

Harbrecht, Helmut, John Davis Jakeman, and Peter Zaspel. Weighted greedy-optimal design of computer experiments for kernel-based and Gaussian process model emulation and calibration. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1608084.

Full text
2

Sparks, Paul, Jesse Sherburn, William Heard, and Brett Williams. Penetration modeling of ultra‐high performance concrete using multiscale meshfree methods. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/41963.

Full text
Abstract:
Terminal ballistics of concrete is of extreme importance to the military and civil communities. Over the past few decades, ultra‐high performance concrete (UHPC) has been developed for various applications in the design of protective structures because UHPC has an enhanced ballistic resistance over conventional strength concrete. Developing predictive numerical models of UHPC subjected to penetration is critical in understanding the material's enhanced performance. This study employs the advanced fundamental concrete (AFC) model, and it runs inside the reproducing kernel particle method (RKPM)‐based code known as the nonlinear meshfree analysis program (NMAP). NMAP is advantageous for modeling impact and penetration problems that exhibit extreme deformation and material fragmentation. A comprehensive experimental study was conducted to characterize the UHPC. The investigation consisted of fracture toughness testing, the utilization of nondestructive microcomputed tomography analysis, and projectile penetration shots on the UHPC targets. To improve the accuracy of the model, a new scaled damage evolution law (SDEL) is employed within the microcrack informed damage model. During the homogenized macroscopic calculation, the corresponding microscopic cell needs to be dimensionally equivalent to the mesh dimension when the partial differential equation becomes ill posed and strain softening ensues. Results of numerical investigations will be compared with results of penetration experiments.
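The reproducing kernel approximation underlying RKPM codes such as NMAP can be shown in one dimension: a cubic-spline window corrected so the shape functions exactly reproduce constant and linear fields. This is a generic textbook sketch, not the AFC model or NMAP itself:

```python
import numpy as np

def cubic_spline(z):
    # Cubic B-spline window with compact support |z| <= 1.
    z = np.abs(z)
    return np.where(z <= 0.5, 2/3 - 4*z**2 + 4*z**3,
           np.where(z <= 1.0, 4/3 - 4*z + 4*z**2 - (4/3)*z**3, 0.0))

def rk_shape_functions(x, nodes, a):
    """1-D RK shape functions with linear basis:
    psi_I(x) = [b(x) . H(x - x_I)] * w((x - x_I)/a), where M(x) b = H(0)."""
    dx = x - nodes
    w = cubic_spline(dx / a)
    H = np.vstack([np.ones_like(dx), dx])    # basis [1, x - x_I]
    M = (H * w) @ H.T                        # 2x2 moment matrix
    b = np.linalg.solve(M, np.array([1.0, 0.0]))
    return (b @ H) * w

nodes = np.linspace(0.0, 1.0, 11)
psi = rk_shape_functions(0.37, nodes, a=0.25)
print(psi.sum(), (psi * nodes).sum())        # reproduces 1 and x: 1.0, 0.37
```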
3

Manninen, Terhikki, and Pauline Stenberg. Influence of forest floor vegetation on the total forest reflectance and its implications for LAI estimation using vegetation indices. Finnish Meteorological Institute, 2021. http://dx.doi.org/10.35614/isbn.9789523361379.

Full text
Abstract:
Recently a simple analytic canopy bidirectional reflectance factor (BRF) model based on the spectral invariants theory was presented. The model takes into account that the recollision probability in the forest canopy is different for the first scattering than the later ones. Here this model is extended to include the forest floor contribution to the total forest BRF. The effect of the understory vegetation on the total forest BRF as well as on the simple ratio (SR) and the normalized difference (NDVI) vegetation indices is demonstrated for typical cases of boreal forest. The relative contribution of the forest floor to the total BRF was up to 69 % in the red wavelength range and up to 54 % in the NIR wavelength range. Values of SR and NDVI for the forest and the canopy differed within 10 % and 30 % in red and within 1 % and 10 % in the NIR wavelength range. The relative variation of the BRF with the azimuth and view zenith angles was not very sensitive to the forest floor vegetation. Hence, linear correlation of the modelled total BRF and the Ross-thick kernel was strong for dense forests (R2 > 0.9). The agreement between modelled BRF and satellite-based reflectance values was good when measured LAI, clumping index and leaf single scattering albedo values for a boreal forest were used as input to the model.
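For reference, the two vegetation indices compared in the report are simple functions of red and near-infrared reflectance; the sample values below are illustrative, not the report's data:

```python
def simple_ratio(red, nir):
    # SR = NIR / red
    return nir / red

def ndvi(red, nir):
    # NDVI = (NIR - red) / (NIR + red)
    return (nir - red) / (nir + red)

red, nir = 0.04, 0.30            # plausible boreal-forest reflectances
print(simple_ratio(red, nir))    # 7.5
print(ndvi(red, nir))            # ~0.76
```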
