Journal articles on the topic 'Kernel Inference'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Kernel Inference.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Nishiyama, Yu, Motonobu Kanagawa, Arthur Gretton, and Kenji Fukumizu. "Model-based kernel sum rule: kernel Bayesian inference with probabilistic models." Machine Learning 109, no. 5 (January 2, 2020): 939–72. http://dx.doi.org/10.1007/s10994-019-05852-9.

Abstract:
Kernel Bayesian inference is a principled approach to nonparametric inference in probabilistic graphical models, where probabilistic relationships between variables are learned from data in a nonparametric manner. Various algorithms of kernel Bayesian inference have been developed by combining kernelized basic probabilistic operations such as the kernel sum rule and kernel Bayes' rule. However, the current framework is fully nonparametric, and it does not allow a user to flexibly combine nonparametric and model-based inferences. This is inefficient when there are good probabilistic models (or simulation models) available for some parts of a graphical model; this is particularly true in scientific fields where "models" are the central topic of study. Our contribution in this paper is to introduce a novel approach, termed the model-based kernel sum rule (Mb-KSR), to combine a probabilistic model and kernel Bayesian inference. By combining the Mb-KSR with the existing kernelized probabilistic rules, one can develop various algorithms for hybrid (i.e., nonparametric and model-based) inferences. As an illustrative example, we consider Bayesian filtering in a state space model, where typically there exists an accurate probabilistic model for the state transition process. We propose a novel filtering method that combines model-based inference for the state transition process and data-driven, nonparametric inference for the observation generating process. We empirically validate our approach with synthetic and real-data experiments, the latter being the problem of vision-based mobile robot localization in robotics, which illustrates the effectiveness of the proposed hybrid approach.
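
For orientation, the fully nonparametric kernel sum rule that the Mb-KSR generalizes can be written in a few lines. A minimal numpy sketch under assumed toy data, an RBF kernel, and an illustrative regularizer lam; all names are invented for illustration:

    import numpy as np

    def rbf(A, B, s=1.0):
        # Gram matrix of a Gaussian RBF kernel between row-sample arrays A and B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * s**2))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 1))                # joint sample (x_i, y_i) encodes P(Y|X)
    Y = X + 0.1 * rng.normal(size=(100, 1))
    U = rng.normal(size=(20, 1))                 # prior P(X) as a weighted sample (u_j, w_j)
    w = np.ones(20) / 20

    n, lam = len(X), 1e-3
    G = rbf(X, X)
    # kernel sum rule: the embedding of P(Y) is mu_Y = sum_i beta_i k(y_i, .)
    beta = np.linalg.solve(G + n * lam * np.eye(n), rbf(X, U) @ w)

The Mb-KSR replaces the sample-based conditional above with draws from a probabilistic (simulation) model, so the two kinds of knowledge can be mixed within one inference chain.
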
2

Rogers, Mark F., Colin Campbell, and Yiming Ying. "Probabilistic Inference of Biological Networks via Data Integration." BioMed Research International 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/707453.

Abstract:
There is significant interest in inferring the structure of subcellular networks of interaction. Here we consider supervised interactive network inference in which a reference set of known network links and nonlinks is used to train a classifier for predicting new links. Many types of data are relevant to inferring functional links between genes, motivating the use of data integration. We use pairwise kernels to predict novel links, along with multiple kernel learning to integrate distinct sources of data into a decision function. We evaluate various pairwise kernels to establish which are most informative and compare individual kernel accuracies with accuracies for weighted combinations. By associating a probability measure with classifier predictions, we enable cautious classification, which can increase accuracy by restricting predictions to high-confidence instances, and data cleaning, which can mitigate the influence of mislabeled training instances. Although one pairwise kernel (the tensor product pairwise kernel) appears to work best, different kernels may contribute complementary information about interactions: experiments in S. cerevisiae (yeast) reveal that a weighted combination of pairwise kernels applied to different types of data yields the highest predictive accuracy. Combined with cautious classification and data cleaning, we can achieve predictive accuracies of up to 99.6%.
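
For reference, the tensor product pairwise kernel singled out above compares two gene pairs by matching their members in both possible orders. A small illustrative sketch, with the base kernel and feature vectors as placeholder assumptions:

    import numpy as np

    def k(u, v, s=1.0):
        # base Gaussian kernel on individual gene feature vectors
        return np.exp(-np.sum((u - v) ** 2) / (2 * s**2))

    def tppk(pair1, pair2):
        # tensor product pairwise kernel: symmetric in the order within each pair
        (a, b), (c, d) = pair1, pair2
        return k(a, c) * k(b, d) + k(a, d) * k(b, c)
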
3

LUGO-MARTINEZ, JOSE, and PREDRAG RADIVOJAC. "Generalized graphlet kernels for probabilistic inference in sparse graphs." Network Science 2, no. 2 (August 2014): 254–76. http://dx.doi.org/10.1017/nws.2014.14.

Abstract:
Graph kernels for learning and inference on sparse graphs have been widely studied. However, the problem of designing robust kernel functions that can effectively compare graph neighborhoods in the presence of noisy and complex data remains less explored. Here we propose a novel graph-based kernel method referred to as an edit distance graphlet kernel. The method was designed to add flexibility in capturing similarities between local graph neighborhoods as a means of probabilistically annotating vertices in sparse and labeled graphs. We report experiments on nine real-life data sets from molecular biology and social sciences and provide evidence that the new kernels perform favorably compared to established approaches. However, when both performance accuracy and run time are considered, we suggest that edit distance kernels are best suited for inference on graphs derived from protein structures. Finally, we demonstrate that the new approach facilitates simple and principled ways of integrating domain knowledge into classification and point out that our methodology extends beyond classification, e.g., to applications such as kernel-based clustering of graphs or approximate motif finding. Availability: www.sourceforge.net/projects/graphletkernels/
4

Lazarus, Eben, Daniel J. Lewis, and James H. Stock. "The Size‐Power Tradeoff in HAR Inference." Econometrica 89, no. 5 (2021): 2497–516. http://dx.doi.org/10.3982/ecta15404.

Abstract:
Heteroskedasticity- and autocorrelation-robust (HAR) inference in time series regression typically involves kernel estimation of the long-run variance. Conventional wisdom holds that, for a given kernel, the choice of truncation parameter trades off a test's null rejection rate and power, and that this tradeoff differs across kernels. We formalize this intuition: using higher-order expansions, we provide a unified size-power frontier for both kernel and weighted orthonormal series tests using nonstandard “fixed-b” critical values. We also provide a frontier for the subset of these tests for which the fixed-b distribution is t or F. These frontiers are respectively achieved by the QS kernel and equal-weighted periodogram. The frontiers have simple closed-form expressions, which show that the price paid for restricting attention to tests with t and F critical values is small. The frontiers are derived for the Gaussian multivariate location model, but simulations suggest the qualitative findings extend to stochastic regressors.
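
The object at the center of HAR inference is the kernel estimate of the long-run variance, a kernel-weighted sum of sample autocovariances. A hedged sketch using the Bartlett kernel for brevity (the paper's frontier is attained by the QS kernel); the truncation parameter M is the quantity whose choice trades off the test's size control against its power:

    import numpy as np

    def kernel_lrv(u, M, kern=lambda x: np.clip(1 - abs(x), 0, None)):
        # kernel long-run variance estimate; kern defaults to the Bartlett kernel
        u = u - u.mean()
        T = len(u)
        gamma = [u[j:] @ u[:T - j] / T for j in range(M + 1)]   # sample autocovariances
        return gamma[0] + 2 * sum(kern(j / M) * gamma[j] for j in range(1, M + 1))
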
5

Billio, M. "Kernel-Based Indirect Inference." Journal of Financial Econometrics 1, no. 3 (September 1, 2003): 297–326. http://dx.doi.org/10.1093/jjfinec/nbg014.

6

Zhang, Li Lyna, Shihao Han, Jianyu Wei, Ningxin Zheng, Ting Cao, and Yunxin Liu. "nn-METER." GetMobile: Mobile Computing and Communications 25, no. 4 (March 30, 2022): 19–23. http://dx.doi.org/10.1145/3529706.3529712.

Abstract:
Inference latency has become a crucial metric in running Deep Neural Network (DNN) models on various mobile and edge devices. To this end, latency prediction of DNN inference is highly desirable for many tasks where measuring the latency on real devices is infeasible or too costly. Yet it is very challenging, and existing approaches fail to achieve high prediction accuracy, due to the varying model-inference latency caused by runtime optimizations on diverse edge devices. In this paper, we propose and develop nn-Meter, a novel and efficient system to accurately predict the DNN inference latency on diverse edge devices. The key idea of nn-Meter is dividing a whole model inference into kernels, i.e., the execution units on a device, and conducting kernel-level prediction. nn-Meter builds atop two key techniques: (i) kernel detection to automatically detect the execution unit of model inference via a set of well-designed test cases; and (ii) adaptive sampling to efficiently sample the most beneficial configurations from a large space to build accurate kernel-level latency predictors. nn-Meter achieves high prediction accuracy on four types of edge devices.
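
A schematic of the kernel-level idea in pseudo-Python; the kernel detection step, the per-kernel predictor objects, and the dict layout are hypothetical stand-ins, not nn-Meter's actual API:

    def predict_model_latency(model_kernels, kernel_predictors):
        # sum per-kernel latency predictions over the detected execution units
        total = 0.0
        for kern in model_kernels:                       # e.g. {"type": "conv-bn-relu", "config": [...]}
            regressor = kernel_predictors[kern["type"]]  # per-kernel latency regressor
            total += regressor.predict([kern["config"]])[0]
        return total
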
7

Robinson, P. M. "INFERENCE ON NONPARAMETRICALLY TRENDING TIME SERIES WITH FRACTIONAL ERRORS." Econometric Theory 25, no. 6 (December 2009): 1716–33. http://dx.doi.org/10.1017/s0266466609990302.

Abstract:
The central limit theorem for nonparametric kernel estimates of a smooth trend, with linearly generated errors, indicates asymptotic independence and homoskedasticity across fixed points, irrespective of whether disturbances have short memory, long memory, or antipersistence. However, the asymptotic variance depends on the kernel function in a way that varies across these three circumstances, and in the latter two it involves a double integral that cannot necessarily be evaluated in closed form. For a particular class of kernels, we obtain analytic formulas. We discuss extensions to more general settings, including ones involving possible cross-sectional or spatial dependence.
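
A minimal sketch of the kernel trend estimate in question: at a rescaled time point tau in (0, 1), the estimate is a kernel-weighted average of the observations (the Gaussian kernel and bandwidth here are assumed choices):

    import numpy as np

    def kernel_trend(y, tau, h, kern=lambda u: np.exp(-u**2 / 2)):
        # Nadaraya-Watson estimate of a smooth trend at rescaled time tau
        T = len(y)
        t = np.arange(1, T + 1) / T
        w = kern((t - tau) / h)
        return (w @ y) / w.sum()

The paper's point is that the asymptotic variance of this estimate depends on the kernel differently under short memory, long memory, and antipersistence.
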
8

Yuan, Ao. "Semiparametric inference with kernel likelihood." Journal of Nonparametric Statistics 21, no. 2 (February 2009): 207–28. http://dx.doi.org/10.1080/10485250802553382.

9

Cheng, Yansong, and Surajit Ray. "Multivariate Modality Inference Using Gaussian Kernel." Open Journal of Statistics 4, no. 5 (2014): 419–34. http://dx.doi.org/10.4236/ojs.2014.45041.

10

Agbokou, Komi, and Yaogan Mensah. "INFERENCE ON THE REPRODUCING KERNEL HILBERT SPACES." Universal Journal of Mathematics and Mathematical Sciences 15 (October 10, 2021): 11–29. http://dx.doi.org/10.17654/2277141722002.

11

Memisevic, R., L. Sigal, and D. J. Fleet. "Shared Kernel Information Embedding for Discriminative Inference." IEEE Transactions on Pattern Analysis and Machine Intelligence 34, no. 4 (April 2012): 778–90. http://dx.doi.org/10.1109/tpami.2011.154.

12

Maswadah, M. "Kernel Inference on the Inverse Weibull Distribution." Communications for Statistical Applications and Methods 13, no. 3 (December 31, 2006): 503–12. http://dx.doi.org/10.5351/ckss.2006.13.3.503.

13

Racine, Jeffrey S., and James G. MacKinnon. "Inference via kernel smoothing of bootstrap values." Computational Statistics & Data Analysis 51, no. 12 (August 2007): 5949–57. http://dx.doi.org/10.1016/j.csda.2006.11.013.

14

Sun, Yixiao, and Jingjing Yang. "Testing-optimal kernel choice in HAR inference." Journal of Econometrics 219, no. 1 (November 2020): 123–36. http://dx.doi.org/10.1016/j.jeconom.2020.06.007.

15

Kondratyev, Dmitry A. "Towards Automatic Deductive Verification of C Programs with Sisal Loops Using the C-lightVer System." Modeling and Analysis of Information Systems 28, no. 4 (December 18, 2021): 372–93. http://dx.doi.org/10.18255/1818-1015-2021-4-372-393.

Abstract:
The C-lightVer system is developed at IIS SB RAS for deductive verification of C programs; C-kernel is an intermediate verification language in this system. The Cloud Parallel Programming System (CPPS), whose input language is Cloud Sisal, is also developed at IIS SB RAS. The main feature of CPPS is implicit parallel execution based on automatic parallelization of Cloud Sisal loops, and Cloud-Sisal-kernel is the intermediate verification language of the CPPS system. Our goal is automatic parallelization of a superset of C that also allows automatic verification; our solution is C-Sisal-kernel, a superset of C-kernel. The first result presented in this paper is an extension of C-kernel with Cloud-Sisal-kernel loops, yielding the C-Sisal-kernel language. The second result is an extension of the C-kernel axiomatic semantics with an inference rule for Cloud-Sisal-kernel loops. The paper also presents our approach to automating deductive verification in the case of finite iterations over data structures; such loops are referred to as definite iterations. Our solution composes the symbolic method of verifying definite iterations, verification-condition metageneration, and the mixed axiomatic semantics method. The symbolic method of verifying definite iterations allows defining inference rules for these loops without invariants; its basis is the symbolic replacement of definite iterations by recursive functions. The resulting verification conditions, which apply recursive functions, correspond to the logical base of the ACL2 prover, and we use the ACL2 system based on computable recursive functions. Verification-condition metageneration simplifies the implementation of new inference rules in a verification system, and the use of mixed axiomatic semantics results in simpler verification conditions in some cases.
16

Lei, Zijian, and Liang Lan. "Memory and Computation-Efficient Kernel SVM via Binary Embedding and Ternary Model Coefficients." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8316–23. http://dx.doi.org/10.1609/aaai.v35i9.17011.

Abstract:
Kernel approximation is widely used to scale up kernel SVM training and prediction. However, the memory and computation costs of kernel approximation models are still too large if we want to deploy them on memory-limited devices such as mobile phones, smart watches, and IoT devices. To address this challenge, we propose a novel memory- and computation-efficient kernel SVM model that uses both binary embedding and ternary model coefficients. First, we propose an efficient way to generate a compact binary embedding of the data that preserves the kernel similarity. Second, we propose a simple but effective algorithm to learn a linear classification model with ternary coefficients that supports different types of loss functions and regularizers. Our algorithm achieves better generalization accuracy than existing work on learning binary coefficients, since we allow coefficients to be -1, 0, or 1 during the training stage, and zero coefficients can be removed during model inference. Moreover, we provide a detailed analysis of the convergence of our algorithm and the inference complexity of our model. The analysis shows that convergence to a local optimum is guaranteed and that the inference complexity of our model is much lower than that of other competing methods. Our experimental results on five large real-world datasets demonstrate that our proposed method can build an accurate nonlinear SVM model with a memory cost of less than 30 KB.
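
A schematic sketch of the two ingredients, with random Gaussian projections standing in for the paper's learned binary embedding; all names are illustrative:

    import numpy as np

    def binary_embed(X, R, b):
        # sign of a random projection plus offset: a compact binary code intended
        # to approximately preserve kernel similarity (assumed construction)
        return np.sign(X @ R + b).astype(np.int8)

    def ternary_predict(Z, w):
        # w has entries in {-1, 0, +1}; zero coefficients are dropped at inference,
        # so prediction reduces to additions/subtractions over the binary codes
        nz = w != 0
        return np.sign(Z[:, nz] @ w[nz])
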
17

Cawley, Gavin C., and Nicola L. C. Talbot. "Kernel learning at the first level of inference." Neural Networks 53 (May 2014): 69–80. http://dx.doi.org/10.1016/j.neunet.2014.01.011.

18

Wang, Kai. "Conditional asymptotic inference for the kernel association test." Bioinformatics 33, no. 23 (August 14, 2017): 3733–39. http://dx.doi.org/10.1093/bioinformatics/btx511.

19

Lu, Chi-Ken, and Patrick Shafto. "Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning." Entropy 23, no. 11 (October 23, 2021): 1387. http://dx.doi.org/10.3390/e23111387.

Abstract:
It is desirable to combine the expressive power of deep learning with Gaussian processes (GPs) in one expressive Bayesian learning model. Deep kernel learning showed success by using a deep network for feature extraction, with a GP as the function model. Recently, it was suggested that, despite training with the marginal likelihood, the deterministic nature of the feature extractor might lead to overfitting, and that replacing it with a Bayesian network seemed to cure the problem. Here, we propose the conditional deep Gaussian process (DGP), in which the intermediate GPs in the hierarchical composition are supported by hyperdata and the exposed GP remains zero mean. Motivated by the inducing points in sparse GPs, the hyperdata also play the role of function supports, but are hyperparameters rather than random variables. Following our previous moment matching approach, we approximate the marginal prior of the conditional DGP with a GP carrying an effective kernel. Thus, as in empirical Bayes, the hyperdata are learned by optimizing the approximate marginal likelihood, which depends implicitly on the hyperdata via the kernel. We show the equivalence with deep kernel learning in the limit of dense hyperdata in latent space. However, the conditional DGP and the corresponding approximate inference enjoy the benefit of being more Bayesian than deep kernel learning. Preliminary extrapolation results demonstrate expressive power from the depth of the hierarchy by exploiting the exact covariance and hyperdata learning, in comparison with GP kernel composition, DGP variational inference, and deep kernel learning. We also address the non-Gaussian aspect of our model as well as ways of upgrading to full Bayesian inference.
20

Kumar, Mukesh, and Santanu Kumar Rath. "Classification of Microarray Data Using Kernel Fuzzy Inference System." International Scholarly Research Notices 2014 (August 21, 2014): 1–18. http://dx.doi.org/10.1155/2014/769159.

Abstract:
The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features, which tends to obscure useful information. Feature selection retains the features with high significance and high relevance to the classes, and the selected features determine the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray (leukemia) data, using the t-test as the feature selection method. Kernel functions are used to map the original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ, through a mathematical process called the kernel trick. This paper also presents a comparative study of classification using K-FIS and the support vector machine (SVM) for different sets of features (genes). Performance parameters available in the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are used to analyze the efficiency of the classification model. The results show that the K-FIS model obtains results similar to the SVM model, an indication that the proposed approach relies on the kernel function.
21

Massaroppe, Lucas, and Luiz Baccalá. "Kernel Methods for Nonlinear Connectivity Detection." Entropy 21, no. 6 (June 20, 2019): 610. http://dx.doi.org/10.3390/e21060610.

Abstract:
In this paper, we show that the presence of nonlinear coupling between time series may be detected using kernel feature space F representations while dispensing with the need to solve the pre-image problem to gauge model adequacy. This is done by showing that the kernelized auto/cross sequences in F can be computed from the model rather than from prediction residuals in the original data space X. Furthermore, this allows for reducing the connectivity inference problem to that of fitting a consistent linear model in F that works even in the case of nonlinear interactions in the X-space, which ordinary linear models may fail to capture. We further illustrate the fact that the resulting F-space parameter asymptotics provide reliable means of model diagnostics in this space, and provide straightforward Granger connectivity inference tools even for relatively short time series records, as opposed to other kernel-based methods available in the literature.
22

Stordal, Andreas S., Rafael J. Moraes, Patrick N. Raanes, and Geir Evensen. "p-Kernel Stein Variational Gradient Descent for Data Assimilation and History Matching." Mathematical Geosciences 53, no. 3 (March 17, 2021): 375–93. http://dx.doi.org/10.1007/s11004-021-09937-x.

Abstract:
A Bayesian method of inference known as “Stein variational gradient descent” was recently implemented for data assimilation problems, under the heading of “mapping particle filter”. In this manuscript, the algorithm is applied to another type of geoscientific inversion problem, namely history matching of petroleum reservoirs. In order to combat the curse of dimensionality, the commonly used Gaussian kernel, which defines the solution space, is replaced by a p-kernel. In addition, the ensemble gradient approximation used in the mapping particle filter is rectified, and the data assimilation experiments are re-run with more relevant settings and comparisons. Our experimental results in data assimilation are rather disappointing. However, the results from the subsurface inverse problem show more promise, especially as regards the use of p-kernels.
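
For context, one update of standard Stein variational gradient descent with the Gaussian kernel that the paper replaces by a p-kernel; a minimal numpy sketch with an assumed step size and bandwidth:

    import numpy as np

    def svgd_step(X, grad_logp, eps=0.1, s=1.0):
        # one SVGD update: particles follow the kernelized Stein direction
        diff = X[:, None, :] - X[None, :, :]            # x_j - x_i, shape (n, n, d)
        K = np.exp(-(diff ** 2).sum(-1) / (2 * s**2))   # k(x_j, x_i)
        gradK = -diff * K[:, :, None] / s**2            # gradient of k(x_j, x_i) in x_j
        phi = (K[:, :, None] * grad_logp(X)[:, None, :] + gradK).mean(0)
        return X + eps * phi
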
23

Auzina, Ilze A., and Jakub M. Tomczak. "Approximate Bayesian Computation for Discrete Spaces." Entropy 23, no. 3 (March 6, 2021): 312. http://dx.doi.org/10.3390/e23030312.

Abstract:
Many real-life processes are black-box problems; i.e., the internal workings are inaccessible or a closed-form mathematical expression of the likelihood function cannot be defined. For continuous random variables, likelihood-free inference problems can be solved via Approximate Bayesian Computation (ABC). However, an optimal alternative for discrete random variables is yet to be formulated. Here, we aim to fill this research gap. We propose an adjusted population-based MCMC ABC method by re-defining the standard ABC parameters to discrete ones and by introducing a novel Markov kernel that is inspired by differential evolution. We first assess the proposed Markov kernel on a likelihood-based inference problem, namely discovering the underlying diseases based on a QMR-DT network, and subsequently assess the entire method on three likelihood-free inference problems: (i) the QMR-DT network with the unknown likelihood function, (ii) learning a binary neural network, and (iii) neural architecture search. The obtained results indicate the high potential of the proposed framework and the superiority of the new Markov kernel.
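
The likelihood-free baseline that population-based ABC methods refine is plain rejection ABC; a minimal sketch for discrete data using a normalized Hamming distance (the paper's contribution, the differential-evolution-inspired proposal kernel inside a population MCMC, is not shown):

    import numpy as np

    def abc_rejection(observed, simulate, prior_sample, eps, n=10000, seed=0):
        # keep parameters whose simulated data land within eps of the observation
        rng = np.random.default_rng(seed)
        accepted = []
        for _ in range(n):
            theta = prior_sample(rng)
            x = simulate(theta, rng)
            if np.mean(x != observed) <= eps:    # normalized Hamming distance
                accepted.append(theta)
        return accepted
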
24

Xiao, Chengcheng, Xiaowen Liu, Chi Sun, Zhongyu Liu, and Enjie Ding. "Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification." Applied Sciences 12, no. 20 (October 13, 2022): 10336. http://dx.doi.org/10.3390/app122010336.

Abstract:
A well-designed loss function can effectively improve the characterization ability of network features without increasing the amount of computation in the model inference stage, and has become a focus of recent research. Given that existing lightweight networks add a loss only at the last layer, which severely attenuates the gradient during backpropagation, we propose a hierarchical polynomial kernel prototype loss function in this study. Adding a polynomial kernel loss function at multiple stages of the deep neural network effectively enhances the efficiency of gradient propagation, and the multi-layer prototype losses are added only in the training stage, without increasing computation in the inference stage. In addition, the good nonlinear expression ability of the polynomial kernel improves the feature expression performance of the network. Verification on multiple public datasets shows that a lightweight network trained with the proposed hierarchical polynomial kernel loss function achieves higher accuracy than one trained with other loss functions.
25

Liang, Junjie, Yanting Wu, Dongkuan Xu, and Vasant G. Honavar. "Longitudinal Deep Kernel Gaussian Process Regression." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8556–64. http://dx.doi.org/10.1609/aaai.v35i10.17038.

Abstract:
Gaussian processes offer an attractive framework for predictive modeling from longitudinal data, i.e., irregularly sampled, sparse observations from a set of individuals over time. However, such methods have two key shortcomings: (i) they rely on ad hoc heuristics or expensive trial and error to choose effective kernels, and (ii) they fail to handle multilevel correlation structure in the data. We introduce longitudinal deep kernel Gaussian process regression (L-DKGPR) to overcome these limitations by fully automating the discovery of complex multilevel correlation structure from longitudinal data. Specifically, L-DKGPR eliminates the need for ad hoc heuristics or trial and error using a novel adaptation of deep kernel learning that combines the expressive power of deep neural networks with the flexibility of non-parametric kernel methods. L-DKGPR effectively learns the multilevel correlation with a novel additive kernel that simultaneously accommodates both time-varying and time-invariant effects. We derive an efficient algorithm to train L-DKGPR using latent space inducing points and variational inference. Results of extensive experiments on several benchmark data sets demonstrate that L-DKGPR significantly outperforms state-of-the-art longitudinal data analysis (LDA) methods.
26

Nie, Junlan, Ruibo Gao, and Ye Kang. "Urban Noise Inference Model Based on Multiple Views and Kernel Tensor Decomposition." Fluctuation and Noise Letters 20, no. 3 (January 25, 2021): 2150027. http://dx.doi.org/10.1142/s0219477521500279.

Abstract:
Prediction of urban noise is becoming more significant for tackling noise pollution and protecting human mental health. However, existing noise prediction algorithms neglect not only the correlation between noise regions, but also the nonlinearity and sparsity of the data, which results in low accuracy when filling in the missing entries of the data. In this paper, we propose a model based on multiple views and kernel-matrix tensor decomposition to predict the noise situation at different times of day in each region. We first construct a kernel tensor decomposition model by using kernel mapping in order to speed up decomposition and stabilize estimation of the prediction system. Then, we analyze and compute the causes of the noise from multiple views, including computing the similarity of regions and the correlation between noise categories by kernel distance, which improves the credibility of inferring the noise situation and the categories of regions. Finally, we devise a prediction algorithm based on the kernel-matrix tensor factorization model. We evaluate our method on a real dataset, and the experiments verify its advantages over other existing baselines.
27

Hou, Yuxin, Ari Heljakka, and Arno Solin. "Gaussian Process Priors for View-Aware Inference." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7762–70. http://dx.doi.org/10.1609/aaai.v35i9.16948.

Abstract:
While frame-independent predictions with deep neural networks have become the prominent solution to many computer vision tasks, the potential benefits of utilizing correlations between frames have received less attention. Even though probabilistic machine learning provides the ability to encode correlation as prior knowledge for inference, there is a tangible gap between the theory and practice of applying probabilistic methods to modern vision problems. For this, we derive a principled framework to combine information coupling between camera poses (translation and orientation) with deep models. We propose a novel view kernel that generalizes the standard periodic kernel in SO(3). We show how this soft prior knowledge can aid several pose-related vision tasks, such as novel view synthesis and predicting arbitrary points in the latent space of generative models, pointing towards a range of new applications for inter-frame reasoning.
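
The standard periodic kernel that the proposed view kernel generalizes has a one-line form; a sketch with assumed hyperparameters:

    import numpy as np

    def periodic_kernel(t1, t2, period=2 * np.pi, ell=1.0):
        # standard periodic kernel; the paper extends this construction to
        # camera orientations in SO(3) for view-aware inference
        return np.exp(-2 * np.sin(np.pi * np.abs(t1 - t2) / period) ** 2 / ell**2)
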
28

Maswadah, Mohamed, and Seham Mohamed. "Bayesian Inference on the Generalized Exponential Distribution Based on the Kernel Prior." Science Journal of Applied Mathematics and Statistics 12, no. 2 (May 17, 2024): 29–36. http://dx.doi.org/10.11648/j.sjams.20241202.12.

Abstract:
In this work, we introduce an objective prior based on kernel density estimation to eliminate the subjectivity of Bayesian estimation with respect to information other than the data. To compare the kernel prior with the informative gamma prior, the mean squared error and the mean percentage error of the parameter estimates for the generalized exponential (GE) distribution are studied using both symmetric and asymmetric loss functions via Monte Carlo simulations. The simulation results indicate that the kernel prior outperforms the informative gamma prior. Finally, a numerical example is given to demonstrate the efficiency of the proposed prior.
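
A minimal sketch of a kernel prior in this spirit: a Gaussian kernel density estimate built from a pilot sample of the parameter, with Silverman's rule as an assumed bandwidth choice (the paper's exact construction may differ):

    import numpy as np

    def kde_prior(theta_grid, pilot_sample, h=None):
        # Gaussian KDE evaluated on a parameter grid, usable as a data-driven prior
        x = np.asarray(pilot_sample, dtype=float)
        h = h or 1.06 * x.std() * len(x) ** (-1 / 5)     # Silverman's rule of thumb
        u = (np.asarray(theta_grid)[:, None] - x[None, :]) / h
        return np.exp(-u**2 / 2).sum(1) / (len(x) * h * np.sqrt(2 * np.pi))
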
29

Wang, Qihuan, Haolin Yang, Qianghao He, Dong Yue, Ce Zhang, and Duanyang Geng. "Real-Time Detection System of Broken Corn Kernels Based on BCK-YOLOv7." Agronomy 13, no. 7 (June 28, 2023): 1750. http://dx.doi.org/10.3390/agronomy13071750.

Abstract:
Accurately and effectively measuring the breakage quality of harvested corn kernels is a critical step in the intelligent development of corn harvesters. The detection of broken corn kernels is complicated during the harvesting process due to turbulent corn kernel movement, uneven lighting, and interference from numerous external factors. This paper develops a deep learning-based method for real-time detection of broken corn kernels in response to these issues. The system uses an image acquisition device to continuously acquire high-quality corn kernel image data and cooperates with a deep learning model to realize rapid and accurate recognition of broken corn kernels. First, we defined the range of broken corn kernels based on image characteristics captured by the acquisition device and prepared the corn kernel datasets. The corn kernels in the acquired images were densely distributed, and the highly similar features of broken and whole corn kernels brought challenges for visual recognition. To address this problem, we propose an improved model called BCK-YOLOv7, which is based on YOLOv7. We fine-tuned the model's positive sample matching strategy and added a transformer encoder block module and a coordinate attention mechanism, among other strategies. Ablation experiments demonstrate that our approach improves the BCK-YOLOv7 model's ability to learn broken corn kernel features effectively, even when high-density features are similar. The improved model achieved a precision of 96.9%, a recall of 97.5%, and a mAP of 99.1%, representing respective improvements of 3.7%, 4.3%, and 2.8% over the original YOLOv7 model. To optimize and deploy the BCK-YOLOv7 model to the edge device (NVIDIA Jetson Nano), TensorRT was utilized, achieving an inference speed of 33 FPS. Finally, a simulation system experiment for corn kernel breakage rate detection was performed. The results demonstrate that the system's mean absolute deviation from manual statistical results is merely 0.35 percent. The main contribution of this work is that it is the first to propose a set of deep learning model improvement strategies and methods for rapid and accurate corn kernel detection under conditions of high density and similar features.
30

Zhang, Rui, Christian Walder, and Marian-Andrei Rizoiu. "Variational Inference for Sparse Gaussian Process Modulated Hawkes Process." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6803–10. http://dx.doi.org/10.1609/aaai.v34i04.6160.

Abstract:
The Hawkes process (HP) has been widely applied to modeling self-exciting events, including neuron spikes, earthquakes, and tweets. To avoid designing a parametric triggering kernel and to be able to quantify prediction confidence, the non-parametric Bayesian HP has been proposed. However, inference for such models suffers from poor scalability or slow convergence. In this paper, we aim to solve both problems. Specifically, we first propose a new non-parametric Bayesian HP in which the triggering kernel is modeled as a squared sparse Gaussian process. Then, we propose a novel variational inference schema for model optimization. We employ the branching structure of the HP so that maximization of the evidence lower bound (ELBO) is tractable by the expectation-maximization algorithm. We propose a tighter ELBO which improves the fitting performance. Further, we accelerate the novel variational inference schema to linear time complexity by leveraging the stationarity of the triggering kernel. Different from prior acceleration methods, ours enjoys higher efficiency. Finally, we use synthetic data and two large social media datasets to evaluate our method. We show that our approach outperforms state-of-the-art non-parametric frequentist and Bayesian methods. We validate the efficiency of our accelerated variational inference schema and the practical utility of our tighter ELBO for model selection. We observe that the tighter ELBO exceeds the common one in model selection.
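
For orientation, the Hawkes conditional intensity into which the triggering kernel enters; a toy sketch with an exponential kernel standing in for the paper's squared sparse-GP kernel:

    import numpy as np

    def hawkes_intensity(t, events, mu, phi):
        # self-exciting intensity: baseline plus triggering contributions of past events
        past = events[events < t]
        return mu + phi(t - past).sum()

    events = np.array([0.5, 1.2, 2.0])
    lam = hawkes_intensity(3.0, events, mu=0.1, phi=lambda u: 0.8 * np.exp(-u))
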
31

Cui, Chen, Shengyi Jiang, and Bruno C. d. S. Oliveira. "Greedy Implicit Bounded Quantification." Proceedings of the ACM on Programming Languages 7, OOPSLA2 (October 16, 2023): 2083–111. http://dx.doi.org/10.1145/3622871.

Abstract:
Mainstream object-oriented programming languages such as Java, Scala, C#, or TypeScript have polymorphic type systems with subtyping and bounded quantification. Bounded quantification, despite being a pervasive and widely used feature, has attracted little research work on type-inference algorithms to support it. A notable exception is local type inference, which is the basis of most current implementations of type inference for mainstream languages. However, support for bounded quantification in local type inference has important restrictions, and its non-algorithmic specification is complex. In this paper, we present a variant of kernel F≤, which is the canonical calculus with bounded quantification, with implicit polymorphism. Our variant, called F≤b, comes with a declarative and an algorithmic formulation of the type system. The declarative type system is based on previous work on bidirectional typing for predicative higher-rank polymorphism and a greedy approach to implicit instantiation. This allows for a clear declarative specification where programs require few type annotations and enables implicit polymorphism where applications omit type parameters. Just as in local type inference, explicit type applications are also available in F≤b if desired. This is useful to deal with impredicative instantiations, which would not otherwise be allowed in F≤b. Due to the support for impredicative instantiations, we can obtain a completeness result with respect to kernel F≤, showing that all well-typed kernel F≤ programs type-check in F≤b. The corresponding algorithmic version of the type system is shown to be sound, complete, and decidable. All the results have been mechanically formalized in the Abella theorem prover.
32

Teng, Tong, Jie Chen, Yehong Zhang, and Bryan Kian Hsiang Low. "Scalable Variational Bayesian Kernel Selection for Sparse Gaussian Process Regression." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5997–6004. http://dx.doi.org/10.1609/aaai.v34i04.6061.

Abstract:
This paper presents a variational Bayesian kernel selection (VBKS) algorithm for sparse Gaussian process regression (SGPR) models. In contrast to existing GP kernel selection algorithms that aim to select only one kernel with the highest model evidence, our VBKS algorithm considers the kernel as a random variable and learns its belief from data such that the uncertainty of the kernel can be interpreted and exploited to avoid overconfident GP predictions. To achieve this, we represent the probabilistic kernel as an additional variational variable in a variational inference (VI) framework for SGPR models where its posterior belief is learned together with that of the other variational variables (i.e., inducing variables and kernel hyperparameters). In particular, we transform the discrete kernel belief into a continuous parametric distribution via reparameterization in order to apply VI. Though it is computationally challenging to jointly optimize a large number of hyperparameters due to many kernels being evaluated simultaneously by our VBKS algorithm, we show that the variational lower bound of the log-marginal likelihood can be decomposed into an additive form such that each additive term depends only on a disjoint subset of the variational variables and can thus be optimized independently. Stochastic optimization is then used to maximize the variational lower bound by iteratively improving the variational approximation of the exact posterior belief via stochastic gradient ascent, which incurs constant time per iteration and hence scales to big data. We empirically evaluate the performance of our VBKS algorithm on synthetic and massive real-world datasets.
33

Gudmundarson, Ragnar L., and Gareth W. Peters. "Assessing portfolio diversification via two-sample graph kernel inference. A case study on the influence of ESG screening." PLOS ONE 19, no. 4 (April 16, 2024): e0301804. http://dx.doi.org/10.1371/journal.pone.0301804.

Abstract:
In this work we seek to enhance the frameworks that practitioners in asset management and wealth management may adopt to assess how different screening rules influence the diversification benefits of portfolios. The problem arises naturally in Environmental, Social, and Governance (ESG) based investing, as practitioners often need to select subsets of the total available assets based on an ESG screening rule. Once a screening rule is identified, one constructs a dynamic portfolio, which is usually compared with another dynamic portfolio to check whether it satisfies or outperforms the risk and return profile set by the company. Our study proposes a novel method that tackles the problem of comparing the diversification benefits of portfolios constructed under different screening rules. Each screening rule produces a sequence of graphs, where the nodes are assets and the edges are partial correlations. To compare the diversification benefits of screening rules, we propose to compare the obtained graph sequences. The proposed method is based on a machine learning hypothesis testing framework called the kernel two-sample test, whose objective is to determine whether the graphs come from the same distribution; if they do, the risk and return profiles should be the same. Since the sample data points are graphs, one needs graph testing frameworks, and the problem is natural for kernel two-sample testing as one can use so-called graph kernels to work with samples of graphs. The null hypothesis of the two-sample graph kernel test is that the graph sequences were generated from the same distribution, while the alternative is that the distributions are different. A failure to reject the null hypothesis would indicate that ESG screening does not affect diversification, while rejection would indicate that it does. The article describes the graph kernel two-sample testing framework and provides a brief overview of different graph kernels. We then demonstrate the power of the framework under different realistic scenarios. Finally, the proposed methodology is applied to data from the S&P 500 to demonstrate the workflow one can use in asset management to test for structural differences in the diversification of portfolios under different ESG screening rules.
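
The statistic at the core of the kernel two-sample test is the maximum mean discrepancy (MMD); a sketch of its unbiased squared estimate taking precomputed graph-kernel Gram matrices as input, with the choice of graph kernel left abstract:

    import numpy as np

    def mmd2_unbiased(Kxx, Kyy, Kxy):
        # unbiased MMD^2 from Gram matrices of the two samples and their cross terms
        m, n = len(Kxx), len(Kyy)
        term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
        term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
        return term_x + term_y - 2 * Kxy.mean()

In practice the null distribution of this statistic is approximated by permuting the pooled sample, which is how a rejection threshold is obtained.
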
34

Rocha, Gustavo H. M. A., Rosangela H. Loschi, and Reinaldo B. Arellano-Valle. "Inference in flexible families of distributions with normal kernel." Statistics 47, no. 6 (December 2013): 1184–206. http://dx.doi.org/10.1080/02331888.2012.688207.

35

Gao, Junbin, Paul W. Kwan, and Daming Shi. "Sparse kernel learning with LASSO and Bayesian inference algorithm." Neural Networks 23, no. 2 (March 2010): 257–64. http://dx.doi.org/10.1016/j.neunet.2009.07.001.

36

Capobianco, Enrico. "Kernel methods and flexible inference for complex stochastic dynamics." Physica A: Statistical Mechanics and its Applications 387, no. 16-17 (July 2008): 4077–98. http://dx.doi.org/10.1016/j.physa.2008.03.003.

37

Lam, Clifford, and Jianqing Fan. "Profile-kernel likelihood inference with diverging number of parameters." Annals of Statistics 36, no. 5 (October 2008): 2232–60. http://dx.doi.org/10.1214/07-aos544.

38

Li, Bochong, and Lingchong You. "Stochastic Sensitivity Analysis and Kernel Inference via Distributional Data." Biophysical Journal 107, no. 5 (September 2014): 1247–55. http://dx.doi.org/10.1016/j.bpj.2014.07.025.

39

Li, Degui, Peter C. B. Phillips, and Jiti Gao. "Kernel-based Inference in Time-Varying Coefficient Cointegrating Regression." Journal of Econometrics 215, no. 2 (April 2020): 607–32. http://dx.doi.org/10.1016/j.jeconom.2019.10.005.

40

Patel, Zeel B., Palak Purohit, Harsh M. Patel, Shivam Sahni, and Nipun Batra. "Accurate and Scalable Gaussian Processes for Fine-Grained Air Quality Inference." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12080–88. http://dx.doi.org/10.1609/aaai.v36i11.21467.

Abstract:
Air pollution is a global problem that severely impacts human health. Fine-grained air quality (AQ) monitoring is important for mitigating air pollution. However, existing AQ station deployments are sparse. Conventional interpolation techniques fail to learn the complex AQ phenomena. Physics-based models require domain knowledge and pollution source data for AQ modeling. In this work, we propose a Gaussian process based approach for estimating AQ. The important features of our approach are: a) a non-stationary (NS) kernel to allow input-dependent smoothness of fit; b) a Hamming distance-based kernel for categorical features; and c) a locally periodic kernel to capture temporal periodicity. We leverage batch-wise training to scale our approach to a large amount of data. Our approach outperforms conventional baselines and a state-of-the-art neural attention-based approach.
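
Of the three kernel components, the Hamming distance-based kernel for categorical features is the simplest to illustrate; a sketch in which the exponential form and length-scale are assumptions, not necessarily the paper's exact parameterization:

    import numpy as np

    def hamming_kernel(A, B, ell=1.0):
        # similarity decays with the fraction of mismatched categorical entries
        mismatch = (A[:, None, :] != B[None, :, :]).mean(-1)
        return np.exp(-mismatch / ell)
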
41

Ren, Ming, Chi Cheung, and Gao Xiao. "Gaussian Process Based Bayesian Inference System for Intelligent Surface Measurement." Sensors 18, no. 11 (November 21, 2018): 4069. http://dx.doi.org/10.3390/s18114069.

Abstract:
This paper presents a Gaussian process based Bayesian inference system for the realization of intelligent surface measurement on multi-sensor instruments. The system treats surface measurement as a time series data collection process, and a Gaussian process is used as the mathematical foundation for establishing a plausible inference model that aids the measurement process via multi-feature classification and multi-dataset regression. Multi-feature classification extracts and classifies the geometric features of the measured surfaces at different scales to design an appropriate composite covariance kernel and a corresponding initial sampling strategy. Multi-dataset regression takes the designed covariance kernel as input to fuse the multi-sensor measured datasets with a Gaussian process model, which is further used to adaptively refine the initial sampling strategy by taking the credibility of the fused model as the critical sampling criterion. Hence, intelligent sampling can be realized in a consecutive learning process with full Bayesian treatment. The statistical nature of the Gaussian process model, combined with various powerful covariance kernel functions, offers the system great flexibility for different kinds of complex surfaces.
42

Song, Le, Kenji Fukumizu, and Arthur Gretton. "Kernel Embeddings of Conditional Distributions: A Unified Kernel Framework for Nonparametric Inference in Graphical Models." IEEE Signal Processing Magazine 30, no. 4 (July 2013): 98–111. http://dx.doi.org/10.1109/msp.2013.2252713.

43

González-Vanegas, Wilson, Andrés Álvarez-Meza, José Hernández-Muriel, and Álvaro Orozco-Gutiérrez. "AKL-ABC: An Automatic Approximate Bayesian Computation Approach Based on Kernel Learning." Entropy 21, no. 10 (September 24, 2019): 932. http://dx.doi.org/10.3390/e21100932.

Abstract:
Bayesian statistical inference under unknown or hard-to-assess likelihood functions is a very challenging task. Currently, approximate Bayesian computation (ABC) techniques have emerged as a widely used set of likelihood-free methods. A vast number of ABC-based approaches have appeared in the literature; however, they all share a hard dependence on the selection of free parameters, demanding expensive tuning procedures. In this paper, we introduce an automatic kernel learning-based ABC approach, termed AKL-ABC, to automatically compute posterior estimations from a weighting-based inference. To reach this goal, we propose a kernel learning stage that codes similarities between simulation and parameter spaces using a centered kernel alignment (CKA) that is automated via an information-theoretic learning approach. Besides, a local neighborhood selection (LNS) algorithm is used to highlight local dependencies over simulations relying on graph theory. Results attained on synthetic and real-world datasets show that our approach is quite competitive compared to other non-automatic state-of-the-art ABC techniques.
44

Huh, Jaeseok, Jonghun Park, Dongmin Shin, and Yerim Choi. "A Hierarchical SVM Based Behavior Inference of Human Operators Using a Hybrid Sequence Kernel." Sustainability 11, no. 18 (September 4, 2019): 4836. http://dx.doi.org/10.3390/su11184836.

Abstract:
To train skilled unmanned combat aerial vehicle (UCAV) operators, it is important to establish a real-time training environment where an enemy appropriately responds to the action performed by a trainee. This can be addressed by constructing an inference method for the behavior of a UCAV operator from given simulation log data. Through this method, the virtual enemy is capable of performing actions that are highly likely to be made by an actual operator. To achieve this, we propose a hybrid sequence (HS) kernel-based hierarchical support vector machine (HSVM) for the behavior inference of a UCAV operator. Specifically, the HS kernel is designed to resolve the heterogeneity in simulation log data, and the HSVM performs the behavior inference in a sequential manner, considering the hierarchical structure of the behaviors of a UCAV operator. The effectiveness of the proposed method is demonstrated with log data collected from an air-to-air combat simulator.
45

Lee, Dong-Yeong, Hayotjon Aliev, Muhammad Junaid, Sang-Bo Park, Hyung-Won Kim, Keon-Myung Lee, and Sang-Hoon Sim. "High-Speed CNN Accelerator SoC Design Based on a Flexible Diagonal Cyclic Array." Electronics 13, no. 8 (April 19, 2024): 1564. http://dx.doi.org/10.3390/electronics13081564.

Abstract:
The latest convolutional neural network (CNN) models for object detection include complex layered connections to process inference data. Each layer utilizes different types of kernel modes, so the hardware needs to support all kernel modes at an optimized speed. In this paper, we propose a high-speed and optimized CNN accelerator with flexible diagonal cyclic arrays (FDCA) that supports the acceleration of CNN networks with various kernel sizes and significantly reduces the time required for inference processing. The accelerator uses four FDCAs to simultaneously calculate 16 input channels and 8 output channels. Each FDCA features a 4 × 8 systolic array that contains a 3 × 3 processing element (PE) array and is designed to handle the most commonly used kernel sizes. To evaluate the proposed CNN accelerator, we mapped the widely used YOLOv5 CNN model and evaluated the performance of its implementation on the Zynq UltraScale+ MPSoC ZCU102 FPGA. The design consumes 249,357 logic cells, 2304 DSP blocks, and only 567 KB of BRAM. In our evaluation, the YOLOv5n model achieves an accuracy of 43.1% (mAP@0.5). A prototype accelerator has been implemented using Samsung's 14 nm CMOS technology and achieves a peak performance of 1.075 TOPS at a 400 MHz clock frequency.
46

Mohanty, Pete, and Robert Shaffer. "Messy Data, Robust Inference? Navigating Obstacles to Inference with bigKRLS." Political Analysis 27, no. 2 (September 26, 2018): 127–44. http://dx.doi.org/10.1017/pan.2018.33.

Abstract:
Complex models are of increasing interest to social scientists. Researchers interested in prediction generally favor flexible, robust approaches, while those interested in causation are often interested in modeling nuanced treatment structures and confounding relationships. Unfortunately, estimators of complex models often scale poorly, especially if they seek to maintain interpretability. In this paper, we present an example of such a conundrum and show how optimization can alleviate the worst of these concerns. Specifically, we introduce bigKRLS, which offers a variety of statistical and computational improvements to the Hainmueller and Hazlett (2013) Kernel-Regularized Least Squares (KRLS) approach. As part of our improvements, we decrease the estimator’s single-core runtime by 50% and reduce the estimator’s peak memory usage by an order of magnitude. We also improve uncertainty estimates for the model’s average marginal effect estimates—which we test both in simulation and in practice—and introduce new visual and statistical tools designed to assist with inference under the model. We further demonstrate the value of our improvements through an analysis of the 2016 presidential election, an analysis that would have been impractical or even infeasible for many users with existing software.
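
The estimator that bigKRLS accelerates has a simple closed form: with a Gaussian kernel Gram matrix K, the coefficients solve (K + lambda*I)c = y. A minimal sketch with illustrative hyperparameters:

    import numpy as np

    def krls_fit(X, y, lam=1.0, s=1.0):
        # kernel-regularized least squares: closed-form coefficient vector c;
        # a prediction at a new point x* is then sum_i c_i * k(x_i, x*)
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / s**2)                       # Gaussian kernel Gram matrix
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

The naive solve above costs O(n^3) time and O(n^2) memory, which is precisely the scaling obstacle the paper's computational improvements target.
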
47

Dixit, Purushottam D. "Introducing User-Prescribed Constraints in Markov Chains for Nonlinear Dimensionality Reduction." Neural Computation 31, no. 5 (May 2019): 980–97. http://dx.doi.org/10.1162/neco_a_01184.

Abstract:
Stochastic kernel-based dimensionality-reduction approaches have become popular in the past decade. The central component of many of these methods is a symmetric kernel that quantifies the vicinity between pairs of data points and a kernel-induced Markov chain on the data. Typically, the Markov chain is fully specified by the kernel through row normalization. However, in many cases, it is desirable to impose user-specified stationary-state and dynamical constraints on the Markov chain. Unfortunately, no systematic framework exists to impose such user-defined constraints. Here, based on our previous work on inference of Markov models, we introduce a path entropy maximization based approach to derive the transition probabilities of Markov chains using a kernel and additional user-specified constraints. We illustrate the usefulness of these Markov chains with examples.
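
The standard construction the paper generalizes is one line: row-normalizing the symmetric kernel matrix yields the transition matrix of a Markov chain on the data points. A minimal sketch:

    import numpy as np

    def kernel_markov_chain(K):
        # transition probabilities proportional to kernel affinities in each row;
        # the paper replaces this step with a path-entropy-maximizing chain that
        # also honors user-specified stationary-state and dynamical constraints
        return K / K.sum(axis=1, keepdims=True)
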
48

Ueda, K. "Design of the Kernel Language for the Parallel Inference Machine." Computer Journal 33, no. 6 (June 1, 1990): 494–500. http://dx.doi.org/10.1093/comjnl/33.6.494.

49

Tsionas, Efthymios G. "Bayesian inference in time series models using kernel quasi likelihoods." Statistica Neerlandica 56, no. 3 (August 2002): 285–94. http://dx.doi.org/10.1111/1467-9574.04800.

50

Cai, Qianfeng, Zhifeng Hao, and Xiaowei Yang. "Gaussian kernel-based fuzzy inference systems for high dimensional regression." Neurocomputing 77, no. 1 (February 2012): 197–204. http://dx.doi.org/10.1016/j.neucom.2011.09.005.

