Journal articles on the topic "Variable sparsity kernel learning"

To view other types of publications on this topic, follow the link: Variable sparsity kernel learning.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Variable sparsity kernel learning".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the research publication as a .pdf file and read its abstract online, when these are available in the metadata.

Browse journal articles across a wide variety of disciplines and compile your bibliography correctly.

1

Chen, Jingxiang, Chong Zhang, Michael R. Kosorok, and Yufeng Liu. "Double sparsity kernel learning with automatic variable selection and data extraction." Statistics and Its Interface 11, no. 3 (2018): 401–20. http://dx.doi.org/10.4310/sii.2018.v11.n3.a1.

2

Huang, Yuan, and Shuangge Ma. "Discussion on “Double sparsity kernel learning with automatic variable selection and data extraction”." Statistics and Its Interface 11, no. 3 (2018): 421–22. http://dx.doi.org/10.4310/sii.2018.v11.n3.a2.

3

Liu, Meimei, and Guang Cheng. "Discussion on “Double sparsity kernel learning with automatic variable selection and data extraction”." Statistics and Its Interface 11, no. 3 (2018): 423–24. http://dx.doi.org/10.4310/sii.2018.v11.n3.a3.

4

Zhang, Hao Helen. "Discussion on “Doubly sparsity kernel learning with automatic variable selection and data extraction”." Statistics and Its Interface 11, no. 3 (2018): 425–28. http://dx.doi.org/10.4310/sii.2018.v11.n3.a4.

5

Chen, Jingxiang, Chong Zhang, Michael R. Kosorok, and Yufeng Liu. "Rejoinder of “Double sparsity kernel learning with automatic variable selection and data extraction”." Statistics and Its Interface 11, no. 3 (2018): 429–31. http://dx.doi.org/10.4310/sii.2018.v11.n3.a5.

6

Wang, Shuangyue, and Ziyan Luo. "Sparse Support Tensor Machine with Scaled Kernel Functions." Mathematics 11, no. 13 (June 24, 2023): 2829. http://dx.doi.org/10.3390/math11132829.

Abstract:
As one of the supervised tensor learning methods, the support tensor machine (STM) for tensorial data classification is receiving increasing attention in machine learning and related applications, including remote sensing imaging, video processing, fault diagnosis, etc. Existing STM approaches lack consideration for support tensors in terms of data reduction. To address this deficiency, we built a novel sparse STM model to control the number of support tensors in the binary classification of tensorial data. The sparsity is imposed on the dual variables in the context of the feature space, which facilitates the nonlinear classification with kernel tricks, such as the widely used Gaussian RBF kernel. To alleviate the local risk associated with the constant width in the tensor Gaussian RBF kernel, we propose a two-stage classification approach; in the second stage, we advocate for a scaling strategy on the kernel function in a data-dependent way, using the information of the support tensors obtained from the first stage. The essential optimization models in both stages share the same type, which is non-convex and discontinuous, due to the sparsity constraint. To resolve the computational challenge, a subspace Newton method is tailored for the sparsity-constrained optimization for effective computation with local convergence. Numerical experiments were conducted on real datasets, and the numerical results demonstrate the effectiveness of our proposed two-stage sparse STM approach in terms of classification accuracy, compared with the state-of-the-art binary classification approaches.
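As a rough illustration of the scaled-kernel idea in the abstract above (a generic sketch, not the authors' algorithm; the `rbf_kernel` helper and its `widths` argument are invented for illustration), a Gaussian RBF kernel can be given a data-dependent width per support point:

```python
import numpy as np

def rbf_kernel(X, Y, widths):
    """Gaussian RBF kernel whose width may vary per column (support point).

    A constant `widths` array recovers the ordinary fixed-width RBF kernel;
    varying it per support point is one way to make the kernel data-dependent.
    """
    # squared Euclidean distances between all rows of X and Y
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * widths[None, :] ** 2))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 0.0]])
K_fixed = rbf_kernel(X, Y, np.array([1.0]))    # constant width
K_wide = rbf_kernel(X, Y, np.array([10.0]))    # scaled-up width
# a wider kernel discriminates less between near and far points
assert K_fixed[0, 0] == 1.0
assert K_wide[1, 0] > K_fixed[1, 0]
```

In a two-stage scheme like the one described, the widths for the second stage would be chosen from the support tensors found in the first.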
7

Pan, Chao, Cheng Shi, Honglang Mu, Jie Li, and Xinbo Gao. "EEG-Based Emotion Recognition Using Logistic Regression with Gaussian Kernel and Laplacian Prior and Investigation of Critical Frequency Bands." Applied Sciences 10, no. 5 (February 29, 2020): 1619. http://dx.doi.org/10.3390/app10051619.

Abstract:
Emotion plays a central part in human attention, decision-making, and communication. Electroencephalogram (EEG)-based emotion recognition has developed considerably due to the application of the Brain-Computer Interface (BCI) and its effectiveness compared to body expressions and other physiological signals. Despite significant progress in affective computing, emotion recognition is still an underexplored problem. This paper introduced Logistic Regression (LR) with a Gaussian kernel and Laplacian prior for EEG-based emotion recognition. The Gaussian kernel enhances the EEG data separability in the transformed space. The Laplacian prior promotes the sparsity of the learned LR regressors to avoid over-specification. The LR regressors are optimized using the logistic regression via variable splitting and augmented Lagrangian (LORSAL) algorithm. For simplicity, the introduced method is denoted LORSAL. Experiments were conducted on the dataset for emotion analysis using EEG, physiological and video signals (DEAP). Various spectral features and features combining electrodes (power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM), rational asymmetry (RASM), and differential caudality (DCAU)) were extracted from different frequency bands (Delta, Theta, Alpha, Beta, Gamma, and Total) of the EEG signals. Naive Bayes (NB), the support vector machine (SVM), linear LR with L1-regularization (LR_L1), and linear LR with L2-regularization (LR_L2) were used for comparison in the binary emotion classification for valence and arousal. LORSAL obtained the best classification accuracies (77.17% and 77.03% for valence and arousal, respectively) on the DE features extracted from the total frequency band. This paper also investigates the critical frequency bands in emotion recognition. The experimental results showed the superiority of the Gamma and Beta bands in classifying emotions. DE was the most informative feature, while DASM and DCAU had lower computational complexity with relatively ideal accuracies. An analysis of LORSAL and recent deep learning (DL) methods is included in the discussion. Conclusions and future work are presented in the final section.
8

Koltchinskii, Vladimir, and Ming Yuan. "Sparsity in multiple kernel learning." Annals of Statistics 38, no. 6 (December 2010): 3660–95. http://dx.doi.org/10.1214/10-aos825.

9

Jiang, Zhengxiong, Yingsong Li, Xinqi Huang, and Zhan Jin. "A Sparsity-Aware Variable Kernel Width Proportionate Affine Projection Algorithm for Identifying Sparse Systems." Symmetry 11, no. 10 (October 1, 2019): 1218. http://dx.doi.org/10.3390/sym11101218.

10

Yuan, Ying, Weiming Lu, Fei Wu, and Yueting Zhuang. "Multiple kernel learning with NOn-conVex group spArsity." Journal of Visual Communication and Image Representation 25, no. 7 (October 2014): 1616–24. http://dx.doi.org/10.1016/j.jvcir.2014.08.001.

11

Zhang, Lijun, Rong Jin, Chun Chen, Jiajun Bu, and Xiaofei He. "Efficient Online Learning for Large-Scale Sparse Kernel Logistic Regression." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1219–25. http://dx.doi.org/10.1609/aaai.v26i1.8300.

Abstract:
In this paper, we study the problem of large-scale Kernel Logistic Regression (KLR). A straightforward approach is to apply stochastic approximation to KLR. We refer to this approach as non-conservative online learning algorithm because it updates the kernel classifier after every received training example, leading to a dense classifier. To improve the sparsity of the KLR classifier, we propose two conservative online learning algorithms that update the classifier in a stochastic manner and generate sparse solutions. With appropriately designed updating strategies, our analysis shows that the two conservative algorithms enjoy similar theoretical guarantee as that of the non-conservative algorithm. Empirical studies on several benchmark data sets demonstrate that compared to batch-mode algorithms for KLR, the proposed conservative online learning algorithms are able to produce sparse KLR classifiers, and achieve similar classification accuracy but with significantly shorter training time. Furthermore, both the sparsity and classification accuracy of our methods are comparable to those of the online kernel SVM.
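The conservative-versus-non-conservative contrast in the abstract can be sketched with a perceptron-style stand-in (not the paper's algorithm; the update rule and `rbf` helper here are invented for illustration): updating only on mistakes keeps the dual expansion, and hence the classifier, sparse.

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def conservative_online(X, y, eta=0.5):
    """Perceptron-style stand-in for a conservative online kernel learner:
    the dual expansion grows only when the current example is not already
    classified correctly, so the resulting classifier stays sparse."""
    support, alpha, updates = [], [], 0
    for x, label in zip(X, y):
        score = sum(a * rbf(x, s) for a, s in zip(alpha, support))
        if label * score <= 0:       # conservative: update only on a mistake
            support.append(x)
            alpha.append(eta * label)
            updates += 1
    return support, alpha, updates

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] > 0, 1, -1)
support, alpha, updates = conservative_online(X, y)
# a non-conservative learner would update (and densify) on all 100 examples
assert updates < len(X)
```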
12

Peng, Jialin, Xiaofeng Zhu, Ye Wang, Le An, and Dinggang Shen. "Structured sparsity regularized multiple kernel learning for Alzheimer’s disease diagnosis." Pattern Recognition 88 (April 2019): 370–82. http://dx.doi.org/10.1016/j.patcog.2018.11.027.

13

Suzuki, Taiji, and Masashi Sugiyama. "Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness." Annals of Statistics 41, no. 3 (June 2013): 1381–405. http://dx.doi.org/10.1214/13-aos1095.

14

Yu, Han, Peng Cui, Yue He, Zheyan Shen, Yong Lin, Renzhe Xu, and Xingxuan Zhang. "Stable Learning via Sparse Variable Independence." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10998–11006. http://dx.doi.org/10.1609/aaai.v37i9.26303.

Abstract:
The problem of covariate-shift generalization has attracted intensive research attention. Previous stable learning algorithms employ sample reweighting schemes to decorrelate the covariates when there is no explicit domain information about training data. However, with finite samples, it is difficult to achieve the desirable weights that ensure perfect independence to get rid of the unstable variables. Besides, decorrelating within stable variables may bring about high variance of learned models because of the over-reduced effective sample size. A tremendous sample size is required for these algorithms to work. In this paper, with theoretical justification, we propose SVI (Sparse Variable Independence) for the covariate-shift generalization problem. We introduce sparsity constraint to compensate for the imperfectness of sample reweighting under the finite-sample setting in previous methods. Furthermore, we organically combine independence-based sample reweighting and sparsity-based variable selection in an iterative way to avoid decorrelating within stable variables, increasing the effective sample size to alleviate variance inflation. Experiments on both synthetic and real-world datasets demonstrate the improvement of covariate-shift generalization performance brought by SVI.
15

Kumar, Kuldeep, Kaleem Siddiqi, and Christian Desrosiers. "White matter fiber analysis using kernel dictionary learning and sparsity priors." Pattern Recognition 95 (November 2019): 83–95. http://dx.doi.org/10.1016/j.patcog.2019.06.002.

16

Tong, Anh, Toan M. Tran, Hung Bui, and Jaesik Choi. "Learning Compositional Sparse Gaussian Processes with a Shrinkage Prior." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9906–14. http://dx.doi.org/10.1609/aaai.v35i11.17190.

Abstract:
Choosing a proper set of kernel functions is an important problem in learning Gaussian Process (GP) models, since each kernel structure has different model complexity and data fitness. Recently, automatic kernel composition methods have provided not only accurate prediction but also attractive interpretability through search-based methods. However, existing methods suffer from slow kernel composition learning. To tackle large-scale data, we propose a new sparse approximate posterior for GPs, MultiSVGP, constructed from groups of inducing points associated with the individual additive kernels in compositional kernels. We demonstrate that this approximation provides a better fit for learning compositional kernels given empirical observations. We also provide theoretical justification for the error bound compared to the traditional sparse GP. In contrast to the search-based approach, we present a novel probabilistic algorithm that learns a kernel composition by handling the sparsity of the kernel selection with a Horseshoe prior. We demonstrate that our model can capture the characteristics of time series with significant reductions in computational time and has competitive regression performance on real-world data sets.
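The compositional-kernel idea above can be sketched independently of the paper's MultiSVGP machinery (a hedged illustration; the particular base kernels and hyperparameters below are arbitrary choices): a GP covariance built as a sum of base kernels is itself a valid covariance, and in a sparse approximation each summand could receive its own group of inducing points.

```python
import numpy as np

def rbf(x, y, ls):
    """Squared-exponential base kernel on 1-D inputs."""
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ls ** 2)

def periodic(x, y, period, ls):
    """Periodic base kernel on 1-D inputs."""
    d = np.abs(x[:, None] - y[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ls ** 2)

x = np.linspace(0.0, 4.0, 50)
# additive composition: smooth trend component + periodic component
K = rbf(x, x, ls=1.0) + periodic(x, x, period=1.0, ls=0.7)

# a valid GP covariance is symmetric positive semi-definite
assert np.allclose(K, K.T)
jitter = 1e-8 * np.eye(len(x))
assert np.min(np.linalg.eigvalsh(K + jitter)) > 0.0
```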
17

Niu, Wei, Mengshu Sun, Zhengang Li, Jou-An Chen, Jiexiong Guan, Xipeng Shen, Yanzhi Wang, Sijia Liu, Xue Lin, and Bin Ren. "RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9179–87. http://dx.doi.org/10.1609/aaai.v35i10.17108.

Abstract:
Mobile devices are becoming an important carrier for deep learning tasks, as they are being equipped with powerful, high-end mobile CPUs and GPUs. However, it is still a challenging task to execute 3D Convolutional Neural Networks (CNNs) targeting real-time performance, besides high inference accuracy, because the more complex model structure and higher model dimensionality overwhelm the available computation/storage resources on mobile devices. A natural approach is to turn to deep learning weight pruning techniques. However, the direct generalization of existing 2D CNN weight pruning methods to 3D CNNs is not ideal for fully exploiting mobile parallelism while achieving high inference accuracy. This paper proposes RT3D, a model compression and mobile acceleration framework for 3D CNNs, seamlessly integrating neural network weight pruning and compiler code generation techniques. We propose and investigate two structured sparsity schemes, i.e., the vanilla structured sparsity and kernel group structured (KGS) sparsity, that are mobile-acceleration friendly. The vanilla sparsity removes whole kernel groups, while KGS sparsity is a more fine-grained structured sparsity that enjoys higher flexibility while exploiting full on-device parallelism. We propose a reweighted regularization pruning algorithm to achieve the proposed sparsity schemes. The inference time speedup due to sparsity approaches the pruning rate of the whole model's FLOPs (floating point operations). RT3D demonstrates up to 29.1x speedup in end-to-end inference time compared with current mobile frameworks supporting 3D CNNs, with a moderate 1%~1.5% accuracy loss. The end-to-end inference time for 16 video frames can be within 150 ms when executing representative C3D and R(2+1)D models on a cellphone. For the first time, real-time execution of 3D CNNs is achieved on off-the-shelf mobiles.
18

Babeetha, S., B. Muruganantham, S. Ganesh Kumar, and A. Murugan. "An enhanced kernel weighted collaborative recommended system to alleviate sparsity." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 1 (February 1, 2020): 447. http://dx.doi.org/10.11591/ijece.v10i1.pp447-454.

Abstract:
User reviews in the form of ratings give an opportunity to judge users' interest in the available products and a chance to recommend new similar items to customers. Personalized recommender techniques play a vital role in this growing e-commerce era in predicting users' interests. The Collaborative Filtering (CF) system is one of the widely used democratic recommender systems, relying entirely on user ratings to provide recommendations for users. In this paper, an enhanced Collaborative Filtering system is proposed using a Kernel Weighted K-means Clustering (KWKC) approach with Radial Basis Functions (RBF) to eliminate the sparsity problem, where lack of ratings makes it challenging to provide accurate recommendations to the user. The proposed system has two phases of state transitions: Connected and Disconnected. During the Connected state, the form of transition is 'Recommended mode', where the active user is given the predicted-recommended items. In the Disconnected state, the form of transition is 'Learning mode', where the hybrid learning approach and user clusters are used to define similar user models. Disconnected-state activities are performed in the hidden layer of the RBF and Connected-state activities in the output layer; the input layer of the RBF uses the original user ratings. The proposed KWKC is used to smooth the sparse original rating matrix and define similar user clusters. A benchmark comparative study is also made against classical learning and prediction techniques in terms of accuracy and computational time. The experimental setup uses the MovieLens dataset.
19

Ge, Ting, Tianming Zhan, Qinfeng Li, and Shanxiang Mu. "Optimal Superpixel Kernel-Based Kernel Low-Rank and Sparsity Representation for Brain Tumour Segmentation." Computational Intelligence and Neuroscience 2022 (June 24, 2022): 1–12. http://dx.doi.org/10.1155/2022/3514988.

Abstract:
Given the need for quantitative measurement and 3D visualisation of brain tumours, more and more attention has been paid to the automatic segmentation of tumour regions from brain tumour magnetic resonance (MR) images. In view of the uneven grey distribution of MR images and the fuzzy boundaries of brain tumours, a representation model based on the joint constraints of kernel low-rank and sparsity (KLRR-SR) is proposed to mine the characteristics and structural prior knowledge of brain tumour image in the spectral kernel space. In addition, the optimal kernel based on superpixel uniform regions and multikernel learning (MKL) is constructed to improve the accuracy of the pairwise similarity measurement of pixels in the kernel space. By introducing the optimal kernel into KLRR-SR, the coefficient matrix can be solved, which allows brain tumour segmentation results to conform with the spatial information of the image. The experimental results demonstrate that the segmentation accuracy of the proposed method is superior to several existing methods under different indicators and that the sparsity constraint for the coefficient matrix in the kernel space, which is integrated into the kernel low-rank model, has certain effects in preserving the local structure and details of brain tumours.
20

Lin, Shaobo, Jinshan Zeng, Jian Fang, and Zongben Xu. "Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel." Neural Computation 26, no. 10 (October 2014): 2350–78. http://dx.doi.org/10.1162/neco_a_00641.

Abstract:
Regularization is a well-recognized powerful strategy to improve the performance of a learning machine and lq regularization schemes with [Formula: see text] are central in use. It is known that different q leads to different properties of the deduced estimators, say, l2 regularization leads to a smooth estimator, while l1 regularization leads to a sparse estimator. Then how the generalization capability of lq regularization learning varies with q is worthy of investigation. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing lq coefficient regularization schemes in the sample-dependent hypothesis space associated with a gaussian kernel can attain the same almost optimal learning rates for all [Formula: see text]. That is, the upper and lower bounds of learning rates for lq regularization learning are asymptotically identical for all [Formula: see text]. Our finding tentatively reveals that in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be arbitrarily specified, or specified merely by other nongeneralization criteria like smoothness, computational complexity or sparsity.
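The contrast the abstract draws between different q values can be seen in a toy version of coefficient regularization in a Gaussian-kernel, sample-dependent hypothesis space (a sketch with scikit-learn stand-ins: Lasso for q = 1, Ridge for q = 2; not the estimators analysed in the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(80, 1))
y = np.sin(np.pi * X[:, 0])

# sample-dependent hypothesis space: one Gaussian basis function per sample
K = rbf_kernel(X, X, gamma=2.0)

sparse_fit = Lasso(alpha=1e-3, max_iter=50000).fit(K, y)   # q = 1: sparse
smooth_fit = Ridge(alpha=1e-3).fit(K, y)                   # q = 2: smooth, dense

# both achieve a comparable (near-perfect) training fit,
# but only the l1 scheme zeroes out coefficients
assert np.mean(sparse_fit.coef_ == 0) > np.mean(smooth_fit.coef_ == 0)
assert sparse_fit.score(K, y) > 0.9 and smooth_fit.score(K, y) > 0.9
```

The similar fit quality at very different sparsity levels echoes the paper's finding that q may matter little for generalization and can be chosen by other criteria.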
21

Matsui, Kota, Wataru Kumagai, Kenta Kanamori, Mitsuaki Nishikimi, and Takafumi Kanamori. "Variable Selection for Nonparametric Learning with Power Series Kernels." Neural Computation 31, no. 8 (August 2019): 1718–50. http://dx.doi.org/10.1162/neco_a_01212.

Abstract:
In this letter, we propose a variable selection method for general nonparametric kernel-based estimation. The proposed method consists of two-stage estimation: (1) construct a consistent estimator of the target function, and (2) approximate the estimator using a few variables by [Formula: see text]-type penalized estimation. We see that the proposed method can be applied to various kernel nonparametric estimation such as kernel ridge regression, kernel-based density, and density-ratio estimation. We prove that the proposed method has the property of variable selection consistency when the power series kernel is used. Here, the power series kernel is a certain class of kernels containing polynomial and exponential kernels. This result is regarded as an extension of the variable selection consistency for the nonnegative garrote (NNG), a special case of the adaptive Lasso, to the kernel-based estimators. Several experiments, including simulation studies and real data applications, show the effectiveness of the proposed method.
22

Wilkinson, Lucas, Kazem Cheshmi, and Maryam Mehri Dehnavi. "Register Tiling for Unstructured Sparsity in Neural Network Inference." Proceedings of the ACM on Programming Languages 7, PLDI (June 6, 2023): 1995–2020. http://dx.doi.org/10.1145/3591302.

Abstract:
Unstructured sparse neural networks are an important class of machine learning (ML) models, as they compact model size and reduce floating point operations. The execution time of these models is frequently dominated by the sparse matrix multiplication (SpMM) kernel, C = A × B , where A is a sparse matrix, and B and C are dense matrices. The unstructured sparsity pattern of matrices in pruned machine learning models along with their sparsity ratio has rendered useless the large class of libraries and systems that optimize sparse matrix multiplications. Reusing registers is particularly difficult because accesses to memory locations should be known statically. This paper proposes Sparse Register Tiling, a new technique composed of an unroll-and-sparse-jam transformation followed by data compression that is specifically tailored to sparsity patterns in ML matrices. Unroll-and-sparse-jam uses sparsity information to jam the code while improving register reuse. Sparse register tiling is evaluated across 2396 weight matrices from transformer and convolutional models with a sparsity range of 60-95% and provides an average speedup of 1.72× and 2.65× over MKL SpMM and dense matrix multiplication, respectively, on a multicore CPU processor. It also provides an end-to-end speedup of 2.12× for MobileNetV1 with 70% sparsity on an ARM processor commonly used in edge devices.
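The SpMM kernel at the centre of the abstract above is easy to state (a generic SciPy sketch of C = A × B with an unstructured-sparse A; it shows only the kernel itself, not the paper's register-tiled code generation):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
# a pruned weight matrix with roughly 90% unstructured sparsity
A_dense = rng.normal(size=(64, 128))
A_dense[rng.random(A_dense.shape) < 0.9] = 0.0
B = rng.normal(size=(128, 32))

A = sparse.csr_matrix(A_dense)   # store only the surviving weights
C = A @ B                        # the SpMM kernel: C = A x B

assert np.allclose(C, A_dense @ B)    # same result as the dense product
assert A.nnz < 0.2 * A_dense.size     # but far fewer stored weights
```

General-purpose CSR routines like this are exactly what the paper reports performing poorly at ML sparsity ranges, motivating its sparsity-pattern-specific code generation.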
23

Sun, Yiheng, Tian Lu, Cong Wang, Yuan Li, Huaiyu Fu, Jingran Dong, and Yunjie Xu. "TransBoost: A Boosting-Tree Kernel Transfer Learning Algorithm for Improving Financial Inclusion." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12181–90. http://dx.doi.org/10.1609/aaai.v36i11.21478.

Abstract:
The prosperity of mobile and financial technologies has bred and expanded various kinds of financial products to a broader scope of people, which contributes to financial inclusion. It brings non-trivial social benefits of diminishing financial inequality. However, the technical challenges in individual financial risk evaluation, exacerbated by the unforeseen user characteristic distribution and limited credit history of new users, as well as the inexperience of newly-entered companies in handling complex data and obtaining accurate labels, impede further promotion of financial inclusion. To tackle these challenges, this paper develops a novel transfer learning algorithm (i.e., TransBoost) that combines the merits of tree-based models and kernel methods. The TransBoost is designed with a parallel tree structure and an efficient weight-updating mechanism with theoretical guarantee, which enables it to excel in tackling real-world data with high-dimensional features and sparsity in O(n) time complexity. We conduct extensive experiments on two public datasets and a unique large-scale dataset from Tencent Mobile Payment. The results show that the TransBoost outperforms other state-of-the-art benchmark transfer learning algorithms in terms of prediction accuracy with superior efficiency, demonstrates stronger robustness to data sparsity, and provides meaningful model interpretation. Besides, given a financial risk level, the TransBoost enables financial service providers to serve the largest number of users, including those who would otherwise be excluded by other algorithms. That is, the TransBoost improves financial inclusion.
24

Xia, Yifan, Yongchao Hou, and Shaogao Lv. "Learning rates for partially linear support vector machine in high dimensions." Analysis and Applications 19, no. 01 (October 28, 2020): 167–82. http://dx.doi.org/10.1142/s0219530520400126.

Abstract:
This paper analyzes a new regularized learning scheme for high-dimensional partially linear support vector machine (SVM). The proposed approach consists of an empirical risk and the Lasso-type penalty for linear part, as well as the standard functional norm for nonlinear part. Here, the linear kernel is used for model interpretation and feature selection, while the nonlinear kernel is adopted to enhance algorithmic flexibility. In this paper, we develop a new technical analysis on the weighted empirical process, and establish the sharp learning rates for the semi-parametric estimator under the regularized conditions. Specially, our derived learning rates for semi-parametric SVM depend on not only the sample size and the functional complexity, but also the sparsity and the margin parameters.
25

Ma, Wenlu, and Han Liu. "Least Squares Support Vector Machine Regression Based on Sparse Samples and Mixture Kernel Learning." Information Technology and Control 50, no. 2 (June 17, 2021): 319–31. http://dx.doi.org/10.5755/j01.itc.50.2.27752.

Abstract:
Least squares support vector machine (LSSVM) is a machine learning algorithm based on statistical theory. Its advantages include robustness and calculation simplicity, and it has good performance in the data processing of small samples. Because the LSSVM model lacks sparsity and is unable to handle large-scale data problems, this article proposes an LSSVM method based on mixture kernel learning and sparse samples. This algorithm reduces the initial training set to a sub-dataset using a sparse selection strategy. It converts the single kernel function in the LSSVM model into a mixed kernel function and optimizes its parameters. The reduced sub-dataset is used for training LSSVM. Finally, a group of datasets in the UCI Machine Learning Repository were used to verify the effectiveness of the proposed algorithm, which is applied to real-world power load data to achieve better fitting and improve the prediction accuracy.
26

Gu, Yanfeng, Guoming Gao, Deshan Zuo, and Di You. "Model Selection and Classification With Multiple Kernel Learning for Hyperspectral Images via Sparsity." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 7, no. 6 (June 2014): 2119–30. http://dx.doi.org/10.1109/jstars.2014.2318181.

27

Liu, Chang, Lixin Tang, and Jiyin Liu. "Least squares support vector machine with self-organizing multiple kernel learning and sparsity." Neurocomputing 331 (February 2019): 493–504. http://dx.doi.org/10.1016/j.neucom.2018.11.067.

28

Dong, Xue-Mei, Hao Weng, Jian Shi, and Yinhe Gu. "Randomized multi-scale kernels learning with sparsity constraint regularization for regression." International Journal of Wavelets, Multiresolution and Information Processing 17, no. 06 (November 2019): 1950048. http://dx.doi.org/10.1142/s0219691319500486.

Abstract:
This paper presents a simple multiple kernel learning framework for complicated data modeling, where randomized multi-scale Gaussian kernels are employed as base kernels and a [Formula: see text]-norm regularizer is integrated as a sparsity constraint for the solution. The randomly pre-chosen scales provide random basis functions with diverse approximation ability and lead to extremely low computational complexity in finding the optimal solution. The random parameter appearing in the probability distribution and the regularizing factor are decided by the training data with cross-validation techniques, and the combination weights are solved by a well-posed linear system. Comparison experiments of six learning algorithms on one function approximation and three real-world regression problems are carried out. The way that multi-scale kernels fit the objective function is illustrated, and the sparsity and system robustness analysis with respect to the regularizing factor are given.
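The ingredients above (randomly pre-chosen multi-scale Gaussian bases plus an l1-type sparsity constraint) can be sketched generically (not the paper's framework; the centres, scales, and the use of scikit-learn's Lasso as the sparsity-constrained solver are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(120, 1))
y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.normal(size=120)

# randomized multi-scale Gaussian base kernels: random centres and scales
centres = rng.uniform(-2.0, 2.0, size=40)
scales = rng.uniform(0.2, 2.0, size=40)
Phi = np.exp(-((X - centres) ** 2) / (2.0 * scales ** 2))   # (120, 40) basis

# l1-norm regularizer as the sparsity constraint on the combination weights
model = Lasso(alpha=5e-3, max_iter=50000).fit(Phi, y)

assert np.sum(model.coef_ != 0) < Phi.shape[1]   # only some kernels survive
assert model.score(Phi, y) > 0.9                 # while the fit stays good
```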
29

Stephens, Hunter, Q. Jackie Wu, and Qiuwen Wu. "Introducing matrix sparsity with kernel truncation into dose calculations for fluence optimization." Biomedical Physics & Engineering Express 8, no. 1 (November 12, 2021): 017001. http://dx.doi.org/10.1088/2057-1976/ac35f8.

Abstract:
Deep learning algorithms for radiation therapy treatment planning automation require large patient datasets and complex architectures that often take hundreds of hours to train. Some of these algorithms require constant dose updating (such as with reinforcement learning) and may take days. When these algorithms rely on commercial treatment planning systems to perform dose calculations, the data pipeline becomes the bottleneck of the entire algorithm's efficiency. Further, uniformly accurate distributions are not always needed for training, and approximations can be introduced to speed up the process without affecting the outcome. These approximations not only speed up the calculation process but allow custom algorithms to be written specifically for AI/ML applications in which the dose and fluence must be calculated many times for many different situations. Here we present and investigate the effect of introducing matrix sparsity through kernel truncation on the dose calculation for the purposes of fluence optimization within these AI/ML algorithms. The algorithm relies on voxel discrimination, in which numerous voxels are pruned from the computationally expensive part of the calculation. This results in a significant reduction in computation time and storage. Comparing our dose calculation against calculations in both a water phantom and patient anatomy in Eclipse without heterogeneity corrections produced gamma index passing rates around 99% for individual and composite beams with uniform fluence and around 98% for beams with a modulated fluence. The resulting sparsity introduces a reduction in computational time and space proportional to the square of the sparsity tolerance, with a potential decrease in cost greater than 10 times that of a dense calculation, allowing not only for faster calculations but for calculations that a dense algorithm could not perform on the same system.
30

Jing, Yongjun, Hao Wang, Kun Shao, Xing Huo, and Yangyang Zhang. "Unsupervised Graph Representation Learning With Variable Heat Kernel." IEEE Access 8 (2020): 15800–15811. http://dx.doi.org/10.1109/access.2020.2966409.

31

Lowe, David G. "Similarity Metric Learning for a Variable-Kernel Classifier." Neural Computation 7, no. 1 (January 1995): 72–85. http://dx.doi.org/10.1162/neco.1995.7.1.72.

Abstract:
Nearest-neighbor interpolation algorithms have many useful properties for applications to learning, but they often exhibit poor generalization. In this paper, it is shown that much better generalization can be obtained by using a variable interpolation kernel in combination with conjugate gradient optimization of the similarity metric and kernel size. The resulting method is called variable-kernel similarity metric (VSM) learning. It has been tested on several standard classification data sets, and on these problems it shows better generalization than backpropagation and most other learning methods. The number of parameters that must be determined through optimization are orders of magnitude less than for backpropagation or radial basis function (RBF) networks, which may indicate that the method better captures the essential degrees of variation in learning. Other features of VSM learning are discussed that make it relevant to models for biological learning in the brain.
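As a rough, hypothetical sketch of the general idea (not Lowe's actual VSM formulation, which additionally optimizes the similarity metric by conjugate gradient): a nearest-neighbor interpolator whose Gaussian kernel width adapts to the distance of the k-th neighbor, so the kernel is narrow in dense regions and wide in sparse ones. All names and parameters below are illustrative:

```python
import numpy as np

def vk_predict(X_train, y_train, x, k=5, r=2.0):
    """Variable-kernel nearest-neighbor interpolation at query point x."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                 # k nearest neighbors
    sigma = d[idx[-1]] / r + 1e-12          # kernel size tied to k-th neighbor
    w = np.exp(-d[idx] ** 2 / (2.0 * sigma ** 2))
    return np.sum(w * y_train[idx]) / np.sum(w)

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]        # smooth toy target
print(vk_predict(X, y, np.array([0.5, 0.5])))
```

In VSM learning proper, both the per-dimension metric weights and the kernel-size ratio (`r` here) would themselves be learned, which is what gives the method its generalization advantage.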
32

Boßelmann, Christian Malte, Ulrike B. S. Hedrich, Holger Lerche, and Nico Pfeifer. "Predicting functional effects of ion channel variants using new phenotypic machine learning methods." PLOS Computational Biology 19, no. 3 (March 6, 2023): e1010959. http://dx.doi.org/10.1371/journal.pcbi.1010959.

Abstract:
Missense variants in genes encoding ion channels are associated with a spectrum of severe diseases. Variant effects on biophysical function correlate with clinical features and can be categorized as gain- or loss-of-function. This information enables a timely diagnosis, facilitates precision therapy, and guides prognosis. Functional characterization presents a bottleneck in translational medicine. Machine learning models may be able to rapidly generate supporting evidence by predicting variant functional effects. Here, we describe a multi-task multi-kernel learning framework capable of harmonizing functional results and structural information with clinical phenotypes. This novel approach extends the human phenotype ontology towards kernel-based supervised machine learning. Our gain- or loss-of-function classifier achieves high performance (mean accuracy 0.853 SD 0.016, mean AU-ROC 0.912 SD 0.025), outperforming both conventional baseline and state-of-the-art methods. Performance is robust across different phenotypic similarity measures and largely insensitive to phenotypic noise or sparsity. Localized multi-kernel learning offered biological insight and interpretability by highlighting channels with implicit genotype-phenotype correlations or latent task similarity for downstream analysis.
33

Xie, Zhonghua, Lingjun Liu, and Cui Yang. "An Entropy-Based Algorithm with Nonlocal Residual Learning for Image Compressive Sensing Recovery." Entropy 21, no. 9 (September 17, 2019): 900. http://dx.doi.org/10.3390/e21090900.

Abstract:
Image recovery from compressive sensing (CS) measurement data, especially noisy data, has always been challenging due to its implicit ill-posed nature; thus, seeking a domain where a signal can exhibit a high degree of sparsity and designing an effective algorithm have drawn increasing attention. Among various sparsity-based models, structured or group sparsity often leads to more powerful signal reconstruction techniques. In this paper, we propose a novel entropy-based algorithm for CS recovery to enhance image sparsity through learning the group sparsity of the residual. To reduce the residual of similar packed patches, the group sparsity of the residual is described by a Laplacian scale mixture (LSM) model; each singular value of the residual of similar packed patches is modeled as a Laplacian distribution with a variable scale parameter, to exploit the benefits of high-order dependency among sparse coefficients. Due to the latent variables, the maximum a posteriori (MAP) estimation of the sparse coefficients cannot be obtained directly; thus, we design a loss function for the expectation–maximization (EM) method based on relative entropy. In the frame of EM iteration, the sparse coefficients can be estimated with the denoising-based approximate message passing (D-AMP) algorithm. Experimental results have shown that the proposed algorithm can significantly outperform existing CS techniques for image recovery.
34

Tao, Zhou, Chang XiaoYu, Lu HuiLing, Ye XinYu, Liu YunCan, and Zheng XiaoMin. "Pooling Operations in Deep Learning: From “Invariable” to “Variable”." BioMed Research International 2022 (June 20, 2022): 1–17. http://dx.doi.org/10.1155/2022/4067581.

Abstract:
Deep learning has become a research hotspot in multimedia, especially in the field of image processing. Pooling operation is an important operation in deep learning. Pooling operation can reduce the feature dimension, the number of parameters, the complexity of computation, and the complexity of time. With the development of deep learning models, pooling operation has made great progress. The main contributions of this paper on pooling operation are as follows: firstly, the steps of the pooling operation are summarized as the pooling domain, pooling kernel, step size, activation value, and response value. Secondly, the expression form of pooling operation is standardized. From the perspective of “invariable” to “variable,” this paper analyzes the pooling domain and pooling kernel in the pooling operation. Pooling operation can be classified into four categories: invariable of pooling domain, variable of pooling domain, variable of pooling kernel, and the pooling of invariable “+” variable. Finally, the four types of pooling operation are summarized and discussed with their advantages and disadvantages. There is great significance to the research of pooling operations and the iterative updating of deep learning models.
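The pooling-step decomposition summarized above (pooling domain, pooling kernel, stride, activation values, response value) can be illustrated with a minimal generic sketch; the function below is illustrative and not taken from the survey:

```python
import numpy as np

def pool2d(x, kernel=2, stride=2, mode="max"):
    """Pool a 2D feature map; `kernel` and `stride` are the variable parts."""
    h = (x.shape[0] - kernel) // stride + 1
    w = (x.shape[1] - kernel) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Pooling domain: the kernel x kernel window of activation values.
            window = x[i * stride:i * stride + kernel,
                       j * stride:j * stride + kernel]
            # Response value: max or mean of the activations in the domain.
            out[i, j] = window.max() if mode == "max" else window.mean()
    return out

fm = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(fm, kernel=2, stride=2, mode="max"))
print(pool2d(fm, kernel=3, stride=1, mode="avg"))  # variable kernel and stride
```

Changing `kernel`, `stride`, or the reduction rule is exactly the "invariable to variable" axis along which the survey classifies pooling operations.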
35

Zhao, Ji, Hongbin Zhang, and Xiaofeng Liao. "Variable learning rates kernel adaptive filter with single feedback." Digital Signal Processing 83 (December 2018): 59–72. http://dx.doi.org/10.1016/j.dsp.2018.06.007.

36

Wang, Yanbo, Quan Liu, and Bo Yuan. "Learning Latent Variable Gaussian Graphical Model for Biomolecular Network with Low Sample Complexity." Computational and Mathematical Methods in Medicine 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/2078214.

Abstract:
Learning a Gaussian graphical model with latent variables is ill posed when there is insufficient sample complexity, and it thus has to be appropriately regularized. A common choice is the convex ℓ1 plus nuclear norm to regularize the searching process. However, the best estimator performance is not always achieved with these additive convex regularizations, especially when the sample complexity is low. In this paper, we consider a concave additive regularization which does not require the strong irrepresentable condition. We use concave regularization to correct the intrinsic estimation biases from the Lasso and nuclear penalty as well. We establish the proximity operators for our concave regularizations, respectively, which induce sparsity and low rankness. In addition, we extend our method to also allow the decomposition of fused structure-sparsity plus low rankness, providing a powerful tool for models with temporal information. Specifically, we develop a nontrivial modified alternating direction method of multipliers with at least local convergence. Finally, we use both synthetic and real data to validate the excellence of our method. In the application of reconstructing two-stage cancer networks, “the Warburg effect” can be revealed directly.
37

Fattahi, Loubna El, and El Hassan Sbai. "Clustering using kernel entropy principal component analysis and variable kernel estimator." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 3 (June 1, 2021): 2109. http://dx.doi.org/10.11591/ijece.v11i3.pp2109-2119.

Abstract:
Clustering, as an unsupervised learning method, is the task of dividing data objects into clusters with common characteristics. In the present paper, we introduce an enhanced technique based on the existing EPCA data transformation method. Incorporating a kernel function into EPCA, the input space can be mapped implicitly into a high-dimensional feature space. Then, Shannon’s entropy, estimated via the inertia provided by the contribution of every mapped object in the data, is the key measure to determine the optimal extracted feature space. Our proposed method performs very well with the clustering algorithm based on the fast search of cluster centers from local density computation. Experimental results show that the approach is feasible and efficient.
38

Huang, Shimeng, Elisabeth Ailer, Niki Kilbertus, and Niklas Pfister. "Supervised learning and model analysis with compositional data." PLOS Computational Biology 19, no. 6 (June 30, 2023): e1011240. http://dx.doi.org/10.1371/journal.pcbi.1011240.

Abstract:
Supervised learning, such as regression and classification, is an essential tool for analyzing modern high-throughput sequencing data, for example in microbiome research. However, due to the compositionality and sparsity, existing techniques are often inadequate. Either they rely on extensions of the linear log-contrast model (which adjust for compositionality but cannot account for complex signals or sparsity) or they are based on black-box machine learning methods (which may capture useful signals, but lack interpretability due to the compositionality). We propose KernelBiome, a kernel-based nonparametric regression and classification framework for compositional data. It is tailored to sparse compositional data and is able to incorporate prior knowledge, such as phylogenetic structure. KernelBiome captures complex signals, including in the zero-structure, while automatically adapting model complexity. We demonstrate on par or improved predictive performance compared with state-of-the-art machine learning methods on 33 publicly available microbiome datasets. Additionally, our framework provides two key advantages: (i) We propose two novel quantities to interpret contributions of individual components and prove that they consistently estimate average perturbation effects of the conditional mean, extending the interpretability of linear log-contrast coefficients to nonparametric models. (ii) We show that the connection between kernels and distances aids interpretability and provides a data-driven embedding that can augment further analysis. KernelBiome is available as an open-source Python package on PyPI and at https://github.com/shimenghuang/KernelBiome.
39

Schleif, F. M., Thomas Villmann, Barbara Hammer, and Petra Schneider. "Efficient Kernelized Prototype Based Classification." International Journal of Neural Systems 21, no. 06 (December 2011): 443–57. http://dx.doi.org/10.1142/s012906571100295x.

Abstract:
Prototype based classifiers are effective algorithms in modeling classification problems and have been applied in multiple domains. While many supervised learning algorithms have been successfully extended to kernels to improve the discrimination power by means of the kernel concept, prototype based classifiers are typically still used with Euclidean distance measures. Kernelized variants of prototype based classifiers are currently too complex to be applied for larger data sets. Here we propose an extension of Kernelized Generalized Learning Vector Quantization (KGLVQ) employing a sparsity and approximation technique to reduce the learning complexity. We provide generalization error bounds and experimental results on real world data, showing that the extended approach is comparable to SVM on different public data.
40

Liu, Zhou-zhou, and Shi-ning Li. "WSNs Compressed Sensing Signal Reconstruction Based on Improved Kernel Fuzzy Clustering and Discrete Differential Evolution Algorithm." Journal of Sensors 2019 (June 16, 2019): 1–9. http://dx.doi.org/10.1155/2019/7039510.

Abstract:
To reconstruct compressed sensing (CS) signals quickly and accurately, this paper proposes an improved discrete differential evolution (IDDE) algorithm based on fuzzy clustering for CS reconstruction. Aiming to overcome the shortcomings of traditional CS reconstruction algorithms, such as heavy dependence on sparsity and low reconstruction precision, a discrete differential evolution (DDE) algorithm based on improved kernel fuzzy clustering is designed. In this algorithm, fuzzy clustering is used to analyze the evolutionary population, which improves the focus and rigor of population learning and evolution while realizing effective clustering. The differential evolution particle coding method and evolutionary mechanism are redefined, and the improved fuzzy clustering discrete differential evolution algorithm is applied to the CS reconstruction algorithm, in which a signal with unknown sparsity is considered as the particle coding. The wireless sensor networks (WSNs) sparse signal is then accurately reconstructed through the iterative evolution of the population. Finally, simulations are carried out in the WSNs data acquisition environment. Results show that, compared with traditional reconstruction algorithms such as StOMP, the reconstruction accuracy of the algorithm proposed in this paper is improved by 36.4–51.9%, and the reconstruction time is reduced by 15.1–31.3%.
41

Ji, Xiaojia, Xuanyi Lu, Chunhong Guo, Weiwei Pei, and Hui Xu. "Predictions of Geological Interface Using Relevant Vector Machine with Borehole Data." Sustainability 14, no. 16 (August 15, 2022): 10122. http://dx.doi.org/10.3390/su141610122.

Abstract:
Due to the discreteness, sparsity, multidimensionality, and incompleteness of geotechnical investigation data, traditional methods cannot reasonably predict complex stratigraphic profiles, thus hindering the three-dimensional (3D) reconstruction of geological formation that is vital to the visualization and digitization of geotechnical engineering. The machine learning method of relevant vector machine (RVM) is employed in this work to predict the 3D stratigraphic profile based on limited geotechnical borehole data. The hyper-parameters of kernel functions are determined by maximizing the marginal likelihood using the particle swarm optimization algorithm. Three kinds of kernel functions are employed to investigate the prediction performance of the proposed method in both 2D analysis and 3D analysis. The 2D analysis shows that the Gauss kernel function is more suitable to deal with nonlinear problems but is more sensitive to the number of training data and it is better to use spline kernel functions for RVM model trainings when there are few geotechnical investigation data. In the 3D analysis, it is found that the prediction result of the spline kernel function is the best and the relevant vector machine model with a spline kernel function performs better in the area with a fast change in geological formation. In general, the RVM model can be used to achieve the purpose of 3D stratigraphic reconstruction.
42

Christmann, Andreas, and Ding-Xuan Zhou. "Learning rates for the risk of kernel-based quantile regression estimators in additive models." Analysis and Applications 14, no. 03 (April 13, 2016): 449–77. http://dx.doi.org/10.1142/s0219530515500050.

Abstract:
Additive models play an important role in semiparametric statistics. This paper gives learning rates for regularized kernel-based methods for additive models. These learning rates compare favorably in particular in high dimensions to recent results on optimal learning rates for purely nonparametric regularized kernel-based quantile regression using the Gaussian radial basis function kernel, provided the assumption of an additive model is valid. Additionally, a concrete example is presented to show that a Gaussian function depending only on one variable lies in a reproducing kernel Hilbert space generated by an additive Gaussian kernel, but does not belong to the reproducing kernel Hilbert space generated by the multivariate Gaussian kernel of the same variance.
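The contrast drawn in the abstract can be made concrete. Writing the two kernels in a common (assumed) parameterization with bandwidth $\gamma > 0$, the paper's exact scaling may differ:

```latex
% Multivariate Gaussian (RBF) kernel on x, x' \in \mathbb{R}^d:
k_{\mathrm{mult}}(x, x') = \exp\left( -\gamma \, \lVert x - x' \rVert^2 \right)

% Additive Gaussian kernel assembled from univariate components:
k_{\mathrm{add}}(x, x') = \sum_{j=1}^{d} \exp\left( -\gamma \, (x_j - x'_j)^2 \right)
```

The abstract's concrete example is then a univariate Gaussian such as $g(x) = \exp(-\gamma (x_1 - c)^2)$, which depends only on the first coordinate: it lies in the reproducing kernel Hilbert space generated by $k_{\mathrm{add}}$, but not in the one generated by the multivariate Gaussian kernel of the same variance.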
43

Prasad, Srijanani Anurag. "Reproducing Kernel Hilbert Space and Coalescence Hidden-variable Fractal Interpolation Functions." Demonstratio Mathematica 52, no. 1 (September 30, 2019): 467–74. http://dx.doi.org/10.1515/dema-2019-0027.

Abstract:
Reproducing Kernel Hilbert Spaces (RKHS) and their kernels are important tools which have been found to be incredibly useful in many areas, such as machine learning, complex analysis, probability theory, group representation theory, and the theory of integral operators. In the present paper, the space of Coalescence Hidden-variable Fractal Interpolation Functions (CHFIFs) is demonstrated to be an RKHS and its associated kernel is derived. This extends the possibility of using this new kernel function, which is partly self-affine and partly non-self-affine, in diverse fields wherein the structure is not always self-affine.
44

Cui, Lipeng, Jie Shen, and Song Yao. "The Sparse Learning of The Support Vector Machine." Journal of Physics: Conference Series 2078, no. 1 (November 1, 2021): 012006. http://dx.doi.org/10.1088/1742-6596/2078/1/012006.

Abstract:
The sparse model plays an important role in many areas, such as machine learning, image processing, and signal processing. Sparse models have the ability to perform variable selection, so they can alleviate the over-fitting problem. The sparse model can be introduced into the field of support vector machines in order to obtain classification of the labels and sparsity of the variables simultaneously. This paper summarizes various sparse support vector machines. Finally, we discuss future research directions for sparse support vector machines.
45

Wang, Fan, and Guige Gao. "Optimization of short-term wind power prediction of Multi-kernel Extreme Learning Machine based on Sparrow Search Algorithm." Journal of Physics: Conference Series 2527, no. 1 (June 1, 2023): 012075. http://dx.doi.org/10.1088/1742-6596/2527/1/012075.

Abstract:
Aiming at the problem that the single kernel function of the kernel extreme learning machine (KELM) cannot adapt to variable actual wind power, this paper proposes a modified prediction model with improved accuracy. The prediction model uses multiple kernel functions instead of a single kernel function and optimizes the kernel parameters using the sparrow search algorithm (SSA). Finally, through simulation and comparison experiments, the proposed prediction model shows better prediction accuracy than the conventional prediction model.
46

Zhu, Shuang, Xiangang Luo, Zhanya Xu, and Lei Ye. "Seasonal streamflow forecasts using mixture-kernel GPR and advanced methods of input variable selection." Hydrology Research 50, no. 1 (June 8, 2018): 200–214. http://dx.doi.org/10.2166/nh.2018.023.

Abstract:
Gaussian Process Regression (GPR) is a new machine-learning method based on Bayesian theory and statistical learning theory. It provides a flexible framework for probabilistic regression and uncertainty estimation. The main effort in GPR modelling is determining the structure of the kernel function. As streamflow is composed of trend, periodic, and random components, in this study we constructed a mixture kernel composed of a squared exponential kernel, a periodic kernel, and a rational quadratic term to reflect different properties of the streamflow time series and make streamflow forecasts. A relevant feature-selection wrapper algorithm was used, with a top-down search for relevant features by Random Forest, to offer a systematic analysis of factors that can potentially affect basin streamflow predictability. Streamflow prediction is evaluated with emphasis on the degree of coincidence, the deviation on low flows and high flows, and the error level. The objective of this study is to construct a seasonal streamflow forecasting model using mixture-kernel GPR and the advanced input variable selection method. Results show that the mixture-kernel GPR has good forecasting quality, and the top-importance predictors are streamflow at 12, 6, 5, 1, 11, 7, 8, and 4 months ahead, and Nino 1 + 2 at 11, 5, 12, and 10 months ahead.
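An illustrative sketch of such a mixture kernel, combining squared-exponential (trend), periodic (seasonal), and rational-quadratic (residual) terms for Gaussian-process interpolation. Hyperparameters here are fixed placeholders with unit length scales where unstated, not the paper's fitted values:

```python
import numpy as np

def mixture_kernel(t1, t2, ell=12.0, p=12.0, alpha=1.0):
    """Sum of squared-exponential, periodic, and rational-quadratic terms."""
    d = t1[:, None] - t2[None, :]
    se = np.exp(-0.5 * (d / ell) ** 2)               # smooth trend component
    per = np.exp(-2.0 * np.sin(np.pi * d / p) ** 2)  # period-p seasonal component
    rq = (1.0 + 0.5 * d ** 2 / alpha) ** (-alpha)    # residual component
    return se + per + rq

def f(t):  # toy "streamflow": linear trend plus a 12-month season
    return 0.05 * t + np.sin(2.0 * np.pi * t / 12.0)

t_train = np.arange(0.0, 60.0, 2.0)  # observed months
t_test = np.arange(1.0, 59.0, 2.0)   # held-out months in between
K = mixture_kernel(t_train, t_train) + 1e-6 * np.eye(t_train.size)  # jitter
weights = np.linalg.solve(K, f(t_train))
y_pred = mixture_kernel(t_test, t_train) @ weights   # GP posterior mean
print(np.max(np.abs(y_pred - f(t_test))))
```

In the study itself the kernel hyperparameters would be learned by maximizing the marginal likelihood, and the inputs would come from the Random-Forest-based feature selection rather than a plain time index.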
47

Niu, Guo, Zhengming Ma, and Shuyu Liu. "A Multikernel-Like Learning Algorithm Based on Data Probability Distribution." Mathematical Problems in Engineering 2016 (2016): 1–18. http://dx.doi.org/10.1155/2016/5927306.

Abstract:
In machine learning based on kernel tricks, one variable of a kernel function is often fixed at the given samples to produce the basis functions of a solution space of the learning problem. If the collection of given samples deviates from the data distribution, the solution space spanned by these basis functions will also deviate from the real solution space of the learning problem. In this paper, a multikernel-like learning algorithm based on data probability distribution (MKDPD) is proposed, in which the parameters of a kernel function are locally adjusted according to the data probability distribution, thus producing different kernel functions. These different kernel functions generate different Reproducing Kernel Hilbert Spaces (RKHS). The direct sum of the subspaces of these RKHS constitutes the solution space of the learning problem. Furthermore, based on the proposed MKDPD algorithm, a new algorithm for labeling newly arriving data is proposed, in which the basis functions are retrained according to the new data, while the coefficients of the retrained basis functions remain unchanged to label the new data. The experimental results presented in this paper show the effectiveness of the proposed algorithms.
48

Al-Aamri, Amira, Kamal Taha, Maher Maalouf, Andrzej Kudlicki, and Dirar Homouz. "Inferring Causation in Yeast Gene Association Networks With Kernel Logistic Regression." Evolutionary Bioinformatics 16 (January 2020): 117693432092031. http://dx.doi.org/10.1177/1176934320920310.

Abstract:
Computational prediction of gene-gene associations is one of the productive directions in the study of bioinformatics. Many tools are developed to infer the relation between genes using different biological data sources. The association of a pair of genes deduced from the analysis of biological data becomes meaningful when it reflects the directionality and the type of reaction between genes. In this work, we follow another method to construct a causal gene co-expression network while identifying transcription factors in each pair of genes using microarray expression data. We adopt a machine learning technique based on a logistic regression model to tackle the sparsity of the network and to improve the quality of the prediction accuracy. The proposed system classifies each pair of genes into either connected or nonconnected class using the data of the correlation between these genes in the whole Saccharomyces cerevisiae genome. The accuracy of the classification model in predicting related genes was evaluated using several data sets for the yeast regulatory network. Our system achieves high performance in terms of several statistical measures.
49

Wu, Xingyu, Bingbing Jiang, Tianhao Wu, and Huanhuan Chen. "Practical Markov Boundary Learning without Strong Assumptions." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10388–98. http://dx.doi.org/10.1609/aaai.v37i9.26236.

Abstract:
Theoretically, the Markov boundary (MB) is the optimal solution for feature selection. However, existing MB learning algorithms often fail to identify some critical features in real-world feature selection tasks, mainly because the strict assumptions of existing algorithms, on either data distribution, variable types, or correctness of criteria, cannot be satisfied in application scenarios. This paper takes further steps toward opening the door to real-world applications for MB. We contribute in particular to a practical MB learning strategy, which can maintain feasibility and effectiveness in real-world data where variables can be numerical or categorical with linear or nonlinear, pairwise or multivariate relationships. Specifically, the equivalence between MB and the minimal conditional covariance operator (CCO) is investigated, which inspires us to design the objective function based on the predictability evaluation of the mapping variables in a reproducing kernel Hilbert space. Based on this, a kernel MB learning algorithm is proposed, where nonlinear multivariate dependence could be considered without extra requirements on data distribution and variable types. Extensive experiments demonstrate the efficacy of these contributions.
50

Li, Meng-Yu, Rui-Qi Wang, Jian-Bo Zhang, and Zhong-Ke Gao. "Characterizing gas–liquid two-phase flow behavior using complex network and deep learning." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 1 (January 2023): 013108. http://dx.doi.org/10.1063/5.0124998.

Abstract:
Gas–liquid two-phase flow is polymorphic and unstable, and characterizing its flow behavior is a major challenge in the study of multiphase flow. We first conduct dynamic experiments on gas–liquid two-phase flow in a vertical tube and obtain multi-channel signals using a self-designed four-sector distributed conductivity sensor. In order to characterize the evolution of gas–liquid two-phase flow, we transform the obtained signals using the adaptive optimal kernel time-frequency representation and build a complex network based on the time-frequency energy distribution. As quantitative indicators, global clustering coefficients of the complex network at various sparsity levels are computed to analyze the dynamic behavior of various flow structures. The results demonstrate that the proposed approach enables effective analysis of multi-channel measurement information for revealing the evolutionary mechanisms of gas–liquid two-phase flow. Furthermore, for the purpose of flow structure recognition, we propose a temporal-spatio convolutional neural network and achieve a classification accuracy of 95.83%.