Follow this link to see other types of publications on the topic: Fast kernel methods.

Journal articles on the topic "Fast kernel methods"

Consult the top 50 journal articles for your research on the topic "Fast kernel methods".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Li, Jun-yi, and Jian-hua Li. "Fast Image Search with Locality-Sensitive Hashing and Homogeneous Kernels Map". Scientific World Journal 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/350676.

Full text
Abstract
Fast image search with efficient additive kernels and kernel locality-sensitive hashing is proposed. To preserve the kernel functions, recent work has explored ways to construct locality-sensitive hashing schemes that guarantee linear query time; however, existing methods still do not solve the problems of the locality-sensitive hashing (LSH) algorithm and indirectly sacrifice search accuracy in order to allow fast queries. To improve search accuracy, we show how to apply the explicit feature maps of homogeneous kernels, which help in feature transformation, and combine them with kernel locality-sensitive hashing. We evaluate our method on several large datasets and show that it improves accuracy relative to commonly used methods, making object classification and content-based retrieval faster and more accurate.
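The exact homogeneous-kernel-map construction of the paper is not reproduced here, but the LSH principle it builds on, hashing so that similar vectors collide, can be sketched with generic random-hyperplane (cosine) LSH. All names and parameters below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def signed_projection_hash(X, planes):
    """Hash rows of X to bit codes: one bit per random hyperplane (cosine LSH)."""
    return (X @ planes.T > 0).astype(np.uint8)

d, n_bits = 64, 16
planes = rng.standard_normal((n_bits, d))  # random hyperplanes through the origin

x = rng.standard_normal(d)
y = x + 0.01 * rng.standard_normal(d)      # near-duplicate of x
z = rng.standard_normal(d)                 # unrelated vector

hx = signed_projection_hash(x[None, :], planes)[0]
hy = signed_projection_hash(y[None, :], planes)[0]
hz = signed_projection_hash(z[None, :], planes)[0]

# Near-duplicates agree on almost all bits; unrelated vectors on about half.
sim_near = (hx == hy).mean()
sim_far = (hx == hz).mean()
```

Buckets built from such codes let a query inspect only colliding items instead of the whole collection, which is the source of the "fast queries / lost accuracy" trade-off the abstract discusses.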
2

Zhai, Yuejing, Zhouzheng Li, and Haizhong Liu. "Multi-Angle Fast Neural Tangent Kernel Classifier". Applied Sciences 12, no. 21 (October 26, 2022): 10876. http://dx.doi.org/10.3390/app122110876.

Full text
Abstract
Multi-kernel learning methods are essential kernel learning methods. However, most multi-kernel learning methods select only base kernel functions with shallow structures, which are weak for large-scale, uneven data. From a multidimensional perspective on the data, we propose two types of acceleration models: a neural tangent kernel (NTK)-based multi-kernel learning method, in which the NTK regressor, shown to be equivalent to an infinitely wide neural network predictor, is used as a base kernel function with deep structure to enhance the learning ability of multi-kernel models; and a parallel-computing kernel model based on data-partitioning techniques. An RBF- and POLY-based multi-kernel model is also proposed. All models use historical-memory-based PSO (HMPSO) for efficient search of the parameters within the model. Since the NTK has a multi-layer structure and thus significant computational complexity, using a Monotone Disjunctive Kernel (MDK) to store and train Boolean features in binary achieves a 15–60% training-time compression of NTK models on different datasets while obtaining a 1–25% accuracy improvement.
3

Shitong Wang, Jun Wang, and Fu-lai Chung. "Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets". IEEE Transactions on Cybernetics 44, no. 1 (January 2014): 1–20. http://dx.doi.org/10.1109/tsmcb.2012.2236828.

Full text
4

Viljanen, Markus, Antti Airola, and Tapio Pahikkala. "Generalized vec trick for fast learning of pairwise kernel models". Machine Learning 111, no. 2 (January 28, 2022): 543–73. http://dx.doi.org/10.1007/s10994-021-06127-y.

Full text
Abstract
Pairwise learning corresponds to the supervised learning setting where the goal is to make predictions for pairs of objects. Prominent applications include predicting drug–target or protein–protein interactions, or customer–product preferences. In this work, we present a comprehensive review of pairwise kernels that have been proposed for incorporating prior knowledge about the relationship between the objects. Specifically, we consider the standard, symmetric and anti-symmetric Kronecker product kernels, metric-learning, Cartesian, ranking, as well as linear, polynomial and Gaussian kernels. Recently, an O(nm + nq) time generalized vec trick algorithm, where n, m, and q denote the number of pairs, drugs and targets, was introduced for training kernel methods with the Kronecker product kernel. This was a significant improvement over previous O(n²) training methods, since in most real-world applications m, q ≪ n. In this work we show how all the reviewed kernels can be expressed as sums of Kronecker products, allowing the use of the generalized vec trick for speeding up their computation. In the experiments, we demonstrate how the introduced approach allows scaling pairwise kernels to much larger data sets than previously feasible, and we provide an extensive comparison of the kernels on a number of biological interaction prediction tasks.
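The vec trick rests on the Kronecker identity (A ⊗ B)vec(X) = vec(B X Aᵀ), which lets a Kronecker-product kernel matrix be applied to a vector without ever materializing it. A minimal NumPy check of the identity (sizes and variable names here are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
m, q = 5, 4                          # e.g. numbers of drugs and targets
A = rng.standard_normal((m, m))      # kernel matrix over drugs
B = rng.standard_normal((q, q))      # kernel matrix over targets
X = rng.standard_normal((q, m))      # coefficients laid out on the q x m grid

# Naive product: build the (m*q) x (m*q) Kronecker matrix explicitly.
naive = np.kron(A, B) @ X.ravel(order="F")   # column-major vec(X)

# Vec trick: two small matrix products, O(qm(q+m)) instead of O((qm)^2).
fast = (B @ X @ A.T).ravel(order="F")

assert np.allclose(naive, fast)
```

The paper's generalized version handles the incomplete-grid case, where only n of the m·q possible pairs are observed.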
5

Trifonov, P. V. "Design and decoding of polar codes with large kernels: a survey". Проблемы передачи информации 59, no. 1 (December 15, 2023): 25–45. http://dx.doi.org/10.31857/s0555292323010035.

Full text
Abstract
We present techniques for the construction of polar codes with large kernels and their decoding. A crucial problem in the implementation of the successive cancellation decoding algorithm and its derivatives is kernel processing, i.e., fast evaluation of the log-likelihood ratios for kernel input symbols. We discuss window and recursive trellis processing methods. We consider techniques for evaluation of the reliability of bit subchannels and for obtaining codes with improved distance properties.
6

Rejwer-Kosińska, Ewa, Liliana Rybarska-Rusinek, and Aleksandr Linkov. "On accuracy of translations by kernel independent fast multipole methods". Computers & Mathematics with Applications 124 (October 2022): 227–40. http://dx.doi.org/10.1016/j.camwa.2022.08.033.

Full text
7

Chen, Kai, Rongchun Li, Yong Dou, Zhengfa Liang, and Qi Lv. "Ranking Support Vector Machine with Kernel Approximation". Computational Intelligence and Neuroscience 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4629534.

Full text
Abstract
Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
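Of the two approximations the abstract names, random Fourier features are the simpler to sketch: a Gaussian kernel value is replaced by an inner product of randomized cosine features, so the full kernel matrix never has to be formed. A minimal illustration (the feature count and γ below are arbitrary choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features, gamma, rng):
    """Map X so that Z @ Z.T approximates exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) * np.sqrt(2.0 * gamma)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.standard_normal((40, 5))
gamma = 0.5
Z = random_fourier_features(X, 4000, gamma, rng)

K_exact = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
K_approx = Z @ Z.T
max_err = np.abs(K_exact - K_approx).max()
```

A linear model trained on Z then behaves like a kernel machine with the RBF kernel, which is exactly why the paper can run a primal (linear) solver after the approximation.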
8

Bian, Lu Sha, Yong Fang Yao, Xiao Yuan Jing, Sheng Li, Jiang Yue Man, and Jie Sun. "Face Recognition Based on a Fast Kernel Discriminant Analysis Approach". Advanced Materials Research 433-440 (January 2012): 6205–11. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.6205.

Full text
Abstract
The computational cost of kernel discriminant analysis is usually higher than that of linear discriminant analysis, making many kernel methods impractically slow. To overcome this disadvantage, several accelerated algorithms have been presented that express the kernel discriminant vectors using a subset of the mapped training samples selected by some criterion. However, they still need to calculate a large kernel matrix using all training samples, so they save rather limited computing time. In this paper, we propose a fast and effective kernel discriminant analysis based on mapped mean samples (MMS). It calculates a small kernel matrix by constructing a few mean samples in the input space, and then expresses the kernel discriminant vectors using the MMS. The proposed kernel approach is tested on the public AR and FERET face databases. Experimental results show that this approach is effective in both saving computing time and acquiring favorable recognition results.
9

Kriege, Nils M., Marion Neumann, Christopher Morris, Kristian Kersting, and Petra Mutzel. "A unifying view of explicit and implicit feature maps of graph kernels". Data Mining and Knowledge Discovery 33, no. 6 (September 17, 2019): 1505–47. http://dx.doi.org/10.1007/s10618-019-00652-0.

Full text
Abstract
Non-linear kernel methods can be approximated by fast linear ones using suitable explicit feature maps allowing their application to large scale problems. We investigate how convolution kernels for structured data are composed from base kernels and construct corresponding feature maps. On this basis we propose exact and approximative feature maps for widely used graph kernels based on the kernel trick. We analyze for which kernels and graph properties computation by explicit feature maps is feasible and actually more efficient. In particular, we derive approximative, explicit feature maps for state-of-the-art kernels supporting real-valued attributes including the GraphHopper and graph invariant kernels. In extensive experiments we show that our approaches often achieve a classification accuracy close to the exact methods based on the kernel trick, but require only a fraction of their running time. Moreover, we propose and analyze algorithms for computing random walk, shortest-path and subgraph matching kernels by explicit and implicit feature maps. Our theoretical results are confirmed experimentally by observing a phase transition when comparing running time with respect to label diversity, walk lengths and subgraph size, respectively.
10

Jiang, Shunhua, Yunze Man, Zhao Song, Zheng Yu, and Danyang Zhuo. "Fast Graph Neural Tangent Kernel via Kronecker Sketching". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 7033–41. http://dx.doi.org/10.1609/aaai.v36i6.20662.

Full text
Abstract
Many deep learning tasks need to deal with graph data (e.g., social networks, protein structures, code ASTs). Due to the importance of these tasks, people have turned to Graph Neural Networks (GNNs) as the de facto method for machine learning on graph data, and GNNs have become widely applied due to their convincing performance. Unfortunately, one major barrier to using GNNs is that they require substantial time and resources to train. Recently, the Graph Neural Tangent Kernel (GNTK) has emerged as a new method for learning on graph data. GNTK is an application of the Neural Tangent Kernel (NTK), a kernel method, to graph data, and solving NTK regression is equivalent to using gradient descent to train an infinitely wide neural network. The key benefit of using GNTK is that, as with any kernel method, its parameters can be solved for directly in a single step, avoiding time-consuming gradient descent. Meanwhile, sketching has become increasingly used in speeding up various optimization problems, including solving kernel regression. Given a kernel matrix of n graphs, using sketching in solving kernel regression can reduce the running time to o(n^3). Unfortunately, such methods usually require extensive knowledge about the kernel matrix beforehand, while in the case of GNTK the construction of the kernel matrix alone already takes O(n^2 N^4) time, assuming each graph has N nodes. The kernel matrix construction time can be a major performance bottleneck as the graph size N increases. A natural question to ask is thus whether we can speed up the kernel matrix construction to improve GNTK regression's end-to-end running time. This paper provides the first algorithm to construct the kernel matrix in o(n^2 N^3) running time.
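The "single step" mentioned above is the closed-form solve of kernel (ridge) regression: once the kernel matrix K is built, the coefficients come from one linear system rather than from gradient descent. A generic sketch, with a cheap RBF kernel standing in for the (much more expensive) GNTK and all data invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Toy regression data standing in for per-graph representations.
X_train = rng.standard_normal((30, 4))
y_train = np.sin(X_train.sum(axis=1))
X_test = rng.standard_normal((5, 4))

lam = 1e-3                                  # ridge regularizer
K = rbf_kernel(X_train, X_train)            # kernel matrix construction: the GNTK bottleneck
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)  # one-step solve
y_pred = rbf_kernel(X_test, X_train) @ alpha
```

For GNTK the solve itself is cheap relative to building K, which is why the paper attacks the construction step.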
11

Pilario, Karl Ezra, Mahmood Shafiee, Yi Cao, Liyun Lao, and Shuang-Hua Yang. "A Review of Kernel Methods for Feature Extraction in Nonlinear Process Monitoring". Processes 8, no. 1 (December 23, 2019): 24. http://dx.doi.org/10.3390/pr8010024.

Full text
Abstract
Kernel methods are a class of learning machines for the fast recognition of nonlinear patterns in any data set. In this paper, the applications of kernel methods for feature extraction in industrial process monitoring are systematically reviewed. First, we describe the reasons for using kernel methods and contextualize them among other machine learning tools. Second, by reviewing a total of 230 papers, this work identifies 12 major issues surrounding the use of kernel methods for nonlinear feature extraction. Each issue is discussed in terms of why it is important and how it has been addressed through the years by many researchers. We also present a breakdown of the commonly used kernel functions, parameter selection routes, and case studies. Lastly, this review provides an outlook into the future of kernel-based process monitoring, which can hopefully instigate more advanced yet practical solutions in the process industries.
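Kernel PCA is one of the central feature-extraction methods such reviews cover. A bare-bones sketch of the standard algorithm, RBF Gram matrix, double centering in feature space, then an eigendecomposition, with illustrative parameter values chosen here for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

def kernel_pca_scores(X, gamma, n_components):
    """Project training samples onto the leading kernel principal components."""
    n = len(X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                     # RBF Gram matrix
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one  # center implicitly in feature space
    vals, vecs = np.linalg.eigh(Kc)             # ascending eigenvalues
    vals = vals[::-1][:n_components]
    vecs = vecs[:, ::-1][:, :n_components]
    return vecs * np.sqrt(np.maximum(vals, 0))  # scores: eigvecs scaled by sqrt(eigvals)

X = rng.standard_normal((25, 3))
scores = kernel_pca_scores(X, gamma=0.2, n_components=2)
```

In process monitoring, statistics computed on such scores (and on the residual space) drive the fault-detection charts discussed in the review.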
12

March, William B., and George Biros. "Far-field compression for fast kernel summation methods in high dimensions". Applied and Computational Harmonic Analysis 43, no. 1 (July 2017): 39–75. http://dx.doi.org/10.1016/j.acha.2015.09.007.

Full text
13

NIE, YIMING, BIN DAI, XIANGJING AN, ZHENPING SUN, TAO WU, and HANGEN HE. "FAST LANE DETECTION USING DIRECTION KERNEL FUNCTION". International Journal of Wavelets, Multiresolution and Information Processing 10, no. 02 (March 2012): 1250017. http://dx.doi.org/10.1142/s0219691312500178.

Full text
Abstract
Lane information is essential to highway intelligent-vehicle applications. The most direct description of lanes is lane markings, and many vision methods have been proposed for lane-marking detection. In practice, however, previous lane-tracking systems must cope with problems such as shadows on the road, lighting changes, characters painted on the road, and discontinuous changes in road type. A direction kernel function is proposed for robust detection of the lanes. This method focuses on selecting points on the marking edges by classification; during classification, the vanishing point is selected, and the parts of the lane markings form the lanes. A large number of experiments under varied conditions show that the algorithm is both robust and fast, and that it can extract the lanes even when parts of the lane markings are missing.
14

Kim, Do Wan, and Yongsik Kim. "Point collocation methods using the fast moving least-square reproducing kernel approximation". International Journal for Numerical Methods in Engineering 56, no. 10 (2003): 1445–64. http://dx.doi.org/10.1002/nme.618.

Full text
15

Zheng, Jianwei, Hong Qiu, Xinli Xu, Wanliang Wang, and Qiongfang Huang. "Fast Discriminative Stochastic Neighbor Embedding Analysis". Computational and Mathematical Methods in Medicine 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/106867.

Full text
Abstract
Feature extraction is important for many applications in biomedical signal analysis and living-system analysis. A fast discriminative stochastic neighbor embedding analysis (FDSNE) method for feature extraction is proposed in this paper by improving the existing DSNE method. The proposed algorithm adopts an alternative probability distribution model constructed from the K-nearest neighbors of each sample among the interclass and intraclass samples. Furthermore, FDSNE is extended to nonlinear scenarios using the kernel trick, yielding the kernel-based methods KFDSNE1 and KFDSNE2. FDSNE, KFDSNE1, and KFDSNE2 are evaluated in three aspects: visualization, recognition, and elapsed time. Experimental results on several datasets show that, compared with DSNE and MSNP, the proposed algorithms not only significantly enhance computational efficiency but also obtain higher classification accuracy.
16

Chan, Tsz Nam, Zhe Li, Leong Hou U, Jianliang Xu, and Reynold Cheng. "Fast augmentation algorithms for network kernel density visualization". Proceedings of the VLDB Endowment 14, no. 9 (May 2021): 1503–16. http://dx.doi.org/10.14778/3461535.3461540.

Full text
Abstract
Network kernel density visualization, or NKDV, has been extensively used to visualize spatial data points in various domains, including traffic accident hotspot detection, crime hotspot detection, disease outbreak detection, and business and urban planning. Due to a wide range of applications for NKDV, some geographical software, e.g., ArcGIS, can also support this operation. However, computing NKDV is very time-consuming. Although NKDV has been used for more than a decade in different domains, existing algorithms are not scalable to million-sized datasets. To address this issue, we propose three efficient methods in this paper, namely aggregate distance augmentation (ADA), interval augmentation (IA), and hybrid augmentation (HA), which can significantly reduce the time complexity for computing NKDV. In our experiments, ADA, IA and HA can achieve at least 5x to 10x speedup, compared with the state-of-the-art solutions.
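NKDV evaluates kernel density over a road network rather than Euclidean space; the network distances are what make it expensive. For orientation, the underlying kernel density estimate is shown here in its plain one-dimensional Euclidean form (data and bandwidth are invented for the example):

```python
import numpy as np

def gaussian_kde(query, data, bandwidth):
    """Evaluate a Gaussian kernel density estimate at each query point."""
    diffs = (query[:, None] - data[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return weights.mean(axis=1) / bandwidth

data = np.array([0.0, 0.1, 0.2, 5.0])   # a cluster near 0 and one isolated point
grid = np.linspace(-1, 6, 141)
density = gaussian_kde(grid, data, bandwidth=0.3)
```

The network variant replaces |query − data| with shortest-path distance along road segments, which is the per-point cost the ADA/IA/HA augmentations attack.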
17

Huanrui, Hao. "New Mixed Kernel Functions of SVM Used in Pattern Recognition". Cybernetics and Information Technologies 16, no. 5 (October 1, 2016): 5–14. http://dx.doi.org/10.1515/cait-2016-0047.

Full text
Abstract
Pattern analysis based on kernel methods is a new technology that combines good performance with rigorous theory. With support vector machines, pattern analysis is easy and fast, but no single existing kernel function fits every requirement. In this paper, we explore new mixed kernel functions that combine the Gaussian kernel with a wavelet function, and the Gaussian kernel with a polynomial kernel. With the new mixed kernel functions, we test different parameters. The results show that the new mixed kernel functions offer good time efficiency and accuracy. In image recognition we used an SVM with the two mixed kernel functions, and the mixture of the Gaussian kernel and the wavelet function proved suitable for more states.
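A convex combination of valid kernels is again a valid kernel, which is what makes mixtures like Gaussian-plus-polynomial well defined. A small sketch of such a mixture (the weight and kernel parameters are illustrative; the paper's Gaussian-plus-wavelet mixture is analogous):

```python
import numpy as np

rng = np.random.default_rng(4)

def mixed_kernel(A, B, lam=0.6, gamma=0.5, degree=2, coef0=1.0):
    """Convex mix of a Gaussian (RBF) kernel and a polynomial kernel."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    k_rbf = np.exp(-gamma * sq)
    k_poly = (A @ B.T + coef0) ** degree
    return lam * k_rbf + (1.0 - lam) * k_poly

X = rng.standard_normal((20, 3))
K = mixed_kernel(X, X)

# A valid kernel matrix is symmetric positive semidefinite.
eigvals = np.linalg.eigvalsh(K)
```

Passing such a function as a custom kernel to an SVM trainer lets the RBF term capture local structure while the polynomial term captures global trends, which is the motivation for mixing.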
18

Cheng, Sheng-Wei, Yi-Ting Lin, and Yan-Tsung Peng. "A Fast Two-Stage Bilateral Filter Using Constant Time O(1) Histogram Generation". Sensors 22, no. 3 (January 25, 2022): 926. http://dx.doi.org/10.3390/s22030926.

Full text
Abstract
Bilateral Filtering (BF) is an effective edge-preserving smoothing technique in image processing. However, an inherent problem of BF for image denoising is that it is challenging to differentiate image noise and details with the range kernel, thus often preserving both noise and edges in denoising. This letter proposes a novel Dual-Histogram BF (DHBF) method that exploits an edge-preserving noise-reduced guidance image to compute the range kernel, removing isolated noisy pixels for better denoising results. Furthermore, we approximate the spatial kernel using mean filtering based on column histogram construction to achieve constant-time filtering regardless of the kernel radius and achieve better smoothing. Experimental results on multiple benchmark datasets for denoising show that the proposed DHBF outperforms other state-of-the-art BF methods.
19

Lewis, R. D., G. Beylkin, and L. Monzón. "Fast and accurate propagation of coherent light". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 469, no. 2159 (November 8, 2013): 20130323. http://dx.doi.org/10.1098/rspa.2013.0323.

Full text
Abstract
We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ > 0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm evaluates the solution on an N × N grid of output points given an M × M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain.
20

Wang, Yinkun, Jianshu Luo, Xiangling Chen, and Lei Sun. "A Chebyshev collocation method for Hallén's equation of thin wire antennas". COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering 34, no. 4 (July 6, 2015): 1319–34. http://dx.doi.org/10.1108/compel-06-2014-0142.

Full text
Abstract
Purpose – The purpose of this paper is to propose a Chebyshev collocation method (CCM) for Hallén's equation of thin wire antennas. Design/methodology/approach – Since the current induced on a thin wire antenna behaves like the square root of the distance from the end, a smoothed current is used to annihilate this end effect. The CCM then adopts Chebyshev polynomials to approximate the smoothed current, from which the actual current can be quickly recovered. To handle the difficulty of the kernel singularity and to realize fast computation, a decomposition is adopted that separates the singularity from the exact kernel. The integrals including the singularity in the linear system can be given in an explicit formula, while the others can be evaluated efficiently by the fast cosine transform or the fast Fourier transform. Findings – The CCM converges quickly and is more efficient than the other existing methods. Specifically, it can attain less than 1 percent relative error using 32 basis functions when a/h is larger than 2×10⁻⁵, where h is the half-length of the wire antenna and a is its radius. Besides, a new efficient scheme to evaluate the exact kernel has been proposed by comparison with most of the literature methods. Originality/value – Since the kernel evaluation is vital to the solution of Hallén's and Pocklington's equations, the proposed scheme to evaluate the exact kernel may help improve the efficiency of existing methods in the study of wire antennas. Due to its good convergence and efficiency, the CCM may be a competitive method for analyzing the radiation properties of thin wire antennas. Several numerical experiments are presented to validate the proposed method.
21

van der Tol, Sebastiaan, Bram Veenboer, and André R. Offringa. "Image Domain Gridding: a fast method for convolutional resampling of visibilities". Astronomy & Astrophysics 616 (August 2018): A27. http://dx.doi.org/10.1051/0004-6361/201832858.

Full text
Abstract
In radio astronomy, obtaining a high dynamic range in synthesis imaging of wide fields requires a correction for time- and direction-dependent effects. Applying a direction-dependent correction can be done either by partitioning the image into facets and applying a direction-independent correction per facet, or by including the correction in the gridding kernel (AW-projection). An advantage of AW-projection over faceting is that the effectively applied beam is a sinc interpolation of the sampled beam, whereas the correction applied in the faceting approach is a discontinuous piecewise-constant beam. However, AW-projection quickly becomes prohibitively expensive when the corrections vary over short time scales. This occurs, for example, when ionospheric effects are included in the correction. The cost of the frequent recomputation of the oversampled convolution kernels then dominates the total cost of gridding. Image domain gridding is a new approach that avoids the costly step of computing oversampled convolution kernels. Instead, low-resolution images are made directly for small groups of visibilities, which are then transformed and added to the large uv grid. The computations have a simple, highly parallel structure that maps very well onto massively parallel hardware such as graphics processing units (GPUs). Despite being more expensive in pure computation count, the throughput is comparable to classical W-projection. The accuracy is close to classical gridding with a continuous convolution kernel. Compared to gridding methods that use a sampled convolution function, the new method is more accurate. Hence, the new method is at least as fast and accurate as classical W-projection, while allowing for the correction of quickly varying direction-dependent effects.
22

KUANG, RUI, EUGENE IE, KE WANG, KAI WANG, MAHIRA SIDDIQI, YOAV FREUND, and CHRISTINA LESLIE. "PROFILE-BASED STRING KERNELS FOR REMOTE HOMOLOGY DETECTION AND MOTIF EXTRACTION". Journal of Bioinformatics and Computational Biology 03, no. 03 (June 2005): 527–50. http://dx.doi.org/10.1142/s021972000500120x.

Full text
Abstract
We introduce novel profile-based string kernels for use with support vector machines (SVMs) for the problems of protein classification and remote homology detection. These kernels use probabilistic profiles, such as those produced by the PSI-BLAST algorithm, to define position-dependent mutation neighborhoods along protein sequences for inexact matching of k-length subsequences ("k-mers") in the data. By use of an efficient data structure, the kernels are fast to compute once the profiles have been obtained. For example, the time needed to run PSI-BLAST in order to build the profiles is significantly longer than both the kernel computation time and the SVM training time. We present remote homology detection experiments based on the SCOP database where we show that profile-based string kernels used with SVM classifiers strongly outperform all recently presented supervised SVM methods. We further examine how to incorporate predicted secondary structure information into the profile kernel to obtain a small but significant performance improvement. We also show how we can use the learned SVM classifier to extract "discriminative sequence motifs" — short regions of the original profile that contribute almost all the weight of the SVM classification score — and show that these discriminative motifs correspond to meaningful structural features in the protein data. The use of PSI-BLAST profiles can be seen as a semi-supervised learning technique, since PSI-BLAST leverages unlabeled data from a large sequence database to build more informative profiles. Recently presented "cluster kernels" give general semi-supervised methods for improving SVM protein classification performance. We show that our profile kernel results also outperform cluster kernels while providing much better scalability to large datasets.
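The profile kernel generalizes the simpler spectrum (k-mer) kernel by allowing profile-defined mutation neighborhoods. The base spectrum kernel, which just takes the inner product of k-mer count vectors, is easy to sketch (a toy example with invented sequences, not the authors' profile-based code):

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Count all overlapping k-length subsequences of seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s, t, k=3):
    """Inner product of the two k-mer count vectors."""
    cs, ct = kmer_counts(s, k), kmer_counts(t, k)
    return sum(n * ct[w] for w, n in cs.items())

s, t = "MKTAYIAKQR", "MKTAYLAKQR"   # toy protein fragments differing at one residue
k_st = spectrum_kernel(s, t)        # shared 3-mers: MKT, KTA, TAY, AKQ, KQR
k_ss = spectrum_kernel(s, s)        # all eight 3-mers of s match themselves
```

The profile version replaces each exact k-mer with the set of k-mers lying in a mutation neighborhood scored by the PSI-BLAST profile, which is what buys sensitivity to remote homologs.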
23

FRANDI, EMANUELE, RICARDO ÑANCULEF, MARIA GRAZIA GASPARO, STEFANO LODI, and CLAUDIO SARTORI. "TRAINING SUPPORT VECTOR MACHINES USING FRANK–WOLFE OPTIMIZATION METHODS". International Journal of Pattern Recognition and Artificial Intelligence 27, no. 03 (May 2013): 1360003. http://dx.doi.org/10.1142/s0218001413600033.

Full text
Abstract
Training a support vector machine (SVM) requires the solution of a quadratic programming (QP) problem whose computational complexity becomes prohibitively expensive for large-scale datasets. Traditional optimization methods cannot be directly applied in these cases, mainly due to memory restrictions. By adopting a slightly different objective function and under mild conditions on the kernel used within the model, efficient algorithms to train SVMs have been devised under the name of core vector machines (CVMs). This framework exploits the equivalence of the resulting learning problem with the task of building a minimal enclosing ball (MEB) in a feature space, where data is implicitly embedded by a kernel function. In this paper, we improve on the CVM approach by proposing two novel methods to build SVMs based on the Frank–Wolfe algorithm, recently revisited as a fast method to approximate the solution of a MEB problem. In contrast to CVMs, our algorithms do not require computing the solutions of a sequence of increasingly complex QPs and are defined using only analytic optimization steps. Experiments on a large collection of datasets show that our methods scale better than CVMs in most cases, sometimes at the price of a slightly lower accuracy. As with CVMs, the proposed methods can easily be extended to machine learning problems other than binary classification. However, effective classifiers are also obtained using kernels which do not satisfy the condition required by CVMs, and thus our methods can be used for a wider set of problems.
24

FISCHER, T., D. LOGASHENKO, M. KIRKILIONIS, and G. WITTUM. "FAST NUMERICAL INTEGRATION FOR SIMULATION OF STRUCTURED POPULATION EQUATIONS". Mathematical Models and Methods in Applied Sciences 16, no. 12 (December 2006): 1987–2012. http://dx.doi.org/10.1142/s0218202506001789.

Full text
Abstract
In this paper, we consider the fast computation of integral terms arising in simulations of structured populations modeled by integro-differential equations. This is of enormous relevance for demographic studies in which populations are structured by a large number of variables (often called i-states) like age, gender, income, etc. This holds equally for applications in ecology and biotechnology. In this paper we concentrate on an example describing microbial growth. For this class of problems we apply the panel clustering method, which has almost linear complexity for many integral kernels that are of interest in the field of biology. We further present the primitive function method as an improved version of panel clustering for the case that the kernel function is non-smooth on hypersurfaces. We compare these methods with a conventional numerical integration algorithm, all used inside standard discretization schemes for the complete system of integro-differential equations.
25

Shin, Kilho, Taichi Ishikawa, Yu-Lu Liu y David Lawrence Shepard. "Learning DOM Trees of Web Pages by Subpath Kernel and Detecting Fake e-Commerce Sites". Machine Learning and Knowledge Extraction 3, n.º 1 (14 de enero de 2021): 95–122. http://dx.doi.org/10.3390/make3010006.

Texto completo
Resumen
The subpath kernel is a class of positive definite kernels defined over trees, which has the following advantages for classification, regression and clustering: it can be incorporated into a variety of powerful kernel machines including SVM; it is invariant whether input trees are ordered or unordered; it can be computed by fast linear-time algorithms; and its excellent learning performance has been proven through intensive experiments in the literature. In this paper, we leverage recent advances in tree kernels to solve real problems. As an example, we apply our method to the problem of detecting fake e-commerce sites. Although the problem is similar to phishing site detection, the fact that mimicking existing authentic sites is harmful for fake e-commerce sites marks a clear difference between the two problems. We focus on fake e-commerce site detection for three reasons: e-commerce fraud is a real problem that companies and law enforcement have been cooperating to solve; inefficiency hampers existing approaches because datasets tend to be large, while subpath kernel learning overcomes these performance challenges; and we offer increased resilience against attempts to subvert existing detection methods by incorporating robust features that adversaries cannot change: the DOM trees of websites. Our real-world results are remarkable: our method has exhibited accuracy as high as 0.998 when training SVM with 1000 instances and evaluating accuracy on almost 7000 independent instances. Its generalization efficiency is also excellent: with only 100 training instances, the accuracy score reached 0.996.
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Jiang, Hansi, Haoyu Wang, Wenhao Hu, Deovrat Kakde y Arin Chaudhuri. "Fast Incremental SVDD Learning Algorithm with the Gaussian Kernel". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17 de julio de 2019): 3991–98. http://dx.doi.org/10.1609/aaai.v33i01.33013991.

Texto completo
Resumen
Support vector data description (SVDD) is a machine learning technique used for single-class classification and outlier detection. The idea of SVDD is to find a set of support vectors that defines a boundary around data. When dealing with online or large data, existing batch SVDD methods have to be rerun in each iteration. We propose a fast incremental learning algorithm for SVDD (FISVDD) that uses the Gaussian kernel. This algorithm builds on the observation that all support vectors on the boundary have the same distance to the center of the sphere in the higher-dimensional feature space into which data is mapped by the Gaussian kernel function. Each iteration involves only the existing support vectors and the new data point. Moreover, the algorithm is based solely on matrix manipulations; the support vectors and their corresponding Lagrange multipliers αi are automatically selected and determined in each iteration. The complexity of our algorithm in each iteration is only O(k2), where k is the number of support vectors. Experimental results on several real data sets indicate that FISVDD achieves significant gains in efficiency with almost no loss in either outlier detection accuracy or objective function value.
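The equal-distance observation the abstract builds on is easy to state concretely: with the Gaussian kernel, K(x, x) = 1 for every x, so the feature-space distance from a point to the sphere center a = Σᵢ αᵢ φ(xᵢ) expands purely in kernel evaluations. The sketch below shows that expansion; it is our own illustration (function names and parameters are assumptions), not the FISVDD implementation.

```python
# Sketch of the SVDD quantity the observation above rests on: the squared
# feature-space distance from a point to the sphere center, with a Gaussian
# kernel so that K(x, x) = 1 for every x.
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def dist2_to_center(x, support_vectors, alphas, sigma=1.0):
    """||phi(x) - a||^2 with a = sum_i alpha_i * phi(x_i)."""
    cross = sum(a * gaussian_kernel(x, sv, sigma)
                for a, sv in zip(alphas, support_vectors))
    centre = sum(ai * aj * gaussian_kernel(si, sj, sigma)
                 for ai, si in zip(alphas, support_vectors)
                 for aj, sj in zip(alphas, support_vectors))
    return 1.0 - 2.0 * cross + centre    # K(x, x) = 1 for the Gaussian kernel
```

A new point is flagged as an outlier when this distance exceeds the (shared) boundary distance of the support vectors.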
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Khatri, Ajay, Shweta Agrawal y Jyotir M. Chatterjee. "Wheat Seed Classification: Utilizing Ensemble Machine Learning Approach". Scientific Programming 2022 (2 de febrero de 2022): 1–9. http://dx.doi.org/10.1155/2022/2626868.

Texto completo
Resumen
Recognizing and authenticating wheat varieties is critical for quality evaluation in the grain supply chain, particularly for seed inspection. Recognition and verification of grains are traditionally carried out manually through direct visual examination. Automatic categorization techniques based on machine learning and computer vision offer fast and high-throughput solutions. Even so, categorization remains a complicated process at the varietal level. This paper applies machine learning approaches to classifying wheat seeds. The classification is performed based on 7 physical features: area of the wheat kernel, perimeter, compactness, kernel length, kernel width, asymmetry coefficient, and kernel groove length. The dataset, collected from the UCI repository, has 210 instances of wheat kernels from three varieties, Kama, Rosa, and Canadian, with 70 instances of each variety chosen at random for the experiment. In the first phase, K-nearest neighbor, classification and regression tree, and Gaussian Naïve Bayes algorithms are implemented for classification. The results of these algorithms are compared with an ensemble machine learning approach. The results reveal that the accuracies of the KNN, decision tree, and Naïve Bayes classifiers are 92%, 94%, and 92%, respectively. The highest accuracy of 95% is achieved by the ensemble classifier, in which the decision is made by hard voting.
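The hard-voting ensemble the abstract describes reduces to a small combiner: each base classifier votes a label and the majority wins. A minimal dependency-free sketch (the stand-in callables below play the role of the paper's KNN, decision tree and Gaussian Naïve Bayes models):

```python
# Minimal hard-voting combiner: each classifier votes a label, majority wins.
from collections import Counter

def hard_vote(classifiers, x):
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Illustrative usage with stand-in classifiers voting wheat varieties:
base_models = [lambda x: "Kama", lambda x: "Rosa", lambda x: "Kama"]
prediction = hard_vote(base_models, None)   # majority label is "Kama"
```

In practice the same behavior is obtained from scikit-learn's `VotingClassifier` with `voting="hard"`.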
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Wang, Bo, Rui Wang y YueSheng Xu. "Fast Fourier-Galerkin methods for first-kind logarithmic-kernel integral equations on open arcs". Science in China Series A: Mathematics 53, n.º 1 (enero de 2010): 1–22. http://dx.doi.org/10.1007/s11425-010-0014-x.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Tsuchida, Russell, Cheng Soon Ong y Dino Sejdinovic. "Exact, Fast and Expressive Poisson Point Processes via Squared Neural Families". Proceedings of the AAAI Conference on Artificial Intelligence 38, n.º 18 (24 de marzo de 2024): 20559–66. http://dx.doi.org/10.1609/aaai.v38i18.30041.

Texto completo
Resumen
We introduce squared neural Poisson point processes (SNEPPPs) by parameterising the intensity function as the squared norm of a two-layer neural network. When the hidden layer is fixed and the second layer has a single neuron, our approach resembles previous uses of squared Gaussian process or kernel methods, but allowing the hidden layer to be learnt provides additional flexibility. In many cases of interest, the integrated intensity function admits a closed form and can be computed in quadratic time in the number of hidden neurons. We enumerate far more such cases than have previously been discussed. Our approach is more memory- and time-efficient than naive implementations of squared or exponentiated kernel methods or Gaussian processes. Maximum likelihood and maximum a posteriori estimates in a reparameterisation of the final layer of the intensity function can be obtained by solving a (strongly) convex optimisation problem using projected gradient descent. We demonstrate SNEPPPs on real and synthetic benchmarks, and provide a software implementation.
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Zheng, Kaiyuan, Zhiyong Zhang y Changzhen Qiu. "A Fast Adaptive Multi-Scale Kernel Correlation Filter Tracker for Rigid Object". Sensors 22, n.º 20 (14 de octubre de 2022): 7812. http://dx.doi.org/10.3390/s22207812.

Texto completo
Resumen
The efficient and accurate tracking of a target in complex scenes has always been a challenge. At present, the most effective tracking algorithms are neural network models based on deep learning. Although such algorithms have high tracking accuracy, the huge number of parameters and computations in the network models makes it difficult for them to meet real-time requirements under limited hardware conditions, such as embedded platforms with small size, low power consumption and limited computing power. Tracking algorithms based on a kernel correlation filter are well-known and widely applied because of their high performance and speed, but when the target is in a complex background, they still cannot adapt to target scale change and occlusion, which leads to template drift. In this paper, a fast multi-scale kernel correlation filter tracker based on adaptive template updating is proposed for common rigid targets. We introduce a simple scale pyramid on top of Kernel Correlation Filtering (KCF), which adapts to changes in target size while preserving speed of operation. We propose an adaptive template updater based on the Mean of Cumulative Maximum Response Values (MCMRV) to effectively alleviate template drift when occlusion occurs. Extensive experiments demonstrate the effectiveness of our method on various datasets, where it significantly outperforms other state-of-the-art methods based on a kernel correlation filter.
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Li, Wenzong, Aichun Zhu, Yonggang Xu, Hongsheng Yin y Gang Hua. "A Fast Multi-Scale Generative Adversarial Network for Image Compressed Sensing". Entropy 24, n.º 6 (31 de mayo de 2022): 775. http://dx.doi.org/10.3390/e24060775.

Texto completo
Resumen
Recently, deep neural network-based image compressed sensing methods have achieved impressive success in reconstruction quality. However, these methods (1) have limitations in sampling pattern and (2) usually suffer from high computational complexity. To this end, a fast multi-scale generative adversarial network (FMSGAN) is implemented in this paper. Specifically, (1) an effective multi-scale sampling structure is proposed. It contains four kernels with varying sizes to decompose and sample images effectively, capturing different levels of spatial features at multiple scales. (2) An efficient lightweight multi-scale residual structure for deep image reconstruction is proposed to balance receptive field size and computational complexity. The key idea is to apply smaller convolution kernel sizes in the multi-scale residual structure to reduce the number of operations while maintaining the receptive field. Meanwhile, a channel attention structure is employed to enrich useful information. Moreover, perceptual loss is combined with MSE loss and adversarial loss as the optimization objective to recover a finer image. Numerous experiments show that our FMSGAN achieves state-of-the-art image reconstruction quality with low computational complexity.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Bissaker, Edward, Bishnu Lamichhane y David Jenkins. "Connectivity aware simulated annealing kernel methods for coke microstructure generation". ANZIAM Journal 63 (28 de julio de 2022): C123—C137. http://dx.doi.org/10.21914/anziamj.v63.17187.

Texto completo
Resumen
A vital input for steel manufacture is a coal-derived solid fuel called coke. Digital reconstructions and simulations of coke are valuable tools to analyse and test coke properties. We implement biased voxel iteration into a simulated annealing method via a kernel convolution to reduce the number of iterations required to generate a digital coke microstructure. We demonstrate that voxel connectivity assumptions impact the number of iterations and reduce the normalised computation time required to generate a digital microstructure by as much as 70%. References L. De Floriani, U. Fugacci, and F. Iuricich. Homological shape analysis through discrete morse theory. Perspectives in Shape Analysis. Ed. by M. Breuss, A. Bruckstein, P. Maragos, and S. Wuhrer. Springer, 2016, pp. 187–209. doi: 10.1007/978-3-319-24726-7_9 M. A. Diez, R. Alvarez, and C. Barriocanal. Coal for metallurgical coke production: predictions of coke quality and future requirements for cokemaking. Int. J. Coal Geol. 50.1–4 (2002), pp. 389–412. doi: 10.1016/S0166-5162(02)00123-4 D. T. Fullwood, S. R. Kalidindi, S. R. Niezgoda, A. Fast, and N. Hampson. Gradient-based microstructure reconstructions from distributions using fast Fourier transforms. Mat. Sci. Eng. A 494.1–2 (2008), pp. 68–72. doi: 10.1016/j.msea.2007.10.087 E.-Y. Guo, N. Chawla, T. Jing, S. Torquato, and Y. Jiao. Accurate modeling and reconstruction of three-dimensional percolating filamentary microstructures from two-dimensional micrographs via dilation-erosion method. Mat. Character. 89 (2014), pp. 33–42. doi: 10.1016/j.matchar.2013.12.011 Y. Jiao, F. H. Stillinger, and S. Torquato. Modeling heterogeneous materials via two-point correlation functions: Basic principles. Phys. Rev. E 76.3, 031110 (2007). doi: 10.1103/PhysRevE.76.031110 H. Kumar, C. L. Briant, and W. A. Curtin. Using microstructure reconstruction to model mechanical behavior in complex microstructures. Mech. Mat. 38.8–10 (2006), pp. 818–832. 
doi: 10.1016/j.mechmat.2005.06.030 Z. Ma and S. Torquato. Generation and structural characterization of Debye random media. Phys. Rev. E 102.4, 043310 (2020). doi: 10.1103/PhysRevE.102.043310 F. Meng, S. Gupta, D. French, P. Koshy, C. Sorrell, and Y. Shen. Characterization of microstructure and strength of coke particles and their dependence on coal properties. Powder Tech. 320 (2017), pp. 249–256. doi: 10.1016/j.powtec.2017.07.046 M. G. Rozman and M. Utz. Uniqueness of reconstruction of multiphase morphologies from two-point correlation functions. Phys. Rev. Lett. 89.13, 135501 (2002). doi: 10.1103/PhysRevLett.89.135501 T. Tang, Q. Teng, X. He, and D. Luo. A pixel selection rule based on the number of different-phase neighbours for the simulated annealing reconstruction of sandstone microstructure. J. Microscopy 234.3 (2009), pp. 262–268. doi: 10.1111/j.1365-2818.2009.03173.x S. Torquato. Microstructure characterization and bulk properties of disordered two-phase media. J. Stat. Phys. 45.5 (1986), pp. 843–873. doi: 10.1007/BF01020577 S. Torquato and H. W. Haslach Jr. Random heterogeneous materials: microstructure and macroscopic properties. Appl. Mech. Rev. 55.4 (2002), B62–B63. doi: 10.1115/1.1483342 S. Torquato and C. L. Y. Yeong. Reconstructing random media. II: three-dimensional media from two-dimensional cuts. Phys. Rev. E 58.1 (1998), pp. 224–233. doi: 10.1103/PhysRevE.58.224 C. L. Y. Yeong and S. Torquato. Reconstructing random media. Phys. Rev. E 57.1, 495 (1998). doi: 10.1103/PhysRevE.57.495
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Zhao, Yuxuan, Qi Sun, Zhuolun He, Yang Bai y Bei Yu. "AutoGraph: Optimizing DNN Computation Graph for Parallel GPU Kernel Execution". Proceedings of the AAAI Conference on Artificial Intelligence 37, n.º 9 (26 de junio de 2023): 11354–62. http://dx.doi.org/10.1609/aaai.v37i9.26343.

Texto completo
Resumen
Deep learning frameworks optimize the computation graphs and intra-operator computations to boost the inference performance on GPUs, while inter-operator parallelism is usually ignored. In this paper, a unified framework, AutoGraph, is proposed to obtain highly optimized computation graphs in favor of parallel executions of GPU kernels. A novel dynamic programming algorithm, combined with backtracking search, is adopted to explore the optimal graph optimization solution, with the fast performance estimation from the mixed critical path cost. Accurate runtime information based on GPU Multi-Stream launched with CUDA Graph is utilized to determine the convergence of the optimization. Experimental results demonstrate that our method achieves up to 3.47x speedup over existing graph optimization methods. Moreover, AutoGraph outperforms state-of-the-art parallel kernel launch frameworks by up to 1.26x.
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Bao, Yang, Zhiwei Liu y Jiming Song. "Rapid Electromagnetic Modeling and Simulation of Eddy Current NDE by MLKD-ACA Algorithm with Integral Kernel Truncations". Symmetry 14, n.º 4 (1 de abril de 2022): 712. http://dx.doi.org/10.3390/sym14040712.

Texto completo
Resumen
In this article, a novel hybrid method of multilevel kernel degeneration and adaptive cross approximation (MLKD-ACA) algorithm with integral kernel truncations is proposed to accelerate solving integral equations using method of moments (MoM), and to simulate the 3D eddy current nondestructive evaluation (NDE) problems efficiently. The MLKD-ACA algorithm with an integral kernel-truncations-based fast solver is symmetrical in the sense that: (1) the impedance matrix, which is generated by the MoM representing the interactions among the field and source basis functions, is symmetrical; (2) the factorized form of the integral kernel (Green’s function) resulted from degenerating it by the Lagrange polynomial interpolation is symmetrical; (3) the structure of the truncated integral kernel for the interactions among the blocks, which ignores the trivial ones of the far block pairs, is symmetrical using the integral kernel truncations technique. The impedance variations predicted by the proposed symmetrical eddy current NDE solver are compared with other methods in benchmarks to show the remarkable accuracy and efficiency.
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Huang, Xiandong, Qinglin Wang, Shuyu Lu, Ruochen Hao, Songzhu Mei y Jie Liu. "Evaluating FFT-based Algorithms for Strided Convolutions on ARMv8 Architectures?" ACM SIGMETRICS Performance Evaluation Review 49, n.º 3 (22 de marzo de 2022): 28–29. http://dx.doi.org/10.1145/3529113.3529122.

Texto completo
Resumen
Convolutional Neural Networks (CNNs) have been widely adopted in all kinds of artificial intelligence applications. Most of the computational overhead of CNNs is spent on convolutions. An effective approach to reducing this overhead is FFT-based fast algorithms for convolutions. However, current FFT-based fast implementations cannot be directly applied to strided convolutions with a stride size greater than 1. In this paper, we first introduce rearrangement- and sampling-based methods for applying FFT-based fast algorithms to strided convolutions. Then, highly optimized parallel implementations of the two methods on an ARMv8-based many-core CPU are presented. Lastly, we benchmark the implementations against two GEMM-based implementations on this ARMv8 CPU. Our experimental results with convolutions of different kernel, feature map, and batch sizes show that the rearrangement-based method generally exceeds the sampling-based one under the same optimizations in most cases, and that both methods achieve much better performance than GEMM-based ones when the kernel, feature map and batch sizes are large. Experimental results with the convolutional layers in popular CNNs further support this conclusion.
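The sampling idea behind one of the two methods above can be illustrated in a few lines: a strided convolution equals the full stride-1 convolution with every `stride`-th output kept. A real implementation would compute the stride-1 part with FFTs; the plain loops below are a dependency-free stand-in, and the function names are ours, not the paper's.

```python
# Sketch of the sampling view of strided convolution: compute the full
# (stride-1, "valid") sliding dot product, then keep every stride-th output.
def conv1d_full_valid(x, k):
    n, m = len(x), len(k)
    return [sum(x[i + j] * k[j] for j in range(m)) for i in range(n - m + 1)]

def conv1d_strided(x, k, stride):
    # identical to computing the convolution directly with the given stride
    return conv1d_full_valid(x, k)[::stride]
```

For example, `conv1d_strided([1, 2, 3, 4, 5], [1, 1], 2)` yields `[3, 7]`, the stride-2 subsample of the full result `[3, 5, 7, 9]`.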
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Ince, Ibrahim Furkan, Yusuf Sait Erdem, Faruk Bulut y Md Haidar Sharif. "A Low-Cost Pupil Center Localization Algorithm Based on Maximized Integral Voting of Circular Hollow Kernels". Computer Journal 62, n.º 7 (5 de octubre de 2018): 1001–15. http://dx.doi.org/10.1093/comjnl/bxy102.

Texto completo
Resumen
Abstract Pupil center localization is an essential requirement for robust eye gaze tracking systems. In this paper, a low-cost pupil center localization algorithm is presented. The aim is to propose a computationally inexpensive algorithm with high accuracy in terms of performance and processing speed. Hence, a computationally inexpensive pupil center localization algorithm based on maximized integral voting of candidate kernels is presented. As the kernel type, a novel circular hollow kernel (CHK) is used. Estimation of the pupil center is performed by applying a rule-based schema to each pixel over the eye sockets. Additionally, several features of the CHK are proposed to maximize the voting probability of each kernel. Experimental results are promising, with 96.94% overall accuracy and around 13.89 ms computational time (71.99 fps) per image on average using a standard PC. An extensive benchmarking study indicates that this method outperforms learning-free algorithms and competes with methods that have a learning phase while running at a much higher processing speed. Furthermore, the proposed learning-free system is fast enough to run on an average PC and is also applicable to real-time video streams from even a low-resolution webcam.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Shrikumar, Avanti, Eva Prakash y Anshul Kundaje. "GkmExplain: fast and accurate interpretation of nonlinear gapped k-mer SVMs". Bioinformatics 35, n.º 14 (julio de 2019): i173—i182. http://dx.doi.org/10.1093/bioinformatics/btz322.

Texto completo
Resumen
Abstract Summary Support Vector Machines with gapped k-mer kernels (gkm-SVMs) have been used to learn predictive models of regulatory DNA sequence. However, interpreting predictive sequence patterns learned by gkm-SVMs can be challenging. Existing interpretation methods such as deltaSVM, in-silico mutagenesis (ISM) or SHAP either do not scale well or make limiting assumptions about the model that can produce misleading results when the gkm kernel is combined with nonlinear kernels. Here, we propose GkmExplain: a computationally efficient feature attribution method for interpreting predictive sequence patterns from gkm-SVM models that has theoretical connections to the method of Integrated Gradients. Using simulated regulatory DNA sequences, we show that GkmExplain identifies predictive patterns with high accuracy while avoiding pitfalls of deltaSVM and ISM and being orders of magnitude more computationally efficient than SHAP. By applying GkmExplain and a recently developed motif discovery method called TF-MoDISco to gkm-SVM models trained on in vivo transcription factor (TF) binding data, we recover consolidated, non-redundant TF motifs. Mutation impact scores derived using GkmExplain consistently outperform deltaSVM and ISM at identifying regulatory genetic variants from gkm-SVM models of chromatin accessibility in lymphoblastoid cell-lines. Availability and implementation Code and example notebooks to reproduce results are at https://github.com/kundajelab/gkmexplain. Supplementary information Supplementary data are available at Bioinformatics online.
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Feng, Bo, Wenjun Xu, Fei Luo y Huazhong Wang. "Rytov-approximation-based wave-equation traveltime tomography". GEOPHYSICS 85, n.º 3 (27 de abril de 2020): R289—R297. http://dx.doi.org/10.1190/geo2019-0210.1.

Texto completo
Resumen
Most finite-frequency traveltime tomography methods are based on the Born approximation, which requires that the scale of the velocity heterogeneity and the magnitude of the velocity perturbation be small enough to satisfy the approximation. In contrast, the Rytov approximation works well for large-scale velocity heterogeneity. Typically, the Rytov-approximation-based finite-frequency traveltime sensitivity kernel (Rytov-FFTSK) can be obtained by integrating the phase-delay sensitivity kernels with a normalized weighting function, in which the calculation of sensitivity kernels requires the numerical solution of Green's function. However, solving the Green's function explicitly is quite computationally demanding, especially for 3D problems. To avoid explicit calculation of the Green's function, we show that the Rytov-FFTSK can be obtained by crosscorrelating a forward-propagated incident wavefield and a reverse-propagated adjoint wavefield in the time domain. In addition, we find that the action of the Rytov-FFTSK on a model-space vector, e.g., the product of the sensitivity kernel and a vector, can be computed by calculating the inner product of two time-domain fields. Consequently, the Hessian-vector product can be computed in a matrix-free fashion (i.e., first calculate the product of the sensitivity kernel and a model-space vector, then calculate the product of the transposed sensitivity kernel and a data-space vector), without forming the Hessian matrix or the sensitivity kernels explicitly. We solve the traveltime inverse problem with the Gauss-Newton method, in which the Gauss-Newton equation is approximately solved by conjugate gradients using our matrix-free Hessian-vector product method. An example with a perfect acquisition geometry shows that our Rytov-approximation-based traveltime inversion method can produce a high-quality inversion result with a very fast convergence rate. An overthrust synthetic data test demonstrates that large- to intermediate-scale model perturbations can be recovered by diving waves if long-offset acquisition is available.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Parubochyi, V. O. y R. Ya Shuvar. "PERFORMANCE EVALUATION OF SELF-QUOTIENT IMAGE METHODS". Ukrainian Journal of Information Technology 2, n.º 1 (2020): 8–14. http://dx.doi.org/10.23939/ujit2020.02.008.

Texto completo
Resumen
Lighting normalization is an especially important issue in image recognition systems, since different illumination conditions can significantly change recognition results, and lighting normalization minimizes the negative effects of varying illumination. In this paper, we evaluate the recognition performance of several lighting normalization methods based on the Self-Quotient Image (SQI) method introduced by Haitao Wang, Stan Z. Li, Yangsheng Wang, and Jianjun Zhang. For evaluation, we chose the original implementation and the most promising recent modifications of the original SQI method, including the Gabor Quotient Image (GQI) method introduced by Sanun Srisuk and Amnart Petpon in 2008, and the Fast Self-Quotient Image (FSQI) method and its modifications proposed by the authors in previous works. We propose an evaluation framework using the Cropped Extended Yale Face Database B, which exposes the difference in recognition results across illumination conditions. All results are tested with two classifiers: a Nearest Neighbor Classifier and a Linear Support Vector Classifier. This approach allows us not only to calculate recognition accuracy for each method and select the best one, but also to show the importance of a proper choice of classification method, which can significantly influence recognition results. We showed a significant decrease in recognition accuracy for unprocessed (RAW) images as the angle between the lighting source and the normal to the object increases. On the other hand, our experiments showed an almost uniform distribution of recognition accuracy for images processed by SQI-based lighting normalization methods. Another expected result shown in this paper is that recognition accuracy increases with filter kernel size. However, large filter kernel sizes are much more computationally expensive and can produce negative effects on output images. Our experiments also showed that the second modification of the FSQI method, called FSQI3, is better in almost all cases for all filter kernel sizes, especially when a Linear Support Vector Classifier is used for classification.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Saito, Shota. "Hypergraph Modeling via Spectral Embedding Connection: Hypergraph Cut, Weighted Kernel k-Means, and Heat Kernel". Proceedings of the AAAI Conference on Artificial Intelligence 36, n.º 7 (28 de junio de 2022): 8141–49. http://dx.doi.org/10.1609/aaai.v36i7.20787.

Texto completo
Resumen
We propose a theoretical framework of multi-way similarity to model real-valued data as hypergraphs for clustering via spectral embedding. For graph-cut-based spectral clustering, it is common to model real-valued data as a graph by modeling pairwise similarities using a kernel function, because the kernel function has a theoretical connection to the graph cut. For problems where multi-way similarities are more suitable than pairwise ones, it is natural to model the data as a hypergraph, which is a generalization of a graph. However, although the hypergraph cut is well-studied, no established hypergraph-cut-based framework yet exists for modeling multi-way similarity. In this paper, we formulate multi-way similarities by exploiting the theoretical foundation of kernel functions. We show a theoretical connection between our formulation and the hypergraph cut in two ways, generalizing both weighted kernel k-means and the heat kernel, which justifies our formulation. We also provide a fast algorithm for spectral clustering. Our algorithm empirically shows better performance than existing graph-based and other heuristic modeling methods.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Buchwald, Fabian, Tobias Girschick, Eibe Frank y Stefan Kramer. "Fast Conditional Density Estimation for Quantitative Structure-Activity Relationships". Proceedings of the AAAI Conference on Artificial Intelligence 24, n.º 1 (4 de julio de 2010): 1268–73. http://dx.doi.org/10.1609/aaai.v24i1.7494.

Texto completo
Resumen
Many methods for quantitative structure-activity relationships (QSARs) deliver point estimates only, without quantifying the uncertainty inherent in the prediction. One way to quantify the uncertainty of a QSAR prediction is to predict the conditional density of the activity given the structure instead of a point estimate. If a conditional density estimate is available, it is easy to derive prediction intervals of activities. In this paper, we experimentally evaluate and compare three methods for conditional density estimation for their suitability in QSAR modeling. In contrast to traditional methods for conditional density estimation, they are based on generic machine learning schemes, more specifically, class probability estimators. Our experiments show that a kernel estimator based on class probability estimates from a random forest classifier is highly competitive with Gaussian process regression, while taking only a fraction of the time for training. Therefore, generic machine-learning-based methods for conditional density estimation may be a good and fast option for quantifying uncertainty in QSAR modeling.
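The class-probability route to a conditional density can be sketched briefly: discretise the target into bins, take per-bin probabilities from any class probability estimator, and smooth them with a Gaussian kernel centred on each bin midpoint. This is our own illustration of the general idea under assumed names, not the paper's estimator.

```python
# Sketch: a conditional density built from per-bin class probabilities,
# smoothed with a Gaussian kernel at each bin center.
import math

def conditional_density(y, bin_centers, bin_probs, bandwidth=0.5):
    """Density estimate p(y|x) given bin probabilities predicted for x."""
    norm = 1.0 / (bandwidth * math.sqrt(2 * math.pi))
    return sum(p * norm * math.exp(-0.5 * ((y - c) / bandwidth) ** 2)
               for c, p in zip(bin_centers, bin_probs))
```

Because the kernels integrate to one and the probabilities sum to one, the mixture itself integrates to one, so prediction intervals can be read off the resulting density.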
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Miao, Peng, Shi Han Feng, Qi Zhang y Yuan Yuan Ji. "Real-Time Boundary Detection of Fast Moving Object in Dark Surrounds". Applied Mechanics and Materials 397-400 (septiembre de 2013): 2231–34. http://dx.doi.org/10.4028/www.scientific.net/amm.397-400.2231.

Texto completo
Resumen
Dark surrounds make the detection of moving targets more difficult for traditional methods. Real-time identification of a fast moving object under weak illumination is critical for some special applications. Traditional blob-, contour- and kernel-based tracking methods either need high computational loads or require normal illumination, which limits their application. In this paper, we propose a new method to address this difficulty based on the temporal standard deviation. The performance of the new method was evaluated on simulated data and on real video data recorded by a simple imaging system. Combined with hardware acceleration, real-time detection and visualization of a fast moving boundary in a dark environment can be achieved.
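The temporal-standard-deviation cue is simple to sketch: pixels whose intensity varies strongly over a short time window are flagged as belonging to the moving boundary, regardless of absolute brightness. A minimal 1-D sketch under assumed names (not the authors' code):

```python
# Sketch of the temporal-standard-deviation cue: per-pixel intensity std
# over a window of frames, thresholded to flag moving pixels.
import statistics

def moving_mask(frames, threshold):
    """frames: list of equally sized 1-D intensity rows (one per time step)."""
    n_pix = len(frames[0])
    stds = [statistics.pstdev([f[p] for f in frames]) for p in range(n_pix)]
    return [s > threshold for s in stds]
```

Static dark pixels have near-zero temporal std and are suppressed, which is why the cue works where brightness-based detection fails.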
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Chen, Yibin, Guohao Nie, Huanlong Zhang, Yuxing Feng y Guanglu Yang. "Fast motion tracking based on moth-flame optimization and kernel correlation filter". Journal of Intelligent & Fuzzy Systems 39, n.º 3 (7 de octubre de 2020): 3825–37. http://dx.doi.org/10.3233/jifs-192172.

Texto completo
Resumen
The Kernel Correlation Filter (KCF) tracker has shown great potential in precision, robustness and efficiency. However, the candidate region used to train the correlation filter is fixed, so tracking is difficult when the target escapes from the search window due to fast motion. In this paper, an improved KCF is put forward for long-term tracking. First, the moth-flame optimization (MFO) algorithm is introduced into tracking to search for the lost target. Then, the candidate sample strategy of the KCF tracking method is adjusted by the MFO algorithm so that it can track fast motion. Finally, we use a conservative-learning correlation filter to judge the moving state of the target, and combine it with the improved KCF tracker to form a unified tracking framework. The proposed algorithm is tested on a self-made dataset benchmark. Moreover, our method obtains scores for both the distance precision plot (0.891 and 0.842) and the overlap success plot (0.631 and 0.601) on the OTB-2013 and OTB-2015 datasets, respectively. The results demonstrate its feasibility and effectiveness compared with state-of-the-art methods, especially in dealing with fast or uncertain motion.
44

Priambodo, Dimas Febriyan and Ahmad Ashari. "Resource Modification On Multicore Server With Kernel Bypass". IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 14, no. 4 (31 October 2020): 331. http://dx.doi.org/10.22146/ijccs.54170.

Full text
Abstract
Technology develops very fast, marked by many innovations in both hardware and software. Multicore servers with a growing number of cores require efficient software. The kernel and hardware used to handle various operational needs have limitations, stemming from high complexity especially when acting as a server: a single socket descriptor, a single IRQ, and a lack of polling, so some modifications are required. Kernel bypass is one method to overcome these kernel deficiencies. The modifications on this server combine increased throughput with decreased latency. Driver-level modifications that hash the RX signal, together with multiple-receive modifications (multiple IP receivers, multiple receiver threads, and multiple port listeners), are used to increase throughput. Modifications applying polling principles at either the kernel level or the program level are used to decrease latency. This combination of modifications makes the server more reliable, with an average throughput increase of 250.44% and a latency decrease of 65.83%.
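The "multiple port listener" idea from the abstract can be illustrated with the Linux `SO_REUSEPORT` socket option, which lets several sockets bind the same port so the kernel hash-distributes incoming datagrams across them. This is a Linux-only sketch of the general technique, not the paper's actual setup; the helper name and worker count are invented for the example.

```python
import socket

def make_listener(port: int) -> socket.socket:
    """Create a UDP socket that shares `port` with other SO_REUSEPORT sockets."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    return s

first = make_listener(0)                 # port 0: kernel picks an ephemeral port
port = first.getsockname()[1]
# Three more listeners on the *same* port; the kernel load-balances flows
# across them by a hash of the 4-tuple, removing the single-descriptor bottleneck.
workers = [first] + [make_listener(port) for _ in range(3)]
```

Each worker socket would typically be served by its own thread pinned to a core, which is the multicore scaling the paper measures.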
45

Zhao, Yang, Jianyi Zhang and Changyou Chen. "Self-Adversarially Learned Bayesian Sampling". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17 July 2019): 5893–900. http://dx.doi.org/10.1609/aaai.v33i01.33015893.

Full text
Abstract
Scalable Bayesian sampling plays an important role in modern machine learning, especially in fast-developing unsupervised (deep) learning models. While tremendous progress has been achieved via scalable Bayesian sampling methods such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD), the generated samples are typically highly correlated, and their sample-generation processes are often criticized as inefficient. In this paper, we propose a novel self-adversarial learning framework that automatically learns a conditional generator to mimic the behavior of a Markov (transition) kernel. High-quality samples can be generated efficiently by direct forward passes through the learned generator. Most importantly, the learning process adopts a self-learning paradigm, requiring no information about existing Markov kernels, e.g., knowledge of how to draw samples from them. Specifically, our framework learns to use current samples, either from the generator or from pre-provided training data, to update the generator so that the generated samples progressively approach a target distribution; hence it is called self-learning. Experiments on both synthetic and real datasets verify the advantages of our framework, which outperforms related methods in both sampling efficiency and sample quality.
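For context, the SVGD baseline the abstract mentions is itself compact: particles are transported by a kernelized gradient flow toward the target. A minimal 1D sketch with an RBF kernel and a fixed bandwidth follows (the target, particle count, and step size are assumptions for the example); the paper's contribution is to replace this iterative transport with a single forward pass through a learned generator.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(3.0, 0.5, size=50)       # particles start far from the target

def grad_log_p(x):
    return -x                           # target density: standard normal N(0, 1)

h = 1.0                                 # RBF kernel bandwidth (fixed for simplicity)
for _ in range(1000):
    diff = x[:, None] - x[None, :]      # diff[i, j] = x_i - x_j
    k = np.exp(-diff ** 2 / (2 * h ** 2))
    # SVGD update: attractive term (kernel-weighted score) plus a
    # repulsive term (kernel gradient) that keeps particles spread out.
    phi = (k * grad_log_p(x)[None, :] + diff / h ** 2 * k).mean(axis=1)
    x = x + 0.1 * phi
```

After the loop the particle cloud approximates the target, but each new batch of samples costs another full run of this loop, which is the inefficiency the learned generator avoids.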
46

Valentini, Giorgio, Alberto Paccanaro, Horacio Caniza, Alfonso E. Romero and Matteo Re. "An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods". Artificial Intelligence in Medicine 61, no. 2 (June 2014): 63–78. http://dx.doi.org/10.1016/j.artmed.2014.03.003.

Full text
47

Ji, Xiaojia, Xuanyi Lu, Chunhong Guo, Weiwei Pei and Hui Xu. "Predictions of Geological Interface Using Relevant Vector Machine with Borehole Data". Sustainability 14, no. 16 (15 August 2022): 10122. http://dx.doi.org/10.3390/su141610122.

Full text
Abstract
Due to the discreteness, sparsity, multidimensionality, and incompleteness of geotechnical investigation data, traditional methods cannot reasonably predict complex stratigraphic profiles, hindering the three-dimensional (3D) reconstruction of geological formations that is vital to the visualization and digitization of geotechnical engineering. In this work, the machine learning method of the relevance vector machine (RVM) is employed to predict the 3D stratigraphic profile from limited geotechnical borehole data. The hyper-parameters of the kernel functions are determined by maximizing the marginal likelihood with the particle swarm optimization algorithm. Three kinds of kernel functions are employed to investigate the prediction performance of the proposed method in both 2D and 3D analyses. The 2D analysis shows that the Gaussian kernel is better suited to nonlinear problems but is more sensitive to the amount of training data, and that spline kernels are preferable for training RVM models when geotechnical investigation data are scarce. In the 3D analysis, the spline kernel gives the best predictions, and the RVM model with a spline kernel performs better in areas where the geological formation changes rapidly. In general, the RVM model can be used to achieve 3D stratigraphic reconstruction.
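The kernel machinery behind such a predictor can be sketched as a Gaussian-kernel interpolant over borehole locations. This is only a stand-in for the paper's method: a full RVM also learns sparse per-sample precisions and, here, tunes the kernel by PSO; the borehole coordinates and depths below are invented for illustration.

```python
import numpy as np

# Hypothetical borehole plan coordinates (metres) and interface depths (metres).
X = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
depth = np.array([2.0, 2.5, 1.8, 2.2, 2.1])

def gauss_kernel(A, B, ell=5.0):
    """Gaussian kernel matrix between point sets A and B (length scale ell)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

K = gauss_kernel(X, X)
w = np.linalg.solve(K + 1e-6 * np.eye(len(X)), depth)   # ridge-stabilized weights

def predict(query):
    """Interpolated interface depth at query plan coordinates."""
    return gauss_kernel(np.atleast_2d(np.asarray(query, float)), X) @ w
```

Evaluating `predict` on a dense grid yields a continuous interface surface from the sparse boreholes, which is the 2D/3D reconstruction the abstract describes.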
48

Dash, Ch Sanjeev Kumar, Ajit Kumar Behera, Satchidananda Dehuri and Sung-Bae Cho. "Radial basis function neural networks: a topical state-of-the-art survey". Open Computer Science 6, no. 1 (2 May 2016): 33–63. http://dx.doi.org/10.1515/comp-2016-0005.

Full text
Abstract
Radial basis function networks (RBFNs) have gained widespread appeal amongst researchers and have shown good performance in a variety of application domains. They have potential for hybridization and demonstrate some interesting emergent behaviors. This paper aims to offer a compendious and sensible survey of RBF networks. The advantages they offer, such as fast training and global approximation capability with local responses, are attracting many researchers in diversified fields. The overall algorithmic development of RBF networks is discussed, with special focus on their learning methods, novel kernels, and the fine-tuning of kernel parameters. In addition, we consider recent research on multi-criteria optimization in RBF networks and a range of indicative application areas, along with some open-source RBFN tools.
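The fast training the survey highlights comes from the RBFN's structure: once the hidden-unit centres and widths are fixed, the output weights are a linear least-squares problem. A minimal sketch (evenly spaced centres, a shared width, and a 1D sine target, all chosen for illustration):

```python
import numpy as np

X = np.linspace(0.0, 2 * np.pi, 200)[:, None]   # training inputs
y = np.sin(X).ravel()                           # training targets

centers = np.linspace(0.0, 2 * np.pi, 15)[:, None]        # hidden-unit centres
sigma = 0.8                                               # shared kernel width
Phi = np.exp(-((X - centers.T) ** 2) / (2 * sigma ** 2))  # Gaussian activations
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)               # output weights in one solve
pred = Phi @ w
```

More sophisticated variants covered by the survey instead learn the centres and widths too (e.g., by clustering or gradient descent), trading this one-shot solve for accuracy.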
49

Ikemoto, Yusuke, Kenichiro Nishimura, Yuichiro Mizutama, Tohru Sasaki and Mitsuru Jindai. "Network Connectivity Control of Mobile Robots by Fast Position Estimations and Laplacian Kernel". Journal of Robotics and Mechatronics 32, no. 2 (20 April 2020): 422–36. http://dx.doi.org/10.20965/jrm.2020.p0422.

Full text
Abstract
Together with wireless distributed sensor technologies, connectivity control of mobile robot networks has expanded widely in recent years. Network connectivity has been greatly improved by theoretical frameworks based on graph theory. Most network connectivity studies have focused on the algebraic connectivity and the Fiedler vector, which constitute an eigenpair of the network structure matrix. These graph-theoretic frameworks have been widely adopted in robot deployment studies; however, computing the eigenpair requires many iterative calculations and is extremely time-intensive. In the present study, we propose a robot deployment algorithm that requires only a finite number of iterations. The proposed algorithm rapidly estimates robot positions by solving reaction-diffusion equations on the graph and applying gradient methods with a Laplacian kernel. The effectiveness of the algorithm is evaluated in computer simulations of mobile robot networks. Furthermore, we implement the algorithm on the actual hardware of a two-wheeled robot.
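For reference, the eigenpair whose repeated computation the paper seeks to avoid is easy to state: the algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian, and the Fiedler vector is its eigenvector. A small sketch for a hypothetical five-robot chain of communication links:

```python
import numpy as np

n = 5
A = np.zeros((n, n))                       # adjacency of the robot network
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0        # robot i can talk to robot i+1

L = np.diag(A.sum(axis=1)) - A             # graph Laplacian
vals, vecs = np.linalg.eigh(L)             # the dense solve the paper sidesteps
lambda2, fiedler = vals[1], vecs[:, 1]     # algebraic connectivity, Fiedler vector
```

`lambda2 > 0` certifies the network is connected, and the Fiedler vector's sign pattern indicates where it would split first; a dense `eigh` scales poorly with the number of robots, motivating the finite-iteration estimate in the paper.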
50

Ramadhan, Bima, Ilyasa pahlevi Reza yulianto, Achmad Abdurrazzaq, Fulkan Kafilah Al Husein and Bilal Charmouti. "A random exploration based fast adaptive and selective mean filter for salt and pepper noise removal in satellite digital images". Desimal: Jurnal Matematika 5, no. 3 (20 December 2022): 341–52. http://dx.doi.org/10.24042/djm.v5i3.14424.

Full text
Abstract
The digital image is one of the inventions that plays an important role in various aspects of modern human life. It is useful in many fields, including defense (military and non-military), security, health, education, and others. In practice, the image acquisition process often suffers from problems, both when capturing and when transmitting images. Among these problems is the appearance of noise, which degrades the information in the image and disrupts subsequent image-processing steps. One type of noise that damages digital images is salt-and-pepper noise, which randomly changes pixel values to 0 (black) or 255 (white). Researchers have proposed several methods to deal with this type of noise, including the median filter, adaptive mean filter, switching median filter, modified decision-based unsymmetric trimmed median filter, and different applied median filter. However, these methods suffer a drop in performance when applied to images with high-intensity noise. Therefore, in this research a new filtering method is proposed that restores the image by exploring pixels at random and then collecting the pixel data surrounding each processed pixel (the kernel). The kernel is enlarged if it contains no noise-free pixels. The damaged pixel is then replaced using the mean as the measure of central tendency. Images enhanced with the proposed method have better quality than those of previous methods, both quantitatively (SSIM and PSNR) and qualitatively.
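The expanding-kernel replacement step described in the abstract can be sketched directly (without the random-exploration scheduling, which is an efficiency detail of the paper): for each pixel valued 0 or 255, grow the window until it contains noise-free pixels, then substitute their mean.

```python
import numpy as np

def adaptive_mean_filter(img):
    """Replace salt-and-pepper pixels (0 or 255) with the mean of the
    nearest noise-free pixels, enlarging the kernel as needed."""
    img = np.asarray(img, dtype=np.int64)
    out = img.copy()
    H, W = img.shape
    for i, j in zip(*np.where((img == 0) | (img == 255))):
        r = 1
        while r <= max(H, W):
            win = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            clean = win[(win != 0) & (win != 255)]
            if clean.size:                          # noise-free pixels found
                out[i, j] = int(round(clean.mean()))
                break
            r += 1                                  # enlarge the kernel
    return out.astype(np.uint8)
```

Because only corrupted pixels are touched, the filter is selective in the sense of the title: noise-free regions pass through unchanged, which is what preserves SSIM and PSNR at high noise densities.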
