Selected scholarly literature on the topic "Dot product kernels"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Dot product kernels".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is present in the metadata.

Journal articles on the topic "Dot product kernels"

1

Menegatto, V. A., C. P. Oliveira, and A. P. Peron. "Conditionally positive definite dot product kernels". Journal of Mathematical Analysis and Applications 321, no. 1 (September 2006): 223–41. http://dx.doi.org/10.1016/j.jmaa.2005.08.024.

2

Menegatto, V. A., C. P. Oliveira, and Ana P. Peron. "On conditionally positive definite dot product kernels". Acta Mathematica Sinica, English Series 24, no. 7 (July 2008): 1127–38. http://dx.doi.org/10.1007/s10114-007-6227-4.

3

Lu, Fangyan, and Hongwei Sun. "Positive definite dot product kernels in learning theory". Advances in Computational Mathematics 22, no. 2 (February 2005): 181–98. http://dx.doi.org/10.1007/s10444-004-3140-6.

4

Griffiths, Matthew P., Denys Grombacher, Mason A. Kass, Mathias Ø. Vang, Lichao Liu, and Jakob Juul Larsen. "A surface NMR forward in a dot product". Geophysical Journal International 234, no. 3 (April 27, 2023): 2284–90. http://dx.doi.org/10.1093/gji/ggad203.

Abstract:
The computation required to simulate surface nuclear magnetic resonance (SNMR) data increases proportionally with the number of sequences and the number of pulses in each sequence. This poses a particular challenge to modelling steady-state SNMR, where suites of sequences are acquired, each of which requires modelling tens to hundreds of pulses. To model such data efficiently, we have developed a reformulation of the surface NMR forward model, where the geometry of the transmit and receive fields is encapsulated into a vector (or set of vectors), which we call B1-volume-receive (BVR) curves. Projecting BVR curve(s) along complementary magnetization solutions for a particular sequence amounts to computing the full SNMR forward model. The formulation has the additional advantage that computation for increased transmitter current amounts to a relative translation between the BVR and magnetization solutions. We generate 1-D kernels using both BVR curves and standard integration techniques and find that the difference is within 2 per cent. Using BVR curves, a typical suite of steady-state kernels can be computed two orders of magnitude faster than with previous approaches.
5

Donini, Michele, and Fabio Aiolli. "Learning deep kernels in the space of dot product polynomials". Machine Learning 106, no. 9-10 (November 7, 2016): 1245–69. http://dx.doi.org/10.1007/s10994-016-5590-8.

6

Filippas, Dionysios, Chrysostomos Nicopoulos, and Giorgos Dimitrakopoulos. "Templatized Fused Vector Floating-Point Dot Product for High-Level Synthesis". Journal of Low Power Electronics and Applications 12, no. 4 (October 17, 2022): 56. http://dx.doi.org/10.3390/jlpea12040056.

Abstract:
Machine-learning accelerators rely on floating-point matrix and vector multiplication kernels. To reduce their cost, customized many-term fused architectures are preferred, which improve the latency, power, and area of the designs. In this work, we design a parameterized fused many-term floating-point dot product architecture that is ready for high-level synthesis. In this way, we can exploit the efficiency offered by a well-structured fused dot-product architecture and the freedom offered by high-level synthesis in tuning the design’s pipeline to the selected floating-point format and architectural constraints. When compared with optimized dot-product units implemented directly in RTL, the proposed design offers lower-latency implementations under the same clock frequency with marginal area savings. This result holds for a variety of floating-point formats, including standard and reduced-precision representations.
7

Bishwas, Arit Kumar, Ashish Mani, and Vasile Palade. "Gaussian kernel in quantum learning". International Journal of Quantum Information 18, no. 03 (April 2020): 2050006. http://dx.doi.org/10.1142/s0219749920500069.

Abstract:
The Gaussian kernel is a very popular kernel function used in many machine learning algorithms, especially in support vector machines (SVMs). It is more often used than polynomial kernels when learning from nonlinear datasets and is usually employed in formulating the classical SVM for nonlinear problems. Rebentrost et al. discussed an elegant quantum version of a least square support vector machine using quantum polynomial kernels, which is exponentially faster than the classical counterpart. This paper demonstrates a quantum version of the Gaussian kernel and analyzes its runtime complexity using the quantum random access memory (QRAM) in the context of quantum SVM. Our analysis shows that the runtime computational complexity of the quantum Gaussian kernel is approximated to [Formula: see text] and even [Formula: see text] when [Formula: see text] and the error [Formula: see text] are small enough to be ignored, where [Formula: see text] is the dimension of the training instances, [Formula: see text] is the accuracy, [Formula: see text] is the dot product of the two quantum states, and [Formula: see text] is the Taylor remainder error term. Therefore, the run time complexity of the quantum version of the Gaussian kernel seems to be significantly faster when compared with its classical version.
8

Xiao, Lechao, Hong Hu, Theodor Misiakiewicz, Yue M. Lu, and Jeffrey Pennington. "Precise learning curves and higher-order scaling limits for dot-product kernel regression". Journal of Statistical Mechanics: Theory and Experiment 2023, no. 11 (November 1, 2023): 114005. http://dx.doi.org/10.1088/1742-5468/ad01b7.

Abstract:
As modern machine learning models continue to advance the computational frontier, it has become increasingly important to develop precise estimates for expected performance improvements under different model and data scaling regimes. Currently, theoretical understanding of the learning curves (LCs) that characterize how the prediction error depends on the number of samples is restricted to either large-sample asymptotics (m → ∞) or, for certain simple data distributions, to the high-dimensional asymptotics in which the number of samples scales linearly with the dimension (m ∝ d). There is a wide gulf between these two regimes, including all higher-order scaling relations m ∝ d^r, which are the subject of the present paper. We focus on the problem of kernel ridge regression for dot-product kernels and present precise formulas for the mean of the test error, bias and variance, for data drawn uniformly from the sphere with isotropic random labels in the rth-order asymptotic scaling regime m → ∞ with m/d^r held constant. We observe a peak in the LC whenever m ≈ d^r/r! for any integer r, leading to multiple sample-wise descent and non-trivial behavior at multiple scales. We include a colab notebook (available at: https://tinyurl.com/2nzym7ym) that reproduces the essential results of the paper.
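As a rough illustration of the estimator studied here, the sketch below (an illustrative toy under assumed settings, not the authors' notebook) fits kernel ridge regression with a dot-product kernel k(x, y) = f(⟨x, y⟩) on data drawn uniformly from the unit sphere with isotropic random labels; the choice f(t) = exp(t), the sizes, and the ridge parameter are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, m_test, ridge = 20, 500, 200, 1e-3   # illustrative sizes, not the paper's regimes

def sphere(n, d):
    # draw n points uniformly on the unit sphere in R^d
    Z = rng.normal(size=(n, d))
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def k(X, Y):
    # a dot-product kernel: a function of <x, y> only (here f(t) = exp(t))
    return np.exp(X @ Y.T)

X, X_test = sphere(m, d), sphere(m_test, d)
y = rng.normal(size=m)                      # isotropic random labels

alpha = np.linalg.solve(k(X, X) + ridge * np.eye(m), y)   # kernel ridge regression
y_hat = k(X_test, X) @ alpha                # test predictions; the paper's learning curves track their error
print(float(np.mean(y_hat ** 2)))
```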
9

Iakymchuk, Roman, Stef Graillat, David Defour, and Enrique S. Quintana-Ortí. "Hierarchical approach for deriving a reproducible unblocked LU factorization". International Journal of High Performance Computing Applications 33, no. 5 (March 17, 2019): 791–803. http://dx.doi.org/10.1177/1094342019832968.

Abstract:
We propose a reproducible variant of the unblocked LU factorization for graphics processor units (GPUs). For this purpose, we build upon Level-1/2 BLAS kernels that deliver correctly-rounded and reproducible results for the dot (inner) product, vector scaling, and the matrix-vector product. In addition, we draw a strategy to enhance the accuracy of the triangular solve via iterative refinement. Following a bottom-up approach, we finally construct a reproducible unblocked implementation of the LU factorization for GPUs, which accommodates partial pivoting for stability and can be eventually integrated in a high performance and stable algorithm for the (blocked) LU factorization.
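As a small aside on error compensation in dot products, the sketch below (my own illustration, not the paper's Level-1 BLAS kernel) uses Neumaier-compensated summation to accumulate the products. This only improves the accuracy of the accumulation; the correctly rounded and reproducible GPU kernels described in the paper require stronger machinery (error-free product transformations, long accumulators), which is not reproduced here.

```python
def compensated_dot(x, y):
    # Neumaier (Kahan-Babuska) compensated summation of the products x[i] * y[i].
    # Note: each product is still rounded once; capturing that error too would
    # require an error-free transformation such as TwoProd/FMA.
    s = 0.0   # running sum
    c = 0.0   # running compensation for lost low-order bits
    for xi, yi in zip(x, y):
        p = xi * yi
        t = s + p
        if abs(s) >= abs(p):
            c += (s - t) + p
        else:
            c += (p - t) + s
        s = t
    return s + c

# The naive accumulation loses the middle term to rounding; the compensated one keeps it.
x = [1e16, 1.0, -1e16]
y = [1.0, 1.0, 1.0]
print(sum(a * b for a, b in zip(x, y)))   # 0.0
print(compensated_dot(x, y))              # 1.0
```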
10

Azevedo, D., and V. A. Menegatto. "Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere". Journal of Approximation Theory 177 (January 2014): 57–68. http://dx.doi.org/10.1016/j.jat.2013.10.002.


Theses / dissertations on the topic "Dot product kernels"

1

Wacker, Jonas. "Random features for dot product kernels and beyond". Electronic thesis or dissertation, Sorbonne Université, 2022. http://www.theses.fr/2022SORUS241.

Abstract:
Dot product kernels, such as polynomial and exponential (softmax) kernels, are among the most widely used kernels in machine learning, as they enable modeling the interactions between input features, which is crucial in applications like computer vision, natural language processing, and recommender systems. However, a fundamental drawback of kernel-based statistical models is their limited scalability to a large number of inputs, which requires resorting to approximations. In this thesis, we study techniques to linearize kernel-based methods by means of random feature approximations, focusing on polynomial kernels and more general dot product kernels to make them more useful in large-scale learning. In particular, we use a variance analysis as the main tool to study and improve the statistical efficiency of such sketches.
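As a toy illustration of the kind of sketch the thesis studies, the snippet below (an assumed construction for illustration, not the thesis code) builds unbiased random features for the homogeneous polynomial dot product kernel k(x, y) = ⟨x, y⟩^p from products of independent Rademacher projections; the variance of such estimators is exactly what a variance analysis quantifies.

```python
import numpy as np

def poly_random_features(X, p=3, D=20_000, seed=0):
    # For each of the D features, draw p independent Rademacher vectors w_1..w_p and set
    # phi(x) = (1/sqrt(D)) * prod_j <w_j, x>.  Since E[<w, x><w, y>] = <x, y> for a
    # Rademacher vector w and the w_j are independent, E[phi(x) . phi(y)] = <x, y>^p.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.choice([-1.0, 1.0], size=(D, p, d))
    projections = np.einsum("kpd,nd->nkp", W, X)     # <w_{k,j}, x_i>, shape (n, D, p)
    return np.prod(projections, axis=2) / np.sqrt(D)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 10))
X /= np.linalg.norm(X, axis=1, keepdims=True)        # unit-norm inputs keep kernel values in [-1, 1]

exact = (X @ X.T) ** 3                               # degree-3 polynomial dot product kernel
Z = poly_random_features(X, p=3)
print(np.linalg.norm(Z @ Z.T - exact) / np.linalg.norm(exact))  # Monte Carlo error, shrinking like 1/sqrt(D)
```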

Book chapters on the topic "Dot product kernels"

1

Chen, Degang, Qiang He, Chunru Dong, and Xizhao Wang. "A Method to Construct the Mapping to the Feature Space for the Dot Product Kernels". In Advances in Machine Learning and Cybernetics, 918–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11739685_96.

2

Lauriola, Ivano, Mirko Polato, and Fabio Aiolli. "Radius-Margin Ratio Optimization for Dot-Product Boolean Kernel Learning". In Artificial Neural Networks and Machine Learning – ICANN 2017, 183–91. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68612-7_21.


Conference papers on the topic "Dot product kernels"

1

Azevedo, Douglas, and Valdir A. Menegatto. "Eigenvalues of dot-product kernels on the sphere". In XXXV CNMAC - Congresso Nacional de Matemática Aplicada e Computacional. SBMAC, 2015. http://dx.doi.org/10.5540/03.2015.003.01.0039.

2

Chen, G. Y., and P. Bhattacharya. "Function Dot Product Kernels for Support Vector Machine". In 18th International Conference on Pattern Recognition (ICPR'06). IEEE, 2006. http://dx.doi.org/10.1109/icpr.2006.586.

3

Rashed, Muhammad Rashedul Haq, Sumit Kumar Jha, and Rickard Ewetz. "Discovering the in-Memory Kernels of 3D Dot-Product Engines". In ASPDAC '23: 28th Asia and South Pacific Design Automation Conference. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3566097.3567855.

4

Zhang, Li, Weida Zhou, Ying Lin, and Licheng Jiao. "Support vector novelty detection with dot product kernels for non-spherical data". In 2008 International Conference on Information and Automation (ICIA). IEEE, 2008. http://dx.doi.org/10.1109/icinfa.2008.4607965.

5

Venkatesan, Sibi, James K. Miller, Jeff Schneider, and Artur Dubrawski. "Scaling Active Search using Linear Similarity Functions". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/401.

Abstract:
Active Search has become an increasingly useful tool in information retrieval problems where the goal is to discover as many target elements as possible using only limited label queries. With the advent of big data, there is a growing emphasis on the scalability of such techniques to handle very large and very complex datasets. In this paper, we consider the problem of Active Search where we are given a similarity function between data points. We look at an algorithm introduced by Wang et al. [Wang et al., 2013] known as Active Search on Graphs and propose crucial modifications which allow it to scale significantly. Their approach selects points by minimizing an energy function over the graph induced by the similarity function on the data. Our modifications require the similarity function to be a dot-product between feature vectors of data points, equivalent to having a linear kernel for the adjacency matrix. With this, we are able to scale tremendously: for n data points, the original algorithm runs in O(n^2) time per iteration while ours runs in only O(nr + r^2) given r-dimensional features. We also describe a simple alternate approach using a weighted-neighbor predictor which also scales well. In our experiments, we show that our method is competitive with existing semi-supervised approaches. We also briefly discuss conditions under which our algorithm performs well.
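The scalability claim in this abstract comes down to a standard identity: with a linear kernel, the n × n similarity matrix A = X Xᵀ never has to be formed, because A v = X (Xᵀ v). A minimal sketch (my own illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 2_000, 50
X = rng.normal(size=(n, r))      # r-dimensional feature vectors; the similarity matrix is A = X @ X.T
v = rng.normal(size=n)

Av_fast = X @ (X.T @ v)          # O(n r): never materializes the n x n matrix
Av_slow = (X @ X.T) @ v          # O(n^2) time and memory; same result up to rounding
print(np.allclose(Av_fast, Av_slow))
```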
6

De Jesús Rivera, Edward, Fanny Besem-Cordova, and Jean-Charles Bonaccorsi. "Optimization of a High Pressure Industrial Fan". In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58967.

Abstract:
Fans are used in industrial refineries, power generation, petrochemistry, pollution control, etc. These fans can perform in sometimes extreme, mission-critical conditions. The design of fans has historically relied on turbomachinery affinity laws, resulting in oversized machines that are expensive to manufacture and transport. With the increasingly lower CPU cost of fluid modeling, designers can now turn to CFD optimization to produce the necessary machine performance and flow conditions while respecting manufacturing constraints. The objective of this study is to maximize the pressure rise across an industrial fan while respecting manufacturing constraints. First, a 3D scan of the baseline impeller is used to create the CFD model and validated against experimental data. The baseline impeller geometry is then parameterized with 21 free parameters driving the shape of the hub, shroud, blade lean and camber. A fully automated optimization process is conducted using Numeca’s Fine™/Design3D software, allowing for a CPU-efficient Design Of Experiment (DOE) database generation and a surrogate model using the powerful Minamo optimization kernel and data-mining tool. The optimized impeller coupled with a CFD-aided redesigned volute showed an increase in overall pressure rise over the whole performance line, up to 24% at higher mass flow rates compared to the baseline geometry.
