Academic literature on the topic "Dot product kernels"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.

Choose a source:

Consult the topical lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Dot product kernels".

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Dot product kernels"

1. Menegatto, V. A., C. P. Oliveira, and A. P. Peron. "Conditionally positive definite dot product kernels." Journal of Mathematical Analysis and Applications 321, no. 1 (September 2006): 223–41. http://dx.doi.org/10.1016/j.jmaa.2005.08.024.

2. Menegatto, V. A., C. P. Oliveira, and Ana P. Peron. "On conditionally positive definite dot product kernels." Acta Mathematica Sinica, English Series 24, no. 7 (July 2008): 1127–38. http://dx.doi.org/10.1007/s10114-007-6227-4.

3. Lu, Fangyan, and Hongwei Sun. "Positive definite dot product kernels in learning theory." Advances in Computational Mathematics 22, no. 2 (February 2005): 181–98. http://dx.doi.org/10.1007/s10444-004-3140-6.

4. Griffiths, Matthew P., Denys Grombacher, Mason A. Kass, Mathias Ø. Vang, Lichao Liu, and Jakob Juul Larsen. "A surface NMR forward in a dot product." Geophysical Journal International 234, no. 3 (April 27, 2023): 2284–90. http://dx.doi.org/10.1093/gji/ggad203.

Abstract:
The computation required to simulate surface nuclear magnetic resonance (SNMR) data increases proportionally with the number of sequences and the number of pulses in each sequence. This poses a particular challenge to modelling steady-state SNMR, where suites of sequences are acquired, each of which requires modelling 10s–100s of pulses. To model such data efficiently, we have developed a reformulation of the surface NMR forward model, where the geometry of the transmit and receive fields is encapsulated into a vector (or set of vectors), which we call B1-volume-receive (BVR) curves. Projecting BVR curve(s) along complementary magnetization solutions for a particular sequence amounts to computing the full SNMR forward model. The formulation has the additional advantage that computations for increased transmitter current amount to a relative translation between the BVR and magnetization solutions. We generate 1-D kernels using both BVR curves and standard integration techniques and find that the difference is within 2 per cent. Using BVR curves, a typical suite of steady-state kernels can be computed two orders of magnitude faster than with previous approaches.
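Note: to make the dot-product formulation described in this abstract concrete, here is a minimal, purely illustrative sketch (the array names and sizes are invented for illustration and are not the authors' code). Once the transmit/receive geometry is collapsed into a BVR vector on a shared discretization, evaluating the forward response for one sequence reduces to a dot product with the corresponding magnetization vector.

    import numpy as np

    # Illustrative only: discretize the model into n_bins shared bins, encode the
    # transmit/receive geometry as one "BVR" vector, and represent a sequence by a
    # magnetization vector on the same bins; the forward response is a dot product.
    rng = np.random.default_rng(0)
    n_bins = 500
    bvr = rng.random(n_bins)             # B1-volume-receive curve (geometry term)
    magnetization = rng.random(n_bins)   # magnetization solution for one sequence
    signal = bvr @ magnetization         # forward model collapses to a dot product
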
5. Donini, Michele, and Fabio Aiolli. "Learning deep kernels in the space of dot product polynomials." Machine Learning 106, no. 9–10 (November 7, 2016): 1245–69. http://dx.doi.org/10.1007/s10994-016-5590-8.

6. Filippas, Dionysios, Chrysostomos Nicopoulos, and Giorgos Dimitrakopoulos. "Templatized Fused Vector Floating-Point Dot Product for High-Level Synthesis." Journal of Low Power Electronics and Applications 12, no. 4 (October 17, 2022): 56. http://dx.doi.org/10.3390/jlpea12040056.

Abstract:
Machine-learning accelerators rely on floating-point matrix and vector multiplication kernels. To reduce their cost, customized many-term fused architectures are preferred, which improve the latency, power, and area of the designs. In this work, we design a parameterized fused many-term floating-point dot product architecture that is ready for high-level synthesis. In this way, we can exploit the efficiency offered by a well-structured fused dot-product architecture and the freedom offered by high-level synthesis in tuning the design’s pipeline to the selected floating-point format and architectural constraints. When compared with optimized dot-product units implemented directly in RTL, the proposed design offers lower-latency implementations under the same clock frequency with marginal area savings. This result holds for a variety of floating-point formats, including standard and reduced-precision representations.
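Note: one well-known property of fused many-term dot-product units (complementary to the latency, power, and area gains discussed above) is that partial products are accumulated without intermediate rounding and the result is rounded once at the end. The toy emulation below only illustrates that single-rounding behaviour in software; it is not the paper's RTL/HLS design.

    import numpy as np

    # Toy emulation: round-after-every-step vs. a single final rounding ("fused" style).
    rng = np.random.default_rng(0)
    a = rng.standard_normal(1024).astype(np.float16)
    b = rng.standard_normal(1024).astype(np.float16)

    stepwise = np.float16(0.0)
    for x, y in zip(a, b):
        stepwise = np.float16(stepwise + np.float16(x) * np.float16(y))  # round at every step

    fused = np.float16(np.dot(a.astype(np.float64), b.astype(np.float64)))  # one final rounding
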
7. Bishwas, Arit Kumar, Ashish Mani, and Vasile Palade. "Gaussian kernel in quantum learning." International Journal of Quantum Information 18, no. 03 (April 2020): 2050006. http://dx.doi.org/10.1142/s0219749920500069.

Abstract:
The Gaussian kernel is a very popular kernel function used in many machine learning algorithms, especially in support vector machines (SVMs). It is more often used than polynomial kernels when learning from nonlinear datasets and is usually employed in formulating the classical SVM for nonlinear problems. Rebentrost et al. discussed an elegant quantum version of a least square support vector machine using quantum polynomial kernels, which is exponentially faster than the classical counterpart. This paper demonstrates a quantum version of the Gaussian kernel and analyzes its runtime complexity using the quantum random access memory (QRAM) in the context of quantum SVM. Our analysis shows that the runtime computational complexity of the quantum Gaussian kernel is approximated to [Formula: see text] and even [Formula: see text] when [Formula: see text] and the error [Formula: see text] are small enough to be ignored, where [Formula: see text] is the dimension of the training instances, [Formula: see text] is the accuracy, [Formula: see text] is the dot product of the two quantum states, and [Formula: see text] is the Taylor remainder error term. Therefore, the run time complexity of the quantum version of the Gaussian kernel seems to be significantly faster when compared with its classical version.
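Note: as background for this abstract, the Gaussian kernel can be evaluated from dot products alone, since ||x − y||² = ⟨x, x⟩ − 2⟨x, y⟩ + ⟨y, y⟩. The snippet below is a purely classical illustration of that identity, not the quantum construction from the paper.

    import numpy as np

    def gaussian_kernel_from_dots(X, Y, gamma=0.5):
        # RBF kernel written purely in terms of dot products,
        # using ||x - y||^2 = <x, x> - 2<x, y> + <y, y>.
        sq_x = np.sum(X * X, axis=1)[:, None]
        sq_y = np.sum(Y * Y, axis=1)[None, :]
        return np.exp(-gamma * (sq_x - 2.0 * X @ Y.T + sq_y))
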
8. Xiao, Lechao, Hong Hu, Theodor Misiakiewicz, Yue M. Lu, and Jeffrey Pennington. "Precise learning curves and higher-order scaling limits for dot-product kernel regression." Journal of Statistical Mechanics: Theory and Experiment 2023, no. 11 (November 1, 2023): 114005. http://dx.doi.org/10.1088/1742-5468/ad01b7.

Abstract:
As modern machine learning models continue to advance the computational frontier, it has become increasingly important to develop precise estimates for expected performance improvements under different model and data scaling regimes. Currently, theoretical understanding of the learning curves (LCs) that characterize how the prediction error depends on the number of samples is restricted to either large-sample asymptotics (m → ∞) or, for certain simple data distributions, to the high-dimensional asymptotics in which the number of samples scales linearly with the dimension (m ∝ d). There is a wide gulf between these two regimes, including all higher-order scaling relations m ∝ d^r, which are the subject of the present paper. We focus on the problem of kernel ridge regression for dot-product kernels and present precise formulas for the mean of the test error, bias, and variance, for data drawn uniformly from the sphere with isotropic random labels in the rth-order asymptotic scaling regime m → ∞ with m/d^r held constant. We observe a peak in the LC whenever m ≈ d^r/r! for any integer r, leading to multiple sample-wise descent and non-trivial behavior at multiple scales. We include a Colab notebook (available at https://tinyurl.com/2nzym7ym) that reproduces the essential results of the paper.
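Note: for readers unfamiliar with the setup in this abstract, the following is a minimal sketch of kernel ridge regression with a dot-product kernel k(x, y) = f(⟨x, y⟩) on spherical data with random labels. The profile f, the dimensions, and the ridge parameter are arbitrary illustrative choices, not those analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 20, 200
    X = rng.standard_normal((m, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)      # data on the unit sphere
    y = rng.standard_normal(m)                         # isotropic random labels

    f = np.exp                                         # illustrative profile: k(x, y) = exp(<x, y>)
    K = f(X @ X.T)
    lam = 1e-3
    alpha = np.linalg.solve(K + lam * np.eye(m), y)    # kernel ridge regression coefficients

    x_new = rng.standard_normal(d)
    x_new /= np.linalg.norm(x_new)
    y_hat = f(X @ x_new) @ alpha                       # prediction at a test point
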
9. Iakymchuk, Roman, Stef Graillat, David Defour, and Enrique S. Quintana-Ortí. "Hierarchical approach for deriving a reproducible unblocked LU factorization." International Journal of High Performance Computing Applications 33, no. 5 (March 17, 2019): 791–803. http://dx.doi.org/10.1177/1094342019832968.

Abstract:
We propose a reproducible variant of the unblocked LU factorization for graphics processor units (GPUs). For this purpose, we build upon Level-1/2 BLAS kernels that deliver correctly-rounded and reproducible results for the dot (inner) product, vector scaling, and the matrix-vector product. In addition, we draw a strategy to enhance the accuracy of the triangular solve via iterative refinement. Following a bottom-up approach, we finally construct a reproducible unblocked implementation of the LU factorization for GPUs, which accommodates partial pivoting for stability and can be eventually integrated in a high performance and stable algorithm for the (blocked) LU factorization.
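Note: for context, the sketch below is a plain (non-reproducible) right-looking unblocked LU factorization with partial pivoting. The paper's contribution is to build exactly this kind of loop on top of Level-1/2 BLAS kernels (dot product, vector scaling, matrix-vector product) that return correctly rounded, bitwise-reproducible results, which this toy NumPy version does not attempt.

    import numpy as np

    def unblocked_lu(A):
        # Plain right-looking unblocked LU with partial pivoting (illustrative only).
        A = np.array(A, dtype=float)
        n = A.shape[0]
        piv = np.arange(n)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))                        # partial pivoting for stability
            A[[k, p]] = A[[p, k]]
            piv[[k, p]] = piv[[p, k]]
            A[k + 1:, k] /= A[k, k]                                    # Level-1: scale the column
            A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])  # Level-2: rank-1 update
        return A, piv                                                  # L\U packed in A, plus pivot order
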
10. Azevedo, D., and V. A. Menegatto. "Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere." Journal of Approximation Theory 177 (January 2014): 57–68. http://dx.doi.org/10.1016/j.jat.2013.10.002.


Theses on the topic "Dot product kernels"

1. Wacker, Jonas. "Random features for dot product kernels and beyond." Electronic thesis or dissertation, Sorbonne université, 2022. http://www.theses.fr/2022SORUS241.

Abstract:
Dot product kernels, such as polynomial and exponential (softmax) kernels, are among the most widely used kernels in machine learning, as they enable modeling the interactions between input features, which is crucial in applications like computer vision, natural language processing, and recommender systems. However, a fundamental drawback of kernel-based statistical models is their limited scalability to a large number of inputs, which requires resorting to approximations. In this thesis, we study techniques to linearize kernel-based methods by means of random feature approximations, and we focus on the approximation of polynomial kernels and more general dot product kernels to make these kernels more useful in large-scale learning. In particular, we focus on a variance analysis as the main tool to study and improve the statistical efficiency of such sketches.
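Note: to illustrate the kind of linearization the thesis studies (a generic sketch, not the thesis's specific constructions), the snippet below builds unbiased random features for the homogeneous polynomial kernel ⟨x, y⟩^p. Each feature multiplies p independent Rademacher projections, so the expected inner product of two feature vectors equals the kernel value; the variance analysis mentioned above governs how many such features are needed.

    import numpy as np

    def poly_random_features(X, degree=2, n_features=2000, seed=0):
        # Each feature is a product of `degree` independent Rademacher projections,
        # so E[z(x) z(y)] = <x, y>**degree; averaging n_features of them approximates the kernel.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = rng.choice([-1.0, 1.0], size=(degree, n_features, d))
        Z = np.ones((n, n_features))
        for k in range(degree):
            Z *= X @ W[k].T
        return Z / np.sqrt(n_features)

    # Usage: Z = poly_random_features(X); then Z @ Z.T approximates (X @ X.T) ** degree.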

Book chapters on the topic "Dot product kernels"

1. Chen, Degang, Qiang He, Chunru Dong, and Xizhao Wang. "A Method to Construct the Mapping to the Feature Space for the Dot Product Kernels." In Advances in Machine Learning and Cybernetics, 918–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11739685_96.

2. Lauriola, Ivano, Mirko Polato, and Fabio Aiolli. "Radius-Margin Ratio Optimization for Dot-Product Boolean Kernel Learning." In Artificial Neural Networks and Machine Learning – ICANN 2017, 183–91. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68612-7_21.


Conference papers on the topic "Dot product kernels"

1. Azevedo, Douglas, and Valdir A. Menegatto. "Eigenvalues of dot-product kernels on the sphere." In XXXV CNMAC - Congresso Nacional de Matemática Aplicada e Computacional. SBMAC, 2015. http://dx.doi.org/10.5540/03.2015.003.01.0039.

2. Chen, G. Y., and P. Bhattacharya. "Function Dot Product Kernels for Support Vector Machine." In 18th International Conference on Pattern Recognition (ICPR'06). IEEE, 2006. http://dx.doi.org/10.1109/icpr.2006.586.

3. Rashed, Muhammad Rashedul Haq, Sumit Kumar Jha, and Rickard Ewetz. "Discovering the in-Memory Kernels of 3D Dot-Product Engines." In ASPDAC '23: 28th Asia and South Pacific Design Automation Conference. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3566097.3567855.

4. Zhang, Li, Weida Zhou, Ying Lin, and Licheng Jiao. "Support vector novelty detection with dot product kernels for non-spherical data." In 2008 International Conference on Information and Automation (ICIA). IEEE, 2008. http://dx.doi.org/10.1109/icinfa.2008.4607965.

5. Venkatesan, Sibi, James K. Miller, Jeff Schneider, and Artur Dubrawski. "Scaling Active Search using Linear Similarity Functions." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/401.

Abstract:
Active Search has become an increasingly useful tool in information retrieval problems where the goal is to discover as many target elements as possible using only limited label queries. With the advent of big data, there is a growing emphasis on the scalability of such techniques to handle very large and very complex datasets. In this paper, we consider the problem of Active Search where we are given a similarity function between data points. We look at an algorithm introduced by Wang et al. [Wang et al., 2013] known as Active Search on Graphs and propose crucial modifications which allow it to scale significantly. Their approach selects points by minimizing an energy function over the graph induced by the similarity function on the data. Our modifications require the similarity function to be a dot-product between feature vectors of data points, equivalent to having a linear kernel for the adjacency matrix. With this, we are able to scale tremendously: for n data points, the original algorithm runs in O(n^2) time per iteration while ours runs in only O(nr + r^2) given r-dimensional features. We also describe a simple alternate approach using a weighted-neighbor predictor which also scales well. In our experiments, we show that our method is competitive with existing semi-supervised approaches. We also briefly discuss conditions under which our algorithm performs well.
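Note: the scaling argument in this abstract rests on a simple linear-algebra fact worth spelling out. When the similarity is a dot product between r-dimensional feature vectors, the n × n kernel matrix K = XXᵀ never has to be formed, because any product Kv can be evaluated as X(Xᵀv) in O(nr) time. A minimal illustration (generic, not the authors' implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 100_000, 50
    X = rng.standard_normal((n, r))      # feature vectors; K = X @ X.T is never built
    v = rng.standard_normal(n)
    Kv = X @ (X.T @ v)                   # same result as (X @ X.T) @ v, but O(n*r) instead of O(n^2)
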
6. De Jesús Rivera, Edward, Fanny Besem-Cordova, and Jean-Charles Bonaccorsi. "Optimization of a High Pressure Industrial Fan." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58967.

Abstract:
Fans are used in industrial refineries, power generation, petrochemistry, pollution control, etc. These fans can perform in sometimes extreme, mission-critical conditions. The design of fans has historically relied on turbomachinery affinity laws, resulting in oversized machines that are expensive to manufacture and transport. With the increasingly lower CPU cost of fluid modeling, designers can now turn to CFD optimization to produce the necessary machine performance and flow conditions while respecting manufacturing constraints. The objective of this study is to maximize the pressure rise across an industrial fan while respecting manufacturing constraints. First, a 3D scan of the baseline impeller is used to create the CFD model, which is validated against experimental data. The baseline impeller geometry is then parameterized with 21 free parameters driving the shape of the hub, shroud, blade lean and camber. A fully automated optimization process is conducted using Numeca’s Fine™/Design3D software, allowing for a CPU-efficient Design of Experiments (DOE) database generation and a surrogate model using the powerful Minamo optimization kernel and data-mining tool. The optimized impeller coupled with a CFD-aided redesigned volute showed an increase in overall pressure rise over the whole performance line, up to 24% at higher mass flow rates compared to the baseline geometry.