Selection of scholarly literature on the topic "Dot product kernels"

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Dot product kernels".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Dot product kernels"

1

Menegatto, V. A., C. P. Oliveira, and A. P. Peron. "Conditionally positive definite dot product kernels." Journal of Mathematical Analysis and Applications 321, no. 1 (September 2006): 223–41. http://dx.doi.org/10.1016/j.jmaa.2005.08.024.

2

Menegatto, V. A., C. P. Oliveira, and Ana P. Peron. "On conditionally positive definite dot product kernels." Acta Mathematica Sinica, English Series 24, no. 7 (July 2008): 1127–38. http://dx.doi.org/10.1007/s10114-007-6227-4.

3

Lu, Fangyan, and Hongwei Sun. "Positive definite dot product kernels in learning theory." Advances in Computational Mathematics 22, no. 2 (February 2005): 181–98. http://dx.doi.org/10.1007/s10444-004-3140-6.

4

Griffiths, Matthew P., Denys Grombacher, Mason A. Kass, Mathias Ø. Vang, Lichao Liu, and Jakob Juul Larsen. "A surface NMR forward in a dot product." Geophysical Journal International 234, no. 3 (April 27, 2023): 2284–90. http://dx.doi.org/10.1093/gji/ggad203.

Abstract:
The computation required to simulate surface nuclear magnetic resonance (SNMR) data increases proportionally with the number of sequences and the number of pulses in each sequence. This poses a particular challenge to modelling steady-state SNMR, where suites of sequences are acquired, each of which requires modelling tens to hundreds of pulses. To model such data efficiently, we have developed a reformulation of the surface NMR forward model, where the geometry of the transmit and receive fields is encapsulated into a vector (or set of vectors), which we call B1-volume-receive (BVR) curves. Projecting BVR curve(s) along complementary magnetization solutions for a particular sequence amounts to computing the full SNMR forward model. The formulation has the additional advantage that computations for increased transmitter current amount to a relative translation between the BVR and magnetization solutions. We generate 1-D kernels using both BVR curves and standard integration techniques and find that the difference is within 2 per cent. Using BVR curves, a typical suite of steady-state kernels can be computed two orders of magnitude faster than with previous approaches.
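
As a purely schematic illustration of the dot-product structure described in this abstract: once the transmit/receive geometry is collapsed into a BVR vector, the forward response of each pulse is a single projection onto the corresponding magnetization solution. All array names and shapes below (bvr, magnetization, the amplitude binning) are hypothetical placeholders, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_bins = 512                      # hypothetical discretization of effective B1 amplitude
    n_pulses = 100                    # steady-state sequences can involve many pulses

    # BVR curve: transmit/receive field geometry collapsed into one vector.
    bvr = rng.random(n_bins)

    # Magnetization solution sampled on the same bins, one row per pulse.
    magnetization = rng.random((n_pulses, n_bins))

    # Forward response of every pulse = projection (dot product) of the
    # BVR curve onto the corresponding magnetization solution.
    forward = magnetization @ bvr     # shape (n_pulses,)

    # A higher transmitter current corresponds (schematically) to a relative
    # shift between the two curves, so it can be modelled by re-indexing
    # rather than recomputing the field geometry.
    shift = 3
    forward_shifted = magnetization[:, shift:] @ bvr[:-shift]
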
5

Donini, Michele, and Fabio Aiolli. "Learning deep kernels in the space of dot product polynomials." Machine Learning 106, no. 9–10 (November 7, 2016): 1245–69. http://dx.doi.org/10.1007/s10994-016-5590-8.

6

Filippas, Dionysios, Chrysostomos Nicopoulos, and Giorgos Dimitrakopoulos. "Templatized Fused Vector Floating-Point Dot Product for High-Level Synthesis." Journal of Low Power Electronics and Applications 12, no. 4 (October 17, 2022): 56. http://dx.doi.org/10.3390/jlpea12040056.

Abstract:
Machine-learning accelerators rely on floating-point matrix and vector multiplication kernels. To reduce their cost, customized many-term fused architectures are preferred, which improve the latency, power, and area of the designs. In this work, we design a parameterized fused many-term floating-point dot product architecture that is ready for high-level synthesis. In this way, we can exploit the efficiency offered by a well-structured fused dot-product architecture and the freedom offered by high-level synthesis in tuning the design’s pipeline to the selected floating-point format and architectural constraints. When compared with optimized dot-product units implemented directly in RTL, the proposed design offers lower-latency implementations under the same clock frequency with marginal area savings. This result holds for a variety of floating-point formats, including standard and reduced-precision representations.
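
The benefit of a fused many-term dot product is that the products and partial sums are not rounded back to the storage format after every operation; the result is rounded once at the end. The snippet below is only a numerical illustration of that effect, using NumPy's float16 as a stand-in for a reduced-precision format; it is not the paper's hardware architecture or its high-level-synthesis code.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(64).astype(np.float16)
    y = rng.standard_normal(64).astype(np.float16)

    # Non-fused: every multiply and add is rounded back to float16.
    acc = np.float16(0.0)
    for xi, yi in zip(x, y):
        acc = np.float16(acc + np.float16(xi * yi))

    # Fused (conceptually): accumulate all terms in higher precision,
    # round to the target format only once at the end.
    fused = np.float16(x.astype(np.float64) @ y.astype(np.float64))

    exact = x.astype(np.float64) @ y.astype(np.float64)
    print(abs(float(acc) - exact), abs(float(fused) - exact))
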
7

Bishwas, Arit Kumar, Ashish Mani, and Vasile Palade. "Gaussian kernel in quantum learning." International Journal of Quantum Information 18, no. 03 (April 2020): 2050006. http://dx.doi.org/10.1142/s0219749920500069.

Abstract:
The Gaussian kernel is a very popular kernel function used in many machine learning algorithms, especially in support vector machines (SVMs). It is more often used than polynomial kernels when learning from nonlinear datasets and is usually employed in formulating the classical SVM for nonlinear problems. Rebentrost et al. discussed an elegant quantum version of a least-squares support vector machine using quantum polynomial kernels, which is exponentially faster than the classical counterpart. This paper demonstrates a quantum version of the Gaussian kernel and analyzes its runtime complexity using quantum random access memory (QRAM) in the context of quantum SVM. Our analysis shows that the runtime computational complexity of the quantum Gaussian kernel is approximately [Formula: see text], and even [Formula: see text] when [Formula: see text] and the error [Formula: see text] are small enough to be ignored, where [Formula: see text] is the dimension of the training instances, [Formula: see text] is the accuracy, [Formula: see text] is the dot product of the two quantum states, and [Formula: see text] is the Taylor remainder error term. Therefore, the quantum version of the Gaussian kernel appears to be significantly faster than its classical version.
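
The connection exploited in this line of work is that the Gaussian kernel factorizes into norm terms times exp(⟨x, y⟩/σ²), and the latter expands into a Taylor series of polynomial (dot-product) kernels. The sketch below is a purely classical illustration of that identity and its truncation; the truncation order R is an arbitrary choice for the example, and nothing here reproduces the quantum algorithm.

    import numpy as np
    from math import factorial

    def gaussian_kernel(x, y, sigma=1.0):
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    def gaussian_via_dot_products(x, y, sigma=1.0, R=10):
        # exp(-||x - y||^2 / 2s^2)
        #   = exp(-||x||^2 / 2s^2) * exp(-||y||^2 / 2s^2) * exp(<x, y> / s^2),
        # and the last factor is a Taylor series of dot-product kernels,
        # truncated here after R terms.
        s2 = sigma ** 2
        prefac = np.exp(-(x @ x) / (2 * s2)) * np.exp(-(y @ y) / (2 * s2))
        series = sum((x @ y / s2) ** r / factorial(r) for r in range(R + 1))
        return prefac * series

    x = np.array([0.3, -0.1, 0.7])
    y = np.array([0.2, 0.4, -0.5])
    print(gaussian_kernel(x, y), gaussian_via_dot_products(x, y))
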
8

Xiao, Lechao, Hong Hu, Theodor Misiakiewicz, Yue M. Lu, and Jeffrey Pennington. "Precise learning curves and higher-order scaling limits for dot-product kernel regression." Journal of Statistical Mechanics: Theory and Experiment 2023, no. 11 (November 1, 2023): 114005. http://dx.doi.org/10.1088/1742-5468/ad01b7.

Abstract:
As modern machine learning models continue to advance the computational frontier, it has become increasingly important to develop precise estimates for expected performance improvements under different model and data scaling regimes. Currently, theoretical understanding of the learning curves (LCs) that characterize how the prediction error depends on the number of samples is restricted to either large-sample asymptotics (m → ∞) or, for certain simple data distributions, to the high-dimensional asymptotics in which the number of samples scales linearly with the dimension (m ∝ d). There is a wide gulf between these two regimes, including all higher-order scaling relations m ∝ d^r, which are the subject of the present paper. We focus on the problem of kernel ridge regression for dot-product kernels and present precise formulas for the mean of the test error, bias and variance, for data drawn uniformly from the sphere with isotropic random labels in the r-th-order asymptotic scaling regime m → ∞ with m/d^r held constant. We observe a peak in the LC whenever m ≈ d^r/r! for any integer r, leading to multiple sample-wise descent and non-trivial behavior at multiple scales. We include a Colab notebook (available at https://tinyurl.com/2nzym7ym) that reproduces the essential results of the paper.
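
For concreteness, a dot-product kernel on the sphere is any kernel of the form k(x, y) = κ(⟨x, y⟩) evaluated on unit-norm inputs, and the regression studied above is ordinary kernel ridge regression with such a kernel. The sketch below sets this up with an arbitrary κ and ridge parameter; the specific choices (κ = exp, λ = 1e-3, the data sizes) are illustrative assumptions, not the paper's experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 20, 200

    # Data drawn uniformly from the unit sphere in R^d, isotropic random labels.
    X = rng.standard_normal((m, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    y = rng.standard_normal(m)

    def dot_product_kernel(A, B, kappa=np.exp):
        # k(x, y) = kappa(<x, y>); kappa is an arbitrary illustrative choice.
        return kappa(A @ B.T)

    lam = 1e-3                                  # ridge parameter (illustrative)
    K = dot_product_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(m), y)

    X_test = rng.standard_normal((5, d))
    X_test /= np.linalg.norm(X_test, axis=1, keepdims=True)
    y_pred = dot_product_kernel(X_test, X) @ alpha
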
9

Iakymchuk, Roman, Stef Graillat, David Defour, and Enrique S. Quintana-Ortí. "Hierarchical approach for deriving a reproducible unblocked LU factorization." International Journal of High Performance Computing Applications 33, no. 5 (March 17, 2019): 791–803. http://dx.doi.org/10.1177/1094342019832968.

Abstract:
We propose a reproducible variant of the unblocked LU factorization for graphics processor units (GPUs). For this purpose, we build upon Level-1/2 BLAS kernels that deliver correctly-rounded and reproducible results for the dot (inner) product, vector scaling, and the matrix-vector product. In addition, we draw a strategy to enhance the accuracy of the triangular solve via iterative refinement. Following a bottom-up approach, we finally construct a reproducible unblocked implementation of the LU factorization for GPUs, which accommodates partial pivoting for stability and can be eventually integrated in a high performance and stable algorithm for the (blocked) LU factorization.
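
The building block referred to above is a dot product whose result does not depend on the order of accumulation. The authors rely on correctly rounded kernels (error-free transformations and long accumulators); as a much simpler stand-in, the sketch below shows a Neumaier-compensated dot product, which only reduces, not eliminates, the effect of rounding order, and is not the paper's GPU implementation.

    def compensated_dot(x, y):
        # Dot product with Neumaier-compensated summation: a running
        # compensation term recovers the low-order bits lost at each add.
        s = 0.0   # running sum
        c = 0.0   # running compensation
        for xi, yi in zip(x, y):
            p = xi * yi
            t = s + p
            if abs(s) >= abs(p):
                c += (s - t) + p      # low-order bits of p were lost
            else:
                c += (p - t) + s      # low-order bits of s were lost
            s = t
        return s + c

    # Naive left-to-right summation returns 0.0 here; the compensated
    # version recovers the exact value 1.0.
    print(compensated_dot([1e16, 1.0, -1e16], [1.0, 1.0, 1.0]))
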
10

Azevedo, D., and V. A. Menegatto. "Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere." Journal of Approximation Theory 177 (January 2014): 57–68. http://dx.doi.org/10.1016/j.jat.2013.10.002.


Dissertations on the topic "Dot product kernels"

1

Wacker, Jonas. "Random features for dot product kernels and beyond." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS241.

Abstract:
Dot product kernels, such as polynomial and exponential (softmax) kernels, are among the most widely used kernels in machine learning, as they enable modeling the interactions between input features, which is crucial in applications like computer vision, natural language processing, and recommender systems. However, a fundamental drawback of kernel-based statistical models is their limited scalability to a large number of inputs, which requires resorting to approximations. In this thesis, we study techniques to linearize kernel-based methods by means of random feature approximations, focusing on the approximation of polynomial kernels and more general dot product kernels to make these kernels more useful in large-scale learning. In particular, we use a variance analysis as the main tool to study and improve the statistical efficiency of such sketches.
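
One standard family of random features for dot-product kernels approximates the homogeneous polynomial kernel (x · y)^p by products of random projections (a Rademacher / Random-Maclaurin-type estimator). The sketch below implements that baseline estimator as an illustration of what "linearizing" such a kernel means; it is a generic textbook construction under assumed parameters (degree, sketch size D), not a reproduction of the thesis's improved sketches.

    import numpy as np

    def rademacher_features(X, degree, D, seed=0):
        # Random feature map Phi with E[Phi(x) . Phi(y)] = (x . y)**degree:
        # each of the D features is a product of `degree` independent
        # Rademacher projections, scaled by 1/sqrt(D).
        rng = np.random.default_rng(seed)
        n, d = X.shape
        Phi = np.ones((n, D))
        for _ in range(degree):
            W = rng.choice([-1.0, 1.0], size=(d, D))
            Phi *= X @ W
        return Phi / np.sqrt(D)

    rng = np.random.default_rng(1)
    X = rng.standard_normal((4, 8))
    Phi = rademacher_features(X, degree=3, D=20000)
    exact = (X @ X.T) ** 3          # exact homogeneous polynomial kernel
    approx = Phi @ Phi.T            # linearized (random feature) estimate
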

Book chapters on the topic "Dot product kernels"

1

Chen, Degang, Qiang He, Chunru Dong, and Xizhao Wang. "A Method to Construct the Mapping to the Feature Space for the Dot Product Kernels." In Advances in Machine Learning and Cybernetics, 918–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11739685_96.

2

Lauriola, Ivano, Mirko Polato, and Fabio Aiolli. "Radius-Margin Ratio Optimization for Dot-Product Boolean Kernel Learning." In Artificial Neural Networks and Machine Learning – ICANN 2017, 183–91. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68612-7_21.


Conference papers on the topic "Dot product kernels"

1

Azevedo, Douglas, and Valdir A. Menegatto. "Eigenvalues of dot-product kernels on the sphere." In XXXV CNMAC - Congresso Nacional de Matemática Aplicada e Computacional. SBMAC, 2015. http://dx.doi.org/10.5540/03.2015.003.01.0039.

2

Chen, G. Y., and P. Bhattacharya. "Function Dot Product Kernels for Support Vector Machine." In 18th International Conference on Pattern Recognition (ICPR'06). IEEE, 2006. http://dx.doi.org/10.1109/icpr.2006.586.

3

Rashed, Muhammad Rashedul Haq, Sumit Kumar Jha, and Rickard Ewetz. "Discovering the in-Memory Kernels of 3D Dot-Product Engines." In ASPDAC '23: 28th Asia and South Pacific Design Automation Conference. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3566097.3567855.

4

Li Zhang, Zhou Weida, Ying Lin, and Licheng Jiao. "Support vector novelty detection with dot product kernels for non-spherical data." In 2008 International Conference on Information and Automation (ICIA). IEEE, 2008. http://dx.doi.org/10.1109/icinfa.2008.4607965.

5

Venkatesan, Sibi, James K. Miller, Jeff Schneider, and Artur Dubrawski. "Scaling Active Search using Linear Similarity Functions." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/401.

Abstract:
Active Search has become an increasingly useful tool in information retrieval problems where the goal is to discover as many target elements as possible using only limited label queries. With the advent of big data, there is a growing emphasis on the scalability of such techniques to handle very large and very complex datasets. In this paper, we consider the problem of Active Search where we are given a similarity function between data points. We look at an algorithm introduced by Wang et al. [Wang et al., 2013] known as Active Search on Graphs and propose crucial modifications which allow it to scale significantly. Their approach selects points by minimizing an energy function over the graph induced by the similarity function on the data. Our modifications require the similarity function to be a dot-product between feature vectors of data points, equivalent to having a linear kernel for the adjacency matrix. With this, we are able to scale tremendously: for n data points, the original algorithm runs in O(n^2) time per iteration while ours runs in only O(nr + r^2) given r-dimensional features. We also describe a simple alternate approach using a weighted-neighbor predictor which also scales well. In our experiments, we show that our method is competitive with existing semi-supervised approaches. We also briefly discuss conditions under which our algorithm performs well.
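
The complexity reduction mentioned in this abstract comes from never materializing the n × n similarity matrix: with a dot-product (linear-kernel) similarity the adjacency is A = X Xᵀ, so any product A v can be evaluated as X (Xᵀ v) in O(nr) time and memory. A minimal illustration of just that trick (not the Active Search algorithm itself), with arbitrary sizes:

    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 10_000, 32
    X = rng.standard_normal((n, r))   # r-dimensional features for n points
    v = rng.standard_normal(n)

    # Naive approach: A = X @ X.T needs O(n^2) memory and O(n^2 r) time.
    # Linear-kernel trick: A @ v == X @ (X.T @ v), computed in O(n r).
    Av = X @ (X.T @ v)
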
6

De Jesús Rivera, Edward, Fanny Besem-Cordova, and Jean-Charles Bonaccorsi. "Optimization of a High Pressure Industrial Fan." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58967.

Abstract:
Fans are used in industrial refineries, power generation, petrochemistry, pollution control, etc. These fans can perform in sometimes extreme, mission-critical conditions. The design of fans has historically relied on turbomachinery affinity laws, resulting in oversized machines that are expensive to manufacture and transport. With the increasingly lower CPU cost of fluid modeling, designers can now turn to CFD optimization to produce the necessary machine performance and flow conditions while respecting manufacturing constraints. The objective of this study is to maximize the pressure rise across an industrial fan while respecting manufacturing constraints. First, a 3D scan of the baseline impeller is used to create the CFD model and validated against experimental data. The baseline impeller geometry is then parameterized with 21 free parameters driving the shape of the hub, shroud, blade lean and camber. A fully automated optimization process is conducted using Numeca’s Fine™/Design3D software, allowing for a CPU-efficient Design Of Experiment (DOE) database generation and a surrogate model using the powerful Minamo optimization kernel and data-mining tool. The optimized impeller coupled with a CFD-aided redesigned volute showed an increase in overall pressure rise over the whole performance line, up to 24% at higher mass flow rates compared to the baseline geometry.