Journal articles on the topic "Dot product kernels"

Follow this link to see other types of publications on the topic: Dot product kernels.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

See the 48 best journal articles for research on the topic "Dot product kernels".

Next to each source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is available in the metadata.

Browse journal articles from a wide variety of scientific fields and compile a correct bibliography.

1

Menegatto, V. A., C. P. Oliveira, and A. P. Peron. "Conditionally positive definite dot product kernels". Journal of Mathematical Analysis and Applications 321, no. 1 (September 2006): 223–41. http://dx.doi.org/10.1016/j.jmaa.2005.08.024.

2

Menegatto, V. A., C. P. Oliveira, and Ana P. Peron. "On conditionally positive definite dot product kernels". Acta Mathematica Sinica, English Series 24, no. 7 (July 2008): 1127–38. http://dx.doi.org/10.1007/s10114-007-6227-4.

3

Lu, Fangyan, and Hongwei Sun. "Positive definite dot product kernels in learning theory". Advances in Computational Mathematics 22, no. 2 (February 2005): 181–98. http://dx.doi.org/10.1007/s10444-004-3140-6.

4

Griffiths, Matthew P., Denys Grombacher, Mason A. Kass, Mathias Ø. Vang, Lichao Liu, and Jakob Juul Larsen. "A surface NMR forward in a dot product". Geophysical Journal International 234, no. 3 (27 April 2023): 2284–90. http://dx.doi.org/10.1093/gji/ggad203.

Abstract:
The computation required to simulate surface nuclear magnetic resonance (SNMR) data increases proportionally with the number of sequences and the number of pulses in each sequence. This poses a particular challenge to modelling steady-state SNMR, where suites of sequences are acquired, each of which requires modelling 10s–100s of pulses. To model such data efficiently, we have developed a reformulation of the surface NMR forward model, where the geometry of the transmit and receive fields is encapsulated into a vector (or set of vectors), which we call B1-volume-receive (BVR) curves. Projecting BVR curve(s) along complementary magnetization solutions for a particular sequence amounts to computing the full SNMR forward model. The formulation has the additional advantage that computation for increased transmitter current amounts to a relative translation between the BVR and magnetization solutions. We generate 1-D kernels using BVR curves and using standard integration techniques and find that the difference is within 2 per cent. Using BVR curves, a typical suite of steady-state kernels can be computed two orders of magnitude faster than with previous approaches.
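The reformulation above reduces every forward-model sample to a projection (dot product) of a BVR curve onto a magnetization solution. The minimal NumPy sketch below illustrates only that idea; the array names, sizes and random contents are invented for illustration and are not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
n_points = 500    # discretization of the BVR curve (illustrative size)
n_pulses = 50     # pulses in one steady-state sequence (illustrative size)

bvr = rng.normal(size=n_points)              # BVR curve: transmit/receive geometry collapsed into a vector
M = rng.normal(size=(n_points, n_pulses))    # one magnetization solution per pulse, stored as columns

# Each forward-model sample is the dot product of the BVR curve with a magnetization
# solution, so the whole sequence is a single matrix-vector product.
forward = bvr @ M    # shape (n_pulses,)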
5

Donini, Michele, and Fabio Aiolli. "Learning deep kernels in the space of dot product polynomials". Machine Learning 106, no. 9-10 (7 November 2016): 1245–69. http://dx.doi.org/10.1007/s10994-016-5590-8.

6

Filippas, Dionysios, Chrysostomos Nicopoulos, and Giorgos Dimitrakopoulos. "Templatized Fused Vector Floating-Point Dot Product for High-Level Synthesis". Journal of Low Power Electronics and Applications 12, no. 4 (17 October 2022): 56. http://dx.doi.org/10.3390/jlpea12040056.

Abstract:
Machine-learning accelerators rely on floating-point matrix and vector multiplication kernels. To reduce their cost, customized many-term fused architectures are preferred, which improve the latency, power, and area of the designs. In this work, we design a parameterized fused many-term floating-point dot product architecture that is ready for high-level synthesis. In this way, we can exploit the efficiency offered by a well-structured fused dot-product architecture and the freedom offered by high-level synthesis in tuning the design’s pipeline to the selected floating-point format and architectural constraints. When compared with optimized dot-product units implemented directly in RTL, the proposed design offers lower-latency implementations under the same clock frequency with marginal area savings. This result holds for a variety of floating-point formats, including standard and reduced-precision representations.
7

Bishwas, Arit Kumar, Ashish Mani, and Vasile Palade. "Gaussian kernel in quantum learning". International Journal of Quantum Information 18, no. 03 (April 2020): 2050006. http://dx.doi.org/10.1142/s0219749920500069.

Abstract:
The Gaussian kernel is a very popular kernel function used in many machine learning algorithms, especially in support vector machines (SVMs). It is more often used than polynomial kernels when learning from nonlinear datasets and is usually employed in formulating the classical SVM for nonlinear problems. Rebentrost et al. discussed an elegant quantum version of a least square support vector machine using quantum polynomial kernels, which is exponentially faster than the classical counterpart. This paper demonstrates a quantum version of the Gaussian kernel and analyzes its runtime complexity using the quantum random access memory (QRAM) in the context of quantum SVM. Our analysis shows that the runtime computational complexity of the quantum Gaussian kernel is approximated to [Formula: see text] and even [Formula: see text] when [Formula: see text] and the error [Formula: see text] are small enough to be ignored, where [Formula: see text] is the dimension of the training instances, [Formula: see text] is the accuracy, [Formula: see text] is the dot product of the two quantum states, and [Formula: see text] is the Taylor remainder error term. Therefore, the run time complexity of the quantum version of the Gaussian kernel seems to be significantly faster when compared with its classical version.
8

Xiao, Lechao, Hong Hu, Theodor Misiakiewicz, Yue M. Lu, and Jeffrey Pennington. "Precise learning curves and higher-order scaling limits for dot-product kernel regression". Journal of Statistical Mechanics: Theory and Experiment 2023, no. 11 (1 November 2023): 114005. http://dx.doi.org/10.1088/1742-5468/ad01b7.

Abstract:
As modern machine learning models continue to advance the computational frontier, it has become increasingly important to develop precise estimates for expected performance improvements under different model and data scaling regimes. Currently, theoretical understanding of the learning curves (LCs) that characterize how the prediction error depends on the number of samples is restricted to either large-sample asymptotics (m → ∞) or, for certain simple data distributions, to the high-dimensional asymptotics in which the number of samples scales linearly with the dimension (m ∝ d). There is a wide gulf between these two regimes, including all higher-order scaling relations m ∝ d^r, which are the subject of the present paper. We focus on the problem of kernel ridge regression for dot-product kernels and present precise formulas for the mean of the test error, bias and variance, for data drawn uniformly from the sphere with isotropic random labels in the rth-order asymptotic scaling regime m → ∞ with m/d^r held constant. We observe a peak in the LC whenever m ≈ d^r/r! for any integer r, leading to multiple sample-wise descent and non-trivial behavior at multiple scales. We include a colab (available at: https://tinyurl.com/2nzym7ym) notebook that reproduces the essential results of the paper.
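As a concrete instance of the setting described above, the sketch below fits kernel ridge regression with a dot-product kernel k(x, y) = f(<x, y>) to data drawn uniformly from the sphere with random labels; the choice f(t) = exp(t), the ridge parameter and the sample sizes are illustrative only, not those used in the paper.

import numpy as np

def dot_product_kernel(X, Y):
    # Dot-product kernel k(x, y) = f(<x, y>), here with the illustrative choice f(t) = exp(t).
    return np.exp(X @ Y.T)

rng = np.random.default_rng(0)
d, m = 20, 300
X = rng.normal(size=(m, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # data uniform on the sphere
y = rng.normal(size=m)                             # isotropic random labels

lam = 1e-2                                         # ridge regularization (illustrative)
K = dot_product_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(m), y)    # kernel ridge regression coefficients

X_test = rng.normal(size=(5, d))
X_test /= np.linalg.norm(X_test, axis=1, keepdims=True)
y_pred = dot_product_kernel(X_test, X) @ alpha     # predictions at new points on the sphere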
9

Iakymchuk, Roman, Stef Graillat, David Defour, and Enrique S. Quintana-Ortí. "Hierarchical approach for deriving a reproducible unblocked LU factorization". International Journal of High Performance Computing Applications 33, no. 5 (17 March 2019): 791–803. http://dx.doi.org/10.1177/1094342019832968.

Abstract:
We propose a reproducible variant of the unblocked LU factorization for graphics processor units (GPUs). For this purpose, we build upon Level-1/2 BLAS kernels that deliver correctly-rounded and reproducible results for the dot (inner) product, vector scaling, and the matrix-vector product. In addition, we draw a strategy to enhance the accuracy of the triangular solve via iterative refinement. Following a bottom-up approach, we finally construct a reproducible unblocked implementation of the LU factorization for GPUs, which accommodates partial pivoting for stability and can be eventually integrated in a high performance and stable algorithm for the (blocked) LU factorization.
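Correctly rounded, reproducible dot products are typically built from error-free transformations and long accumulators; the sketch below shows a much simpler relative, a dot product accumulated with Neumaier's compensated summation, purely to illustrate how a compensation term recovers bits lost during floating-point accumulation. It is not the algorithm used in the paper.

def compensated_dot(x, y):
    # Dot product accumulated with Neumaier's compensated summation.
    s = 0.0   # running sum
    c = 0.0   # accumulated compensation for lost low-order bits
    for xi, yi in zip(x, y):
        p = xi * yi
        t = s + p
        if abs(s) >= abs(p):
            c += (s - t) + p   # low-order bits of p were lost in s + p
        else:
            c += (p - t) + s   # low-order bits of s were lost in s + p
        s = t
    return s + c

# Naive accumulation loses the middle term here; the compensated version recovers it.
print(compensated_dot([1e16, 1.0, -1e16], [1.0, 1.0, 1.0]))   # 1.0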
10

Azevedo, D., and V. A. Menegatto. "Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere". Journal of Approximation Theory 177 (January 2014): 57–68. http://dx.doi.org/10.1016/j.jat.2013.10.002.

11

Micchelli, Charles A., and Massimiliano Pontil. "On Learning Vector-Valued Functions". Neural Computation 17, no. 1 (1 January 2005): 177–204. http://dx.doi.org/10.1162/0899766052530802.

Abstract:
In this letter, we provide a study of learning in a Hilbert space of vector-valued functions. We motivate the need for extending learning theory of scalar-valued functions by practical considerations and establish some basic results for learning vector-valued functions that should prove useful in applications. Specifically, we allow an output space Y to be a Hilbert space, and we consider a reproducing kernel Hilbert space of functions whose values lie in Y. In this setting, we derive the form of the minimal norm interpolant to a finite set of data and apply it to study some regularization functionals that are important in learning theory. We consider specific examples of such functionals corresponding to multiple-output regularization networks and support vector machines, for both regression and classification. Finally, we provide classes of operator-valued kernels of the dot product and translation-invariant type.
12

Chen, Honghuan, and Keming Wang. "Fusing DCN and BBAV for Remote Sensing Image Object Detection". International Journal of Cognitive Informatics and Natural Intelligence 17, no. 1 (7 January 2024): 1–16. http://dx.doi.org/10.4018/ijcini.335496.

Abstract:
In oriented object detection for aerial remote sensing images, the receptive field boundaries of ordinary convolution kernels are often not parallel to the boundaries of the objects to be detected, which degrades model precision. Therefore, an object detection model (DCN-BBAV) that fuses deformable convolution networks (DCNs) and box boundary-aware vectors (BBAVs) is proposed. First, a BBAV is used as the baseline, replacing the normal convolution kernels in the backbone network with deformable convolution kernels. Then, the spatial attention module (SAM) and channel attention mechanism (CAM) are used to enhance the feature extraction ability of the DCN. Finally, the dot product of the included angles of four adjacent vectors is added to the loss function for the rotated-box parameters, improving the regression precision of the boundary vectors. The DCN-BBAV model demonstrates notable performance with a 77.30% mean average precision (mAP) on the DOTA dataset. It also outperforms other advanced rotated-box object detection methods, achieving 90.52% mAP on VOC07 and 96.67% mAP on VOC12 for HRSC2016.
13

Soliman, Mostafa I., and Elsayed A. Elsayed. "Simultaneous Multithreaded Matrix Processor". Journal of Circuits, Systems and Computers 24, no. 08 (12 August 2015): 1550114. http://dx.doi.org/10.1142/s0218126615501145.

Abstract:
This paper proposes a simultaneous multithreaded matrix processor (SMMP) to improve the performance of data-parallel applications by exploiting instruction-level parallelism (ILP), data-level parallelism (DLP) and thread-level parallelism (TLP). In SMMP, the well-known five-stage pipeline (baseline scalar processor) is extended to execute multi-scalar/vector/matrix instructions on unified parallel execution datapaths. SMMP can issue four scalar instructions from two threads each cycle, or four vector/matrix operations from one thread, where the execution of vector/matrix instructions across threads is done in round-robin fashion. Moreover, this paper presents an implementation of the proposed SMMP in VHDL targeting the FPGA Virtex-6, and evaluates its performance on some kernels from the basic linear algebra subprograms (BLAS). Our results show that the hardware complexity of SMMP is 5.68 times higher than that of the baseline scalar processor. However, speedups of 4.9, 6.09, 6.98, 8.2, 8.25, 8.72, 9.36, 11.84 and 21.57 are achieved on the BLAS kernels for applying a Givens rotation, scalar times vector plus another, vector addition, vector scaling, setting up a Givens rotation, dot product, matrix–vector multiplication, Euclidean length, and matrix–matrix multiplication, respectively. The average speedup over the baseline is 9.55 and the average speedup over complexity is 1.68. Compared with the Xilinx MicroBlaze, the complexity of SMMP is 6.36 times higher; however, its speedup ranges from 6.87 to 12.07 on vector/matrix kernels, 9.46 on average.
14

Bodó, Zalán, and Lehel Csató. "Hierarchical and Reweighting Cluster Kernels for Semi-Supervised Learning". International Journal of Computers Communications & Control 5, no. 4 (1 November 2010): 469. http://dx.doi.org/10.15837/ijccc.2010.4.2496.

Abstract:
Recently, semi-supervised methods have gained increasing attention and many novel semi-supervised learning algorithms have been proposed. These methods exploit the information contained in the usually large unlabeled data set in order to improve classification or generalization performance. Using data-dependent kernels for kernel machines, one can build semi-supervised classifiers by constructing the kernel in such a way that feature-space dot products incorporate the structure of the data set. In this paper we propose two such methods: one using a specific hierarchical clustering, and another kernel for reweighting an arbitrary base kernel by taking into account the cluster structure of the data.
15

Suárez-Cuenca, Jorge Juan, Wei Guo, and Qiang Li. "INTEGRATION OF MULTIPLE CLASSIFIERS FOR COMPUTERIZED DETECTION OF LUNG NODULES IN CT". Biomedical Engineering: Applications, Basis and Communications 27, no. 04 (August 2015): 1550040. http://dx.doi.org/10.4015/s1016237215500404.

Abstract:
The purpose of this study was to investigate the usefulness of various classifier combination methods for improving the performance of a computer-aided diagnosis (CAD) system for pulmonary nodule detection in computed tomography (CT). We employed 85 CT scans with 110 nodules in the publicly available Lung Image Database Consortium (LIDC) dataset. We first applied our previously trained CAD scheme to the LIDC cases to identify initial nodule candidates and extract 18 features for each nodule candidate. We used eight individual classifiers for the reduction of false positives (FPs), including linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), Naïve Bayes, simple logistic, an artificial neural network (ANN) and support vector machines (SVMs) with three different kernels. Five classifier combination methods were then employed to integrate the outputs of the eight individual classifiers for improving detection performance. The five combination methods included two supervised ones (a likelihood ratio (LR) method and a probability method based on the output scores of the eight individual classifiers) and three unsupervised ones (the sum, the product and the majority voting of the output scores from the eight individual classifiers). A leave-one-case-out approach was employed to train and test the individual classifiers and the supervised combination methods. At a sensitivity of 80%, the numbers of FPs per CT scan for the eight individual classifiers were 6.1 for LDA, 19.9 for QDA, 10.8 for Naïve Bayes, 8.4 for simple logistic, 8.6 for ANN, 23.7 for SVM-dot, 17.0 for SVM-poly, and 23.4 for SVM-anova; the numbers of FPs per CT scan for the five combination methods were 3.3 for the majority voting method, 5.0 for the sum, 4.6 for the product, 65.7 for the LR and 3.9 for the probability method. Compared to the best individual classifier, the majority voting method reduced FPs by 45% at 80% sensitivity. The performance of our CAD system can be improved by combining multiple classifiers. The majority voting method achieved higher performance levels than the other combination methods and all individual classifiers.
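A minimal sketch of the unsupervised majority-voting combination described above: each classifier thresholds its output score into a binary vote and a candidate is kept when most classifiers accept it. The scores, thresholds and the 5-of-8 rule below are invented for illustration only.

import numpy as np

# Rows: nodule candidates; columns: output scores of the 8 individual classifiers (illustrative values).
scores = np.array([
    [0.9, 0.8, 0.7, 0.6, 0.9, 0.4, 0.8, 0.7],
    [0.2, 0.3, 0.1, 0.4, 0.2, 0.6, 0.3, 0.1],
])
thresholds = np.full(8, 0.5)        # per-classifier operating points (illustrative)

votes = scores >= thresholds        # binary decision of each classifier
keep = votes.sum(axis=1) >= 5       # majority vote: keep a candidate accepted by at least 5 of 8
print(keep)                         # [ True False]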
16

Ünal, Ali Burak, Mete Akgün, and Nico Pfeifer. "ESCAPED: Efficient Secure and Private Dot Product Framework for Kernel-based Machine Learning Algorithms with Applications in Healthcare". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (18 May 2021): 9988–96. http://dx.doi.org/10.1609/aaai.v35i11.17199.

Abstract:
Training sophisticated machine learning models usually requires many training samples. Especially in healthcare settings these samples can be very expensive, meaning that one institution alone usually does not have enough. Merging privacy-sensitive data from different sources is usually restricted by data security and data protection measures. This can lead to approaches that reduce data quality by putting noise onto the variables (e.g., in epsilon-differential privacy) or omitting certain values (e.g., for k-anonymity). Other measures based on cryptographic methods can lead to very time-consuming computations, which is especially problematic for larger multi-omics data. We address this problem by introducing ESCAPED, which stands for Efficient SeCure And PrivatE Dot product framework. ESCAPED enables the computation of the dot product of vectors from multiple sources on a third party, which later trains kernel-based machine learning algorithms, while neither sacrificing privacy nor adding noise. We have evaluated our framework on drug resistance prediction for HIV-infected people and multi-omics dimensionality reduction and clustering problems in precision medicine. In terms of execution time, our framework significantly outperforms the best-fitting existing approaches without sacrificing the performance of the algorithm. Even though we only present the benefit for kernel-based algorithms, our framework can open up new research opportunities for further machine learning models that require the dot product of vectors from multiple sources.
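The core primitive is letting a third party obtain dot products, and hence Gram (kernel) matrices, of vectors held by different sources without seeing the raw data. The toy below only conveys that flavour using a shared secret rotation between two data holders; it is not the ESCAPED protocol, whose actual construction and security guarantees are described in the paper.

import numpy as np

rng = np.random.default_rng(0)
d = 8

x = rng.normal(size=d)   # private vector held by source A
y = rng.normal(size=d)   # private vector held by source B

# A and B share a secret random orthogonal matrix Q; the third party never sees it.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

x_masked = Q @ x         # what A sends to the third party
y_masked = Q @ y         # what B sends to the third party

# Because Q is orthogonal, the third party recovers the exact dot product,
# which is all a dot-product (or RBF) kernel needs.
print(np.dot(x_masked, y_masked), np.dot(x, y))   # the two values agree up to rounding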
17

Yang, Kai, Xingpeng Dong, and Jianfeng Zhang. "Polarity-reversal correction for vector-based elastic reverse time migration". GEOPHYSICS 86, no. 1 (1 January 2021): S45–S58. http://dx.doi.org/10.1190/geo2020-0033.1.

Abstract:
Polarity reversal is a well-known problem in elastic reverse time migration, and it is closely related to the imaging conditions. The dot product of source and receiver wavefields is a stable and efficient way to construct scalar imaging conditions for decomposed elastic vector wavefields. However, for PP images, the dot product introduces an angle-dependent factor that will change the polarity of image amplitudes at large opening angles, and it is also contaminated by low-wavenumber artifacts when sharp contrasts exist in the velocity model. Those two problems can be suppressed by muting the reflections with large opening angles at the expense of losing useful information. We have developed an elastic inverse-scattering imaging condition that can retain the initial polarity of the image amplitude and significantly reduce the low-wavenumber noise. For PS images, much attention is paid to the polarity-reversal problem at the normal incidence, and the dot-product-based imaging condition successfully avoids this kind of polarity reversal. There is another polarity-reversal problem arising from the sign change of the PS reflection coefficient at the Brewster angle. However, this sign change is often neglected in the construction of a stacked PS image, which will lead to reversed or distorted phases after stacking. We suggested using the S-wave impedance kernel used in elastic full-waveform inversion but only in the PS mode as an alternative to the dot-product imaging condition to alleviate this kind of polarity-reversal problem. In addition to dot-product-based imaging conditions, we analytically compare divergence- and curl-based imaging conditions and the elastic energy norm-based imaging condition with the presented imaging conditions to identify their advantages and weaknesses. Two numerical examples on a two-layer model and the SEAM 2D model are used to illustrate the effectiveness and advantages of the presented imaging conditions in suppressing low-wavenumber noise and correcting the polarity-reversal problem.
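For reference, the dot-product imaging condition discussed above correlates the decomposed source and receiver vector wavefields sample by sample and stacks over time, I(x) = sum over t of S(x, t) . R(x, t). The toy NumPy sketch below shows only that stacking step on random arrays; it is not an RTM code.

import numpy as np

rng = np.random.default_rng(0)
nt, nz, nx = 100, 40, 60               # time samples and a small 2-D image grid (illustrative)

# Decomposed source and receiver vector wavefields: (time, z, x, component).
S = rng.normal(size=(nt, nz, nx, 2))
R = rng.normal(size=(nt, nz, nx, 2))

# Dot-product imaging condition: at every image point, dot the two vectors and stack over time.
image = np.einsum('tzxc,tzxc->zx', S, R)
print(image.shape)                     # (40, 60)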
18

Yang, Yanqi, and Shuangping Tao. "Singular integrals with variable kernel and fractional differentiation in homogeneous Morrey-Herz-type Hardy spaces with variable exponents". Open Mathematics 16, no. 1 (10 April 2018): 326–45. http://dx.doi.org/10.1515/math-2018-0036.

Abstract:
Let $T$ be the singular integral operator with variable kernel defined by $Tf(x)=\mathrm{p.v.}\int_{\mathbb{R}^{n}}\frac{\Omega(x,x-y)}{|x-y|^{n}}f(y)\,\mathrm{d}y$, and let $D^{\gamma}$ ($0\le\gamma\le 1$) be the fractional differentiation operator. Let $T^{\ast}$ and $T^{\sharp}$ be the adjoint of $T$ and the pseudo-adjoint of $T$, respectively. The aim of this paper is to establish some boundedness for $TD^{\gamma}-D^{\gamma}T$ and $(T^{\ast}-T^{\sharp})D^{\gamma}$ on the homogeneous Morrey-Herz-type Hardy spaces with variable exponents $HM\dot{K}^{\alpha(\cdot),q}_{p(\cdot),\lambda}$ via the convolution operator $T_{m,j}$ and the Calderón-Zygmund operator, and then establish their boundedness on these spaces. The boundedness on $HM\dot{K}^{\alpha(\cdot),q}_{p(\cdot),\lambda}(\mathbb{R}^{n})$ is shown to hold for $TD^{\gamma}-D^{\gamma}T$ and $(T^{\ast}-T^{\sharp})D^{\gamma}$. Moreover, the authors also establish various norm characterizations for the product $T_{1}T_{2}$ and the pseudo-product $T_{1}\circ T_{2}$.
19

Atarashi, Kyohei, Subhransu Maji, and Satoshi Oyama. "Random Feature Maps for the Itemset Kernel". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17 July 2019): 3199–206. http://dx.doi.org/10.1609/aaai.v33i01.33013199.

Abstract:
Although kernel methods efficiently use feature combinations without computing them directly, they do not scale well with the size of the training dataset. Factorization machines (FMs) and related models, on the other hand, enable feature combinations efficiently, but their optimization generally requires solving a non-convex problem. We present random feature maps for the itemset kernel, which uses feature combinations, and includes the ANOVA kernel, the all-subsets kernel, and the standard dot product. Linear models using one of our proposed maps can be used as an alternative to kernel methods and FMs, resulting in better scalability during both training and evaluation. We also present theoretical results for a proposed map, discuss the relationship between factorization machines and linear models using a proposed map for the ANOVA kernel, and relate the proposed feature maps to prior work. Furthermore, we show that the maps can be calculated more efficiently by using a signed circulant matrix projection technique. Finally, we demonstrate the effectiveness of using the proposed maps for real-world datasets.
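For orientation, the degree-2 ANOVA kernel mentioned above is K(x, y) = sum over pairs i < j of x_i x_j y_i y_j, which can be rewritten purely in terms of dot products; the check below verifies that identity for the exact kernel (it does not implement the proposed random feature maps).

import numpy as np

def anova2_direct(x, y):
    # Degree-2 ANOVA kernel: sum over all feature pairs i < j of x_i x_j y_i y_j.
    d = len(x)
    return sum(x[i] * x[j] * y[i] * y[j] for i in range(d) for j in range(i + 1, d))

def anova2_from_dots(x, y):
    # Same kernel via dot products: ((x . y)**2 - sum_i (x_i y_i)**2) / 2.
    z = x * y
    return (z.sum() ** 2 - (z ** 2).sum()) / 2

rng = np.random.default_rng(0)
x, y = rng.normal(size=6), rng.normal(size=6)
print(anova2_direct(x, y), anova2_from_dots(x, y))   # the two values agree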
20

Chan, Jan Y. K., Alex Po Leung, and Yunbo Xie. "Efficient High-Dimensional Kernel k-Means++ with Random Projection". Applied Sciences 11, no. 15 (28 July 2021): 6963. http://dx.doi.org/10.3390/app11156963.

Abstract:
Using random projection, a method to speed up both kernel k-means and centroid initialization with k-means++ is proposed. We approximate the kernel matrix and distances in a lower-dimensional space R^d before the kernel k-means clustering, motivated by upper error bounds. With random projections, previous work on bounds for dot products and an improved bound for kernel methods are considered for kernel k-means. The complexities of both kernel k-means with Lloyd's algorithm and centroid initialization with k-means++ are known to be O(nkD) and Θ(nkD), respectively, with n being the number of data points, D the dimensionality of the input feature vectors and k the number of clusters. The proposed method reduces the computational complexity of the kernel computation of kernel k-means from O(n^2 D) to O(n^2 d) and the subsequent computation for k-means with Lloyd's algorithm and centroid initialization from O(nkD) to O(nkd). Our experiments demonstrate that the speed-up of the clustering method with reduced dimensionality d=200 is 2 to 26 times, with very little performance degradation (less than one percent) in general.
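The saving comes from building the kernel on randomly projected data instead of the original high-dimensional inputs. The sketch below shows that step with a Gaussian random projection followed by an RBF Gram matrix; the sizes and bandwidth are illustrative, and the kernel k-means iterations themselves are unchanged and omitted.

import numpy as np

rng = np.random.default_rng(0)
n, D, d = 1000, 5000, 200                  # points, input dimension, projected dimension (illustrative)
X = rng.normal(size=(n, D))

# Gaussian random projection to d dimensions; dot products and distances are approximately preserved.
P = rng.normal(size=(D, d)) / np.sqrt(d)
Z = X @ P                                  # costs O(nDd) once, instead of O(n^2 D) for a kernel on X

# RBF Gram matrix computed in the projected space, ready to be fed to kernel k-means.
sq_norms = (Z ** 2).sum(axis=1)
sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * Z @ Z.T
K = np.exp(-sq_dists / (2.0 * np.median(sq_dists)))   # median-distance bandwidth heuristic, illustrative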
21

FERREIRA, JOSEPH L., BARBARA R. BAUMSTARK, MOSTAFA K. HAMDY, and STEVEN G. MCCAY. "Polymerase Chain Reaction for Detection of Type A Clostridium botulinum in Foods". Journal of Food Protection 56, no. 1 (1 January 1993): 18–20. http://dx.doi.org/10.4315/0362-028x-56.1.18.

Abstract:
A DNA fragment of the type A Clostridium botulinum neurotoxin gene was demonstrated in canned food products with the polymerase chain reaction (PCR). The fragment, 1340-bp in size, was amplified from green peas, whole kernel corn, green beans, lima beans, black-eyed peas, and turnip greens previously inoculated with type A C. botulinum. The PCR products were identified by agarose gel electrophoresis and also confirmed by dot blot DNA hybridization with a type A specific gene probe. Some inoculated foods were PCR negative but became PCR positive after subculture and overnight incubation in brain heart infusion broth. No PCR amplification products were obtained from uninoculated foods. The procedure was highly sensitive and detected as few as 100 vegetative cells per ml brain heart infusion broth culture.
22

Amoah, Barbara A., and Rizana M. Mahroof. "Ozone as a Potential Fumigant Alternative for the Management of Sitophilus oryzae (Coleoptera: Curculionidae) in Wheat". Journal of Economic Entomology 112, no. 4 (2 April 2019): 1953–63. http://dx.doi.org/10.1093/jee/toz071.

Abstract:
Gaseous ozone, an oxidizing agent used as a disinfectant in food processing and preservation, has potential for the control of stored product insects. In this study, we investigated ozone for the management of the rice weevil, Sitophilus oryzae (L.) (Coleoptera: Curculionidae), a serious stored product insect pest. We exposed eggs, immature stages within wheat kernels, and adults of the rice weevil to 200-ppm ozone for 12, 24, 36, 48, and 60 h. Insects were placed at 5, 15, or 25 cm depth within a wheat mass in PVC pipes (10 cm in diameter, 30 cm in height) and exposed to ozone. Egg eclosion was recorded 10 d after treatment (DAT), and immature stages were observed for adult emergence 28 DAT. Adults were observed for survival immediately after ozone exposure and again at 1 and 2 DAT. Egg eclosion was significantly lower at 5 cm compared with 25 cm at all exposure times except the 12-h exposure. For each exposure time tested, significantly fewer adults developed from kernels, and none of the adults survived, at the 5 cm depth compared with the 15 and 25 cm depths. The survival rate of adults was significantly higher at the 25 cm depth than at the 15 cm depth for the 24–60 h exposures. The deeper the insect in the grain mass, the higher the survival rate. The work reported suggests that ozone is effective in killing all life stages of S. oryzae; however, the efficacy of the gas is dependent on the concentration, exposure time, depth, and gas loss.
23

Miller, Kai J., Klaus-Robert Müller, Gabriela Ojeda Valencia, Harvey Huang, Nicholas M. Gregg, Gregory A. Worrell, and Dora Hermes. "Canonical Response Parameterization: Quantifying the structure of responses to single-pulse intracranial electrical brain stimulation". PLOS Computational Biology 19, no. 5 (25 May 2023): e1011105. http://dx.doi.org/10.1371/journal.pcbi.1011105.

Abstract:
Single-pulse electrical stimulation in the nervous system, often called cortico-cortical evoked potential (CCEP) measurement, is an important technique to understand how brain regions interact with one another. Voltages are measured from implanted electrodes in one brain area while stimulating another with brief current impulses separated by several seconds. Historically, researchers have tried to understand the significance of evoked voltage polyphasic deflections by visual inspection, but no general-purpose tool has emerged to understand their shapes or describe them mathematically. We describe and illustrate a new technique to parameterize brain stimulation data, where voltage response traces are projected into one another using a semi-normalized dot product. The length of timepoints from stimulation included in the dot product is varied to obtain a temporal profile of structural significance, and the peak of the profile uniquely identifies the duration of the response. Using linear kernel PCA, a canonical response shape is obtained over this duration, and then single-trial traces are parameterized as a projection of this canonical shape with a residual term. Such parameterization allows for dissimilar trace shapes from different brain areas to be directly compared by quantifying cross-projection magnitudes, response duration, canonical shape projection amplitudes, signal-to-noise ratios, explained variance, and statistical significance. Artifactual trials are automatically identified by outliers in sub-distributions of cross-projection magnitude, and rejected. This technique, which we call “Canonical Response Parameterization” (CRP), dramatically simplifies the study of CCEP shapes, and may also be applied in a wide range of other settings involving event-triggered data.
24

Shamsolmoali, Pourya, Masoumeh Zareapoor, Eric Granger, and Michael Felsberg. "SeTformer Is What You Need for Vision and Language". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (24 March 2024): 4713–21. http://dx.doi.org/10.1609/aaai.v38i5.28272.

Abstract:
The dot product self-attention (DPSA) is a fundamental component of transformers. However, scaling it to long sequences, like documents or high-resolution images, becomes prohibitively expensive due to the quadratic time and memory complexities arising from the softmax operation. Kernel methods are employed to simplify computations by approximating softmax but often lead to performance drops compared to softmax attention. We propose SeTformer, a novel transformer where DPSA is purely replaced by Self-optimal Transport (SeT) for achieving better performance and computational efficiency. SeT is based on two essential softmax properties: maintaining a non-negative attention matrix and using a nonlinear reweighting mechanism to emphasize important tokens in input sequences. By introducing a kernel cost function for optimal transport, SeTformer effectively satisfies these properties. In particular, with small and base-sized models, SeTformer achieves impressive top-1 accuracies of 84.7% and 86.2% on ImageNet-1K. In object detection, SeTformer-base outperforms the FocalNet counterpart by +2.2 mAP, using 38% fewer parameters and 29% fewer FLOPs. In semantic segmentation, our base-size model surpasses NAT by +3.5 mIoU with 33% fewer parameters. SeTformer also achieves state-of-the-art results in language modeling on the GLUE benchmark. These findings highlight SeTformer's applicability for vision and language tasks.
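For context, the DPSA block that SeTformer replaces is standard scaled dot-product attention, softmax(Q K^T / sqrt(d)) V, whose cost is quadratic in the sequence length. A minimal single-head NumPy version is given below; it is the generic transformer operation, not the SeT operator proposed in the paper.

import numpy as np

def dot_product_self_attention(X, Wq, Wk, Wv):
    # Standard scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # (n, n): quadratic in sequence length n
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
n, d = 16, 8
X = rng.normal(size=(n, d))
out = dot_product_self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)   # (16, 8)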
25

Fabec, R., G. Ólafsson, and A. N. Sengupta. "Holomorphic Fock spaces for positive linear transformations". MATHEMATICA SCANDINAVICA 98, no. 2 (1 June 2006): 262. http://dx.doi.org/10.7146/math.scand.a-14995.

Abstract:
Suppose $A$ is a positive real linear transformation on a finite dimensional complex inner product space $V$. The reproducing kernel for the Fock space of square integrable holomorphic functions on $V$ relative to the Gaussian measure $d\mu_A(z)=\frac {\sqrt{\det A}} {\pi^n}e^{-\Re\langle Az,z\rangle}\,dz$ is described in terms of the linear and antilinear decomposition of the linear operator $A$. Moreover, if $A$ commutes with a conjugation on $V$, then a restriction mapping to the real vectors in $V$ is polarized to obtain a Segal-Bargmann transform, which we also study in the Gaussian-measure setting.
26

Blower, Gordon, and Yang Chen. "On Determinant Expansions for Hankel Operators". Concrete Operators 7, no. 1 (4 February 2020): 13–44. http://dx.doi.org/10.1515/conop-2020-0002.

Abstract:
Let $w$ be a semiclassical weight that is generic in Magnus's sense, and $(p_n)_{n=0}^{\infty}$ the corresponding sequence of orthogonal polynomials. We express the Christoffel–Darboux kernel as a sum of products of Hankel integral operators. For $\psi\in L^{\infty}(i\mathbb{R})$, let $W(\psi)$ be the Wiener–Hopf operator with symbol $\psi$. We give sufficient conditions on $\psi$ such that $1/\det W(\psi)W(\psi^{-1})=\det(I-\Gamma_{\phi_{1}}\Gamma_{\phi_{2}})$, where $\Gamma_{\phi_{1}}$ and $\Gamma_{\phi_{2}}$ are Hankel operators that are Hilbert–Schmidt. For certain $\psi$, Barnes's integral leads to an expansion of this determinant in terms of the generalised hypergeometric ${}_{2m}F_{2m-1}$. These results extend those of Basor and Chen [2], who obtained ${}_{4}F_{3}$ likewise. We include examples where the Wiener–Hopf factors are found explicitly.
27

Suryadi, Usep Tatang, and Nindi Azis Andriyani. "Klasifikasi Status Calon Pendonor Darah Menggunakan Algoritma Support Vector Machine dan Kernel RBF". Jurnal ICT : Information Communication & Technology 19, no. 1 (9 September 2020): 27–33. http://dx.doi.org/10.36054/jict-ikmi.v19i1.113.

Abstract:
To donate blood, a person must generally meet several requirements, including being physically and mentally healthy, aged 17-65 years, having a minimum body weight of 45 kg, Hb levels of 12.5g%-17.0g%, upper blood pressure (systolic) of 100-170 mmHg, lower blood pressure (diastolic) of 70-100 mmHg, a body temperature of 36.6-37.5 degrees Celsius, no history of hemophilia, a pulse of 50-100 beats/minute, and at least 3 months since the previous blood donation. The problem that arises is that the small number of staff often have difficulty recording donor data on the form sheets, which allows unwanted errors when registering the identity or the initial examination results of prospective donors. Based on these problems, classification of prospective donors is needed as a step to determine their status, that is, whether a prospective donor is accepted or rejected based on the predetermined requirements. This research uses primary data obtained from the Indonesian Red Cross of Subang Regency, consisting of 50 accepted and 50 rejected records. A further analysis of the method's capability is then carried out based on age, Hb level, body weight, and systolic and diastolic pressure, using the SVM (Support Vector Machine) algorithm, with the Sequential method for the SVM training phase and the RBF kernel for calculating the dot product value. The algorithm's accuracy reached 90% (excellent category) with gamma = 0.5, C = 1, and epsilon = 0.001.
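For reference, the RBF kernel used here can be evaluated entirely from dot products, since ||x - y||^2 = x.x - 2 x.y + y.y. The sketch below shows that computation with the gamma value reported above; the two feature vectors are invented for illustration and are not the donor data.

import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # RBF kernel K(x, y) = exp(-gamma * ||x - y||^2), built from dot products only.
    xx = (X ** 2).sum(axis=1)[:, None]
    yy = (Y ** 2).sum(axis=1)[None, :]
    sq_dists = xx - 2.0 * X @ Y.T + yy
    return np.exp(-gamma * sq_dists)

# Illustrative donor features: [age, Hb level, body weight, systolic, diastolic]
X = np.array([[25.0, 13.5, 60.0, 120.0, 80.0],
              [40.0, 12.8, 55.0, 150.0, 95.0]])
print(rbf_kernel(X, X, gamma=0.5))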
28

Chin, Yu-Hao, Chang-Hong Lin, Ernestasia Siahaan, and Jia-Ching Wang. "Music Emotion Detection Using Hierarchical Sparse Kernel Machines". Scientific World Journal 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/270378.

Abstract:
For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify if a music clip possesses happiness emotion or not. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features are extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-level decision vector, which is a vector with each element being a significant value of an emotion. The significant values of eight main emotional classes are utilized in this paper. To calculate the significant value of an emotion, we construct its 2-class SVM with calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we merely construct a 2-class relevance vector machine (RVM) with happiness as the target side and other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is built on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system has a good performance on verifying if a music clip reveals happiness emotion.
29

Rasoulinezhad, Seyedramin, Esther Roorda, Steve Wilton, Philip H. W. Leong, and David Boland. "Rethinking Embedded Blocks for Machine Learning Applications". ACM Transactions on Reconfigurable Technology and Systems 15, no. 1 (31 March 2022): 1–30. http://dx.doi.org/10.1145/3491234.

Abstract:
The underlying goal of FPGA architecture research is to devise flexible substrates that implement a wide variety of circuits efficiently. Contemporary FPGA architectures have been optimized to support networking, signal processing, and image processing applications through high-precision digital signal processing (DSP) blocks. The recent emergence of machine learning has created a new set of demands characterized by: (1) higher computational density and (2) low precision arithmetic requirements. With the goal of exploring this new design space in a methodical manner, we first propose a problem formulation involving computing nested loops over multiply-accumulate (MAC) operations, which covers many basic linear algebra primitives and standard deep neural network (DNN) kernels. A quantitative methodology for deriving efficient coarse-grained compute block architectures from benchmarks is then proposed together with a family of new embedded blocks, called MLBlocks. An MLBlock instance includes several multiply-accumulate units connected via a flexible routing, where each configuration performs a few parallel dot-products in a systolic array fashion. This architecture is parameterized with support for different data movements, reuse, and precisions, utilizing a columnar arrangement that is compatible with existing FPGA architectures. On synthetic benchmarks, we demonstrate that for 8-bit arithmetic, MLBlocks offer 6× improved performance over the commercial Xilinx DSP48E2 architecture with smaller area and delay; and for time-multiplexed 16-bit arithmetic, achieves 2× higher performance per area with the same area and frequency. All source codes and data, along with documents to reproduce all the results in this article, are available at http://github.com/raminrasoulinezhad/MLBlocks .
30

Makbal, Rachida, Myra O. Villareal, Chemseddoha Gadhi, Abdellatif Hafidi, and Hiroko Isoda. "Argania Spinosa Fruit Shell Extract-Induced Melanogenesis via cAMP Signaling Pathway Activation". International Journal of Molecular Sciences 21, no. 7 (6 April 2020): 2539. http://dx.doi.org/10.3390/ijms21072539.

Abstract:
We have previously reported that argan oil and argan press-cake from the kernels of Argania spinosa have an anti-melanogenesis effect. Here, the effect of argan fruit shell ethanol extract (AFSEE) on melanogenesis in B16F10 cells was determined, and the mechanism underlying its effect was elucidated. The proliferation of AFSEE-treated B16F10 cells was evaluated using the 3-(4,5-dimethylthiazolyl-2)-2,5-diphenyltetrazolium bromide (MTT) assay, while the melanin content was quantified using a spectrophotometric method. The expression of melanogenesis-related proteins was determined by Western blot and real-time PCR, while global gene expression was determined using a DNA microarray. In vitro analysis results showed that the melanin content of B16F10 cells was significantly increased by AFSEE, without cytotoxicity, by increasing the melanogenic enzyme tyrosinase (TRY), tyrosinase related-protein 1 (TRP1), and dopachrome tautomerase (DCT) protein and mRNA expression, as well as upregulating microphthalmia-associated transcription factor (MITF) expression through mitogen-activated protein kinases (MAPKs) extracellular signal-regulated kinase (ERK) and p38, and the cyclic adenosine monophosphate (cAMP) signaling pathway, as indicated by the microarray analysis results. AFSEE’s melanogenesis promotion effect is primarily attributed to its polyphenolic components. In conclusion, AFSEE promotes melanogenesis in B16F10 cells by upregulating the expression of the melanogenic enzymes through the cAMP–MITF signaling pathway. AFSEE may be used as a cosmetics product component to promote melanogenesis, or as a therapeutic against hypopigmentation disorders.
31

Gabzdyl, Martin. "Comparison of the tree species select classification methods from aerial photo". Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 56, no. 5 (2008): 279–92. http://dx.doi.org/10.11118/actaun200856050279.

Abstract:
This article describes a comparison of several programs for automatic supervised classification used to identify forest tree species composition from aerial photographs. The programs compared were the American software Erdas Imagine 8.4 and the Czech products LuciaG 4.0 and TopoL DMT 6.014. The study covers a small production forest area containing the four most frequently occurring tree species (spruce, larch, oak and ash) in the research area of the forest region around Bystřice pod Hostýnem, the Czech Republic. Because of the lower quality of the spectrozonal photographs, it was necessary to apply some corrections, such as enhancement filtering techniques, namely the Kernel Processor Low-Frequency and High-Frequency filters, which belong to spatial operations. The photographs modified in this way were used to construct individual training sets, which were subsequently used in the supervised classification methods of each compared software package. The classification itself took place at the level of a particular tree species. Classification accuracy was determined by comparing the results with reference data from the field survey. The outcome is that the best classification for oak and ash was obtained with the TopoL program, classification by centre of gravity, and a combination of solation + insolation signatures of the treetop parts with an aggressive shade. In contrast, the best classification for spruce and larch was obtained with Erdas Imagine, Mahalanobis interval classification rules, and a combination of solation signatures of the treetop parts, along the tree edges, with an aggressive shade.
32

Fang, Yin-Ying, Chi-Fang Chen, and Sheng-Ju Wu. "Feature identification using acoustic signature of Ocean Researcher III (ORIII) of Taiwan". ANZIAM Journal 59 (25 July 2019): C318–C357. http://dx.doi.org/10.21914/anziamj.v59i0.12655.

Abstract:
Underwater acoustic signature identification has been employed as a technique for detecting underwater vehicles, such as in anti-submarine warfare or harbour security systems. The underwater sound channel, however, has interference due to spatial variations in topography or sea state conditions and temporal variations in water column properties, which cause multipath and scattering in acoustic propagation. Thus, acoustic data quality control can be very challenging. One of the challenges for an identification system is how to recognise the same target signature from measurements under different temporal and spatial settings. This paper deals with the above challenges by establishing an identification system composed of feature extraction, classification algorithms, and feature selection, with two approaches to recognise the target signature of underwater radiated noise from a research vessel, Ocean Researcher III, with a bottom-mounted hydrophone in five cruises in 2016 and 2017. The fundamental frequency and its power spectral density are known as significant features for classification. In feature extraction, we extract the features before deciding which of the two aforementioned features is more significant. The first approach utilises Polynomial Regression (PR) classifiers and feature selection by the Taguchi method and analysis of variance under different combinations of factors and levels. The second approach utilises a Radial Basis Function Neural Network (RBFNN), selecting the optimised parameters of the classifier via a genetic algorithm. The real-time classifier of the PR model is robust and superior to the RBFNN model in this paper. This suggests that the Automatic Identification System for Vehicles using Acoustic Signature developed here can be carried out by utilising harmonic frequency features extracted from unmasking the frequency bandwidth for ship noises, and proves that feature extraction is appropriate for our targets.
33

Sutherland, Danica, Junier Oliva, Barnabás Póczos, and Jeff Schneider. "Linear-Time Learning on Distributions with Approximate Kernel Embeddings". Proceedings of the AAAI Conference on Artificial Intelligence 30, no. 1 (2 March 2016). http://dx.doi.org/10.1609/aaai.v30i1.10308.

Abstract:
Many interesting machine learning problems are best posed by considering instances that are distributions, or sample sets drawn from distributions. Most previous work devoted to machine learning tasks with distributional inputs has done so through pairwise kernel evaluations between pdfs (or sample sets). While such an approach is fine for smaller datasets, the computation of an N × N Gram matrix is prohibitive in large datasets. Recent scalable estimators that work over pdfs have done so only with kernels that use Euclidean metrics, like the L2 distance. However, there are a myriad of other useful metrics available, such as total variation, Hellinger distance, and the Jensen-Shannon divergence. This work develops the first random features for pdfs whose dot product approximates kernels using these non-Euclidean metrics. These random features allow estimators to scale to large datasets by working in a primal space, without computing large Gram matrices. We provide an analysis of the approximation error in using our proposed random features, and show empirically the quality of our approximation both in estimating a Gram matrix and in solving learning tasks in real-world and synthetic data.
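The idea of random features whose dot product approximates a kernel is easiest to see in the classic Euclidean case; the sketch below shows random Fourier features for the Gaussian kernel as orientation, whereas the paper constructs analogous features for non-Euclidean metrics such as total variation and the Jensen-Shannon divergence.

import numpy as np

def random_fourier_features(X, n_features=500, sigma=1.0, seed=0):
    # Map X so that z(x) . z(y) approximates exp(-||x - y||^2 / (2 sigma^2)).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 5))
Z = random_fourier_features(X)
exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
print(np.abs(Z @ Z.T - exact).max())   # small approximation error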
34

BUI, HUY-QUI, THE ANH BUI, and XUAN THINH DUONG. "WEIGHTED BESOV AND TRIEBEL–LIZORKIN SPACES ASSOCIATED WITH OPERATORS AND APPLICATIONS". Forum of Mathematics, Sigma 8 (2020). http://dx.doi.org/10.1017/fms.2020.6.

Abstract:
Let $X$ be a space of homogeneous type and $L$ be a nonnegative self-adjoint operator on $L^{2}(X)$ satisfying Gaussian upper bounds on its heat kernels. In this paper, we develop the theory of weighted Besov spaces $\dot{B}_{p,q,w}^{\alpha,L}(X)$ and weighted Triebel–Lizorkin spaces $\dot{F}_{p,q,w}^{\alpha,L}(X)$ associated with the operator $L$ for the full range $0<p,q\leqslant\infty$, $\alpha\in\mathbb{R}$ and $w$ being in the Muckenhoupt weight class $A_{\infty}$. Under rather weak assumptions on $L$ as stated above, we prove that our new spaces satisfy important features such as continuous characterizations in terms of square functions, atomic decompositions and the identifications with some well-known function spaces such as Hardy-type spaces and Sobolev-type spaces. One of the highlights of our result is the characterization of these spaces via noncompactly supported functional calculus. An important by-product of this characterization is the characterization via the heat kernel for the full range of indices. Moreover, with extra assumptions on the operator $L$, we prove that the new function spaces associated with $L$ coincide with the classical function spaces. Finally we apply our results to prove the boundedness of the fractional power of $L$, the spectral multiplier of $L$ in our new function spaces and the dispersive estimates of wave equations.
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Hwang, Soojin, Daehyeon Baek, Jongse Park e Jaehyuk Huh. "Cerberus: Triple Mode Acceleration of Sparse Matrix and Vector Multiplication". ACM Transactions on Architecture and Code Optimization, 17 de março de 2024. http://dx.doi.org/10.1145/3653020.

Texto completo da fonte
Resumo:
The multiplication of sparse matrix and vector (SpMV) is one of the most widely used kernels in high-performance computing as well as machine learning acceleration for sparse neural networks. The design space of SpMV accelerators has two axes: algorithm and matrix representation. There have been two widely used algorithms and data representations. Two algorithms, scalar multiplication and dot product, can be combined with two sparse data representations, compressed sparse and bitmap formats for the matrix and vector. Although the prior accelerators adopted one of the possible designs, it is yet to be investigated which design is the best one across different hardware resources and workload characteristics. This paper first investigates the impact of design choices with respect to the algorithm and data representation. Our evaluation shows that no single design always outperforms the others across different workloads, but the two best designs (i.e. compressed sparse format and bitmap format with dot product) have complementary performance with trade-offs incurred by the matrix characteristics. Based on the analysis, this study proposes Cerberus, a triple-mode accelerator supporting two sparse operation modes in addition to the base dense mode. To allow such multi-mode operation, it proposes a prediction model based on matrix characteristics under a given hardware configuration, which statically selects the best mode for a given sparse matrix with its dimension and density information. Our experimental results show that Cerberus provides 12.1 × performance improvements from a dense-only accelerator, and 1.5 × improvements from a fixed best SpMV design.
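
To make the two algorithmic choices concrete, here is a minimal software sketch (not the Cerberus hardware design; formats and names are illustrative) of the dot-product algorithm over a CSR matrix versus the scalar-multiplication algorithm over a CSC matrix:

    import numpy as np

    def spmv_dot_product_csr(indptr, indices, data, x):
        # Dot-product (row-wise) algorithm: each output element is an inner
        # product between one sparse row and the dense vector x.
        y = np.zeros(len(indptr) - 1)
        for i in range(len(indptr) - 1):
            start, end = indptr[i], indptr[i + 1]
            y[i] = np.dot(data[start:end], x[indices[start:end]])
        return y

    def spmv_scalar_csc(indptr, indices, data, x, n_rows):
        # Scalar-multiplication (column-wise) algorithm: each nonzero a_ij is
        # scaled by x_j and accumulated into y_i.
        y = np.zeros(n_rows)
        for j in range(len(indptr) - 1):
            for k in range(indptr[j], indptr[j + 1]):
                y[indices[k]] += data[k] * x[j]
        return y

    # CSR of [[1, 0, 2], [0, 3, 0]] applied to the all-ones vector.
    indptr, indices, data = np.array([0, 2, 3]), np.array([0, 2, 1]), np.array([1.0, 2.0, 3.0])
    print(spmv_dot_product_csr(indptr, indices, data, np.ones(3)))   # [3. 3.]

The dot-product form streams one row at a time and accumulates into a single output element, while the scalar form scatters partial products into the output vector; which is cheaper depends on the matrix dimensions and density, which is exactly the trade-off the mode predictor exploits.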
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

"Linear Kernel with Weighted Least Square Regression Co-efficient for SVM Based Tamil Writer Identification". International Journal of Recent Technology and Engineering 8, n.º 2 (30 de julho de 2019): 586–91. http://dx.doi.org/10.35940/ijrte.b1629.078219.

Texto completo da fonte
Resumo:
Tamil writer identification is the task of identifying a writer from their Tamil handwriting. Our earlier work in this research, based on SVM implementations with linear, polynomial and RBF kernels, showed that the linear kernel attains much lower accuracy than the other two kernels, although it is faster and has much lower computational complexity. Hence, a modified linear kernel is proposed to improve the performance of the linear kernel in recognizing Tamil writers. A weighted least squares parameter estimation method is used to estimate the weights for the dot products of the linear kernel. SVM with the modified linear kernel is evaluated on handwriting images at the character, word and paragraph levels. Compared with the plain linear kernel, the modified kernel with weighted least squares parameters yields promising results.
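
A minimal sketch of a weighted dot-product kernel plugged into an SVM, assuming scikit-learn; the per-feature weights below are placeholders (in the paper they come from weighted least squares regression), so this only illustrates the kernel form k(a, b) = sum_i w_i a_i b_i:

    import numpy as np
    from sklearn.svm import SVC

    def make_weighted_linear_kernel(w):
        W = np.diag(w)
        def kernel(A, B):
            # k(a, b) = sum_i w_i * a_i * b_i, i.e. a weighted dot product.
            return A @ W @ B.T
        return kernel

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 20)), rng.integers(0, 2, size=100)
    w = 1.0 / (X.var(axis=0) + 1e-8)          # placeholder per-feature weights
    clf = SVC(kernel=make_weighted_linear_kernel(w)).fit(X, y)
    print(clf.score(X, y))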
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Briscik, Mitja, Marie-Agnès Dillies e Sébastien Déjean. "Improvement of variables interpretability in kernel PCA". BMC Bioinformatics 24, n.º 1 (12 de julho de 2023). http://dx.doi.org/10.1186/s12859-023-05404-y.

Texto completo da fonte
Resumo:
Abstract Background Kernel methods have been proven to be a powerful tool for the integration and analysis of high-throughput technologies generated data. Kernels offer a nonlinear version of any linear algorithm solely based on dot products. The kernelized version of principal component analysis is a valid nonlinear alternative to tackle the nonlinearity of biological sample spaces. This paper proposes a novel methodology to obtain a data-driven feature importance based on the kernel PCA representation of the data. Results The proposed method, kernel PCA Interpretable Gradient (KPCA-IG), provides a data-driven feature importance that is computationally fast and based solely on linear algebra calculations. It has been compared with existing methods on three benchmark datasets. The accuracy obtained using KPCA-IG selected features is equal to or greater than the other methods’ average. Also, the computational complexity required demonstrates the high efficiency of the method. An exhaustive literature search has been conducted on the selected genes from a publicly available Hepatocellular carcinoma dataset to validate the retained features from a biological point of view. The results once again remark on the appropriateness of the computed ranking. Conclusions The black-box nature of kernel PCA needs new methods to interpret the original features. Our proposed methodology KPCA-IG proved to be a valid alternative to select influential variables in high-dimensional high-throughput datasets, potentially unravelling new biological and medical biomarkers.
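
The exact KPCA-IG construction is not reproduced here; as a hedged sketch of the underlying idea, the snippet below ranks input variables by the mean absolute gradient of an RBF kernel PCA projection with respect to each feature (all names, the gamma value and the ranking rule are illustrative):

    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    gamma = 0.1

    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=gamma).fit(X)
    # Dual coefficients of the projection (older scikit-learn calls these alphas_/lambdas_).
    alphas = kpca.eigenvectors_ / np.sqrt(kpca.eigenvalues_)

    def projection_gradient(x, X_train, alphas, gamma):
        # Gradient of f_k(x) = sum_i alpha_ik * exp(-gamma * ||x - x_i||^2) w.r.t. x.
        diff = X_train - x                                  # (n, d)
        k = np.exp(-gamma * np.sum(diff**2, axis=1))        # (n,)
        return 2 * gamma * (alphas * k[:, None]).T @ diff   # (n_components, d)

    # Feature importance: mean absolute gradient of the KPCA map over the data.
    grads = np.array([projection_gradient(x, X, alphas, gamma) for x in X])
    importance = np.abs(grads).mean(axis=(0, 1))
    print(np.argsort(importance)[::-1])                      # features ranked by influence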
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Kathuria, Kunal, Aakrosh Ratan, Michael McConnell e Stefan Bekiranov. "Implementation of a Hamming distance–like genomic quantum classifier using inner products on ibmqx2 and ibmq_16_melbourne". Quantum Machine Intelligence 2, n.º 1 (junho de 2020). http://dx.doi.org/10.1007/s42484-020-00017-7.

Texto completo da fonte
Resumo:
Abstract Motivated by the problem of classifying individuals with a disease versus controls using a functional genomic attribute as input, we present relatively efficient general purpose inner product–based kernel classifiers to classify the test as a normal or disease sample. We encode each training sample as a string of 1s (presence) and 0s (absence) representing the attribute’s existence across ordered physical blocks of the subdivided genome. Having binary-valued features allows for highly efficient data encoding in the computational basis for classifiers relying on binary operations. Given that a natural distance between binary strings is Hamming distance, which shares properties with bit-string inner products, our two classifiers apply different inner product measures for classification. The active inner product (AIP) is a direct dot product–based classifier whereas the symmetric inner product (SIP) classifies upon scoring correspondingly matching genomic attributes. SIP is a strongly Hamming distance–based classifier generally applicable to binary attribute-matching problems whereas AIP has general applications as a simple dot product–based classifier. The classifiers implement an inner product between N = 2^n dimension test and train vectors using n Fredkin gates while the training sets are respectively entangled with the class-label qubit, without use of an ancilla. Moreover, each training class can be composed of an arbitrary number m of samples that can be classically summed into one input string to effectively execute all test–train inner products simultaneously. Thus, our circuits require the same number of qubits for any number of training samples and are $O(\log N)$ in gate complexity after the states are prepared. Our classifiers were implemented on ibmqx2 (IBM-Q-team 2019b) and ibmq_16_melbourne (IBM-Q-team 2019a). The latter allowed encoding of 64 training features across the genome.
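
A purely classical illustration of the two scores (the quantum circuit with Fredkin gates is not sketched here): AIP as an ordinary dot product between the binary test string and a classically summed class vector, and SIP as the number of matching positions, i.e. the string length minus the Hamming distance. Data and names below are synthetic.

    import numpy as np

    def aip_score(test, train_sum):
        # Active inner product: dot product of the binary test string with the
        # (classically summed) training vector of a class.
        return int(test @ train_sum)

    def sip_score(test, train, n_bits):
        # Symmetric inner product: credit matches of both 1s and 0s,
        # i.e. n_bits minus the Hamming distance (for a single training string).
        return int(n_bits - np.sum(test != train))

    rng = np.random.default_rng(0)
    n_bits = 64
    test = rng.integers(0, 2, n_bits)
    class_a = rng.integers(0, 2, (5, n_bits))    # 5 training samples per class
    class_b = rng.integers(0, 2, (5, n_bits))

    score_a = aip_score(test, class_a.sum(axis=0))
    score_b = aip_score(test, class_b.sum(axis=0))
    print("predicted class:", "A" if score_a >= score_b else "B")
    print("SIP against first class-A sample:", sip_score(test, class_a[0], n_bits))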
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Zhao, Y., F. Qian, F. Jain e L. Wang. "A Multi-Bit Non-Volatile Compute-in-Memory Architecture with Quantum-Dot Transistor Based Unit". International Journal of High Speed Electronics and Systems 31, n.º 01n04 (março de 2022). http://dx.doi.org/10.1142/s0129156422400183.

Texto completo da fonte
Resumo:
Recent advances in artificial intelligence (AI) have shown remarkable success in numerous tasks, such as cloud computing, deep learning and neural networks. Most of those applications rely on fast computation and large storage, which brings various challenges to the hardware platform. Hardware performance is the bottleneck to break through, and there has therefore been much interest in exploring new computing architectures in recent years. Compute-in-memory (CIM) has drawn the attention of researchers and is considered one of the most promising candidates to solve the above challenges. CIM is an emerging technique to fulfill the fast-growing demand for high-performance data processing. It offers fast processing, low power and high performance by blurring the boundary between processing cores and memory units. One key aspect of CIM is performing matrix-vector multiplication (MVM), or the dot product operation, by intertwining processing and memory elements. As the primary computational kernel in neural networks, the dot product operation is the target of these performance improvements. In this paper, we present the design, implementation and analysis of quantum-dot transistor (QDT) based CIM, from the multi-bit multiplier to the dot product unit, and then the in-memory computing array.
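
A small software sketch of the bit-sliced dot-product idea behind multi-bit compute-in-memory, assuming weights stored as binary planes whose column sums are combined by shift-and-add; this says nothing about the quantum-dot transistor circuit itself and all parameters are illustrative:

    import numpy as np

    def cim_dot_product(weights, inputs, n_bits=4):
        # Store each weight as n_bits binary planes (as a multi-bit memory cell would),
        # accumulate one partial sum per bit plane, then combine with shift-and-add.
        w = np.asarray(weights, dtype=np.int64)
        total = np.zeros_like(np.dot(w, inputs))
        for b in range(n_bits):
            plane = (w >> b) & 1                 # one bit plane of the stored weights
            total += (plane @ inputs) << b       # column sum per plane, digitally shifted
        return total

    w = np.array([[3, 5, 1], [7, 0, 2]])
    x = np.array([1, 2, 3])
    print(cim_dot_product(w, x), "vs exact", w @ x)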
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Ramírez, Cristian, Adrián Castelló e Enrique S. Quintana-Ortí. "A BLIS-like matrix multiplication for machine learning in the RISC-V ISA-based GAP8 processor". Journal of Supercomputing, 28 de maio de 2022. http://dx.doi.org/10.1007/s11227-022-04581-6.

Texto completo da fonte
Resumo:
AbstractWe address the efficient realization of matrix multiplication (gemm), with application in the convolution operator for machine learning, for the RISC-V core present in the GreenWaves GAP8 processor. Our approach leverages BLIS (Basic Linear Algebra Instantiation Software) to develop an implementation that (1) re-organizes the gemm algorithm adapting its micro-kernel to exploit the hardware-supported dot product kernel in the GAP8; (2) explicitly orchestrates the data transfers across the hierarchy of scratchpad memories via DMA (direct memory access); and (3) operates with integer arithmetic.
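
A rough sketch, in Python for brevity, of how a BLIS-style micro-kernel can be re-organized around a fixed-length integer dot-product primitive (standing in for the hardware-supported dot product mentioned above); block sizes, the primitive's width and all names are illustrative:

    import numpy as np

    def dot4(a, b):
        # Stand-in for a hardware-supported fixed-length integer dot product.
        return int(np.dot(a.astype(np.int32), b.astype(np.int32)))

    def micro_kernel(A_panel, B_panel, C_block):
        # C_block (MR x NR) += A_panel (MR x K) * B_panel (K x NR),
        # with the K loop expressed as length-4 dot products.
        MR, K = A_panel.shape
        NR = B_panel.shape[1]
        for i in range(MR):
            for j in range(NR):
                acc = 0
                for k in range(0, K, 4):
                    acc += dot4(A_panel[i, k:k + 4], B_panel[k:k + 4, j])
                C_block[i, j] += acc
        return C_block

    A = np.random.randint(-8, 8, (4, 16), dtype=np.int8)
    B = np.random.randint(-8, 8, (16, 4), dtype=np.int8)
    C = np.zeros((4, 4), dtype=np.int32)
    print(np.array_equal(micro_kernel(A, B, C), A.astype(np.int32) @ B.astype(np.int32)))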
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Gan, Yao, Yanyun Fu, Deyong Wang e Yongming Li. "A novel approach to attention mechanism using kernel functions: Kerformer". Frontiers in Neurorobotics 17 (24 de agosto de 2023). http://dx.doi.org/10.3389/fnbot.2023.1214203.

Texto completo da fonte
Resumo:
Artificial Intelligence (AI) is driving advancements across various fields by simulating and enhancing human intelligence. In Natural Language Processing (NLP), transformer models like the Kerformer, a linear transformer based on a kernel approach, have garnered success. However, traditional attention mechanisms in these models have quadratic calculation costs linked to input sequence lengths, hampering efficiency in tasks with extended orders. To tackle this, Kerformer introduces a nonlinear reweighting mechanism, transforming maximum attention into feature-based dot product attention. By exploiting the non-negativity and non-linear weighting traits of softmax computation, separate non-negativity operations for Query (Q) and Key (K) computations are performed. The inclusion of the SE Block further enhances model performance. Kerformer significantly reduces attention matrix time complexity from O(N^2) to O(N), with N representing sequence length. This transformation results in remarkable efficiency and scalability gains, especially for prolonged tasks. Experimental results demonstrate Kerformer's superiority in terms of time and memory consumption, yielding higher average accuracy (83.39%) in NLP and vision tasks. In tasks with long sequences, Kerformer achieves an average accuracy of 58.94% and exhibits superior efficiency and convergence speed in visual tasks. This model thus offers a promising solution to the limitations posed by conventional attention mechanisms in handling lengthy tasks.
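
As a generic illustration of the linear-attention family Kerformer belongs to (not Kerformer itself), the sketch below replaces softmax(QK^T)V with phi(Q)(phi(K)^T V) for a non-negative feature map phi, so the N x N attention matrix is never formed; the feature map used here (elu + 1) is a common illustrative choice, not the paper's:

    import numpy as np

    def phi(x):
        # Non-negative feature map (illustrative); Kerformer applies its own
        # non-negativity operations to Q and K.
        return np.where(x > 0, x + 1.0, np.exp(x))   # elu(x) + 1

    def linear_attention(Q, K, V):
        Qp, Kp = phi(Q), phi(K)                      # (N, d)
        KV = Kp.T @ V                                # (d, d_v), cost O(N d d_v)
        Z = Qp @ Kp.sum(axis=0)                      # (N,), normalizer
        return (Qp @ KV) / Z[:, None]                # linear in N, no N x N matrix

    N, d, dv = 1024, 64, 64
    rng = np.random.default_rng(0)
    Q, K, V = rng.normal(size=(N, d)), rng.normal(size=(N, d)), rng.normal(size=(N, dv))
    print(linear_attention(Q, K, V).shape)           # (1024, 64)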
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Li, Futao, Zhongbin Wang, Lei Si, Dong Wei, Chao Tan e Honglin Wu. "A novel recognition method of shearer cutting status based on SDP image and MCK-DCNN". Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 29 de junho de 2023. http://dx.doi.org/10.1177/09544062231185486.

Texto completo da fonte
Resumo:
To address the strong interference in the shearer rocker arm vibration signal and the difficulty of feature selection, and to accurately recognize the cutting status of the shearer, a novel pattern identification method based on the Symmetrized Dot Pattern (SDP), Local Mean Decomposition (LMD) and a Multi-Scale Convolution Kernel Deep Convolutional Neural Network (MCK-DCNN) is presented. First, the vibration signal of the shearer rocker arm is decomposed by LMD into multiple product functions (PFs). The first three PFs are transformed into SDP images with different features by the SDP method, and these images are input into the MCK-DCNN model to automatically extract features and identify the shearer cutting status. The method achieves a classification rate of 97.9%, which is superior to 1D_CNN and LeNet. The comparison indicates that the method can provide technical support for improving the automatic coal cutting performance of the shearer.
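
For reference, a minimal sketch of the Symmetrized Dot Pattern transform itself (mapping a normalized 1-D signal to mirrored polar dots); the lag, angular gain and number of mirror arms are typical illustrative values, not necessarily those used in the paper:

    import numpy as np

    def sdp_points(x, lag=1, gain=40.0, n_mirrors=6):
        # Symmetrized Dot Pattern: map a 1-D signal to polar dots (r, theta).
        x = np.asarray(x, dtype=float)
        r = (x - x.min()) / (x.max() - x.min() + 1e-12)
        a = gain * np.roll(r, -lag)            # angular deflection from the lagged sample
        pts = []
        for m in range(n_mirrors):
            phi = 360.0 * m / n_mirrors
            pts.append(np.stack([r, np.deg2rad(phi + a)], axis=1))   # one arm
            pts.append(np.stack([r, np.deg2rad(phi - a)], axis=1))   # mirrored arm
        return np.concatenate(pts)              # (2 * n_mirrors * len(x), 2) polar points

    signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * np.random.randn(2000)
    print(sdp_points(signal).shape)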
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Wolansky, Ivan. "A DEEP DIVE INTO THE BASICS OF DEEP LEARNING". Proceedings of the Shevchenko Scientific Society. Medical Sciences 65, n.º 2 (29 de dezembro de 2021). http://dx.doi.org/10.25040/ntsh2021.02.23.

Texto completo da fonte
Resumo:
Deep learning is a type of machine learning (ML) that is growing in importance in the medical field. It can often perform better than traditional ML models on different metrics, and it can handle non-linear problems due to activation functions. Activation functions are different non-linear functions that are used to restrict the values propagated to an interval. In deep learning, information propagates forward, passing through different layers of weights and activation functions, before reaching the final layer. Then a cost function is evaluated and propagated back through the network to adjust weights. A convolutional neural network (CNN) is a form of deep learning that is used primarily in imaging. CNNs perform particularly well with grid-like inputs because they learn shapes well. CNNs compute dot products between layers and kernels in a convolutional layer, prior to pooling, which outputs summary statistics. CNNs are better than trivial neural networks for imaging for a number of reasons, such as sparse interactions and equivariance to translation.
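
A minimal illustration of the statement that a convolutional layer computes dot products between a kernel and local input patches, followed by pooling that outputs summary statistics; the kernel and sizes are arbitrary:

    import numpy as np

    def conv2d_valid(image, kernel):
        # Each output pixel is the dot product of the kernel with one image patch.
        kh, kw = kernel.shape
        H, W = image.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + kh, j:j + kw]
                out[i, j] = np.sum(patch * kernel)     # dot product of patch and kernel
        return out

    def max_pool2d(x, size=2):
        # Pooling summarizes each size x size window by its maximum.
        H, W = x.shape[0] // size * size, x.shape[1] // size * size
        return x[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

    img = np.random.rand(8, 8)
    edge = np.array([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])   # Sobel-like kernel
    print(max_pool2d(conv2d_valid(img, edge)).shape)   # (3, 3)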
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Fang, Jinwei, Lanying Huang, Ying Shi, Hanming Chen e Bo Wang. "Three-dimensional elastic reverse-time migration using a high-order temporal and spatial staggered-grid finite-difference scheme". Frontiers in Earth Science 11 (26 de janeiro de 2023). http://dx.doi.org/10.3389/feart.2023.1069506.

Texto completo da fonte
Resumo:
Three-dimensional (3D) elastic reverse-time migration (ERTM) can image the subsurface 3D seismic structures, and it is an important tool for the Earth’s interior imaging. A common simulation kernel used in 3D ERTM is the current staggered-grid finite-difference (SGFD) method of the first-order elastic wave equation. However, the mere second-order accuracy in time of the current SGFD method can bring non-negligible time dispersion, which reduces the simulation accuracy and further leads to the distortion of the imaging results. This paper proposes a vector-based 3D ERTM using the high-order accuracy SGFD method in time to obtain high-accuracy images. This approach is a new high-resolution ERTM workflow that improves the imaging accuracy of conventional ERTM from numerical simulation. The proposed ERTM workflow is established on a quasi-stress–velocity wave equation and its vector wavefield decomposition form. Advanced SGFD schemes and their corresponding coefficients with fourth-order temporal accuracy solve the quasi-linear wave equation system. The normalized dot product imaging condition produces high-quality images using high-accuracy vector wavefields solved using the SGFD method. Through the numerical examples, we test the simulation efficiency and analyze how temporal accuracy in numerical simulations affects migration imaging quality. We conclude that the proposed method obtains highly accurate images.
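
A small sketch of a normalized dot-product imaging condition for vector wavefields of the kind mentioned above: at each image point, the source and receiver vector fields are correlated over time and components and normalized by the source energy. Array shapes and names are illustrative, not the paper's implementation:

    import numpy as np

    def normalized_dot_product_image(src_wavefield, rcv_wavefield, eps=1e-12):
        # src_wavefield, rcv_wavefield: (n_t, 3, nz, nx) vector fields (e.g. particle velocity).
        # Image = sum_t <S, R> / (sum_t <S, S> + eps), a normalized zero-lag cross-correlation.
        num = np.sum(src_wavefield * rcv_wavefield, axis=(0, 1))   # sum over time and components
        den = np.sum(src_wavefield * src_wavefield, axis=(0, 1)) + eps
        return num / den

    S = np.random.randn(100, 3, 32, 64)
    R = np.random.randn(100, 3, 32, 64)
    print(normalized_dot_product_image(S, R).shape)   # (32, 64)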
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Chukanov, S. N. "Formation of features based on computational topology methods". Computer Optics 47, n.º 3 (junho de 2023). http://dx.doi.org/10.18287/2412-6179-co-1190.

Texto completo da fonte
Resumo:
The use of traditional methods of algebraic topology to obtain information about the shape of an object is associated with the problem of forming a small amount of information, namely, Betti numbers and Euler characteristics. The central tool for topological data analysis is the persistent homology method, which summarizes the geometric and topological information in the data using persistent diagrams and barcodes. Based on persistent homology methods, topological data can be analyzed to obtain information about the shape of an object. The construction of persistent barcodes and persistent diagrams in computational topology does not allow one to construct a Hilbert space with a scalar product. The possibility of applying the methods of topological data analysis is based on mapping persistent diagrams into a Hilbert space; one of the ways of such mapping is a method of constructing a persistence landscape. It has an advantage of being reversible, so it does not lose any information and has persistence properties. The paper considers mathematical models and functions for representing persistence landscape objects based on the persistent homology method. Methods for converting persistent barcodes and persistent diagrams into persistence landscape functions are considered. Associated with persistence landscape functions is a persistence landscape kernel that forms a mapping into a Hilbert space with a dot product. A formula is proposed for determining a distance between the persistence landscapes, which allows the distance between images of objects to be found. The persistence landscape functions map persistent diagrams into a Hilbert space. Examples of determining the distance between images based on the construction of persistence landscape functions for these images are given. Representations of topological characteristics in various models of computational topology are considered. Results for one-parameter persistence modules are extended onto multi-parameter persistence modules.
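
A minimal sketch of how persistence landscape functions can be built from a persistence diagram and how a grid-approximated L^p distance between two landscapes can be computed; this is a generic illustration, not the specific kernel construction discussed in the paper:

    import numpy as np

    def landscape(diagram, grid, k_max=3):
        # diagram: array of (birth, death) pairs; returns lambda_1..lambda_k_max on the grid.
        tents = np.array([np.maximum(0.0, np.minimum(grid - b, d - grid)) for b, d in diagram])
        tents = np.sort(tents, axis=0)[::-1]           # per grid point, tent values in descending order
        k = min(k_max, len(diagram))
        out = np.zeros((k_max, len(grid)))
        out[:k] = tents[:k]
        return out

    def landscape_distance(L1, L2, grid, p=2):
        # L^p distance between two landscapes, approximated on the grid.
        dt = grid[1] - grid[0]
        return (np.sum(np.abs(L1 - L2) ** p) * dt) ** (1.0 / p)

    grid = np.linspace(0, 2, 401)
    D1 = np.array([[0.0, 1.0], [0.2, 0.8]])
    D2 = np.array([[0.0, 1.2], [0.5, 0.9]])
    print(landscape_distance(landscape(D1, grid), landscape(D2, grid), grid))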
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Manzhos, Sergei, Johann Lüder e Manabu Ihara. "Machine learning of kinetic energy densities with target and feature smoothing: Better results with fewer training data". Journal of Chemical Physics 159, n.º 23 (19 de dezembro de 2023). http://dx.doi.org/10.1063/5.0175689.

Texto completo da fonte
Resumo:
Machine learning (ML) of kinetic energy functionals (KEFs), in particular kinetic energy density (KED) functionals, is a promising way to construct KEFs for orbital-free density functional theory (DFT). Neural networks and kernel methods including Gaussian process regression (GPR) have been used to learn Kohn–Sham (KS) KED from density-based descriptors derived from KS DFT calculations. The descriptors are typically expressed as functions of different powers and derivatives of the electron density. This can generate large and extremely unevenly distributed datasets, which complicates effective application of ML techniques. Very uneven data distributions require many training datapoints, can cause overfitting, and can ultimately lower the quality of an ML KED model. We show that one can produce more accurate ML models from fewer data by working with smoothed density-dependent variables and KED. Smoothing palliates the issue of very uneven data distributions and associated difficulties of sampling while retaining enough spatial structure necessary for working within the paradigm of KEDF. We use GPR as a function of smoothed terms of the fourth order gradient expansion and KS effective potential and obtain accurate and stable (with respect to different random choices of training points) kinetic energy models for Al, Mg, and Si simultaneously from as few as 2000 samples (about 0.3% of the total KS DFT data). In particular, accuracies on the order of 1% in a measure of the quality of energy–volume dependence B′ = [E(V0 − ΔV) − 2E(V0) + E(V0 + ΔV)]/(ΔV/V0)^2 (where V0 is the equilibrium volume and ΔV is a deviation from it) are obtained simultaneously for all three materials.
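
A rough sketch of the overall workflow (smooth the density-derived descriptors and the target, then fit Gaussian process regression on a small subsample), assuming scikit-learn and SciPy; the "density", descriptors and smoothing width below are placeholders, not the fourth-order gradient-expansion terms used in the paper:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    n_grid = 20000
    rho = np.abs(rng.normal(size=n_grid)).cumsum() / n_grid        # placeholder "density" on a grid
    ked = rho ** (5.0 / 3.0) + 0.01 * rng.normal(size=n_grid)      # placeholder kinetic energy density

    # Smooth both the descriptors and the target before learning, as advocated above.
    sigma = 25
    rho_s = gaussian_filter1d(rho, sigma)
    grad_s = gaussian_filter1d(np.gradient(rho), sigma)
    ked_s = gaussian_filter1d(ked, sigma)

    X = np.column_stack([rho_s, grad_s])
    # The paper reports good models from ~2000 samples; 500 keeps this sketch fast.
    idx = rng.choice(n_grid, size=500, replace=False)
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gpr.fit(X[idx], ked_s[idx])
    print("R^2 on all grid points:", gpr.score(X, ked_s))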
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Abed, Qutaiba K., e Waleed A. Mahmoud Al-Jawher. "ANEW ARCHITECTURE OF KEY GENERATION USING DWT FOR IMAGE ENCRYPTION WITH THREE LEVELS ARNOLD TRANSFORM PERMUTATION". Journal Port Science Research 5, n.º 3 (17 de outubro de 2023). http://dx.doi.org/10.36371/port.2022.3.6.

Texto completo da fonte
Resumo:
The security of image transmission is an important issue in digital communication, and it is necessary to preserve and protect important information for applications such as military and medical services that require high confidentiality over the Internet or other unprotected networks. In this paper a proposed encryption scheme is introduced that uses the Lorenz system with a circular convolution and the discrete cosine transform (DCT). The diffusion process is achieved using three levels of Arnold transform permutation: block level, inside each block, and pixel level. The image is divided into blocks of 8x8 pixels and shuffled by applying a Fisher-Yates permutation to the image pixels. The DCT is applied to each block and multiplied by the H kernel matrix to achieve the circular convolution. Next the logistic map is used for diffusion to obtain the cipher image. A new key generation method is applied in order to produce the key values for generating the chaotic number sequence. Finally, applying the discrete wavelet transform (DWT) to the image produces four quarters (LL, LH, HL and HH). The cosine of each pixel in the LL and HH quarters and the sine of each pixel in the LH and HL quarters are computed in order to generate the keys for the chaos sequence. By using this per-image key generation, the possibility of a differential attack becomes negligible.
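
Generic illustrations of two ingredients named above, an Arnold cat-map pixel permutation and a logistic-map keystream used for diffusion; this is not the paper's full scheme and all parameters are illustrative:

    import numpy as np

    def arnold_cat_map(img, iterations=1):
        # Bijective pixel-position scramble based on the Arnold cat map:
        # out[x, y] = in[(x + y) % N, (x + 2y) % N] for a square N x N image.
        N = img.shape[0]
        out = img.copy()
        x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        for _ in range(iterations):
            out = out[(x + y) % N, (x + 2 * y) % N]
        return out

    def logistic_keystream(length, x0=0.618, r=3.99):
        # Chaotic logistic map x_{n+1} = r * x_n * (1 - x_n), quantized to bytes.
        xs, x = np.empty(length), x0
        for i in range(length):
            x = r * x * (1.0 - x)
            xs[i] = x
        return (xs * 255).astype(np.uint8)

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)
    permuted = arnold_cat_map(img, iterations=3)
    cipher = permuted ^ logistic_keystream(permuted.size).reshape(permuted.shape)  # diffusion by XOR
    print(cipher.shape)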
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Zhang, Gezhi, Yong Long, Yuting Lin, Ronald C. Chen e Hao Gao. "A treatment plan optimization method with direct minimization of number of energy jumps for proton arc therapy". Physics in Medicine & Biology, 15 de março de 2023. http://dx.doi.org/10.1088/1361-6560/acc4a7.

Texto completo da fonte
Resumo:
Abstract Objective: The optimization of energy layer distributions is crucial to proton ARC therapy: on one hand, a sufficient number of energy layers is needed to ensure the plan quality; on the other hand, an excess number of energy jumps can substantially slow down the treatment delivery. This work will develop a new treatment plan optimization method with direct minimization of number of energy jumps (NEJ), which will be shown to outperform state-of-the-art methods in both plan quality and delivery efficiency. Approach: The proposed method jointly optimizes the plan quality and minimizes the NEJ. To minimize NEJ, (1) the proton spots x is summed per energy layer to form the energy vector y; (2) y is binarized via sigmoid transform into y1; (3) y1 is multiplied with a predefined energy order vector via dot product into y2; (4) y2 is filtered through the finite-differencing kernel into y3 in order to identify NEJ; (5) only the NEJ of y3 is penalized, while x is optimized for plan quality. The solution algorithm to this new method is based on iterative convex relaxation. Main Results: The new method is validated in comparison with state-of-the-art methods called energy sequencing (ES) method and energy matrix (EM) method. In terms of delivery efficiency, the new method had fewer NEJ, less energy switching time, and generally less total delivery time. In terms of plan quality, the new method had smaller optimization objective values, lower normal tissue dose, and generally better target coverage. Significance: We have developed a new treatment plan optimization method with direct minimization of NEJ, and demonstrated that this new method outperformed state-of-the-art methods (ES and EM) in both plan quality and delivery efficiency.
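
A hedged numpy sketch of the five-step NEJ construction described above; the sigmoid sharpness, threshold and jump-detection rule are illustrative, and step (3) is written here as an element-wise weighting by the energy-order vector:

    import numpy as np

    def sigmoid(t, sharpness=50.0):
        return 1.0 / (1.0 + np.exp(-sharpness * t))

    def soft_energy_jump_count(x, spots_per_layer, threshold=1e-3):
        # x: spot weights; spots_per_layer: list of index arrays, one per energy layer.
        y = np.array([x[idx].sum() for idx in spots_per_layer])      # (1) sum spots per layer
        y1 = sigmoid(y - threshold)                                  # (2) soft binarization of layer usage
        order = np.arange(1, len(y1) + 1, dtype=float)               # (3) predefined energy-order vector
        y2 = y1 * order                                              #     element-wise weighting
        y3 = np.convolve(y2, [1.0, -1.0], mode="valid")              # (4) finite-differencing kernel
        return np.sum(sigmoid(np.abs(y3) - 0.5))                     # (5) smooth count of energy jumps

    rng = np.random.default_rng(0)
    spots_per_layer = [rng.integers(0, 500, 20) for _ in range(30)]
    x = np.maximum(rng.normal(0.1, 0.2, 500), 0.0)
    print(soft_energy_jump_count(x, spots_per_layer))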
Estilos ABNT, Harvard, Vancouver, APA, etc.