Journal articles on the topic "Mixed precision"

Consult the top 50 journal articles for your research on the topic "Mixed precision".

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Higham, Nicholas J., and Theo Mary. "Mixed precision algorithms in numerical linear algebra." Acta Numerica 31 (May 2022): 347–414. http://dx.doi.org/10.1017/s0962492922000022.

Abstract:
Today’s floating-point arithmetic landscape is broader than ever. While scientific computing has traditionally used single precision and double precision floating-point arithmetics, half precision is increasingly available in hardware and quadruple precision is supported in software. Lower precision arithmetic brings increased speed and reduced communication and energy costs, but it produces results of correspondingly low accuracy. Higher precisions are more expensive but can potentially provide great benefits, even if used sparingly. A variety of mixed precision algorithms have been developed that combine the superior performance of lower precisions with the better accuracy of higher precisions. Some of these algorithms aim to provide results of the same quality as algorithms running in a fixed precision but at a much lower cost; others use a little higher precision to improve the accuracy of an algorithm. This survey treats a broad range of mixed precision algorithms in numerical linear algebra, both direct and iterative, for problems including matrix multiplication, matrix factorization, linear systems, least squares, eigenvalue decomposition and singular value decomposition. We identify key algorithmic ideas, such as iterative refinement, adapting the precision to the data, and exploiting mixed precision block fused multiply–add operations. We also describe the possible performance benefits and explain what is known about the numerical stability of the algorithms. This survey should be useful to a wide community of researchers and practitioners who wish to develop or benefit from mixed precision numerical linear algebra algorithms.
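A mixed precision idea this survey treats at length, iterative refinement, is compact enough to sketch: factorize once in low precision, then repeatedly correct the solution with residuals computed in higher precision. Below is a minimal NumPy/SciPy illustration of the pattern, not code from the survey; the iteration cap and the random 200×200 system are arbitrary choices.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b with LU factorization in float32 and
    residual correction in float64 (iterative refinement)."""
    lu, piv = lu_factor(A.astype(np.float32))   # one cheap low-precision factorization
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                           # residual in float64
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d                                  # correction reuses the float32 factors
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))  # typically near float64 round-off
```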
2

Ralha, Rui. "Mixed Precision Bisection." Mathematics in Computer Science 12, no. 2 (March 13, 2018): 173–81. http://dx.doi.org/10.1007/s11786-018-0336-6.

3

Liu, Xingchao, Mao Ye, Dengyong Zhou, and Qiang Liu. "Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8697–705. http://dx.doi.org/10.1609/aaai.v35i10.17054.

Abstract:
We consider the post-training quantization problem, which discretizes the weights of pre-trained deep neural networks without re-training the model. We propose multipoint quantization, a quantization method that approximates a full-precision weight vector using a linear combination of multiple vectors of low-bit numbers; this is in contrast to typical quantization methods that approximate each weight using a single low-precision number. Computationally, we construct the multipoint quantization with an efficient greedy selection procedure, and adaptively decide the number of low-precision points on each quantized weight vector based on the error of its output. This allows us to achieve higher precision levels for important weights that greatly influence the outputs, yielding an "effect of mixed precision" but without physical mixed-precision implementations (which require specialized hardware accelerators). Empirically, our method can be implemented by common operands, bringing almost no memory and computation overhead. We show that our method outperforms a range of state-of-the-art methods on ImageNet classification and that it can be generalized to more challenging tasks like PASCAL VOC object detection.
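The greedy construction lends itself to a short sketch: repeatedly fit the remaining residual with one scaled low-bit vector until the approximation error is small enough. The helper below is a simplification under assumed binary {-1, +1} points and least-squares scales, not the paper's exact procedure.

```python
import numpy as np

def multipoint_quantize(w, max_points=4, tol=1e-2):
    """Greedily approximate w by sum_i alpha_i * v_i with v_i in {-1, +1}^n,
    adding points until the residual is below tol (simplified sketch)."""
    residual = w.copy()
    points = []
    for _ in range(max_points):
        v = np.sign(residual)
        v[v == 0] = 1.0
        alpha = np.dot(v, residual) / len(w)  # least-squares scale for this v
        points.append((alpha, v))
        residual = residual - alpha * v
        if np.linalg.norm(residual) <= tol * np.linalg.norm(w):
            break  # important weight vectors get more points, others fewer
    return points

w = np.random.default_rng(1).standard_normal(64)
approx = sum(a * v for a, v in multipoint_quantize(w))
print(np.linalg.norm(w - approx) / np.linalg.norm(w))
```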
4

Van Zee, Field G., Devangi N. Parikh, and Robert A. Van De Geijn. "Supporting Mixed-domain Mixed-precision Matrix Multiplication within the BLIS Framework." ACM Transactions on Mathematical Software 47, no. 2 (April 2021): 1–26. http://dx.doi.org/10.1145/3402225.

Abstract:
We approach the problem of implementing mixed-datatype support within the general matrix multiplication (gemm) operation of the BLAS-like Library Instantiation Software framework, whereby each matrix operand A, B, and C may be stored as single- or double-precision real or complex values. Another factor of complexity, whereby the matrix product and accumulation are allowed to take place in a precision different from the storage precisions of either A or B, is also discussed. We first break the problem into orthogonal dimensions, considering the mixing of domains separately from mixing precisions. Support for all combinations of matrix operands stored in either the real or complex domain is mapped out by enumerating the cases and describing an implementation approach for each. Supporting all combinations of storage and computation precisions is handled by typecasting the matrices at key stages of the computation (during packing and/or accumulation, as needed). Several optional optimizations are also documented. Performance results gathered on a 56-core Marvell ThunderX2 and a 52-core Intel Xeon Platinum demonstrate that high performance is mostly preserved, with modest slowdowns incurred from unavoidable typecast instructions. The mixed-datatype implementation confirms that combinatorial intractability is avoided, with the framework relying on only two assembly microkernels to implement 128 datatype combinations.
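The typecast-at-key-stages idea can be illustrated outside BLIS: operands stored in one precision are upcast once, the product is accumulated in a wider type, and the result is cast back to the storage precision of C. A NumPy sketch of that pattern, not the framework's packing machinery:

```python
import numpy as np

# Storage precision: float32. Computation/accumulation precision: float64.
rng = np.random.default_rng(2)
A = rng.standard_normal((256, 256)).astype(np.float32)
B = rng.standard_normal((256, 256)).astype(np.float32)

C_single = A @ B  # all-float32 gemm: products and sums both round in float32
C_mixed = (A.astype(np.float64) @ B.astype(np.float64)).astype(np.float32)

# The difference is the accumulation error avoided by the wider computation.
print(np.max(np.abs(C_single - C_mixed)))
```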
5

Kim, Han-Byul, Joo Hyung Lee, Sungjoo Yoo, and Hong-Seok Kim. "MetaMix: Meta-State Precision Searcher for Mixed-Precision Activation Quantization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13132–41. http://dx.doi.org/10.1609/aaai.v38i12.29212.

Abstract:
Mixed-precision quantization of efficient networks often suffers from activation instability encountered in the exploration of bit selections. To address this problem, we propose a novel method called MetaMix, which consists of bit selection and weight training phases. The bit selection phase iterates two steps: (1) the mixed-precision-aware weight update, and (2) the bit-search training with the fixed mixed-precision-aware weights, both of which combined reduce activation instability in mixed-precision quantization and contribute to fast and high-quality bit selection. The weight training phase exploits the weights and step sizes trained in the bit selection phase and fine-tunes them, thereby offering fast training. Our experiments with efficient and hard-to-quantize networks (MobileNet v2 and v3, and ResNet-18) on ImageNet show that our proposed method pushes the boundary of mixed-precision quantization, in terms of accuracy vs. operations, by outperforming both mixed- and single-precision SOTA methods.
6

Kelley, C. T. "Newton's Method in Mixed Precision." SIAM Review 64, no. 1 (February 2022): 191–211. http://dx.doi.org/10.1137/20m1342902.

7

Le Gallo, Manuel, Abu Sebastian, Roland Mathis, Matteo Manica, Heiner Giefers, Tomas Tuma, Costas Bekas, Alessandro Curioni, and Evangelos Eleftheriou. "Mixed-precision in-memory computing." Nature Electronics 1, no. 4 (April 2018): 246–53. http://dx.doi.org/10.1038/s41928-018-0054-8.

8

Ma, Yuexiao, Taisong Jin, Xiawu Zheng, Yan Wang, Huixia Li, Yongjian Wu, Guannan Jiang, Wei Zhang, and Rongrong Ji. "OMPQ: Orthogonal Mixed Precision Quantization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 9029–37. http://dx.doi.org/10.1609/aaai.v37i7.26084.

Abstract:
To bridge the ever-increasing gap between deep neural networks' complexity and hardware capability, network quantization has attracted more and more research attention. The latest trend of mixed precision quantization takes advantage of hardware's multiple bit-width arithmetic operations to unleash the full potential of network quantization. However, existing approaches rely heavily on an extremely time-consuming search process and various relaxations when seeking the optimal bit configuration. To address this issue, we propose to optimize a proxy metric of network orthogonality that can be efficiently solved with linear programming, which proves to be highly correlated with quantized model accuracy and bit-width. Our approach significantly reduces the search time and the required data amount by orders of magnitude, but without a compromise on quantization accuracy. Specifically, we achieve 72.08% Top-1 accuracy on ResNet-18 with 6.7Mb parameters, which does not require any searching iterations. Given the high efficiency and low data dependency of our algorithm, we use it for the post-training quantization, which achieves 71.27% Top-1 accuracy on MobileNetV2 with only 1.5Mb parameters.
9

Lee, Jong-Eun, Kyung-Bin Jang, and Seung-Ho Lim. "Implementation and Performance Analysis of Mixed Precision-based CNN Inference." Journal of Korean Institute of Information Technology 21, no. 12 (December 31, 2023): 77–85. http://dx.doi.org/10.14801/jkiit.2023.21.12.77.

10

Al-Marakeby, A. "PRECISION ON DEMAND: A NOVEL LOSSLES MIXED-PRECISION COMPUTATION TECHNIQUE." Journal of Al-Azhar University Engineering Sector 15, no. 57 (October 1, 2020): 1046–56. http://dx.doi.org/10.21608/auej.2020.120378.

11

Khimich, A. N., and V. A. Sidoruk. "Using Mixed Precision in Mathematical Modeling." Mathematical and computer modelling. Series: Physical and mathematical sciences, no. 19 (June 25, 2019): 180–87. http://dx.doi.org/10.32626/2308-5878.2019-19.180-187.

12

Chiang, Wei-Fan, Mark Baranowski, Ian Briggs, Alexey Solovyev, Ganesh Gopalakrishnan, and Zvonimir Rakamarić. "Rigorous floating-point mixed-precision tuning." ACM SIGPLAN Notices 52, no. 1 (May 11, 2017): 300–315. http://dx.doi.org/10.1145/3093333.3009846.

13

Lohar, Debasmita, Clothilde Jeangoudoux, Anastasia Volkova, and Eva Darulova. "Sound Mixed Fixed-Point Quantization of Neural Networks." ACM Transactions on Embedded Computing Systems 22, no. 5s (September 9, 2023): 1–26. http://dx.doi.org/10.1145/3609118.

Abstract:
Neural networks are increasingly being used as components in safety-critical applications, for instance, as controllers in embedded systems. Their formal safety verification has made significant progress but typically considers only idealized real-valued networks. For practical applications, such neural networks have to be quantized, i.e., implemented in finite-precision arithmetic, which inevitably introduces roundoff errors. Choosing a suitable precision that is both guaranteed to satisfy a roundoff error bound to ensure safety and that is as small as possible to not waste resources is highly nontrivial to do manually. This task is especially challenging when quantizing a neural network in fixed-point arithmetic, where one can choose among a large number of precisions and has to ensure overflow-freedom explicitly. This paper presents the first sound and fully automated mixed-precision quantization approach that specifically targets deep feed-forward neural networks. Our quantization is based on mixed-integer linear programming (MILP) and leverages the unique structure of neural networks and effective over-approximations to make MILP optimization feasible. Our approach efficiently optimizes the number of bits needed to implement a network while guaranteeing a provided error bound. Our evaluation on existing embedded neural controller benchmarks shows that our optimization translates into precision assignments that mostly use fewer machine cycles when compiled to an FPGA with a commercial HLS compiler than code generated by the (sound) state-of-the-art. Furthermore, our approach handles significantly more benchmarks substantially faster, especially for larger networks.
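The key ingredient such an optimizer reasons about is a sound roundoff bound per precision assignment. As a heavily simplified illustration of the kind of constraint an MILP formulation would encode, the sketch below propagates a worst-case fixed-point rounding error through affine layers; the layer shapes and bit counts are hypothetical, and nonlinearities, bias errors, and overflow checks are omitted.

```python
import numpy as np

def layer_error_bound(W, in_err, frac_bits):
    """Worst-case roundoff propagation through y = W x: the incoming error
    is amplified by the induced infinity norm of W, and rounding each
    output to frac_bits fractional bits adds at most 2^-(frac_bits+1)."""
    amplification = np.abs(W).sum(axis=1).max()
    new_rounding = 2.0 ** -(frac_bits + 1)
    return amplification * in_err + new_rounding

# two hypothetical layers quantized to 8 and 12 fractional bits
W1 = np.random.default_rng(4).uniform(-1, 1, (32, 16))
W2 = np.random.default_rng(5).uniform(-1, 1, (8, 32))
err = layer_error_bound(W1, in_err=2.0 ** -9, frac_bits=8)
err = layer_error_bound(W2, in_err=err, frac_bits=12)
print(err)  # guaranteed output error bound for this precision assignment
```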
14

Paoletti, M. E., X. Tao, J. M. Haut, S. Moreno-Álvarez, and A. Plaza. "Deep mixed precision for hyperspectral image classification." Journal of Supercomputing 77, no. 8 (February 3, 2021): 9190–201. http://dx.doi.org/10.1007/s11227-021-03638-2.

15

Wang, Kuan, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. "Hardware-Centric AutoML for Mixed-Precision Quantization." International Journal of Computer Vision 128, no. 8-9 (June 11, 2020): 2035–48. http://dx.doi.org/10.1007/s11263-020-01339-6.

16

Baboulin, Marc, Alfredo Buttari, Jack Dongarra, Jakub Kurzak, Julie Langou, Julien Langou, Piotr Luszczek, and Stanimire Tomov. "Accelerating scientific computations with mixed precision algorithms." Computer Physics Communications 180, no. 12 (December 2009): 2526–33. http://dx.doi.org/10.1016/j.cpc.2008.11.005.

17

Liu, Jie. "Accuracy Controllable SpMV Optimization on GPU." Journal of Physics: Conference Series 2363, no. 1 (November 1, 2022): 012008. http://dx.doi.org/10.1088/1742-6596/2363/1/012008.

Abstract:
Sparse matrix-vector multiplication (SpMV) is a key kernel widely used in a variety of fields, and mixed-precision calculation brings opportunities to SpMV optimization. Researchers have proposed to store nonzero elements in the interval (-1, 1) in single precision and calculate SpMV in mixed precision. Though this leads to high performance, it also brings a loss of accuracy. This paper proposes an accuracy-controllable optimization method for SpMV. By limiting the error caused by converting double-precision floating-point numbers in the interval (-1, 1) into single-precision format, the calculation accuracy of mixed-precision SpMV is effectively improved. We tested sparse matrices from the SuiteSparse Matrix Collection on a Tesla V100. Compared with the existing mixed-precision MpSpMV kernel, the mixed-precision SpMV proposed in this paper achieves an accuracy improvement.
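The underlying split is straightforward to sketch: nonzeros with magnitude below 1 are stored in single precision, the rest stay in double, and the two partial products are summed in double. A SciPy sketch of that baseline scheme; the paper's accuracy-control refinement is not reproduced here.

```python
import numpy as np
from scipy.sparse import coo_matrix, random as sparse_random

def split_spmv(A, x):
    """Mixed-precision SpMV: float32 for entries in (-1, 1),
    float64 for the rest, with the partial results summed in float64."""
    small = np.abs(A.data) < 1.0
    A_lo = coo_matrix((A.data[small].astype(np.float32),
                       (A.row[small], A.col[small])), shape=A.shape)
    A_hi = coo_matrix((A.data[~small],
                       (A.row[~small], A.col[~small])), shape=A.shape)
    y_lo = A_lo @ x.astype(np.float32)      # low-precision partial product
    return y_lo.astype(np.float64) + A_hi @ x

A = sparse_random(1000, 1000, density=0.01, format="coo", random_state=6,
                  data_rvs=np.random.default_rng(6).standard_normal)
x = np.random.default_rng(7).standard_normal(1000)
print(np.max(np.abs(split_spmv(A, x) - A @ x)))  # error of the mixed kernel
```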
18

Zhang, Hao, Dongdong Chen, and Seok-Bum Ko. "Efficient Multiple-Precision Floating-Point Fused Multiply-Add with Mixed-Precision Support." IEEE Transactions on Computers 68, no. 7 (July 1, 2019): 1035–48. http://dx.doi.org/10.1109/tc.2019.2895031.

19

Yang, Linjie, and Qing Jin. "FracBits: Mixed Precision Quantization via Fractional Bit-Widths." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10612–20. http://dx.doi.org/10.1609/aaai.v35i12.17269.

Abstract:
Model quantization helps to reduce the model size and latency of deep neural networks. Mixed precision quantization is favorable with customized hardware supporting arithmetic operations at multiple bit-widths to achieve maximum efficiency. We propose a novel learning-based algorithm to derive mixed precision models end-to-end under target computation constraints and model sizes. During the optimization, the bit-width of each layer/kernel in the model is at a fractional status of two consecutive bit-widths, which can be adjusted gradually. With a differentiable regularization term, the resource constraints can be met during the quantization-aware training, which results in an optimized mixed precision model. Our final models achieve comparable or better performance than previous quantization methods with mixed precision on MobilenetV1/V2 and ResNet18 under different resource constraints on the ImageNet dataset.
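The fractional-bit relaxation is easy to sketch: a bit-width b + f is modeled as a linear interpolation between quantizations at the two adjacent integer bit-widths, which makes the objective differentiable in f. A minimal NumPy sketch, with a plain symmetric uniform quantizer standing in for the paper's:

```python
import numpy as np

def quantize(w, bits):
    """Symmetric uniform quantization of w to the given integer bit-width."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

def fractional_quantize(w, bitwidth):
    """Fractional bit-width b + f modeled as
    (1 - f) * Q_b(w) + f * Q_{b+1}(w), differentiable in f."""
    b = int(np.floor(bitwidth))
    f = bitwidth - b
    return (1 - f) * quantize(w, b) + f * quantize(w, b + 1)

w = np.random.default_rng(8).standard_normal(100)
for bw in (4.0, 4.3, 4.7, 5.0):
    err = np.linalg.norm(w - fractional_quantize(w, bw))
    print(f"bit-width {bw}: quantization error {err:.4f}")  # shrinks as bw grows
```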
20

Yu, Chuan, Dingding Xian, and Jiao Liu. "High Precision Low Temperature Drift Modulus Mixed Multichannel." OALib 09, no. 07 (2022): 1–11. http://dx.doi.org/10.4236/oalib.1109067.

21

Ozaki, Katsuhisa, Takeshi Ogita, Florian Bünger, and Shin'ichi Oishi. "Accelerating interval matrix multiplication by mixed precision arithmetic." Nonlinear Theory and Its Applications, IEICE 6, no. 3 (2015): 364–76. http://dx.doi.org/10.1587/nolta.6.364.

22

Hines, Jonathan, James J. Hack, and Michael E. Papka. "Mixed Precision: A Strategy for New Science Opportunities." Computing in Science & Engineering 20, no. 6 (November 1, 2018): 67–71. http://dx.doi.org/10.1109/mcse.2018.2874161.

23

Sun, Junqing, G. D. Peterson, and O. O. Storaasli. "High-Performance Mixed-Precision Linear Solver for FPGAs." IEEE Transactions on Computers 57, no. 12 (December 2008): 1614–23. http://dx.doi.org/10.1109/tc.2008.89.

24

Prikopa, Karl E., and Wilfried N. Gansterer. "On Mixed Precision Iterative Refinement for Eigenvalue Problems." Procedia Computer Science 18 (2013): 2647–50. http://dx.doi.org/10.1016/j.procs.2013.06.002.

25

Yue, Xiaoqiang, Zhiyong Wang, and Shu-Lin Wu. "Convergence Analysis of a Mixed Precision Parareal Algorithm." SIAM Journal on Scientific Computing 45, no. 5 (September 22, 2023): A2483–A2510. http://dx.doi.org/10.1137/22m1510169.

26

Molina, Roméo, Vincent Lafage, David Chamont, and Fabienne Jézéquel. "Investigating mixed-precision for AGATA pulse-shape analysis." EPJ Web of Conferences 295 (2024): 03020. http://dx.doi.org/10.1051/epjconf/202429503020.

Abstract:
The AGATA project aims at building a 4π gamma-ray spectrometer consisting of 180 germanium crystals, each crystal being divided into 36 segments. Each gamma ray produces an electrical signal within several neighbouring segments, which is compared with a database of reference signals, making it possible to locate the interaction. This step is called Pulse-Shape Analysis (PSA). In the execution chain leading to the PSA, we observe successive data conversions: the original 14-bit integers given by the electronics are finally converted to 32-bit floats. This made us question the real numerical accuracy of the results and investigate the use of shorter floats, hoping to speed up the computation and reduce a major cache-miss problem. In this article, we first describe the numerical validation of the PSA code using the CADNA library. Once the code is properly instrumented, CADNA performs each computation three times with a random rounding mode. This allows the number of exact significant digits of each operation to be evaluated using a Student test with a 95% confidence threshold. In a second step, we report our successes and challenges in refactoring the code to mix different numerical formats, using high precision only where necessary and taking advantage of hardware speed-ups elsewhere.
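The estimate itself is simple to emulate: run the computation a few times with randomly perturbed rounding and read the number of exact significant digits off the spread of the samples. The toy below is not the CADNA library: random rounding is imitated by perturbations of roughly one ulp, and the Student-test factor is left out.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy(x, eps=2.0 ** -24):
    """Crude stand-in for a random rounding mode: perturb a result
    by about one float32 ulp in a random direction."""
    return x * (1.0 + eps * rng.uniform(-1.0, 1.0))

def computation():
    acc = 0.0
    for v in (1e8, 1.0, -1e8, 1.0):  # an ill-conditioned, cancellation-prone sum
        acc = noisy(acc + v)
    return acc

samples = np.array([computation() for _ in range(3)])  # CADNA-style: three runs
mean, spread = samples.mean(), samples.std(ddof=1)
digits = -np.log10(spread / abs(mean)) if spread > 0 else 15.0
print(f"estimated exact significant digits: {digits:.1f}")
```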
27

Kimhi, Moshe, Tal Rozen, Avi Mendelson, and Chaim Baskin. "AMED: Automatic Mixed-Precision Quantization for Edge Devices." Mathematics 12, no. 12 (June 11, 2024): 1810. http://dx.doi.org/10.3390/math12121810.

Abstract:
Quantized neural networks are well known for reducing latency, power consumption, and model size without significant harm to performance. This makes them highly appropriate for systems with limited resources and low power capacity. Mixed-precision quantization offers better utilization of customized hardware that supports arithmetic operations at different bitwidths. Existing quantization methods either aim to minimize the compression loss given a desired reduction or to optimize a dependent variable for a specified property of the model (such as FLOPs or model size); both make the performance inefficient when deployed on specific hardware. More importantly, quantization methods assume that the loss manifold of a quantized model holds a global minimum that coincides with the global minimum of the full-precision counterpart. Challenging this assumption, we argue that the optimal minimum changes as the precision changes, and thus it is better to look at quantization as a random process. This lays the foundation for a different approach to quantizing neural networks: during training, the model is quantized to different precisions, the bit allocation is treated as a Markov Decision Process, and an optimal bitwidth allocation is then found for measuring specified behaviors on a specific device via direct signals from the particular hardware architecture. By doing so, we avoid the basic assumption that the loss behaves the same way for a quantized model. Automatic Mixed-Precision Quantization for Edge Devices (dubbed AMED) demonstrates its superiority over current state-of-the-art schemes in terms of the trade-off between neural network accuracy and hardware efficiency, backed by a comprehensive evaluation.
28

Mohanty, Sanjit Ku, Debasish Das, and Rajani B. Dash. "Dual Mixed Gaussian Quadrature Based Adaptive Scheme for Analytic Functions." Annals of Pure and Applied Mathematics 22, no. 02 (2020): 83–92. http://dx.doi.org/10.22457/apam.v22n2a03704.

Abstract:
An efficient adaptive scheme based on a dual mixed quadrature rule of precision eleven for the approximate evaluation of line integrals of analytic functions has been constructed. First, the precision of the Gauss-Legendre four-point transformed rule is enhanced using Richardson extrapolation. A suitable convex combination of the resulting rule and the Gauss-Legendre five-point rule further enhances the precision, producing a new mixed quadrature rule. This mixed rule is termed a dual mixed Gaussian quadrature rule, as it attains the very high precision of eleven by using Gaussian quadrature rules in two steps. An adaptive quadrature scheme is designed. Some test integrals with analytic integrands have been evaluated using the dual mixed rule and its constituent rules in non-adaptive mode, and the same set of test integrals has been evaluated using those rules as base rules in the adaptive scheme. The dual mixed rule based adaptive scheme is found to be the most effective.
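The adaptive part follows a standard pattern: integrate each interval with two rules of different precision, accept when they agree to tolerance, and bisect otherwise. In the sketch below, off-the-shelf Gauss-Legendre 4- and 5-point rules stand in for the paper's extrapolated and dual mixed rules.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre rule mapped from [-1, 1] to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * np.sum(w * f(mid + half * x))

def adaptive(f, a, b, tol=1e-10):
    """Accept the finer rule when the two estimates agree, else bisect."""
    lo = gauss_legendre(f, a, b, 4)
    hi = gauss_legendre(f, a, b, 5)
    if abs(hi - lo) < tol:
        return hi
    mid = 0.5 * (a + b)
    return adaptive(f, a, mid, tol / 2) + adaptive(f, mid, b, tol / 2)

print(adaptive(np.exp, 0.0, 1.0))  # compare with exp(1) - 1 = 1.718281828...
```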
29

Patra, Pritikanta, Debasish Das, and Rajani Ballav Dash. "Numerical Approximation of Surface Integrals Using Mixed Cubature Adaptive Scheme." Annals of Pure and Applied Mathematics 22, no. 01 (2020): 29–39. http://dx.doi.org/10.22457/apam.v22n1a05678.

Abstract:
This research describes the development of a new mixed cubature rule for the evaluation of surface integrals over rectangular domains. Taking a linear combination of the Clenshaw-Curtis 5-point rule and the Gauss-Legendre 3-point rule in two dimensions (each rule having the same precision, i.e., precision 5), a mixed cubature rule of higher precision (precision 7) is formed. The method is iterative in nature and relies on function values at unevenly spaced points on the rectangle of integration. As a supplement, an adaptive cubature algorithm is designed to reinforce the mixed cubature rule. Numerical examples illustrate that the mixed cubature rule is more powerful than the constituent standard cubature procedures in both adaptive and non-adaptive settings.
30

Fuengfusin, Ninnart, and Hakaru Tamukoh. "Mixed-precision weights network for field-programmable gate array." PLOS ONE 16, no. 5 (May 10, 2021): e0251329. http://dx.doi.org/10.1371/journal.pone.0251329.

Abstract:
In this study, we introduced a mixed-precision weights network (MPWN), a quantized neural network that jointly utilizes three different weight spaces: binary {−1,1}, ternary {−1,0,1}, and 32-bit floating-point. We further developed the MPWN from both software and hardware aspects. From the software aspect, we evaluated the MPWN on the Fashion-MNIST and CIFAR10 datasets. We systematized the accuracy sparsity bit score, a linear combination of accuracy, sparsity, and number of bits; this score allows Bayesian optimization to be used efficiently to search for MPWN weight space combinations. From the hardware aspect, we proposed XOR signed-bits to explore floating-point and binary weight spaces in the MPWN. XOR signed-bits is an efficient implementation equivalent to multiplication of floating-point and binary weight spaces. Using the same concept, we also provide a ternary bitwise operation that is an efficient implementation equivalent to multiplication of floating-point and ternary weight spaces. To demonstrate the compatibility of the MPWN with hardware implementation, we synthesized and implemented the MPWN in a field-programmable gate array using high-level synthesis. Our proposed MPWN implementation utilized 1.68-4.89 times fewer hardware resources, depending on the resource type, than a conventional 32-bit floating-point model. In addition, our implementation reduced latency by up to 31.55 times compared to the 32-bit floating-point model without optimizations.
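The XOR signed-bits trick rests on a property of IEEE-754: multiplying a float by a binary weight in {-1, +1} only ever flips the sign bit, so it can be replaced by an XOR on the bit pattern instead of a floating-point multiply. A NumPy bit-view sketch of the equivalence (the FPGA realization differs, of course):

```python
import numpy as np

def xor_signed_bits(x, binary_w):
    """Multiply float32 values by binary weights in {-1, +1} via an XOR
    on the IEEE-754 sign bit, with no floating-point multiplier."""
    sign_mask = np.where(binary_w < 0, np.uint32(0x80000000), np.uint32(0))
    bits = x.astype(np.float32).view(np.uint32)
    return (bits ^ sign_mask).view(np.float32)

x = np.array([1.5, -2.0, 3.25], dtype=np.float32)
w = np.array([1, -1, -1], dtype=np.int8)
print(xor_signed_bits(x, w))  # [ 1.5   2.   -3.25]
print(x * w)                  # identical result via multiplication
```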
31

Hoo, Choon Lih, Sallehuddin Mohamed Haris, and Nik Abdullah Nik Mohamed. "A floating point conversion algorithm for mixed precision computations." Journal of Zhejiang University SCIENCE C 13, no. 9 (September 2012): 711–18. http://dx.doi.org/10.1631/jzus.c1200043.

32

Chu, Tianshu, Qin Luo, Jie Yang, and Xiaolin Huang. "Mixed-precision quantized neural networks with progressively decreasing bitwidth." Pattern Recognition 111 (March 2021): 107647. http://dx.doi.org/10.1016/j.patcog.2020.107647.

33

Carson, Erin, and Noaman Khan. "Mixed Precision Iterative Refinement with Sparse Approximate Inverse Preconditioning." SIAM Journal on Scientific Computing 45, no. 3 (June 9, 2023): C131–C153. http://dx.doi.org/10.1137/22m1487709.

34

Das, Debasish, Pritikanta Patra, and Rajani Ballav Dash. "An adaptive integration scheme using a mixed quadrature of three different quadrature rules." Malaya Journal of Matematik 3, no. 03 (July 1, 2015): 224–32. http://dx.doi.org/10.26637/mjm303/001.

Abstract:
In the present work, a mixed quadrature rule of precision seven is constructed by blending the Gauss-Legendre 2-point rule with Fejér's first and second 3-point rules, each having precision three. The error analysis of the mixed rule is incorporated. An algorithm is designed for an adaptive integration scheme using the mixed quadrature rule. Through some numerical examples, the effectiveness of adopting the mixed quadrature rule in place of its constituent rules in the adaptive integration scheme is discussed.
35

Wang, Shengquan, Chao Wang, Yong Cai, and Guangyao Li. "A novel parallel finite element procedure for nonlinear dynamic problems using GPU and mixed-precision algorithm." Engineering Computations 37, no. 6 (February 22, 2020): 2193–211. http://dx.doi.org/10.1108/ec-07-2019-0328.

Abstract:
Purpose: The purpose of this paper is to improve the computational speed of solving nonlinear dynamics by using parallel methods and a mixed-precision algorithm on graphics processing units (GPUs). The computational efficiency of traditional central processing unit (CPU)-based computer-aided engineering software has struggled to satisfy the needs of scientific research and practical engineering, especially for nonlinear dynamic problems. Moreover, when calculations are performed on GPUs, double-precision operations are slower than single-precision operations. This paper therefore implements mixed precision for nonlinear dynamic problem simulation using the Belytschko-Tsay (BT) shell element on a GPU.
Design/methodology/approach: To minimize data transfer between heterogeneous architectures, the parallel computation of the fully explicit finite element (FE) calculation is realized using a vectorized thread-level parallelism algorithm. An asynchronous data transmission strategy and a novel dependency relationship link-based method for efficiently solving parallel explicit shell element equations are used to improve the GPU utilization ratio. Finally, the mixed-precision implementation using the BT shell element on a GPU is compared to a CPU-based serially executed program and a GPU-based double-precision parallel computing program.
Findings: For a car body model containing approximately 5.3 million degrees of freedom, the computational speed is improved 25 times over CPU sequential computation, and approximately 10% over the double-precision parallel computing method. The accuracy error of the mixed-precision computation is small and satisfies the requirements of practical engineering problems.
Originality/value: This paper realizes a novel FE parallel computing procedure for nonlinear dynamic problems using a mixed-precision algorithm on a CPU-GPU platform. Compared with the CPU serial program, the program implemented in this article obtains a 25 times acceleration ratio when calculating a model of 883,168 elements, which greatly improves the calculation speed for solving nonlinear dynamic problems.
36

Bodner, Benjamin Jacob, Gil Ben-Shalom, and Eran Treister. "GradFreeBits: Gradient-Free Bit Allocation for Mixed-Precision Neural Networks." Sensors 22, no. 24 (December 13, 2022): 9772. http://dx.doi.org/10.3390/s22249772.

Abstract:
Quantized neural networks (QNNs) are among the main approaches for deploying deep neural networks on low-resource edge devices. Training QNNs using different levels of precision throughout the network (mixed-precision quantization) typically achieves superior trade-offs between performance and computational load. However, optimizing the different precision levels of QNNs can be complicated, as the values of the bit allocations are discrete and difficult to differentiate through. Moreover, adequately accounting for the dependencies between the bit allocations of different layers is not straightforward. To meet these challenges, in this work we propose GradFreeBits: a novel joint optimization scheme for training mixed-precision QNNs, which alternates between gradient-based optimization for the weights and gradient-free optimization for the bit allocation. Our method achieves better or on-par performance compared with the current state-of-the-art low-precision classification networks on CIFAR10/100 and ImageNet, semantic segmentation networks on Cityscapes, and several graph neural network benchmarks. Furthermore, our approach can be extended to a variety of other applications involving neural networks used in conjunction with parameters that are difficult to optimize for.
37

Tintó Prims, Oriol, Mario C. Acosta, Andrew M. Moore, Miguel Castrillo, Kim Serradell, Ana Cortés, and Francisco J. Doblas-Reyes. "How to use mixed precision in ocean models: exploring a potential reduction of numerical precision in NEMO 4.0 and ROMS 3.6." Geoscientific Model Development 12, no. 7 (July 24, 2019): 3135–48. http://dx.doi.org/10.5194/gmd-12-3135-2019.

Abstract:
Mixed-precision approaches can provide substantial speed-ups for both computing- and memory-bound codes with little effort. Most scientific codes have overengineered the numerical precision, leading to a situation in which models are using more resources than required without knowing where they are required and where they are not. Consequently, it is possible to improve computational performance by establishing a more appropriate choice of precision. The only input that is needed is a method to determine which real variables can be represented with fewer bits without affecting the accuracy of the results. This paper presents a novel method that enables modern and legacy codes to benefit from a reduction of the precision of certain variables without sacrificing accuracy. It consists of a simple idea: we reduce the precision of a group of variables and measure how it affects the outputs. Then we can evaluate the level of precision that they truly need. Modifying and recompiling the code for each case that has to be evaluated would require a prohibitive amount of effort. Instead, the method presented in this paper relies on the use of a tool called a reduced-precision emulator (RPE) that can significantly streamline the process. Using the RPE and a list of parameters containing the precisions that will be used for each real variable in the code, it is possible within a single binary to emulate the effect on the outputs of a specific choice of precision. When we are able to emulate the effects of reduced precision, we can proceed with the design of the tests that will give us knowledge of the sensitivity of the model variables regarding their numerical precision. The number of possible combinations is prohibitively large and therefore impossible to explore. The alternative of performing a screening of the variables individually can provide certain insight about the required precision of variables, but, on the other hand, other complex interactions that involve several variables may remain hidden. Instead, we use a divide-and-conquer algorithm that identifies the parts that require high precision and establishes a set of variables that can handle reduced precision. This method has been tested using two state-of-the-art ocean models, the Nucleus for European Modelling of the Ocean (NEMO) and the Regional Ocean Modeling System (ROMS), with very promising results. Obtaining this information is crucial to build an actual mixed-precision version of the code in the next phase that will bring the promised performance benefits.
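The search strategy is worth a sketch: reduced precision is emulated (here by rounding significands to a given number of bits, a crude stand-in for the RPE), and a divide-and-conquer pass over groups of variables accepts a group at reduced precision whenever the model output stays acceptable. The toy model, variable names, and tolerance below are hypothetical.

```python
import numpy as np

def round_to_bits(x, sig_bits):
    """Emulate a reduced-precision significand of sig_bits bits."""
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 2.0 ** sig_bits) / 2.0 ** sig_bits, e)

def search(variables, run_model, acceptable):
    """Divide and conquer: try the whole group at reduced precision,
    accept it if the output passes, otherwise split and recurse."""
    if acceptable(run_model(set(variables))):
        return set(variables)            # the whole group tolerates low precision
    if len(variables) == 1:
        return set()                     # this variable needs high precision
    mid = len(variables) // 2
    return (search(variables[:mid], run_model, acceptable) |
            search(variables[mid:], run_model, acceptable))

def run_model(reduced):                  # hypothetical model: y = a*b + c
    a, b, c = 1.234567890123, 9.876543210987, 1e-7
    if "a" in reduced: a = round_to_bits(a, 10)
    if "b" in reduced: b = round_to_bits(b, 10)
    if "c" in reduced: c = round_to_bits(c, 10)
    return a * b + c

reference = run_model(set())
print(search(["a", "b", "c"], run_model, lambda y: abs(y - reference) < 1e-9))
```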
38

Gonopolsky, A. M., and E. A. Milaeva. "The Technology of Precision-selective Electrostatic Separation of Mixed Secondary Polymers." Ecology and Industry of Russia 25, no. 6 (June 22, 2021): 10–14. http://dx.doi.org/10.18412/1816-0395-2021-6-10-14.

Abstract:
The technology of precision-selective electrostatic separation of mixed secondary polymers, with pre-activation of their surfaces by surfactants for chemical destruction of the surface polymer, is described. A prototype of the process of precision-selective separation of mixed polymer materials in the electrostatic field has been developed. Optimum operating modes for activating polymer surfaces with surfactant solutions, as a preliminary stage of separating mixed polymers in the electrostatic field, are determined. An experimental prototype of a process line has been created for the selective electrostatic separation of chemically activated, narrow-fraction mixed crushed polymer waste.
39

Abdelfattah, Ahmad, Hartwig Anzt, Erik G. Boman, Erin Carson, Terry Cojean, Jack Dongarra, Alyson Fox, et al. "A survey of numerical linear algebra methods utilizing mixed-precision arithmetic." International Journal of High Performance Computing Applications 35, no. 4 (March 19, 2021): 344–69. http://dx.doi.org/10.1177/10943420211003313.

Abstract:
The efficient utilization of mixed-precision numerical linear algebra algorithms can offer attractive acceleration to scientific computing applications. Especially with the hardware integration of low-precision special-function units designed for machine learning applications, the traditional numerical algorithms community urgently needs to reconsider the floating point formats used in the distinct operations to efficiently leverage the available compute power. In this work, we provide a comprehensive survey of mixed-precision numerical linear algebra routines, including the underlying concepts, theoretical background, and experimental results for both dense and sparse linear algebra problems.
40

Farah, Ashraf. "Assessment study of static-PPP convergence behaviour using GPS, GLONASS and mixed GPS/GLONASS observations." Artificial Satellites 49, no. 1 (March 1, 2014): 55–61. http://dx.doi.org/10.2478/arsa-2014-0005.

Abstract:
Precise Point Positioning (PPP) has been used for the last decade as a cost-effective alternative to ordinary differential GPS (DGPS), with an estimated precision sufficient for many applications. PPP requires handling different types of errors using proper models. PPP precision varies with the use of observations from different satellite systems (GPS, GLONASS and mixed GPS/GLONASS) and with the duration of observations. This research presents an evaluation study of the variability of Static-PPP precision based on different observation types (GPS, GLONASS and mixed observations) and observation duration. It can be concluded that a Static-PPP solution using mixed observations offers accuracy similar to one using GPS-only observations while saving 15 minutes of observation time. For 30 minutes of observation duration, mixed observations offer improvements of 14%, 26% and 25% for latitude, longitude and height, respectively.
41

Wang, Junbin, Shaoxia Fang, Xi Wang, Jiangsha Ma, Taobo Wang, and Yi Shan. "High-Performance Mixed-Low-Precision CNN Inference Accelerator on FPGA." IEEE Micro 41, no. 4 (July 1, 2021): 31–38. http://dx.doi.org/10.1109/mm.2021.3081735.

42

Massei, Stefano, and Leonardo Robol. "Mixed Precision Recursive Block Diagonalization for Bivariate Functions of Matrices." SIAM Journal on Matrix Analysis and Applications 43, no. 2 (April 19, 2022): 638–60. http://dx.doi.org/10.1137/21m1407872.

43

Yang, L. Minah, Alyson Fox, and Geoffrey Sanders. "Rounding Error Analysis of Mixed Precision Block Householder QR Algorithms." SIAM Journal on Scientific Computing 43, no. 3 (January 2021): A1723–A1753. http://dx.doi.org/10.1137/19m1296367.

44

Borcoman, Edith, and Christophe Le Tourneau. "Precision medicine strategies in oncology: mixed approaches to matched therapies." Future Oncology 14, no. 2 (January 2018): 105–9. http://dx.doi.org/10.2217/fon-2017-0524.

45

Li, Xiaoye S., James W. Demmel, David H. Bailey, Greg Henry, Yozo Hida, Jimmy Iskandar, William Kahan, et al. "Design, implementation and testing of extended and mixed precision BLAS." ACM Transactions on Mathematical Software 28, no. 2 (June 2002): 152–205. http://dx.doi.org/10.1145/567806.567808.

46

Emans, Maximilian, and Albert van der Meer. "Mixed-precision AMG as linear equation solver for definite systems." Procedia Computer Science 1, no. 1 (May 2010): 175–83. http://dx.doi.org/10.1016/j.procs.2010.04.020.

47

Ahmad, Khalid, Hari Sundar, and Mary Hall. "Data-driven Mixed Precision Sparse Matrix Vector Multiplication for GPUs." ACM Transactions on Architecture and Code Optimization 16, no. 4 (January 10, 2020): 1–24. http://dx.doi.org/10.1145/3371275.

48

Wagner, Lynne I., Mollie Canzona, Janet A. Tooze, Kathryn E. Weaver, Kelsey Shore, Lauren Peters, William J. Petty, Mercedes Porosnicu, Wei Zhang, and Boris Pasche. "Mixed methods examination of diverse patients' experiences with precision oncology." Journal of Clinical Oncology 36, no. 15_suppl (May 20, 2018): e22148-e22148. http://dx.doi.org/10.1200/jco.2018.36.15_suppl.e22148.

49

Asadchev, Andrey, and Mark S. Gordon. "Mixed-precision evaluation of two-electron integrals by Rys quadrature." Computer Physics Communications 183, no. 8 (August 2012): 1563–67. http://dx.doi.org/10.1016/j.cpc.2012.02.020.

50

Chen, Jun, Shipeng Bai, Tianxin Huang, Mengmeng Wang, Guanzhong Tian, and Yong Liu. "Data-free quantization via mixed-precision compensation without fine-tuning." Pattern Recognition 143 (November 2023): 109780. http://dx.doi.org/10.1016/j.patcog.2023.109780.
