A ready-made bibliography on "Sparse Matrix Storage Formats"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the lists of current articles, books, dissertations, abstracts, and other scholarly sources on "Sparse Matrix Storage Formats".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, when the corresponding details are available in the metadata.

Journal articles on "Sparse Matrix Storage Formats"

1

Langr, Daniel, and Pavel Tvrdik. "Evaluation Criteria for Sparse Matrix Storage Formats." IEEE Transactions on Parallel and Distributed Systems 27, no. 2 (2016): 428–40. http://dx.doi.org/10.1109/tpds.2015.2401575.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Mukaddes, Abul Mukid Mohammad, Masao Ogino, and Ryuji Shioya. "Performance Evaluation of Domain Decomposition Method with Sparse Matrix Storage Schemes in Modern Supercomputer." International Journal of Computational Methods 11, supp01 (2014): 1344007. http://dx.doi.org/10.1142/s0219876213440076.

Full text of the source
Abstract:
The use of proper data structures with corresponding algorithms is critical to achieving good performance in scientific computing. The need for sparse matrix–vector multiplication in each iteration of the iterative domain decomposition method has led to the implementation of a variety of sparse matrix storage formats. Many storage formats have been proposed to represent sparse matrices and have been integrated into the method. In this paper, the storage efficiency of those sparse matrix storage formats is evaluated and compared. The performance results of the sparse matrix–vector multiplication used in the domain decomposition method are considered. Based on our experiments on the FX10 supercomputer system, we extract some useful conclusions that can serve as guidelines for optimizing the domain decomposition method.
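As a concrete illustration of the kind of storage format evaluated in papers like this one, the widely used compressed sparse row (CSR) layout keeps only the non-zero entries of a matrix in three flat arrays. A minimal sketch (the small matrix is a made-up example, not one taken from the paper):

```python
# Compressed sparse row (CSR): store only the non-zero entries of a matrix
# in three flat arrays. Illustrative example; the matrix is arbitrary.

# Dense matrix:
# [[5, 0, 0],
#  [0, 8, 3],
#  [0, 0, 6]]
values  = [5, 8, 3, 6]   # non-zero entries, row by row
col_idx = [0, 1, 2, 2]   # column index of each entry in `values`
row_ptr = [0, 1, 3, 4]   # row i occupies values[row_ptr[i]:row_ptr[i+1]]

def csr_to_dense(values, col_idx, row_ptr, n_cols):
    """Expand a CSR triple back to a dense list-of-lists (for checking)."""
    dense = []
    for i in range(len(row_ptr) - 1):
        row = [0] * n_cols
        for k in range(row_ptr[i], row_ptr[i + 1]):
            row[col_idx[k]] = values[k]
        dense.append(row)
    return dense

print(csr_to_dense(values, col_idx, row_ptr, 3))
# [[5, 0, 0], [0, 8, 3], [0, 0, 6]]
```

The storage-efficiency question the paper studies comes down to how the sizes of these index arrays compare across formats for a given sparsity pattern.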
3

Chen, Shizhao, Jianbin Fang, Chuanfu Xu, and Zheng Wang. "Adaptive Hybrid Storage Format for Sparse Matrix–Vector Multiplication on Multi-Core SIMD CPUs." Applied Sciences 12, no. 19 (2022): 9812. http://dx.doi.org/10.3390/app12199812.

Full text of the source
Abstract:
Optimizing sparse matrix–vector multiplication (SpMV) is challenging due to the non-uniform distribution of the non-zero elements of the sparse matrix. The best-performing SpMV format changes depending on the input matrix and the underlying architecture, and there is no "one-size-fits-all" format. A hybrid scheme combining multiple SpMV storage formats allows one to choose an appropriate format to use for the target matrix and hardware. However, existing hybrid approaches are inadequate for utilizing the SIMD units of modern multi-core CPUs, and it remains unclear how to best mix different SpMV formats for a given matrix. This paper presents a new hybrid storage format for sparse matrices, specifically targeting multi-core CPUs with SIMD units. Our approach partitions the target sparse matrix into two segments based on the regularity of the memory access pattern, where each segment is stored in a format suited to its access pattern. Unlike prior hybrid storage schemes that rely on the user to determine the data partition among storage formats, we employ machine learning to build a predictive model that automatically determines the partition threshold on a per-matrix basis. Our predictive model is first trained offline, and the trained model can then be applied to any new, unseen sparse matrix. We apply our approach to 956 matrices and evaluate its performance on three distinct multi-core CPU platforms: a 72-core Intel Knights Landing (KNL) CPU, a 128-core AMD EPYC CPU, and a 64-core Phytium ARMv8 CPU. Experimental results show that our hybrid scheme, combined with the predictive model, outperforms the best-performing alternative by 2.9%, 17.5%, and 16% on average on KNL, AMD, and Phytium, respectively.
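The SpMV kernel that such hybrid schemes optimize is short when written over a single format; for orientation, a plain CSR version in pure Python (illustrative only, with none of the SIMD tuning or partitioning the paper is about):

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix A stored in CSR form (values, col_idx, row_ptr)."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(row_ptr) - 1):
        acc = 0.0
        # Walk the non-zeros of row i; col_idx gives the column of each one.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Example: the 3x3 matrix [[5,0,0],[0,8,3],[0,0,6]] times the vector [1,1,1]
print(spmv_csr([5, 8, 3, 6], [0, 1, 2, 2], [0, 1, 3, 4], [1, 1, 1]))
# [5.0, 11.0, 6.0]
```

The irregular, data-dependent access to `x` via `col_idx` is exactly what makes vectorizing this loop hard and motivates format selection per matrix.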
4

Sanderson, Conrad, and Ryan Curtin. "Practical Sparse Matrices in C++ with Hybrid Storage and Template-Based Expression Optimisation." Mathematical and Computational Applications 24, no. 3 (2019): 70. http://dx.doi.org/10.3390/mca24030070.

Full text of the source
Abstract:
Despite the importance of sparse matrices in numerous fields of science, software implementations remain difficult to use for non-expert users, generally requiring the understanding of the underlying details of the chosen sparse matrix storage format. In addition, to achieve good performance, several formats may need to be used in one program, requiring explicit selection and conversion between the formats. This can be both tedious and error-prone, especially for non-expert users. Motivated by these issues, we present a user-friendly and open-source sparse matrix class for the C++ language, with a high-level application programming interface deliberately similar to the widely-used MATLAB language. This facilitates prototyping directly in C++ and aids the conversion of research code into production environments. The class internally uses two main approaches to achieve efficient execution: (i) a hybrid storage framework, which automatically and seamlessly switches between three underlying storage formats (compressed sparse column, red-black tree, coordinate list) depending on which format is best suited and/or available for specific operations, and (ii) a template-based meta-programming framework to automatically detect and optimise the execution of common expression patterns. Empirical evaluations on large sparse matrices with various densities of non-zero elements demonstrate the advantages of the hybrid storage framework and the expression optimisation mechanism.
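The hybrid-storage idea this abstract describes, with the class switching internal formats so the user never has to, can be caricatured in a few lines. This toy sketch keeps a coordinate dictionary for cheap random insertion and converts lazily to CSR-style arrays before a bulk operation; the names and design are illustrative, not the actual internals of the library the paper presents:

```python
class HybridSparse:
    """Toy hybrid-storage sparse matrix: a dict of coordinates (COO-like)
    for cheap random insertion, converted lazily to CSR arrays for bulk
    operations. Illustrative only; not any particular library's design."""

    def __init__(self, n_rows, n_cols):
        self.shape = (n_rows, n_cols)
        self.coo = {}       # (row, col) -> value
        self.csr = None     # cached (values, col_idx, row_ptr)

    def __setitem__(self, rc, v):
        self.coo[rc] = v
        self.csr = None     # any write invalidates the CSR cache

    def _to_csr(self):
        """Build CSR arrays from the coordinate dict on first use."""
        if self.csr is None:
            values, col_idx, row_ptr = [], [], [0]
            for i in range(self.shape[0]):
                for c in sorted(c for (r, c) in self.coo if r == i):
                    values.append(self.coo[(i, c)])
                    col_idx.append(c)
                row_ptr.append(len(values))
            self.csr = (values, col_idx, row_ptr)
        return self.csr

    def matvec(self, x):
        """Matrix-vector product, done on the CSR representation."""
        values, col_idx, row_ptr = self._to_csr()
        y = [0.0] * self.shape[0]
        for i in range(self.shape[0]):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += values[k] * x[col_idx[k]]
        return y

A = HybridSparse(2, 2)
A[0, 0] = 2.0
A[1, 1] = 3.0
print(A.matvec([1.0, 1.0]))   # [2.0, 3.0]
```

The point of the design is the one the abstract makes: insertion-friendly and arithmetic-friendly formats have conflicting layouts, so the class converts between them behind a single interface instead of asking the user to.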
5

Fraguela, Basilio B., Ramón Doallo, and Emilio L. Zapata. "Memory Hierarchy Performance Prediction for Blocked Sparse Algorithms." Parallel Processing Letters 9, no. 3 (1999): 347–60. http://dx.doi.org/10.1142/s0129626499000323.

Full text of the source
Abstract:
Nowadays the performance gap between processors and main memory makes efficient usage of the memory hierarchy necessary for good program performance. Several techniques have been proposed for this purpose. Nevertheless, most of them consider only regular access patterns, while many scientific and numerical applications give rise to irregular patterns. A typical case is that of indirect accesses due to the use of compressed storage formats for sparse matrices. This paper describes an analytic approach to modeling both regular and irregular access patterns. The application modeled is an optimized sparse matrix-dense matrix product algorithm with several levels of blocking. Our model can be directly applied to any memory hierarchy consisting of K-way associative caches. Results are shown for several current microprocessor architectures.
6

Smith, Barry F., and William D. Gropp. "The Design of Data-Structure-Neutral Libraries for the Iterative Solution of Sparse Linear Systems." Scientific Programming 5, no. 4 (1996): 329–36. http://dx.doi.org/10.1155/1996/417629.

Full text of the source
Abstract:
Over the past few years several proposals have been made for the standardization of sparse matrix storage formats in order to allow for the development of portable matrix libraries for the iterative solution of linear systems. We believe that this is the wrong approach. Rather than define one standard (or a small number of standards) for matrix storage, the community should define an interface (i.e., the calling sequences) for the functions that act on the data. In addition, we cannot ignore the interface to the vector operations because, in many applications, vectors may not be stored as consecutive elements in memory. With the acceptance of shared memory, distributed memory, and cluster memory parallel machines, the flexibility of the distribution of the elements of vectors is also extremely important. This issue is ignored in most proposed standards. In this article we demonstrate how such libraries may be written using data encapsulation techniques.
7

Guo, Dahai, and William Gropp. "Applications of the streamed storage format for sparse matrix operations." International Journal of High Performance Computing Applications 28, no. 1 (2013): 3–12. http://dx.doi.org/10.1177/1094342012470469.

Full text of the source
8

Akhunov, R. R., S. P. Kuksenko, V. K. Salov, and T. R. Gazizov. "Sparse matrix storage formats and acceleration of iterative solution of linear algebraic systems with dense matrices." Journal of Mathematical Sciences 191, no. 1 (2013): 10–18. http://dx.doi.org/10.1007/s10958-013-1296-7.

Full text of the source
9

Merrill, Duane, and Michael Garland. "Merge-based sparse matrix-vector multiplication (SpMV) using the CSR storage format." ACM SIGPLAN Notices 51, no. 8 (2016): 1–2. http://dx.doi.org/10.1145/3016078.2851190.

Full text of the source
10

Zhang, Jilin, Jian Wan, Fangfang Li, et al. "Efficient sparse matrix–vector multiplication using cache oblivious extension quadtree storage format." Future Generation Computer Systems 54 (January 2016): 490–500. http://dx.doi.org/10.1016/j.future.2015.03.005.

Full text of the source
More sources
