A selection of scholarly literature on the topic "Sparse Matrix Storage Formats"
Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Sparse Matrix Storage Formats".
Journal articles on the topic "Sparse Matrix Storage Formats"
Langr, Daniel, and Pavel Tvrdik. "Evaluation Criteria for Sparse Matrix Storage Formats." IEEE Transactions on Parallel and Distributed Systems 27, no. 2 (February 1, 2016): 428–40. http://dx.doi.org/10.1109/tpds.2015.2401575.
Mukaddes, Abul Mukid Mohammad, Masao Ogino, and Ryuji Shioya. "Performance Evaluation of Domain Decomposition Method with Sparse Matrix Storage Schemes in Modern Supercomputer." International Journal of Computational Methods 11, supp01 (November 2014): 1344007. http://dx.doi.org/10.1142/s0219876213440076.
Chen, Shizhao, Jianbin Fang, Chuanfu Xu, and Zheng Wang. "Adaptive Hybrid Storage Format for Sparse Matrix–Vector Multiplication on Multi-Core SIMD CPUs." Applied Sciences 12, no. 19 (September 29, 2022): 9812. http://dx.doi.org/10.3390/app12199812.
Sanderson, Conrad, and Ryan Curtin. "Practical Sparse Matrices in C++ with Hybrid Storage and Template-Based Expression Optimisation." Mathematical and Computational Applications 24, no. 3 (July 19, 2019): 70. http://dx.doi.org/10.3390/mca24030070.
Fraguela, Basilio B., Ramón Doallo, and Emilio L. Zapata. "Memory Hierarchy Performance Prediction for Blocked Sparse Algorithms." Parallel Processing Letters 9, no. 3 (September 1999): 347–60. http://dx.doi.org/10.1142/s0129626499000323.
Smith, Barry F., and William D. Gropp. "The Design of Data-Structure-Neutral Libraries for the Iterative Solution of Sparse Linear Systems." Scientific Programming 5, no. 4 (1996): 329–36. http://dx.doi.org/10.1155/1996/417629.
Guo, Dahai, and William Gropp. "Applications of the streamed storage format for sparse matrix operations." International Journal of High Performance Computing Applications 28, no. 1 (January 3, 2013): 3–12. http://dx.doi.org/10.1177/1094342012470469.
Akhunov, R. R., S. P. Kuksenko, V. K. Salov, and T. R. Gazizov. "Sparse matrix storage formats and acceleration of iterative solution of linear algebraic systems with dense matrices." Journal of Mathematical Sciences 191, no. 1 (April 21, 2013): 10–18. http://dx.doi.org/10.1007/s10958-013-1296-7.
Merrill, Duane, and Michael Garland. "Merge-based sparse matrix-vector multiplication (SpMV) using the CSR storage format." ACM SIGPLAN Notices 51, no. 8 (November 9, 2016): 1–2. http://dx.doi.org/10.1145/3016078.2851190.
Zhang, Jilin, Jian Wan, Fangfang Li, Jie Mao, Li Zhuang, Junfeng Yuan, Enyi Liu, and Zhuoer Yu. "Efficient sparse matrix–vector multiplication using cache oblivious extension quadtree storage format." Future Generation Computer Systems 54 (January 2016): 490–500. http://dx.doi.org/10.1016/j.future.2015.03.005.
Повний текст джерелаДисертації з теми "Sparse Matrix Storage Formats"
Haque, Sardar Anisul, and University of Lethbridge Faculty of Arts and Science. "A computational study of sparse matrix storage schemes." Thesis, Lethbridge, Alta. : University of Lethbridge, Department of Mathematics and Computer Science, 2008. http://hdl.handle.net/10133/777.
Повний текст джерелаxi, 76 leaves : ill. ; 29 cm.
Pawlowski, Filip Igor. "High-performance dense tensor and sparse matrix kernels for machine learning." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN081.
In this thesis, we develop high-performance algorithms for certain computations involving dense tensors and sparse matrices. We address kernel operations that are useful for machine learning tasks, such as inference with deep neural networks (DNNs). We develop data structures and techniques to reduce memory use, to improve data locality, and hence to improve cache reuse of the kernel operations. We design both sequential and shared-memory parallel algorithms. In the first part of the thesis we focus on dense tensor kernels. Tensor kernels include the tensor--vector multiplication (TVM), tensor--matrix multiplication (TMM), and tensor--tensor multiplication (TTM). Among these, TVM is the most bandwidth-bound and constitutes a building block for many algorithms. We focus on this operation and develop a data structure and sequential and parallel algorithms for it. We propose a novel data structure which stores the tensor as blocks, which are ordered using the space-filling curve known as the Morton curve (or Z-curve). The key idea consists of dividing the tensor into blocks small enough to fit in cache, and storing them according to the Morton order, while keeping a simple, multi-dimensional order on the individual elements within them. Thus, high-performance BLAS routines can be used as microkernels for each block. We evaluate our techniques in a set of experiments. The results not only demonstrate superior performance of the proposed approach over the state-of-the-art variants by up to 18%, but also show that the proposed approach induces 71% less sample standard deviation for the TVM across the d possible modes. We also show that our data structure naturally extends to other tensor kernels by demonstrating that it yields up to 38% higher performance for the higher-order power method. Finally, we investigate shared-memory parallel TVM algorithms which use the proposed data structure.
Several alternative parallel algorithms were characterized theoretically and implemented using OpenMP to compare them experimentally. Our results on systems with up to 8 sockets show near-peak performance of the proposed algorithm for 2-, 3-, 4-, and 5-dimensional tensors. In the second part of the thesis, we explore sparse computations in neural networks, focusing on the high-performance sparse deep inference problem. Sparse DNN inference is the task of using a sparse DNN to classify a batch of data elements forming, in our case, a sparse feature matrix. The performance of sparse inference hinges on efficient parallelization of the sparse matrix--sparse matrix multiplication (SpGEMM) repeated for each layer in the inference function. We first characterize efficient sequential SpGEMM algorithms for our use case. We then introduce model-parallel inference, which uses a two-dimensional partitioning of the weight matrices obtained with hypergraph partitioning software. The model-parallel variant uses barriers to synchronize at layers. Finally, we introduce tiling model-parallel and tiling hybrid algorithms, which increase cache reuse between the layers and use a weak synchronization module to hide load imbalance and synchronization costs. We evaluate our techniques on the large network data from the IEEE HPEC 2019 Graph Challenge on shared-memory systems and report up to 2x speed-up versus the baseline.
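The core of the blocking scheme described in this abstract — cache-sized blocks laid out along a Morton (Z-) curve — can be sketched briefly; the helper name and the block-grid setup below are illustrative assumptions, not code from the thesis:

```python
def morton_index(coords, bits=10):
    """Interleave the bits of a block's coordinates to get its Morton (Z-curve) index.

    The thesis stores a dense tensor as cache-sized blocks laid out in Morton
    order; a helper like this maps a block's multi-dimensional grid coordinate
    to its position in storage.
    """
    index = 0
    d = len(coords)
    for bit in range(bits):
        for dim, c in enumerate(coords):
            # Place bit `bit` of coordinate `c` at interleaved position bit*d + dim.
            index |= ((c >> bit) & 1) << (bit * d + dim)
    return index

# Lay out a 4x4 grid of blocks (a 2-D example) in Z-curve order.
blocks = [(i, j) for i in range(4) for j in range(4)]
blocks.sort(key=morton_index)
# The first four blocks form the top-left 2x2 Z: (0,0), (1,0), (0,1), (1,1).
```

Within each block, elements keep an ordinary multi-dimensional layout, so a tuned BLAS routine can be applied per block, as the abstract notes.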
Dičpinigaitis, Petras. "Delninukų energijos suvartojimo apdorojant išretintas matricas saugomas stulpeliais modeliavimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080128_103036-96655.
Nowadays a major problem in portable battery-powered devices is energy consumption. In this work we evaluated energy consumption on a Pocket PC, aiming to see the influence of memory and processor use on battery drain. We created a program that performs both ordinary matrix multiplication and multiplication of sparse matrices stored by columns; during multiplication the program reads the battery status and saves it to a file. Examining the results, we found that multiplication of sparse matrices stored by columns is much more efficient than ordinary matrix multiplication: it uses less memory but more processor instructions. We suggest using the storage-by-columns sparse matrix model instead of the simple dense model, because it saves considerable operation time, battery resources, and memory.
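The "storage by columns" scheme this abstract refers to is commonly known as compressed sparse column (CSC) storage. A minimal sketch of a matrix–vector product over that layout (function and variable names here are illustrative, not taken from the thesis):

```python
def csc_matvec(n_rows, col_ptr, row_idx, values, x):
    """Compute y = A @ x for A stored column by column (CSC).

    Only nonzeros are stored: column j occupies the slice
    values[col_ptr[j]:col_ptr[j+1]], with matching row indices in row_idx,
    so each stored entry contributes values[k] * x[j] to y[row_idx[k]].
    """
    y = [0.0] * n_rows
    for j in range(len(col_ptr) - 1):
        xj = x[j]
        for k in range(col_ptr[j], col_ptr[j + 1]):
            y[row_idx[k]] += values[k] * xj
    return y

# A = [[1, 0], [2, 3]] in CSC: column 0 holds 1 (row 0) and 2 (row 1);
# column 1 holds 3 (row 1).
y = csc_matvec(2, [0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0], [1.0, 1.0])
# → [1.0, 5.0]
```

Only the nonzero entries are visited, which is the source of the memory and time savings the abstract reports over dense multiplication.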
Ramesh, Chinthala. "Hardware-Software Co-Design Accelerators for Sparse BLAS." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4276.
Book chapters on the topic "Sparse Matrix Storage Formats"
D’Azevedo, Eduardo F., Mark R. Fahey, and Richard T. Mills. "Vectorized Sparse Matrix Multiply for Compressed Row Storage Format." In Lecture Notes in Computer Science, 99–106. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11428831_13.
Luján, Mikel, Anila Usman, Patrick Hardie, T. L. Freeman, and John R. Gurd. "Storage Formats for Sparse Matrices in Java." In Lecture Notes in Computer Science, 364–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11428831_45.
Scott, Jennifer, and Miroslav Tůma. "Sparse Matrix Ordering Algorithms." In Nečas Center Series, 135–61. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25820-6_8.
Usman, Anila, Mikel Luján, Len Freeman, and John R. Gurd. "Performance Evaluation of Storage Formats for Sparse Matrices in Fortran." In High Performance Computing and Communications, 160–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11847366_17.
Ecker, Jan Philipp, Rudolf Berrendorf, and Florian Mannuss. "New Efficient General Sparse Matrix Formats for Parallel SpMV Operations." In Lecture Notes in Computer Science, 523–37. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64203-1_38.
Marichal, Raúl, Ernesto Dufrechou, and Pablo Ezzatti. "Optimizing Sparse Matrix Storage for the Big Data Era." In Communications in Computer and Information Science, 121–35. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-84825-5_9.
Katagiri, Takahiro, Takao Sakurai, Mitsuyoshi Igai, Satoshi Ohshima, Hisayasu Kuroda, Ken Naono, and Kengo Nakajima. "Control Formats for Unsymmetric and Symmetric Sparse Matrix–Vector Multiplications on OpenMP Implementations." In Lecture Notes in Computer Science, 236–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38718-0_24.
Kubota, Yuji, and Daisuke Takahashi. "Optimization of Sparse Matrix-Vector Multiplication by Auto Selecting Storage Schemes on GPU." In Computational Science and Its Applications - ICCSA 2011, 547–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21887-3_42.
Neuss, Nicolas. "A New Sparse Matrix Storage Method for Adaptive Solving of Large Systems of Reaction-Diffusion-Transport Equations." In Scientific Computing in Chemical Engineering II, 175–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-60185-9_19.
Jamalmohammed, Saira Banu, Lavanya K., Sumaiya Thaseen I., and Biju V. "Review on Sparse Matrix Storage Formats With Space Complexity Analysis." In Applications of Artificial Intelligence for Smart Technology, 122–45. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3335-2.ch009.
Повний текст джерелаТези доповідей конференцій з теми "Sparse Matrix Storage Formats"
Šimeček, Ivan, D. Langr, and P. Tvrdik. "Space-efficient Sparse Matrix Storage Formats for Massively Parallel Systems." In 2012 IEEE 14th Int'l Conf. on High Performance Computing and Communication (HPCC) & 2012 IEEE 9th Int'l Conf. on Embedded Software and Systems (ICESS). IEEE, 2012. http://dx.doi.org/10.1109/hpcc.2012.18.
Yuan, Liang, Yunquan Zhang, Xiangzheng Sun, and Ting Wang. "Optimizing Sparse Matrix Vector Multiplication Using Diagonal Storage Matrix Format." In 2010 IEEE 12th International Conference on High Performance Computing and Communications (HPCC 2010). IEEE, 2010. http://dx.doi.org/10.1109/hpcc.2010.67.
Šimeček, Ivan. "Sparse Matrix Computations Using the Quadtree Storage Format." In 2009 11th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC). IEEE, 2009. http://dx.doi.org/10.1109/synasc.2009.55.
Shi, Shaohuai, Qiang Wang, and Xiaowen Chu. "Efficient Sparse-Dense Matrix-Matrix Multiplication on GPUs Using the Customized Sparse Storage Format." In 2020 IEEE 26th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2020. http://dx.doi.org/10.1109/icpads51040.2020.00013.
Sriram, Sumithra, Saira Banu J., and Rajasekhara Babu. "Space complexity analysis of various sparse matrix storage formats used in rectangular segmentation image compression technique." In 2014 International Conference on Electronics, Communication and Computational Engineering (ICECCE). IEEE, 2014. http://dx.doi.org/10.1109/icecce.2014.7086618.
Kawamura, Tomoki, Kazunori Yoneda, Takashi Yamazaki, Takashi Iwamura, Masahiro Watanabe, and Yasushi Inoguchi. "A Compression Method for Storage Formats of a Sparse Matrix in Solving the Large-Scale Linear Systems." In 2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2017. http://dx.doi.org/10.1109/ipdpsw.2017.174.
Wang, Kebing, Bianny Bian, and Yan Hao. "Innovative Unit-Vector-Block Storage Format of Sparse Matrix and Vector." In 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS). IEEE, 2019. http://dx.doi.org/10.1109/ccoms.2019.8821708.
Chen, Haonan, Zhuowei Wang, and Lianglun Cheng. "GPU Sparse Matrix Vector Multiplication Optimization Based on ELLB Storage Format." In ICSCA 2023: 2023 12th International Conference on Software and Computer Applications. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3587828.3587834.
Greathouse, Joseph L., and Mayank Daga. "Efficient Sparse Matrix-Vector Multiplication on GPUs Using the CSR Storage Format." In SC14: International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2014. http://dx.doi.org/10.1109/sc.2014.68.
Merrill, Duane, and Michael Garland. "Merge-based sparse matrix-vector multiplication (SpMV) using the CSR storage format." In PPoPP '16: 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2851141.2851190.