Ready-made bibliography on the topic "Sparse Matrix Storage Formats"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Sparse Matrix Storage Formats".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever the relevant data is available in the publication's metadata.
Journal articles on the topic "Sparse Matrix Storage Formats"
Langr, Daniel, and Pavel Tvrdik. "Evaluation Criteria for Sparse Matrix Storage Formats". IEEE Transactions on Parallel and Distributed Systems 27, no. 2 (February 1, 2016): 428–40. http://dx.doi.org/10.1109/tpds.2015.2401575.
Mukaddes, Abul Mukid Mohammad, Masao Ogino, and Ryuji Shioya. "Performance Evaluation of Domain Decomposition Method with Sparse Matrix Storage Schemes in Modern Supercomputer". International Journal of Computational Methods 11, supp01 (November 2014): 1344007. http://dx.doi.org/10.1142/s0219876213440076.
Chen, Shizhao, Jianbin Fang, Chuanfu Xu, and Zheng Wang. "Adaptive Hybrid Storage Format for Sparse Matrix–Vector Multiplication on Multi-Core SIMD CPUs". Applied Sciences 12, no. 19 (September 29, 2022): 9812. http://dx.doi.org/10.3390/app12199812.
Sanderson, Conrad, and Ryan Curtin. "Practical Sparse Matrices in C++ with Hybrid Storage and Template-Based Expression Optimisation". Mathematical and Computational Applications 24, no. 3 (July 19, 2019): 70. http://dx.doi.org/10.3390/mca24030070.
Fraguela, Basilio B., Ramón Doallo, and Emilio L. Zapata. "Memory Hierarchy Performance Prediction for Blocked Sparse Algorithms". Parallel Processing Letters 9, no. 3 (September 1999): 347–60. http://dx.doi.org/10.1142/s0129626499000323.
Smith, Barry F., and William D. Gropp. "The Design of Data-Structure-Neutral Libraries for the Iterative Solution of Sparse Linear Systems". Scientific Programming 5, no. 4 (1996): 329–36. http://dx.doi.org/10.1155/1996/417629.
Guo, Dahai, and William Gropp. "Applications of the streamed storage format for sparse matrix operations". International Journal of High Performance Computing Applications 28, no. 1 (January 3, 2013): 3–12. http://dx.doi.org/10.1177/1094342012470469.
Akhunov, R. R., S. P. Kuksenko, V. K. Salov, and T. R. Gazizov. "Sparse matrix storage formats and acceleration of iterative solution of linear algebraic systems with dense matrices". Journal of Mathematical Sciences 191, no. 1 (April 21, 2013): 10–18. http://dx.doi.org/10.1007/s10958-013-1296-7.
Merrill, Duane, and Michael Garland. "Merge-based sparse matrix-vector multiplication (SpMV) using the CSR storage format". ACM SIGPLAN Notices 51, no. 8 (November 9, 2016): 1–2. http://dx.doi.org/10.1145/3016078.2851190.
Zhang, Jilin, Jian Wan, Fangfang Li, Jie Mao, Li Zhuang, Junfeng Yuan, Enyi Liu, and Zhuoer Yu. "Efficient sparse matrix–vector multiplication using cache oblivious extension quadtree storage format". Future Generation Computer Systems 54 (January 2016): 490–500. http://dx.doi.org/10.1016/j.future.2015.03.005.
Doctoral dissertations on the topic "Sparse Matrix Storage Formats"
Haque, Sardar Anisul, and University of Lethbridge Faculty of Arts and Science. "A computational study of sparse matrix storage schemes". Thesis, Lethbridge, Alta.: University of Lethbridge, Department of Mathematics and Computer Science, 2008. http://hdl.handle.net/10133/777.
xi, 76 leaves : ill. ; 29 cm.
Pawlowski, Filip Igor. "High-performance dense tensor and sparse matrix kernels for machine learning". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN081.
In this thesis, we develop high-performance algorithms for certain computations involving dense tensors and sparse matrices. We address kernel operations that are useful for machine learning tasks, such as inference with deep neural networks (DNNs). We develop data structures and techniques to reduce memory use and improve data locality, and hence improve cache reuse in the kernel operations. We design both sequential and shared-memory parallel algorithms.

In the first part of the thesis we focus on dense tensor kernels. Tensor kernels include the tensor–vector multiplication (TVM), tensor–matrix multiplication (TMM), and tensor–tensor multiplication (TTM). Among these, TVM is the most bandwidth-bound and constitutes a building block for many algorithms. We focus on this operation and develop a data structure as well as sequential and parallel algorithms for it. We propose a novel data structure which stores the tensor as blocks, ordered along the space-filling curve known as the Morton curve (or Z-curve). The key idea is to divide the tensor into blocks small enough to fit in cache and to store them in Morton order, while keeping a simple multi-dimensional order on the individual elements within each block. Thus, high-performance BLAS routines can be used as microkernels for each block. We evaluate our techniques in a set of experiments. The results not only demonstrate performance superior to the state-of-the-art variants by up to 18%, but also show that the proposed approach yields 71% less sample standard deviation for the TVM across the d possible modes. We also show that our data structure extends naturally to other tensor kernels, yielding up to 38% higher performance for the higher-order power method. Finally, we investigate shared-memory parallel TVM algorithms that use the proposed data structure. Several alternative parallel algorithms were characterized theoretically and implemented using OpenMP for experimental comparison. Our results on systems with up to 8 sockets show near-peak performance for the proposed algorithm on 2-, 3-, 4-, and 5-dimensional tensors.

In the second part of the thesis, we explore sparse computations in neural networks, focusing on the high-performance sparse deep inference problem. Sparse DNN inference is the task of using a sparse DNN to classify a batch of data elements forming, in our case, a sparse feature matrix. The performance of sparse inference hinges on efficient parallelization of the sparse matrix–sparse matrix multiplication (SpGEMM) repeated for each layer in the inference function. We first characterize efficient sequential SpGEMM algorithms for our use case. We then introduce model-parallel inference, which uses a two-dimensional partitioning of the weight matrices obtained with hypergraph partitioning software. The model-parallel variant uses barriers to synchronize at layer boundaries. Finally, we introduce tiling model-parallel and tiling hybrid algorithms, which increase cache reuse between layers and use a weak synchronization module to hide load imbalance and synchronization costs. We evaluate our techniques on the large network data from the IEEE HPEC 2019 Graph Challenge on shared-memory systems and report up to 2x speed-up versus the baseline.
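The blocked Morton-order layout described in this abstract lends itself to a short illustration. The C++ fragment below is a minimal sketch of the general idea only, not the thesis's actual implementation: the type and member names are hypothetical, and it assumes a cubic tensor whose extent n is a multiple of the block edge B, with n/B a power of two so that the block Morton indices are contiguous.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Insert two zero bits between consecutive bits of a 21-bit value.
    static uint64_t spread_bits(uint64_t v) {
        v &= 0x1fffff;
        v = (v | v << 32) & 0x001f00000000ffffULL;
        v = (v | v << 16) & 0x001f0000ff0000ffULL;
        v = (v | v << 8)  & 0x100f00f00f00f00fULL;
        v = (v | v << 4)  & 0x10c30c30c30c30c3ULL;
        v = (v | v << 2)  & 0x1249249249249249ULL;
        return v;
    }

    // Morton (Z-curve) index of the block at grid coordinates (bx, by, bz).
    static uint64_t morton3(uint64_t bx, uint64_t by, uint64_t bz) {
        return spread_bits(bx) | (spread_bits(by) << 1) | (spread_bits(bz) << 2);
    }

    // Hypothetical layout sketch: blocks of edge B stored contiguously in
    // Morton order (assumes n/B is a power of two so the (n/B)^3 block
    // indices are contiguous); elements inside a block keep plain row-major
    // order, so a dense BLAS microkernel can operate on each block directly.
    struct MortonBlockedTensor3D {
        static constexpr std::size_t B = 8;  // block edge, sized to fit in cache
        std::size_t n;                       // extent per mode; n % B == 0
        std::vector<double> data;

        explicit MortonBlockedTensor3D(std::size_t n_) : n(n_), data(n_ * n_ * n_) {}

        double& at(std::size_t i, std::size_t j, std::size_t k) {
            uint64_t block = morton3(i / B, j / B, k / B);
            std::size_t inner = ((i % B) * B + (j % B)) * B + (k % B);
            return data[block * B * B * B + inner];
        }
    };

Storing whole blocks along the Z-curve gives blocks that are neighbors in index space a good chance of being neighbors in memory, which is the locality property the abstract attributes to the layout.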
Dičpinigaitis, Petras. "Delninukų energijos suvartojimo apdorojant išretintas matricas saugomas stulpeliais modeliavimas" [Modeling the energy consumption of handheld devices processing sparse matrices stored by columns]. Master's thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080128_103036-96655.
Nowadays, a major problem for portable, battery-powered devices is energy consumption. In this work we evaluated the energy consumption of a Pocket PC, aiming to measure the influence of memory and processor use on battery drain. We created a program that performs both ordinary matrix multiplication and sparse matrix "storage by columns" multiplication; during the multiplication, the program samples the battery state and saves it to a file. Examining the results, we found that storage-by-columns sparse multiplication is much more efficient than ordinary matrix multiplication: it uses less memory, though it executes more processor instructions, than the dense version. We suggest using the storage-by-columns sparse model instead of the simple dense model, as it saves substantial running time, battery capacity, and memory.
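The "storage by columns" scheme this abstract refers to is what is commonly called the compressed sparse column (CSC) format. The following C++ sketch uses illustrative names and has no relation to the thesis's actual Pocket PC code; it shows why the sparse product does work proportional to the number of stored nonzeros rather than to rows × cols as in the dense triple loop.

    #include <cstddef>
    #include <vector>

    // Compressed sparse column (CSC): nonzeros stored column by column.
    struct CscMatrix {
        std::size_t rows, cols;
        std::vector<std::size_t> col_ptr;  // cols + 1 entries; column j
                                           // spans [col_ptr[j], col_ptr[j+1])
        std::vector<std::size_t> row_idx;  // row index of each nonzero
        std::vector<double> val;           // nonzero values
    };

    // y = A * x, touching only the stored nonzeros.
    std::vector<double> spmv(const CscMatrix& A, const std::vector<double>& x) {
        std::vector<double> y(A.rows, 0.0);
        for (std::size_t j = 0; j < A.cols; ++j)
            for (std::size_t p = A.col_ptr[j]; p < A.col_ptr[j + 1]; ++p)
                y[A.row_idx[p]] += A.val[p] * x[j];
        return y;
    }

Each column's nonzeros are contiguous in memory and x[j] is reused across a whole column, but the writes into y are indexed; this trade of a smaller memory footprint for extra indexing work is consistent with the abstract's observation that the sparse kernel uses less memory but executes more processor instructions.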
Ramesh, Chinthala. "Hardware-Software Co-Design Accelerators for Sparse BLAS". Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4276.
Book chapters on the topic "Sparse Matrix Storage Formats"
D’Azevedo, Eduardo F., Mark R. Fahey, and Richard T. Mills. "Vectorized Sparse Matrix Multiply for Compressed Row Storage Format". In Lecture Notes in Computer Science, 99–106. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11428831_13.
Luján, Mikel, Anila Usman, Patrick Hardie, T. L. Freeman, and John R. Gurd. "Storage Formats for Sparse Matrices in Java". In Lecture Notes in Computer Science, 364–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11428831_45.
Scott, Jennifer, and Miroslav Tůma. "Sparse Matrix Ordering Algorithms". In Nečas Center Series, 135–61. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25820-6_8.
Usman, Anila, Mikel Luján, Len Freeman, and John R. Gurd. "Performance Evaluation of Storage Formats for Sparse Matrices in Fortran". In High Performance Computing and Communications, 160–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11847366_17.
Ecker, Jan Philipp, Rudolf Berrendorf, and Florian Mannuss. "New Efficient General Sparse Matrix Formats for Parallel SpMV Operations". In Lecture Notes in Computer Science, 523–37. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64203-1_38.
Marichal, Raúl, Ernesto Dufrechou, and Pablo Ezzatti. "Optimizing Sparse Matrix Storage for the Big Data Era". In Communications in Computer and Information Science, 121–35. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-84825-5_9.
Katagiri, Takahiro, Takao Sakurai, Mitsuyoshi Igai, Satoshi Ohshima, Hisayasu Kuroda, Ken Naono, and Kengo Nakajima. "Control Formats for Unsymmetric and Symmetric Sparse Matrix–Vector Multiplications on OpenMP Implementations". In Lecture Notes in Computer Science, 236–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38718-0_24.
Kubota, Yuji, and Daisuke Takahashi. "Optimization of Sparse Matrix-Vector Multiplication by Auto Selecting Storage Schemes on GPU". In Computational Science and Its Applications - ICCSA 2011, 547–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21887-3_42.
Neuss, Nicolas. "A New Sparse Matrix Storage Method for Adaptive Solving of Large Systems of Reaction-Diffusion-Transport Equations". In Scientific Computing in Chemical Engineering II, 175–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-60185-9_19.
Jamalmohammed, Saira Banu, Lavanya K., Sumaiya Thaseen I., and Biju V. "Review on Sparse Matrix Storage Formats With Space Complexity Analysis". In Applications of Artificial Intelligence for Smart Technology, 122–45. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3335-2.ch009.
Conference papers on the topic "Sparse Matrix Storage Formats"
Šimeček, Ivan, D. Langr, and P. Tvrdik. "Space-efficient Sparse Matrix Storage Formats for Massively Parallel Systems". In 2012 IEEE 14th Int'l Conf. on High Performance Computing and Communication (HPCC) & 2012 IEEE 9th Int'l Conf. on Embedded Software and Systems (ICESS). IEEE, 2012. http://dx.doi.org/10.1109/hpcc.2012.18.
Yuan, Liang, Yunquan Zhang, Xiangzheng Sun, and Ting Wang. "Optimizing Sparse Matrix Vector Multiplication Using Diagonal Storage Matrix Format". In 2010 IEEE 12th International Conference on High Performance Computing and Communications (HPCC 2010). IEEE, 2010. http://dx.doi.org/10.1109/hpcc.2010.67.
Šimeček, Ivan. "Sparse Matrix Computations Using the Quadtree Storage Format". In 2009 11th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC). IEEE, 2009. http://dx.doi.org/10.1109/synasc.2009.55.
Shi, Shaohuai, Qiang Wang, and Xiaowen Chu. "Efficient Sparse-Dense Matrix-Matrix Multiplication on GPUs Using the Customized Sparse Storage Format". In 2020 IEEE 26th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2020. http://dx.doi.org/10.1109/icpads51040.2020.00013.
Sriram, Sumithra, Saira Banu J., and Rajasekhara Babu. "Space complexity analysis of various sparse matrix storage formats used in rectangular segmentation image compression technique". In 2014 International Conference on Electronics, Communication and Computational Engineering (ICECCE). IEEE, 2014. http://dx.doi.org/10.1109/icecce.2014.7086618.
Kawamura, Tomoki, Kazunori Yoneda, Takashi Yamazaki, Takashi Iwamura, Masahiro Watanabe, and Yasushi Inoguchi. "A Compression Method for Storage Formats of a Sparse Matrix in Solving the Large-Scale Linear Systems". In 2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2017. http://dx.doi.org/10.1109/ipdpsw.2017.174.
Wang, Kebing, Bianny Bian, and Yan Hao. "Innovative Unit-Vector-Block Storage Format of Sparse Matrix and Vector". In 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS). IEEE, 2019. http://dx.doi.org/10.1109/ccoms.2019.8821708.
Chen, Haonan, Zhuowei Wang, and Lianglun Cheng. "GPU Sparse Matrix Vector Multiplication Optimization Based on ELLB Storage Format". In ICSCA 2023: 2023 12th International Conference on Software and Computer Applications. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3587828.3587834.
Greathouse, Joseph L., and Mayank Daga. "Efficient Sparse Matrix-Vector Multiplication on GPUs Using the CSR Storage Format". In SC14: International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2014. http://dx.doi.org/10.1109/sc.2014.68.
Merrill, Duane, and Michael Garland. "Merge-based sparse matrix-vector multiplication (SpMV) using the CSR storage format". In PPoPP '16: 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2851141.2851190.