A selection of scholarly literature on the topic "Sparse Vector Vector Multiplication"

Format your source according to APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Sparse Vector Vector Multiplication".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Sparse Vector Vector Multiplication"

1. Tao, Yuan, Yangdong Deng, Shuai Mu, Zhenzhong Zhang, Mingfa Zhu, Limin Xiao, and Li Ruan. "GPU accelerated sparse matrix-vector multiplication and sparse matrix-transpose vector multiplication." Concurrency and Computation: Practice and Experience 27, no. 14 (October 7, 2014): 3771–89. http://dx.doi.org/10.1002/cpe.3415.

2. Filippone, Salvatore, Valeria Cardellini, Davide Barbieri, and Alessandro Fanfarillo. "Sparse Matrix-Vector Multiplication on GPGPUs." ACM Transactions on Mathematical Software 43, no. 4 (March 23, 2017): 1–49. http://dx.doi.org/10.1145/3017994.

3. Erhel, Jocelyne. "Sparse Matrix Multiplication on Vector Computers." International Journal of High Speed Computing 2, no. 2 (June 1990): 101–16. http://dx.doi.org/10.1142/s012905339000008x.

4. Haque, Sardar Anisul, Shahadat Hossain, and M. Moreno Maza. "Cache friendly sparse matrix-vector multiplication." ACM Communications in Computer Algebra 44, no. 3/4 (January 28, 2011): 111–12. http://dx.doi.org/10.1145/1940475.1940490.

5. Bienz, Amanda, William D. Gropp, and Luke N. Olson. "Node aware sparse matrix–vector multiplication." Journal of Parallel and Distributed Computing 130 (August 2019): 166–78. http://dx.doi.org/10.1016/j.jpdc.2019.03.016.

6. Heath, L. S., C. J. Ribbens, and S. V. Pemmaraju. "Processor-efficient sparse matrix-vector multiplication." Computers & Mathematics with Applications 48, no. 3-4 (August 2004): 589–608. http://dx.doi.org/10.1016/j.camwa.2003.06.009.

7. Yang, Xintian, Srinivasan Parthasarathy, and P. Sadayappan. "Fast sparse matrix-vector multiplication on GPUs." Proceedings of the VLDB Endowment 4, no. 4 (January 2011): 231–42. http://dx.doi.org/10.14778/1938545.1938548.

8. Romero, L. F., and E. L. Zapata. "Data distributions for sparse matrix vector multiplication." Parallel Computing 21, no. 4 (April 1995): 583–605. http://dx.doi.org/10.1016/0167-8191(94)00087-q.

9. Thomas, Rajesh, Victor DeBrunner, and Linda S. DeBrunner. "A Sparse Algorithm for Computing the DFT Using Its Real Eigenvectors." Signals 2, no. 4 (October 11, 2021): 688–705. http://dx.doi.org/10.3390/signals2040041.

Abstract:
Direct computation of the discrete Fourier transform (DFT) and its FFT computational algorithms requires multiplication (and addition) of complex numbers. Complex number multiplication requires four real-valued multiplications and two real-valued additions, or three real-valued multiplications and five real-valued additions, as well as the requisite added memory for temporary storage. In this paper, we present a method for computing a DFT via a natively real-valued algorithm that is computationally equivalent to an N = 2^k-length DFT (where k is a positive integer), and is substantially more efficient for any other length N. Our method uses the eigenstructure of the DFT, and the fact that sparse, real-valued eigenvectors can be found and used to advantage. Computation using our method uses only vector dot products and vector-scalar products.
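As a concrete illustration of the two operation counts quoted in this abstract, here is a minimal C sketch of both complex-multiplication schemes (a textbook identity, not code from the paper):

```c
#include <stdio.h>

/* Standard scheme: 4 real multiplications, 2 real additions.
   (a + bi)(c + di) = (ac - bd) + (ad + bc)i */
static void cmul4(double a, double b, double c, double d,
                  double *re, double *im) {
    *re = a * c - b * d;
    *im = a * d + b * c;
}

/* 3-multiplication scheme: 3 real multiplications, 5 real additions,
   plus the temporaries t1..t3 (the "added memory" the abstract mentions). */
static void cmul3(double a, double b, double c, double d,
                  double *re, double *im) {
    double t1 = c * (a + b);   /* ac + bc */
    double t2 = a * (d - c);   /* ad - ac */
    double t3 = b * (c + d);   /* bc + bd */
    *re = t1 - t3;             /* ac - bd */
    *im = t1 + t2;             /* ad + bc */
}

int main(void) {
    double re, im;
    cmul4(1.0, 2.0, 3.0, 4.0, &re, &im);
    printf("4-mult: %g%+gi\n", re, im);   /* prints -5+10i */
    cmul3(1.0, 2.0, 3.0, 4.0, &re, &im);
    printf("3-mult: %g%+gi\n", re, im);   /* prints -5+10i */
    return 0;
}
```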
10. Sun, C. C., J. Götze, H. Y. Jheng, and S. J. Ruan. "Sparse matrix-vector multiplication on network-on-chip." Advances in Radio Science 8 (December 22, 2010): 289–94. http://dx.doi.org/10.5194/ars-8-289-2010.

Abstract:
In this paper, we present an idea for performing matrix-vector multiplication by using Network-on-Chip (NoC) architecture. In traditional IC design, on-chip communications have been designed with dedicated point-to-point interconnections, so regular local data transfer is the major concept of many parallel implementations. However, when dealing with the parallel implementation of sparse matrix-vector multiplication (SMVM), which is the main step of all iterative algorithms for solving systems of linear equations, the required data transfers depend on the sparsity structure of the matrix and can be extremely irregular. Using the NoC architecture makes it possible to deal with an arbitrary structure of the data transfers, i.e. with the irregular structure of the sparse matrices. So far, we have implemented the proposed SMVM-NoC architecture in sizes 4×4 and 5×5 in IEEE 754 single floating-point precision using an FPGA.
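The irregularity described above is easiest to see in the standard compressed sparse row (CSR) kernel, where the column indices drive data-dependent reads of the input vector. A generic serial sketch for reference (the paper's contribution is mapping this communication pattern onto a NoC, not this code):

```c
/* y = A*x with A stored in CSR format. The gather x[col_idx[j]] depends on
   the sparsity structure of A, which is what makes the data transfers
   irregular on parallel hardware. */
void spmv_csr(int n_rows,
              const int *row_ptr,    /* length n_rows + 1 */
              const int *col_idx,    /* length nnz */
              const double *val,     /* length nnz */
              const double *x,       /* input vector */
              double *y)             /* output vector, length n_rows */
{
    for (int i = 0; i < n_rows; i++) {
        double sum = 0.0;
        for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++)
            sum += val[j] * x[col_idx[j]];  /* irregular gather from x */
        y[i] = sum;
    }
}
```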

Dissertations on the topic "Sparse Vector Vector Multiplication"

1. Ashari, Arash. "Sparse Matrix-Vector Multiplication on GPU." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.

2. Ramachandran, Shridhar. "Incremental PageRank acceleration using Sparse Matrix-Sparse Vector Multiplication." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462894358.

3. Balasubramanian, Deepan Karthik. "Efficient Sparse Matrix Vector Multiplication for Structured Grid Representation." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339730490.

4. Mansour, Ahmad. "Sparse Matrix-Vector Multiplication Based on Network-on-Chip." München: Verlag Dr. Hut, 2015. http://d-nb.info/1075409470/34.

5. Singh, Kunal. "High-Performance Sparse Matrix-Multi Vector Multiplication on Multi-Core Architecture." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524089757826551.

6. El-Kurdi, Yousef M. "Sparse Matrix-Vector floating-point multiplication with FPGAs for finite element electromagnetics." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98958.

Abstract:
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. Field Programmable Gate Arrays (FPGAs) have been shown to have higher peak floating-point performance than general-purpose CPUs, and the trends are moving in favor of FPGAs. We present an architecture and implementation of an FPGA-based Sparse Matrix-Vector Multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. Our architecture exploits the FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements. The architecture is based on a pipelined linear array of Processing Elements (PEs). A hardware-oriented matrix "striping" scheme is developed which reduces the number of required processing elements. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz, obtaining a peak performance of 1.76 GFLOPS. For 8 GB/s of memory bandwidth typical of recent FPGA reconfigurable systems, this architecture can achieve 1.5 GFLOPS sustained performance. A single pipeline uses 30% of the logic resources and 40% of the memory resources of a Stratix S80 FPGA. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solvers such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
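The quoted peak figure is consistent with each of the 8 PEs completing one multiply-accumulate, i.e. two floating-point operations, per clock cycle at 110 MHz (our reading of the numbers, not a statement from the thesis):

```latex
8~\text{PEs} \times 2~\frac{\text{flops}}{\text{PE}\cdot\text{cycle}} \times 110 \times 10^{6}~\frac{\text{cycles}}{\text{s}} = 1.76~\text{GFLOPS}
```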
7. Godwin, Jeswin Samuel. "High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824.

8. Pantawongdecha, Payut. "Autotuning divide-and-conquer matrix-vector multiplication." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105968.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 73-75).
Divide and conquer is an important concept in computer science. It is used ubiquitously to simplify and speed up programs. However, it needs to be optimized, with respect to parameter settings for example, in order to achieve the best performance. The problem boils down to searching for the best implementation choice on a given set of requirements, such as which machine the program is running on. The goal of this thesis is to apply and evaluate the Ztune approach [14] on serial divide-and-conquer matrix-vector multiplication. We implemented Ztune to autotune serial divide-and-conquer matrix-vector multiplication on machines with different hardware configurations, and found that Ztune-optimized codes ran 1%-5% faster than the hand-optimized counterparts. We also compared Ztune-optimized results with other matrix-vector multiplication libraries including the Intel Math Kernel Library and OpenBLAS. Since the matrix-vector multiplication problem is a level 2 BLAS operation, it is not as computationally intensive as level 3 BLAS problems such as matrix-matrix multiplication and stencil computation. As a result, the measurement in matrix-vector multiplication is more prone to error from factors such as noise, cache alignment of the matrix, and cache states, which lead to wrong decision choices for Ztune. We explored multiple options to get more accurate measurements and demonstrated the techniques that remedied these issues. Lastly, we applied the Ztune approach to matrix-matrix multiplication, and we were able to achieve 2%-85% speedup compared to the hand-tuned code. This thesis represents joint work with Ekanathan Palamadai Natarajan.
by Payut Pantawongdecha.
M. Eng.
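For orientation, a minimal serial divide-and-conquer matrix-vector multiply is sketched below in C; the cutoff constant BASE is illustrative only and stands in for the kind of parameter an autotuner such as Ztune searches over:

```c
#include <stddef.h>

#define BASE 32  /* base-case size: the sort of knob an autotuner would pick */

/* Accumulate y += A*x on the submatrix of the row-major matrix A
   (leading dimension lda) spanning rows [r0, r0+nr) and columns [c0, c0+nc). */
static void dac_matvec(const double *A, int lda,
                       int r0, int nr, int c0, int nc,
                       const double *x, double *y)
{
    if (nr <= BASE && nc <= BASE) {            /* base case: direct loops */
        for (int i = 0; i < nr; i++)
            for (int j = 0; j < nc; j++)
                y[r0 + i] += A[(size_t)(r0 + i) * lda + (c0 + j)] * x[c0 + j];
    } else if (nr >= nc) {                     /* split the longer dimension */
        dac_matvec(A, lda, r0, nr / 2, c0, nc, x, y);
        dac_matvec(A, lda, r0 + nr / 2, nr - nr / 2, c0, nc, x, y);
    } else {
        dac_matvec(A, lda, r0, nr, c0, nc / 2, x, y);
        dac_matvec(A, lda, r0, nr, c0 + nc / 2, nc - nc / 2, x, y);
    }
}
```

Because the two column halves are processed sequentially, both may safely accumulate into the same output vector; splitting the longer dimension keeps the recursion balanced.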
9. Hopkins, T. M. "The design of a sparse vector processor." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/14094.

Abstract:
This thesis describes the development of a new vector processor architecture capable of high efficiency when computing with very sparse vector and matrix data, of irregular structure. Two applications are identified as of particular importance: sparse Gaussian elimination, and Linear Programming, and the algorithmic steps involved in the solution of these problems are analysed. Existing techniques for sparse vector computation, which are only able to achieve a small fraction of the arithmetic performance commonly expected on dense matrix problems, are critically examined. A variety of new techniques with potential for hardware support is discussed. From these, the most promising are selected, and efficient hardware implementations developed. The architecture of a complete vector processor incorporating the new vector and matrix mechanisms is described - the new architecture also uses an innovative control structure for the vector processor, which enables high efficiency even when computing with vectors with very small numbers of non-zeroes. The practical feasibility of the design is demonstrated by describing the prototype implementation, under construction from off-the-shelf components. The expected performance of the new architecture is analysed, and simulation results are presented which demonstrate that the machine could be expected to provide an order of magnitude speed-up on many large sparse Linear Programming problems, compared to a scalar processor with the same clock rate. The simulation results indicate that the vector processor control structure is successful - the vector half-performance length is as low as 8 for standard vector instruction loop tests. In some cases, simulations indicate that the performance of the machine is limited by the speed of some scalar processor operations. Finally, the scope for re-implementing the new architecture in technology faster than the prototype's 8 MHz is briefly discussed, and particular potential difficulties identified.
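The "vector half-performance length" cited in the simulation results is Hockney's n_{1/2} parameter. In the standard model (our gloss from the general literature, not the thesis's own notation), the achieved rate on vectors of length n is

```latex
r(n) = \frac{r_{\infty}\, n}{n_{1/2} + n}
```

so n_{1/2} = 8 means that vectors with only eight non-zeroes already reach half of the asymptotic rate r_∞.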
10. Belgin, Mehmet. "Structure-based Optimizations for Sparse Matrix-Vector Multiply." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30260.

Abstract:
This dissertation introduces two novel techniques, OSF and PBR, to improve the performance of Sparse Matrix-vector Multiply (SMVM) kernels, which dominate the runtime of iterative solvers for systems of linear equations. SMVM computations that use sparse formats typically achieve only a small fraction of peak CPU speeds because they are memory bound due to their low flops:byte ratio, they access memory irregularly, and exhibit poor ILP due to inefficient pipelining. We particularly focus on improving the flops:byte ratio, which is the main limiter on performance, by exploiting recurring structures or sub-structures in matrices. Our techniques also support micro-architecture level optimizations to further improve performance. Operation Stacking Framework (OSF) stacks problems in large ensemble computations, which run the same sparse kernel using an identical matrix structure, such that they share a single copy of the indexing information to significantly reduce memory bandwidth usage. OSF provides performance improvements of up to 1.94x on an AMD Opteron compared to the CSR method. We validate performance results using hardware event counters, which demonstrate significantly improved cache and pipeline utilization. Pattern-based Representation (PBR) exploits recurring block nonzero patterns by generating custom code for each recurring block pattern. In this way, no indexing data for individual nonzero elements are read from memory, reducing the overall size of the indices by up to 98%. Our code generator emits highly tuned codes that utilize SSE vectorization and software prefetching. PBR accurately identifies a block size that achieves optimal or near-optimal performance using a linear multiple regression performance model. On recent multicore machines, PBR provides performance improvements of up to 3.4x sequentially and 5x in parallel, compared to the CSR method. The PBR library we provide converts matrices at runtime, allowing our method to be used as a drop-in replacement for existing methods. We compare PBR's overhead relative to its benefits and show that PBR is beneficial for many applications that repetitively call the SMVM kernel for the same matrix structure.
Ph. D.
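A quick accounting shows why the flops:byte ratio dominates for CSR-style kernels (a generic estimate, not a figure from the dissertation): each nonzero contributes one multiply and one add but requires reading at least an 8-byte value and a 4-byte column index,

```latex
\frac{\text{flops}}{\text{byte}} \approx \frac{2}{8 + 4} = \frac{1}{6} \approx 0.17
```

which is far below what modern cores need to run at peak; both OSF and PBR raise the effective ratio by sharing or eliminating index data.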

Books on the topic "Sparse Vector Vector Multiplication"

1. Andersen, J. The scheduling of sparse matrix-vector multiplication on a massively parallel DAP computer. Uxbridge: Brunel University, Department of Mathematics and Statistics, 1991.

2. Itai, Yad-Shalom, and Langley Research Center, eds. Fast multiresolution algorithms for matrix-vector multiplication. Hampton, Va: National Aeronautics and Space Administration, Langley Research Center, 1992.

3. Mühlherr, Bernhard, Holger P. Petersson, and Richard M. Weiss. Quadratic Forms of Type F4. Princeton University Press, 2017. http://dx.doi.org/10.23943/princeton/9780691166902.003.0009.

Abstract:
This chapter presents various results about quadratic forms of type F₄. The Moufang quadrangles of type F₄ were discovered in the course of carrying out the classification of Moufang polygons and gave rise to the notion of a quadratic form of type F₄. The chapter begins with the notation stating that a quadratic space Λ = (K, L, q) is of type F₄ if char(K) = 2, q is anisotropic and: for some separable quadratic extension E/K with norm N; for some subfield F of K containing K² viewed as a vector space over K with respect to the scalar multiplication (t, s) ↦ t²s for all (t, s) ∈ K × F; and for some α ∈ F* and some β ∈ K*. The chapter also considers a number of propositions regarding quadratic spaces and discrete valuations.
4. Bisseling, Rob H. Parallel Scientific Computation. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198788348.001.0001.

Abstract:
This book explains how to use the bulk synchronous parallel (BSP) model to design and implement parallel algorithms in the areas of scientific computing and big data. Furthermore, it presents a hybrid BSP approach towards new hardware developments such as hierarchical architectures with both shared and distributed memory. The book provides a full treatment of core problems in scientific computing and big data, starting from a high-level problem description, via a sequential solution algorithm to a parallel solution algorithm and an actual parallel program written in the communication library BSPlib. Numerical experiments are presented for parallel programs on modern parallel computers ranging from desktop computers to massively parallel supercomputers. The introductory chapter of the book gives a complete overview of BSPlib, so that the reader already at an early stage is able to write his/her own parallel programs. Furthermore, it treats BSP benchmarking and parallel sorting by regular sampling. The next three chapters treat basic numerical linear algebra problems such as linear system solving by LU decomposition, sparse matrix-vector multiplication (SpMV), and the fast Fourier transform (FFT). The final chapter explores parallel algorithms for big data problems such as graph matching. The book is accompanied by a software package BSPedupack, freely available online from the author’s homepage, which contains all programs of the book and a set of test programs.
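For context, the BSP cost model that the book builds on charges each superstep for local computation, communication, and global synchronization. In the standard formulation, a superstep with at most w local operations per processor and an h-relation (at most h words sent or received per processor) costs

```latex
T_{\text{superstep}} = w + h\,g + l
```

where g is the per-word communication cost and l the synchronization cost, both machine parameters measured by BSP benchmarking.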
5. Mann, Peter. Legendre Transforms. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198822370.003.0033.

Abstract:
This chapter introduces vector calculus to the reader, from the very basics up to a level appropriate for studying classical mechanics. However, it provides only the vector calculus required to understand some of the operations performed in the text and perhaps to support self-study of more advanced topics, so the treatment is not definitive. The chapter begins by examining the axioms of vector algebra, vector multiplication and vector differentiation, and then tackles the gradient, divergence and curl and other elements of vector integration. Topics discussed include contour integrals, the continuity equation, the Kronecker delta and the Levi-Civita symbol. Particular care is taken to explain every mathematical relation used in the main text, leaving no stone unturned!
6. Algebraic and Geometric Aspects of Integrable Systems and Random Matrices: AMS Special Session, January 6-7, 2012, Boston, MA. American Mathematical Society, 2013.


Book chapters on the topic "Sparse Vector Vector Multiplication"

1. Vassiliadis, Stamatis, Sorin Cotofana, and Pyrrhos Stathis. "Vector ISA Extension for Sparse Matrix-Vector Multiplication." In Euro-Par’99 Parallel Processing, 708–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48311-x_100.

2. Maeda, Hiroshi, and Daisuke Takahashi. "Parallel Sparse Matrix-Vector Multiplication Using Accelerators." In Computational Science and Its Applications – ICCSA 2016, 3–18. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42108-7_1.

3. Plaksa, Sergiy A., and Vitalii S. Shpakivskyi. "Differentiation in Vector Spaces." In Monogenic Functions in Spaces with Commutative Multiplication and Applications, 13–23. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-32254-9_2.

4. Hishinuma, Toshiaki, Hidehiko Hasegawa, and Teruo Tanaka. "SIMD Parallel Sparse Matrix-Vector and Transposed-Matrix-Vector Multiplication in DD Precision." In High Performance Computing for Computational Science – VECPAR 2016, 21–34. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61982-8_4.

5. Monakov, Alexander, and Arutyun Avetisyan. "Implementing Blocked Sparse Matrix-Vector Multiplication on NVIDIA GPUs." In Lecture Notes in Computer Science, 289–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03138-0_32.

6. AlAhmadi, Sarah, Thaha Muhammed, Rashid Mehmood, and Aiiad Albeshri. "Performance Characteristics for Sparse Matrix-Vector Multiplication on GPUs." In Smart Infrastructure and Applications, 409–26. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13705-2_17.

7. Çatalyürek, Ümit V., and Cevdet Aykanat. "Decomposing irregularly sparse matrices for parallel matrix-vector multiplication." In Parallel Algorithms for Irregularly Structured Problems, 75–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0030098.

8. Wellein, Gerhard, Georg Hager, Achim Basermann, and Holger Fehske. "Fast Sparse Matrix-Vector Multiplication for TeraFlop/s Computers." In Lecture Notes in Computer Science, 287–301. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36569-9_18.

9. Monakov, Alexander, Anton Lokhmotov, and Arutyun Avetisyan. "Automatically Tuning Sparse Matrix-Vector Multiplication for GPU Architectures." In High Performance Embedded Architectures and Compilers, 111–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11515-8_10.

10. Vuduc, Richard W., and Hyun-Jin Moon. "Fast Sparse Matrix-Vector Multiplication by Exploiting Variable Block Structure." In High Performance Computing and Communications, 807–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11557654_91.


Conference papers on the topic "Sparse Vector Vector Multiplication"

1. Zhuo, Ling, and Viktor K. Prasanna. "Sparse Matrix-Vector multiplication on FPGAs." In the 2005 ACM/SIGDA 13th international symposium. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1046192.1046202.

2. Haque, Sardar Anisul, Shahadat Hossain, and Marc Moreno Maza. "Cache friendly sparse matrix-vector multiplication." In the 4th International Workshop. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1837210.1837238.

3. Li, Haoran, Harumichi Yokoyama, and Takuya Araki. "Merge-Based Parallel Sparse Matrix-Sparse Vector Multiplication with a Vector Architecture." In 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). IEEE, 2018. http://dx.doi.org/10.1109/hpcc/smartcity/dss.2018.00038.

4. Shah, Monika. "Sparse Matrix Sparse Vector Multiplication - A Novel Approach." In 2015 44th International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2015. http://dx.doi.org/10.1109/icppw.2015.18.

5. Buluç, Aydin, Jeremy T. Fineman, Matteo Frigo, John R. Gilbert, and Charles E. Leiserson. "Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks." In the twenty-first annual symposium. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1583991.1584053.

6. Wang, Zhuowei, Xianbin Xu, Wuqing Zhao, Yuping Zhang, and Shuibing He. "Optimizing sparse matrix-vector multiplication on CUDA." In 2010 2nd International Conference on Education Technology and Computer (ICETC 2010). IEEE, 2010. http://dx.doi.org/10.1109/icetc.2010.5529724.

7. Pinar, Ali, and Michael T. Heath. "Improving performance of sparse matrix-vector multiplication." In the 1999 ACM/IEEE conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/331532.331562.

8. Sun, Junqing, Gregory Peterson, and Olaf Storaasli. "Sparse Matrix-Vector Multiplication Design on FPGAs." In 15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007). IEEE, 2007. http://dx.doi.org/10.1109/fccm.2007.56.

9. Merrill, Duane, and Michael Garland. "Merge-Based Parallel Sparse Matrix-Vector Multiplication." In SC16: International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2016. http://dx.doi.org/10.1109/sc.2016.57.

10. Quang Anh, Pham Nguyen, Rui Fan, and Yonggang Wen. "Reducing Vector I/O for Faster GPU Sparse Matrix-Vector Multiplication." In 2015 IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE, 2015. http://dx.doi.org/10.1109/ipdps.2015.100.


Reports on the topic "Sparse Vector Vector Multiplication"

1. Vuduc, R., and H. Moon. Fast sparse matrix-vector multiplication by exploiting variable block structure. Office of Scientific and Technical Information (OSTI), July 2005. http://dx.doi.org/10.2172/891708.

2. Calahan, D. A. Sparse Elimination on Vector Multiprocessors. Fort Belvoir, VA: Defense Technical Information Center, May 1988. http://dx.doi.org/10.21236/ada204321.

3. Calahan, D. A. Sparse Elimination on Vector Multiprocessors. Fort Belvoir, VA: Defense Technical Information Center, April 1985. http://dx.doi.org/10.21236/ada158274.

4. Calahan, D. A. Sparse Elimination on Vector Multiprocessors. Fort Belvoir, VA: Defense Technical Information Center, April 1986. http://dx.doi.org/10.21236/ada175121.

5. Hendrickson, B., R. Leland, and S. Plimpton. An efficient parallel algorithm for matrix-vector multiplication. Office of Scientific and Technical Information (OSTI), March 1993. http://dx.doi.org/10.2172/6519330.

6. Liberty, Edo, and Steven W. Zucker. The Mailman Algorithm: A Note on Matrix Vector Multiplication. Fort Belvoir, VA: Defense Technical Information Center, January 2008. http://dx.doi.org/10.21236/ada481737.

7. Simon, Horst D. Ordering Methods for Sparse Matrices and Vector Computers. Fort Belvoir, VA: Defense Technical Information Center, August 1986. http://dx.doi.org/10.21236/ada186350.

8. Lewis, John G. Ordering Methods for Sparse Matrices and Vector Computers. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada198291.

9. Gropp, W. D., D. K. Kaushik, M. Minkoff, and B. F. Smith. Improving the performance of tensor matrix vector multiplication in quantum chemistry codes. Office of Scientific and Technical Information (OSTI), May 2008. http://dx.doi.org/10.2172/928654.

10. Tolleson, Blayne, Matthew Marinella, Christopher Bennett, Hugh Barnaby, Donald Wilson, and Jesse Short. Vector-Matrix Multiplication Engine for Neuromorphic Computation with a CBRAM Crossbar Array. Office of Scientific and Technical Information (OSTI), February 2022. http://dx.doi.org/10.2172/1846087.

