Journal articles on the topic "Sparse Vector Vector Multiplication"

Listed below are the top 50 journal articles for research on the topic "Sparse Vector Vector Multiplication".

The full text of each publication can also be downloaded as a PDF, and its abstract can be read online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Tao, Yuan, Yangdong Deng, Shuai Mu, Zhenzhong Zhang, Mingfa Zhu, Limin Xiao, and Li Ruan. "GPU accelerated sparse matrix-vector multiplication and sparse matrix-transpose vector multiplication." Concurrency and Computation: Practice and Experience 27, no. 14 (October 7, 2014): 3771–89. http://dx.doi.org/10.1002/cpe.3415.

2. Filippone, Salvatore, Valeria Cardellini, Davide Barbieri, and Alessandro Fanfarillo. "Sparse Matrix-Vector Multiplication on GPGPUs." ACM Transactions on Mathematical Software 43, no. 4 (March 23, 2017): 1–49. http://dx.doi.org/10.1145/3017994.

3. Erhel, Jocelyne. "Sparse Matrix Multiplication on Vector Computers." International Journal of High Speed Computing 2, no. 2 (June 1990): 101–16. http://dx.doi.org/10.1142/s012905339000008x.

4. Haque, Sardar Anisul, Shahadat Hossain, and M. Moreno Maza. "Cache friendly sparse matrix-vector multiplication." ACM Communications in Computer Algebra 44, no. 3/4 (January 28, 2011): 111–12. http://dx.doi.org/10.1145/1940475.1940490.

5. Bienz, Amanda, William D. Gropp, and Luke N. Olson. "Node aware sparse matrix–vector multiplication." Journal of Parallel and Distributed Computing 130 (August 2019): 166–78. http://dx.doi.org/10.1016/j.jpdc.2019.03.016.

6. Heath, L. S., C. J. Ribbens, and S. V. Pemmaraju. "Processor-efficient sparse matrix-vector multiplication." Computers & Mathematics with Applications 48, no. 3-4 (August 2004): 589–608. http://dx.doi.org/10.1016/j.camwa.2003.06.009.

7. Yang, Xintian, Srinivasan Parthasarathy, and P. Sadayappan. "Fast sparse matrix-vector multiplication on GPUs." Proceedings of the VLDB Endowment 4, no. 4 (January 2011): 231–42. http://dx.doi.org/10.14778/1938545.1938548.

8. Romero, L. F., and E. L. Zapata. "Data distributions for sparse matrix vector multiplication." Parallel Computing 21, no. 4 (April 1995): 583–605. http://dx.doi.org/10.1016/0167-8191(94)00087-q.

9. Thomas, Rajesh, Victor DeBrunner, and Linda S. DeBrunner. "A Sparse Algorithm for Computing the DFT Using Its Real Eigenvectors." Signals 2, no. 4 (October 11, 2021): 688–705. http://dx.doi.org/10.3390/signals2040041.

Abstract
Direct computation of the discrete Fourier transform (DFT) and its FFT computational algorithms requires multiplication (and addition) of complex numbers. Complex number multiplication requires four real-valued multiplications and two real-valued additions, or three real-valued multiplications and five real-valued additions, as well as the requisite added memory for temporary storage. In this paper, we present a method for computing a DFT via a natively real-valued algorithm that is computationally equivalent to an N = 2^k-length DFT (where k is a positive integer), and is substantially more efficient for any other length N. Our method uses the eigenstructure of the DFT, and the fact that sparse, real-valued eigenvectors can be found and used to advantage. Computation using our method uses only vector dot products and vector-scalar products.
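The operation counts quoted in this abstract correspond to the two standard ways of multiplying complex numbers; a minimal sketch of both (illustrative, not code from the paper):

```python
# Two equivalent complex multiplies: (a + bi)(c + di) = (ac - bd) + (ad + bc)i.

def cmul_4mul(a, b, c, d):
    # Direct form: 4 real multiplications, 2 real additions.
    return a * c - b * d, a * d + b * c

def cmul_3mul(a, b, c, d):
    # Karatsuba-style form: 3 real multiplications, 5 real additions,
    # plus temporaries t1..t3 (the "added memory" the abstract mentions).
    t1 = a * c
    t2 = b * d
    t3 = (a + b) * (c + d)
    return t1 - t2, t3 - t1 - t2

assert cmul_4mul(1.0, 2.0, 3.0, 4.0) == cmul_3mul(1.0, 2.0, 3.0, 4.0)  # (-5, 10)
```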
10. Sun, C. C., J. Götze, H. Y. Jheng, and S. J. Ruan. "Sparse matrix-vector multiplication on network-on-chip." Advances in Radio Science 8 (December 22, 2010): 289–94. http://dx.doi.org/10.5194/ars-8-289-2010.

Abstract
In this paper, we present an idea for performing matrix-vector multiplication using a Network-on-Chip (NoC) architecture. In traditional IC design, on-chip communications have been implemented with dedicated point-to-point interconnections, so regular local data transfer is the central assumption of many parallel implementations. However, in the parallel implementation of sparse matrix-vector multiplication (SMVM), which is the main step of all iterative algorithms for solving systems of linear equations, the required data transfers depend on the sparsity structure of the matrix and can be extremely irregular. Using the NoC architecture makes it possible to deal with an arbitrary structure of the data transfers, i.e. with the irregular structure of sparse matrices. So far, we have implemented the proposed SMVM-NoC architecture in sizes 4×4 and 5×5 in IEEE 754 single-precision floating point using FPGA.
11. Isupov, Konstantin. "Multiple-precision sparse matrix–vector multiplication on GPUs." Journal of Computational Science 61 (May 2022): 101609. http://dx.doi.org/10.1016/j.jocs.2022.101609.

12. Zou, Dan, Yong Dou, Song Guo, and Shice Ni. "High performance sparse matrix-vector multiplication on FPGA." IEICE Electronics Express 10, no. 17 (2013): 20130529. http://dx.doi.org/10.1587/elex.10.20130529.

13. Gao, Jiaquan, Yifei Xia, Renjie Yin, and Guixia He. "Adaptive diagonal sparse matrix-vector multiplication on GPU." Journal of Parallel and Distributed Computing 157 (November 2021): 287–302. http://dx.doi.org/10.1016/j.jpdc.2021.07.007.

14. Yzelman, A. N., and Rob H. Bisseling. "Two-dimensional cache-oblivious sparse matrix–vector multiplication." Parallel Computing 37, no. 12 (December 2011): 806–19. http://dx.doi.org/10.1016/j.parco.2011.08.004.

15. Yilmaz, Buse, Barış Aktemur, María J. Garzarán, Sam Kamin, and Furkan Kiraç. "Autotuning Runtime Specialization for Sparse Matrix-Vector Multiplication." ACM Transactions on Architecture and Code Optimization 13, no. 1 (April 5, 2016): 1–26. http://dx.doi.org/10.1145/2851500.

16. Liu, Sheng, Yasong Cao, and Shuwei Sun. "Mapping and Optimization Method of SpMV on Multi-DSP Accelerator." Electronics 11, no. 22 (November 11, 2022): 3699. http://dx.doi.org/10.3390/electronics11223699.

Abstract
Sparse matrix-vector multiplication (SpMV) computes the product of a sparse matrix and a dense vector; the sparsity of the matrix is often above 90%. The sparse matrix is usually stored in a compressed format to save storage, but this causes irregular accesses to the dense vector, which take a large share of the runtime and degrade SpMV performance. In this study, we design a dedicated channel in the DMA to implement an indirect memory access process that speeds up the SpMV operation. On this basis, we propose six SpMV algorithm schemes and map them to optimize SpMV performance. The results show that the M processor's SpMV performance reached 6.88 GFLOPS, and the average performance of the HPCG benchmark is 2.8 GFLOPS.
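For context, here is a minimal CSR-based SpMV sketch (illustrative only; it does not reproduce the paper's DMA scheme). The gather x[cols[j]] is the irregular, data-dependent access to the dense vector that the abstract describes:

```python
# Minimal CSR sparse matrix-vector product y = A @ x.

def spmv_csr(row_ptr, cols, vals, x):
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[j] * x[cols[j]]   # indirect, data-dependent read of x
    return y

# 3x3 example with ~67% sparsity:
# [[5, 0, 0],
#  [0, 0, 7],
#  [0, 2, 0]]
row_ptr = [0, 1, 2, 3]
cols    = [0, 2, 1]
vals    = [5.0, 7.0, 2.0]
print(spmv_csr(row_ptr, cols, vals, [1.0, 2.0, 3.0]))  # [5.0, 21.0, 4.0]
```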
17. Jao, Nicholas, Akshay Krishna Ramanathan, John Sampson, and Vijaykrishnan Narayanan. "Sparse Vector-Matrix Multiplication Acceleration in Diode-Selected Crossbars." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 29, no. 12 (December 2021): 2186–96. http://dx.doi.org/10.1109/tvlsi.2021.3114186.

18. Kamin, Sam, María Jesús Garzarán, Barış Aktemur, Danqing Xu, Buse Yılmaz, and Zhongbo Chen. "Optimization by runtime specialization for sparse matrix-vector multiplication." ACM SIGPLAN Notices 50, no. 3 (May 12, 2015): 93–102. http://dx.doi.org/10.1145/2775053.2658773.

19. Fernandez, D. M., D. Giannacopoulos, and W. J. Gross. "Efficient Multicore Sparse Matrix-Vector Multiplication for FE Electromagnetics." IEEE Transactions on Magnetics 45, no. 3 (March 2009): 1392–95. http://dx.doi.org/10.1109/tmag.2009.2012640.

20. Shantharam, Manu, Anirban Chatterjee, and Padma Raghavan. "Exploiting dense substructures for fast sparse matrix vector multiplication." International Journal of High Performance Computing Applications 25, no. 3 (August 2011): 328–41. http://dx.doi.org/10.1177/1094342011414748.

21. Gao, Jiaquan, Panpan Qi, and Guixia He. "Efficient CSR-Based Sparse Matrix-Vector Multiplication on GPU." Mathematical Problems in Engineering 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/4596943.

Abstract
Sparse matrix-vector multiplication (SpMV) is an important operation in computational science and needs to be accelerated because it often represents the dominant cost in many widely used iterative methods and eigenvalue problems. We achieve this objective by proposing a novel SpMV algorithm based on the compressed sparse row (CSR) format on the GPU. Our method dynamically assigns different numbers of rows to each thread block and executes different optimization implementations on the basis of the number of rows each block involves. Accesses to the CSR arrays are fully coalesced, and the GPU's DRAM bandwidth is efficiently utilized by loading data into shared memory, which alleviates the bottleneck of many existing CSR-based algorithms (i.e., CSR-scalar and CSR-vector). Test results on C2050 and K20c GPUs show that our method outperforms the perfect-CSR algorithm that inspired our work, the vendor-tuned CUSPARSE V6.5 and CUSP V0.5.1, and three popular algorithms: clSpMV, CSR5, and CSR-Adaptive.
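The dynamic row assignment described above can be sketched as a host-side partitioning pass over the CSR row pointer: group rows so that each thread block receives roughly the same number of nonzeros. This is an illustrative approximation under assumed parameters, not the paper's implementation:

```python
# Illustrative sketch: group CSR rows into blocks of roughly equal nonzero
# count, the kind of dynamic row-to-thread-block assignment the abstract
# describes. Function name and block size are hypothetical.

def partition_rows(row_ptr, nnz_per_block):
    blocks, start = [], 0
    for i in range(1, len(row_ptr)):
        if row_ptr[i] - row_ptr[start] >= nnz_per_block:
            blocks.append((start, i))   # rows [start, i) form one block
            start = i
    if start < len(row_ptr) - 1:
        blocks.append((start, len(row_ptr) - 1))
    return blocks

# Rows with 1, 8, 1, 1, 5 nonzeros, ~8 nonzeros per block:
print(partition_rows([0, 1, 9, 10, 11, 16], 8))  # [(0, 2), (2, 5)]
```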
22. Maggioni, Marco, and Tanya Berger-Wolf. "Optimization techniques for sparse matrix–vector multiplication on GPUs." Journal of Parallel and Distributed Computing 93-94 (July 2016): 66–86. http://dx.doi.org/10.1016/j.jpdc.2016.03.011.

23. Geus, Roman, and Stefan Röllin. "Towards a fast parallel sparse symmetric matrix–vector multiplication." Parallel Computing 27, no. 7 (June 2001): 883–96. http://dx.doi.org/10.1016/s0167-8191(01)00073-4.

24. Zardoshti, Pantea, Farshad Khunjush, and Hamid Sarbazi-Azad. "Adaptive sparse matrix representation for efficient matrix–vector multiplication." Journal of Supercomputing 72, no. 9 (November 28, 2015): 3366–86. http://dx.doi.org/10.1007/s11227-015-1571-0.

25. Zhang, Jilin, Enyi Liu, Jian Wan, Yongjian Ren, Miao Yue, and Jue Wang. "Implementing Sparse Matrix-Vector Multiplication with QCSR on GPU." Applied Mathematics & Information Sciences 7, no. 2 (March 1, 2013): 473–82. http://dx.doi.org/10.12785/amis/070207.

26. Feng, Xiaowen, Hai Jin, Ran Zheng, Zhiyuan Shao, and Lei Zhu. "A segment-based sparse matrix-vector multiplication on CUDA." Concurrency and Computation: Practice and Experience 26, no. 1 (December 7, 2012): 271–86. http://dx.doi.org/10.1002/cpe.2978.

27. Neves, Samuel, and Filipe Araujo. "Straight-line programs for fast sparse matrix-vector multiplication." Concurrency and Computation: Practice and Experience 27, no. 13 (January 28, 2014): 3245–61. http://dx.doi.org/10.1002/cpe.3211.

28. Nastea, Sorin G., Ophir Frieder, and Tarek El-Ghazawi. "Load-Balanced Sparse Matrix–Vector Multiplication on Parallel Computers." Journal of Parallel and Distributed Computing 46, no. 2 (November 1997): 180–93. http://dx.doi.org/10.1006/jpdc.1997.1361.

29. Mukaddes, Abul Mukid Mohammad, Masao Ogino, and Ryuji Shioya. "Performance Evaluation of Domain Decomposition Method with Sparse Matrix Storage Schemes in Modern Supercomputer." International Journal of Computational Methods 11, supp01 (November 2014): 1344007. http://dx.doi.org/10.1142/s0219876213440076.

Abstract
The use of proper data structures with corresponding algorithms is critical to achieving good performance in scientific computing. The need for sparse matrix-vector multiplication in each iteration of the iterative domain decomposition method has led to the implementation of a variety of sparse matrix storage formats. Many storage formats have been presented to represent sparse matrices and have been integrated into the method. In this paper, the storage efficiency of these sparse matrix storage formats is evaluated and compared, and the performance of the sparse matrix-vector multiplication used in the domain decomposition method is considered. Based on our experiments on the FX10 supercomputer system, some useful conclusions are extracted that can serve as guidelines for the optimization of the domain decomposition method.
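As a small illustration of how storage efficiency can differ between formats (the exact formats evaluated in the paper may differ), compare the coordinate (COO) and compressed sparse row (CSR) representations of the same 3×3 matrix:

```python
# Storage cost of COO vs. CSR for the matrix
# [[10, 0, 0],
#  [ 3, 9, 0],
#  [ 0, 7, 8]]

# COO: one (row, col, value) triple per nonzero -> 3 * nnz stored entries.
coo = [(0, 0, 10.0), (1, 0, 3.0), (1, 1, 9.0), (2, 1, 7.0), (2, 2, 8.0)]

# CSR: value and column index per nonzero, plus one row pointer per row + 1
# -> 2 * nnz + (n_rows + 1) stored entries, usually the more compact choice.
vals    = [10.0, 3.0, 9.0, 7.0, 8.0]
cols    = [0, 0, 1, 1, 2]
row_ptr = [0, 1, 3, 5]

nnz, n_rows = len(vals), len(row_ptr) - 1
print("COO entries:", 3 * nnz)                # 15
print("CSR entries:", 2 * nnz + n_rows + 1)   # 14
```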
30. He, Guixia, and Jiaquan Gao. "A Novel CSR-Based Sparse Matrix-Vector Multiplication on GPUs." Mathematical Problems in Engineering 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/8471283.

Abstract
Sparse matrix-vector multiplication (SpMV) is an important operation in scientific computations. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMV on graphics processing units (GPUs), for example CSR-scalar and CSR-vector, usually has poor performance due to irregular memory access patterns. This motivates us to propose a perfect CSR-based SpMV on the GPU, called PCSR. PCSR involves two kernels and accesses CSR arrays in a fully coalesced manner by introducing a middle array, which greatly alleviates the deficiencies of CSR-scalar (rare coalescing) and CSR-vector (partial coalescing). Test results on a single C2050 GPU show that PCSR outperforms CSR-scalar, CSR-vector, and the CSRMV and HYBMV routines in the vendor-tuned CUSPARSE library, and is comparable with a recently proposed CSR-based algorithm, CSR-Adaptive. Furthermore, we extend PCSR from a single GPU to multiple GPUs. Experimental results on four C2050 GPUs show that, whether or not the communication between GPUs is considered, PCSR on multiple GPUs achieves good performance and high parallel efficiency.
31. Yzelman, A. N., and Rob H. Bisseling. "Cache-Oblivious Sparse Matrix–Vector Multiplication by Using Sparse Matrix Partitioning Methods." SIAM Journal on Scientific Computing 31, no. 4 (January 2009): 3128–54. http://dx.doi.org/10.1137/080733243.

32. Liu, Yongchao, and Bertil Schmidt. "LightSpMV: Faster CUDA-Compatible Sparse Matrix-Vector Multiplication Using Compressed Sparse Rows." Journal of Signal Processing Systems 90, no. 1 (January 10, 2017): 69–86. http://dx.doi.org/10.1007/s11265-016-1216-4.

33. Giannoula, Christina, Ivan Fernandez, Juan Gómez-Luna, Nectarios Koziris, Georgios Goumas, and Onur Mutlu. "Towards Efficient Sparse Matrix Vector Multiplication on Real Processing-In-Memory Architectures." ACM SIGMETRICS Performance Evaluation Review 50, no. 1 (June 20, 2022): 33–34. http://dx.doi.org/10.1145/3547353.3522661.

Abstract
Several manufacturers have already started to commercialize near-bank Processing-In-Memory (PIM) architectures, after decades of research efforts. Near-bank PIM architectures place simple cores close to DRAM banks. Recent research demonstrates that they can yield significant performance and energy improvements in parallel applications by alleviating data access costs. Real PIM systems can provide high levels of parallelism, large aggregate memory bandwidth, and low memory access latency, thereby being a good fit to accelerate the Sparse Matrix Vector Multiplication (SpMV) kernel. SpMV has been characterized as one of the most significant and thoroughly studied scientific computation kernels. It is primarily a memory-bound kernel with intensive memory accesses due to its algorithmic nature, the compressed matrix format used, and the sparsity patterns of the input matrices. This paper provides the first comprehensive analysis of SpMV on a real-world PIM architecture, and presents SparseP, the first SpMV library for real PIM architectures. We make two key contributions. First, we design efficient SpMV algorithms to accelerate the SpMV kernel in current and future PIM systems, while covering a wide variety of sparse matrices with diverse sparsity patterns. Second, we provide the first comprehensive analysis of SpMV on a real PIM architecture. Specifically, we conduct our rigorous experimental analysis of SpMV kernels in the UPMEM PIM system, the first publicly available real-world PIM architecture. Our extensive evaluation provides new insights and recommendations for software designers and hardware architects to efficiently accelerate the SpMV kernel on real PIM systems. For more information about our thorough characterization of the SpMV PIM execution, results, insights, and the open-source SparseP software package [21], we refer the reader to the full version of the paper [3, 4]. The SparseP software package is publicly and freely available at https://github.com/CMU-SAFARI/SparseP.
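A back-of-envelope estimate, under assumed sizes not taken from the paper, shows why SpMV is commonly called memory-bound:

```python
# Rough arithmetic intensity of double-precision CSR SpMV (assumed layout:
# 8-byte value + 4-byte column index per nonzero; x/y traffic ignored).
bytes_per_nnz = 8 + 4          # data moved per nonzero
flops_per_nnz = 2              # one multiply + one add per nonzero
print(flops_per_nnz / bytes_per_nnz)  # ~0.17 flop/byte, far below the
# flop/byte balance of modern processors, so bandwidth dominates runtime.
```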
34. Karsavuran, M. Ozan, Kadir Akbudak, and Cevdet Aykanat. "Locality-Aware Parallel Sparse Matrix-Vector and Matrix-Transpose-Vector Multiplication on Many-Core Processors." IEEE Transactions on Parallel and Distributed Systems 27, no. 6 (June 1, 2016): 1713–26. http://dx.doi.org/10.1109/tpds.2015.2453970.

35. Dubois, David, Andrew Dubois, Thomas Boorman, Carolyn Connor, and Steve Poole. "Sparse Matrix-Vector Multiplication on a Reconfigurable Supercomputer with Application." ACM Transactions on Reconfigurable Technology and Systems 3, no. 1 (January 2010): 1–31. http://dx.doi.org/10.1145/1661438.1661440.

36. Catalyurek, U. V., and C. Aykanat. "Hypergraph-partitioning-based decomposition for parallel sparse-matrix vector multiplication." IEEE Transactions on Parallel and Distributed Systems 10, no. 7 (July 1999): 673–93. http://dx.doi.org/10.1109/71.780863.

37. Toledo, S. "Improving the memory-system performance of sparse-matrix vector multiplication." IBM Journal of Research and Development 41, no. 6 (November 1997): 711–25. http://dx.doi.org/10.1147/rd.416.0711.

38. Williams, Samuel, Leonid Oliker, Richard Vuduc, John Shalf, Katherine Yelick, and James Demmel. "Optimization of sparse matrix–vector multiplication on emerging multicore platforms." Parallel Computing 35, no. 3 (March 2009): 178–94. http://dx.doi.org/10.1016/j.parco.2008.12.006.

39. Peters, Alexander. "Sparse matrix vector multiplication techniques on the IBM 3090 VF." Parallel Computing 17, no. 12 (December 1991): 1409–24. http://dx.doi.org/10.1016/s0167-8191(05)80007-9.

40. Li, ShiGang, ChangJun Hu, JunChao Zhang, and YunQuan Zhang. "Automatic tuning of sparse matrix-vector multiplication on multicore clusters." Science China Information Sciences 58, no. 9 (June 24, 2015): 1–14. http://dx.doi.org/10.1007/s11432-014-5254-x.

41. Dehn, T., M. Eiermann, K. Giebermann, and V. Sperling. "Structured sparse matrix-vector multiplication on massively parallel SIMD architectures." Parallel Computing 21, no. 12 (December 1995): 1867–94. http://dx.doi.org/10.1016/0167-8191(95)00055-0.

42. Zeiser, Andreas. "Fast Matrix-Vector Multiplication in the Sparse-Grid Galerkin Method." Journal of Scientific Computing 47, no. 3 (November 26, 2010): 328–46. http://dx.doi.org/10.1007/s10915-010-9438-2.

43. Yang, Bing, Shuo Gu, Tong-Xiang Gu, Cong Zheng, and Xing-Ping Liu. "Parallel Multicore CSB Format and Its Sparse Matrix Vector Multiplication." Advances in Linear Algebra & Matrix Theory 4, no. 1 (2014): 1–8. http://dx.doi.org/10.4236/alamt.2014.41001.

44. Ahmad, Khalid, Hari Sundar, and Mary Hall. "Data-driven Mixed Precision Sparse Matrix Vector Multiplication for GPUs." ACM Transactions on Architecture and Code Optimization 16, no. 4 (January 10, 2020): 1–24. http://dx.doi.org/10.1145/3371275.

45. Tao, Yuan, and Huang Zhi-Bin. "Shuffle Reduction Based Sparse Matrix-Vector Multiplication on Kepler GPU." International Journal of Grid and Distributed Computing 9, no. 10 (October 31, 2016): 99–106. http://dx.doi.org/10.14257/ijgdc.2016.9.10.09.

46. Dehnavi, Maryam Mehri, David M. Fernandez, and Dennis Giannacopoulos. "Finite-Element Sparse Matrix Vector Multiplication on Graphic Processing Units." IEEE Transactions on Magnetics 46, no. 8 (August 2010): 2982–85. http://dx.doi.org/10.1109/tmag.2010.2043511.

47. Liang, Yun, Wai Teng Tang, Ruizhe Zhao, Mian Lu, Huynh Phung Huynh, and Rick Siow Mong Goh. "Scale-Free Sparse Matrix-Vector Multiplication on Many-Core Architectures." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 36, no. 12 (December 2017): 2106–19. http://dx.doi.org/10.1109/tcad.2017.2681072.

48. Aktemur, Barış. "A sparse matrix-vector multiplication method with low preprocessing cost." Concurrency and Computation: Practice and Experience 30, no. 21 (May 25, 2018): e4701. http://dx.doi.org/10.1002/cpe.4701.

49. Chen, Xinhai, Peizhen Xie, Lihua Chi, Jie Liu, and Chunye Gong. "An efficient SIMD compression format for sparse matrix-vector multiplication." Concurrency and Computation: Practice and Experience 30, no. 23 (June 29, 2018): e4800. http://dx.doi.org/10.1002/cpe.4800.

50. Arai, Kenichi, and Hiroyuki Okazaki. "N-Dimensional Binary Vector Spaces." Formalized Mathematics 21, no. 2 (June 1, 2013): 75–81. http://dx.doi.org/10.2478/forma-2013-0008.

Abstract
The binary set {0, 1} together with modulo-2 addition and multiplication is called a binary field, which is denoted by F2. The binary field F2 is defined in [1]. A vector space over F2 is called a binary vector space. The set of all binary vectors of length n forms an n-dimensional vector space Vn over F2. Binary fields and n-dimensional binary vector spaces play an important role in practical computer science, for example in coding theory [15] and cryptology. In cryptology, binary fields and n-dimensional binary vector spaces are very important in proving the security of cryptographic systems [13]. In this article, we define the n-dimensional binary vector space Vn. Moreover, we formalize some facts about the n-dimensional binary vector space Vn.
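As a concrete illustration of the arithmetic being formalized (a minimal sketch, not the article's Mizar development): over F2, vector addition is componentwise XOR and the inner product is componentwise AND followed by an XOR reduction:

```python
# Binary vectors over F2: addition is XOR, multiplication is AND.

def add_f2(u, v):
    return [a ^ b for a, b in zip(u, v)]

def dot_f2(u, v):
    # Inner product over F2: AND componentwise, then reduce modulo 2 via XOR.
    s = 0
    for a, b in zip(u, v):
        s ^= a & b
    return s

u, v = [1, 0, 1, 1], [1, 1, 0, 1]
print(add_f2(u, v))  # [0, 1, 1, 0]
print(dot_f2(u, v))  # 1*1 + 0*1 + 1*0 + 1*1 = 2 mod 2 = 0
```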