
Journal articles on the topic 'Multiplication de matrices creuses'


Consult the top 50 journal articles for your research on the topic 'Multiplication de matrices creuses.'


1

Keles, Hasan. "Multiplication of Matrices." Indonesian Journal of Mathematics and Applications 2, no. 1 (March 31, 2024): 1–8. http://dx.doi.org/10.21776/ub.ijma.2024.002.01.1.

Abstract:
This study concerns multiplication of matrices. Multiplication of real numbers, which can be written along a line, is two-way: direction is not an influential factor even when the factors are switched, for example $3\cdot 2=6$ and $2\cdot 3=6$. For matrices, this makes it necessary to distinguish left and right multiplication. Left multiplication is already defined; it is the familiar matrix multiplication and has been used in studies from the definition of this operation until today. The most insurmountable difficulty here is that matrices do not have the commutativity property under this operation. When $AB$ is written, the left product is taken into account and the matrix $A$ is made to be the effective one; this left product is denoted by $AB$. The right multiplication defined in this study is denoted by $\underleftarrow{AB}$, and this operation is seen to be compatible with left multiplication. The commutativity property of matrices is reinvestigated with this approach. The relation between the right multiplication and the Cracovian product of J. Kociński (2004) is also given.
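The distinction the abstract draws rests on the fact that the usual matrix product, unlike multiplication of real numbers, is not commutative. A minimal pure-Python sketch (the `matmul` helper and the 2×2 matrices are illustrative choices, not taken from the paper):

```python
def matmul(A, B):
    """Standard (left) product AB: entry (i, j) is row i of A dotted
    with column j of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Real numbers commute: 3*2 == 2*3.  Matrices generally do not:
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]] -- the left product AB
print(matmul(B, A))  # [[3, 4], [1, 2]] -- multiplying from the other side
```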
2

Roesler, Friedrich. "Generalized Matrices." Canadian Journal of Mathematics 41, no. 3 (June 1, 1989): 556–76. http://dx.doi.org/10.4153/cjm-1989-024-5.

Abstract:
Similar to the multiplication of square matrices, one can define multiplications for three-dimensional matrices, i.e., for the "cubes" of the vector space, where $I$ denotes a finite set of indices and $K$ is any field. The multiplications imitate matrix multiplication: to obtain the coefficient $\gamma_{xyz}$ of the product $(\gamma_{xyz}) = (\alpha_{xyz})(\beta_{xyz})$, all coefficients $\alpha_{xij}$, $i, j \in I$, of the horizontal plane with index $x$ of $(\alpha_{xyz})$ are multiplied with certain coefficients $\beta_{hgz}$ of the vertical plane with index $z$ of $(\beta_{xyz})$, and the results are added.
3

Bair, J. "72.34 Multiplication by Diagonal Matrices." Mathematical Gazette 72, no. 461 (October 1988): 228. http://dx.doi.org/10.2307/3618262.

4

Sowa, Artur. "Factorizing matrices by Dirichlet multiplication." Linear Algebra and its Applications 438, no. 5 (March 2013): 2385–93. http://dx.doi.org/10.1016/j.laa.2012.09.021.

5

Councilman, Samuel. "Sharing Teaching Ideas: Bisymmetric Matrices: Some Elementary New Problems." Mathematics Teacher 82, no. 8 (November 1989): 622–23. http://dx.doi.org/10.5951/mt.82.8.0622.

Abstract:
In introductory linear algebra courses one continually seeks interesting sets of matrices that are closed under the operations of matrix addition, scalar multiplication, and if possible, matrix multiplication. Most texts mention symmetric and antisymmetric matrices and ask the reader to show that these sets are closed under matrix addition and scalar multiplication but fail to be closed under matrix multiplication. Few textbooks, if any, suggest an investigation of the set of matrices that are symmetric with respect to both diagonals, namely bisymmetric matrices. The following is a sequence of relatively straightforward problems that can be used as homework, class discussion, or even examination material in elementary linear algebra classes.
6

Ignatenko, M. V., and L. A. Yanovich. "On the theory of interpolation of functions on sets of matrices with the Hadamard multiplication." Proceedings of the National Academy of Sciences of Belarus. Physics and Mathematics Series 58, no. 3 (October 12, 2022): 263–79. http://dx.doi.org/10.29235/1561-2430-2022-58-3-263-279.

Abstract:
This article is devoted to the problem of interpolation of functions defined on sets of matrices with multiplication in the sense of Hadamard and is mainly an overview. It contains some known information about the Hadamard matrix multiplication and its properties. For functions defined on sets of square and rectangular matrices, various interpolation polynomials of the Lagrange type, containing both the operation of matrix multiplication in the Hadamard sense and the usual matrix product, are given. In the case of analytic functions defined on sets of square matrices with the Hadamard multiplication, some analogues of the Lagrange type trigonometric interpolation formulas are considered. Matrix analogues of splines and the Cauchy integral are given on sets of matrices with the Hadamard multiplication. Some of its applications in the theory of interpolation are considered. Theorems on the convergence of some Lagrange interpolation processes for analytic functions defined on a set of matrices with multiplication in the Hadamard sense are proved. The results obtained are based on the application of some well-known provisions of the theory of interpolation of scalar functions. Data presentation is illustrated by a number of examples.
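For readers unfamiliar with the operation, the Hadamard product multiplies matrices entry by entry, so, unlike the ordinary matrix product, it is commutative. A minimal illustrative sketch (not from the article):

```python
def hadamard(A, B):
    """Hadamard (entrywise) product; A and B must have the same shape."""
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(hadamard(A, B))  # [[5, 12], [21, 32]]
# Unlike the ordinary matrix product, the Hadamard product commutes:
print(hadamard(B, A) == hadamard(A, B))  # True
```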
7

Abobala, Mohammad. "On Refined Neutrosophic Matrices and Their Application in Refined Neutrosophic Algebraic Equations." Journal of Mathematics 2021 (February 13, 2021): 1–5. http://dx.doi.org/10.1155/2021/5531093.

Abstract:
The objective of this paper is to introduce the concept of refined neutrosophic matrices and to study their algebraic operations and properties, such as multiplication, addition, and the ring property. It also determines the necessary and sufficient condition for the invertibility of these matrices with respect to multiplication. In addition, nilpotency and idempotency properties are discussed.
8

Waterhouse, William C. "Circulant-style matrices closed under multiplication." Linear and Multilinear Algebra 18, no. 3 (November 1985): 197–206. http://dx.doi.org/10.1080/03081088508817686.

9

Theeracheep, Siraphob, and Jaruloj Chongstitvatana. "Multiplication of medium-density matrices using TensorFlow on multicore CPUs." Tehnički glasnik 13, no. 4 (December 11, 2019): 286–90. http://dx.doi.org/10.31803/tg-20191104183930.

Abstract:
Matrix multiplication is an essential part of many applications, such as linear algebra, image processing and machine learning. One platform used in such applications is TensorFlow, which is a machine learning library whose structure is based on dataflow programming paradigm. In this work, a method for multiplication of medium-density matrices on multicore CPUs using TensorFlow platform is proposed. This method, called tbt_matmul, utilizes TensorFlow built-in methods tf.matmul and tf.sparse_matmul. By partitioning each input matrix into four smaller sub-matrices, called tiles, and applying an appropriate multiplication method to each pair depending on their density, the proposed method outperforms the built-in methods for matrices of medium density and matrices of significantly uneven distribution of non-zeros.
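The tiling idea described above — partition each operand into four tiles and dispatch each tile pair to a dense or sparse kernel depending on density — can be sketched as follows. This is a hypothetical pure-Python illustration of the dispatch scheme, not the authors' TensorFlow-based tbt_matmul; the two kernels, the threshold, and the even-sized square matrices are all assumptions:

```python
def dense_mm(A, B):
    """Plain triple-loop product, standing in for a dense kernel."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def sparse_mm(A, B):
    """Product that skips zero entries, standing in for a sparse kernel."""
    C = [[0] * len(B[0]) for _ in A]
    for i, row in enumerate(A):
        for k, a in enumerate(row):
            if a != 0:
                for j, b in enumerate(B[k]):
                    if b != 0:
                        C[i][j] += a * b
    return C

def density(M):
    return sum(1 for r in M for x in r if x != 0) / sum(len(r) for r in M)

def tbt_like_mm(A, B, threshold=0.5):
    """Split A and B into 2x2 grids of tiles (even sizes assumed) and pick
    a kernel for each tile pair based on density -- the dispatch idea only."""
    n = len(A) // 2
    def tiles(M):
        return ([r[:n] for r in M[:n]], [r[n:] for r in M[:n]],
                [r[:n] for r in M[n:]], [r[n:] for r in M[n:]])
    def mm(X, Y):
        kernel = sparse_mm if min(density(X), density(Y)) < threshold else dense_mm
        return kernel(X, Y)
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = tiles(A)
    B11, B12, B21, B22 = tiles(B)
    top = [r1 + r2 for r1, r2 in zip(add(mm(A11, B11), mm(A12, B21)),
                                     add(mm(A11, B12), mm(A12, B22)))]
    bot = [r1 + r2 for r1, r2 in zip(add(mm(A21, B11), mm(A22, B21)),
                                     add(mm(A21, B12), mm(A22, B22)))]
    return top + bot
```

Whatever kernel each tile pair gets, the block identity guarantees the same result as a single dense multiplication; only the cost changes.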
10

Mangngiri, Itsar, Qonita Qurrota A’yun, and Wasono Wasono. "AN ORDER-P TENSOR MULTIPLICATION WITH CIRCULANT STRUCTURE." BAREKENG: Jurnal Ilmu Matematika dan Terapan 17, no. 4 (December 19, 2023): 2293–304. http://dx.doi.org/10.30598/barekengvol17iss4pp2293-2304.

Abstract:
Research on mathematical operations involving multidimensional arrays, or tensors, has increased along with the growing number of applications involving multidimensional data analysis. The t-product of order-$p$ tensors is one such tensor multiplication. The t-product is defined using two operations that transform the multiplication of two tensors into the multiplication of two block matrices; the result is a block matrix that is then transformed back into a tensor. The composition of the two operations used in the definition of the t-product can transform a tensor into a block circulant matrix. This research discusses the t-product of tensors based on their circulant structure. First, we present a theorem on the t-product of tensors involving circulant matrices. Second, we use the definitions of the identity, transpose, and inverse tensors under the t-product operation and investigate their relationship with circulant matrices. Third, we demonstrate the computation of the t-product involving circulant matrices. The discussion shows that the t-product of tensors fundamentally involves circulant matrix multiplication, i.e., the operation at its core relies on multiplying circulant matrices. This implies that the t-product operation on tensors has properties analogous to standard matrix multiplication. Furthermore, since the t-product fundamentally involves circulant matrix multiplication, its computation can be simplified by first diagonalizing the circulant matrix using the discrete Fourier transform matrix. Finally, based on the obtained results, an algorithm is constructed in MATLAB to calculate the t-product.
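The diagonalization shortcut mentioned at the end — multiplying by a circulant matrix by taking DFTs, multiplying pointwise, and transforming back — can be sketched in pure Python. A naive O(n²) DFT is used for clarity; this is an illustration of the circulant/DFT fact, not the paper's MATLAB algorithm:

```python
import cmath

def dft(v, inverse=False):
    """Naive O(n^2) discrete Fourier transform, enough for a sketch."""
    n = len(v)
    s = 1 if inverse else -1
    out = [sum(v[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [x / n for x in out] if inverse else out

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by the vector x.
    Since C = F^-1 diag(F c) F, this is idft(dft(c) * dft(x)) pointwise."""
    y = dft([a * b for a, b in zip(dft(c), dft(x))], inverse=True)
    return [v.real for v in y]

# First column [1, 2, 3] gives C = [[1, 3, 2], [2, 1, 3], [3, 2, 1]]:
print(circulant_matvec([1, 2, 3], [4, 5, 6]))  # ≈ [31.0, 31.0, 28.0]
```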
11

Safonov, Ilia, Anton Kornilov, and Daria Makienko. "An Approach for Matrix Multiplication of 32-Bit Fixed Point Numbers by Means of 16-Bit SIMD Instructions on DSP." Electronics 12, no. 1 (December 25, 2022): 78. http://dx.doi.org/10.3390/electronics12010078.

Abstract:
Matrix multiplication is an important operation for many engineering applications. Sometimes new features that include matrix multiplication should be added to existing and even out-of-date embedded platforms. In this paper, an unusual problem is considered: how to implement matrix multiplication of 32-bit signed integers and fixed-point numbers on DSP having SIMD instructions for 16-bit integers only. For examined tasks, matrix size may vary from several tens to two hundred. The proposed mathematical approach for dense rectangular matrix multiplication of 32-bit numbers comprises decomposition of 32-bit matrices to matrices of 16-bit numbers, four matrix multiplications of 16-bit unsigned integers via outer product, and correction of outcome for signed integers and fixed point numbers. Several tricks for performance optimization are analyzed. In addition, ways for block-wise and parallel implementations are described. An implementation of the proposed method by means of 16-bit vector instructions is faster than matrix multiplication using 32-bit scalar instructions and demonstrates performance close to a theoretically achievable limit. The described technique can be generalized for matrix multiplication of n-bit integers and fixed point numbers via handling with matrices of n/2-bit integers. In conclusion, recommendations for practitioners who work on implementation of matrix multiplication for various DSP are presented.
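The core decomposition — split each 32-bit operand into 16-bit halves and recombine four half-width matrix products with shifts — can be sketched as follows. This illustration uses Python integers and unsigned values only; the paper's signed/fixed-point corrections, outer-product formulation, and SIMD details are omitted:

```python
MASK = (1 << 16) - 1

def split16(M):
    """Split each unsigned 32-bit entry into high and low 16-bit halves."""
    hi = [[x >> 16 for x in row] for row in M]
    lo = [[x & MASK for x in row] for row in M]
    return hi, lo

def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mm32_via16(A, B):
    """AB = Ah*Bh*2^32 + (Ah*Bl + Al*Bh)*2^16 + Al*Bl,
    i.e. four products of 16-bit matrices recombined with shifts."""
    Ah, Al = split16(A)
    Bh, Bl = split16(B)
    hh, hl, lh, ll = mm(Ah, Bh), mm(Ah, Bl), mm(Al, Bh), mm(Al, Bl)
    return [[(hh[i][j] << 32) + ((hl[i][j] + lh[i][j]) << 16) + ll[i][j]
             for j in range(len(hh[0]))] for i in range(len(hh))]
```

The identity follows by expanding each entry $x = x_h 2^{16} + x_l$ inside the dot products, which is exactly the generalization to $n$-bit entries via $n/2$-bit pieces that the abstract mentions.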
12

Orbach, Ron, Sivan Lilienthal, Michael Klein, R. D. Levine, Francoise Remacle, and Itamar Willner. "Ternary DNA computing using 3 × 3 multiplication matrices." Chemical Science 6, no. 2 (2015): 1288–92. http://dx.doi.org/10.1039/c4sc02930e.

Abstract:
Ternary computing, beyond Boolean logic, is anticipated to enhance computational complexity. DNA-based ternary computing is demonstrated by the assembly of a 3 × 3 multiplication table, and the parallel operation of three 3 × 3 multiplication matrices is highlighted.
13

Lin, Chunxu, Wensheng Luo, Yixiang Fang, Chenhao Ma, Xilin Liu, and Yuchi Ma. "On Efficient Large Sparse Matrix Chain Multiplication." Proceedings of the ACM on Management of Data 2, no. 3 (May 29, 2024): 1–27. http://dx.doi.org/10.1145/3654959.

Abstract:
Sparse matrices are often used to model the interactions among different objects and they are prevalent in many areas including e-commerce, social network, and biology. As one of the fundamental matrix operations, the sparse matrix chain multiplication (SMCM) aims to efficiently multiply a chain of sparse matrices, which has found various real-world applications in areas like network analysis, data mining, and machine learning. The efficiency of SMCM largely hinges on the order of multiplying the matrices, which further relies on the accurate estimation of the sparsity values of intermediate matrices. Existing matrix sparsity estimators often struggle with large sparse matrices, because they suffer from the accuracy issue in both theory and practice. To enable efficient SMCM, in this paper we introduce a novel row-wise sparsity estimator (RS-estimator), a straightforward yet effective estimator that leverages matrix structural properties to achieve efficient, accurate, and theoretically guaranteed sparsity estimation. Based on the RS-estimator, we propose a novel ordering algorithm for determining a good order of efficient SMCM. We further develop an efficient parallel SMCM algorithm by effectively utilizing multiple CPU threads. We have conducted experiments by multiplying various chains of large sparse matrices extracted from five real-world large graph datasets, and the results demonstrate the effectiveness and efficiency of our proposed methods. In particular, our SMCM algorithm is up to three orders of magnitude faster than the state-of-the-art algorithms.
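For contrast with the sparsity-aware ordering described above, the classical ordering step for dense chains is the textbook O(n³) dynamic program over the dimension vector. This is the standard algorithm, not the paper's RS-estimator-based method, which replaces the dense cost model with estimated sparsity:

```python
def chain_order(dims):
    """Classic dynamic program: matrix i has shape dims[i] x dims[i+1].
    Returns the minimum number of scalar multiplications and the split table."""
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j], split[i][j] = min(
                (cost[i][k] + cost[k + 1][j]
                 + dims[i] * dims[k + 1] * dims[j + 1], k)
                for k in range(i, j))
    return cost[0][n - 1], split

# (10x30)(30x5)(5x60): (AB)C costs 10*30*5 + 10*5*60 = 4500,
# while A(BC) costs 30*5*60 + 10*30*60 = 27000.
print(chain_order([10, 30, 5, 60])[0])  # 4500
```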
14

Hallman-Thrasher, Allyson, Erin T. Litchfield, and Kevin E. Dael. "Original Recipes for Matrix Multiplication." Mathematics Teacher 110, no. 3 (October 2016): 182–90. http://dx.doi.org/10.5951/mathteacher.110.3.0182.

15

Mechal Fheed Alslman, Nassr Aldin Ide, and Ahmad Zakzak. "Building matrixes of higher order to achieve the special commutative multiplication and its applications in cryptography: بناء مصفوفات تبديلية من مراتب عليا وتطبيقاتها في التشفير." Journal of natural sciences, life and applied sciences 5, no. 3 (September 30, 2021): 16–1. http://dx.doi.org/10.26389/ajsrp.c260521.

Abstract:
In this paper, we introduce a method for building matrices that satisfy the commutative property of multiplication on the basis of circulant matrices, as each of these matrices can be divided into four circulant matrices; we can also build matrices of higher order that satisfy the commutative property of multiplication and are not necessarily divided into circulant matrices. Using these matrices, we provide a way to securely exchange a secret encryption key, which is a square matrix, over open communication channels, and then use this key to exchange encrypted messages between two parties. Moreover, using these matrices we also offer a public-key encryption method, whereby the two parties exchange encrypted messages without previously agreeing on a common secret key.
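The construction rests on the fact that circulant matrices of the same size commute with one another (each is a polynomial in the cyclic-shift matrix). A minimal demonstration of that fact only; the paper's higher-order block construction and key-exchange protocol are not shown:

```python
def circulant(first_row):
    """Circulant matrix: each row is the previous one shifted right by one."""
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = circulant([1, 2, 3])
B = circulant([4, 0, 5])
print(mm(A, B) == mm(B, A))  # True: circulants of equal size commute
```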
16

Borna, Keivan, and Sohrab Fard. "A note on the multiplication of sparse matrices." Open Computer Science 4, no. 1 (January 1, 2014): 1–11. http://dx.doi.org/10.2478/s13537-014-0201-x.

Abstract:
We present a practical algorithm for multiplication of two sparse matrices. In fact, if $A$ and $B$ are two matrices of size $n$ with $m_1$ and $m_2$ non-zero elements respectively, then our algorithm performs $O(\min\{m_1 n, m_2 n, m_1 m_2\})$ multiplications and $O(k)$ additions, where $k$ is the number of non-zero elements in the tiny matrices obtained by the columns-times-rows matrix multiplication method. Note that in the useful case, $k \le m_2 n$. However, in Proposition 3.3 and Proposition 3.4 we obtain tight upper bounds for the complexity of additions. We also study the complexity of multiplication in a practical case where the non-zero elements of $A$ (resp. $B$) are distributed independently with uniform distribution among its columns (resp. rows), and show that the expected number of multiplications is $O(m_1 m_2/n)$. Finally, a comparison of the number of required multiplications in naïve matrix multiplication, Strassen's method, and our algorithm is given.
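The columns-times-rows method referred to above computes the product as a sum of outer products, one per column of $A$ paired with the corresponding row of $B$; the "tiny matrices" are those outer products, and zero columns or rows contribute nothing. A small illustrative sketch (not the authors' implementation):

```python
def sparse_outer_mm(A, B):
    """Compute A*B as a sum of outer products: column k of A times row k of B.
    Zero entries are skipped, which is where sparsity pays off."""
    n, p = len(A), len(B[0])
    C = [[0] * p for _ in range(n)]
    for k in range(len(B)):
        col = [(i, A[i][k]) for i in range(n) if A[i][k] != 0]
        row = [(j, B[k][j]) for j in range(p) if B[k][j] != 0]
        for i, a in col:          # each (col, row) pair is one "tiny matrix"
            for j, b in row:
                C[i][j] += a * b
    return C

A = [[0, 2, 0], [1, 0, 0], [0, 0, 3]]
B = [[0, 4], [5, 0], [0, 6]]
print(sparse_outer_mm(A, B))  # [[10, 0], [0, 4], [0, 18]]
```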
17

Stojanoff, Demetrio. "Index of Hadamard multiplication by positive matrices." Linear Algebra and its Applications 290, no. 1-3 (March 1999): 95–108. http://dx.doi.org/10.1016/s0024-3795(98)10215-x.

18

Stojčev, M. K., E. I. Milovanović, and I. Ž. Milovanović. "An algorithm for multiplication of concatenated matrices." Parallel Computing 13, no. 2 (February 1990): 211–23. http://dx.doi.org/10.1016/0167-8191(90)90148-3.

19

Vavřín, Zdeněk. "Multiplication of diagonal transforms of Loewner matrices." Linear Algebra and its Applications 99 (February 1988): 1–40. http://dx.doi.org/10.1016/0024-3795(88)90123-1.

20

Tiskin, Alexander. "Fast Distance Multiplication of Unit-Monge Matrices." Algorithmica 71, no. 4 (September 19, 2013): 859–88. http://dx.doi.org/10.1007/s00453-013-9830-z.

21

Reva, V. P., S. V. Korinets, A. G. Golenkov, S. V. Sapon, A. M. Torchinsky, V. V. Zabudsky, and F. F. Sizov. "Sensitivity of CCD matrices with electronic multiplication." Технология и конструирование в электронной аппаратуре, no. 2 (2018): 9–14. http://dx.doi.org/10.15222/tkea2018.2.09.

Abstract:
The sensitivity and basic electrical characteristics of the developed direct-illumination charge-coupled-device matrices with electronic multiplication were investigated at room temperature and low illumination. Photomatrices of 576×288 and 640×512 format were designed using frame-transfer architecture and 1.5 µm design rules, with photosensitive cell sizes of 20×30 and 16×16 µm respectively, and manufactured using n-channel technology with a buried channel, four levels of polysilicon electrodes, and two levels of metallization. To analyze the possibilities of the developed EMCCD matrices in monitoring systems under low-light conditions, an experimental assessment of the matrices' sensitivity was carried out. The assessment was based on a comparison of luxmeter readings and Johnson's criteria, using the standard 1951 USAF resolution target to determine the minimum size of line pairs distinguished by the observer (one pair consists of a dark and a light line). The characteristics obtained with illumination of 5·10⁻⁴ lux (glow of the starry sky with light clouds) and 10⁻² lux (glow of the starry sky and a quarter Moon) correspond to the parameters of generation 2+ electron-optical converters, which implies the possibility of using such matrices in night-vision devices. At Ev ≈ 5·10⁻⁴ lux, a camera with the developed EMCCD matrices will detect a human figure at a distance of about 200 m; with illumination of 10⁻² lux, a human figure can be identified at this distance.
22

Saeed, Hayatullah, and Mohammad Azim Nazari. "Applications of Matrix Multiplication." Journal for Research in Applied Sciences and Biotechnology 3, no. 3 (June 2, 2024): 1–8. http://dx.doi.org/10.55544/jrasb.3.3.1.

Abstract:
In this paper we present some interesting applications of matrix multiplication. These include the Leslie matrix and population change, where changes from one year to the next are calculated by matrix multiplication. Another important part of the paper is the analysis of traffic flow, where we represent the flow of traffic through a network of one-way streets. A further important application is production costs: a company manufactures three products, its production expenses are divided into three categories, and matrix multiplication describes this beautifully. By matrix multiplication we can also encode and decode messages: to encode a message, we choose an invertible matrix and multiply the uncoded row matrices (on the right) by it to obtain coded row matrices. Matrix multiplication is also used to study certain relationships between objects. In total, we present six different applications of matrix multiplication, each clarified by useful examples.
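The message-encoding application can be sketched with a tiny Hill-cipher-style example: multiply uncoded row blocks on the right by an invertible key matrix, and decode with its inverse. The key matrix, padding rule, and character encoding here are illustrative assumptions, not the paper's worked example:

```python
KEY = [[1, 2], [3, 5]]        # invertible: det = 1*5 - 2*3 = -1
KEY_INV = [[-5, 2], [3, -1]]  # exact integer inverse, since det = -1

def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def encode(text):
    nums = [ord(c) for c in text]
    if len(nums) % 2:
        nums.append(ord(' '))                       # pad to even length
    blocks = [[nums[i], nums[i + 1]] for i in range(0, len(nums), 2)]
    return mm(blocks, KEY)                          # row blocks times key

def decode(blocks):
    flat = [x for row in mm(blocks, KEY_INV) for x in row]
    return ''.join(chr(x) for x in flat)

print(encode("MA"))                   # [[272, 479]]
print(decode(encode("MATRIX")))       # MATRIX
```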
23

Li, Jin, Nan Liu, and Wei Kang. "Minimizing Computation and Communication Costs of Two-Sided Secure Distributed Matrix Multiplication under Arbitrary Collusion Pattern." Entropy 26, no. 5 (May 8, 2024): 407. http://dx.doi.org/10.3390/e26050407.

Abstract:
This paper studies the problem of minimizing the total cost, including computation cost and communication cost, in the system of two-sided secure distributed matrix multiplication (SDMM) under an arbitrary collusion pattern. In order to perform SDMM, the two input matrices are split into some blocks, blocks of random matrices are appended to protect the security of the two input matrices, and encoded copies of the blocks are distributed to all computing nodes for matrix multiplication calculation. Our aim is to minimize the total cost over all matrix splitting factors, numbers of appended random matrices, and distribution vectors, while satisfying the security constraint of the two input matrices, the decodability constraint of the desired result of the multiplication, the storage capacity of the computing nodes, and the delay constraint. First, a strategy of appending zeros to the input matrices is proposed to overcome the divisibility problem of matrix splitting. Next, the optimization problem is divided into two subproblems with the aid of alternating optimization (AO), from which a feasible solution can be obtained. In addition, some necessary conditions for the problem to be feasible are provided. Simulation results demonstrate the superiority of our proposed scheme compared to the scheme without appending zeros and the scheme without alternating optimization.
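The zero-appending strategy for the divisibility problem can be illustrated in miniature: pad the matrix with zero rows and columns until the split factor divides its shape, multiply, then crop. This sketch shows only the padding idea under assumed split factors, not the secure coding scheme:

```python
def pad(M, p, q):
    """Append zero rows/columns so the shape becomes a multiple of (p, q)."""
    rows = len(M) + (-len(M)) % p
    cols = len(M[0]) + (-len(M[0])) % q
    out = [row + [0] * (cols - len(row)) for row in M]
    out += [[0] * cols for _ in range(rows - len(M))]
    return out

def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A is 3x3 but we want to split its rows into 2 blocks: pad to 4x3.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 0], [0, 1], [1, 1]]
C_pad = mm(pad(A, 2, 1), B)          # last row of C_pad is all zeros
C = C_pad[:len(A)]                   # crop back to the true result
print(C == mm(A, B))                 # True
```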
24

LAUNOIS, S., and T. H. LENAGAN. "AUTOMORPHISMS OF QUANTUM MATRICES." Glasgow Mathematical Journal 55, A (October 2013): 89–100. http://dx.doi.org/10.1017/s0017089513000529.

Abstract:
We study the automorphism group of the algebra $\mathcal{O}_q(M_n)$ of $n \times n$ generic quantum matrices. We provide evidence for our conjecture that this group is generated by the transposition and the subgroup of those automorphisms acting on the canonical generators of $\mathcal{O}_q(M_n)$ by multiplication by scalars. Moreover, we prove this conjecture in the case when $n = 3$.
25

Daugulis, Pēteris, Elfrīda Krastiņa, Anita Sondore, and Vija Vagale. "VARIETY OF ARRANGEMENTS OF NUMERICAL DATA FOR A DEEPER UNDERSTANDING OF MATHEMATICS." SOCIETY. INTEGRATION. EDUCATION. Proceedings of the International Scientific Conference 1 (May 20, 2020): 107. http://dx.doi.org/10.17770/sie2020vol1.5081.

Abstract:
Effective arranging of numerical data and design of associated computational algorithms are important for any area of mathematics for teaching, learning and research purposes. Usage of various algorithms for the same area makes mathematics teaching goal-oriented and diverse. Matrices and linear-algebraic ideas can be used to make algorithms visual, two dimensional (2D) and easy to use. It may contribute to the planned educational reforms by teaching school and university students deeper mathematical thinking. In this article we give novel data arranging techniques (2D and 3D) for matrix multiplication. Our 2D method differs from the standard, formal approach by using block matrices. We find this method a helpful alternative for introducing matrix multiplication. We also give a new innovative 3D visualisation technique for matrix multiplication. In this method, matrices are positioned on the faces of a rectangular cuboid. Computerized implementations of this method may be considered as student project proposals.
26

Zhang, Qize. "Research on fast matrix multiplication algorithm." Applied and Computational Engineering 31, no. 1 (January 22, 2024): 229–34. http://dx.doi.org/10.54254/2755-2721/31/20230154.

Abstract:
This research mainly focuses on fast matrix multiplication algorithms. Fast matrix multiplication is one of the most fundamental problems in computer science. A fast matrix multiplication algorithm differs from conventional matrix multiplication in that it offers a computational approach that performs the operation in less than $O(n^3)$ time. Such algorithms provide a more efficient method for multiplying matrices, significantly reducing the computational requirements. The Laser method, developed by Coppersmith and Winograd, is an algorithm for matrix multiplication that does not involve direct computation. It establishes a relationship between matrix multiplication and tensors and simplifies the operation by finding an intermediate tensor that is computationally manageable. This method applies a series of simplification operations to determine an upper bound on the computational complexity of matrix multiplication. However, as matrices become larger, the computational and memory requirements increase, posing challenges for practical implementation. This research presents the main ideas and performance of the Laser method and discusses the improvements made to it, including refined analysis and asymmetric hashing techniques. Additionally, it highlights the need for further exploration, such as parallel computing and optimization strategies, to enhance the efficiency of matrix multiplication algorithms. Finally, this research provides a prospectus for the future of matrix multiplication algorithms, such as the practical implementation of the Laser method.
27

Treyderowski, Krzysztof, and Christoph Schwarzweller. "Multiplication of Polynomials using Discrete Fourier Transformation." Formalized Mathematics 14, no. 4 (January 1, 2006): 121–28. http://dx.doi.org/10.2478/v10037-006-0015-y.

Abstract:
Multiplication of Polynomials using Discrete Fourier Transformation In this article we define the Discrete Fourier Transformation for univariate polynomials and show that multiplication of polynomials can be carried out by two Fourier Transformations with a vector multiplication in-between. Our proof follows the standard one found in the literature and uses Vandermonde matrices, see e.g. [27].
28

Paniello, Irene. "Genetic coalgebras and their cubic stochastic matrices." Journal of Algebra and Its Applications 16, no. 12 (November 20, 2017): 1750239. http://dx.doi.org/10.1142/s0219498817502395.

Abstract:
We study the structure of genetic coalgebras and prove the existence of a decomposition as the direct sum of indecomposable subcoalgebras with genetic realization. To obtain such a decomposition, we first define a new multiplication for their related cubic stochastic matrices of type (1,2) and then, considering these matrices as cubic non-negative, we show how this new multiplication induces an index classification which can be used to study the genetic coalgebra structure. Genetically, the given coalgebra decomposition can be understood as a genetic clustering of the different types for the genetic trait defining the genetic coalgebra, which allows us to identify isolated ancestral populations.
29

Abdurrazzaq, Achmad, Ari Wardayani, and Suroto Suroto. "RING MATRIKS ATAS RING KOMUTATIF." Jurnal Ilmiah Matematika dan Pendidikan Matematika 7, no. 1 (June 26, 2015): 11. http://dx.doi.org/10.20884/1.jmp.2015.7.1.2895.

Abstract:
This paper discusses matrices over a commutative ring, that is, matrices whose entries are elements of the commutative ring. We investigate the structure of the set of matrices over the commutative ring and obtain that this set, equipped with matrix addition and multiplication operations, is a ring with a unit element.
30

Sanyal, Atri, Ashika Jain, Anwesha Dey, and Prakash Kumar Gupta. "A High-Speed Floating Point Matrix Multiplier Implemented in Reconfigurable Architecture." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 2 (March 20, 2024): 193–99. http://dx.doi.org/10.32628/cseit2390661.

Abstract:
Matrix multiplication is a fundamental operation in computational applications across various domains. This paper introduces a novel reconfigurable co-processor that enhances the efficiency of matrix multiplication by concurrently executing addition and multiplication operations on matrix elements of different sizes. The proposed design aims to reduce computation time and improve efficiency for matrix multiplication. Experimental evaluations were conducted on matrices of different sizes to demonstrate the effectiveness of the processor, and the results reveal substantial improvements in both time and efficiency compared to traditional approaches. The reconfigurable processor harnesses parallel processing capabilities, enabling the simultaneous execution of addition and multiplication operations: by partitioning input matrices into smaller submatrices and performing parallel computations, the processor achieves faster results. Additionally, the design incorporates configurable arithmetic units that dynamically adapt to matrix characteristics, further optimizing performance. The experimental evaluations provide evidence of reduced computation time and improved efficiency, presenting significant benefits over traditional sequential methods. This makes the co-processor ideally suited for domains that require intensive linear algebra computations, such as computer vision, machine learning, and signal processing.
31

Wan, Yuanyu, and Lijun Zhang. "Approximate Multiplication of Sparse Matrices with Limited Space." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 10058–66. http://dx.doi.org/10.1609/aaai.v35i11.17207.

Abstract:
Approximate matrix multiplication with limited space has received ever-increasing attention due to the emergence of large-scale applications. Recently, based on a popular matrix sketching algorithm, frequent directions, previous work has introduced co-occurring directions (COD) to reduce the approximation error for this problem. Although it enjoys a space complexity of O((m_x+m_y)l) for two input matrices X∈ℝ^{m_x × n} and Y∈ℝ^{m_y × n}, where l is the sketch size, its time complexity is O(n(m_x+m_y+l)l), which is still very high for large input matrices. In this paper, we propose to reduce the time complexity by exploiting the sparsity of the input matrices. The key idea is to employ an approximate singular value decomposition (SVD) method that can utilize the sparsity to reduce the number of QR decompositions required by COD. In this way, we develop sparse co-occurring directions, which reduces the time complexity to Õ((nnz(X)+nnz(Y))l+nl^2) in expectation while keeping the same space complexity of O((m_x+m_y)l), where nnz(X) denotes the number of non-zero entries in X and the Õ notation hides constant as well as polylogarithmic factors. Theoretical analysis reveals that the approximation error of our algorithm is almost the same as that of COD. Furthermore, we empirically verify the efficiency and effectiveness of our algorithm.
APA, Harvard, Vancouver, ISO, and other styles
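COD itself maintains its sketches via repeated QR decompositions; a much simpler relative of such limited-space approximate matrix multiplication is the classical random-projection sketch, shown below as a pure-Python illustration. This is not the COD algorithm from the paper — only the common idea of approximating X Yᵀ from small sketches X S and Y S.

```python
import random

random.seed(0)

def matmul(A, B):  # plain list-of-lists matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# X is m_x x n, Y is m_y x n; approximate X Y^T by (X S)(Y S)^T, where
# S is an n x l Gaussian sketch with E[S S^T] = I_n. The estimator is
# unbiased, and its error shrinks as the sketch size l grows.
m_x, m_y, n, l = 6, 5, 30, 300
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m_x)]
Y = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m_y)]
S = [[random.gauss(0, 1 / l ** 0.5) for _ in range(l)] for _ in range(n)]

exact = matmul(X, transpose(Y))
approx = matmul(matmul(X, S), transpose(matmul(Y, S)))

fro = lambda M: sum(x * x for row in M for x in row) ** 0.5
rel_err = fro([[a - e for a, e in zip(ra, re)]
               for ra, re in zip(approx, exact)]) / fro(exact)
print(rel_err)  # well below 1.0 for this sketch size
```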
32

Boullis, N., and A. Tisserand. "Some Optimizations of Hardware Multiplication by Constant Matrices." IEEE Transactions on Computers 54, no. 10 (October 2005): 1271–82. http://dx.doi.org/10.1109/tc.2005.168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Corach, Gustavo, and Demetrio Stojanoff. "Index of Hadamard multiplication by positive matrices II." Linear Algebra and its Applications 332-334 (August 2001): 503–17. http://dx.doi.org/10.1016/s0024-3795(01)00306-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Kautsky, Jaroslav. "Generalized Pascal matrices generate classes closed under multiplication." Linear Algebra and its Applications 437, no. 12 (December 2012): 2887–95. http://dx.doi.org/10.1016/j.laa.2012.07.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Gohberg, I., and V. Olshevsky. "Complexity of multiplication with vectors for structured matrices." Linear Algebra and its Applications 202 (April 1994): 163–92. http://dx.doi.org/10.1016/0024-3795(94)90189-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Lin, Ferng-Ching, and I.-Chen Wu. "Area-period tradeoffs for multiplication of rectangular matrices." Journal of Computer and System Sciences 30, no. 3 (June 1985): 329–42. http://dx.doi.org/10.1016/0022-0000(85)90050-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Lundqvist, Samuel. "Multiplication Matrices and Ideals of Projective Dimension Zero." Mathematics in Computer Science 6, no. 1 (February 18, 2012): 43–59. http://dx.doi.org/10.1007/s11786-012-0108-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Uchino, Yuki, Katsuhisa Ozaki, and Takeshi Terao. "Inclusion Methods for Multiplication of Three Point Matrices." Journal of Advanced Simulation in Science and Engineering 10, no. 1 (2023): 83–101. http://dx.doi.org/10.15748/jasse.10.83.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Horton, Robert M., Elaine M. Wiegert, and Jeff C. Marshall. "Squaring Matrices: Connecting Mathematics and Science." Mathematics Teacher 102, no. 2 (September 2008): 102–6. http://dx.doi.org/10.5951/mt.102.2.0102.

Full text
Abstract:
We have all heard that mathematics is the language of science, yet many teachers have not had sufficient opportunities to learn how to use mathematics to provide insight into scientific ideas. Consequently, in many mathematics classes, students learn procedures yet have difficulty understanding why they are learning them. In working with both high school and college students, we have found that matrix multiplication is one of these procedures. In fact, when we introduced our students to problems similar to the ones presented here, we discovered that, although they had been taught how to multiply matrices, they did not understand the rationale behind the procedure. Further, they were unaware how useful this procedure could be for studying some important real-world phenomena. In this article, we present a technique for matrix multiplication that is appropriate for algebra 2 or late algebra 1 students and provides a means for investigating food chains, a rich source of study in both science and mathematics classes.
APA, Harvard, Vancouver, ISO, and other styles
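The food-chain application mentioned in the abstract rests on a standard fact: if A is the adjacency matrix of a food web (A[i][j] = 1 when species i eats species j), then the square A² counts the two-step feeding paths between species. A small sketch with a hypothetical three-species web (the species and web are illustrative, not taken from the article):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Hypothetical web: hawk (0) eats snake (1), snake eats mouse (2),
# and the hawk also eats the mouse directly.
A = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]

A2 = matmul(A, A)
# A2[0][2] counts two-step paths hawk -> ? -> mouse; here just hawk -> snake -> mouse.
print(A2[0][2])  # 1
```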
41

Reddy, K. Swetha, Surabhi Seethai, Akanksha, Meenakshi, and V. Sagar Reddy. "ASIC Implementation of Bit Matrix Multiplier." E3S Web of Conferences 391 (2023): 01028. http://dx.doi.org/10.1051/e3sconf/202339101028.

Full text
Abstract:
In computer science and digital electronics, a bit matrix multiplier (BMM) is a mathematical operation that is used to quickly multiply binary matrices. BMM is a basic component of many computer algorithms and is utilized in fields including machine learning, image processing, and cryptography. BMM creates a new matrix that represents the product of the two input matrices by performing logical AND and XOR operations on each matrix element’s binary value. BMM is a crucial method for large-scale matrix operations since it has a lower computational complexity than conventional matrix multiplication. Reduced computational complexity: When compared to conventional matrix multiplication algorithms, BMM has a lower computational complexity since it performs matrix multiplication using bitwise operations like logical AND and XOR. Faster processing speeds are the result, particularly for complex matrix computations. Less memory is needed to store the binary values of the matrices in BMM because these values can be expressed using Boolean logic. As a result, less memory is needed, and the resources can be used more effectively.
APA, Harvard, Vancouver, ISO, and other styles
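The AND/XOR formulation in the abstract is exactly matrix multiplication over GF(2): each entry of the product is the XOR (sum mod 2) of pairwise ANDs (products mod 2). Packing each row into a machine word, as the abstract's memory argument suggests, turns a whole row-times-column step into one AND plus a parity count. A minimal software sketch of that idea (the packing convention below is an illustrative choice):

```python
def bmm(A, B, n):
    """Multiply two n x n binary matrices whose rows are packed into ints
    (bit j of A[i] holds the entry in row i, column j)."""
    # Pre-pack the columns of B so each dot product is a single AND + parity.
    cols = [0] * n
    for i in range(n):
        for j in range(n):
            if (B[i] >> j) & 1:
                cols[j] |= 1 << i
    C = []
    for i in range(n):
        row = 0
        for j in range(n):
            # Entry (i, j) = parity of popcount(row_i(A) AND col_j(B)).
            if bin(A[i] & cols[j]).count("1") & 1:
                row |= 1 << j
        C.append(row)
    return C

# [[1,1],[0,1]] * [[1,0],[1,1]] over GF(2) = [[0,1],[1,1]]
A = [0b11, 0b10]
B = [0b01, 0b11]
print(bmm(A, B, 2))  # [2, 3], i.e. rows 0b10 and 0b11
```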
42

Partner, Alexander, and Alexei Vernitski. "Maths lecturers in denial about their own maths practice? A case of teaching matrix operations to undergraduate students." MSOR Connections 21, no. 3 (June 9, 2023): 18–28. http://dx.doi.org/10.21100/msor.v21i3.1353.

Full text
Abstract:
This case study provides evidence of an apparent disparity in the way that certain mathematics topics are taught compared to the way that they are used in professional practice. In particular, we focus on the topic of matrices by comparing sources from published research articles against typical undergraduate textbooks and lecture notes. Our results show that the most important operation when using matrices in research is that of matrix multiplication, with 33 of the 40 publications which we surveyed utilising this as the most prominent operation and the remainder of the publications instead opting not to use matrix multiplication at all rather than offering weighting to alternative operations. This is in contrast to the way in which matrices are taught, with very few of these teaching sources highlighting that matrix multiplication is the most important operation for mathematicians. We discuss the implications of this discrepancy and offer an insight as to why it can be beneficial to consider the professional uses of such topics when teaching mathematics to undergraduate students.
APA, Harvard, Vancouver, ISO, and other styles
43

Bakhadly, Bakhad, Alexander Guterman, and María Jesús de la Puente. "Orthogonality for (0, −1) tropical normal matrices." Special Matrices 8, no. 1 (February 17, 2020): 40–60. http://dx.doi.org/10.1515/spma-2020-0006.

Full text
Abstract:
AbstractWe study pairs of mutually orthogonal normal matrices with respect to tropical multiplication. Minimal orthogonal pairs are characterized. The diameter and girth of three graphs arising from the orthogonality equivalence relation are computed.
APA, Harvard, Vancouver, ISO, and other styles
44

Długosz, Rafal, Katarzyna Kubiak, Tomasz Talaśka, and Inga Zbierska-Piątek. "Parallel matrix multiplication circuits for use in Kalman filtering." Facta universitatis - series: Electronics and Energetics 32, no. 4 (2019): 479–501. http://dx.doi.org/10.2298/fuee1904479d.

Full text
Abstract:
In this work we propose several ways of implementing, in CMOS, a circuit for the multiplication of matrices. We mainly focus on parallel and asynchronous solutions; however, serial and mixed approaches are also discussed for comparison. Practical applications are the motivation behind our investigations. They include fast Kalman filtering, commonly used in automotive active safety functions, for example. In such filters, numerous time-consuming operations on matrices are performed. An additional problem is the growing amount of data to be processed, which results from the growing number of sensors in the vehicle as fully autonomous driving is developed. Software solutions may prove to be insufficient in the near future. That is why hardware coprocessors are of interest to us, as they could take over some of the most time-consuming operations. The paper presents possible solutions, tailored to specific problems (sizes of multiplied matrices, number of bits in signals, etc.). The estimates of performance made on the basis of selected simulation and measurement results show that multiplication of 3×3 matrices with a data rate of 20–100 MSps is achievable in the CMOS 130 nm technology.
APA, Harvard, Vancouver, ISO, and other styles
45

Zheng, Qiang, Shou Shan Luo, and Yang Xin. "Research on the Secure Multi-Party Computation of some Linear Algebra Problems." Applied Mechanics and Materials 20-23 (January 2010): 265–70. http://dx.doi.org/10.4028/www.scientific.net/amm.20-23.265.

Full text
Abstract:
Constant-round protocols for generating random shared values, for secure multiplication, and for addition of shared values are assumed to be available; they can be realized by known techniques in all standard models of communication. Protocols are presented allowing the players to securely solve standard computational problems in linear algebra. In particular, they securely, efficiently, and in a constant number of rounds compute the determinant of a product of matrices and the rank of a matrix, and determine similarity between matrices. If the basic protocols (addition and multiplication, etc.) are unconditionally secure, then so are our protocols. Furthermore, our protocols offer more efficient solutions than previous techniques for secure linear algebra.
APA, Harvard, Vancouver, ISO, and other styles
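The "shared values" the abstract builds on are typically additive secret shares: each player holds a random-looking piece, the pieces sum to the secret modulo a public prime, and addition of shared values needs no communication at all. A minimal sketch of that primitive (illustrative only — the paper's protocols for determinants, rank, and similarity are far more involved):

```python
import random

random.seed(1)
P = 2 ** 31 - 1  # public prime modulus

def share(secret, n_players):
    """Split a value into n additive shares mod P; any n-1 shares reveal nothing."""
    shares = [random.randrange(P) for _ in range(n_players - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

x_shares = share(42, 3)
y_shares = share(100, 3)
# Addition of shared values is local: each player adds its own two shares.
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
print(reconstruct(sum_shares))  # 142
```

Secure multiplication of shares, by contrast, requires interaction, which is why the abstract counts it among the assumed basic protocols.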
46

Rabin, A. V., S. V. Michurin, and V. A. Lipatnikov. "DEVELOPMENT OF A CLASS OF SYSTEM AND RETURN SYSTEM MATRIXES PROVIDING INCREASE IN NOISE IMMUNITY OF SPECTRALLY EFFECTIVE MODULATION SCHEMES ON BASIS OF ORTHOGONAL CODING." Issues of radio electronics, no. 10 (October 20, 2018): 75–79. http://dx.doi.org/10.21778/2218-5453-2018-10-75-79.

Full text
Abstract:
In this work it is proposed that, to increase noise immunity at a fixed code rate, digital message-transmission systems use an additional coding, called orthogonal coding by the authors. A way of defining orthogonal codes is presented, a synthesis algorithm for the system and inverse-system matrices of orthogonal codes is developed, and the main parameters of some matrices constructed by the proposed algorithm are given. Orthogonal coding, as a special case of convolutional coding, is defined by matrices whose elements are polynomials in the delay variable with integer coefficients. Code words are obtained by multiplying an information polynomial by a system matrix, and decoding is performed by multiplication by an inverse system matrix. Basic relations for orthogonal coding are given in the article, and properties of the system and inverse matrices are specified. The parameters of the system and inverse-system matrices assure an additional gain in signal-to-noise ratio. This gain results from a more effective use of the energy of the transmitted signals: for the transmission of one symbol, the energy of several symbols is accumulated.
APA, Harvard, Vancouver, ISO, and other styles
47

Bhatia, R., and L. Elsner. "Symmetries and Variation of Spectra." Canadian Journal of Mathematics 44, no. 6 (October 1, 1992): 1155–66. http://dx.doi.org/10.4153/cjm-1992-069-3.

Full text
Abstract:
An interesting class of matrices is shown to have the property that the spectrum of each of its elements is invariant under multiplication by p-th roots of unity. For this class and for a class of Hamiltonian matrices improved spectral variation bounds are obtained.
APA, Harvard, Vancouver, ISO, and other styles
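A concrete instance of such invariance, for p = 2: a block anti-diagonal matrix [[0, B], [C, 0]] has a spectrum closed under multiplication by −1, since an eigenpair (u, v) with eigenvalue λ yields the eigenpair (u, −v) with eigenvalue −λ. The matrix class below is an illustrative example, not necessarily the one studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# Block anti-diagonal matrix: eigenvalues come in +/- pairs (the p = 2 case).
Z = np.zeros((3, 3))
A = np.block([[Z, B], [C, Z]])

eigs = np.linalg.eigvals(A)
# Every eigenvalue's negative is also an eigenvalue, up to round-off.
for lam in eigs:
    assert np.min(np.abs(eigs - (-lam))) < 1e-8
print("spectrum closed under multiplication by -1")
```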
48

Rudhito, Marcellinus Andy, Sri Wahyuni, Ari Suparwanto, and Frans Susilo. "Matriks atas Aljabar Max-Plus Interval." Jurnal Natur Indonesia 13, no. 2 (November 21, 2012): 94. http://dx.doi.org/10.31258/jnat.13.2.94-99.

Full text
Abstract:
This paper aims to discuss the matrix algebra over interval max-plus algebra (interval matrices) and a method to simplify the computation of operations on them. This matrix algebra is an extension of the matrix algebra over max-plus algebra and can be used to discuss the matrix algebra over fuzzy-number max-plus algebra via its alpha-cuts. The findings show that the set of all interval matrices, together with max-plus scalar multiplication and max-plus addition, is a semimodule, and that the set of all square interval matrices, together with interval max-plus addition and max-plus multiplication, is an idempotent semiring. Operations on interval matrices can therefore be performed through the corresponding matrix intervals, because the semimodule of all interval matrices is isomorphic to the semimodule of corresponding matrix intervals, and the semiring of all square interval matrices is isomorphic to the semiring of corresponding square matrix intervals.
APA, Harvard, Vancouver, ISO, and other styles
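In max-plus algebra, "addition" is max and "multiplication" is +, with −∞ as the additive identity, so the matrix product underlying the abstract replaces sum-of-products with max-of-sums. A small sketch of the scalar (non-interval) case:

```python
NEG_INF = float("-inf")  # additive identity of max-plus algebra

def maxplus_matmul(A, B):
    """(A (x) B)[i][j] = max over k of (A[i][k] + B[k][j])."""
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2],
     [NEG_INF, 0]]
B = [[3, NEG_INF],
     [1, 4]]
print(maxplus_matmul(A, B))  # [[4, 6], [1, 4]]
```

The interval matrices of the paper apply the same operations entrywise to interval endpoints, which is why computation can be carried out on the corresponding matrix intervals.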
49

Tonchev, Vladimir D. "Hadamard matrices of order 36 with automorphisms of order 17." Nagoya Mathematical Journal 104 (December 1986): 163–74. http://dx.doi.org/10.1017/s002776300002273x.

Full text
Abstract:
A Hadamard matrix of order n is an n by n matrix of 1's and −1's such that HH^t = nI. In such a matrix n is necessarily 1, 2 or a multiple of 4. Two Hadamard matrices H1 and H2 are called equivalent if there exist monomial matrices P, Q with PH1Q = H2. An automorphism of a Hadamard matrix H is an equivalence of the matrix to itself, i.e. a pair (P, Q) of monomial matrices such that PHQ = H. In other words, an automorphism of H is a permutation of its rows followed by multiplication of some rows by −1, which leads to a reordering of its columns and multiplication of some columns by −1. The set of all automorphisms forms a group under composition called the automorphism group (Aut H) of H. For a detailed study of the basic properties and applications of Hadamard matrices see, e.g. [1], [7, Chap. 14], [8].
APA, Harvard, Vancouver, ISO, and other styles
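The defining identity HH^t = nI is easy to verify for the classical Sylvester construction, which doubles a Hadamard matrix of order n into one of order 2n:

```python
def sylvester(k):
    """Hadamard matrix of order 2^k via Sylvester doubling: H -> [[H, H], [H, -H]]."""
    H = [[1]]
    for _ in range(k):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

H = sylvester(2)  # order n = 4
n = len(H)
Ht = [list(col) for col in zip(*H)]
HHt = matmul(H, Ht)
# Distinct rows of H are orthogonal, so H Ht must equal n * I.
print(HHt == [[n if i == j else 0 for j in range(n)] for i in range(n)])  # True
```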
50

Peng, Richard, and Santosh S. Vempala. "Solving Sparse Linear Systems Faster than Matrix Multiplication." Communications of the ACM 67, no. 7 (July 2024): 79–86. http://dx.doi.org/10.1145/3615679.

Full text
Abstract:
Can linear systems be solved faster than matrix multiplication? While there has been remarkable progress for the special cases of graph-structured linear systems, in the general setting the bit complexity of solving an n × n linear system Ax = b is Õ(n^ω), where ω is the matrix multiplication exponent. Improving on this has been an open problem even for sparse linear systems with poly(n) condition number. In this paper, we present an algorithm that solves linear systems with sparse coefficient matrices asymptotically faster than matrix multiplication for any ω > 2. This speedup holds for any input matrix A with o(n^(ω−1)/log(κ(A))) non-zeros, where κ(A) is the condition number of A. Our algorithm can be viewed as an efficient, randomized implementation of the block Krylov method via recursive low displacement rank factorization. It is inspired by an algorithm of Eberly et al. for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to bound the smallest eigenvalue and the smallest gap in the eigenvalues of semi-random matrices.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography