Selected scientific literature on the topic "Multiplication de matrices creuses" (sparse matrix multiplication)

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings and other scholarly sources relevant to the topic "Multiplication de matrices creuses".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, when it is available in the metadata.

Journal articles on the topic "Multiplication de matrices creuses"

1

Keles, Hasan. "Multiplication of Matrices". Indonesian Journal of Mathematics and Applications 2, n.º 1 (31 de março de 2024): 1–8. http://dx.doi.org/10.21776/ub.ijma.2024.002.01.1.

Abstract:
This study concerns the multiplication of matrices. Multiplication of real numbers, which can be written along a line, works in both directions: the order is not an influential factor even when the elements are switched, for example $3 \cdot 2 = 6$ and $2 \cdot 3 = 6$. For matrices, this makes distinguishing between left and right multiplication mandatory. Left multiplication is already defined; it is the familiar multiplication of matrices and has been used ever since the operation was introduced. The most insurmountable issue is that matrices do not have the commutative property under this operation. When $AB$ is written, the left product is meant and the matrix $A$ is the one made effective; this left product is denoted by $AB$. The right product defined in this study is denoted by $\underleftarrow{AB}$. This multiplication is seen to be compatible with left multiplication, and the commutativity property of matrices is reinvestigated with this approach. The relation between the right multiplication and the Cracovian product is given by J. Kociński (2004).
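A quick numerical aside (not from the article, which defines its own right product $\underleftarrow{AB}$): the non-commutativity that motivates distinguishing left and right products is easy to exhibit with NumPy.

```python
import numpy as np

# Two small matrices for which the order of multiplication matters.
A = np.array([[1, 2],
              [0, 1]])
B = np.array([[1, 0],
              [3, 1]])

left = A @ B   # the usual (left) product AB
right = B @ A  # the product with the factors switched

print(left)                          # [[7 2], [3 1]]
print(right)                         # [[1 2], [3 7]]
print(np.array_equal(left, right))   # False: matrix multiplication is not commutative
```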
2

Roesler, Friedrich. "Generalized Matrices". Canadian Journal of Mathematics 41, n.º 3 (1 de junho de 1989): 556–76. http://dx.doi.org/10.4153/cjm-1989-024-5.

Abstract:
Similar to the multiplication of square matrices, one can define multiplications for three-dimensional matrices, i.e., for the "cubes" of the vector space $K^{I \times I \times I}$, where $I$ denotes a finite set of indices and $K$ is any field. The multiplications imitate matrix multiplication: to obtain the coefficient $\gamma_{xyz}$ of the product $(\gamma_{xyz}) = (\alpha_{xyz})(\beta_{xyz})$, all coefficients $\alpha_{xij}$, $i, j \in I$, of the horizontal plane with index $x$ of $(\alpha_{xyz})$ are multiplied with certain coefficients $\beta_{hgz}$ of the vertical plane with index $z$ of $(\beta_{xyz})$, and the results are added.
3

Bair, J. "72.34 Multiplication by Diagonal Matrices". Mathematical Gazette 72, n.º 461 (outubro de 1988): 228. http://dx.doi.org/10.2307/3618262.

4

Sowa, Artur. "Factorizing matrices by Dirichlet multiplication". Linear Algebra and its Applications 438, n.º 5 (março de 2013): 2385–93. http://dx.doi.org/10.1016/j.laa.2012.09.021.

5

Councilman, Samuel. "Sharing Teaching Ideas: Bisymmetric Matrices: Some Elementary New Problems". Mathematics Teacher 82, n.º 8 (novembro de 1989): 622–23. http://dx.doi.org/10.5951/mt.82.8.0622.

Abstract:
In introductory linear algebra courses one continually seeks interesting sets of matrices that are closed under the operations of matrix addition, scalar multiplication, and if possible, matrix multiplication. Most texts mention symmetric and antisymmetric matrices and ask the reader to show that these sets are closed under matrix addition and scalar multiplication but fail to be closed under matrix multiplication. Few textbooks, if any, suggest an investigation of the set of matrices that are symmetric with respect to both diagonals, namely bisymmetric matrices. The following is a sequence of relatively straightforward problems that can be used as homework, class discussion, or even examination material in elementary linear algebra classes.
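As a quick illustration of the closure questions posed above, the following NumPy sketch (an illustration added here, not taken from the article) checks that two bisymmetric matrices remain bisymmetric under addition and scalar multiplication, but that their product generally does not.

```python
import numpy as np

def is_bisymmetric(M, tol=1e-12):
    """Symmetric about the main diagonal and about the anti-diagonal."""
    J = np.fliplr(np.eye(M.shape[0]))          # exchange (reversal) matrix
    return (np.allclose(M, M.T, atol=tol) and
            np.allclose(M, J @ M.T @ J, atol=tol))

A = np.array([[1., 2., 3.],
              [2., 5., 2.],
              [3., 2., 1.]])
B = np.array([[0., 1., 4.],
              [1., 7., 1.],
              [4., 1., 0.]])

print(is_bisymmetric(A), is_bisymmetric(B))          # True True
print(is_bisymmetric(A + B), is_bisymmetric(3 * A))  # True True  (closed)
print(is_bisymmetric(A @ B))                         # False here: not closed under multiplication
```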
6

Ignatenko, M. V., e L. A. Yanovich. "On the theory of interpolation of functions on sets of matrices with the Hadamard multiplication". Proceedings of the National Academy of Sciences of Belarus. Physics and Mathematics Series 58, n.º 3 (12 de outubro de 2022): 263–79. http://dx.doi.org/10.29235/1561-2430-2022-58-3-263-279.

Abstract:
This article is devoted to the problem of interpolation of functions defined on sets of matrices with multiplication in the sense of Hadamard and is mainly an overview. It contains some known information about the Hadamard matrix multiplication and its properties. For functions defined on sets of square and rectangular matrices, various interpolation polynomials of the Lagrange type, containing both the operation of matrix multiplication in the Hadamard sense and the usual matrix product, are given. In the case of analytic functions defined on sets of square matrices with the Hadamard multiplication, some analogues of the Lagrange type trigonometric interpolation formulas are considered. Matrix analogues of splines and the Cauchy integral are given on sets of matrices with the Hadamard multiplication. Some of its applications in the theory of interpolation are considered. Theorems on the convergence of some Lagrange interpolation processes for analytic functions defined on a set of matrices with multiplication in the Hadamard sense are proved. The results obtained are based on the application of some well-known provisions of the theory of interpolation of scalar functions. Data presentation is illustrated by a number of examples.
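For readers unfamiliar with the operation, the Hadamard product used throughout that paper is simply entry-wise multiplication of equally sized matrices; a minimal NumPy illustration (assuming nothing beyond the definition) contrasts it with the ordinary matrix product.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[10, 20],
              [30, 40]])

hadamard = A * B   # entry-wise (Hadamard) product: (A ∘ B)_ij = a_ij * b_ij
ordinary = A @ B   # usual matrix product, for contrast

print(hadamard)    # [[ 10  40], [ 90 160]]
print(ordinary)    # [[ 70 100], [150 220]]
```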
7

Abobala, Mohammad. "On Refined Neutrosophic Matrices and Their Application in Refined Neutrosophic Algebraic Equations". Journal of Mathematics 2021 (13 de fevereiro de 2021): 1–5. http://dx.doi.org/10.1155/2021/5531093.

Abstract:
The objective of this paper is to introduce the concept of refined neutrosophic matrices together with their basic operations, such as multiplication and addition, and the corresponding ring property. It also determines the necessary and sufficient condition for the invertibility of these matrices with respect to multiplication. In addition, nilpotency and idempotency properties are discussed.
8

Waterhouse, William C. "Circulant-style matrices closed under multiplication". Linear and Multilinear Algebra 18, n.º 3 (novembro de 1985): 197–206. http://dx.doi.org/10.1080/03081088508817686.

9

Theeracheep, Siraphob, e Jaruloj Chongstitvatana. "Multiplication of medium-density matrices using TensorFlow on multicore CPUs". Tehnički glasnik 13, n.º 4 (11 de dezembro de 2019): 286–90. http://dx.doi.org/10.31803/tg-20191104183930.

Abstract:
Matrix multiplication is an essential part of many applications, such as linear algebra, image processing and machine learning. One platform used in such applications is TensorFlow, which is a machine learning library whose structure is based on dataflow programming paradigm. In this work, a method for multiplication of medium-density matrices on multicore CPUs using TensorFlow platform is proposed. This method, called tbt_matmul, utilizes TensorFlow built-in methods tf.matmul and tf.sparse_matmul. By partitioning each input matrix into four smaller sub-matrices, called tiles, and applying an appropriate multiplication method to each pair depending on their density, the proposed method outperforms the built-in methods for matrices of medium density and matrices of significantly uneven distribution of non-zeros.
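The tiling idea summarised above (split each operand into four tiles and pick a dense or sparse kernel per tile pair according to density) can be sketched outside TensorFlow as well. The NumPy/SciPy sketch below illustrates only that dispatch logic; the name `tbt_matmul_sketch` and the 0.3 density threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy import sparse

def density(tile):
    return np.count_nonzero(tile) / tile.size

def tile_matmul(a_tile, b_tile, threshold=0.3):
    """Multiply one pair of tiles, using a sparse kernel when both are sparse enough."""
    if density(a_tile) < threshold and density(b_tile) < threshold:
        return (sparse.csr_matrix(a_tile) @ sparse.csr_matrix(b_tile)).toarray()
    return a_tile @ b_tile

def tbt_matmul_sketch(A, B, threshold=0.3):
    """2x2-tiled matrix product; each of the four result tiles sums two tile products."""
    n, k, m = A.shape[0] // 2, A.shape[1] // 2, B.shape[1] // 2
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in (0, 1):
        for j in (0, 1):
            acc = np.zeros((n, m))
            for p in (0, 1):
                a_tile = A[i*n:(i+1)*n, p*k:(p+1)*k]
                b_tile = B[p*k:(p+1)*k, j*m:(j+1)*m]
                acc += tile_matmul(a_tile, b_tile, threshold)
            C[i*n:(i+1)*n, j*m:(j+1)*m] = acc
    return C

rng = np.random.default_rng(0)
A = rng.random((100, 100)) * (rng.random((100, 100)) < 0.2)   # roughly 20% nonzero
B = rng.random((100, 100))
assert np.allclose(tbt_matmul_sketch(A, B), A @ B)
```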
10

Mangngiri, Itsar, Qonita Qurrota A’yun e Wasono Wasono. "AN ORDER-P TENSOR MULTIPLICATION WITH CIRCULANT STRUCTURE". BAREKENG: Jurnal Ilmu Matematika dan Terapan 17, n.º 4 (19 de dezembro de 2023): 2293–304. http://dx.doi.org/10.30598/barekengvol17iss4pp2293-2304.

Abstract:
Research on mathematical operations involving multidimensional arrays or tensors has increased along with the growing number of applications involving multidimensional data analysis. The -product of order- tensors is one form of tensor multiplication. The -product is defined using two operations that transform the multiplication of two tensors into the multiplication of two block matrices; the result is a block matrix which is then transformed back into a tensor. The composition of the two operations used in the definition of the -product can transform a tensor into a block circulant matrix. This research discusses the -product of tensors based on their circulant structure. First, we present a theorem on the -product of tensors involving circulant matrices. Second, we use the definitions of the identity, transpose, and inverse tensors under the -product operation and investigate their relationship with circulant matrices. Third, we demonstrate the computation of the -product involving circulant matrices. The results show that the -product of tensors fundamentally involves circulant matrix multiplication, which means that the operation at its core relies on multiplying circulant matrices. This implies that the -product operation on tensors has properties analogous to standard matrix multiplication. Furthermore, since the -product of tensors fundamentally involves circulant matrix multiplication, its computation can be simplified by first diagonalizing the circulant matrix using the discrete Fourier transform matrix. Finally, based on the obtained results, an algorithm is constructed in MATLAB to calculate the -product.
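The computational point at the end of that abstract, namely that circulant structure lets one diagonalize with the discrete Fourier transform, is easy to demonstrate for an ordinary circulant matrix. The NumPy sketch below (an illustration, not the authors' MATLAB algorithm) multiplies a circulant matrix by a vector via the FFT and checks the result against the explicit matrix.

```python
import numpy as np

def circulant(c):
    """Full circulant matrix whose first column is c (built only for checking)."""
    n = len(c)
    return np.column_stack([np.roll(c, k) for k in range(n)])

def circulant_matvec_fft(c, x):
    """Multiply the circulant matrix C(c) by x in O(n log n) via the DFT:
    C(c) = F^{-1} diag(F c) F, so C(c) x = ifft(fft(c) * fft(x))."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

c = np.array([2.0, -1.0, 0.0, 3.0])
x = np.array([1.0, 4.0, 0.5, -2.0])

assert np.allclose(circulant(c) @ x, circulant_matvec_fft(c, x))
```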

Theses / dissertations on the topic "Multiplication de matrices creuses"

1

Gonon, Antoine. "Harnessing symmetries for modern deep learning challenges : a path-lifting perspective". Electronic Thesis or Diss., Lyon, École normale supérieure, 2024. http://www.theses.fr/2024ENSL0043.

Abstract:
Neural networks have demonstrated impressive practical success, but theoretical tools for analyzing them are often limited to simple cases that do not capture the complexity of real-world applications. This thesis seeks to narrow this gap by making theoretical tools more applicable to practical scenarios.The first focus of this work is on generalization: can a given network perform well on previously unseen data? This thesis improves generalization guarantees based on the path-norm and extends their applicability to ReLU networks incorporating pooling or skip connections. By reducing the gap between theoretically analyzable networks and those used in practice, this work provides the first empirical evaluation of these guarantees on practical ReLU networks, such as ResNets.The second focus is on resource optimization (time, energy, memory). This thesis introduces a novel pruning method based on the path-norm, which not only retains the accuracy of traditional magnitude pruning but also exhibits robustness to parameter symmetries. Additionally, this work presents a new GPU matrix multiplication algorithm that enhances the state-of-the-art for sparse matrices with Kronecker-structured support, achieving gains in both time and energy. Finally, this thesis makes approximation guarantees for neural networks more concrete by establishing sufficient bit-precision conditions to ensure that quantized networks maintain the same approximation speed as their unconstrained real-weight counterparts
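The GPU kernel developed in the thesis is not reproduced here, but the classical identity that makes Kronecker-structured operators cheap to apply, $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{\top})$, gives the flavour of why such structure pays off. A small NumPy check of that identity (a sketch added for illustration, unrelated to the thesis code):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 4))
B = rng.random((5, 6))
X = rng.random((6, 4))                        # shapes chosen so that B @ X @ A.T is defined

vec = lambda M: M.flatten(order="F")          # column-stacking vec(.)

lhs = np.kron(A, B) @ vec(X)                  # apply the large (15 x 24) operator directly
rhs = vec(B @ X @ A.T)                        # apply the two small factors instead

assert np.allclose(lhs, rhs)
```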
2

Lawson, Jean-Christophe. "Smart : un neurocalculateur parallèle exploitant des matrices creuses". Grenoble INPG, 1993. http://www.theses.fr/1993INPG0030.

Abstract:
The 1980s saw the blossoming of the neural (neuromimetic) paradigm, which had lain dormant for half a century. Intensive interactive simulation is a key element for progress in this approach, which generally involves large nonlinear systems; the poor efficiency of computers is thus one of the factors slowing the development of new models. Although existing models exhibit promising behaviour, the gulf separating them from real nervous architectures remains vast. Analysing the main characteristics of nervous systems and translating them into computing terms is the starting point for improving efficiency and performance; in particular, dynamic behaviour needs to be developed at multiple levels. In this thesis we recall some biological aspects of nervous structures and highlight them from an information-processing point of view. We then summarise some fundamental principles of parallel processing and exploit them, in the main part of this work, to propose a reduced instruction set computer (RISC) making heavy use of pipelining for distributed vector processing. Such a computation model provides good efficiency for the highly repetitive tasks involved in neural models: distributed processing is well suited to the low reuse rate of much of the data, and a linear structure allows easy exchanges and supervision. A data-distribution procedure, combined with specific hardware resources, enables efficient distributed processing of sparse matrices with dynamic topology. Both a software and a hardware implementation are proposed, starting from a standard RISC model (SPARC), and a performance evaluation is also provided. The conclusion of this work is that a vector formalism is well suited to describing neural networks and to their parallel implementation; efficient exploitation of sparse connection matrices then allows efficient simulation of the most general models. This requires both specific hardware and a suitable software environment.
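As background for the sparse connection matrices discussed above, the elementary kernel such machines accelerate, a sparse matrix times a dense vector, is tiny when written down. Here is a plain-Python CSR (compressed sparse row) product, a generic sketch rather than anything taken from the thesis.

```python
def csr_matvec(indptr, indices, data, x):
    """y = A @ x for A stored in CSR form: row i uses the slice indptr[i]:indptr[i+1]."""
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            acc += data[k] * x[indices[k]]
        y[i] = acc
    return y

# A = [[2, 0, 1],
#      [0, 0, 3],
#      [4, 5, 0]] stored in CSR:
indptr  = [0, 2, 3, 5]
indices = [0, 2, 2, 0, 1]
data    = [2.0, 1.0, 3.0, 4.0, 5.0]

print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```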
3

Geronimi, Sylvain. "Determination d'ensembles essentiels minimaux dans les matrices creuses : application a l'analyse des circuits". Toulouse 3, 1987. http://www.theses.fr/1987TOU30104.

4

Vömel, Christof. "Contributions à la recherche en calcul scientifique haute performance pour les matrices creuses". Toulouse, INPT, 2003. http://www.theses.fr/2003INPT003H.

Abstract:
We are interested in the development of a new algorithm for estimating the norm of a matrix incrementally, in the implementation of a reference model for the Basic Linear Algebra Subprograms for sparse matrices (Sparse BLAS), and in the design of a new task scheduler for MUMPS, a multifrontal solver for distributed-memory architectures. Our method for estimating the norm of a matrix applies to both dense and sparse matrices, and can prove useful in the context of QR, Cholesky, or LU factorizations. The Sparse BLAS standard defines generic interfaces, which led us to address questions concerning the representation and management of the data. Task scheduling becomes an important issue as soon as we work with a large number of processors; thanks to our new approach, we can improve the scalability of the MUMPS solver.
5

Grigori, Laura. "Prédiction de structure et algorithmique parallèle pour la factorisation LU des matrices creuses". Nancy 1, 2001. http://www.theses.fr/2001NAN10264.

Abstract:
This dissertation treats of parallel numerical computing considering the Gaussian elimination, as it is used to solve large sparse nonsymmetric linear systems. Usually, computations on sparse matrices have an initial phase that predicts the nonzero structure of the output, which helps with memory allocations, set up data structures and schedule parallel tasks prior to the numerical computation itself. To this end, we study the structure prediction for the sparse LU factorization with partial pivoting. We are mainly interested to identify upper bounds as tight as possible to these structures. This structure prediction is then used in a phase called symbolic factorization, followed by a phase that performs the numerical computation of the factors, called numerical factorization. For very large matrices, a significant part of the overall memory space is needed by structures used during the symbolic factorization, and this can prevent a swap-free execution of the LU factorization. We propose and study a parallel algorithm to decrease the memory requirements of the nonsymmetric symbolic factorization. For an efficient parallel execution of the numerical factorization, we consider the analysis and the handling of the data dependencies graphs resulting from the processing of sparse matrices. This analysis enables us to develop scalable algorithms, which manage memory and computing resources in an effective way
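To make the notion of structure prediction concrete, the simplest symbolic factorization runs Gaussian elimination on the boolean nonzero pattern alone, giving an upper bound on the structure of $L+U$. The sketch below is a deliberate simplification (it ignores partial pivoting, whose structural bounds are precisely what the thesis studies).

```python
def symbolic_lu_fill(pattern):
    """pattern: list of sets, pattern[i] = column indices of nonzeros in row i.
    Returns a predicted nonzero pattern of L+U for LU without pivoting
    (an upper bound: numerical cancellation is ignored)."""
    n = len(pattern)
    filled = [set(row) for row in pattern]
    for k in range(n):
        spread = {j for j in filled[k] if j > k}        # upper part of pivot row k
        for i in range(k + 1, n):
            if k in filled[i]:                          # row i is updated by pivot row k
                filled[i] |= spread                     # structural fill-in
    return filled

# Arrow-shaped matrix: dense first row and column, otherwise diagonal.
pattern = [{0, 1, 2, 3},
           {0, 1},
           {0, 2},
           {0, 3}]
print(symbolic_lu_fill(pattern))
# Rows 1-3 fill in completely: eliminating column 0 couples every later row.
```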
6

Geronimi, Sylvain. "Détermination d'ensembles essentiels minimaux dans les matrices creuses application à l'analyse des circuits /". Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb376053608.

7

Puglisi, Chiara. "Factorisation QR de grandes matrices creuses basée sur une méthode multifrontale dans un environnement multiprocesseur". Toulouse, INPT, 1993. http://www.theses.fr/1993INPT091H.

Abstract:
We are concerned with the QR factorization of large sparse square and overdetermined matrices in a shared-memory MIMD environment. We assume that the column vectors of these matrices have full rank. Our approach is based on the multifrontal method (Duff and Reid (1983)) and uses Householder transformations. We give a detailed description of the multifrontal approach to QR factorization and of its implementation in a multiprocessor environment. We show that by choosing the node factorization strategy appropriately, considerable gains can be obtained in terms of memory as well as parallelism and computation time. An additional level of parallelism is used to compensate for the lack of parallelism near the root of the elimination tree. We also describe how to modify the elimination tree to improve the performance of the code. We then examine the stability and numerical accuracy of QR factorization as a method for solving linear systems and least-squares problems. We study the influence of iterative refinement and row pivoting, and show that there exist matrices for which an accurate solution can only be obtained if the QR factorization is performed with row pivoting followed by a few steps of iterative refinement. Finally, we study QR factorization as a method for solving least-squares problems and compare it with three other classical approaches: the normal equations method, the seminormal equations method, and the augmented system approach. The QR method proves particularly appropriate for very ill-conditioned problems. Moreover, our optimized parallel code is efficient on all the classes of problems we tested.
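The Householder transformations at the heart of that factorization are easiest to see in a dense setting. The NumPy sketch below of an unblocked Householder QR is a textbook illustration (not the multifrontal sparse code described in the thesis) of the elementary step applied to each frontal matrix.

```python
import numpy as np

def householder_qr(A):
    """Unblocked Householder QR of an m x n matrix (m >= n): returns Q (m x m), R (m x n)."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.eye(m)
    for k in range(n):
        x = A[k:, k]
        v = x.copy()
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        if np.linalg.norm(v) == 0:
            continue
        v /= np.linalg.norm(v)
        H = np.eye(m - k) - 2.0 * np.outer(v, v)   # reflector zeroing x below its first entry
        A[k:, k:] = H @ A[k:, k:]
        Q[:, k:] = Q[:, k:] @ H
    return Q, A

M = np.array([[4., 1.], [2., 3.], [2., 2.]])
Q, R = householder_qr(M)
print(np.allclose(Q @ R, M), np.allclose(Q.T @ Q, np.eye(3)))  # True True
```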
8

EDJLALI, GUY. "Contribution a la parallelisation de methodes iteratives hybrides pour matrices creuses sur architectures heterogenes". Paris 6, 1994. http://www.theses.fr/1994PA066360.

Abstract:
This thesis deals with heterogeneous parallel programming, irregular data structures, and hybrid iterative methods. The iterative method chosen is the Arnoldi method for computing eigenvalues and eigenvectors of sparse matrices. In a first part, a data-parallel implementation of this method was carried out, which highlighted the behaviour of the program and the shortcomings of existing tools for manipulating sparse matrices. In a second part, we developed sparse-matrix manipulation tools and proposed a data-parallel storage format for general sparse matrices; this format is used to implement the kernel of a data-parallel sparse-matrix library. In a third part, we studied irregular data structures for distributed-memory MIMD machines. This study highlighted the benefit of run-time compilation, which we extended to adaptive environments, that is, environments in which the number of processors varies during execution. Finally, through the computation of eigenvalues and eigenvectors, we addressed heterogeneous execution environments. We proposed a hybrid Arnoldi method running on a set of distributed parallel machines, and demonstrated the need for new algorithms to exploit such collections of machines.
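For reference, the Arnoldi process named above builds an orthonormal Krylov basis $V$ and a small upper Hessenberg matrix $H$ whose eigenvalues (Ritz values) approximate those of the large sparse matrix. A compact dense NumPy sketch (a generic illustration, not the thesis's hybrid data-parallel version):

```python
import numpy as np

def arnoldi(A, v0, k):
    """k steps of Arnoldi: returns V (n x (k+1)) with orthonormal columns and
    H ((k+1) x k) upper Hessenberg such that A @ V[:, :k] = V @ H."""
    n = len(v0)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                 # invariant subspace found, stop early
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(2)
A = rng.random((50, 50))
V, H = arnoldi(A, rng.random(50), 20)
ritz = np.linalg.eigvals(H[:-1, :])             # Ritz values approximate eigenvalues of A
print(sorted(ritz, key=abs)[-1], sorted(np.linalg.eigvals(A), key=abs)[-1])
```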
9

Brown, Christopher Ian. "A VLSI device for multiplication of high order sparse matrices". Thesis, University of Sheffield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265915.

10

Guermouche, Abdou. "Étude et optimisation du comportement mémoire dans les méthodes parallèles de factorisation de matrices creuses". Lyon, École normale supérieure (sciences), 2004. http://www.theses.fr/2004ENSL0284.

Abstract:
Direct methods for solving sparse linear systems are known for their large memory requirements that can represent the limiting factor to solve large systems. The work done during this thesis concerns the study and the optimization of the memory behaviour of a sparse direct method, the multifrontal method, for both the sequential and the parallel cases. Thus, optimal memory minimization algorithms have been proposed for the sequential case. Concerning the parallel case, we have introduced new scheduling strategies aiming at improving the memory behaviour of the method. After that, we extended these approaches to have a good performance while keeping a good memory behaviour. In addition, in the case where the data to be treated cannot fit into memory, out-of-core factorization schemes have to be designed. To be efficient, such approaches require to overlap I/O operations with computations and to reuse the data sets already in memory to reduce the amount of I/O operations. Therefore, another part of the work presented in this thesis concerns the design and the study of implicit out-of-core techniques well-adapted to the memory access pattern of the multifrontal method. These techniques are based on a modification of the standard paging policies of the operating system using a low-level tool (MMUM&MMUSSEL)

Books on the topic "Multiplication de matrices creuses"

1

United States. National Aeronautics and Space Administration. Scientific and Technical Information Division., ed. An efficient sparse matrix multiplication scheme for the CYBER 205 computer. [Washington, DC]: National Aeronautics and Space Administration, Scientific and Technical Information Division, 1988.

2

Munerman, Viktor, Vadim Borisov e Aleksandra Kononova. Mass data processing. Algebraic models and methods. ru: INFRA-M Academic Publishing LLC., 2023. http://dx.doi.org/10.12737/1906037.

Abstract:
The monograph is devoted to mathematical and algorithmic support of mass data processing based on algebraic models. One of the most common classes of mass processing is considered - processing of highly active structured data. The construction of algebraic models of data and calculations and methods of proving their correspondence are analyzed. Three algebraic systems are studied, which can be used both as data models and as models of calculations. The algebraic and axiomatic methods of proving the correspondence of these models are investigated. A proof of their correspondence is given: homomorphism and isomorphism. The problem of optimizing the processes of mass processing of data presented in the form of algebraic expressions in the proposed algebra models is raised. The algorithms of synthesis and optimization of calculation of these expressions, the method of symmetric horizontal data distribution providing parallel implementation of calculation of algebraic expressions and generalization of the block algorithm of parallel matrix multiplication for the case of multiplication of multidimensional matrices are described in detail. Architectures of software and hardware complexes for effective parallel implementation of operations in the considered algebra models are proposed. A number of real-world examples illustrating the application of the proposed methods are given. For students, postgraduates and teachers of technical and physical-mathematical universities and faculties.
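The block algorithm of matrix multiplication that the monograph generalises to multidimensional matrices is, in its two-dimensional form, the familiar partitioning $C_{IJ} = \sum_K A_{IK} B_{KJ}$. A short NumPy sketch of that baseline (an illustration only; the book's multidimensional generalisation and its parallel data distribution are not reproduced here):

```python
import numpy as np

def block_matmul(A, B, bs):
    """C = A @ B computed block by block: C[I, J] += A[I, K] @ B[K, J].
    Each (I, J) block is independent, which is what makes the scheme easy to parallelise."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i0 in range(0, n, bs):
        for j0 in range(0, m, bs):
            for k0 in range(0, k, bs):
                C[i0:i0+bs, j0:j0+bs] += A[i0:i0+bs, k0:k0+bs] @ B[k0:k0+bs, j0:j0+bs]
    return C

rng = np.random.default_rng(3)
A, B = rng.random((96, 80)), rng.random((80, 64))
assert np.allclose(block_matmul(A, B, 32), A @ B)
```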
3

Gohberg, Israel, Yuli Eidelman e Iulian Haimovici. Separable Type Representations of Matrices and Fast Algorithms: Volume 1 Basics. Completion Problems. Multiplication and Inversion Algorithms. Birkhauser Verlag, 2013.

4

Mann, Peter. The (Not So?) Basics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198822370.003.0030.

Abstract:
This chapter discusses matrices. Matrices appear in many instances across physics, and it is in this chapter that the background necessary for understanding how to use them in calculations is provided. Although matrices can be a little daunting upon first exposure, they are very handy for a lot of classical physics. This chapter reviews the basics of matrices and their operations. It discusses square matrices, adjoint matrices, cofactor matrices and skew-symmetric matrices. The concepts of matrix multiplication, transpose, inverse, diagonal, identity, Pfaffian and determinant are examined. The chapter also discusses the terms Hermitian, symmetric and antisymmetric, as well as the Levi-Civita symbol and Laplace expansion.

Book chapters on the topic "Multiplication de matrices creuses"

1

Eidelman, Yuli, Israel Gohberg e Iulian Haimovici. "Multiplication of Matrices". In Separable Type Representations of Matrices and Fast Algorithms, 309–26. Basel: Springer Basel, 2013. http://dx.doi.org/10.1007/978-3-0348-0606-0_17.

2

Josipović, Miroslav. "Geometric Algebra and Matrices". In Geometric Multiplication of Vectors, 141–60. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-01756-9_4.

3

Russo, Luís M. S. "Multiplication Algorithms for Monge Matrices". In String Processing and Information Retrieval, 94–105. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16321-0_9.

4

Tiskin, A. "Bulk-synchronous parallel multiplication of boolean matrices". In Automata, Languages and Programming, 494–506. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0055078.

5

Tiskin, A. "Erratum: Bulk-Synchronous Parallel Multiplication of Boolean Matrices". In Automata, Languages and Programming, 717–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48523-6_68.

6

Çatalyürek, Ümit V., e Cevdet Aykanat. "Decomposing irregularly sparse matrices for parallel matrix-vector multiplication". In Parallel Algorithms for Irregularly Structured Problems, 75–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0030098.

7

Ghosh, Koustabh, Jonathan Fuchs, Parisa Amiri Eliasi e Joan Daemen. "Universal Hashing Based on Field Multiplication and (Near-)MDS Matrices". In Progress in Cryptology - AFRICACRYPT 2023, 129–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37679-5_6.

8

Beierle, Christof, Thorsten Kranz e Gregor Leander. "Lightweight Multiplication in $$GF(2^n)$$ with Applications to MDS Matrices". In Advances in Cryptology – CRYPTO 2016, 625–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/978-3-662-53018-4_23.

9

Ren, Da Qi, e Reiji Suda. "Modeling and Optimizing the Power Performance of Large Matrices Multiplication on Multi-core and GPU Platform with CUDA". In Parallel Processing and Applied Mathematics, 421–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14390-8_44.

10

Stitt, Timothy, N. Stan Scott, M. Penny Scott e Phil G. Burke. "2-D R-Matrix Propagation: A Large Scale Electron Scattering Simulation Dominated by the Multiplication of Dynamically Changing Matrices". In Lecture Notes in Computer Science, 354–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36569-9_23.


Conference papers on the topic "Multiplication de matrices creuses"

1

Ikeda, Kohei, Mitsumasa Nakajima, Shota Kita, Akihiko Shinya, Masaya Notomi e Toshikazu Hashimoto. "High-Fidelity WDM-Compatible Photonic Processor for Matrix-Matrix Multiplication". In CLEO: Applications and Technology, JTh2A.87. Washington, D.C.: Optica Publishing Group, 2024. http://dx.doi.org/10.1364/cleo_at.2024.jth2a.87.

Abstract:
We experimentally demonstrate an 8 × 8 MZI-mesh photonic processor using silica-based waveguide technology. An accurate implementation of unitary matrices with high fidelity >0.96 over C-band was achieved, enabling matrix-matrix operation using wavelength multiplexing.
2

Liang, Tianyu, Riley Murray, Aydın Buluç e James Demmel. "Fast multiplication of random dense matrices with sparse matrices". In 2024 IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE, 2024. http://dx.doi.org/10.1109/ipdps57955.2024.00014.

3

Qian, Qiuming. "Optical full-parallel three matrices multiplication". In International Conference on Optoelectronic Science and Engineering '90. SPIE, 2017. http://dx.doi.org/10.1117/12.2294902.

4

Tiskin, Alexander. "Fast distance multiplication of unit-Monge matrices". In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2010. http://dx.doi.org/10.1137/1.9781611973075.103.

5

Glushan, V. M., e Lozovoy A. Yu. "On Distributed Multiplication of Large-Scale Matrices". In 2021 IEEE 15th International Conference on Application of Information and Communication Technologies (AICT). IEEE, 2021. http://dx.doi.org/10.1109/aict52784.2021.9620434.

6

Austin, Brian, Eric Roman e Xiaoye Li. "Resilient Matrix Multiplication of Hierarchical Semi-Separable Matrices". In HPDC'15: The 24th International Symposium on High-Performance Parallel and Distributed Computing. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2751504.2751507.

7

Ramamoorthy, Aditya, Li Tang e Pascal O. Vontobel. "Universally Decodable Matrices for Distributed Matrix-Vector Multiplication". In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849451.

8

Buluc, Aydin, e John R. Gilbert. "On the representation and multiplication of hypersparse matrices". In Distributed Processing Symposium (IPDPS). IEEE, 2008. http://dx.doi.org/10.1109/ipdps.2008.4536313.

9

Ballard, Grey, Aydin Buluc, James Demmel, Laura Grigori, Benjamin Lipshitz, Oded Schwartz e Sivan Toledo. "Communication optimal parallel multiplication of sparse random matrices". In SPAA '13: 25th ACM Symposium on Parallelism in Algorithms and Architectures. New York, NY, USA: ACM, 2013. http://dx.doi.org/10.1145/2486159.2486196.

10

Labini, Paolo Sylos, Massimo Bernaschi, Werner Nutt, Francesco Silvestri e Flavio Vella. "Blocking Sparse Matrices to Leverage Dense-Specific Multiplication". In 2022 IEEE/ACM Workshop on Irregular Applications: Architectures and Algorithms (IA3). IEEE, 2022. http://dx.doi.org/10.1109/ia356718.2022.00009.


Organization reports on the topic "Multiplication de matrices creuses"

1

Ballard, Grey, Aydin Buluc, James Demmel, Laura Grigori, Benjamin Lipshitz, Oded Schwartz e Sivan Toledo. Communication Optimal Parallel Multiplication of Sparse Random Matrices. Fort Belvoir, VA: Defense Technical Information Center, fevereiro de 2013. http://dx.doi.org/10.21236/ada580140.
