Dissertations / Theses on the topic 'Numerical linear and multilinear algebra'

Consult the top 50 dissertations / theses for your research on the topic 'Numerical linear and multilinear algebra.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Waldherr, Konrad [Verfasser]. "Numerical Linear and Multilinear Algebra in Quantum Control and Quantum Tensor Networks / Konrad Waldherr." München : Verlag Dr. Hut, 2014. http://d-nb.info/1064560601/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lim, Lek-Heng. "Foundations of numerical multilinear algebra: decomposition and approximation of tensors." May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Battles, Zachary. "Numerical linear algebra for continuous functions." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Higham, N. J. "Nearness problems in numerical linear algebra." Thesis, University of Manchester, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zounon, Mawussi. "On numerical resilience in linear algebra." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0038/document.

Full text
Abstract:
As the computational power of high performance computing (HPC) systems continues to increase by using huge numbers of cores or specialized processing units, HPC applications are increasingly prone to faults. This study covers a new class of numerical fault tolerance algorithms at application level that does not require extra resources, i.e., computational units or computing time, when no fault occurs. Assuming that a separate mechanism ensures fault detection, we propose numerical algorithms to extract relevant information from available data after a fault. After data extraction, a well-chosen part of the missing data is regenerated through interpolation strategies to constitute meaningful inputs to numerically restart the algorithm. We have designed these methods, called Interpolation-restart techniques, for numerical linear algebra problems such as the solution of linear systems or eigenproblems, which are the innermost numerical kernels in many scientific and engineering applications and often among their most time-consuming parts. In the framework of Krylov subspace linear solvers, the lost entries of the iterate are interpolated using the available entries on the still alive nodes to define a new initial guess before restarting the Krylov method. In particular, we consider two interpolation policies that preserve key numerical properties of well-known linear solvers, namely the monotonic decrease of the A-norm of the error in the conjugate gradient method and the monotonic decrease of the residual norm in GMRES. We assess the impact of the fault rate and the amount of lost data on the robustness of the resulting linear solvers. For eigensolvers, we revisited state-of-the-art methods for solving large sparse eigenvalue problems, namely the Arnoldi methods, subspace iteration methods and the Jacobi-Davidson method, in the light of Interpolation-restart strategies. For each considered eigensolver, we adapted the Interpolation-restart strategies to regenerate as much spectral information as possible. Through intensive experiments, we illustrate the qualitative numerical behavior of the resulting schemes when the number of faults and the amount of lost data are varied, and we demonstrate that they exhibit a numerical robustness close to that of fault-free calculations. In order to assess the efficiency of our numerical strategies, we have considered an actual fully-featured parallel sparse hybrid (direct/iterative) linear solver, MaPHyS, and we proposed numerical remedies to design a resilient version of the solver. The solver being hybrid, we focus in this study on the iterative solution step, which is often the dominant step in practice. The numerical remedies we propose are twofold. Whenever possible, we exploit the natural data redundancy between processes of the solver to perform an exact recovery through clever copies over processes. Otherwise, data that has been lost and is no longer available on any process is recovered through Interpolation-restart strategies. These numerical remedies have been implemented in the MaPHyS parallel solver so that we can assess their efficiency on a large number of processing units (up to 12,288 CPU cores) for solving large-scale real-life problems.
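To make the interpolation policy concrete, here is a minimal Python sketch of a linear-interpolation recovery step followed by a conjugate gradient restart, assuming NumPy/SciPy and a known set of lost iterate entries; all function and variable names are illustrative and are not taken from the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def interpolation_restart_cg(A, b, x, lost, **cg_kwargs):
    """Repair the lost entries of the iterate x, then restart CG from it.

    A    : SPD sparse matrix (CSR)
    b    : right-hand side
    x    : iterate that survived the fault (entries at `lost` are invalid)
    lost : indices of the entries lost in the fault
    """
    kept = np.setdiff1d(np.arange(A.shape[0]), lost)
    # Linear interpolation policy: solve the local system
    #   A[lost, lost] x_lost = b[lost] - A[lost, kept] x_kept
    A_ll = A[lost, :][:, lost]
    rhs = b[lost] - A[lost, :][:, kept] @ x[kept]
    x = x.copy()
    x[lost] = spla.spsolve(A_ll.tocsc(), rhs)
    # Restart the Krylov solver from the repaired initial guess.
    return spla.cg(A, b, x0=x, **cg_kwargs)

# Example: SPD model problem, where a fault wipes out a block of the iterate.
n = 1000
A = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, _ = spla.cg(A, b, maxiter=20)        # iterate at the moment of the fault
x[300:350] = 0.0                        # entries lost in the fault
x_repaired, info = interpolation_restart_cg(A, b, x, np.arange(300, 350))
```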
APA, Harvard, Vancouver, ISO, and other styles
6

Kannan, Ramaseshan. "Numerical linear algebra problems in structural analysis." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/numerical-linear-algebra-problems-in-structural-analysis(7df0f708-fc12-4807-a1f5-215960d9c4d4).html.

Full text
Abstract:
A range of numerical linear algebra problems that arise in finite element-based structural analysis are considered. These problems were encountered when implementing the finite element method in the software package Oasys GSA. We present novel solutions to these problems in the form of a new method for error detection, algorithms with superior numerical efficiency and algorithms with scalable performance on parallel computers. The solutions and their corresponding software implementations have been integrated into GSA's program code and we present results that demonstrate the use of these implementations by engineers to solve real-world structural analysis problems.
APA, Harvard, Vancouver, ISO, and other styles
7

Steele, Hugh Paul. "Combinatorial arguments for linear logic full completeness." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/combinatorial-arguments-for-linear-logic-full-completeness(274c6b87-dc58-4dc3-86bc-8c29abc2fc34).html.

Full text
Abstract:
We investigate categorical models of the unit-free multiplicative and multiplicative-additive fragments of linear logic by representing derivations as particular structures known as dinatural transformations. Suitable categories are considered to satisfy a property known as full completeness if all such entities are the interpretation of a correct derivation. It is demonstrated that certain Hyland-Schalk double glueings [HS03] are capable of transforming large numbers of degenerate models into more accurate ones. Compact closed categories with finite biproducts possess enough structure that their morphisms can be described as forms of linear arrays. We introduce the notion of an extended tensor (or ‘extensor’) over arbitrary semirings, and show that they uniquely describe arrows between objects generated freely from the tensor unit in such categories. It is made evident that the concept may be extended yet further to provide meaningful decompositions of more general arrows. We demonstrate how the calculus of extensors makes it possible to examine the combinatorics of certain double glueing constructions. From this we show that the Hyland-Tan version [Tan97], when applied to compact closed categories satisfying a far weaker version of full completeness, produces genuine fully complete models of unit-free multiplicative linear logic. Research towards the development of a full completeness result for the multiplicative-additive fragment is detailed. The proofs work for categories of finite arrays over certain semirings under both the Hyland-Tan and Schalk [Sch04] constructions. We offer a possible route to finishing this proof. An interpretation of these results with respect to linear logic proof theory is provided, and possible further research paths and generalisations are discussed.
APA, Harvard, Vancouver, ISO, and other styles
8

Gulliksson, Rebecka. "A comparison of parallelization approaches for numerical linear algebra." Thesis, Umeå universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-81116.

Full text
Abstract:
The efficiency of numerical libraries for a given computation is highly dependent on the size of the inputs. For very small inputs it is expected that LAPACK combined with BLAS is the superior alternative, while the new generation of parallelized numerical libraries (such as PLASMA and SuperMatrix) is expected to be superior for large inputs. In between these two extremes in input sizes, there might exist a niche for a new class of numerical libraries. In this thesis a prototype library, targeting medium sized inputs, is presented. The prototype library uses a mixed data and task parallel approach, with the Static BFS Scheduling of M-tasks (SBSM) algorithm, and provides lightweight scheduling for numerical computations. The aim of the prototype library is to explore the competitiveness of such an approach, compared to the more traditional parallelization approaches (data and task parallelism). The competitiveness is measured in a set of synthetic benchmarks using matrix multiplication as the reference computation. The results show that the prototype library exhibits potential for surpassing the performance of today's libraries for medium sized inputs. To give a conclusive answer as to whether the approach is worth pursuing with large research and development efforts, the scheduling algorithm, as well as the numerical computations, should be examined further through additional benchmarks.
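As a point of reference for the kind of mixed data/task parallelism discussed above, here is a toy Python sketch that splits a matrix product into independent block tasks, assuming NumPy; it mirrors the general idea only and is unrelated to the prototype library or the SBSM scheduler.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def blocked_matmul(A, B, block=256, workers=4):
    """Compute C = A @ B by independent block tasks (toy data/task parallelism)."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))

    def task(i, j):
        # each task owns one output block; NumPy's BLAS call releases the GIL
        C[i:i + block, j:j + block] = A[i:i + block, :] @ B[:, j:j + block]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(task, i, j)
                   for i in range(0, m, block)
                   for j in range(0, n, block)]
        for f in futures:
            f.result()   # propagate any exception raised inside a task
    return C

# sanity check against the library product
A, B = np.random.rand(500, 300), np.random.rand(300, 400)
assert np.allclose(blocked_matmul(A, B), A @ B)
```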
APA, Harvard, Vancouver, ISO, and other styles
9

Song, Zixu. "Software engineering abstractions for a numerical linear algebra library." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/software-engineering-abstractions-for-a-numerical-linear-algebra-library(68304a9b-56db-404b-8ffb-4613f5102c1a).html.

Full text
Abstract:
This thesis aims at building a numerical linear algebra library with appropriate software engineering abstractions. Three areas of knowledge, namely, Numerical Linear Algebra (NLA), Software Engineering and Compiler Optimisation Techniques, are involved. Numerical simulation is widely used in a large number of distinct disciplines to help scientists understand and discover the world. The solutions to frequently occurring numerical problems have been implemented in subroutines, which were then grouped together to form libraries for ease of use. The design, implementation and maintenance of an NLA library require a great deal of work, which is why the other two topics, namely software engineering and compiler optimisation techniques, have emerged. Generally speaking, these both try to divide the system into smaller and controllable concerns, and allow the programmer to deal with fewer concerns at one time. Band matrix operation, as a new level of abstraction, is proposed for simplifying library implementation and enhancing extensibility for future functionality upgrades. Iteration Space Partitioning (ISP) is applied, in order to make the performance of this generalised implementation for band matrices comparable to that of the specialised implementations for dense and triangular matrices. The optimisation of ISP can be either programmed using the pointcut-advice model of Aspect-Oriented Programming, or integrated as part of a compiler. This naturally leads to a comparison of these two different techniques for resolving one fundamental problem. The thesis shows that software engineering properties of a library, such as modularity and extensibility, can be improved by the use of the appropriate level of abstraction, while performance is either not sacrificed at all, or at least the loss of performance is limited. In other words, the perceived trade-off between the use of high-level abstraction and fast execution is made less significant than previously assumed.
APA, Harvard, Vancouver, ISO, and other styles
10

Sato, Hiroyuki. "Riemannian Optimization Algorithms and Their Applications to Numerical Linear Algebra." 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/180615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Zhu, Shengxin. "Numerical linear approximation involving radial basis functions." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:b870646b-5155-45f8-b38c-ae6cf4d22f27.

Full text
Abstract:
This thesis aims to acquire, deepen and promote understanding of computing techniques for high dimensional scattered data approximation with radial basis functions. The main contributions of this thesis include sufficient conditions for the solvability of compactly supported radial basis functions with different shapes, near points preconditioning techniques for high dimensional interpolation systems with compactly supported radial basis functions, a heterogeneous hierarchical radial basis function interpolation scheme, which allows compactly supported radial basis functions of different shapes at the same level, an O(N) algorithm for constructing a hierarchical scattered data set and an O(N) algorithm for sparse kernel summation on Cartesian grids. Besides the main contributions, we also investigate the eigenvalue distribution of interpolation matrices related to radial basis functions, and propose a concept of smoothness matching. We look at the problem from different perspectives, giving a systematic and concise description of other relevant theoretical results and numerical techniques. These results are interesting in themselves and become more interesting when placed in the context of the bigger picture. Finally, we solve several real-world problems. Presented applications include 3D implicit surface reconstruction, terrain modelling, high dimensional meteorological data approximation on the earth and scattered spatial environmental data approximation.
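For readers unfamiliar with the basic construction, the following short sketch interpolates scattered 2D data with a compactly supported (Wendland C2) radial basis function, assuming NumPy; the support radius, data and test function are invented for illustration and do not come from the thesis.

```python
import numpy as np

def wendland_c2(r):
    """Wendland's C^2 compactly supported RBF: (1 - r)_+^4 (4r + 1)."""
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

rng = np.random.default_rng(0)
centers = rng.random((200, 2))                       # scattered sites in [0,1]^2
f = lambda x: np.sin(2 * np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
values = f(centers)

delta = 0.4                                          # support radius (assumed)
r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
A = wendland_c2(r / delta)                           # SPD interpolation matrix, sparse in exact arithmetic
coeffs = np.linalg.solve(A, values)

# evaluate the interpolant at a few new points
X = rng.random((5, 2))
rX = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
print(wendland_c2(rX / delta) @ coeffs - f(X))       # approximation error at the new points
```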
APA, Harvard, Vancouver, ISO, and other styles
12

Kaya, Dogan. "Parallel algorithms for numerical linear algebra on a shared memory multiprocessor." Thesis, University of Newcastle Upon Tyne, 1995. http://hdl.handle.net/10443/2008.

Full text
Abstract:
This thesis discusses a variety of parallel algorithms for linear algebra problems including the solution of the linear system of equations Ax = b using QR and LU decomposition, reduction of a general matrix A to Hessenberg form, reduction of a real symmetric matrix B to tridiagonal form, and solution of the symmetric tridiagonal eigenproblem. Empirical comparisons are carried out using various different versions of the above algorithms and this is described in this thesis. We also compare three different synchronisation mechanisms when applied to the reduction to Hessenberg form problem. We implement Cuppen's method for computing both eigenvalues and eigenvectors of a real symmetric tridiagonal matrix T using both recursive and non-recursive implementations. We consider parallel implementations of these versions and also consider parallelisation of the matrix multiplication part of the algorithm. We present some numerical results illustrating an experimental evaluation of the effect of deflation on accuracy, comparison of the parallel implementations and comparison of the additional parallelisation for matrix multiplication. A variety of algorithms are investigated which involve varying amounts of overlap between different parts of the calculation and collecting together updates as far as possible to make good use of the storage hierarchy of the shared memory multiprocessor. Algorithms using dynamic task allocation are compared with ones which do not. The results presented have been obtained using the C++ programming language, with parallel constructs provided by the Encore Parallel Threads package on a shared memory Encore Multimax (MIMD) computer. The experimental results demonstrate that dynamic task allocation can sometimes be very effective on this machine, and that very high efficiency is often obtainable with careful construction of the parallel algorithms even for relatively small matrices.
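The serial pipeline that these parallel algorithms target, reducing a symmetric matrix to tridiagonal form and then solving the tridiagonal eigenproblem (Cuppen's divide-and-conquer is one algorithm for the second stage), can be sketched with standard SciPy routines; this is only an illustration of the problem being parallelised, not of the thesis's own implementations.

```python
import numpy as np
from scipy.linalg import hessenberg, eigh_tridiagonal

rng = np.random.default_rng(1)
B = rng.standard_normal((300, 300))
B = (B + B.T) / 2                      # real symmetric matrix

# Householder reduction: for symmetric B the Hessenberg form is tridiagonal
T, Q = hessenberg(B, calc_q=True)
d, e = np.diag(T), np.diag(T, 1)

# Solve the symmetric tridiagonal eigenproblem; Cuppen's divide-and-conquer
# method is one classical algorithm for exactly this stage.
w, V = eigh_tridiagonal(d, e)

# eigenpairs of B are recovered through the orthogonal reduction matrix Q
assert np.allclose(w, np.linalg.eigvalsh(B), atol=1e-10)
v0 = Q @ V[:, 0]
assert np.allclose(B @ v0, w[0] * v0, atol=1e-8)
```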
APA, Harvard, Vancouver, ISO, and other styles
13

Badreddine, Siwar. "Symétries et structures de rang faible des matrices et tenseurs pour des problèmes en chimie quantique." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS029.

Full text
Abstract:
This thesis presents novel numerical algorithms and conducts a comprehensive study of some existing numerical methods to address high-dimensional challenges arising from the resolution of the electronic Schrödinger equation in quantum chemistry. Focusing on two specific problems, our approach involves the identification and exploitation of symmetries and low-rank structures within matrices and tensors, aiming to mitigate the curse of dimensionality. The first problem considered in this thesis is the efficient numerical evaluation of the long-range component of the range-separated Coulomb potential and the long-range two-electron integrals, a fourth-order tensor that occurs in many quantum chemistry methods. We present two novel approximation methods. This is achieved by relying on tensorized Chebyshev interpolation, Gaussian quadrature rules combined with low-rank approximations, as well as Fast Multipole Methods (FMM). This work offers a detailed explanation of these introduced approaches and algorithms, accompanied by a thorough comparison between the newly proposed methods. The second problem of interest is the exploitation of symmetries and low-rank structures to derive efficient tensor train representations of operators involved in the Density Matrix Renormalization Group (DMRG) algorithm. This algorithm, referred to as the Quantum Chemical DMRG (QC-DMRG) when applied in the field of quantum chemistry, is an accurate iterative optimization method employed to numerically solve the time-independent Schrödinger equation. This work aims to understand and interpret the results obtained from the physics and chemistry communities and seeks to offer novel theoretical insights that, to the best of our knowledge, have not received significant attention before. We conduct a comprehensive study and provide demonstrations, when necessary, to explore the existence of a particular block-sparse tensor train representation of the Hamiltonian operator and its associated eigenfunction. This is achieved while maintaining physical conservation laws, manifested as group symmetries in tensors, such as the conservation of the particle number. The third part of this work is dedicated to the realization of a proof-of-concept Quantum Chemical DMRG (QC-DMRG) Julia library, designed for the quantum chemical Hamiltonian operator model. We exploit here the block-sparse tensor train representation of both the operator and the eigenfunction. With these structures, our goal is to speed up the most time-consuming steps in QC-DMRG, including tensor contractions, matrix-vector operations, and matrix compression through truncated Singular Value Decompositions (SVD). Furthermore, we provide empirical results from various molecular simulations, while comparing the performance of our library with the state-of-the-art ITensors library, where we show that we attain similar performance.
APA, Harvard, Vancouver, ISO, and other styles
14

Nguyen, Hong Diep. "Efficient algorithms for verified scientific computing : Numerical linear algebra using interval arithmetic." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00680352.

Full text
Abstract:
Interval arithmetic is a means to compute verified results. However, a naive use of interval arithmetic does not provide accurate enclosures of the exact results. Moreover, interval arithmetic computations can be time-consuming. We propose several accurate algorithms and efficient implementations in verified linear algebra using interval arithmetic. Two fundamental problems are addressed, namely the multiplication of interval matrices and the verification of a floating-point solution of a linear system. For the first problem, we propose two algorithms which offer new tradeoffs between speed and accuracy. For the second problem, which is the verification of the solution of a linear system, our main contributions are twofold. First, we introduce a relaxation technique, which reduces drastically the execution time of the algorithm. Second, we propose to use extended precision for a few, well-chosen parts of the computations, to gain accuracy without losing much in terms of execution time.
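As background, a naive midpoint-radius interval matrix product can be written in a few NumPy lines; the sketch below deliberately ignores the control of floating-point rounding modes that a fully verified implementation (and this thesis) must also handle, so the enclosure shown is only valid in exact arithmetic.

```python
import numpy as np

def interval_matmul(mA, rA, mB, rB):
    """Midpoint-radius product of interval matrices [mA +/- rA] @ [mB +/- rB].

    Classical enclosure: the result midpoint is mA @ mB and the radius bounds
    every product of point matrices drawn from the two inputs.
    NOTE: the roundoff of the midpoint product itself is not enclosed here.
    """
    mC = mA @ mB
    rC = np.abs(mA) @ rB + rA @ (np.abs(mB) + rB)
    return mC, rC

# containment check by sampling point matrices from the input intervals
rng = np.random.default_rng(2)
mA, rA = rng.standard_normal((50, 50)), 1e-3 * rng.random((50, 50))
mB, rB = rng.standard_normal((50, 50)), 1e-3 * rng.random((50, 50))
mC, rC = interval_matmul(mA, rA, mB, rB)
A = mA + rA * rng.uniform(-1, 1, mA.shape)
B = mB + rB * rng.uniform(-1, 1, mB.shape)
assert np.all(np.abs(A @ B - mC) <= rC + 1e-12)
```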
APA, Harvard, Vancouver, ISO, and other styles
15

Musco, Cameron N. (Cameron Nicholas). "The power of randomized algorithms : from numerical linear algebra to biological systems." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120424.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 323-347).
In this thesis we study simple, randomized algorithms from a dual perspective. The first part of the work considers how randomized methods can be used to accelerate the solution of core problems in numerical linear algebra. In particular, we give a randomized low-rank approximation algorithm for positive semidefinite matrices that runs in sublinear time, significantly improving upon what is possible with traditional deterministic methods. We also discuss lower bounds on low-rank approximation and spectral summarization problems that attempt to explain the importance of randomization and approximation in accelerating linear algebraic computation. The second part of the work considers how the theory of randomized algorithms can be used more generally as a tool to understand how complexity emerges from low-level stochastic behavior in biological systems. We study population density estimation in ant colonies, which is a key primitive in social decision-making and task allocation. We define a basic computational model and show how agents in this model can estimate their density using a simple random-walk-based algorithm. We also consider simple randomized algorithms for computational primitives in spiking neural networks, focusing on fast winner-take-all networks.
by Cameron Nicholas Musco.
Ph. D.
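As background for the randomized low-rank approximation theme above, here is a small NumPy sketch of the Nyström method for positive semidefinite matrices, which only touches a subset of columns; the plain uniform column sampling and test matrix are illustrative and are not the sublinear-time algorithm developed in the thesis.

```python
import numpy as np

def nystrom_psd(A, k, rng=np.random.default_rng()):
    """Rank-k Nystrom approximation of a PSD matrix built from k sampled columns."""
    n = A.shape[0]
    idx = rng.choice(n, size=k, replace=False)
    C = A[:, idx]                        # only k columns of A are read
    W = C[idx, :]                        # k x k core block
    # A ~= C W^+ C^T ; use a pseudo-inverse for numerical safety
    return C @ np.linalg.pinv(W) @ C.T

# PSD test matrix of exact rank 40, so 60 sampled columns recover it (almost surely)
rng = np.random.default_rng(3)
G = rng.standard_normal((400, 40))
A = G @ G.T
err = np.linalg.norm(A - nystrom_psd(A, 60, rng)) / np.linalg.norm(A)
print(f"relative error of the Nystrom approximation: {err:.2e}")
```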
APA, Harvard, Vancouver, ISO, and other styles
16

Theveny, Philippe. "Numerical Quality and High Performance In Interval Linear Algebra on Multi-Core Processors." Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0941/document.

Full text
Abstract:
This work aims at determining suitable scopes for several algorithms of interval matrices multiplication. First, we quantify the numerical quality. Former error analyses of interval matrix products establish bounds on the radius overestimation by neglecting the roundoff error. We discuss here several possible measures for interval approximations. We then bound the roundoff error and compare experimentally this bound with the global error distribution on several random data sets. This approach highlights the relative importance of the roundoff and arithmetic errors depending on the value and homogeneity of relative accuracies of inputs, on the matrix dimension, and on the working precision. This also leads to a new algorithm that is cheaper yet as accurate as previous ones under well-identified conditions. Second, we exploit the parallelism of linear algebra. Previous implementations use calls to BLAS routines on numerical matrices. We show that this may lead to wrong interval results and also restrict the scalability of the performance when the core count increases. To overcome these problems, we implement a blocking version with OpenMP threads executing block kernels with vector instructions. The timings on a 4-octo-core machine show that this implementation is more scalable than the BLAS one and that the costs of numerical and interval matrix products are comparable.
APA, Harvard, Vancouver, ISO, and other styles
17

Phillips, Adam. "GPU Accelerated Approach to Numerical Linear Algebra and Matrix Analysis with CFD Applications." Honors in the Major Thesis, University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1635.

Full text
Abstract:
A GPU accelerated approach to numerical linear algebra and matrix analysis with CFD applications is presented. The work's objectives are to (1) develop stable and efficient algorithms utilizing multiple NVIDIA GPUs with CUDA to accelerate common matrix computations, (2) optimize these algorithms through CPU/GPU memory allocation, GPU kernel development, CPU/GPU communication, data transfer and bandwidth control to (3) develop parallel CFD applications for Navier-Stokes and Lattice Boltzmann analysis methods. Special consideration will be given to performing the linear algebra algorithms under certain matrix types (banded, dense, diagonal, sparse, symmetric and triangular). Benchmarks are performed for all analyses with baseline CPU times being determined to find speed-up factors and measure computational capability of the GPU accelerated algorithms. The GPU implemented algorithms used in this work along with the optimization techniques performed are measured against preexisting work and test matrices available in the NIST Matrix Market. CFD analysis looked to strengthen the assessment of this work by providing a direct engineering application that would benefit from matrix optimization techniques and accelerated algorithms. Overall, this work aimed to develop optimizations for selected linear algebra and matrix computations performed with modern GPU architectures and CUDA, applied directly to mathematical and engineering applications through CFD analysis.
B.S.
Bachelors
Mathematics
Sciences
APA, Harvard, Vancouver, ISO, and other styles
18

CONCAS, ANNA. "Numerical Linear Algebra applications in Archaeology: the seriation and the photometric stereo problems." Doctoral thesis, Università degli Studi di Cagliari, 2020. http://hdl.handle.net/11584/285374.

Full text
Abstract:
The aim of this thesis is to explore the application of Numerical Linear Algebra to Archaeology. An ordering problem called the seriation problem, used for dating findings and/or artifact deposits, is analysed in terms of graph theory. In particular, a Matlab implementation of an algorithm for spectral seriation, based on the use of the Fiedler vector of the Laplacian matrix associated with the problem, is presented. We consider bipartite graphs for describing the seriation problem, since the interrelationship between the units (i.e. archaeological sites) to be reordered can be described in terms of these graphs. In our archaeological metaphor of seriation, the two disjoint node sets into which the vertices of a bipartite graph can be divided represent the excavation sites and the artifacts found inside them. Since it is a difficult task to determine the closest bipartite network to a given one, we describe how a starting network can be approximated by a bipartite one by solving a sequence of fairly simple optimization problems. Another numerical problem related to Archaeology is the 3D reconstruction of the shape of an object from a set of digital pictures. In particular, the Photometric Stereo (PS) photographic technique is considered.
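The spectral seriation step itself, ordering the units by the entries of the Fiedler vector of the graph Laplacian, fits in a few lines of Python (NumPy assumed); the bipartite-projection machinery and the Matlab implementation of the thesis are not reproduced here, and the toy similarity matrix is invented.

```python
import numpy as np

def spectral_seriation(S):
    """Order items by the Fiedler vector of the Laplacian of a similarity matrix S."""
    S = (S + S.T) / 2                     # symmetric, nonnegative similarities
    L = np.diag(S.sum(axis=1)) - S        # graph Laplacian
    w, V = np.linalg.eigh(L)
    fiedler = V[:, 1]                     # eigenvector of the second-smallest eigenvalue
    return np.argsort(fiedler)

# a banded ("Robinson") similarity structure whose true order has been shuffled
n = 30
true = np.arange(n)
S = np.maximum(0.0, 5.0 - np.abs(true[:, None] - true[None, :]))
perm = np.random.default_rng(4).permutation(n)
order = spectral_seriation(S[np.ix_(perm, perm)])
print(perm[order])                        # recovered ordering (possibly reversed)
```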
APA, Harvard, Vancouver, ISO, and other styles
19

Shikongo, Albert. "Numerical Treatment of Non-Linear singular perturbation problems." Thesis, Online access, 2007. http://etd.uwc.ac.za/usrfiles/modules/etd/docs/etd_gen8Srv25Nme4_3831_1257936459.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Najahi, Mohamed amine. "Synthesis of certified programs in fixed-point arithmetic, and its application to linear algebra basic blocks." Thesis, Perpignan, 2014. http://www.theses.fr/2014PERP1212.

Full text
Abstract:
To be cost effective, embedded systems are shipped with low-end micro-processors. These processors are dedicated to one or a few tasks that are highly demanding on computational resources. Examples of widely deployed tasks include the fast Fourier transform, convolutions, and digital filters. For these tasks to run efficiently, embedded systems programmers favor fixed-point arithmetic over the standardized and costly floating-point arithmetic. However, they are faced with two difficulties: First, writing fixed-point codes is tedious and requires that the programmer be in charge of every arithmetical detail. Second, because of the low dynamic range of fixed-point numbers compared to floating-point numbers, there is a persistent belief that fixed-point computations are inherently inaccurate. The first part of this thesis addresses these two limitations as follows: It shows how to design and implement tools to automatically synthesize fixed-point programs. Next, to strengthen the user's confidence in the synthesized codes, analytic methods are suggested to generate certificates. These certificates can be checked using a formal verification tool, and assert that the rounding errors of the generated codes are indeed below a given threshold. The second part of the thesis is a study of the trade-offs involved when generating fixed-point code for linear algebra basic blocks. It gives experimental data on fixed-point synthesis for matrix multiplication and matrix inversion through Cholesky decomposition.
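The basic fixed-point idea, storing a real number as an integer with an implicit binary point and tracking the rounding error of each rescaling, can be sketched as follows; the Q-format class and the dot-product example are standard textbook material written for illustration, not the thesis's generated code.

```python
from dataclasses import dataclass

@dataclass
class Fixed:
    """A signed fixed-point value with f fractional bits (Q-format)."""
    raw: int          # stored integer
    f: int            # number of fractional bits

    @classmethod
    def from_float(cls, x, f):
        return cls(round(x * (1 << f)), f)

    def to_float(self):
        return self.raw / (1 << self.f)

    def __add__(self, other):          # same format assumed: addition is exact
        assert self.f == other.f
        return Fixed(self.raw + other.raw, self.f)

    def __mul__(self, other):          # exact product with self.f + other.f fractional bits
        return Fixed(self.raw * other.raw, self.f + other.f)

    def rescale(self, f):              # rounding step: introduces |error| <= 2^-(f+1)
        shift = self.f - f
        return Fixed((self.raw + (1 << (shift - 1))) >> shift, f)

# dot product of two Q1.15 vectors, accumulated exactly and rounded once at the end
xs = [Fixed.from_float(v, 15) for v in (0.50, -0.25, 0.125)]
ys = [Fixed.from_float(v, 15) for v in (0.75, 0.30, -0.60)]
acc = Fixed(0, 30)
for a, b in zip(xs, ys):
    acc = acc + (a * b)
print(acc.rescale(15).to_float())      # compare with 0.5*0.75 - 0.25*0.3 - 0.125*0.6
```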
APA, Harvard, Vancouver, ISO, and other styles
21

Takahashi, Ryan. "Structured Matrices and the Algebra of Displacement Operators." Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/hmc_theses/45.

Full text
Abstract:
Matrix calculations underlie countless problems in science, mathematics, and engineering. When the involved matrices are highly structured, displacement operators can be used to accelerate fundamental operations such as matrix-vector multiplication. In this thesis, we provide an introduction to the theory of displacement operators and study the interplay between displacement and natural matrix constructions involving direct sums, Kronecker products, and blocking. We also investigate the algebraic behavior of displacement operators, developing results about invertibility and kernels.
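The low-rank displacement phenomenon behind this theory is easy to verify numerically: for a Toeplitz matrix T and the down-shift matrix Z, the Stein-type displacement T - Z T Z^T has rank at most 2. The NumPy/SciPy sketch below uses arbitrary data and is independent of the thesis.

```python
import numpy as np
from scipy.linalg import toeplitz

n = 8
rng = np.random.default_rng(5)
T = toeplitz(rng.standard_normal(n), rng.standard_normal(n))   # a generic Toeplitz matrix
Z = np.eye(n, k=-1)                                            # down-shift matrix

# Stein-type displacement: nonzero only in the first row and first column
D = T - Z @ T @ Z.T
print(np.linalg.matrix_rank(D))        # 2: the displacement rank of any Toeplitz matrix

# Consequently T is fully described by O(n) generator data rather than n^2 entries,
# which is what displacement-based fast algorithms exploit.
```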
APA, Harvard, Vancouver, ISO, and other styles
22

Karakutuk, Serkan. "Blind And Semi-blind Channel Order Estimation In Simo Systems." Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611107/index.pdf.

Full text
Abstract:
Channel order estimation is an important problem in many fields including signal processing, communications, acoustics, and more. In this thesis, blind channel order estimation problem is considered for single-input, multi-output (SIMO) FIR systems. The problem is to estimate the effective channel order for the SIMO system given only the output samples corrupted by noise. Two new methods for channel order estimation are presented. These methods have several useful features compared to the currently known techniques. They are guaranteed to find the true channel order for noise free case and they perform significantly better for noisy observations. These algorithms show a consistent performance when the number of observations, channels and channel order are changed. The proposed algorithms are integrated with the least squares smoothing (LSS) algorithm for blind identification of the channel coefficients. LSS algorithm is selected since it is a deterministic algorithm and has some additional features suitable for order estimation. The proposed algorithms are compared with a variety of different algorithms including linear prediction (LP) based methods. LP approaches are known to be robust to channel order overestimation. In this thesis, it is shown that significant gain can be obtained compared to LP based approaches when the proposed techniques are used. The proposed algorithms are also compared with the oversampled single-input, single-output (SISO) system with a generic decision feedback equalizer, and better mean-square error performance is observed for the blind setting. Channel order estimation problem is also investigated for semi-blind systems where a pilot signal is used which is known at the receiver. In this case, two new methods are proposed which exploit the pilot signal in different ways. When both unknown and pilot symbols are used, a better estimation performance can be achieved compared to the proposed blind methods. The semi-blind approach is especially effective in terms of bit error rate (BER) evaluation thanks to the use of pilot symbols in better estimation of channel coefficients. This approach is also more robust to ill-conditioned channels. The constraints for these approaches, such as synchronization, and the decrease in throughput still make the blind approaches a good alternative for channel order estimation. True and effective channel order estimation topics are discussed in detail and several simulations are done in order to show the significant performance gain achieved by the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
23

Wilkerson, Owen Tanner. "Fast, Sparse Matrix Factorization and Matrix Algebra via Random Sampling for Integral Equation Formulations in Electromagnetics." UKnowledge, 2019. https://uknowledge.uky.edu/ece_etds/147.

Full text
Abstract:
Many systems designed by electrical & computer engineers rely on electromagnetic (EM) signals to transmit, receive, and extract either information or energy. In many cases, these systems are large and complex. Their accurate, cost-effective design requires high-fidelity computer modeling of the underlying EM field/material interaction problem in order to find a design with acceptable system performance. This modeling is accomplished by projecting the governing Maxwell equations onto finite dimensional subspaces, which results in a large matrix equation representation (Zx = b) of the EM problem. In the case of integral equation-based formulations of EM problems, the M-by-N system matrix, Z, is generally dense. For this reason, when treating large problems, it is necessary to use compression methods to store and manipulate Z. One such sparse representation is provided by so-called H^2 matrices. At low-to-moderate frequencies, H^2 matrices provide a controllably accurate data-sparse representation of Z. The scale at which problems in EM are considered "large" is continuously being redefined to be larger. This growth of problem scale is not only happening in EM, but across all other sub-fields of computational science as well. The pursuit of increasingly large problems is unwavering in all these sub-fields, and this drive has long outpaced the rate of advancements in processing and storage capabilities in computing. This has caused computational science communities to now face the computational limitations of standard linear algebraic methods that have been relied upon for decades to run quickly and efficiently on modern computing hardware. This common set of algorithms can only produce reliable results quickly and efficiently for small to mid-sized matrices that fit into the memory of the host computer. Therefore, the drive to pursue larger problems has even begun to outpace the reasonable capabilities of these common numerical algorithms; the deterministic numerical linear algebra algorithms that have gotten matrix computation this far have proven to be inadequate for many problems of current interest. This has computational science communities focusing on improvements in their mathematical and software approaches in order to push further advancement. Randomized numerical linear algebra (RandNLA) is an emerging area that both academia and industry believe to be a strong candidate to assist in overcoming the limitations faced when solving massive and computationally expensive problems. This thesis presents results of recent work that uses a random sampling method (RSM) to implement algebraic operations involving multiple H^2 matrices. Significantly, this work is done in a manner that is non-invasive to an existing H^2 code base for filling and factoring H^2 matrices. The work presented thus expands the existing code's capabilities with minimal impact on existing (and well-tested) applications. In addition to this work with randomized H^2 algebra, improvements in sparse factorization methods for the compressed H^2 data structure will also be presented. The reported developments in filling and factoring H^2 data structures assist in, and allow for, the further pursuit of large and complex problems in computational EM (CEM) within simulation code bases that utilize the H^2 data structure.
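The random-sampling flavour of RandNLA can be illustrated with the basic randomized range finder: probe the operator with a block of random vectors, orthogonalise the responses, and project. The NumPy sketch below uses a dense stand-in matrix and is a generic textbook construction, not the RSM developed in the thesis or its H^2 machinery.

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, rng=np.random.default_rng()):
    """Rank-(k + oversample) factorization A ~= Q @ (Q^T A) from random range samples."""
    n = A.shape[1]
    Omega = rng.standard_normal((n, k + oversample))   # random probe block
    Q, _ = np.linalg.qr(A @ Omega)                     # orthonormal basis for the sampled range
    return Q, Q.T @ A

# stand-in operator with rapidly decaying singular values
rng = np.random.default_rng(6)
U, _ = np.linalg.qr(rng.standard_normal((300, 300)))
V, _ = np.linalg.qr(rng.standard_normal((300, 300)))
A = U @ np.diag(0.5 ** np.arange(300)) @ V.T
Q, B = randomized_low_rank(A, k=20, rng=rng)
print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))   # small: the sketch captured the range
```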
APA, Harvard, Vancouver, ISO, and other styles
24

Shank, Stephen David. "Low-rank solution methods for large-scale linear matrix equations." Diss., Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/273331.

Full text
Abstract:
Mathematics
Ph.D.
We consider low-rank solution methods for certain classes of large-scale linear matrix equations. Our aim is to adapt existing low-rank solution methods based on standard, extended and rational Krylov subspaces to solve equations which may be viewed as extensions of the classical Lyapunov and Sylvester equations. The first class of matrix equations that we consider are constrained Sylvester equations, which essentially consist of Sylvester's equation along with a constraint on the solution matrix. These therefore constitute a system of matrix equations. The second are generalized Lyapunov equations, which are Lyapunov equations with additional terms. Such equations arise as computational bottlenecks in model order reduction.
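The low-rank phenomenon such methods exploit can be seen on a small classical Lyapunov equation A X + X A^T = -B B^T with a low-rank right-hand side: the singular values of X decay quickly, so X admits a good low-rank factorization. The SciPy sketch below uses a dense solver purely for illustration; the thesis is about avoiding exactly that for large-scale problems.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(7)
n = 200
# a stable symmetric matrix (negative 1D Laplacian) and a rank-2 right-hand-side factor
A = -(2 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
B = rng.standard_normal((n, 2))

# Solve A X + X A^T = -B B^T
X = solve_continuous_lyapunov(A, -B @ B.T)
s = np.linalg.svd(X, compute_uv=False)
print(s[:40:4] / s[0])     # fast singular-value decay => X is numerically low rank
```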
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
25

Kaperick, Bryan James. "Diagonal Estimation with Probing Methods." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90402.

Full text
Abstract:
Probing methods for trace estimation of large, sparse matrices have been studied for several decades. In recent years, there has been some work to extend these techniques to instead estimate the diagonal entries of these systems directly. We extend some analysis of trace estimators to their corresponding diagonal estimators, propose a new class of deterministic diagonal estimators which are well-suited to parallel architectures along with heuristic arguments for the design choices in their construction, and conclude with numerical results on diagonal estimation and ordering problems, demonstrating the strengths of our newly-developed methods alongside existing methods.
Master of Science
In the past several decades, as computational resources increase, a recurring problem is that of estimating certain properties of very large linear systems (matrices containing real or complex entries). One particularly important quantity is the trace of a matrix, defined as the sum of the entries along its diagonal. In this thesis, we explore a problem that has only recently been studied, namely estimating the diagonal entries of a particular matrix explicitly. For these methods to be computationally more efficient than existing methods, and to have favorable convergence properties, we require the matrix in question to have a majority of its entries be zero (the matrix is sparse), with the largest-magnitude entries clustered near and on its diagonal, and to be very large in size. In fact, this thesis focuses on a class of methods called probing methods, which are of particular efficiency when the matrix is not known explicitly, but rather can only be accessed through matrix-vector multiplications with arbitrary vectors. Our contribution is new analysis of these diagonal probing methods which extends the heavily-studied trace estimation problem, new applications for which probing methods are a natural choice for diagonal estimation, and a new class of deterministic probing methods which have favorable properties for large parallel computing architectures which are becoming ever-more-necessary as problem sizes continue to increase beyond the scope of single processor architectures.
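The stochastic baseline that such probing methods are measured against is only a few lines: average v * (A v) over random sign vectors, which needs nothing but matrix-vector products. The sketch below is this generic Hutchinson-style estimator with illustrative names, not the deterministic probing vectors proposed in the thesis.

```python
import numpy as np

def estimate_diagonal(matvec, n, num_probes=200, rng=np.random.default_rng()):
    """Estimate diag(A) using only matrix-vector products with A.

    With independent +/-1 (Rademacher) entries in v, E[v * (A v)] = diag(A),
    so averaging over probes converges to the diagonal at rate O(1/sqrt(num_probes)).
    """
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        num += v * matvec(v)
        den += v * v
    return num / den

rng = np.random.default_rng(8)
A = np.diag(np.arange(1.0, 101.0)) + 0.1 * rng.standard_normal((100, 100))
est = estimate_diagonal(lambda v: A @ v, 100, rng=rng)
print(np.max(np.abs(est - np.diag(A))))     # error shrinks like 1/sqrt(num_probes)
```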
APA, Harvard, Vancouver, ISO, and other styles
26

Rubensson, Emanuel H. "Matrix Algebra for Quantum Chemistry." Doctoral thesis, Stockholm : Bioteknologi, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ozdamar, Huseyin Hasan. "A Stiffened DKT Shell Element." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12605741/index.pdf.

Full text
Abstract:
A stiffened DKT shell element is formulated for the linear static analysis of stiffened plates and shells. Three-noded triangular shell elements and two-noded beam elements with 18 and 12 degrees of freedom, respectively, are used in the formulation. The stiffeners follow the nodal lines of the shell element. Eccentricity of the stiffener is taken into account. The dynamic and stability characteristics of the element are also investigated. With the developed computer program, the results obtained by the proposed element agree fairly well with the existing literature.
APA, Harvard, Vancouver, ISO, and other styles
28

Durmaz, Murat. "One-dimensional Real-time Signal Denoising Using Wavelet-based Kalman Filtering." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608336/index.pdf.

Full text
Abstract:
Denoising signals is an important task of digital signal processing. Many linear and non-linear methods for signal denoising have been developed. Wavelet-based denoising has lately become the most popular nonlinear denoising method. In the linear case, the Kalman filter is famous for its easy implementation and real-time nature. The wavelet-Kalman filter developed recently is an important improvement over the Kalman filter, in which the Kalman filter operates in the wavelet domain, filtering the wavelet coefficients, and resulting in the filtered wavelet transform of the signal in real-time. The real-time filtering and multiresolution representation is a powerful feature for many real world applications. This study explains in detail the derivation and implementation of the Real-Time Wavelet-Kalman Filter method to remove noise from signals in real-time. The filter is enhanced to use wavelet types other than the Haar wavelet, and it is also improved to operate on block sizes higher than two. Wavelet shrinkage is integrated into the filter and it is shown that by utilizing this integration more noise suppression is obtainable. A user friendly application is developed in the Java programming language to import, filter and export signals. And finally, the applicability of the proposed method to suppress noise from seismic waves coming from earthquakes and to enhance spontaneous potentials measured from groundwater wells is also shown.
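The wavelet-shrinkage component on its own can be reproduced in a few lines with the PyWavelets package (assumed installed); this is the usual offline, whole-signal version, whereas the thesis integrates shrinkage block-by-block into a real-time Kalman recursion, which is not shown here.

```python
import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(9)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))
noisy = clean + 0.3 * rng.standard_normal(t.size)

# decompose, soft-threshold the detail coefficients, reconstruct
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
thresh = sigma * np.sqrt(2 * np.log(noisy.size))      # universal threshold
denoised = pywt.waverec(
    [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]],
    "db4")

# the denoised signal is closer to the clean one than the noisy input
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```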
APA, Harvard, Vancouver, ISO, and other styles
29

Kaskaloglu, Kerem. "Some Generalized Multipartite Access Structures." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/2/12611965/index.pdf.

Full text
Abstract:
In this work, we study some generalized multipartite access structures and linear secret sharing schemes for their realizations. Given a multipartite set of participants with m compartments (or levels) and m conditions to be satisfied by an authorized set, we first examine the intermediary access structures arising from the natural case in which any c out of m of these conditions suffice, instead of requiring any one or all of the m conditions simultaneously, yielding generalizations for both the compartmented and hierarchical cases. These are realized essentially by employing a series of Lagrange interpolations and a simple frequently-used connective tool called the access structure product, as well as some known constructions for existing ideal schemes. The resulting schemes are non-ideal but perfect. We also consider nested multipartite access structures, where we let a compartment be defined within another, so that the access structure is composed of some multipartite substructures. We extend formerly employed bivariate interpolation techniques to multivariate interpolation, in order to realize such access structures. The generic scheme we consider is perfect with high probability, namely 1-O(1/q) over a finite field F_q. In particular, we propose a non-nested generalization of the conventional compartmented access structures, which depicts a stronger way of controlling the additional participants.
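The Lagrange-interpolation building block underlying such constructions is ordinary Shamir secret sharing over a prime field; the toy Python sketch below shows only this primitive, not the compartmented or nested schemes assembled from it in the thesis, and the modulus is an arbitrary choice.

```python
import random

P = 2**61 - 1          # a prime modulus (toy choice)

def share(secret, t, n):
    """Split `secret` into n shares so that any t of them recover it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the constant term, i.e. the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        lj = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                lj = lj * xm % P * pow(xm - xj, -1, P) % P
        secret = (secret + yj * lj) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789      # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```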
APA, Harvard, Vancouver, ISO, and other styles
30

Fasi, Massimiliano. "Weighted geometric mean of large-scale matrices: numerical analysis and algorithms." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8274/.

Full text
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable when the size of the matrices involved grows. However, if we are not interested in the computation of the matrix function itself, but just in that of its product times a vector, the problem becomes simpler and there is a chance to solve it even when the matrix mean itself would actually be impossible to compute. Our interest is motivated by the fact that this calculation has some practical applications, related to the preconditioning of some operators arising in domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be efficiently performed. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and also try to assess how convergence speed and execution time are influenced by some characteristics of the input matrices. Our results suggest that a few elements have some bearing on the performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
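For small dense matrices, the quantity in question, (A #_t B) v = A (A^{-1} B)^t v, can be checked directly with a fractional matrix power, as in the SciPy sketch below with arbitrary SPD test data; the quadrature and Krylov methods studied in the thesis are what make the same product feasible when A and B are large and sparse.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, solve

rng = np.random.default_rng(10)
n, t = 50, 0.5
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = G1 @ G1.T + n * np.eye(n)          # two SPD matrices
B = G2 @ G2.T + n * np.eye(n)
v = rng.standard_normal(n)

# weighted geometric mean A #_t B = A (A^{-1} B)^t ; here t = 1/2 (the plain geometric mean)
X = A @ fractional_matrix_power(solve(A, B), t)

# consistency check for t = 1/2: the geometric mean X satisfies X A^{-1} X = B
print(np.linalg.norm(X @ solve(A, X) - B) / np.linalg.norm(B))   # ~ machine precision

mean_v = X @ v    # the thesis computes this product without ever forming X
```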
APA, Harvard, Vancouver, ISO, and other styles
31

Sète, Olivier [Verfasser], Jörg [Akademischer Betreuer] Liesen, Jörg [Gutachter] Liesen, Reinhard [Gutachter] Nabben, and Elias [Gutachter] Wegert. "On interpolation and approximation problems in numerical linear algebra / Olivier Sète ; Gutachter: Jörg Liesen, Reinhard Nabben, Elias Wegert ; Betreuer: Jörg Liesen." Berlin : Technische Universität Berlin, 2016. http://d-nb.info/1156018498/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Eisenlohr, John Merrick. "Parallel ILU Preconditioning for Structured Grid Matrices." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429820221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Ladenheim, Scott Aaron. "Constraint Preconditioning of Saddle Point Problems." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/319906.

Full text
Abstract:
Mathematics
Ph.D.
This thesis is concerned with the fast iterative solution of linear systems of equations of saddle point form. Saddle point matrices are a ubiquitous class of matrices that arise in a host of computational science and engineering applications. The focus here is on improving the convergence of iterative methods for these problems by preconditioning. Preconditioning is a way to transform a given linear system into a different problem for which iterative methods converge faster. Saddle point matrices have a very specific block structure and many preconditioning strategies for these problems exploit this structure. The preconditioners considered in this thesis are constraint preconditioners. This class of preconditioner mimics the structure of the original saddle point problem. In this thesis, we prove norm- and field-of-values-equivalence for constraint preconditioners associated to saddle point matrices with a particular structure. As a result of these equivalences, the number of iterations needed for convergence of a constraint preconditioned minimal residual Krylov subspace method is bounded, independent of the size of the matrix. In particular, for saddle point systems that arise from the finite element discretization of partial differential equations (p.d.e.s), the number of iterations it takes for GMRES to converge for these constraint preconditioned systems is bounded (asymptotically), independent of the mesh width. Moreover, we extend these results when appropriate inexact versions of the constraint preconditioner are used. We illustrate this theory by presenting numerical experiments on saddle point matrices that arise from the finite element solution of coupled Stokes-Darcy flow. This is a system of p.d.e.s that models the coupling of a free flow to a porous media flow by conditions across the interface of the two flow regions. We present experiments in both two and three dimensions, using different types of elements (triangular, quadrilateral), different finite element schemes (continuous, discontinuous Galerkin methods), and different geometries. In all cases, the effectiveness of the constraint preconditioner is demonstrated.
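As a rough illustration of the idea, a constraint preconditioner keeps the constraint blocks of the saddle point matrix and replaces the (1,1) block with a cheap approximation. The sketch below uses an assumed diagonal approximation G = diag(A) and a generic Krylov solver; it is not one of the specific constructions analyzed in the thesis.

```python
# Constraint preconditioning sketch: the preconditioner keeps the constraint
# block B and replaces the (1,1) block A by G = diag(A) (an assumed choice).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, splu, LinearOperator

rng = np.random.default_rng(1)
n, m = 80, 20
A = sp.diags(rng.uniform(1, 2, n)) + 0.1 * sp.random(n, n, 0.05, random_state=1)
A = A + A.T                                       # symmetric positive definite block
B = sp.random(m, n, 0.2, random_state=2).tocsr()  # constraint block
K = sp.bmat([[A, B.T], [B, None]]).tocsc()        # saddle point matrix
P = sp.bmat([[sp.diags(A.diagonal()), B.T], [B, None]]).tocsc()

apply_P_inv = splu(P).solve                       # direct solve with the preconditioner
M = LinearOperator(K.shape, matvec=apply_P_inv)
b = rng.standard_normal(n + m)
x, info = gmres(K, b, M=M)
print(info, np.linalg.norm(K @ x - b))            # convergence flag and residual
```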
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Weijian. "Evolving graphs and similarity-based graphs with applications." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/evolving-graphs-and-similaritybased-graphs-with-applications(66a23d3d-1ad0-454b-9ba0-175b566af95d).html.

Full text
Abstract:
A graph is a mathematical structure for modelling the pairwise relations between objects. This thesis studies two types of graphs, namely, similarity-based graphs and evolving graphs. We look at ways to traverse an evolving graph. In particular, we examine the influence of temporal information on node centrality. In the process, we develop EvolvingGraphs.jl, a software package for analyzing time-dependent networks. We develop Etymo, a search system for discovering interesting research papers. Etymo utilizes both similarity-based graphs and evolving graphs to build a knowledge graph of research articles in order to help users track the development of ideas. We construct content similarity-based graphs using the full text of research papers, and we extract key concepts from research papers and exploit their temporal information to construct an evolving graph of concepts.
APA, Harvard, Vancouver, ISO, and other styles
35

DiPaolo, Conner. "Randomized Algorithms for Preconditioner Selection with Applications to Kernel Regression." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/hmc_theses/230.

Full text
Abstract:
The task of choosing a preconditioner M to use when solving a linear system Ax = b with iterative methods is often tedious and most methods remain ad hoc. This thesis presents a randomized algorithm to make this chore less painful through the use of randomized algorithms for estimating traces. In particular, we show that the preconditioner stability ||I - M^{-1}A||_F, known to forecast preconditioner quality, can be computed in the time it takes to run a constant number of iterations of conjugate gradients through the use of sketching methods. This is in spite of folklore which suggests the quantity is impractical to compute, and a proof we give that ensures the quantity could not possibly be approximated in a useful amount of time by a deterministic algorithm. Using our estimator, we provide a method which can provably select a quality preconditioner among n candidates using floating point operations commensurate with running about n log(n) steps of the conjugate gradients algorithm. In the absence of such a preconditioner among the candidates, our method can advise the practitioner to use no preconditioner at all. The algorithm is extremely easy to implement and trivially parallelizable, and along the way we provide theoretical improvements to the literature on trace estimation. In empirical experiments, we show the selection method can be quite helpful. For example, it allows us to create what is, to the best of our knowledge, the first preconditioning method for kernel regression which never uses more iterations than the non-preconditioned analog in standard settings.
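The Frobenius norm in question can be estimated with random probe vectors, since E||Xz||^2 = ||X||_F^2 whenever E[zz^T] = I. The sketch below illustrates that principle with Gaussian probes and an assumed Jacobi preconditioner; it does not reproduce the thesis's sketching estimator or its guarantees.

```python
# Randomized estimate of the preconditioner stability ||I - M^{-1}A||_F:
# for probes z with E[z z^T] = I we have E||(I - M^{-1}A) z||^2 = ||I - M^{-1}A||_F^2.
import numpy as np

rng = np.random.default_rng(0)
n, num_probes = 200, 30
A = np.diag(rng.uniform(1, 10, n)) + 0.05 * rng.standard_normal((n, n))
M_inv = np.diag(1.0 / np.diag(A))          # Jacobi preconditioner (assumed choice)

samples = []
for _ in range(num_probes):
    z = rng.standard_normal(n)
    r = z - M_inv @ (A @ z)                # (I - M^{-1}A) z without forming the matrix
    samples.append(r @ r)
estimate = np.sqrt(np.mean(samples))
exact = np.linalg.norm(np.eye(n) - M_inv @ A, "fro")
print(estimate, exact)
```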
APA, Harvard, Vancouver, ISO, and other styles
36

Frazier, William. "Application of Symplectic Integration on a Dynamical System." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3213.

Full text
Abstract:
Molecular Dynamics (MD) is the numerical simulation of a large system of interacting molecules, and one of the key components of an MD simulation is the numerical estimation of the solutions to a system of nonlinear differential equations. Such systems are very sensitive to discretization and round-off error, and correspondingly, standard techniques such as Runge-Kutta methods can lead to poor results. However, MD systems are conservative, which means that we can use Hamiltonian mechanics and symplectic transformations (also known as canonical transformations) in analyzing and approximating solutions. This is standard in MD applications, leading to numerical techniques known as symplectic integrators, and often these techniques are developed for well-understood Hamiltonian systems such as Hill's lunar equation. In this thesis, we explore how well symplectic techniques developed for well-understood systems (specifically, Hill's lunar equation) address discretization errors in MD systems for which standard methods fail for one or more reasons.
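The prototypical symplectic integrator for a separable Hamiltonian H(q, p) = p^2/2 + V(q) is the Störmer-Verlet (leapfrog) scheme. The following harmonic-oscillator sketch only illustrates the idea; it is not one of the MD systems or the Hill's-equation-based techniques studied in the thesis.

```python
# Stormer-Verlet (leapfrog) step for H(q, p) = p^2/2 + V(q); the harmonic
# potential below is only an illustration, not an MD force field.
import numpy as np

def leapfrog(q, p, force, dt, steps):
    for _ in range(steps):
        p = p + 0.5 * dt * force(q)   # half kick
        q = q + dt * p                # drift (unit mass)
        p = p + 0.5 * dt * force(q)   # half kick
    return q, p

force = lambda q: -q                  # V(q) = q^2 / 2
q, p = 1.0, 0.0
q, p = leapfrog(q, p, force, dt=0.1, steps=1000)
print(0.5 * p**2 + 0.5 * q**2)        # energy stays near 0.5 over long times
```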
APA, Harvard, Vancouver, ISO, and other styles
37

Sharify, Meisam. "Algorithmes de mise à l'échelle et méthodes tropicales en analyse numérique matricielle." Phd thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00643836.

Full text
Abstract:
Tropical algebra can be considered a relatively new field in mathematics. It appears in several domains such as optimization, synchronization of production and transportation, discrete event systems, optimal control, operations research, etc. The first part of this manuscript is devoted to the study of applications of tropical algebra to numerical matrix analysis. We first consider the classical problem of estimating the roots of a univariate polynomial. We prove several new bounds on the absolute values of the roots of a polynomial by exploiting tropical methods. These results are particularly useful when considering polynomials whose coefficients have different orders of magnitude. We then examine the problem of computing the eigenvalues of a matrix polynomial. Here, we introduce a general scaling technique, based on tropical algebra, which applies in particular to the companion form. This scaling relies on the construction of an auxiliary tropical polynomial function that depends only on the norms of the matrices. The roots (the points of non-differentiability) of this tropical polynomial provide an a priori estimate of the absolute values of the eigenvalues. This is justified in particular by a new result showing that, under certain conditioning assumptions, there is a group of eigenvalues whose norms are bounded, the order of magnitude of these bounds being given by the largest root of the auxiliary tropical polynomial. A similar result holds for a group of small eigenvalues. We show experimentally that this scaling improves numerical stability, in particular in situations where the data have different orders of magnitude. We also study the problem of computing the tropical eigenvalues (the points of non-differentiability of the characteristic polynomial) of a tropical matrix polynomial. From a combinatorial point of view, this problem is equivalent to finding a coupling function: the value of a maximum weight matching in a bipartite graph whose arcs are weighted by convex, piecewise linear functions. We have developed an algorithm that computes these tropical eigenvalues in polynomial time. In the second part of this thesis, we are interested in solving very large optimal assignment problems, for which classical sequential algorithms are not efficient. We propose a new approach that exploits the connection between the optimal assignment problem and the entropy maximization problem. This approach leads to a preprocessing algorithm for the optimal assignment problem based on an iterative method that eliminates the entries not belonging to an optimal assignment. We consider two iterative variants of the preprocessing algorithm, one using the Sinkhorn method and the other using Newton's method. This preprocessing algorithm reduces the initial problem to a much smaller problem in terms of memory requirements. We also introduce a new iterative method based on a modification of the Sinkhorn algorithm, in which a deformation parameter is slowly increased. We prove that this iterative method (the deformed Sinkhorn iteration) converges to a matrix whose nonzero entries are exactly those belonging to the optimal permutations. An estimate of the convergence rate is also presented.
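The deformed Sinkhorn iteration mentioned at the end of the abstract alternately normalizes the rows and columns of an entrywise power of the weight matrix. A minimal sketch follows; the matrix and the deformation schedule are illustrative, and none of the convergence analysis from the thesis is reproduced.

```python
# Deformed Sinkhorn sketch: scale rows/columns of A**beta (entrywise power)
# toward doubly stochastic form; as beta grows, mass concentrates on entries
# of optimal assignments. The schedule below is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (6, 6))

def sinkhorn(B, iters=500):
    for _ in range(iters):
        B = B / B.sum(axis=1, keepdims=True)   # row normalization
        B = B / B.sum(axis=0, keepdims=True)   # column normalization
    return B

for beta in (1.0, 10.0, 50.0):
    S = sinkhorn(A ** beta)
    print(beta)
    print(np.round(S, 2))
```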
APA, Harvard, Vancouver, ISO, and other styles
38

Pearson, John W. "Fast iterative solvers for PDE-constrained optimization problems." Thesis, University of Oxford, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.581405.

Full text
Abstract:
In this thesis, we develop preconditioned iterative methods for the solution of matrix systems arising from PDE-constrained optimization problems. In order to do this, we exploit saddle point theory, as this is the form of the matrix systems we wish to solve. We utilize well-known results on saddle point systems to motivate preconditioners based on effective approximations of the (1,1)-block and Schur complement of the matrices involved. These preconditioners are used in conjunction with suitable iterative solvers, which include MINRES, non-standard Conjugate Gradients, GMRES and BiCG. The solvers we use are selected based on the particular problem and preconditioning strategy employed. We consider the numerical solution of a range of PDE-constrained optimization problems, namely the distributed control, Neumann boundary control and subdomain control of Poisson's equation, convection-diffusion control, Stokes and Navier-Stokes control, the optimal control of the heat equation, and the optimal control of reaction-diffusion problems arising in chemical processes. Each of these problems has a special structure which we make use of when developing our preconditioners, and specific techniques and approximations are required for each problem. In each case, we motivate and derive our preconditioners, obtain eigenvalue bounds for the preconditioners where relevant, and demonstrate the effectiveness of our strategies through numerical experiments. The goal throughout this work is for our iterative solvers to be feasible and reliable, but also robust with respect to the parameters involved in the problems we consider.
APA, Harvard, Vancouver, ISO, and other styles
39

Ishigami, Hiroyuki. "Studies on Parallel Solvers Based on Bisection and Inverse Iterationfor Subsets of Eigenpairs and Singular Triplets." 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/215685.

Full text
Abstract:
Chapter 5 (pages 31-40 of the main text) and the paper it is based on are copyrighted by the IEEE; in accordance with IEEE policy, the source is cited on page 79 and a copyright notice is attached. All figures on pages 39 and 40 are copyrighted by the IEEE and therefore carry a copyright notice.
Kyoto University (京都大学)
0048
Doctoral degree, course-based (new system)
Doctor of Informatics
Degree No. 19858 (甲第19858号)
情博第609号
新制||情||106(附属図書館)
32894
Kyoto University, Graduate School of Informatics, Department of Applied Mathematics and Physics
(Chief examiner) Professor Yoshimasa Nakamura, Professor Ken Umeno, Professor Hiroshi Nakashima
Fulfills Article 4, Paragraph 1 of the Degree Regulations
APA, Harvard, Vancouver, ISO, and other styles
40

Dinckal, Cigdem. "Decomposition Of Elastic Constant Tensor Into Orthogonal Parts." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612226/index.pdf.

Full text
Abstract:
All procedures in the literature for decomposing the symmetric second rank (stress) tensor and the symmetric fourth rank (elastic constant) tensor, which have many engineering and scientific applications for anisotropic materials, are elaborated and compared. The decomposition methods for symmetric second rank tensors are the orthonormal tensor basis method, the complex variable representation, and the spectral method. For the symmetric fourth rank (elastic constant) tensor, there are four main decomposition methods, namely orthonormal tensor basis, irreducible, harmonic, and spectral decomposition. These are applied to anisotropic materials possessing various symmetry classes, namely isotropic, cubic, transversely isotropic, tetragonal, trigonal and orthorhombic. For isotropic materials, an expression for the elastic constant tensor different from the traditionally known form is given. Some misprints found in the literature are corrected. For comparison purposes, numerical examples of each decomposition process are presented for materials possessing different symmetry classes. Some applications of these decomposition methods are given. In addition, norm and norm ratio concepts are introduced to measure and compare the degree of anisotropy for various materials with the same or different symmetries. For these materials, norms and norm ratios are calculated. It is suggested that the norm of a tensor may be used as a criterion for comparing the overall effect of the properties of anisotropic materials, and that the norm ratios may be used as a criterion to represent the degree of anisotropy of the properties of materials. Finally, all methods are compared in order to determine similarities and differences between them. As a result of this comparison, it is proposed that the spectral method is a non-linear decomposition method which yields non-linear orthogonal decomposed parts. For symmetric second rank and fourth rank tensors, this is a significant innovation in decomposition procedures in the literature.
APA, Harvard, Vancouver, ISO, and other styles
41

Lacoursière, Claude. "Ghosts and machines : regularized variational methods for interactive simulations of multibodies with dry frictional contacts." Doctoral thesis, Umeå University, Computing Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1143.

Full text
Abstract:

A time-discrete formulation of the variational principle of mechanics is used to provide a consistent theoretical framework for the construction and analysis of low order integration methods. These are applied to mechanical systems subject to mixed constraints and dry frictional contacts and impacts---machines. The framework includes physics motivated constraint regularization and stabilization schemes. This is done by adding potential energy and Rayleigh dissipation terms in the Lagrangian formulation used throughout. These terms explicitly depend on the value of the Lagrange multipliers enforcing constraints. Having finite energy, the multipliers are thus massless ghost particles. The main numerical stepping method produced with the framework is called SPOOK.

Variational integrators preserve physical invariants globally, exactly in some cases, approximately but within fixed global bounds for others. This makes it possible to produce realistic physical trajectories even with the low order methods. These are needed in the solution of nonsmooth problems such as dry frictional contacts and, in addition, they are computationally inexpensive. The combination of strong stability, low order, and the global preservation of invariants allows for large integration time steps without losing accuracy on the important and visible physical quantities. SPOOK is thus well-suited for interactive simulations, such as those commonly used in virtual environment applications, because it is fast, stable, and faithful to the physics.

New results include a stable discretization of highly oscillatory terms of constraint regularization; a linearly stable constraint stabilization scheme based on ghost potential and Rayleigh dissipation terms; a single-step, strictly dissipative, approximate impact model; a quasi-linear complementarity formulation of dry friction that is isotropic and solvable for any nonnegative value of friction coefficients; an analysis of a splitting scheme to solve frictional contact complementarity problems; a stable, quaternion-based rigid body stepping scheme and a stable linear approximation thereof. SPOOK includes all these elements. It is linearly implicit and linearly stable, it requires the solution of either one linear system of equations of one mixed linear complementarity problem per regular time step, and two of the same when an impact condition is detected. The changes in energy caused by constraints, impacts, and dry friction, are all shown to be strictly dissipative in comparison with the free system. Since all regularization and stabilization parameters are introduced in the physics, they map directly onto physical properties and thus allow modeling of a variety of phenomena, such as constraint compliance, for instance.

Tutorial material is included for continuous and discrete-time analytic mechanics, quaternion algebra, complementarity problems, rigid body dynamics, constraint kinematics, and special topics in numerical linear algebra needed in the solution of the stepping equations of SPOOK.

The qualitative and quantitative aspects of SPOOK are demonstrated by comparison with a variety of standard techniques on well known test cases which are analyzed in details. SPOOK compares favorably for all these examples. In particular, it handles ill-posed and degenerate problems seamlessly and systematically. An implementation suitable for large scale performance and accuracy testing is left for future work.

APA, Harvard, Vancouver, ISO, and other styles
42

Shen, Chong. "Topic Analysis of Tweets on the European Refugee Crisis Using Non-negative Matrix Factorization." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1388.

Full text
Abstract:
The ongoing European Refugee Crisis has been one of the most popular trending topics on Twitter for the past 8 months. This paper applies topic modeling to large collections of tweets to discover the hidden patterns within these social media discussions. In particular, we perform topic analysis by solving Non-negative Matrix Factorization (NMF) as an Inexact Alternating Least Squares problem. We accelerate the computation using techniques including tweet sampling and augmented NMF, compare NMF results with different ranks, and visualize the outputs through topic representation and frequency plots. We observe that supportive sentiments maintained a strong presence while negative sentiments such as safety concerns emerged over time.
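As an illustration of the pipeline described above, the following sketch runs NMF on a tiny TF-IDF matrix using scikit-learn's solver in place of the Inexact Alternating Least Squares formulation used in the paper; the toy corpus is invented for the example.

```python
# NMF topic sketch on a tiny TF-IDF matrix; scikit-learn's solver stands in
# for the inexact alternating least squares formulation used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

tweets = [
    "refugees welcome at the border",
    "border safety concerns grow",
    "volunteers welcome refugees with food",
    "concerns over border security and safety",
]
vec = TfidfVectorizer()
X = vec.fit_transform(tweets)               # term-document (here document-term) matrix
model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(X)                  # document-topic weights
H = model.components_                       # topic-term weights
terms = vec.get_feature_names_out()
for k, topic in enumerate(H):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}:", top)
```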
APA, Harvard, Vancouver, ISO, and other styles
43

Tisseur, Françoise. "Méthodes numériques pour le calcul d'éléments spectraux : étude de la précision, la stabilité et la parallélisation." Saint-Etienne, 1997. http://www.theses.fr/1997STET4006.

Full text
Abstract:
This thesis consists of two parts. The first deals with the QR method for computing the eigenvalues of general matrices of moderate size. The original contributions on this subject are (a) a rigorous proof of its backward stability, and (b) a new stopping criterion justified by a mathematical analysis. The second part deals with the method of Yau and Lu for computing the eigenvalues of large real symmetric matrices. The contributions of this work on this subject are (a) an improved mathematical understanding of the method, (b) its numerical validation, and (c) a proposed parallelization.
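In its simplest, unshifted form the QR method repeatedly factors A_k = Q_k R_k and sets A_{k+1} = R_k Q_k, a similarity transformation that drives the iterates toward triangular form; practical implementations add Hessenberg reduction, shifts, and deflation, which is where the stopping criteria studied in the thesis enter. A minimal sketch of the unshifted iteration (illustrative only):

```python
# Unshifted QR iteration: A_{k+1} = R_k Q_k is similar to A_k, and for a
# symmetric matrix with well-separated eigenvalues the diagonal converges
# to the spectrum. Practical codes add Hessenberg reduction, shifts, deflation.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A @ A.T                              # symmetric -> real spectrum

Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q
print(np.sort(np.diag(Ak)))
print(np.sort(np.linalg.eigvalsh(A)))    # agreement check
```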
APA, Harvard, Vancouver, ISO, and other styles
44

Srđan, Milićević. "Algorithms for computing the optimal Geršgorin-type localizations." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2020. https://www.cris.uns.ac.rs/record.jsf?recordId=114425&source=NDLTD&language=en.

Full text
Abstract:
There are numerous ways to localize eigenvalues. One of the best known results is that the spectrum of a given matrix A ∈ C^{n,n} is a subset of a union of discs centered at the diagonal elements, whose radii equal the sum of the absolute values of the off-diagonal elements of the corresponding row of the matrix. This result (Geršgorin's theorem, 1931) is one of the most important and elegant ways of localizing eigenvalues ([63]). Among all Geršgorin-type sets, the minimal Geršgorin set gives the sharpest and most precise localization of the spectrum ([39]). In this thesis, new algorithms for computing an efficient and accurate approximation of the minimal Geršgorin set are presented.
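Geršgorin's theorem itself takes one line to apply: each disc is centered at a diagonal entry a_ii with radius the sum of the moduli of the off-diagonal entries of row i. The sketch below computes these discs directly; the minimal Geršgorin set studied in the thesis is obtained by further optimizing over diagonal similarity scalings, which the sketch does not attempt.

```python
# Gershgorin discs: centers a_ii, radii sum_{j != i} |a_ij|; the spectrum
# lies in their union. (The minimal Gershgorin set additionally optimizes
# over diagonal similarity scalings, which is not attempted here.)
import numpy as np

A = np.array([[10.0, 1.0, 0.5],
              [0.2, 5.0, 0.3],
              [0.1, 0.4, 1.0]])
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
for c, r in zip(centers, radii):
    print(f"disc: center {c}, radius {r}")
print("eigenvalues:", np.linalg.eigvals(A))
```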
APA, Harvard, Vancouver, ISO, and other styles
45

Savas, Berkant. "Algorithms in data mining using matrix and tensor methods." Doctoral thesis, Linköpings universitet, Beräkningsvetenskap, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11597.

Full text
Abstract:
In many fields of science, engineering, and economics large amounts of data are stored and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis we discuss algorithms for the reduced rank regression problem and algorithms for the computation of the best multilinear rank approximation of tensors. The first two papers deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically the problem is \[ \min_{\mathrm{rank}(X) = k} \det\bigl[(B - X A)(B - X A)^T\bigr], \] where $A$ and $B$ are given matrices and we want to find $X$ under a certain rank condition that minimizes the determinant. This problem is not properly stated since it involves implicit assumptions on $A$ and $B$ so that $(B - X A)(B - X A)^T$ is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix. The volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals causing a singular objective matrix. Classification problems occur in many applications. The task is to determine the label or class of an unknown object. The third paper concerns the classification of handwritten digits in the context of tensors or multidimensional data arrays. Tensor and multilinear algebra is an area that attracts more and more attention because of the multidimensional structure of the collected data in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm performs a data reduction of 98-99% using HOSVD prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces for the different classes. An unknown digit is expressed as a linear combination of the basis vectors. The resulting algorithm achieves a 5% classification error with a fairly low amount of computation. The remaining two papers discuss computational methods for the best multilinear rank approximation problem \[ \min_{\mathcal{B}} \| \mathcal{A} - \mathcal{B}\|, \] where $\mathcal{A}$ is a given tensor and we seek the best low multilinear rank approximation tensor $\mathcal{B}$. This is a generalization of the best low rank matrix approximation problem. It is well known that for matrices the solution is given by truncating the singular values in the singular value decomposition (SVD) of the matrix. But for tensors in general the truncated HOSVD does not give an optimal approximation. For example, a third order tensor $\mathcal{B} \in \mathbb{R}^{I \times J \times K}$ with $\mathrm{rank}(\mathcal{B}) = (r_1,r_2,r_3)$ can be written as the product \[ \mathcal{B} = (X, Y, Z) \cdot \mathcal{C}, \qquad b_{ijk}=\sum_{\lambda,\mu,\nu} x_{i\lambda} y_{j\mu} z_{k\nu} c_{\lambda\mu\nu}, \] where $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ and $X \in \mathbb{R}^{I \times r_1}$, $Y \in \mathbb{R}^{J \times r_2}$, and $Z \in \mathbb{R}^{K \times r_3}$ are matrices of full column rank. Since it is no restriction to assume that $X$, $Y$, and $Z$ have orthonormal columns, and due to these constraints, the approximation problem can be considered as a nonlinear optimization problem defined on a product of Grassmann manifolds.
We introduce novel techniques for multilinear algebraic manipulations that enable both theoretical analysis and algorithmic implementation. These techniques are used to solve the approximation problem using Newton and quasi-Newton methods specifically adapted to operate on products of Grassmann manifolds. The presented algorithms are suited for small, large and sparse problems and, when applied to difficult problems, they clearly outperform alternating least squares methods, which are standard in the field.
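A common starting point for the best multilinear rank approximation problem is the truncated HOSVD, whose factor matrices are the leading left singular vectors of each unfolding. The sketch below computes it for a random third-order tensor; this is the generally non-optimal initial guess that the thesis's Grassmann-based Newton and quasi-Newton methods then refine.

```python
# Truncated HOSVD of a third-order tensor: factor matrices are the leading
# left singular vectors of each unfolding; the core is the projected tensor.
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((10, 12, 14))
ranks = (3, 4, 5)

factors = []
for mode, r in enumerate(ranks):
    unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
    factors.append(U[:, :r])

core = np.einsum("ijk,ia,jb,kc->abc", T, *factors)     # multilinear projection
approx = np.einsum("abc,ia,jb,kc->ijk", core, *factors)
print(np.linalg.norm(T - approx) / np.linalg.norm(T))   # relative error
```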
APA, Harvard, Vancouver, ISO, and other styles
46

Gittens, Alex A. "Topics in Randomized Numerical Linear Algebra." Thesis, 2013. https://thesis.library.caltech.edu/7880/13/ch1.pdf.

Full text
Abstract:

This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for positive-semidefinite (PSD) matrices.

Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.

Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.

The last class of algorithms considered are sketching algorithms for symmetric positive-semidefinite (SPSD) matrices. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.

In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
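A minimal example in the spirit of the second class of algorithms is the randomized range finder with a Gaussian test matrix; the thesis analyzes the variant that replaces the Gaussian matrix with randomized unitary transforms, which lowers the cost but complicates the analysis.

```python
# Randomized low-rank approximation: sample the range with a Gaussian test
# matrix, orthonormalize, and project. Illustrative sketch only; the thesis
# studies the randomized-unitary-transform variant.
import numpy as np

rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((500, 50)))
V0, _ = np.linalg.qr(rng.standard_normal((300, 50)))
A = (U0 * (2.0 ** -np.arange(50))) @ V0.T   # matrix with decaying singular values
k, oversample = 20, 10

Omega = rng.standard_normal((A.shape[1], k + oversample))
Q, _ = np.linalg.qr(A @ Omega)              # orthonormal basis for the sampled range
B = Q.T @ A                                 # small projected matrix
U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ U_small[:, :k]                      # rank-k factors U, s, Vt
approx = U @ np.diag(s[:k]) @ Vt[:k]
print(np.linalg.norm(A - approx) / np.linalg.norm(A))
```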

APA, Harvard, Vancouver, ISO, and other styles
47

"High performance algorithms for numerical linear algebra." Thesis, 2003. http://hdl.handle.net/2237/6641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Yamamoto, Yusaku. "High performance algorithms for numerical linear algebra." Thesis, 2003. http://hdl.handle.net/2237/6641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

(9179300), Evgenia-Maria Kontopoulou. "RANDOMIZED NUMERICAL LINEAR ALGEBRA APPROACHES FOR APPROXIMATING MATRIX FUNCTIONS." Thesis, 2020.

Find full text
Abstract:

This work explores how randomization can be exploited to deliver sophisticated algorithms with provable bounds for: (i) the approximation of matrix functions, such as the log-determinant and the von Neumann entropy; and (ii) the low-rank approximation of matrices. Our algorithms are inspired by recent advances in Randomized Numerical Linear Algebra (RandNLA), an interdisciplinary research area that exploits randomization as a computational resource to develop improved algorithms for large-scale linear algebra problems. The main goal of this work is to encourage the practical use of RandNLA approaches to solve Big Data bottlenecks at industrial level. Our extensive evaluation tests are complemented by a thorough theoretical analysis that proves the accuracy of the proposed algorithms and highlights their scalability as the volume of data increases. Finally, the low computational time and memory consumption, combined with simple implementation schemes that can easily be extended in parallel and distributed environments, render our algorithms suitable for use in the development of highly efficient real-world software.
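As a flavor of the log-determinant problem mentioned above, the sketch below estimates log det(A) = tr(log A) with Rademacher probes and a truncated Taylor series; it assumes the spectrum of A lies in (0, 2) and makes no attempt to reproduce the algorithms or error bounds developed in the thesis.

```python
# Randomized log-determinant sketch: log det(A) = tr(log A), estimated with
# Rademacher probes and the series log(A) = -sum_k (I - A)^k / k, valid when
# the eigenvalues of A lie in (0, 2). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(rng.uniform(0.2, 1.8, n)) @ Q.T   # SPD, spectrum in (0, 2)

num_probes, num_terms = 30, 60
estimate = 0.0
for _ in range(num_probes):
    z = rng.choice([-1.0, 1.0], size=n)           # Rademacher probe
    v, acc = z.copy(), 0.0
    for k in range(1, num_terms + 1):
        v = v - A @ v                             # v = (I - A)^k z
        acc -= (z @ v) / k                        # accumulates z^T log(A) z
    estimate += acc / num_probes

exact = np.linalg.slogdet(A)[1]
print(estimate, exact)
```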

APA, Harvard, Vancouver, ISO, and other styles
50

Luszczek, Piotr Rafal. "Performance improvements of common sparse numerical linear algebra computations." 2003. http://etd.utk.edu/2003/LuszczekPiotr.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Tennessee, Knoxville, 2003.
Title from title page screen (viewed Nov. 12, 2003). Thesis advisor: Dr. Jack J. Dongarra. Document formatted into pages (viii, 84 p. : ill.). Vita. Includes bibliographical references (p. 65-79).
APA, Harvard, Vancouver, ISO, and other styles