Click this link to see other types of publications on this topic: Sparse Vector Vector Multiplication.

Doctoral dissertations on the topic "Sparse Vector Vector Multiplication"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 doctoral dissertations for your research on the topic "Sparse Vector Vector Multiplication".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read its abstract online, whenever the relevant parameters are available in the metadata.

Browse doctoral dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Ashari, Arash. "Sparse Matrix-Vector Multiplication on GPU." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Ramachandran, Shridhar. "Incremental PageRank acceleration using Sparse Matrix-Sparse Vector Multiplication." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462894358.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Balasubramanian, Deepan Karthik. "Efficient Sparse Matrix Vector Multiplication for Structured Grid Representation." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339730490.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Mansour, Ahmad [Verfasser]. "Sparse Matrix-Vector Multiplication Based on Network-on-Chip / Ahmad Mansour." München : Verlag Dr. Hut, 2015. http://d-nb.info/1075409470/34.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Singh, Kunal. "High-Performance Sparse Matrix-Multi Vector Multiplication on Multi-Core Architecture." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524089757826551.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

El-Kurdi, Yousef M. "Sparse Matrix-Vector floating-point multiplication with FPGAs for finite element electromagnetics." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98958.

Full text source
Abstract:
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. Field Programmable Gate Arrays (FPGAs) have been shown to have higher peak floating-point performance than general purpose CPUs, and the trends are moving in favor of FPGAs. We present an architecture and implementation of an FPGA-based Sparse Matrix-Vector Multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. Our architecture exp
APA, Harvard, Vancouver, ISO, and other styles
7

Godwin, Jeswin Samuel. "High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Pantawongdecha, Payut. "Autotuning divide-and-conquer matrix-vector multiplication." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105968.

Full text source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 73-75). Divide and conquer is an important concept in computer science. It is used ubiquitously to simplify and speed up programs. However, it needs to be optimized, with respect to parameter settings for example, in orde
APA, Harvard, Vancouver, ISO, and other styles
9

Hopkins, T. M. "The design of a sparse vector processor." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/14094.

Full text source
Abstract:
This thesis describes the development of a new vector processor architecture capable of high efficiency when computing with very sparse vector and matrix data, of irregular structure. Two applications are identified as of particular importance: sparse Gaussian elimination, and Linear Programming, and the algorithmic steps involved in the solution of these problems are analysed. Existing techniques for sparse vector computation, which are only able to achieve a small fraction of the arithmetic performance commonly expected on dense matrix problems, are critically examined. A variety of new tech
APA, Harvard, Vancouver, ISO, and other styles
10

Belgin, Mehmet. "Structure-based Optimizations for Sparse Matrix-Vector Multiply." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30260.

Full text source
Abstract:
This dissertation introduces two novel techniques, OSF and PBR, to improve the performance of Sparse Matrix-vector Multiply (SMVM) kernels, which dominate the runtime of iterative solvers for systems of linear equations. SMVM computations that use sparse formats typically achieve only a small fraction of peak CPU speeds because they are memory bound due to their low flops:byte ratio, they access memory irregularly, and exhibit poor ILP due to inefficient pipelining. We particularly focus on improving the flops:byte ratio, which is the main limiter on performance, by exploiting recurring struct
APA, Harvard, Vancouver, ISO, and other styles
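As a rough illustration of the flops:byte limitation mentioned in the abstract above (a back-of-the-envelope sketch under common assumptions, not figures taken from the dissertation): a scalar CSR-style SpMV performs one multiply and one add per stored nonzero while reading at least one 8-byte double value and one 4-byte column index per nonzero, so even before counting vector and row-pointer traffic the arithmetic intensity is bounded by roughly

\[
\frac{\text{flops}}{\text{bytes}} \approx \frac{2}{8 + 4} \approx 0.17 \ \text{flops/byte},
\]

which is far below what is needed to keep the floating-point units of a modern CPU busy, so the kernel is memory bound.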
11

DeLorimier, Michael DeHon André. "Floating-point sparse matrix-vector multiply for FPGAs /." Diss., Pasadena, Calif. : California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-05132005-144347.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
12

Xia, Xiao-Lei. "Sparse learning for support vector machines with biological applications." Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527935.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
13

Mellet, Dieter Sydney-Charles. "An integrated continuous output linear power sensor using Hall effect vector multiplication." Diss., Pretoria : [s.n.], 2002. http://upetd.up.ac.za/thesis/available/etd-09012005-120807.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
14

Chen, Jihong. "Sparse Modeling in Classification, Compression and Detection." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5051.

Full text source
Abstract:
The principal focus of this thesis is the exploration of sparse structures in a variety of statistical modelling problems. While more comprehensive models can be useful to solve a larger number of problems, its calculation may be ill-posed in most practical instances because of the sparsity of informative features in the data. If this sparse structure can be exploited, the models can often be solved very efficiently. The thesis is composed of four projects. Firstly, feature sparsity is incorporated to improve the performance of support vector machines when there are a lot of noise features pr
APA, Harvard, Vancouver, ISO, and other styles
15

Muradov, Feruz. "Development, Implementation, Optimization and Performance Analysis of Matrix-Vector Multiplication on Eight-Core Digital Signal Processor." Thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-131289.

Full text source
Abstract:
This thesis work aims at implementing the sparse matrix vector multiplication on an eight-core Digital Signal Processor (DSP) and giving insights on how to optimize matrix multiplication on the DSP to achieve high energy efficiency. We used two sparse matrix formats: the Compressed Sparse Row (CSR) and the Block Compressed Sparse Row (BCSR) formats. We carried out loop unrolling optimization of the naive algorithm. In addition, we implemented the Register-blocked and the Cache-blocked sparse matrix vector multiplications to optimize the naive algorithm. The computational performance improvement with l
APA, Harvard, Vancouver, ISO, and other styles
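For reference, a minimal scalar CSR sparse matrix-vector product kernel in C is sketched below. It only illustrates the storage format named in the abstract above; the array names (row_ptr, col_idx, val) are illustrative conventions, not code from the thesis, which targets an eight-core DSP with register- and cache-blocking optimizations.

/* Minimal CSR SpMV sketch: y = A*x for an m-row sparse matrix.
 * row_ptr has m+1 entries; col_idx and val hold the nonzeros row by row.
 * Illustrative only -- not the DSP-optimized code from the thesis. */
void spmv_csr(int m, const int *row_ptr, const int *col_idx,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < m; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}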
16

Determe, Jean-François. "Greedy algorithms for multi-channel sparse recovery." Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/265808.

Full text source
Abstract:
During the last decade, research has shown compressive sensing (CS) to be a promising theoretical framework for reconstructing high-dimensional sparse signals. Leveraging a sparsity hypothesis, algorithms based on CS reconstruct signals on the basis of a limited set of (often random) measurements. Such algorithms require fewer measurements than conventional techniques to fully reconstruct a sparse signal, thereby saving time and hardware resources. This thesis addresses several challenges. The first is to theoretically understand how some parameters—such as noise variance—affect the performanc
APA, Harvard, Vancouver, ISO, and other styles
17

Murugandi, Iyyappa Thirunavukkarasu. "A New Representation of Structured Grids for Matrix-vector Operation and Optimization of Doitgen Kernel." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1276878729.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
18

Fourie, Christoff. "A one-class object-based system for sparse geographic feature identification." Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/6666.

Full text source
Abstract:
Thesis (MSc (Geography and Environmental Studies))--University of Stellenbosch, 2011. ENGLISH ABSTRACT: The automation of information extraction from earth observation imagery has become a field of active research. This is mainly due to the high volumes of remotely sensed data that remain unused and the possible benefits that the extracted information can provide to a wide range of interest groups. In this work an earth observation image processing system is presented and profiled that attempts to streamline the information extraction process, without degradation of the quality of the extra
APA, Harvard, Vancouver, ISO, and other styles
19

Tordsson, Pontus. "Compressed sensing for error correction on real-valued vectors." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-85499.

Full text source
Abstract:
Compressed sensing (CS) is a relatively new branch of mathematics with very interesting applications in signal processing, statistics and computer science. This thesis presents some theory of compressed sensing, which allows us to recover (high-dimensional) sparse vectors from (low-dimensional) compressed measurements by solving the L1-minimization problem. A possible application of CS to the problem of error correction is also presented, where sparse vectors are that of arbitrary noise. Successful sparse recovery by L1-minimization relies on certain properties of rectangular matrices. But the
APA, Harvard, Vancouver, ISO, and other styles
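The L1-minimization problem referred to in this abstract is, in its standard basis-pursuit form (a textbook formulation, not a result specific to this thesis): given a measurement matrix A in R^{m x n} with m < n and measurements y = Ax for a sparse x, one solves

\[
\hat{x} = \arg\min_{z \in \mathbb{R}^n} \|z\|_1 \quad \text{subject to} \quad Az = y,
\]

and under suitable conditions on A (for example the restricted isometry property) the minimizer coincides with the sparse vector x.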
20

Eibner, Tino, and Jens Markus Melenk. "Fast algorithms for setting up the stiffness matrix in hp-FEM: a comparison." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601623.

Full text source
Abstract:
We analyze and compare different techniques to set up the stiffness matrix in the hp-version of the finite element method. The emphasis is on methods for second order elliptic problems posed on meshes including triangular and tetrahedral elements. The polynomial degree may be variable. We present a generalization of the Spectral Galerkin Algorithm of [7], where the shape functions are adapted to the quadrature formula, to the case of triangles/tetrahedra. Additionally, we study on-the-fly matrix-vector multiplications, where merely the matrix-vector multiplication is realized with
APA, Harvard, Vancouver, ISO, and other styles
21

Flegar, Goran. "Sparse Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction." Doctoral thesis, Universitat Jaume I, 2019. http://hdl.handle.net/10803/667096.

Full text source
Abstract:
With the breakdown of Dennard scaling in the mid-2000s and the end of Moore's law on the horizon, the high performance computing community is turning its attention towards unconventional accelerator hardware to ensure the continued growth of computational capacity. This dissertation presents several contributions related to the iterative solution of sparse linear systems on the most widely used general purpose accelerator - the Graphics Processing Unit (GPU). Specifically, it accelerates the major building blocks of Krylov solvers, and describes their realization as part of a software library
APA, Harvard, Vancouver, ISO, and other styles
22

Chew, Sien Wei. "Recognising facial expressions with noisy data." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/63523/1/Sien_Wei_Chew_Thesis.pdf.

Full text source
Abstract:
Techniques to improve the automated analysis of natural and spontaneous facial expressions have been developed. The outcome of the research has applications in several fields including national security (eg: expression invariant face recognition); education (eg: affect aware interfaces); mental and physical health (eg: depression and pain recognition).
APA, Harvard, Vancouver, ISO, and other styles
23

Grah, Joana Sarah. "Mathematical imaging tools in cancer research : from mitosis analysis to sparse regularisation." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273243.

Full text source
Abstract:
This dissertation deals with customised image analysis tools in cancer research. In the field of biomedical sciences, mathematical imaging has become crucial in order to account for advancements in technical equipment and data storage by sound mathematical methods that can process and analyse imaging data in an automated way. This thesis contributes to the development of such mathematically sound imaging models in four ways: (i) automated cell segmentation and tracking. In cancer drug development, time-lapse light microscopy experiments are conducted for performance validation. The aim is to m
APA, Harvard, Vancouver, ISO, and other styles
24

Simeão, Sandra Fiorelli de Almeida Penteado. "Técnicas de esparsidade em sistemas estáticos de energia elétrica." Universidade de São Paulo, 2001. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-07032016-105658/.

Full text source
Abstract:
This work presents a broad survey of sparsity techniques related to static electric power systems. From a computational point of view, such techniques aim to increase the efficiency of solving the electrical network, targeting, beyond the solution itself, a reduction in memory, storage, and processing-time requirements. To that end, an extensive bibliographic review was compiled, presenting a historical perspective and a broad view of the theoretical development. The comparative tests carried out for 14-, 30-, 57- and 118-bus systems, regarding the implementation
APA, Harvard, Vancouver, ISO, and other styles
25

Cheng, Long. "Relaxor ferroelectrics for neuromorphic computing." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPAST073.

Full text source
Abstract:
To overcome the challenges posed by traditional von Neumann architectures, neuromorphic computing draws inspiration from brain science to create energy-efficient hardware adaptable to complex tasks. Memristors, although innovative, face problems such as Joule heating that hinder very-low-power neural computation. To address this, we propose a memcapacitor mechanism based on the electric-field-induced phase transition. Memcapacitors, which express signals as voltages, offer lower energy consumption than memristors (
APA, Harvard, Vancouver, ISO, and other styles
26

Herrmann, Felix J., Deli Wang, and Gilles Hennenfent. "Multiple prediction from incomplete data with the focused curvelet transform." Society of Exploration Geophysicists, 2007. http://hdl.handle.net/2429/563.

Full text source
Abstract:
Incomplete data represents a major challenge for a successful prediction and subsequent removal of multiples. In this paper, a new method will be presented that tackles this challenge in a two-step approach. During the first step, the recently developed curvelet-based recovery by sparsity-promoting inversion (CRSI) is applied to the data, followed by a prediction of the primaries. During the second high-resolution step, the estimated primaries are used to improve the frequency content of the recovered data by combining the focal transform, defined in terms of the estimated primaries
APA, Harvard, Vancouver, ISO, and other styles
27

LE, COZ SERGE. "La rhizomanie de la betterave sucriere : multiplication du virus et aspects agronomiques de la maladie." Paris 6, 1986. http://www.theses.fr/1986PA066644.

Full text source
Abstract:
The chloroplasts of rhizomania-infected sugar beet leaves are depleted in pigments, proteins, and polar lipids, and their photosynthetic activity is reduced. In virus-infected cells, the ultrastructural study shows an association of virus aggregates with the rough endoplasmic reticulum. A protocol for preparing suspensions enriched in isolated cystosori of Polymyxa betae is proposed. The fungus is found at all soil depths in four rhizomania-infected or healthy plots in the Pithiviers region. Rhizomania-infested soil remains infectious after a year and a half of
APA, Harvard, Vancouver, ISO, and other styles
28

Behúň, Kamil. "Příznaky z videa pro klasifikaci." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236367.

Full text source
Abstract:
This thesis compares hand-designed features with features learned by feature learning methods in video classification. The features learned by Principal Component Analysis whitening, Independent Subspace Analysis and Sparse Autoencoders were tested in a standard Bag of Visual Words classification paradigm replacing hand-designed features (e.g. SIFT, HOG, HOF). The classification performance was measured on the Human Motion DataBase and the YouTube Action Data Set. Learned features showed better performance than the hand-designed features. The combination of hand-designed features and learned features by
APA, Harvard, Vancouver, ISO, and other styles
29

Pradat, Yannick. "Retraite et risque financier." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED022/document.

Full text source
Abstract:
The first chapter examines the long-term statistical characteristics of financial returns in France and the USA. The properties of the various assets show that, over the long term, equities carry appreciably lower risk. Moreover, the mean-reversion properties of equities justify their use in a life-cycle strategy as the "default option" of retirement savings plans. Chapter two provides an explanation for the debate on the efficient markets hypothesis. The cause of the debate is often attributed to the small size of the samples
APA, Harvard, Vancouver, ISO, and other styles
30

Tsai, Sung-Han, and 蔡松翰. "Optimization for sparse matrix-vector multiplication based on NVIDIA CUDA platform." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/qw23p7.

Full text source
Abstract:
Master's thesis, National Changhua University of Education, Department of Computer Science and Information Engineering, 105. In recent years, large sparse matrices have often been used in fields such as science and engineering, typically when computing linear models. Using the ELLPACK format to store sparse matrices can reduce the matrix storage space, but if one row of the original sparse matrix contains too many nonzero elements, it still wastes too much memory space. Much research has focused on Sparse Matrix-Vector Multiplication (SpMV) with the ELLPACK format on the Graphics Processing Unit (GPU). Therefore, the purpose of our research is reducing the access space
APA, Harvard, Vancouver, ISO, and other styles
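To make the storage trade-off mentioned in the abstract concrete, a small C sketch of an ELLPACK-format SpMV follows (a plain CPU reference loop under assumed array names, not the CUDA kernel studied in the thesis). Every row is padded to the same width K, which is exactly why a single long row inflates the memory footprint of the whole matrix.

/* ELLPACK SpMV sketch: y = A*x, m rows, each row padded to K entries.
 * ell_val and ell_col are m*K arrays stored row-major here for clarity;
 * padded slots carry the value 0.0 and any valid column index.
 * Illustrative only -- not the GPU implementation from the thesis. */
void spmv_ell(int m, int K, const int *ell_col, const double *ell_val,
              const double *x, double *y)
{
    for (int i = 0; i < m; i++) {
        double sum = 0.0;
        for (int k = 0; k < K; k++)
            sum += ell_val[i * K + k] * x[ell_col[i * K + k]];
        y[i] = sum;
    }
}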
31

Jheng, Hong-Yuan, and 鄭弘元. "FPGA Acceleration of Sparse Matrix-Vector Multiplication Based on Network-on-Chip." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/y884tf.

Full text source
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Electronic Engineering, 99. The Sparse Matrix-Vector Multiplication (SMVM) is a pervasive operation in many scientific and engineering applications. Moreover, SMVM is a computationally intensive operation that dominates the performance in most iterative linear system solvers. There are some optimization challenges in computations involving SMVM due to its high memory access rate and irregular memory access pattern. In this thesis, a new design concept for SMVM in an FPGA by using Network-on-Chip (NoC) is presented. In traditional circuit design, on-chip communications have been designed with
APA, Harvard, Vancouver, ISO, and other styles
32

Hsu, Wei-chun, and 徐偉郡. "Sparse Matrix-Vector Multiplication: A Low Communication Cost Data Mapping-Based Architecture." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/09761233547687794389.

Full text source
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Electronic Engineering, 103. The performance of the sparse matrix-vector multiplication (SMVM) on a parallel system is strongly conditioned by the distribution of data among its components. Two costs arise as a result of the data mapping method used: arithmetic and communication. The communication cost of an algorithm often dominates the arithmetic cost, and the gap between these costs tends to increase. Therefore, finding a mapping method that reduces the communication cost is of high importance. On the other hand, the load distribution among the processing units must not be sacrificed a
APA, Harvard, Vancouver, ISO, and other styles
33

TSAI, NIAN-YING, and 蔡念穎. "On Job Allocation Strategies for Running Sparse Matrix-Vector Multiplication on GPUs." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/mpwmh4.

Full text source
Abstract:
Master's thesis, National Changhua University of Education, Department of Computer Science and Information Engineering, 105. In the era of big data, the Graphics Processing Unit (GPU) has been widely used to deal with many parallelization problems as the amount of data to be processed grows. Sparse matrix-vector multiplication is an important and basic operation in many fields, and there is still much room for improving its performance on the GPU. This paper is mainly about job allocation strategies for running sparse matrix-vector multiplication on GPUs. The LightSpMV algorithm is based on the standard CSR format. The CSR format is a common sparse matrix storage format which
APA, Harvard, Vancouver, ISO, and other styles
34

Ramesh, Chinthala. "Hardware-Software Co-Design Accelerators for Sparse BLAS." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4276.

Full text source
Abstract:
Sparse Basic Linear Algebra Subroutines (Sparse BLAS) is an important library. Sparse BLAS includes three levels of subroutines: Level 1, Level 2 and Level 3 Sparse BLAS routines. Level 1 Sparse BLAS routines do computations over a sparse vector and a sparse/dense vector. Level 2 deals with sparse matrix and vector operations. Level 3 deals with sparse matrix and dense matrix operations. The computations of these Sparse BLAS routines on General Purpose Processors (GPPs) not only suffer from less utilization of hardware resources but also take more compute time than the workload due to poor data loc
APA, Harvard, Vancouver, ISO, and other styles
35

Mirza, Salma. "Scalable, Memory-Intensive Scientific Computing on Field Programmable Gate Arrays." 2010. https://scholarworks.umass.edu/theses/404.

Full text source
Abstract:
Cache-based, general purpose CPUs perform at a small fraction of their maximum floating point performance when executing memory-intensive simulations, such as those required for many scientific computing problems. This is due to the memory bottleneck that is encountered with large arrays that must be stored in dynamic RAM. A system of FPGAs, with a large enough memory bandwidth, and clocked at only hundreds of MHz can outperform a CPU clocked at GHz in terms of floating point performance. An FPGA core designed for a target performance that does not unnecessarily exceed the memory imposed bottl
APA, Harvard, Vancouver, ISO, and other styles
36

Batjargal, Delgerdalai, and 白德格. "Parallel Matrix Transposition and Vector Multiplication Using OpenMP." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/50550637149183575586.

Full text source
Abstract:
Master's thesis, Providence University, Department of Computer Science and Information Engineering, 101. In this thesis, we propose two parallel algorithms for sparse matrix-transpose and vector multiplication using the CSR (Compressed Sparse Row) format. Even though this storage format is simple and hence easy to understand and maintain, one of its limitations is that it is difficult to parallelize, and the performance of a naïve parallel algorithm can be poor. But by preprocessing useful information that is hidden and indirect in its data structure while reading a matrix from a file, our matrix-transposition algorithm can then be performed in parallel using OpenMP. Ou
APA, Harvard, Vancouver, ISO, and other styles
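The parallelization difficulty noted in the abstract can be seen in a direct computation of y = A^T x from an unmodified CSR matrix: different rows scatter into the same output entries, so a naïve OpenMP loop needs atomic updates (or a preprocessing pass such as the one the thesis proposes). A minimal sketch, using assumed array names rather than the authors' code, compiled with -fopenmp:

#include <string.h>

/* y = A^T * x for an m-by-n CSR matrix, scatter form with atomics.
 * Illustrates why the naive parallel version is slow: concurrent rows
 * may update the same y[j], forcing atomic additions. */
void spmv_csr_transpose(int m, int n, const int *row_ptr, const int *col_idx,
                        const double *val, const double *x, double *y)
{
    memset(y, 0, (size_t)n * sizeof(double));
    #pragma omp parallel for
    for (int i = 0; i < m; i++) {
        double xi = x[i];
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
            #pragma omp atomic
            y[col_idx[k]] += val[k] * xi;
        }
    }
}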
37

deLorimier, Michael John. "Floating-Point Sparse Matrix-Vector Multiply for FPGAs." Thesis, 2005. https://thesis.library.caltech.edu/1776/1/smvm_thesis.pdf.

Full text source
Abstract:
Large, high density FPGAs with high local distributed memory bandwidth surpass the peak floating-point performance of high-end, general-purpose processors. Microprocessors do not deliver near their peak floating-point performance on efficient algorithms that use the Sparse Matrix-Vector Multiply (SMVM) kernel. In fact, microprocessors rarely achieve 33% of their peak floating-point performance when computing SMVM. We develop and analyze a scalable SMVM implementation on modern FPGAs and show that it can sustain high throughput, near peak, floating-point performance. Our implementation consi
APA, Harvard, Vancouver, ISO, and other styles
38

Girosi, Federico. "An Equivalence Between Sparse Approximation and Support Vector Machines." 1997. http://hdl.handle.net/1721.1/7289.

Full text source
Abstract:
In the first part of this paper we show a similarity between the principle of Structural Risk Minimization (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined in (Chen, Donoho and Saunders, 1995) and Olshausen and Field (1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation, which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of
APA, Harvard, Vancouver, ISO, and other styles
39

"Sparse learning under regularization framework." Thesis, 2011. http://library.cuhk.edu.hk/record=b6075111.

Full text source
Abstract:
Regularization is a dominant theme in machine learning and statistics due to its prominent ability in providing an intuitive and principled tool for learning from high-dimensional data. As large-scale learning applications become popular, developing efficient algorithms and parsimonious models become promising and necessary for these applications. Aiming at solving large-scale learning problems, this thesis tackles the key research problems ranging from feature selection to learning with unlabeled data and learning data similarity representation. More specifically, we focus on the problems in
APA, Harvard, Vancouver, ISO, and other styles
40

"Bayesian Framework for Sparse Vector Recovery and Parameter Bounds with Application to Compressive Sensing." Master's thesis, 2019. http://hdl.handle.net/2286/R.I.55639.

Full text source
Abstract:
A signal compressed using classical compression methods can be acquired using brute force (i.e. searching for non-zero entries component-wise). However, sparse solutions require combinatorial searches with high computational cost. In this thesis, instead, two Bayesian approaches are considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution and is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is, therefore, a different (i.e. mi
APA, Harvard, Vancouver, ISO, and other styles
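For context, the Bernoulli-Gaussian prior mentioned in the abstract is commonly written as follows (a standard textbook formulation, offered as an assumption about the model class rather than the thesis's exact notation): each component of the sparse vector x in R^n is

\[
x_i = s_i \, g_i, \qquad s_i \sim \mathrm{Bernoulli}(p), \qquad g_i \sim \mathcal{N}(0, \sigma^2),
\]

so x_i is exactly zero with probability 1 - p and Gaussian otherwise, which makes the expected sparsity level p n explicit.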
41

Miron, David John. "The parallel solution of sparse linear least squares problems." Phd thesis, 1998. http://hdl.handle.net/1885/145188.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
42

"Exploring the potential for accelerating sparse matrix-vector product on a Processing-in-Memory architecture." Thesis, 2009. http://hdl.handle.net/1911/61946.

Full text source
Abstract:
As the importance of memory access delays on performance has mushroomed over the past few decades, researchers have begun exploring Processing-in-Memory (PIM) technology, which offers higher memory bandwidth, lower memory latency, and lower power consumption. In this study, we investigate whether an emerging PIM design from Sandia National Laboratories can boost performance for sparse matrix-vector product (SMVP). While SMVP is in the best-case bandwidth-bound, factors related to matrix structure and representation also limit performance. We analyze SMVP both in the context of an AMD Opteron p
APA, Harvard, Vancouver, ISO, and other styles
43

Zein, Ahmed H. El. "Use of graphics processing units for sparse matrix-vector products in statistical machine learning applications." Master's thesis, 2009. http://hdl.handle.net/1885/148368.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
44

Prasanna, Dheeraj. "Structured Sparse Signal Recovery for mmWave Channel Estimation: Intra-vector Correlation and Modulo Compressed Sensing." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5215.

Full text source
Abstract:
This thesis contributes new theoretical results and recovery algorithms for the area of sparse signal recovery motivated by applications to the problem of channel estimation in mmWave communication systems. The presentation is in two parts. The first part focuses on the recovery of sparse vectors with correlated non-zero entries from their noisy low dimensional projections. Such structured sparse signals can be recovered using the technique of covariance matching. Here, we first estimate the covariance of the signal from the compressed measurements, and then use the obtained covariance matr
APA, Harvard, Vancouver, ISO, and other styles
45

Wannenwetsch, Katrin Ulrike. "Deterministic Sparse FFT Algorithms." Doctoral thesis, 2016. http://hdl.handle.net/11858/00-1735-0000-002B-7C10-0.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
46

Tiyyagura, Sunil Reddy [Verfasser]. "Efficient solution of sparse linear systems arising in engineering applications on vector hardware / vorgelegt von Sunil Reddy Tiyyagura." 2010. http://d-nb.info/1004991452/34.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
47

Gadekar, Ameet. "On Learning k-Parities and the Complexity of k-Vector-SUM." Thesis, 2016. http://etd.iisc.ac.in/handle/2005/3060.

Full text source
Abstract:
In this work, we study two problems: the first is one of the central problems in learning theory, learning sparse parities, and the other, k-Vector-SUM, is an extension of the notorious k-SUM problem. We first consider the problem of learning k-parities in the on-line mistake-bound model: given a hidden vector x ∈ {0,1}^n with |x| = k and a sequence of "questions" a_1, a_2, ... ∈ {0,1}^n, where the algorithm must reply to each question with ⟨a_i, x⟩ (mod 2), what is the best trade-off between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al
APA, Harvard, Vancouver, ISO, and other styles
48

Gadekar, Ameet. "On Learning k-Parities and the Complexity of k-Vector-SUM." Thesis, 2016. http://hdl.handle.net/2005/3060.

Full text source
Abstract:
In this work, we study two problems: the first is one of the central problems in learning theory, learning sparse parities, and the other, k-Vector-SUM, is an extension of the notorious k-SUM problem. We first consider the problem of learning k-parities in the on-line mistake-bound model: given a hidden vector x ∈ {0,1}^n with |x| = k and a sequence of "questions" a_1, a_2, ... ∈ {0,1}^n, where the algorithm must reply to each question with ⟨a_i, x⟩ (mod 2), what is the best trade-off between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al
APA, Harvard, Vancouver, ISO, and other styles
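The answers the learner must produce in the theses of entries 47-48 are parities of the hidden vector restricted to the queried coordinates; over GF(2) this is simply the parity of the bitwise AND. A tiny C sketch for illustration only, with the vectors packed into 64-bit words and using the GCC/Clang popcount builtin:

#include <stdint.h>

/* Parity <a, x> (mod 2) for bit-packed n-bit vectors a and x,
 * stored in ceil(n/64) 64-bit words. Illustrative sketch only. */
int parity_inner_product(const uint64_t *a, const uint64_t *x, int words)
{
    uint64_t acc = 0;
    for (int w = 0; w < words; w++)
        acc ^= a[w] & x[w];                 /* XOR-accumulate the AND of each word */
    return __builtin_popcountll(acc) & 1;   /* parity of the remaining set bits */
}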
49

Dai, Jingzhao. "Sparse Discrete Wavelet Decomposition and Filter Bank Techniques for Speech Recognition." Thesis, 2019.

Find the full text source
Abstract:
Speech recognition is widely applied to translation from speech to related text, voice driven commands, human machine interface and so on [1]-[8]. It has been increasingly proliferated to Human's lives in the modern age. To improve the accuracy of speech recognition, various algorithms such as artificial neural network, hidden Markov model and so on have been developed [1], [2]. In this thesis work, the tasks of speech recognition with various classifiers are investigated. The classifiers employed include the support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF
APA, Harvard, Vancouver, ISO, and other styles
50

Rossi, Stefano, and Piero Tortoli. "Development and validation of novel approaches for real-time ultrasound vector velocity measurements." Doctoral thesis, 2021. http://hdl.handle.net/2158/1239650.

Full text source
Abstract:
Ultrasound imaging techniques have become increasingly successful in the medical field as they provide relatively low cost and totally safe diagnosis. Doppler methods focus on blood flow for the diagnosis and follow-up of cardiovascular diseases. First Doppler methods only measured the axial component of the motion. More recently, advanced methods have solved this problem, by estimating two or even all three velocity components. In this context, high frame rate (HFR) imaging techniques, based on the transmission of plane waves (PW), lead to the reconstruction of 2-D and 3-D vector maps of bloo
APA, Harvard, Vancouver, ISO, and other styles