Doctoral dissertations on the topic "Sparse Vector Vector Multiplication"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the 50 best doctoral dissertations for your research on the topic "Sparse Vector Vector Multiplication".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, when the relevant details are available in the record's metadata.
Browse doctoral dissertations from many different fields of study and compile a bibliography that suits your needs.
Ashari, Arash. "Sparse Matrix-Vector Multiplication on GPU". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.
Ramachandran, Shridhar. "Incremental PageRank acceleration using Sparse Matrix-Sparse Vector Multiplication". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462894358.
Balasubramanian, Deepan Karthik. "Efficient Sparse Matrix Vector Multiplication for Structured Grid Representation". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339730490.
Mansour, Ahmad [Verfasser]. "Sparse Matrix-Vector Multiplication Based on Network-on-Chip / Ahmad Mansour". München : Verlag Dr. Hut, 2015. http://d-nb.info/1075409470/34.
Singh, Kunal. "High-Performance Sparse Matrix-Multi Vector Multiplication on Multi-Core Architecture". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524089757826551.
El-Kurdi, Yousef M. "Sparse Matrix-Vector floating-point multiplication with FPGAs for finite element electromagnetics". Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98958.
Godwin, Jeswin Samuel. "High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824.
Pantawongdecha, Payut. "Autotuning divide-and-conquer matrix-vector multiplication". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105968.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 73-75).
Divide and conquer is an important concept in computer science. It is used ubiquitously to simplify and speed up programs. However, it needs to be optimized, with respect to parameter settings for example, in order to achieve the best performance. The problem boils down to searching for the best implementation choice on a given set of requirements, such as which machine the program is running on. The goal of this thesis is to apply and evaluate the Ztune approach [14] on serial divide-and-conquer matrix-vector multiplication. We implemented Ztune to autotune serial divide-and-conquer matrix-vector multiplication on machines with different hardware configurations, and found that Ztune-optimized codes ran 1%-5% faster than the hand-optimized counterparts. We also compared Ztune-optimized results with other matrix-vector multiplication libraries including the Intel Math Kernel Library and OpenBLAS. Since the matrix-vector multiplication problem is a level 2 BLAS, it is not as computationally intensive as level 3 BLAS problems such as matrix-matrix multiplication and stencil computation. As a result, the measurement in matrix-vector multiplication is more prone to error from factors such as noise, cache alignment of the matrix, and cache states, which lead to wrong decision choices for Ztune. We explored multiple options to get more accurate measurements and demonstrated the techniques that remedied these issues. Lastly, we applied the Ztune approach to matrix-matrix multiplication, and we were able to achieve 2%-85% speedup compared to the hand-tuned code. This thesis represents joint work with Ekanathan Palamadai Natarajan.
by Payut Pantawongdecha.
M. Eng.
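The serial divide-and-conquer scheme that the Pantawongdecha abstract above autotunes can be pictured with a minimal sketch. The recursion below and its base-case cutoff `base` are illustrative assumptions (the cutoff is exactly the kind of parameter an autotuner such as Ztune would search over); this is not the thesis's own code.

```cpp
// Minimal sketch of serial divide-and-conquer matrix-vector multiplication
// for a dense, row-major n x n matrix. The block is split along its larger
// dimension until it is small enough to multiply with plain loops.
#include <cstddef>
#include <iostream>
#include <vector>

// Computes y[r0..r1) += A[r0..r1, c0..c1) * x[c0..c1).
void dc_matvec(const std::vector<double>& A, std::size_t n,
               const std::vector<double>& x, std::vector<double>& y,
               std::size_t r0, std::size_t r1, std::size_t c0, std::size_t c1,
               std::size_t base)
{
    const std::size_t rows = r1 - r0, cols = c1 - c0;
    if (rows <= base && cols <= base) {          // base case: plain loops
        for (std::size_t i = r0; i < r1; ++i)
            for (std::size_t j = c0; j < c1; ++j)
                y[i] += A[i * n + j] * x[j];
        return;
    }
    if (rows >= cols) {                          // split the row range
        const std::size_t rm = r0 + rows / 2;
        dc_matvec(A, n, x, y, r0, rm, c0, c1, base);
        dc_matvec(A, n, x, y, rm, r1, c0, c1, base);
    } else {                                     // split the column range
        const std::size_t cm = c0 + cols / 2;
        dc_matvec(A, n, x, y, r0, r1, c0, cm, base);
        dc_matvec(A, n, x, y, r0, r1, cm, c1, base);
    }
}

int main() {
    const std::size_t n = 512;
    std::vector<double> A(n * n, 1.0), x(n, 2.0), y(n, 0.0);
    dc_matvec(A, n, x, y, 0, n, 0, n, /*base=*/64);
    std::cout << "y[0] = " << y[0] << '\n';      // expect 1024
    return 0;
}
```

Halving the longer dimension keeps the working set shrinking toward cache-sized blocks, which is where the choice of `base` (here an arbitrary 64) starts to matter.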
Hopkins, T. M. "The design of a sparse vector processor". Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/14094.
Belgin, Mehmet. "Structure-based Optimizations for Sparse Matrix-Vector Multiply". Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30260.
Ph. D.
DeLorimier, Michael. "Floating-point sparse matrix-vector multiply for FPGAs". Diss., Pasadena, Calif. : California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-05132005-144347.
Xia, Xiao-Lei. "Sparse learning for support vector machines with biological applications". Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527935.
Mellet, Dieter Sydney-Charles. "An integrated continuous output linear power sensor using Hall effect vector multiplication". Diss., Pretoria : [s.n.], 2002. http://upetd.up.ac.za/thesis/available/etd-09012005-120807.
Chen, Jihong. "Sparse Modeling in Classification, Compression and Detection". Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5051.
Muradov, Feruz. "Development, Implementation, Optimization and Performance Analysis of Matrix-Vector Multiplication on Eight-Core Digital Signal Processor". Thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-131289.
Determe, Jean-François. "Greedy algorithms for multi-channel sparse recovery". Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/265808.
Doctorat en Sciences de l'ingénieur et technologie
Murugandi, Iyyappa Thirunavukkarasu. "A New Representation of Structured Grids for Matrix-vector Operation and Optimization of Doitgen Kernel". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1276878729.
Fourie, Christoff. "A one-class object-based system for sparse geographic feature identification". Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/6666.
Pełny tekst źródłaENGLISH ABSTRACT: The automation of information extraction from earth observation imagery has become a field of active research. This is mainly due to the high volumes of remotely sensed data that remain unused and the possible benefits that the extracted information can provide to a wide range of interest groups. In this work an earth observation image processing system is presented and profiled that attempts to streamline the information extraction process, without degradation of the quality of the extracted information, for geographic object anomaly detection. The proposed system, implemented as a software application, combines recent research in automating image segment generation and automatically finding statistical classifier parameters and attribute subsets using evolutionary inspired search algorithms. Exploratory research was conducted on the use of an edge metric as a fitness function to an evolutionary search heuristic to automate the generation of image segments for a region merging segmentation algorithm having six control parameters. The edge metric for such an application is compared with an area based metric. The use of attribute subset selection in conjunction with a free parameter tuner for a one class support vector machine (SVM) classifier, operating on high dimensional object based data, was also investigated. For common earth observation anomaly detection problems using typical segment attributes, such a combined free parameter tuning and attribute subset selection system provided superior statistically significant results compared to a free parameter tuning only process. In some extreme cases, due to the stochastic nature of the search algorithm employed, the free parameter only strategy provided slightly better results. The developed system was used in a case study to map a single class of interest on a 22.5 x 22.5km subset of a SPOT 5 image and is compared with a multiclass classification strategy. The developed system generated slightly better classification accuracies than the multiclass classifier and only required samples from the class of interest.
AFIKAANSE OPSOMMING: Die outomatisering van die verkryging van inligting vanaf aardwaarnemingsbeelde het in sy eie reg 'n navorsingsveld geword as gevolg van die groot volumes data wat nie benut word nie, asook na aanleiding van die moontlike bydrae wat inligting wat verkry word van hierdie beelde aan verskeie belangegroepe kan bied. In hierdie tesis word 'n aardwaarneming beeldverwerkingsstelsel bekend gestel en geëvalueer. Hierdie stelsel beoog om die verkryging van inligting van aardwaarnemingsbeelde te vergemaklik deur verbruikersinteraksie te minimaliseer, sonder om die kwaliteit van die resultate te beïnvloed. Die stelsel is ontwerp vir geografiese voorwerp anomalie opsporing en is as 'n sagteware program geïmplementeer. Die program kombineer onlangse navorsing in die gebruik van evolusionêre soek-algoritmes om outomaties goeie beeldsegmente te verkry en parameters te vind, sowel as om kenmerke vir 'n statistiese klassifikasie van beeld segmente te selekteer. Verkennende navorsing is gedoen op die benutting van 'n rand metriek as 'n passings funksie in 'n evolusionêre soek heuristiek om outomaties goeie parameters te vind vir 'n streeks kombinering beeld segmentasie algoritme met ses beheer parameters. Hierdie rand metriek word vergelyk met 'n area metriek vir so 'n toepassing. Die nut van atribuut substel seleksie in samewerking met 'n vrye parameter steller vir 'n een klas steun vektor masjien (SVM) klassifiseerder is ondersoek op hoë dimensionele objek georiënteerde data. Vir algemene aardwaarneming anomalie opsporings probleme met 'n tipiese segment kenmerk versameling, het so 'n stelsel beduidend beter resultate as 'n eksklusiewe vrye parameter stel stelsel gelewer in sommige uiterste gevalle. As gevolg van die stogastiese aard van die soek algoritme het die eksklusiewe vrye parameter stel strategie effens beter resultate gelewer. Die stelsel is getoets in 'n gevallestudie waar 'n enkele klas op 'n 22.5 x 22.5km substel van 'n SPOT 5 beeld geïdentifiseer word. Die voorgestelde stelsel, wat slegs monsters van die gekose klas gebruik het, het beter klassifikasie akkuraathede genereer as die multi klas klassifiseerder.
Tordsson, Pontus. "Compressed sensing for error correction on real-valued vectors". Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-85499.
Eibner, Tino, and Jens Markus Melenk. "Fast algorithms for setting up the stiffness matrix in hp-FEM: a comparison". Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601623.
Flegar, Goran. "Sparse Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction". Doctoral thesis, Universitat Jaume I, 2019. http://hdl.handle.net/10803/667096.
With the end of Dennard scaling and the approaching end of Moore's law, the high-performance computing community is turning to unconventional accelerator technologies to sustain the exponential growth of computing capability. This thesis contributes to the iterative solution of sparse linear systems on the most widespread accelerator: the graphics processor. Specifically, the work accelerates the fundamental building blocks of Krylov methods and describes their implementation as part of a library of reusable components. The first part of the work focuses on the sparse matrix-vector product and on load balancing in the presence of irregular sparsity patterns. The second part describes the design of high-performance preconditioners. Finally, the third part demonstrates the potential of adaptive-precision techniques for building preconditioners with a lower memory footprint and reliability comparable to their full-precision counterparts.
Chew, Sien Wei. "Recognising facial expressions with noisy data". Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/63523/1/Sien_Wei_Chew_Thesis.pdf.
Grah, Joana Sarah. "Mathematical imaging tools in cancer research : from mitosis analysis to sparse regularisation". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273243.
Simeão, Sandra Fiorelli de Almeida Penteado. "Técnicas de esparsidade em sistemas estáticos de energia elétrica". Universidade de São Paulo, 2001. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-07032016-105658/.
In this work, a broad survey of sparsity techniques related to static electric power systems was carried out. From a computational point of view, such techniques aim to increase the efficiency of solving the electric network, reducing memory requirements, storage, and processing time in addition to performing the solution itself. To that end, an extensive bibliographic review was compiled, providing historical context and a broad view of the theoretical development. Comparative tests on systems of 14, 30, 57, and 118 buses, covering implementations of three of the most widely used techniques, pointed to bi-factorisation as having the best average performance. For small systems, sparse symmetric Gaussian elimination showed the best results. This work provides conceptual and methodological support to technicians and researchers in the area.
LE, COZ SERGE. "La rhizomanie de la betterave sucriere : multiplication du virus et aspects agronomiques de la maladie". Paris 6, 1986. http://www.theses.fr/1986PA066644.
Herrmann, Felix J., Deli Wang, and Gilles Hennenfent. "Multiple prediction from incomplete data with the focused curvelet transform". Society of Exploration Geophysicists, 2007. http://hdl.handle.net/2429/563.
Behúň, Kamil. "Příznaky z videa pro klasifikaci". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236367.
Pradat, Yannick. "Retraite et risque financier". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED022/document.
Chapter one examines the long-run statistical characteristics of financial returns in France and the USA for selected assets. This study clearly shows that the returns' distributions diverge from the Gaussian law for long holding periods. Thereafter we analyze the consequences of the non-Gaussian nature of stock returns on default-option retirement plans. Chapter two provides a reasonable explanation to the strong debate on the Efficient Market Hypothesis. The cause of the debate is often attributed to small sample sizes in combination with statistical tests for mean reversion that lack power. In order to bypass this problem, we use the approach developed by Campbell and Viceira (2005), who set out a vector autoregressive (VAR) methodology to measure the mean reversion of asset returns. The third chapter evaluates the speed of convergence of stock prices. A convenient way to characterize the speed of mean reversion is the half-life. Comparing the stock indexes of four developed countries (US, UK, France and Japan) during the period 1950-2014, we establish significant mean reversion, with a half-life lying between 4.0 and 5.8 years. The final chapter provides some results from a model built in order to study the linked impacts of demography and economy on the French pension scheme. In order to reveal the risks that are contained in pension fund investment, we use a Trending Ornstein-Uhlenbeck process instead of the typical GBM for modeling stock returns. We find that funded scheme returns, net of management fees, are slightly lower than the PAYG internal rate of return.
Tsai, Sung-Han, and 蔡松翰. "Optimization for sparse matrix-vector multiplication based on NVIDIA CUDA platform". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/qw23p7.
國立彰化師範大學
資訊工程學系
105
In recent years, large sparse matrices have been widely used in fields such as science and engineering, typically when computing linear models. Storing a sparse matrix in the ELLPACK format can reduce its storage space, but if one row of the original matrix contains too many nonzero elements, a great deal of memory is still wasted. Much research has focused on Sparse Matrix-Vector Multiplication (SpMV) with the ELLPACK format on the Graphics Processing Unit (GPU). The purpose of our research is therefore to reduce the accessed data volume of a sparse matrix stored in the Compressed Sparse Row (CSR) format after applying the Reverse Cuthill-McKee (RCM) reordering algorithm, in order to accelerate SpMV on the GPU. Because SpMV has a low ratio of computation to data access, its performance is limited by memory bandwidth. Our proposal is based on the CSR format and has two aspects: (1) reduce cache misses to improve vector locality and raise performance, and (2) reduce the amount of matrix data accessed through index reduction to optimize performance.
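For reference, the CSR storage scheme and the kernel that the abstract above accelerates on the GPU look as follows in plain serial form. This is the textbook formulation only, not the thesis's optimized implementation, and the small example matrix is made up for illustration.

```cpp
// Textbook CSR sparse matrix-vector product y = A * x.
#include <iostream>
#include <vector>

struct Csr {
    int n;                      // number of rows
    std::vector<int> row_ptr;   // size n+1: start of each row in col/val
    std::vector<int> col;       // column index of each nonzero
    std::vector<double> val;    // value of each nonzero
};

std::vector<double> spmv(const Csr& A, const std::vector<double>& x) {
    std::vector<double> y(A.n, 0.0);
    for (int i = 0; i < A.n; ++i) {
        double sum = 0.0;
        for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
            sum += A.val[k] * x[A.col[k]];   // indirect, irregular access into x
        y[i] = sum;
    }
    return y;
}

int main() {
    // 3x3 example: [[4,0,1],[0,3,0],[2,0,5]]
    Csr A{3, {0, 2, 3, 5}, {0, 2, 1, 0, 2}, {4, 1, 3, 2, 5}};
    std::vector<double> x{1, 2, 3};
    for (double v : spmv(A, x)) std::cout << v << ' ';   // 7 6 17
    std::cout << '\n';
}
```

The indirect access `x[A.col[k]]` is what a bandwidth-reducing reordering such as RCM tries to make more cache-friendly by clustering the nonzeros near the diagonal.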
Jheng, Hong-Yuan, and 鄭弘元. "FPGA Acceleration of Sparse Matrix-Vector Multiplication Based on Network-on-Chip". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/y884tf.
國立臺灣科技大學
電子工程系
99
The Sparse Matrix-Vector Multiplication (SMVM) is a pervasive operation in many scientific and engineering applications. Moreover, SMVM is a computationally intensive operation that dominates the performance in most iterative linear system solvers. There are some optimization challenges in computations involving SMVM due to its high memory access rate and irregular memory access pattern. In this thesis, a new design concept for SMVM in an FPGA by using Network-on-Chip (NoC) is presented. In traditional circuit design, on-chip communications have been designed with dedicated point-to-point interconnections or shared buses. Therefore, regular data transfer is the major concern of many parallel implementations. However, when dealing with the SMVM operation, the required data transfers are usually dependent on the sparsity structure of the matrix and can be extremely irregular. Using an NoC architecture makes it possible to deal with an arbitrary structure of data transfers, i.e. with arbitrarily structured sparse matrices. In addition, the size of the pipelined SMVM calculator based on the NoC architecture can be customized to 2×2, 4×4, ..., p×p (p∈N) due to its high scalability and flexibility. The implementation is done in IEEE-754 single-precision floating point on the Xilinx Virtex-6 FPGA. The experimental results show that the proposed NoC-based implementation can achieve approximately 2.3-5.6× speedup over the MATLAB-based software implementation in Matrix Market benchmark applications.
Hsu, Wei-chun, and 徐偉郡. "Sparse Matrix-Vector Multiplication: A Low Communication Cost Data Mapping-Based Architecture". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/09761233547687794389.
國立臺灣科技大學
電子工程系
103
The performance of the sparse matrix-vector multiplication (SMVM) on a parallel system is strongly conditioned by the distribution of data among its components. Two costs arise from the data mapping method used: arithmetic and communication. The communication cost of an algorithm often dominates the arithmetic cost, and the gap between these costs tends to increase. Therefore, finding a mapping method that reduces the communication cost is of high importance. On the other hand, the load distribution among the processing units must not be sacrificed either. In this work, a data mapping method is proposed for SMVM on Network-on-Chip (NoC) which achieves a balanced workload and reduces the communication cost. Afterwards, an FPGA-based architecture is introduced which is designed to fit the proposed data mapping method. The experimental results show that the communication cost of the proposed design is 40% lower than that of the previous work.
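A minimal sketch of the load-balancing half of the problem described above: rows of a CSR matrix are split into contiguous blocks holding roughly equal numbers of nonzeros. This greedy partitioner is a generic baseline under assumed inputs, not the thesis's mapping method, which additionally minimizes NoC communication.

```cpp
#include <iostream>
#include <vector>

// Returns P+1 row offsets; processing element p owns rows [part[p], part[p+1]).
// row_ptr is the CSR row-pointer array, so row_ptr[n] is the total nonzero count.
std::vector<int> partition_rows(const std::vector<int>& row_ptr, int P) {
    const int n = static_cast<int>(row_ptr.size()) - 1;
    const long long nnz = row_ptr[n];
    std::vector<int> part(P + 1);
    part[0] = 0;
    part[P] = n;
    int row = 0;
    for (int p = 1; p < P; ++p) {
        const long long target = nnz * p / P;      // ideal nonzero prefix at this boundary
        while (row < n && row_ptr[row] < target) ++row;
        part[p] = row;                             // first row owned by PE p
    }
    return part;
}

int main() {
    // Row pointer of a 6-row matrix with 2, 3, 1, 2, 3, 1 nonzeros per row.
    std::vector<int> row_ptr{0, 2, 5, 6, 8, 11, 12};
    for (int b : partition_rows(row_ptr, 3)) std::cout << b << ' ';   // 0 2 4 6
    std::cout << '\n';
}
```

Contiguous blocks keep each processing element's index range compact; balancing by nonzeros rather than by row count is what keeps the arithmetic load even when row lengths vary.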
TSAI, NIAN-YING, and 蔡念穎. "On Job Allocation Strategies for Running Sparse Matrix-Vector Multiplication on GPUs". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/mpwmh4.
國立彰化師範大學
資訊工程學系
105
In the era of big data, the Graphics Processing Unit (GPU) has been widely used to deal with many parallelization problems as the amount of data to be processed grows. Sparse Matrix-Vector Multiplication is an important and basic operation in many fields, and there is still considerable room for improving its performance on the GPU. This thesis is mainly about job allocation strategies for running Sparse Matrix-Vector Multiplication on GPUs. The LightSpMV algorithm is based on the standard CSR format, a common sparse matrix storage format that is more flexible than other formats. LightSpMV uses two dynamic configuration methods, distributing matrix rows either at vector granularity or at warp granularity. Both methods use atomic operations to obtain the row index values. Because atomic operations consume too much execution time, we propose three strategies for this part of the workload allocation: (1) using the warp as the basic unit and doubling the number of rows executed per allocation, so that the number of atomic operations is reduced; (2) using the block as the basic unit, with the number of rows allocated dynamically, which reduces the number of atomic operations compared with the warp-based dynamic configuration; and (3) using the block as the basic unit, with the number of rows executed by each block allocated statically, while inside each block rows are again distributed dynamically at warp granularity without atomic operations. In our experiments on a GTX 980 GPU, the third strategy performed best, with a performance improvement of nearly 100%.
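The atomic-counter allocation that the strategies above tune can be sketched on the CPU with standard threads: each worker claims the next chunk of rows with a single fetch_add, so larger chunks mean fewer atomic operations. This is a hedged illustration of the allocation idea only (chunk size, thread count, and the toy matrix are arbitrary); it is not the LightSpMV GPU kernel.

```cpp
#include <algorithm>
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

// CSR SpMV where worker threads claim row chunks from a shared atomic counter:
// one fetch_add per chunk, so doubling the chunk size halves the atomics.
void spmv_dynamic(const std::vector<int>& row_ptr, const std::vector<int>& col,
                  const std::vector<double>& val, const std::vector<double>& x,
                  std::vector<double>& y, int chunk, int num_threads)
{
    const int n = static_cast<int>(row_ptr.size()) - 1;
    std::atomic<int> next_row{0};
    auto worker = [&] {
        for (;;) {
            const int first = next_row.fetch_add(chunk);   // claim the next chunk
            if (first >= n) break;
            const int last = std::min(first + chunk, n);
            for (int i = first; i < last; ++i) {
                double sum = 0.0;
                for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
                    sum += val[k] * x[col[k]];
                y[i] = sum;
            }
        }
    };
    std::vector<std::thread> pool;
    for (int t = 0; t < num_threads; ++t) pool.emplace_back(worker);
    for (auto& th : pool) th.join();
}

int main() {
    std::vector<int> row_ptr{0, 2, 3, 5}, col{0, 2, 1, 0, 2};
    std::vector<double> val{4, 1, 3, 2, 5}, x{1, 2, 3}, y(3, 0.0);
    spmv_dynamic(row_ptr, col, val, x, y, /*chunk=*/2, /*num_threads=*/2);
    for (double v : y) std::cout << v << ' ';   // 7 6 17
    std::cout << '\n';
}
```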
Mirza, Salma. "Scalable, Memory-Intensive Scientific Computing on Field Programmable Gate Arrays". 2010. https://scholarworks.umass.edu/theses/404.
Batjargal, Delgerdalai, and 白德格. "Parallel Matrix Transposition and Vector Multiplication Using OpenMP". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/50550637149183575586.
靜宜大學
資訊工程學系
101
In this thesis, we propose two parallel algorithms for sparse matrix-transpose and vector multiplication using the CSR (Compressed Sparse Row) format. Even though this storage format is simple and hence easy to understand and maintain, one of its limitations is that it is difficult to parallelize, and the performance of a naïve parallel algorithm can be poor. However, by preprocessing useful information that is hidden and indirect in the data structure while reading the matrix from a file, our matrix-transposition algorithm can then be performed in parallel using OpenMP. Our codes run on a quad-core Intel Xeon64 CPU E5507 platform. We measure and compare the performance of our algorithms with that of the Compressed Sparse Block (CSB) format. Our experimental results show that our algorithms are comparable to the CSB-based algorithm when the nonzeros are scattered across the matrix and the matrix size grows.
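The difficulty the abstract above alludes to is that y = Aᵀ·x with A in CSR scatters updates across y, so a naive row-parallel loop races. Below is a minimal OpenMP sketch that avoids the races with per-thread accumulators; it is a generic illustration on assumed data, not the thesis's preprocessing-based algorithm.

```cpp
#include <iostream>
#include <vector>

// y = A^T * x with A stored in CSR. Row-parallel traversal scatters updates
// into y[col[k]], so each thread accumulates into a private buffer and the
// buffers are merged at the end. Compile with -fopenmp; without it the
// pragmas are ignored and the code runs serially with the same result.
std::vector<double> transpose_spmv(const std::vector<int>& row_ptr,
                                   const std::vector<int>& col,
                                   const std::vector<double>& val,
                                   const std::vector<double>& x, int ncols)
{
    const int nrows = static_cast<int>(row_ptr.size()) - 1;
    std::vector<double> y(ncols, 0.0);
    #pragma omp parallel
    {
        std::vector<double> local(ncols, 0.0);             // per-thread accumulator
        #pragma omp for nowait
        for (int i = 0; i < nrows; ++i)
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
                local[col[k]] += val[k] * x[i];            // scatter into private copy
        #pragma omp critical
        for (int j = 0; j < ncols; ++j) y[j] += local[j];  // merge
    }
    return y;
}

int main() {
    // A = [[4,0,1],[0,3,0],[2,0,5]] in CSR; A^T * (1,2,3) = (10, 6, 16)
    std::vector<int> row_ptr{0, 2, 3, 5}, col{0, 2, 1, 0, 2};
    std::vector<double> val{4, 1, 3, 2, 5}, x{1, 2, 3};
    for (double v : transpose_spmv(row_ptr, col, val, x, 3)) std::cout << v << ' ';
    std::cout << '\n';
}
```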
deLorimier, Michael John. "Floating-Point Sparse Matrix-Vector Multiply for FPGAs". Thesis, 2005. https://thesis.library.caltech.edu/1776/1/smvm_thesis.pdf.
Large, high density FPGAs with high local distributed memory bandwidth surpass the peak floating-point performance of high-end, general-purpose processors. Microprocessors do not deliver near their peak floating-point performance on efficient algorithms that use the Sparse Matrix-Vector Multiply (SMVM) kernel. In fact, microprocessors rarely achieve 33% of their peak floating-point performance when computing SMVM. We develop and analyze a scalable SMVM implementation on modern FPGAs and show that it can sustain high throughput, near peak, floating-point performance. Our implementation consists of logic design as well as scheduling and data placement techniques. For benchmark matrices from the Matrix Market Suite we project 1.5 double precision Gflops/FPGA for a single VirtexII-6000-4 and 12 double precision Gflops for 16 Virtex IIs (750 Mflops/FPGA). We also analyze the asymptotic efficiency of our architecture as parallelism scales using a constant rent-parameter matrix model. This demonstrates that our data placement techniques provide an asymptotic scaling benefit.
While FPGA performance is attractive, higher performance is possible if we re-balance the hardware resources in FPGAs with embedded memories. We show that sacrificing half the logic area for memory area rarely degrades performance and improves performance for large matrices, by up to 5 times. We also evaluate the performance effect of adding custom floating-point units, using a simple area model to preserve total chip area. Sacrificing logic for memory and custom floating-point units increases single FPGA performance to 5 double precision Gflops.
Girosi, Federico. "An Equivalence Between Sparse Approximation and Support Vector Machines". 1997. http://hdl.handle.net/1721.1/7289.
"Sparse learning under regularization framework". Thesis, 2011. http://library.cuhk.edu.hk/record=b6075111.
The first part of this thesis develops a novel online learning framework to solve group lasso and multi-task feature selection. To the best of our knowledge, the proposed online learning framework is the first framework for the corresponding models. The main advantages of the online learning algorithms are that (1) they can work on applications where training data appear sequentially, so that the training procedure can be started at any time; and (2) they can handle data of any size with any number of features. The efficiency of the algorithms is attained because we derive closed-form solutions to update the weights of the corresponding models. At each iteration, the online learning algorithms need only O(d) time complexity and memory cost for group lasso, while they need O(d × Q) for multi-task feature selection, where d is the number of dimensions and Q is the number of tasks. Moreover, we provide theoretical analysis for the average regret of the online learning algorithms, which also guarantees the convergence rate of the algorithms. In addition, we extend the online learning framework to solve several related models which yield sparser solutions.
The second part of this thesis addresses a general scenario of semi-supervised learning for the binary classification problem, where the unlabeled data may be a mixture of relevant and irrelevant data to the target binary classification task. Without specifying the relatedness in the unlabeled data, we develop a novel maximum margin classifier, named the tri-class support vector machine (3C-SVM), to seek an inductive rule that can separate these data into three categories: -1, +1, or 0. This is achieved by adopting a novel min loss function and following the maximum entropy principle. For the implementation, we approximate the problem and solve it by a standard concave-convex procedure (CCCP). The approach is very efficient and it can handle large-scale datasets.
The third part of this thesis focuses on multiple kernel learning (MKL) to solve the insufficiency of the L1-MKL and the Lp-MKL models. Hence, we propose a generalized MKL (GMKL) model by introducing an elastic net-type constraint on the kernel weights. More specifically, it is an MKL model with a constraint on a linear combination of the L1-norm and the square of the L2-norm on the kernel weights to seek the optimal kernel combination weights. Therefore, previous MKL problems based on the L1-norm or the L2-norm constraints can be regarded as its special cases. Moreover, our GMKL enjoys the favorable sparsity property on the solution and also facilitates the grouping effect. In addition, the optimization of our GMKL is a convex optimization problem, where a local solution is the globally optimal solution. We further derive the level method to efficiently solve the optimization problem.
Yang, Haiqin.
Advisers: Kuo Chin Irwin King; Michael Rung Tsong Lyu.
Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (leaves 152-173).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
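The closed-form weight updates referred to in the first part of the abstract above are closely related to group-wise soft-thresholding, the proximal operator of the group-lasso penalty: w_g = max(0, 1 − λ/‖z_g‖₂) · z_g. The sketch below shows that step in isolation, with made-up groups and λ; the thesis's own online algorithms embed such a step inside their update rules.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Group-wise soft-thresholding: each group of coordinates is either shrunk
// toward zero by a common factor or zeroed out entirely, which is what makes
// the group-lasso solution sparse at the group level.
std::vector<double> group_soft_threshold(const std::vector<double>& z,
                                         const std::vector<std::vector<int>>& groups,
                                         double lambda)
{
    std::vector<double> w(z.size(), 0.0);
    for (const auto& g : groups) {
        double norm = 0.0;
        for (int j : g) norm += z[j] * z[j];
        norm = std::sqrt(norm);
        const double scale = norm > lambda ? 1.0 - lambda / norm : 0.0;
        for (int j : g) w[j] = scale * z[j];   // whole group survives or is zeroed
    }
    return w;
}

int main() {
    std::vector<double> z{3.0, 4.0, 0.1, -0.2};           // two groups of two
    std::vector<std::vector<int>> groups{{0, 1}, {2, 3}};
    for (double v : group_soft_threshold(z, groups, 1.0)) std::cout << v << ' ';
    std::cout << '\n';   // first group shrunk by factor 0.8, second group zeroed
}
```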
"Bayesian Framework for Sparse Vector Recovery and Parameter Bounds with Application to Compressive Sensing". Master's thesis, 2019. http://hdl.handle.net/2286/R.I.55639.
Dissertation/Thesis
Masters Thesis Computer Engineering 2019
"Exploring the potential for accelerating sparse matrix-vector product on a Processing-in-Memory architecture". Thesis, 2009. http://hdl.handle.net/1911/61946.
Zein, Ahmed H. El. "Use of graphics processing units for sparse matrix-vector products in statistical machine learning applications". Master's thesis, 2009. http://hdl.handle.net/1885/148368.
Miron, David John. "The parallel solution of sparse linear least squares problems". PhD thesis, 1998. http://hdl.handle.net/1885/145188.
Wannenwetsch, Katrin Ulrike. "Deterministic Sparse FFT Algorithms". Doctoral thesis, 2016. http://hdl.handle.net/11858/00-1735-0000-002B-7C10-0.
Tiyyagura, Sunil Reddy [Verfasser]. "Efficient solution of sparse linear systems arising in engineering applications on vector hardware / vorgelegt von Sunil Reddy Tiyyagura". 2010. http://d-nb.info/1004991452/34.
Gadekar, Ameet. "On Learning k-Parities and the Complexity of k-Vector-SUM". Thesis, 2016. http://hdl.handle.net/2005/3060.
Dai, Jingzhao. "SPARSE DISCRETE WAVELET DECOMPOSITION AND FILTER BANK TECHNIQUES FOR SPEECH RECOGNITION". Thesis, 2019.
Speech recognition is widely applied to translation from speech to related text, voice-driven commands, human-machine interfaces and so on [1]-[8]. It has become increasingly widespread in people's lives in the modern age. To improve the accuracy of speech recognition, various algorithms such as artificial neural networks and hidden Markov models have been developed [1], [2].
In this thesis work, the tasks of speech recognition with various classifiers are investigated. The classifiers employed include the support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF) and convolutional neural network (CNN). Two novel features extraction methods of sparse discrete wavelet decomposition (SDWD) and bandpass filtering (BPF) based on the Mel filter banks [9] are developed and proposed. In order to meet diversity of classification algorithms, one-dimensional (1D) and two-dimensional (2D) features are required to be obtained. The 1D features are the array of power coefficients in frequency bands, which are dedicated for training SVM, KNN and RF classifiers while the 2D features are formed both in frequency domain and temporal variations. In fact, the 2D feature consists of the power values in decomposed bands versus consecutive speech frames. Most importantly, the 2D feature with geometric transformation are adopted to train CNN.
Speech recognition including males and females are from the recorded data set as well as the standard data set. Firstly, the recordings with little noise and clear pronunciation are applied with the proposed feature extraction methods. After many trials and experiments using this dataset, a high recognition accuracy is achieved. Then, these feature extraction methods are further applied to the standard recordings having random characteristics with ambient noise and unclear pronunciation. Many experiment results validate the effectiveness of the proposed feature extraction techniques.
Rossi, Stefano, and Piero Tortoli. "Development and validation of novel approaches for real-time ultrasound vector velocity measurements". Doctoral thesis, 2021. http://hdl.handle.net/2158/1239650.
Παπαδήμα, Ελισσάβετ. "Πειραματική αξιολόγηση μεθοδολογίας βελτιστοποίησης του αλγόριθμου πολλαπλασιασμού πίνακα επί διάνυσμα σε μονοπύρηνες και πολυπύρηνες αρχιτεκτονικές". Thesis, 2013. http://hdl.handle.net/10889/7283.
The subject of this MSc Thesis is the implementation and the experimental evaluation of a methodology that has been developed at the Laboratory of Integrated Circuits and optimizes the Matrix Vector Multiplication (MVM) in single-core and multi-core processors. The methodology fully exploits the characteristics of the architecture. Specifically, it exploits (a) the hierarchy of the memory, (b) the cache size, (c) the cache associativity, (d) the memory latency and (e) the number of the cores. It is the first time that the cache associativity is taken into account. The methodology optimizes all the parameters together as one problem and not separately. A different scheduling is proposed according to the size of the matrix. The general purpose processors Intel Core 2 Duo E6065, Intel Core 2 Duo T6600 and Intel i7-3930K and the embedded processor Virtex-5 Microblaze have been used. The results have been compared with the state-of-the-art library ATLAS (Automatically Tuned Linear Algebra Software) and the performance is improved by 30%. According to the experimental results, it is obvious that the bottleneck is the memory latency. Moreover, the performance is increased when a new way of saving the matrix in the main memory (data array layout) is used in both single-core and multi-core architectures. As far as the tiling is concerned, the experimental results indicate that the decrease of the misses does not always improve the performance because there is a trade-off between the tile size and the addressing instructions. According to the experimental results, as far as multicore architectures are concerned, there is no linear relation between the performance and the number of the cores, because of the limited memory bandwidth.
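A minimal illustration of the kind of loop tiling the abstract above evaluates: the dense matrix-vector product is processed in column blocks so that the corresponding slice of x stays in cache while it is reused across all rows. The block width and data here are assumptions for illustration; the thesis's methodology instead derives such parameters from the actual cache size, associativity, and core count.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// y += A * x for a dense row-major n x n matrix, processed in column tiles of
// width bc so the slice x[j0..j1) is reused across all rows while it is cached.
void matvec_tiled(const std::vector<double>& A, const std::vector<double>& x,
                  std::vector<double>& y, std::size_t n, std::size_t bc)
{
    for (std::size_t j0 = 0; j0 < n; j0 += bc) {
        const std::size_t j1 = std::min(j0 + bc, n);
        for (std::size_t i = 0; i < n; ++i) {
            double sum = 0.0;
            for (std::size_t j = j0; j < j1; ++j)
                sum += A[i * n + j] * x[j];
            y[i] += sum;
        }
    }
}

int main() {
    const std::size_t n = 1024;
    std::vector<double> A(n * n, 1.0), x(n, 1.0), y(n, 0.0);
    matvec_tiled(A, x, y, n, /*bc=*/256);   // tile width is the tunable parameter
    std::cout << y[0] << '\n';              // expect 1024
}
```

Whether a given tile width helps depends on the trade-off the abstract mentions: smaller tiles improve x reuse but add loop and addressing overhead.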
Bapat, Tanuja. "Sparse Multiclass And Multi-Label Classifier Design For Faster Inference". Thesis, 2011. http://etd.iisc.ernet.in/handle/2005/2065.
Heinemeyer, Eric. "Integral Equation Methods for Rough Surface Scattering Problems in three Dimensions". Doctoral thesis, 2008. http://hdl.handle.net/11858/00-1735-0000-000D-F15F-2.
Balamurugan, P. "Efficient Algorithms for Structured Output Learning". Thesis, 2014. http://etd.iisc.ernet.in/2005/3488.