Dissertations / Theses on the topic 'Sparsification'
Consult the top 26 dissertations / theses for your research on the topic 'Sparsification.'
Camacho, Martin Ayalde. "Spectral sparsification." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:12553868.
Oliveira, Rafael (Rafael Mendes de Oliveira). "Spectral sparsification and spectrally thin trees." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/88906.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 31).
We present extensive experimental results investigating the existence of spectrally thin trees and unweighted spectral sparsifiers for graphs with small expansion. We also survey and prove partial results on the existence of spectrally thin trees in dense graphs with sufficiently high expansion.
by Rafael Oliveira.
M. Eng.
Moitra, Ankur. "Vertex sparsification and universal rounding algorithms." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66019.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 125-129).
Suppose we are given a gigantic communication network, but are only interested in a small number of nodes (clients). There are many routing problems we could be asked to solve for our clients. Is there a much smaller network - that we could write down on a sheet of paper and put in our pocket - that approximately preserves all the relevant communication properties of the original network? As we will demonstrate, the answer to this question is YES, and we call this smaller network a vertex sparsifier. In fact, if we are asked to solve a sequence of optimization problems characterized by cuts or flows, we can compute a good vertex sparsifier ONCE and discard the original network. We can run our algorithms (or approximation algorithms) on the vertex sparsifier as a proxy - and still recover approximately optimal solutions in the original network. This novel pattern saves both space (because the network we store is much smaller) and time (because our algorithms run on a much smaller graph). Additionally, we apply these ideas to obtain a master theorem for graph partitioning problems - as long as the integrality gap of a standard linear programming relaxation is bounded on trees, then the integrality gap is at most a logarithmic factor larger for general networks. This result implies optimal bounds for many well studied graph partitioning problems as a special case, and even yields optimal bounds for more challenging problems that had not been studied before. Morally, these results are all based on the idea that even though the structure of optimal solutions can be quite complicated, these solution values can be approximated by crude (even linear) functions.
by Ankur Moitra.
Ph.D.
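The guarantee described in Moitra's abstract can be made concrete on a toy instance. The sketch below (plain Python, brute force on a tiny graph; `terminal_cut_function` and the example network are illustrative assumptions, not the thesis's construction) computes the quantity a vertex sparsifier must approximately preserve: for every bipartition of the terminal set, the minimum cut in the original network separating the two sides.

```python
from itertools import combinations

def cut_weight(edges, side):
    """Total weight of edges with exactly one endpoint in `side`."""
    return sum(w for (u, v), w in edges.items() if (u in side) != (v in side))

def terminal_cut_function(edges, nodes, terminals):
    """Brute-force the cut function a vertex sparsifier must preserve:
    for every bipartition (S, K\\S) of the terminal set K, the minimum
    weight of a cut in the graph separating S from K\\S."""
    others = [v for v in nodes if v not in terminals]
    result = {}
    for r in range(1, len(terminals)):
        for S in map(set, combinations(terminals, r)):
            best = float("inf")
            # try every way of assigning non-terminals to the S-side
            for k in range(len(others) + 1):
                for extra in combinations(others, k):
                    best = min(best, cut_weight(edges, S | set(extra)))
            result[frozenset(S)] = best
    return result

# toy network: a path 1-2-3-4 plus a heavier chord 2-4; terminals are 1 and 4
edges = {(1, 2): 1.0, (2, 3): 1.0, (3, 4): 1.0, (2, 4): 2.0}
cuts = terminal_cut_function(edges, [1, 2, 3, 4], [1, 4])
```

On real networks these cuts would be computed with max-flow rather than enumeration; a quality-α vertex sparsifier is then a small graph on the terminals alone whose own cut function matches `cuts` entrywise up to a factor α.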
Vial, John Francis Stephen. "Conservative Sparsification for Efficient Approximate Estimation." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9907.
Ortmann, Mark [Verfasser]. "Combinatorial Algorithms for Graph Sparsification / Mark Ortmann." Konstanz : Bibliothek der Universität Konstanz, 2017. http://d-nb.info/1173616438/34.
Di, Jinchao. "Gene Co-expression Network Mining Using Graph Sparsification." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1367583964.
Dahlin, Oskar. "Implementing and Evaluating sparsification methods in probabilistic networks." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428591.
Tran, Gia-Lac. "Advances in Deep Gaussian Processes : calibration and sparsification." Electronic Thesis or Diss., Sorbonne université, 2020. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2020SORUS410.pdf.
Gaussian Processes (GPs) are an attractive way of doing non-parametric Bayesian modeling in a supervised learning setting. GPs are well known for producing both predictions and predictive uncertainties on a firm mathematical footing. However, practitioners often avoid GPs because of limited kernel expressiveness and high computational requirements. Integrating (convolutional) neural networks with GPs is a promising way to enhance their representational power. As our first contribution, we empirically show that such combinations are miscalibrated, leading to over-confident predictions. We then propose a well-calibrated method for merging neural structures and GPs using random features and variational inference; these frameworks extend naturally to reduce the computational cost via structured random features. Regarding computational cost, exact Gaussian Processes scale cubically with the training set size. Inducing-point Gaussian Processes are a common way to mitigate this bottleneck: a small set of active points is selected through a global distillation of the available observations. The general case remains elusive, however, and the required number of active points may still exceed a given computational budget. In our second study, we propose Sparse-within-Sparse Gaussian Processes, which enable approximation with a large number of inducing points without incurring a prohibitive computational cost.
Kanapka, Joseph D. (Joseph Daniel) 1972. "Fast methods for extraction and sparsification of substrate coupling." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29236.
Includes bibliographical references (p. 107-111).
Substrate coupling effects have had an increasing impact on circuit performance in recent years. As a result, there is strong demand for substrate simulation tools. Past work has concentrated on fast substrate solvers that are applied once per contact to get the dense conductance matrix G. We develop a method of using any underlying substrate solver a near-constant number of times to obtain a sparse approximate representation G ≈ QGwtQ' in a new basis. This method differs from previous matrix sparsification techniques in that it requires only a "black box" which can apply G quickly; it doesn't need an analytical representation of the underlying kernel or access to individual entries of G. The change-of-basis matrix Q is also sparse. For our largest example, with 10240 contacts, we obtained a Gwt with 130 times fewer nonzeros than the dense G (and Q more than twice as sparse as Gwt), with 20 times fewer solves than the naive method, and fewer than 4 percent of the QGwtQ' entries had relative error more than 10% compared to the exact G.
by Joseph Daniel Kanapka.
Ph.D.
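The idea in the abstract above, sparsifying a dense coupling matrix using only black-box matrix-vector products by changing to a basis in which a smooth kernel is nearly sparse, can be sketched in miniature. Everything below (the orthonormal Haar basis, the toy kernel `1/(1+|i-j|)`, the threshold `tau`) is an illustrative assumption, not Kanapka's actual construction:

```python
import math

def haar(n):
    """Rows form an orthonormal Haar basis of R^n (n a power of two)."""
    if n == 1:
        return [[1.0]]
    h, s = haar(n // 2), 1 / math.sqrt(2)
    top = [[x * s for x in row for _ in (0, 1)] for row in h]      # scaling part
    bot = [[s if j == 2 * i else -s if j == 2 * i + 1 else 0.0
            for j in range(n)] for i in range(n // 2)]             # wavelet part
    return top + bot

def apply_G(v):
    """'Black box' applying the dense matrix G with G[i][j] = 1/(1+|i-j|);
    the sparsifier never reads individual entries of G directly."""
    n = len(v)
    return [sum(v[j] / (1 + abs(i - j)) for j in range(n)) for i in range(n)]

n = 8
H = haar(n)
# Gt = H G H^T, built from n black-box applications of G
GH = [apply_G(H[b]) for b in range(n)]
Gt = [[sum(H[a][i] * GH[b][i] for i in range(n)) for b in range(n)]
      for a in range(n)]
# threshold small coefficients: the smooth kernel is nearly sparse in this basis
tau = 0.05
Gt_sparse = [[x if abs(x) > tau else 0.0 for x in row] for row in Gt]
nnz = sum(1 for row in Gt_sparse for x in row if x != 0.0)
# reconstruct G ≈ H^T Gt_sparse H and measure the worst entrywise error
recon = [[sum(H[a][i] * Gt_sparse[a][b] * H[b][j]
              for a in range(n) for b in range(n))
          for j in range(n)] for i in range(n)]
err = max(abs(apply_G([1.0 if t == j else 0.0 for t in range(n)])[i] - recon[i][j])
          for i in range(n) for j in range(n))
```

With the threshold set to zero the reconstruction is exact (the basis is orthonormal); raising it trades a controlled entrywise error for fewer nonzeros, which is the sparsification trade-off the abstract quantifies at scale.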
Geppert, Jakob Alexander [Verfasser]. "Adaptive Sparsification Mechanisms in Signal Recovery / Jakob Alexander Geppert." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2021. http://d-nb.info/1231542098/34.
Shah, Shivani. "Graph sparsification and unsupervised machine learning for metagenomic binning." Thesis, Tours, 2019. http://theses.scd.univ-tours.fr/index.php?fichier=2019/shivani.shah_18225.pdf.
Metagenomics is the field of biology concerned with studying the genomic content of microbial communities directly in their natural environments. Metagenomic data are generated by sequencing technologies that take environmental samples as input. The resulting data consist of short fragments of DNA (called reads) originating from the genomes of all species present in the sample; dataset sizes range from thousands to millions of reads. One step of metagenomic data analysis is binning the reads: groups (called bins) are formed such that each group contains reads likely to originate from the same species or species family. In the metagenomic literature, binning has essentially been treated as a clustering task. One challenge in binning stems from the large size of the datasets, which overwhelms the computational resources required for the task; binning approaches that scale to large datasets are therefore needed. In this thesis, we address this issue by proposing a scalable binning method. We position our work among compositional binning approaches (using short k-mers) in a completely unsupervised setting. To decrease the complexity of the binning task, we propose methods that sparsify the data prior to clustering. The approach was developed in two steps: first, the idea was evaluated on smaller metagenomic datasets (a few thousand points); second, we scaled the approach to larger datasets (millions of points) using similarity-based indexing methods (LSH approaches). The thesis makes three major contributions. First, we propose sparsifying the data with proximity graphs prior to clustering.
The proximity graphs are built on the data to capture the pairwise relationships between data points that are relevant for clustering; community detection algorithms are then run on these graphs to identify clusters. An exploratory study with several proximity graphs and community detection algorithms was performed on three metagenomic datasets. Based on this study, we propose an approach named ProxiClust, which uses a kNN graph and Louvain community detection to perform binning. Second, to scale this approach to larger datasets, the distance matrix in the pipeline is replaced with hash tables built using the Sim-hash LSH approach. We introduce two strategies for building proximity graphs from the hash tables: 1) a micro-clusters graph and 2) an approximate k-nearest-neighbour graph. The performance and limitations of these graphs are evaluated and discussed on large MC datasets, together with a baseline evaluation establishing the datasets' clustering difficulty. Based on this study, we propose the mutual-kNN graph as the appropriate proximity graph for large datasets, a choice also evaluated and confirmed on the CAMI benchmark metagenomic datasets. Lastly, we examine alternative hashing approaches for building better-quality hash tables, including the data-dependent hashing approach ITQ and an orthogonal version of Sim-hash, and introduce two new data-dependent hashing approaches named ITQ-SH and ITQ-OrthSH. All of the hashing approaches are evaluated on their ability to hash the MC datasets with high precision and recall. The introduction of the mutual-kNN graph raises new challenges in the pipeline: the graph has many components, so a large number of clusters are generated, and a strategy is needed to merge similar clusters into an appropriate number of bins.
An approach to build the mutual-kNN graph directly from hash tables also needs to be designed. This would complete the ProxiClust pipeline for large datasets.
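The pipeline sketched in this abstract, sparsify with a proximity graph and then cluster on the graph, can be illustrated with a minimal mutual-kNN construction. This sketch substitutes connected components for the Louvain community detection the thesis actually uses, and the function names and toy data are hypothetical:

```python
def mutual_knn_graph(points, k, dist):
    """Adjacency of the mutual k-nearest-neighbour graph: u and v are joined
    only if each is among the other's k nearest neighbours."""
    n = len(points)
    knn = []
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dist(points[i], points[j]))
        knn.append(set(order[:k]))
    return {i: {j for j in knn[i] if i in knn[j]} for i in range(n)}

def components(adj):
    """Connected components of the sparsified graph, used here as a
    simple stand-in for community detection."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# two well-separated point clouds standing in for two species' k-mer profiles
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
euclid = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
bins = components(mutual_knn_graph(pts, 2, euclid))
```

The mutual-kNN requirement is what makes the graph sparse: a pairwise distance matrix is never clustered directly, only the graph is. At metagenomic scale, the all-pairs `sorted` step would itself be replaced by the LSH hash tables the abstract describes.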
Muoh, Chibuike. "Sparsification for Topic Modeling and Applications to Information Retrieval." Kent State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=kent1259206719.
Machimada, Machaiah Chittiappa <1993>. "Graph Sparsification and Semi-Supervised Learning: an Experimental Study." Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/19404.
Full textAsathulla, Mudabir Kabir. "A Sparsification Based Algorithm for Maximum-Cardinality Bipartite Matching in Planar Graphs." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/88080.
Full textMS
Bonnet, Edouard. "Résultats Positifs et Négatifs en Approximation et Complexité Paramétrée." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090040/document.
Many real-life problems are NP-hard and cannot be solved in polynomial time. The two main options for overcoming this are approximation and parameterized complexity. In this thesis, we present a new technique called greediness-for-parameterization and use it to improve the parameterized complexity of many problems; we also use it to obtain parameterized algorithms for some problems on bipartite graphs. To establish negative results on approximability in subexponential and parameterized time, we introduce new methods of sparsification that preserve approximation, and combine these "sparsifiers" with known or new reductions. Finally, we present hardness results for games such as Bridge and Havannah.
Wang, Guan. "STREAMING HYPERGRAPH PARTITION FOR MASSIVE GRAPHS." Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1385097649.
Chakeri, Alireza. "Scalable Unsupervised Learning with Game Theory." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6616.
Wang, Nan. "A Framework of Transforming Vertex Deletion Algorithm to Edge Deletion Algorithm." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1504878748832156.
Däubel, Karl [Verfasser], Torsten [Akademischer Betreuer] Mütze, Martin [Akademischer Betreuer] Skutella, Martin [Gutachter] Skutella, Torsten [Gutachter] Mütze, and Sebastian [Gutachter] Stiller. "Some aspects of graph sparsification in theory and practice / Karl Däubel ; Gutachter: Martin Skutella, Torsten Mütze, Sebastian Stiller ; Torsten Mütze, Martin Skutella." Berlin : Technische Universität Berlin, 2020. http://d-nb.info/1214296068/34.
Liang, Weifa. "Designing Efficient Parallel Algorithms for Graph Problems." The Australian National University, Department of Computer Science, 1997. http://thesis.anu.edu.au./public/adt-ANU20010829.114536.
Will, Sebastian, and Hosna Jabbari. "Sparse RNA folding revisited." Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-204163.
Dell, Holger. "Sparse instances of hard problems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2011. http://dx.doi.org/10.18452/16375.
In this thesis, we use and refine methods of computational complexity theory to analyze the complexity of sparse instances, such as graphs with few edges or formulas with few constraints of bounded width. Two natural questions arise in this context: (a) Is there an efficient algorithm that reduces arbitrary instances of an NP-hard problem to equivalent, sparse instances? (b) Is there an algorithm that solves sparse instances of an NP-hard problem significantly faster than general instances can be solved? We formalize these questions for different problems and show that positive answers would lead to consequences in complexity theory that are considered unlikely. Question (a) is modeled by a communication process in which two players want to cooperatively decide an NP-hard language while communicating as little as possible. Under the complexity-theoretic hypothesis that coNP is not in NP/poly, our results imply surprisingly tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. We study question (b) for counting problems in the exponential-time setting. Assuming (variants of) the exponential time hypothesis (ETH), we obtain asymptotically tight exponential lower bounds for well-studied #P-hard problems: computing the number of satisfying assignments of a 2-CNF formula, computing the number of all independent sets in a graph, computing the permanent of a matrix with entries 0 and 1, and evaluating the Tutte polynomial at fixed evaluation points.
Curado, Manuel. "Structural Similarity: Applications to Object Recognition and Clustering." Doctoral thesis, Universidad de Alicante, 2018. http://hdl.handle.net/10045/98110.
Full textMinisterio de Economía, Industria y Competitividad (Referencia TIN2012-32839 BES-2013-064482)
Liao, Zhenyu. "A regularization perspective on spectral sparsification." Thesis, 2019. https://hdl.handle.net/2144/38975.
Full textWang, Sheng-Ping, and 王盛平. "Communication Usage Optimization of Gradient Sparsification with Aggregation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ppyyqb.
Full text國立臺灣大學
資訊工程學研究所
106
Communication is a bottleneck when scaling the number of workers in distributed deep learning. One solution is to compress the exchanged gradients into a sparse format using gradient sparsification. We found that the send cost of the server, namely the aggregated size of the sparse gradients, can be reduced by the gradient selection performed at the workers. Following the observation that only a few gradients are significantly large, and only for a short period of time, we propose several gradient selection algorithms based on different metrics. Experiments show that our method reduces the aggregated size at the server, and the resulting reduction in time per iteration yields faster convergence than traditional sparsification.
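The worker-side selection described above can be sketched as magnitude-based top-k sparsification with local residual accumulation, the standard building block this line of work refines. This is a generic sketch, not the thesis's specific selection algorithms; `sparsify_topk` and its signature are assumptions:

```python
import heapq

def sparsify_topk(grad, residual, k):
    """Select the k largest-magnitude entries of (gradient + local residual);
    only those entries are sent to the server, and everything left unsent is
    accumulated in the residual so it is not lost for later iterations."""
    acc = [g + r for g, r in zip(grad, residual)]
    idx = heapq.nlargest(k, range(len(acc)), key=lambda i: abs(acc[i]))
    sparse = {i: acc[i] for i in idx}                       # sent to the server
    new_residual = [0.0 if i in sparse else acc[i] for i in range(len(acc))]
    return sparse, new_residual

# one worker step on a toy 4-entry gradient, keeping the top 2 entries
grad = [0.1, -2.0, 0.05, 1.5]
sparse, residual = sparsify_topk(grad, [0.0] * 4, k=2)
```

The server then aggregates the sparse dictionaries from all workers; the thesis's contribution is choosing which entries the workers send so that the union of selected indices, and hence the aggregated size, stays small.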
Liang, Weifa. "Designing Efficient Parallel Algorithms for Graph Problems." PhD thesis, 1997. http://hdl.handle.net/1885/47660.