Academic literature on the topic 'Sparsification'
Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of relevant articles, books, theses, and conference reports on the topic 'Sparsification.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Sparsification"
Chen, Yuhan, Haojie Ye, Sanketh Vedula, Alex Bronstein, Ronald Dreslinski, Trevor Mudge, and Nishil Talati. "Demystifying Graph Sparsification Algorithms in Graph Properties Preservation." Proceedings of the VLDB Endowment 17, no. 3 (November 2023): 427–40. http://dx.doi.org/10.14778/3632093.3632106.
Parchas, Panos, Nikolaos Papailiou, Dimitris Papadias, and Francesco Bonchi. "Uncertain Graph Sparsification." IEEE Transactions on Knowledge and Data Engineering 30, no. 12 (December 1, 2018): 2435–49. http://dx.doi.org/10.1109/tkde.2018.2819651.
Li, Tao, Wencong Jiao, Li-Na Wang, and Guoqiang Zhong. "Automatic DenseNet Sparsification." IEEE Access 8 (2020): 62561–71. http://dx.doi.org/10.1109/access.2020.2984130.
Eppstein, David, Zvi Galil, Giuseppe F. Italiano, and Thomas H. Spencer. "Separator Based Sparsification." Journal of Computer and System Sciences 52, no. 1 (February 1996): 3–27. http://dx.doi.org/10.1006/jcss.1996.0002.
Lobacheva, Ekaterina, Nadezhda Chirkova, Alexander Markovich, and Dmitry Vetrov. "Structured Sparsification of Gated Recurrent Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4989–96. http://dx.doi.org/10.1609/aaai.v34i04.5938.
Farina, Gabriele, and Tuomas Sandholm. "Fast Payoff Matrix Sparsification Techniques for Structured Extensive-Form Games." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 4999–5007. http://dx.doi.org/10.1609/aaai.v36i5.20431.
Batson, Joshua, Daniel A. Spielman, Nikhil Srivastava, and Shang-Hua Teng. "Spectral Sparsification of Graphs." Communications of the ACM 56, no. 8 (August 2013): 87–94. http://dx.doi.org/10.1145/2492007.2492029.
Bonnet, Édouard, and Vangelis Th. Paschos. "Sparsification and Subexponential Approximation." Acta Informatica 55, no. 1 (October 12, 2016): 1–15. http://dx.doi.org/10.1007/s00236-016-0281-2.
Spielman, Daniel A., and Shang-Hua Teng. "Spectral Sparsification of Graphs." SIAM Journal on Computing 40, no. 4 (January 2011): 981–1025. http://dx.doi.org/10.1137/08074489x.
Butti, Silvia, and Stanislav Živný. "Sparsification of Binary CSPs." SIAM Journal on Discrete Mathematics 34, no. 1 (January 2020): 825–42. http://dx.doi.org/10.1137/19m1242446.
Full textDissertations / Theses on the topic "Sparsification"
Camacho, Martin Ayalde. "Spectral sparsification." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:12553868.
Oliveira, Rafael (Rafael Mendes de Oliveira). "Spectral sparsification and spectrally thin trees." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/88906.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (page 31).
We present extensive experimental results investigating the existence of spectrally thin trees and unweighted spectral sparsifiers for graphs with small expansion. We also survey, and prove partial results on, the existence of spectrally thin trees in dense graphs with sufficiently high expansion.
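Several entries on this page (Batson, Spielman, Srivastava, and Teng; Spielman and Teng; the thesis above) concern spectral sparsification: replacing a graph by a much sparser one whose Laplacian quadratic form stays close to the original's. The sketch below is a generic, illustrative Python implementation of Spielman–Srivastava-style sampling by effective resistance; it is not code from any of the cited works, and the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy weighted graph: the complete graph K8, as (u, v, weight) triples.
n = 8
edges = [(i, j, 1.0) for i in range(n) for j in range(i + 1, n)]

def laplacian(edge_list, n):
    L = np.zeros((n, n))
    for u, v, w in edge_list:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

L = laplacian(edges, n)
Lpinv = np.linalg.pinv(L)  # pseudoinverse of the graph Laplacian

# Sampling probability of edge e is proportional to w_e * R_e, where
# R_e = (e_u - e_v)^T L^+ (e_u - e_v) is the effective resistance across e.
p = np.empty(len(edges))
for k, (u, v, w) in enumerate(edges):
    x = np.zeros(n)
    x[u], x[v] = 1.0, -1.0
    p[k] = w * (x @ Lpinv @ x)
p /= p.sum()

# Draw q edges with replacement; reweight so the sparsifier is unbiased.
q = 60
counts = np.bincount(rng.choice(len(edges), size=q, p=p), minlength=len(edges))
sparsifier = [(u, v, counts[k] * w / (q * p[k]))
              for k, (u, v, w) in enumerate(edges) if counts[k] > 0]

LH = laplacian(sparsifier, n)
```

In expectation the sampled Laplacian `LH` equals `L`; with q = O(n log n / ε²) samples the approximation holds spectrally with high probability.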
Moitra, Ankur. "Vertex sparsification and universal rounding algorithms." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66019.
Suppose we are given a gigantic communication network, but are only interested in a small number of nodes (clients). There are many routing problems we could be asked to solve for our clients. Is there a much smaller network - that we could write down on a sheet of paper and put in our pocket - that approximately preserves all the relevant communication properties of the original network? As we will demonstrate, the answer to this question is YES, and we call this smaller network a vertex sparsifier. In fact, if we are asked to solve a sequence of optimization problems characterized by cuts or flows, we can compute a good vertex sparsifier ONCE and discard the original network. We can run our algorithms (or approximation algorithms) on the vertex sparsifier as a proxy - and still recover approximately optimal solutions in the original network. This novel pattern saves both space (because the network we store is much smaller) and time (because our algorithms run on a much smaller graph). Additionally, we apply these ideas to obtain a master theorem for graph partitioning problems - as long as the integrality gap of a standard linear programming relaxation is bounded on trees, then the integrality gap is at most a logarithmic factor larger for general networks. This result implies optimal bounds for many well studied graph partitioning problems as a special case, and even yields optimal bounds for more challenging problems that had not been studied before. Morally, these results are all based on the idea that even though the structure of optimal solutions can be quite complicated, these solution values can be approximated by crude (even linear) functions.
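The abstract above describes summarizing a large network by a small one on the client (terminal) nodes only. As a toy illustration of the idea — not the thesis's construction — the sketch below replaces a network by a clique on its terminals, weighting each terminal pair by its exact pairwise minimum cut (computed with Edmonds–Karp). Note that a true cut sparsifier must approximately preserve the cuts of every subset of terminals, not just pairs; all names here are ours.

```python
from collections import deque
from itertools import combinations

def min_cut_value(cap, s, t):
    """Edmonds-Karp max-flow = min s-t cut value.
    cap: dict u -> dict v -> capacity (undirected edges entered both ways)."""
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in list(res):
        for v in res[u]:
            res.setdefault(v, {}).setdefault(u, 0)  # ensure reverse arcs
    flow = 0
    while True:
        parent = {s: None}          # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t             # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[a][b] for a, b in path)
        for a, b in path:
            res[a][b] -= aug
            res[b][a] += aug
        flow += aug

def terminal_clique(cap, terminals):
    """Complete graph on the terminals, each pair weighted by its exact
    min cut in the full network -- a naive pairwise-flow summary."""
    return {(s, t): min_cut_value(cap, s, t)
            for s, t in combinations(terminals, 2)}

# Tiny network: terminals 'a' and 'b' connected through internal nodes x, y.
cap = {
    'a': {'x': 3},
    'x': {'a': 3, 'y': 2, 'b': 1},
    'y': {'x': 2, 'b': 2},
    'b': {'x': 1, 'y': 2},
}
summary = terminal_clique(cap, ['a', 'b'])  # min a-b cut here is 3
```

Once the summary is computed, pairwise routing questions between terminals can be answered from it without consulting the original network.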
Vial, John Francis Stephen. "Conservative Sparsification for Efficient Approximate Estimation." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9907.
Ortmann, Mark [Verfasser]. "Combinatorial Algorithms for Graph Sparsification / Mark Ortmann." Konstanz: Bibliothek der Universität Konstanz, 2017. http://d-nb.info/1173616438/34.
Di, Jinchao. "Gene Co-expression Network Mining Using Graph Sparsification." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1367583964.
Dahlin, Oskar. "Implementing and Evaluating Sparsification Methods in Probabilistic Networks." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428591.
Tran, Gia-Lac. "Advances in Deep Gaussian Processes: Calibration and Sparsification." Electronic thesis or dissertation, Sorbonne Université, 2020. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2020SORUS410.pdf.
Full textGaussian Processes (GPs) are an attractive specific way of doing non-parametric Bayesian modeling in a supervised learning problem. It is well-known that GPs are able to make inferences as well as predictive uncertainties with a firm mathematical background. However, GPs are often unfavorable by the practitioners due to their kernel's expressiveness and the computational requirements. Integration of (convolutional) neural networks and GPs are a promising solution to enhance the representational power. As our first contribution, we empirically show that these combinations are miscalibrated, which leads to over-confident predictions. We also propose a novel well-calibrated solution to merge neural structures and GPs by using random features and variational inference techniques. In addition, these frameworks can be intuitively extended to reduce the computational cost by using structural random features. In terms of computational cost, the exact Gaussian Processes require the cubic complexity to training size. Inducing point-based Gaussian Processes are a common choice to mitigate the bottleneck by selecting a small set of active points through a global distillation from available observations. However, the general case remains elusive and it is still possible that the required number of active points may exceed a certain computational budget. In our second study, we propose Sparse-within-Sparse Gaussian Processes which enable the approximation with a large number of inducing points without suffering a prohibitive computational cost
Kanapka, Joseph D. (Joseph Daniel) 1972. "Fast methods for extraction and sparsification of substrate coupling." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29236.
Substrate coupling effects have had an increasing impact on circuit performance in recent years. As a result, there is strong demand for substrate simulation tools. Past work has concentrated on fast substrate solvers that are applied once per contact to get the dense conductance matrix G. We develop a method of using any underlying substrate solver a near-constant number of times to obtain a sparse approximate representation G ≈ QGwtQ' in a new basis. This method differs from previous matrix sparsification techniques in that it requires only a "black box" which can apply G quickly; it doesn't need an analytical representation of the underlying kernel or access to individual entries of G. The change-of-basis matrix Q is also sparse. For our largest example, with 10240 contacts, we obtained a Gwt with 130 times fewer nonzeros than the dense G (and Q more than twice as sparse as Gwt), with 20 times fewer solves than the naive method, and fewer than 4 percent of the QGwtQ' entries had a relative error of more than 10% compared to the exact G.
Geppert, Jakob Alexander [Verfasser]. "Adaptive Sparsification Mechanisms in Signal Recovery / Jakob Alexander Geppert." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2021. http://d-nb.info/1231542098/34.
Book chapters on the topic "Sparsification"
Cárdenas, Marcelo, Pascal Peter, and Joachim Weickert. "Sparsification Scale-Spaces." In Lecture Notes in Computer Science, 303–14. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22368-7_24.
Goranci, Gramoz, and Harald Räcke. "Vertex Sparsification in Trees." In Approximation and Online Algorithms, 103–15. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-51741-4_9.
Gionis, Aristides, Polina Rozenshtein, Nikolaj Tatti, and Evimaria Terzi. "Community-Aware Network Sparsification." In Proceedings of the 2017 SIAM International Conference on Data Mining, 426–34. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2017. http://dx.doi.org/10.1137/1.9781611974973.48.
Soma, Tasuku, and Yuichi Yoshida. "Spectral Sparsification of Hypergraphs." In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2570–81. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2019. http://dx.doi.org/10.1137/1.9781611975482.159.
Ahmed, Reyan, Keaton Hamm, Stephen Kobourov, Mohammad Javad Latifi Jebelli, Faryad Darabi Sahneh, and Richard Spence. "Multi-priority Graph Sparsification." In Lecture Notes in Computer Science, 1–12. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-34347-6_1.
Mouysset, Sandrine, and Ronan Guivarch. "Sparsification on Parallel Spectral Clustering." In Lecture Notes in Computer Science, 249–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38718-0_25.
Gollub, Tim, and Benno Stein. "Unsupervised Sparsification of Similarity Graphs." In Studies in Classification, Data Analysis, and Knowledge Organization, 71–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-10745-0_7.
Jansen, Bart M. P. "On Sparsification for Computing Treewidth." In Parameterized and Exact Computation, 216–29. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03898-8_19.
Schleif, Frank-Michael, Christoph Raab, and Peter Tino. "Sparsification of Indefinite Learning Models." In Lecture Notes in Computer Science, 173–83. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97785-0_17.
Santhanam, Rahul, and Srikanth Srinivasan. "On the Limits of Sparsification." In Automata, Languages, and Programming, 774–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31594-7_65.
Full textConference papers on the topic "Sparsification"
Li, Huan, and Aaron Schild. "Spectral Subspace Sparsification." In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2018. http://dx.doi.org/10.1109/focs.2018.00044.
Woerner, Peter C., Aditya G. Nair, Kunihiko Taira, and William S. Oates. "Network Theoretic Approach to Atomistic Material Modeling Using Spectral Sparsification." In ASME 2017 Conference on Smart Materials, Adaptive Structures and Intelligent Systems. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/smasis2017-3917.
Shi, Shaohuai, Kaiyong Zhao, Qiang Wang, Zhenheng Tang, and Xiaowen Chu. "A Convergence Analysis of Distributed SGD with Communication-Efficient Gradient Sparsification." In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/473.
Garg, Rahul, and Rohit Khandekar. "Gradient Descent with Sparsification." In Proceedings of the 26th Annual International Conference on Machine Learning. New York, NY: ACM Press, 2009. http://dx.doi.org/10.1145/1553374.1553417.
Lovett, Shachar, and Jiapeng Zhang. "DNF Sparsification beyond Sunflowers." In STOC '19: 51st Annual ACM SIGACT Symposium on the Theory of Computing. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3313276.3316323.
Tavakoli, Hamed R., Joachim Wabnig, Francesco Cricri, Honglei Zhang, Emre Aksu, and Iraj Saniee. "Hybrid Pruning and Sparsification." In 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021. http://dx.doi.org/10.1109/icip42928.2021.9506632.
Meidiana, Amyra, Seok-Hee Hong, Jiajun Huang, Peter Eades, and Kwan-Liu Ma. "Topology-Based Spectral Sparsification." In 2019 IEEE 9th Symposium on Large Data Analysis and Visualization (LDAV). IEEE, 2019. http://dx.doi.org/10.1109/ldav48142.2019.8944358.
Mathioudakis, Michael, Francesco Bonchi, Carlos Castillo, Aristides Gionis, and Antti Ukkonen. "Sparsification of Influence Networks." In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY: ACM Press, 2011. http://dx.doi.org/10.1145/2020408.2020492.
Zheng, Fei, Chaochao Chen, Lingjuan Lyu, and Binhui Yao. "Reducing Communication for Split Learning by Randomized Top-k Sparsification." In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/519.
Chakeri, Alireza, Hamidreza Farhidzadeh, and Lawrence O. Hall. "Spectral Sparsification in Spectral Clustering." In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7899979.
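Several conference papers above (Shi et al.; Zheng et al.) study gradient sparsification for communication-efficient training, where only the largest-magnitude gradient entries are transmitted. A minimal deterministic top-k sketch of the general idea — generic, and not the randomized variant of either cited paper — looks like this:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of grad, zero out the rest.
    Returns the sparsified tensor and the flat indices that were kept."""
    flat = grad.ravel()
    kept = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[kept] = flat[kept]
    return out.reshape(grad.shape), kept

g = np.array([0.1, -2.0, 0.3, 1.5, -0.05])
sparse_g, kept = topk_sparsify(g, 2)  # keeps -2.0 and 1.5, zeros the rest
```

In a distributed setting only the kept values and their indices are communicated; the dropped residual is typically accumulated locally and added to the next gradient.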