Journal articles on the topic 'Sparsification'

To see the other types of publications on this topic, follow the link: Sparsification.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Sparsification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Yuhan, Haojie Ye, Sanketh Vedula, Alex Bronstein, Ronald Dreslinski, Trevor Mudge, and Nishil Talati. "Demystifying Graph Sparsification Algorithms in Graph Properties Preservation." Proceedings of the VLDB Endowment 17, no. 3 (November 2023): 427–40. http://dx.doi.org/10.14778/3632093.3632106.

Abstract:
Graph sparsification is a technique that approximates a given graph by a sparse graph with a subset of vertices and/or edges. The goal of an effective sparsification algorithm is to maintain specific graph properties relevant to the downstream task while minimizing the graph's size. Graph algorithms often suffer from long execution times due to their irregularity and the large size of real-world graphs. Graph sparsification can be applied to greatly reduce the run time of graph algorithms by substituting the full graph with a much smaller sparsified graph, without significantly degrading the output quality. However, the interaction between numerous sparsifiers and graph properties is not widely explored, and the potential of graph sparsification is not fully understood. In this work, we cover 16 widely used graph metrics, 12 representative graph sparsification algorithms, and 14 real-world input graphs spanning various categories and exhibiting diverse characteristics, sizes, and densities. We developed a framework to extensively assess the performance of these sparsification algorithms against graph metrics, and provide insights into the results. Our study shows that no single sparsifier performs best at preserving all graph properties; e.g., sparsifiers that preserve distance-related graph properties (eccentricity) struggle to perform well on Graph Neural Networks (GNNs). This paper presents a comprehensive experimental study evaluating the performance of sparsification algorithms in preserving essential graph metrics. The insights inform future research on matching graph sparsification to graph algorithms to maximize benefits while minimizing quality degradation. Furthermore, we provide a framework to facilitate the future evaluation of evolving sparsification algorithms, graph metrics, and ever-growing graph data.
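
To give a concrete feel for the kind of comparison such a framework automates, the following minimal Python sketch (illustrative only, not the authors' framework) applies a uniform random-edge sparsifier to a synthetic graph and checks how one example metric, average clustering, is preserved; the graph model, keep fraction, and metric are arbitrary choices.

    # Illustrative only: uniform random-edge sparsification vs. one graph metric.
    import random
    import networkx as nx

    def random_edge_sparsify(G, keep_fraction=0.5, seed=0):
        """Keep a uniformly random subset of edges; all vertices are retained."""
        rng = random.Random(seed)
        edges = list(G.edges())
        kept = rng.sample(edges, int(keep_fraction * len(edges)))
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        H.add_edges_from(kept)
        return H

    G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=42)   # stand-in for a real-world input graph
    H = random_edge_sparsify(G, keep_fraction=0.5)
    print("average clustering, original:  ", nx.average_clustering(G))
    print("average clustering, sparsified:", nx.average_clustering(H))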
2

Parchas, Panos, Nikolaos Papailiou, Dimitris Papadias, and Francesco Bonchi. "Uncertain Graph Sparsification." IEEE Transactions on Knowledge and Data Engineering 30, no. 12 (December 1, 2018): 2435–49. http://dx.doi.org/10.1109/tkde.2018.2819651.

3

Li, Tao, Wencong Jiao, Li-Na Wang, and Guoqiang Zhong. "Automatic DenseNet Sparsification." IEEE Access 8 (2020): 62561–71. http://dx.doi.org/10.1109/access.2020.2984130.

4

Eppstein, David, Zvi Galil, Giuseppe F. Italiano, and Thomas H. Spencer. "Separator Based Sparsification." Journal of Computer and System Sciences 52, no. 1 (February 1996): 3–27. http://dx.doi.org/10.1006/jcss.1996.0002.

5

Lobacheva, Ekaterina, Nadezhda Chirkova, Alexander Markovich, and Dmitry Vetrov. "Structured Sparsification of Gated Recurrent Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4989–96. http://dx.doi.org/10.1609/aaai.v34i04.5938.

Abstract:
One of the most popular approaches for neural network compression is sparsification — learning sparse weight matrices. In structured sparsification, weights are set to zero by groups corresponding to structure units, e.g., neurons. We further develop the structured sparsification approach for gated recurrent neural networks, e.g., the Long Short-Term Memory (LSTM). Specifically, in addition to the sparsification of individual weights and neurons, we propose sparsifying the preactivations of gates. This makes some gates constant and simplifies the LSTM structure. We test our approach on text classification and language modeling tasks. Our method improves the neuron-wise compression of the model in most of the tasks. We also observe that the resulting structure of gate sparsity depends on the task and connect the learned structures to the specifics of the particular tasks.
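
As a hedged illustration of structured (group-wise) sparsification in general, rather than the authors' gate-preactivation method, the sketch below zeroes whole per-gate neuron groups of an LSTM weight matrix by their group norm; the threshold choice and model sizes are arbitrary.

    # Illustrative neuron-wise (structured) pruning of an LSTM weight matrix.
    import torch

    hidden, inp = 128, 64
    lstm = torch.nn.LSTM(input_size=inp, hidden_size=hidden)
    W = lstm.weight_ih_l0.detach().clone()              # (4*hidden, inp): i, f, g, o gates stacked
    group_norms = W.view(4, hidden, inp).norm(dim=2)    # one norm per gate per hidden neuron
    threshold = group_norms.median()                    # illustrative threshold choice
    mask = (group_norms >= threshold).float().unsqueeze(-1)
    with torch.no_grad():                               # zero entire structure units (rows)
        lstm.weight_ih_l0.copy_((W.view(4, hidden, inp) * mask).view(4 * hidden, inp))
    print("fraction of neuron groups kept:", mask.mean().item())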
6

Farina, Gabriele, and Tuomas Sandholm. "Fast Payoff Matrix Sparsification Techniques for Structured Extensive-Form Games." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 4999–5007. http://dx.doi.org/10.1609/aaai.v36i5.20431.

Abstract:
The practical scalability of many optimization algorithms for large extensive-form games is often limited by the games' huge payoff matrices. To ameliorate the issue, Zhang and Sandholm recently proposed a sparsification technique that factorizes the payoff matrix A into a sparser object A = Â + UVᵀ, where the total combined number of nonzeros of Â, U, and V is significantly smaller. Such a factorization can be used in place of the original payoff matrix in many optimization algorithms, such as interior-point and second-order methods, thus increasing the size of games that can be handled. Their technique significantly sparsifies poker (end)games, standard benchmarks used in computational game theory, AI, and more broadly. We show that the existence of extremely sparse factorizations in poker games can be tied to their particular Kronecker-product structure. We clarify how such structure arises and introduce the connection between that structure and sparsification. By leveraging such structure, we give two ways of computing strong sparsifications of poker games (as well as any other game with a similar structure) that are i) orders of magnitude faster to compute, ii) more numerically stable, and iii) produce a dramatically smaller number of nonzeros than the prior technique. Our techniques enable—for the first time—effective computation of high-precision Nash equilibria and strategies subject to constraints on the amount of allowed randomization. Furthermore, they significantly speed up parallel first-order game-solving algorithms; we show state-of-the-art speed on a GPU.
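
The benefit of such a factorization is easy to see in code: a sparse-plus-low-rank representation lets matrix-vector products inside a solver avoid ever materializing the dense payoff matrix. The sketch below uses random stand-in matrices, not actual poker endgames.

    # Illustrative only: A = A_hat + U V^T used for fast matrix-vector products.
    import numpy as np
    import scipy.sparse as sp

    m, n, r = 5000, 4000, 3
    A_hat = sp.random(m, n, density=1e-4, format="csr", random_state=0)   # sparse part
    U = np.random.default_rng(0).standard_normal((m, r))                  # low-rank factors
    V = np.random.default_rng(1).standard_normal((n, r))

    x = np.random.default_rng(2).standard_normal(n)
    y = A_hat @ x + U @ (V.T @ x)      # O(nnz(A_hat) + r(m+n)) instead of O(mn)
    print(y.shape)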
7

Batson, Joshua, Daniel A. Spielman, Nikhil Srivastava, and Shang-Hua Teng. "Spectral sparsification of graphs." Communications of the ACM 56, no. 8 (August 2013): 87–94. http://dx.doi.org/10.1145/2492007.2492029.

8

Bonnet, Édouard, and Vangelis Th Paschos. "Sparsification and subexponential approximation." Acta Informatica 55, no. 1 (October 12, 2016): 1–15. http://dx.doi.org/10.1007/s00236-016-0281-2.

9

Spielman, Daniel A., and Shang-Hua Teng. "Spectral Sparsification of Graphs." SIAM Journal on Computing 40, no. 4 (January 2011): 981–1025. http://dx.doi.org/10.1137/08074489x.

10

Butti, Silvia, and Stanislav Živný. "Sparsification of Binary CSPs." SIAM Journal on Discrete Mathematics 34, no. 1 (January 2020): 825–42. http://dx.doi.org/10.1137/19m1242446.

11

Egner, S., and T. Minkwitz. "Sparsification of Rectangular Matrices." Journal of Symbolic Computation 26, no. 2 (August 1998): 135–49. http://dx.doi.org/10.1006/jsco.1998.0204.

12

Liu, Tao, Zhi Wang, Hui He, Wei Shi, Liangliang Lin, Ran An, and Chenhao Li. "Efficient and Secure Federated Learning for Financial Applications." Applied Sciences 13, no. 10 (May 10, 2023): 5877. http://dx.doi.org/10.3390/app13105877.

Abstract:
Conventional machine learning (ML) and deep learning approaches require sharing customers’ sensitive information with an external credit bureau to generate a prediction model, thereby increasing the risk of privacy leakage. This poses a significant challenge for financial companies. To address this challenge, federated learning has emerged as a promising approach to protect data privacy. However, the high communication costs associated with federated systems, particularly for large neural networks, can be a bottleneck. To mitigate this issue, it is necessary to limit the number and size of communications for practical training of large neural structures. Gradient sparsification is a technique that has gained increasing attention as a method to reduce communication costs, as it updates only significant gradients and accumulates insignificant gradients locally. However, the secure aggregation framework cannot directly employ gradient sparsification. To overcome this limitation, this article proposes two sparsification methods for reducing the communication costs of federated learning. The first method is a time-varying hierarchical sparsification method for model parameter updates, which addresses the challenge of maintaining model accuracy after a high sparsity ratio. This method can significantly reduce the cost of a single communication. The second method is to apply sparsification to the secure aggregation framework. Specifically, the encryption mask matrix is sparsified to reduce communication costs while protecting privacy. Experiments demonstrate that our method can reduce the upload communication costs to approximately 2.9% to 18.9% of the conventional federated learning algorithm under different non-IID experiment settings when the sparsity rate is 0.01.
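
For context, the sketch below shows the common top-k gradient sparsification pattern with local accumulation of the unsent residual; it illustrates the general technique the article builds on, not its time-varying hierarchical scheme or the sparsified secure-aggregation masking.

    # Illustrative top-k gradient sparsification with local residual accumulation.
    import numpy as np

    def sparsify_topk(grad, residual, k):
        """Send only the k largest-magnitude entries of grad + residual; keep the rest locally."""
        accumulated = grad + residual
        idx = np.argpartition(np.abs(accumulated), -k)[-k:]
        sparse_update = np.zeros_like(accumulated)
        sparse_update[idx] = accumulated[idx]
        new_residual = accumulated - sparse_update       # unsent part stays on the client
        return sparse_update, new_residual

    rng = np.random.default_rng(0)
    grad = rng.standard_normal(10_000)
    residual = np.zeros_like(grad)
    update, residual = sparsify_topk(grad, residual, k=100)   # ~1% of entries communicated
    print(np.count_nonzero(update), "of", grad.size, "entries sent")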
13

Rafiey, Akbar, and Yuichi Yoshida. "Sparsification of Decomposable Submodular Functions." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10336–44. http://dx.doi.org/10.1609/aaai.v36i9.21275.

Abstract:
Submodular functions are at the core of many machine learning and data mining tasks. The underlying submodular functions for many of these tasks are decomposable, i.e., they are sum of several simple submodular functions. In many data intensive applications, however, the number of underlying submodular functions in the original function is so large that we need prohibitively large amount of time to process it and/or it does not even fit in the main memory. To overcome this issue, we introduce the notion of sparsification for decomposable submodular functions whose objective is to obtain an accurate approximation of the original function that is a (weighted) sum of only a few submodular functions. Our main result is a polynomial-time randomized sparsification algorithm such that the expected number of functions used in the output is independent of the number of underlying submodular functions in the original function. We also study the effectiveness of our algorithm under various constraints such as matroid and cardinality constraints. We complement our theoretical analysis with an empirical study of the performance of our algorithm.
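
A generic way to picture such a sparsifier is importance sampling over the component functions: sample a few f_i and reweight them so that their sum approximates the original. The sketch below uses simple synthetic components and a sampling distribution proportional to f_i evaluated on the ground set; it is not the paper's algorithm and carries none of its guarantees.

    # Illustrative only: approximate F(S) = sum_i f_i(S) with a few reweighted components.
    import numpy as np

    rng = np.random.default_rng(0)
    m, ground = 1000, set(range(50))
    weights = rng.random((m, 50))
    components = [lambda S, w=weights[i]: w[list(S)].sum() ** 0.5 for i in range(m)]

    full_values = np.array([f(ground) for f in components])
    p = full_values / full_values.sum()          # sampling distribution (illustrative choice)
    k = 50
    sampled = rng.choice(m, size=k, replace=True, p=p)

    def F_full(S):
        return sum(f(S) for f in components)

    def F_sparse(S):                             # unbiased importance-sampling estimate
        return sum(components[i](S) / (k * p[i]) for i in sampled)

    S = set(rng.choice(50, size=10, replace=False))
    print(F_full(S), "~", F_sparse(S))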
14

Krishnan, Srilal. "Sparsification: In Theory and Practice." Journal of Applied Mathematics and Physics 08, no. 01 (2020): 100–106. http://dx.doi.org/10.4236/jamp.2020.81008.

15

Sun, He, and Luca Zanetti. "Distributed Graph Clustering and Sparsification." ACM Transactions on Parallel Computing 6, no. 3 (December 5, 2019): 1–23. http://dx.doi.org/10.1145/3364208.

16

Crescenzi, Pilu, Alberto Del Lungo, Roberto Grossi, Elena Lodi, Linda Pagli, and Gianluca Rossi. "Text sparsification via local maxima." Theoretical Computer Science 304, no. 1-3 (July 2003): 341–64. http://dx.doi.org/10.1016/s0304-3975(03)00142-7.

17

Moitra, Ankur. "Vertex Sparsification and Oblivious Reductions." SIAM Journal on Computing 42, no. 6 (January 2013): 2400–2423. http://dx.doi.org/10.1137/100787337.

18

Spielman, Daniel A., and Nikhil Srivastava. "Graph Sparsification by Effective Resistances." SIAM Journal on Computing 40, no. 6 (January 2011): 1913–26. http://dx.doi.org/10.1137/080734029.

19

Jansen, Bart M. P. "On Sparsification for Computing Treewidth." Algorithmica 71, no. 3 (August 14, 2014): 605–35. http://dx.doi.org/10.1007/s00453-014-9924-2.

20

Ouyang, Yingkai, David R. White, and Earl T. Campbell. "Compilation by stochastic Hamiltonian sparsification." Quantum 4 (February 27, 2020): 235. http://dx.doi.org/10.22331/q-2020-02-27-235.

Abstract:
Simulation of quantum chemistry is expected to be a principal application of quantum computing. In quantum simulation, a complicated Hamiltonian describing the dynamics of a quantum system is decomposed into its constituent terms, where the effect of each term during time-evolution is individually computed. For many physical systems, the Hamiltonian has a large number of terms, constraining the scalability of established simulation methods. To address this limitation we introduce a new scheme that approximates the actual Hamiltonian with a sparser Hamiltonian containing fewer terms. By stochastically sparsifying weaker Hamiltonian terms, we benefit from a quadratic suppression of errors relative to deterministic approaches. Relying on optimality conditions from convex optimisation theory, we derive an appropriate probability distribution for the weaker Hamiltonian terms, and compare its error bounds with other probability ansatzes for some electronic structure Hamiltonians. Tuning the sparsity of our approximate Hamiltonians allows our scheme to interpolate between two recent random compilers: qDRIFT and randomized first order Trotter. Our scheme is thus an algorithm that combines the strengths of randomised Trotterisation with the efficiency of qDRIFT, and for intermediate gate budgets, outperforms both of these prior methods.
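
To make the mechanism concrete, the sketch below keeps the strong coefficients of a toy Hamiltonian deterministically and importance-samples the weak ones so that the sparse Hamiltonian matches the original in expectation; the cutoff and the sampling probabilities (proportional to |coefficient|) are illustrative stand-ins, not the optimized distribution derived in the paper.

    # Illustrative only: stochastic sparsification of H = sum_j h_j P_j over its coefficients.
    import numpy as np

    rng = np.random.default_rng(0)
    coeffs = rng.standard_normal(200) * np.logspace(0, -4, 200)   # many weak terms
    strong = np.abs(coeffs) >= 1e-2

    weak_idx = np.flatnonzero(~strong)
    p = np.abs(coeffs[weak_idx]) / np.abs(coeffs[weak_idx]).sum()
    n_samples = 20
    picked = rng.choice(weak_idx, size=n_samples, p=p)

    sparse_coeffs = np.zeros_like(coeffs)
    sparse_coeffs[strong] = coeffs[strong]
    for j in picked:                                   # reweight so E[sparse_coeffs] = coeffs
        sparse_coeffs[j] += coeffs[j] / (n_samples * p[weak_idx == j][0])

    print("terms kept:", np.count_nonzero(sparse_coeffs), "of", coeffs.size)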
21

Willmore, Ben D. B., and Andrew J. King. "Auditory Cortex: Representation through Sparsification?" Current Biology 19, no. 24 (December 2009): R1123–R1125. http://dx.doi.org/10.1016/j.cub.2009.11.003.

22

Wang, Jingchuan, and Weidong Chen. "An Improved Extended Information Filter SLAM Algorithm Based on Omnidirectional Vision." Journal of Applied Mathematics 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/948505.

Abstract:
In SLAM applications, omnidirectional vision extracts wide-scale information and more features from the environment. Traditional algorithms bring enormous computational complexity to omnidirectional vision SLAM. An improved extended information filter SLAM algorithm based on omnidirectional vision is presented in this paper. Based on an analysis of the structural characteristics of the information matrix, this algorithm improves computational efficiency. Considering the characteristics of omnidirectional images, an improved sparsification rule is also proposed: sparse observation information is utilized and the strongest global correlations are maintained, so the accuracy of the estimated result is ensured by proper sparsification of the information matrix. Then, through error analysis, the error caused by sparsification can be eliminated by a relocation method. Experimental results show that this method makes full use of repeated observations of landmarks in omnidirectional vision and maintains great efficiency and high reliability in mapping and localization.
23

Nair, Aditya G., and Kunihiko Taira. "Network-theoretic approach to sparsified discrete vortex dynamics." Journal of Fluid Mechanics 768 (March 10, 2015): 549–71. http://dx.doi.org/10.1017/jfm.2015.97.

Abstract:
We examine discrete vortex dynamics in two-dimensional flow through a network-theoretic approach. The interaction of the vortices is represented with a graph, which allows the use of network-theoretic approaches to identify key vortex-to-vortex interactions. We employ sparsification techniques on these graph representations based on spectral theory to construct sparsified models and evaluate the dynamics of vortices in the sparsified set-up. Identification of vortex structures based on graph sparsification and sparse vortex dynamics is illustrated through an example of point-vortex clusters interacting amongst themselves. We also evaluate the performance of sparsification with increasing number of point vortices. The sparsified-dynamics model developed with spectral graph theory requires a reduced number of vortex-to-vortex interactions but agrees well with the full nonlinear dynamics. Furthermore, the sparsified model derived from the sparse graphs conserves the invariants of discrete vortex dynamics. We highlight the similarities and differences between the present sparsified-dynamics model and reduced-order models.
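
As a rough illustration of the spectral machinery involved (not the paper's vortex model), the sketch below sparsifies a small weighted graph by sampling edges with probability proportional to weight times effective resistance and reweighting them, in the spirit of Spielman-Srivastava sampling.

    # Illustrative effective-resistance sampling on a small random weighted graph.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    W = np.triu(rng.random((n, n)) * (rng.random((n, n)) < 0.3), k=1)   # random weighted edges
    edges = np.argwhere(W > 0)
    w = W[W > 0]

    A = W + W.T
    L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian
    Lp = np.linalg.pinv(L)                           # pseudoinverse gives effective resistances
    reff = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

    p = (w * reff) / (w * reff).sum()                # sample edges proportional to w * R_eff
    q = 60                                           # number of samples (illustrative)
    picked = rng.choice(len(edges), size=q, p=p)

    H = np.zeros_like(W)
    for i in picked:
        u, v = edges[i]
        H[u, v] += w[i] / (q * p[i])                 # reweight to preserve the Laplacian in expectation
    print("edges:", len(edges), "->", np.count_nonzero(H))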
24

Li, Yihang. "Sparse-Aware Deep Learning Accelerator." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 305–10. http://dx.doi.org/10.54097/hset.v39i.6544.

Abstract:
Because hardware implementation of convolutional neural network computing is difficult, most previous convolutional neural network accelerator designs focused on the bottlenecks of computational performance and bandwidth, overlooking the importance of network sparsity for accelerator design. In recent years, a few convolutional neural network accelerators have been able to exploit this sparsity, but they usually struggle to balance computational flexibility, parallel efficiency, and resource overhead. Since the deployment of convolutional neural networks (CNNs) on embedded devices is constrained by real-time requirements, and CNN convolution computation exhibits a large degree of sparsity, this paper surveys sparsification methods at both the algorithm level and the FPGA implementation level. The different sparsification methods, together with research and analysis across different application layers, are introduced, and the advantages and development trends of sparsification are analyzed and summarized.
25

Zhang, Hao, Bo He, and Ning Luan. "Sparse Extended Information Filter for AUV SLAM: Insights into the Optimal Sparse Time." Applied Mechanics and Materials 427-429 (September 2013): 1670–73. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.1670.

Abstract:
The sparse extended information filter-based simultaneous localization and mapping (SEIF-based SLAM) algorithm offers significant advantages in terms of computation time and storage memory. However, SEIF-SLAM is prone to overconfidence due to its sparsification strategy. In this paper we consider the time consumption and information loss of the sparse operation, and derive the optimal sparse time. To verify the feasibility of the sparsification, a sea trial with the autonomous underwater vehicle (AUV) C-Ranger was conducted in Tuandao Bay. The experimental results show that the improved algorithm is much more effective and accurate compared with other methods.
26

Danciu, Daniel, Mikhail Karasikov, Harun Mustafa, André Kahles, and Gunnar Rätsch. "Topology-based sparsification of graph annotations." Bioinformatics 37, Supplement_1 (July 1, 2021): i169–i176. http://dx.doi.org/10.1093/bioinformatics/btab330.

Abstract:
Motivation: Since the amount of published biological sequencing data is growing exponentially, efficient methods for storing and indexing this data are more needed than ever to truly benefit from this invaluable resource for biomedical research. Labeled de Bruijn graphs are a frequently-used approach for representing large sets of sequencing data. While significant progress has been made to succinctly represent the graph itself, efficient methods for storing labels on such graphs are still rapidly evolving. Results: In this article, we present RowDiff, a new technique for compacting graph labels by leveraging expected similarities in annotations of vertices adjacent in the graph. RowDiff can be constructed in linear time relative to the number of vertices and labels in the graph, and in space proportional to the graph size. In addition, construction can be efficiently parallelized and distributed, making the technique applicable to graphs with trillions of nodes. RowDiff can be viewed as an intermediary sparsification step of the original annotation matrix and can thus naturally be combined with existing generic schemes for compressed binary matrices. Experiments on 10 000 RNA-seq datasets show that RowDiff combined with multi-BRWT results in a 30% reduction in annotation footprint over Mantis-MST, the previously known most compact annotation representation. Experiments on the sparser Fungi subset of the RefSeq collection show that applying RowDiff sparsification reduces the size of individual annotation columns stored as compressed bit vectors by an average factor of 42. When combining RowDiff with a multi-BRWT representation, the resulting annotation is 26 times smaller than Mantis-MST. Availability and implementation: RowDiff is implemented in C++ within the MetaGraph framework. The source code and the data used in the experiments are publicly available at https://github.com/ratschlab/row_diff.
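
The core diff idea is easy to illustrate with a toy example (hypothetical vertex names and labels, nothing from the MetaGraph implementation): store each vertex's label set as a symmetric difference with a designated successor, so that rows of adjacent, similarly annotated vertices compress to tiny sets.

    # Toy illustration of storing annotation rows as diffs along successor pointers.
    labels = {                       # hypothetical annotation rows (vertex -> label set)
        "A": {"human", "liver"},
        "B": {"human", "liver"},
        "C": {"human", "liver", "tumor"},
    }
    succ = {"A": "B", "B": "C"}      # "C" is an anchor that stores its row explicitly

    diffs = {v: labels[v] ^ labels[succ[v]] for v in succ}   # usually tiny sets
    anchors = {v: labels[v] for v in labels if v not in succ}

    def reconstruct(v):
        """Follow successor pointers to an anchor, applying diffs along the way."""
        if v in anchors:
            return set(anchors[v])
        return reconstruct(succ[v]) ^ diffs[v]

    assert all(reconstruct(v) == labels[v] for v in labels)
    print(diffs)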
27

Li, Jiayu, Tianyun Zhang, Hao Tian, Shengmin Jin, Makan Fardad, and Reza Zafarani. "Graph sparsification with graph convolutional networks." International Journal of Data Science and Analytics 13, no. 1 (October 13, 2021): 33–46. http://dx.doi.org/10.1007/s41060-021-00288-8.

28

Hegarty, F., P. Ó Catháin, and Y. Zhao. "Sparsification of Matrices and Compressed Sensing." Irish Mathematical Society Bulletin 0081 (2018): 5–22. http://dx.doi.org/10.33232/bims.0081.5.22.

29

Fung, Wai-Shing, Ramesh Hariharan, Nicholas J. A. Harvey, and Debmalya Panigrahi. "A General Framework for Graph Sparsification." SIAM Journal on Computing 48, no. 4 (January 2019): 1196–223. http://dx.doi.org/10.1137/16m1091666.

30

Schreiter, Jens, Duy Nguyen-Tuong, and Marc Toussaint. "Efficient sparsification for Gaussian process regression." Neurocomputing 192 (June 2016): 29–37. http://dx.doi.org/10.1016/j.neucom.2016.02.032.

31

Foti, Nicholas J., James M. Hughes, and Daniel N. Rockmore. "Nonparametric Sparsification of Complex Multiscale Networks." PLoS ONE 6, no. 2 (February 8, 2011): e16431. http://dx.doi.org/10.1371/journal.pone.0016431.

32

Aubrun, Guillaume, and Cécilia Lancien. "Zonoids and sparsification of quantum measurements." Positivity 20, no. 1 (April 30, 2015): 1–23. http://dx.doi.org/10.1007/s11117-015-0337-5.

33

Honeine, Paul. "Approximation Errors of Online Sparsification Criteria." IEEE Transactions on Signal Processing 63, no. 17 (September 2015): 4700–4709. http://dx.doi.org/10.1109/tsp.2015.2442960.

34

Ergür, Alperen A. "Approximating Nonnegative Polynomials via Spectral Sparsification." SIAM Journal on Optimization 29, no. 1 (January 2019): 852–73. http://dx.doi.org/10.1137/17m1121743.

35

Chekuri, Chandra, and Chao Xu. "Minimum Cuts and Sparsification in Hypergraphs." SIAM Journal on Computing 47, no. 6 (January 2018): 2118–56. http://dx.doi.org/10.1137/18m1163865.

36

Durfee, David, John Peebles, Richard Peng, and Anup B. Rao. "Determinant-Preserving Sparsification of SDDM Matrices." SIAM Journal on Computing 49, no. 4 (January 2020): FOCS17-350–FOCS17-408. http://dx.doi.org/10.1137/18m1165979.

37

Chen, Xiaoman, Romain Tessera, Xianjin Wang, and Guoliang Yu. "Metric sparsification and operator norm localization." Advances in Mathematics 218, no. 5 (August 2008): 1496–511. http://dx.doi.org/10.1016/j.aim.2008.03.016.

38

Yang, Janghoon, and Yungho Choi. "LQR-Based Sparsification Algorithms of Consensus Networks." Electronics 10, no. 9 (May 3, 2021): 1082. http://dx.doi.org/10.3390/electronics10091082.

Abstract:
The performance of multiagent systems depends heavily on information flow. As agents are populated more densely, some information flow can be redundant. Thus, there can be a tradeoff between communication overhead and control performance. To address this issue, the optimization of the communication topology for the consensus network has been studied. In this study, three different suboptimal topology algorithms are proposed to minimize the linear quadratic regulator (LQR) cost considering the communication penalty, since the optimal solution requires a brute-force search, which has exponential complexity. The first two algorithms were designed to minimize the maximum eigenvalue of the Riccati matrix for the LQR, while the third algorithm was designed to remove edges sequentially in a greedy manner through evaluating the LQR cost directly. The first and second algorithms differ in that the active edges of a consensus network are determined at the end of the iterations in the first, while sequentially in the second. Numerical evaluations show that the proposed algorithms reduce the LQR cost significantly by optimizing communication topology, while the proposed algorithm may achieve optimal performance with a properly chosen parameterization for a small consensus network. While the three algorithms show similar performance with the increasing number of agents, the quantized terminal cost matrix optimization (QTCMO) algorithm shows significantly less complexity within the order of several tenths than those of the other two algorithms.
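
The greedy variant is the easiest to sketch: repeatedly remove the edge whose removal lowers the total cost, stopping when no removal helps. In the sketch below, lqr_cost is a hypothetical placeholder (here a crude connectivity-plus-communication proxy), not the Riccati-based LQR cost used in the paper.

    # Illustrative greedy edge-removal loop; lqr_cost is a stand-in for a real LQR evaluation.
    import networkx as nx

    def lqr_cost(G, comm_penalty=0.1):
        lam = sorted(nx.laplacian_spectrum(G))          # crude proxy: penalize weak connectivity
        return 1.0 / max(lam[1], 1e-9) + comm_penalty * G.number_of_edges()

    G = nx.complete_graph(8)
    improved = True
    while improved:
        improved = False
        base = lqr_cost(G)
        for e in list(G.edges()):
            H = G.copy()
            H.remove_edge(*e)
            if nx.is_connected(H) and lqr_cost(H) < base:
                G, improved = H, True
                break
    print("remaining edges:", G.number_of_edges())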
39

Bodwin, Greg. "A note on distance-preserving graph sparsification." Information Processing Letters 174 (March 2022): 106205. http://dx.doi.org/10.1016/j.ipl.2021.106205.

40

Möhl, Mathias, Raheleh Salari, Sebastian Will, Rolf Backofen, and S. Cenk Sahinalp. "Sparsification of RNA structure prediction including pseudoknots." Algorithms for Molecular Biology 5, no. 1 (2010): 39. http://dx.doi.org/10.1186/1748-7188-5-39.

41

Kapralov, M., Y. T. Lee, C. N. Musco, C. P. Musco, and A. Sidford. "Single Pass Spectral Sparsification in Dynamic Streams." SIAM Journal on Computing 46, no. 1 (January 2017): 456–77. http://dx.doi.org/10.1137/141002281.

42

Vallvé, Joan, Joan Solà, and Juan Andrade-Cetto. "Pose-graph SLAM sparsification using factor descent." Robotics and Autonomous Systems 119 (September 2019): 108–18. http://dx.doi.org/10.1016/j.robot.2019.06.004.

43

Broutin, Nicolas, Luc Devroye, and Gábor Lugosi. "Almost optimal sparsification of random geometric graphs." Annals of Applied Probability 26, no. 5 (October 2016): 3078–109. http://dx.doi.org/10.1214/15-aap1170.

44

Hon, Wing-Kai, Tsung-Han Ku, Tak-Wah Lam, Rahul Shah, Siu-Lung Tam, Sharma V. Thankachan, and Jeffrey Scott Vitter. "Compressing Dictionary Matching Index via Sparsification Technique." Algorithmica 72, no. 2 (January 7, 2014): 515–38. http://dx.doi.org/10.1007/s00453-013-9863-3.

45

Bueno, Andre A., and Magno T. M. Silva. "Gram-Schmidt-Based Sparsification for Kernel Dictionary." IEEE Signal Processing Letters 27 (2020): 1130–34. http://dx.doi.org/10.1109/lsp.2020.3004022.

46

Gauthier, Bertrand, and Johan A. K. Suykens. "Optimal Quadrature-Sparsification for Integral Operator Approximation." SIAM Journal on Scientific Computing 40, no. 5 (January 2018): A3636–A3674. http://dx.doi.org/10.1137/17m1123614.

47

Kelner, Jonathan A., and Alex Levin. "Spectral Sparsification in the Semi-streaming Setting." Theory of Computing Systems 53, no. 2 (April 26, 2012): 243–62. http://dx.doi.org/10.1007/s00224-012-9396-1.

48

Siami, Milad, and Nader Motee. "Network Sparsification with Guaranteed Systemic Performance Measures." IFAC-PapersOnLine 48, no. 22 (2015): 246–51. http://dx.doi.org/10.1016/j.ifacol.2015.10.338.

49

Wang, Hao, Wenjia Zhang, and Guohua Liu. "TSNet: Token Sparsification for Efficient Video Transformer." Applied Sciences 13, no. 19 (September 24, 2023): 10633. http://dx.doi.org/10.3390/app131910633.

Abstract:
In the domain of video recognition, video transformers have demonstrated remarkable performance, albeit at significant computational cost. This paper introduces TSNet, an innovative approach for dynamically selecting informative tokens from given video samples. The proposed method involves a lightweight prediction module that assigns importance scores to each token in the video. Tokens with top scores are then utilized for self-attention computation. We apply the Gumbel-softmax technique to sample from the output of the prediction module, enabling end-to-end optimization of the prediction module. We aim to extend our method to hierarchical vision transformers rather than single-scale vision transformers. We use a simple linear module to project the pruned tokens, and the projected result is then concatenated with the output of the self-attention network to maintain the same number of tokens while capturing interactions with the selected tokens. Since feedforward networks (FFNs) contribute significant computation, we also propose linear projection for the pruned tokens to accelerate the model, while the existing FFN layer processes the selected tokens. Finally, to ensure that the structure of the output remains unchanged, the two groups of tokens are reassembled based on their spatial positions in the original feature map. The experiments conducted primarily focus on the Kinetics-400 dataset using UniFormer, a hierarchical video transformer backbone that incorporates convolution in its self-attention block. Our model demonstrates comparable results to the original model while reducing computation by over 13%. Notably, by hierarchically pruning 70% of input tokens, our approach decreases FLOPs by 55.5%, while the decline in accuracy is confined to 2%. Additional tests of applicability and adaptability with other transformers, such as the Video Swin Transformer, indicate the method's potential on video recognition benchmarks. By implementing our token sparsification framework, video vision transformers can achieve a remarkable balance between enhanced computational speed and a slight reduction in accuracy.
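
A stripped-down version of the token-selection step might look like the sketch below (hypothetical module names and sizes); it keeps the top-scoring tokens for self-attention and routes the rest through a cheap linear projection, omitting the Gumbel-softmax sampling and the positional reassembly described in the abstract.

    # Illustrative token sparsification: score, keep top-k, cheaply project the rest.
    import torch

    B, N, D = 2, 196, 384                       # batch, tokens, embedding dim (arbitrary)
    x = torch.randn(B, N, D)
    scorer = torch.nn.Linear(D, 1)              # lightweight prediction module
    proj = torch.nn.Linear(D, D)                # cheap path for pruned tokens

    scores = scorer(x).squeeze(-1)              # (B, N) importance scores
    k = int(0.3 * N)                            # keep 30% of tokens for self-attention
    keep_idx = scores.topk(k, dim=1).indices
    keep_mask = torch.zeros(B, N, dtype=torch.bool)
    keep_mask[torch.arange(B).unsqueeze(1), keep_idx] = True

    kept = x[keep_mask].view(B, k, D)           # these would go through self-attention
    pruned = proj(x[~keep_mask].view(B, N - k, D))
    out = torch.cat([kept, pruned], dim=1)      # same token count as the input
    print(out.shape)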
50

Zahn, Olivia, Jorge Bustamante, Callin Switzer, Thomas L. Daniel, and J. Nathan Kutz. "Pruning deep neural networks generates a sparse, bio-inspired nonlinear controller for insect flight." PLOS Computational Biology 18, no. 9 (September 27, 2022): e1010512. http://dx.doi.org/10.1371/journal.pcbi.1010512.

Abstract:
Insect flight is a strongly nonlinear and actuated dynamical system. As such, strategies for understanding its control have typically relied on either model-based methods or linearizations thereof. Here we develop a framework that combines model predictive control on an established flight dynamics model and deep neural networks (DNN) to create an efficient method for solving the inverse problem of flight control. We turn to natural systems for inspiration since they inherently demonstrate network pruning with the consequence of yielding more efficient networks for a specific set of tasks. This bio-inspired approach allows us to leverage network pruning to optimally sparsify a DNN architecture in order to perform flight tasks with as few neural connections as possible, however, there are limits to sparsification. Specifically, as the number of connections falls below a critical threshold, flight performance drops considerably. We develop sparsification paradigms and explore their limits for control tasks. Monte Carlo simulations also quantify the statistical distribution of network weights during pruning given initial random weights of the DNNs. We demonstrate that on average, the network can be pruned to retain a small amount of original network weights and still perform comparably to its fully-connected counterpart. The relative number of remaining weights, however, is highly dependent on the initial architecture and size of the network. Overall, this work shows that sparsely connected DNNs are capable of predicting the forces required to follow flight trajectories. Additionally, sparsification has sharp performance limits.
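
For readers unfamiliar with pruning, the sketch below shows plain global magnitude pruning of a small feed-forward network; it illustrates the generic operation only, not the paper's model-predictive-control pipeline or its pruning schedule.

    # Illustrative global magnitude pruning of a small network.
    import torch

    net = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.Tanh(), torch.nn.Linear(64, 4))
    sparsity = 0.9                                        # prune 90% of the weights

    all_weights = torch.cat([p.detach().abs().flatten()
                             for name, p in net.named_parameters() if "weight" in name])
    threshold = torch.quantile(all_weights, sparsity)     # global magnitude threshold
    with torch.no_grad():
        for name, p in net.named_parameters():
            if "weight" in name:
                p.mul_((p.abs() >= threshold).float())

    kept = sum(int((p != 0).sum()) for name, p in net.named_parameters() if "weight" in name)
    total = sum(p.numel() for name, p in net.named_parameters() if "weight" in name)
    print(f"kept {kept}/{total} weights")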