
Journal articles on the topic 'Sparse bound'

Consult the top 50 journal articles for your research on the topic 'Sparse bound.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Lorist, Emiel, and Zoe Nieraeth. "Sparse domination implies vector-valued sparse domination." Mathematische Zeitschrift 301, no. 1 (January 12, 2022): 1–35. http://dx.doi.org/10.1007/s00209-021-02943-z.

Full text
Abstract:
We prove that scalar-valued sparse domination of a multilinear operator implies vector-valued sparse domination for tuples of quasi-Banach function spaces, for which we introduce a multilinear analogue of the UMD condition. This condition is characterized by the boundedness of the multisublinear Hardy-Littlewood maximal operator and goes beyond examples in which a UMD condition is assumed on each individual space and includes e.g. iterated Lebesgue, Lorentz, and Orlicz spaces. Our method allows us to obtain sharp vector-valued weighted bounds directly from scalar-valued sparse domination, without the use of a Rubio de Francia type extrapolation result. We apply our result to obtain new vector-valued bounds for multilinear Calderón-Zygmund operators as well as recover the old ones with a new sharp weighted bound. Moreover, in the Banach function space setting we improve upon recent vector-valued bounds for the bilinear Hilbert transform.
APA, Harvard, Vancouver, ISO, and other styles
2

Shparlinski, Igor E., and José Felipe Voloch. "Value Sets of Sparse Polynomials." Canadian Mathematical Bulletin 63, no. 1 (September 24, 2019): 187–96. http://dx.doi.org/10.4153/s0008439519000316.

Full text
Abstract:
We obtain a new lower bound on the size of the value set $\mathscr{V}(f)=f(\mathbb{F}_{p})$ of a sparse polynomial $f\in \mathbb{F}_{p}[X]$ over a finite field of $p$ elements when $p$ is prime. This bound is uniform with respect to the degree and depends on some natural arithmetic properties of the degrees of the monomial terms of $f$ and the number of these terms. Our result is stronger than those that can be extracted from the bounds on multiplicities of individual values in $\mathscr{V}(f)$.
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Rui, Rui Xin, Margo Seltzer, and Cynthia Rudin. "Optimal Sparse Regression Trees." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11270–79. http://dx.doi.org/10.1609/aaai.v37i9.26334.

Full text
Abstract:
Regression trees are one of the oldest forms of AI models, and their predictions can be made without a calculator, which makes them broadly useful, particularly for high-stakes applications. Within the large literature on regression trees, there has been little effort towards fully provable optimization, mainly due to the computational hardness of the problem. This work proposes a dynamic-programming-with-bounds approach to the construction of provably optimal sparse regression trees. We leverage a novel lower bound based on an optimal solution to the k-means clustering problem on one-dimensional data. We are often able to find optimal sparse trees in seconds, even for challenging datasets that involve large numbers of samples and highly correlated features.
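The lower bound mentioned in the abstract rests on optimal 1-D k-means which, unlike the general k-means problem, can be solved exactly by dynamic programming over the sorted values (optimal clusters are contiguous in 1-D). A minimal illustrative sketch of that subroutine, not the authors' implementation:

```python
def kmeans_1d_cost(values, k):
    """Minimal within-cluster sum of squared errors when partitioning
    1-D `values` into at most `k` clusters (exact, via DP on sorted data)."""
    xs = sorted(values)
    n = len(xs)
    # Prefix sums give O(1) SSE of any contiguous segment.
    p1 = [0.0] * (n + 1)   # running sum of x
    p2 = [0.0] * (n + 1)   # running sum of x^2
    for i, x in enumerate(xs):
        p1[i + 1] = p1[i] + x
        p2[i + 1] = p2[i] + x * x

    def sse(i, j):  # SSE of segment xs[i:j] around its own mean
        s, q, m = p1[j] - p1[i], p2[j] - p2[i], j - i
        return q - s * s / m

    INF = float("inf")
    # dp[c][j] = best cost of covering the first j points with c clusters
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(c - 1, j):
                cand = dp[c - 1][i] + sse(i, j)
                if cand < dp[c][j]:
                    dp[c][j] = cand
    return min(dp[c][n] for c in range(1, k + 1))
```

In a branch-and-bound tree search, a value like this (computed on the labels reaching a node) can never be beaten by any regression tree with at most k leaves below that node, which is what makes it usable for pruning.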
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Wengu, and Huanmin Ge. "A Sharp Bound on RIC in Generalized Orthogonal Matching Pursuit." Canadian Mathematical Bulletin 61, no. 1 (March 1, 2018): 40–54. http://dx.doi.org/10.4153/cmb-2017-009-6.

Full text
Abstract:
The generalized orthogonal matching pursuit (gOMP) algorithm has received much attention in recent years as a natural extension of orthogonal matching pursuit (OMP). It is used to recover sparse signals in compressive sensing. In this paper, a new bound is obtained for the exact reconstruction of every K-sparse signal via the gOMP algorithm in the noiseless case. That is, if the restricted isometry constant (RIC) δNK+1 of the sensing matrix A satisfies the condition established in the paper, then the gOMP can perfectly recover every K-sparse signal x from y = Ax. Furthermore, the bound is proved to be sharp. In the noisy case, the above bound on the RIC, combined with an extra condition on the minimum magnitude of the nonzero components of K-sparse signals, guarantees that the gOMP selects all of the support indices of the K-sparse signals.
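For readers unfamiliar with the algorithm being analyzed, here is a minimal sketch of gOMP; the function name and defaults are illustrative, not from the paper. Each round greedily adds the N columns most correlated with the residual (N = 1 reduces to plain OMP), then re-fits the coefficients by least squares:

```python
import numpy as np

def gomp(A, y, K, N=2, tol=1e-10):
    """Generalized OMP sketch: select N new atoms per round, re-fit by
    least squares on the accumulated support, stop after K rounds or
    when the residual is negligible."""
    support = []
    r = y.copy()
    x_s = np.zeros(0)
    for _ in range(K):
        corr = np.abs(A.T @ r)
        corr[support] = -1.0                       # never re-pick chosen atoms
        picks = np.argsort(corr)[-N:]              # N largest correlations
        support.extend(int(i) for i in picks)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

With an orthonormal sensing matrix the correlations equal the residual's coefficients exactly, so recovery succeeds trivially; the paper's RIC condition is what extends the guarantee to general matrices.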
APA, Harvard, Vancouver, ISO, and other styles
5

Ferber, Asaf, Gweneth McKinley, and Wojciech Samotij. "Supersaturated Sparse Graphs and Hypergraphs." International Mathematics Research Notices 2020, no. 2 (March 8, 2018): 378–402. http://dx.doi.org/10.1093/imrn/rny030.

Full text
Abstract:
Abstract A central problem in extremal graph theory is to estimate, for a given graph H, the number of H-free graphs on a given set of n vertices. In the case when H is not bipartite, Erdős, Frankl, and Rödl proved that there are 2^((1+o(1))·ex(n, H)) such graphs. In the bipartite case, however, bounds of the form 2^(O(ex(n, H))) have been proven only for relatively few special graphs H. As a first attempt at addressing this problem in full generality, we show that such a bound follows merely from a rather natural assumption on the growth rate of n ↦ ex(n, H); an analogous statement remains true when H is a uniform hypergraph. Subsequently, we derive several new results, along with most previously known estimates, as simple corollaries of our theorem. At the heart of our proof lies a general supersaturation statement that extends the seminal work of Erdős and Simonovits. The bounds on the number of H-free hypergraphs are derived from it using the method of hypergraph containers.
APA, Harvard, Vancouver, ISO, and other styles
6

FELDHEIM, OHAD N., and MICHAEL KRIVELEVICH. "Winning Fast in Sparse Graph Construction Games." Combinatorics, Probability and Computing 17, no. 6 (November 2008): 781–91. http://dx.doi.org/10.1017/s0963548308009401.

Full text
Abstract:
A graph construction game is a Maker–Breaker game. Maker and Breaker take turns in choosing previously unoccupied edges of the complete graph K_N. Maker's aim is to claim a copy of a given target graph G, while Breaker's aim is to prevent Maker from doing so. In this paper we show that if G is a d-degenerate graph on n vertices and N > d^11 · 2^(2d+9) · n, then Maker can claim a copy of G in at most d^11 · 2^(2d+7) · n rounds. We also discuss a lower bound on the number of rounds Maker needs to win, and the gap between these bounds.
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Qiushi, Xin Zhang, Qiang Yang, Lei Ye, and Mengxiao Zhao. "Performance Bound for Joint Multiple Parameter Target Estimation in Sparse Stepped-Frequency Radar: A Comparison Analysis." Sensors 19, no. 9 (April 29, 2019): 2002. http://dx.doi.org/10.3390/s19092002.

Full text
Abstract:
A performance bound, the Cramér–Rao lower bound (CRLB), for target estimation and detection in sparse stepped-frequency radars is presented. The vector formulation of this CRLB is used to obtain a lower bound on the estimation error. The estimation performance can be transformed into different types of CRLB structures. Therefore, the expressions of the bounds under three equivalent models are derived separately: the time delay and Doppler stretch estimator, the joint multiple parameter estimator, and the sparse-based estimator. The variables to be estimated include the variance of the unknown noise, range, velocity, and the real and imaginary parts of the amplitude. A general performance expression is proposed by considering the echo of the target in the line of sight. The relationship between the CRLB and the various parameters is discussed in detail, and the specific effect of waveform parameters on each CRLB is compared and analyzed. Numerical simulations demonstrate that the resulting CRLB is of considerable theoretical and practical significance for the selection of optimal waveform parameters.
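As a toy illustration of the general CRLB recipe (not the radar-specific bounds derived in the paper): for a linear Gaussian model y = Hθ + n with n ~ N(0, σ²I), the Fisher information matrix is HᵀH/σ², and the CRLB on the covariance of any unbiased estimator of θ is its inverse:

```python
import numpy as np

def gaussian_crlb(H, sigma2):
    """CRLB for theta in y = H @ theta + n, n ~ N(0, sigma2 * I).
    The Fisher information is H^T H / sigma2; the CRLB is its inverse."""
    fim = H.T @ H / sigma2          # Fisher information matrix
    return np.linalg.inv(fim)       # lower-bounds the estimator covariance
```

The paper's vectorized CRLB expressions follow the same pattern, with H replaced by the Jacobian of the sparse stepped-frequency signal model with respect to delay, Doppler stretch, and amplitude.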
APA, Harvard, Vancouver, ISO, and other styles
8

Sapir, Shachar, and Asaf Shapira. "The Induced Removal Lemma in Sparse Graphs." Combinatorics, Probability and Computing 29, no. 1 (September 30, 2019): 153–62. http://dx.doi.org/10.1017/s0963548319000233.

Full text
Abstract:
The induced removal lemma of Alon, Fischer, Krivelevich and Szegedy states that if an n-vertex graph G is ε-far from being induced H-free then G contains δ_H(ε) · n^h induced copies of H. Improving upon the original proof, Conlon and Fox proved that 1/δ_H(ε) is at most a tower of height poly(1/ε), and asked if this bound can be further improved to a tower of height log(1/ε). In this paper we obtain such a bound for graphs G of density O(ε). We actually prove a more general result, which, as a special case, also gives a new proof of Fox's bound for the (non-induced) removal lemma.
APA, Harvard, Vancouver, ISO, and other styles
9

Deng, Hao, Jianghong Chen, Biqin Song, and Zhibin Pan. "Error Bound of Mode-Based Additive Models." Entropy 23, no. 6 (May 22, 2021): 651. http://dx.doi.org/10.3390/e23060651.

Full text
Abstract:
Due to their flexibility and interpretability, additive models are powerful tools for high-dimensional mean regression and variable selection. However, the least-squares loss-based mean regression models suffer from sensitivity to non-Gaussian noises, and there is also a need to improve the model’s robustness. This paper considers the estimation and variable selection via modal regression in reproducing kernel Hilbert spaces (RKHSs). Based on the mode-induced metric and two-fold Lasso-type regularizer, we proposed a sparse modal regression algorithm and gave the excess generalization error. The experimental results demonstrated the effectiveness of the proposed model.
APA, Harvard, Vancouver, ISO, and other styles
10

Milani Fard, Mahdi, Yuri Grinberg, Joelle Pineau, and Doina Precup. "Compressed Least-Squares Regression on Sparse Spaces." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1054–60. http://dx.doi.org/10.1609/aaai.v26i1.8303.

Full text
Abstract:
Recent advances in the area of compressed sensing suggest that it is possible to reconstruct high-dimensional sparse signals from a small number of random projections. Domains in which the sparsity assumption is applicable also offer many interesting large-scale machine learning prediction tasks. It is therefore important to study the effect of random projections as a dimensionality reduction method under such sparsity assumptions. In this paper we develop the bias-variance analysis of a least-squares regression estimator in compressed spaces when random projections are applied on sparse input signals. Leveraging the sparsity assumption, we are able to work with arbitrary non-i.i.d. sampling strategies and derive a worst-case bound on the entire space. Empirical results on synthetic and real-world datasets show how the choice of the projection size affects the performance of regression on compressed spaces, and highlight a range of problems where the method is useful.
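A hedged sketch of the general compressed-regression recipe discussed above (the function and its defaults are illustrative, not the authors' exact estimator): project the features with a random Gaussian map, then solve ordinary least squares in the compressed space:

```python
import numpy as np

def compressed_lstsq(X, y, d, rng=None):
    """Fit least squares after projecting p-dimensional features into
    d dimensions with a random Gaussian map. Returns a predictor."""
    rng = np.random.default_rng(rng)
    p = X.shape[1]
    Phi = rng.standard_normal((p, d)) / np.sqrt(d)  # random projection
    Z = X @ Phi                                      # compressed design matrix
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return lambda X_new: (X_new @ Phi) @ w
```

The bias-variance trade-off studied in the paper lives in the choice of d: a smaller d lowers variance but adds projection bias, and sparsity of the inputs is what keeps that bias controlled.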
APA, Harvard, Vancouver, ISO, and other styles
11

Girolami, Mark. "A Variational Method for Learning Sparse and Overcomplete Representations." Neural Computation 13, no. 11 (November 1, 2001): 2517–32. http://dx.doi.org/10.1162/089976601753196003.

Full text
Abstract:
An expectation-maximization algorithm for learning sparse and overcomplete data representations is presented. The proposed algorithm exploits a variational approximation to a range of heavy-tailed distributions whose limit is the Laplacian. A rigorous lower bound on the sparse prior distribution is derived, which enables the analytic marginalization of a lower bound on the data likelihood. This lower bound enables the development of an expectation-maximization algorithm for learning the overcomplete basis vectors and inferring the most probable basis coefficients.
APA, Harvard, Vancouver, ISO, and other styles
12

Wang, Gang, Min-Yao Niu, and You Gao. "Welch bound-achieving compressed sensing matrices from optimal codebooks." Discrete Mathematics, Algorithms and Applications 12, no. 03 (May 26, 2020): 2050042. http://dx.doi.org/10.1142/s1793830920500421.

Full text
Abstract:
Compressed sensing (CS) is a new data acquisition theory making full use of the sparsity of signals. It reveals that higher-dimensional sparse signals can be reconstructed from fewer nonadaptive linear measurements. The construction of CS matrices is a key problem in CS. In this paper, deterministic CS matrices are constructed from optimal codebooks. Furthermore, the maximum sparsity at which sparse signals can be recovered by using our CS matrices is obtained. Meanwhile, a comparison is made with the CS matrices constructed by DeVore based on polynomials over finite fields. In the numerical simulations, our CS matrix outperforms DeVore's matrix in the process of recovering sparse signals.
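Deterministic CS matrices of this kind are typically judged by their coherence against the Welch bound; a small sketch of both quantities (illustrative helpers, not the paper's construction):

```python
import numpy as np

def coherence(A):
    """Largest absolute inner product between distinct unit-norm columns."""
    G = A.T @ A                  # Gram matrix of the columns
    np.fill_diagonal(G, 0.0)     # ignore self-products
    return float(np.max(np.abs(G)))

def welch_bound(m, n):
    """Lower bound on the coherence of any m x n unit-norm frame (n > m)."""
    return float(np.sqrt((n - m) / (m * (n - 1))))
```

A standard coherence-based guarantee (not specific to this paper) is that every K-sparse signal is exactly recoverable when K < (1 + 1/μ)/2, where μ is the coherence, so matrices whose coherence meets the Welch bound maximize the provable sparsity level.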
APA, Harvard, Vancouver, ISO, and other styles
13

Dallaporta, Sandrine, and Yohann De Castro. "Sparse recovery from extreme eigenvalues deviation inequalities." ESAIM: Probability and Statistics 23 (2019): 430–63. http://dx.doi.org/10.1051/ps/2018024.

Full text
Abstract:
This article provides a new toolbox to derive sparse recovery guarantees – referred to as "stable and robust sparse regression" (SRSR) – from deviations on extreme singular values or extreme eigenvalues obtained in Random Matrix Theory. This work is based on Restricted Isometry Constants (RICs), which are a pivotal notion in Compressed Sensing and High-Dimensional Statistics as these constants finely assess how a linear operator is conditioned on the set of sparse vectors and hence how it performs in SRSR. While it is an open problem to construct deterministic matrices with apposite RICs, one can prove that such matrices exist using random matrix models. In this paper, we show upper bounds on RICs for Gaussian and Rademacher matrices using state-of-the-art deviation estimates on their extreme eigenvalues. This allows us to derive a lower bound on the probability of getting SRSR. One benefit of this paper is a direct and explicit derivation of upper bounds on RICs and lower bounds on SRSR from deviations on the extreme eigenvalues given by Random Matrix Theory.
APA, Harvard, Vancouver, ISO, and other styles
14

McTavish, Hayden, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, and Margo Seltzer. "Fast Sparse Decision Tree Optimization via Reference Ensembles." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9604–13. http://dx.doi.org/10.1609/aaai.v36i9.21194.

Full text
Abstract:
Sparse decision tree optimization has been one of the most fundamental problems in AI since its inception and is a challenge at the core of interpretable machine learning. Sparse decision tree optimization is computationally hard, and despite steady effort since the 1960's, breakthroughs have been made on the problem only within the past few years, primarily on the problem of finding optimal sparse decision trees. However, current state-of-the-art algorithms often require impractical amounts of computation time and memory to find optimal or near-optimal trees for some real-world datasets, particularly those having several continuous-valued features. Given that the search spaces of these decision tree optimization problems are massive, can we practically hope to find a sparse decision tree that competes in accuracy with a black box machine learning model? We address this problem via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm. The guesses come from knowledge gleaned from black box models. We show that by using these guesses, we can reduce the run time by multiple orders of magnitude while providing bounds on how far the resulting trees can deviate from the black box's accuracy and expressive power. Our approach enables guesses about how to bin continuous features, the size of the tree, and lower bounds on the error for the optimal decision tree. Our experiments show that in many cases we can rapidly construct sparse decision trees that match the accuracy of black box models. To summarize: when you are having trouble optimizing, just guess.
APA, Harvard, Vancouver, ISO, and other styles
15

Feng, Xue, Shi Yan, and Chunlin Wu. "The ℓ2, regularized group sparse optimization: Lower bound theory, recovery bound and algorithms." Applied and Computational Harmonic Analysis 49, no. 2 (September 2020): 381–414. http://dx.doi.org/10.1016/j.acha.2020.04.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Berg, Mark de, Dan Halperin, Mark Overmars, and Marc van Kreveld. "Sparse Arrangements and the Number of Views of Polyhedral Scenes." International Journal of Computational Geometry & Applications 07, no. 03 (June 1997): 175–95. http://dx.doi.org/10.1142/s0218195997000120.

Full text
Abstract:
In this paper we study several instances of the problem of determining the maximum number of topologically distinct two-dimensional images that three-dimensional scenes can induce. To bound this number, we investigate arrangements of curves and of surfaces that have a certain sparseness property. Given a collection of n algebraic surface patches of constant maximum degree in 3-space with the property that any vertical line stabs at most k of them, we show that the maximum combinatorial complexity of the entire arrangement that they induce is Θ(n^2 k). We extend this result to collections of hypersurfaces in 4-space and to collections of (d−1)-simplices in d-space, for any fixed d. We show that this type of arrangement (sparse arrangements) is relevant to the study of the maximum number of topologically different views of a polyhedral terrain. For polyhedral terrains with n edges and vertices, we introduce a lower bound construction inducing Ω(n^5 α(n)) distinct views, and we present an almost matching upper bound. We then analyze the case of perspective views, point to the potential role of sparse arrangements in obtaining a sharp bound for this case, and present a lower bound construction inducing Ω(n^8 α(n)) distinct views. For the number of views of a collection of k convex polyhedra with a total of n faces, we show a bound of O(n^4 k^2) for views from infinity and O(n^6 k^3) for perspective views. We also present lower bound constructions for such scenes, with Ω(n^4 + n^2 k^4) distinct views from infinity and Ω(n^6 + n^3 k^6) views when the viewpoint can be anywhere in 3-space.
APA, Harvard, Vancouver, ISO, and other styles
17

BERTRAM-KRETZBERG, CLAUDIA, THOMAS HOFMEISTER, and HANNO LEFMANN. "Sparse 0−1 Matrices and Forbidden Hypergraphs." Combinatorics, Probability and Computing 8, no. 5 (September 1999): 417–27. http://dx.doi.org/10.1017/s0963548399004058.

Full text
Abstract:
We consider the problem of determining the maximum number N(m, k, r) of columns of a 0−1 matrix with m rows and exactly r ones in each column such that every k columns are linearly independent over ℤ2. For fixed integers k ≥ 4 and r ≥ 2, where k is even and gcd(k−1, r) = 1, we prove the lower bound N(m, k, r) = Ω(m^(kr/(2(k−1))) · (ln m)^(1/(k−1))). This improves on earlier results from [14] by a factor of Θ((ln m)^(1/(k−1))). Moreover, we describe a polynomial time algorithm achieving this new lower bound.
APA, Harvard, Vancouver, ISO, and other styles
18

Han, Zhi, Jianjun Wang, Jia Jing, and Hai Zhang. "A Simple Gaussian Measurement Bound for Exact Recovery of Block-Sparse Signals." Discrete Dynamics in Nature and Society 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/104709.

Full text
Abstract:
We present a probabilistic analysis of conditions for the exact recovery of block-sparse signals whose nonzero elements appear in fixed blocks. We mainly derive a simple lower bound on the necessary number of Gaussian measurements for exact recovery of such block-sparse signals via the mixed l2/lq (0 < q ≤ 1) norm minimization method. In addition, we present numerical examples to partially support the correctness of the theoretical results. The obtained results extend those known for the standard lq minimization and the mixed l2/l1 minimization methods to the mixed l2/lq (0 < q ≤ 1) minimization method in the context of block-sparse signal recovery.
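The mixed l2/lq norm driving the recovery method above is simple to state in code: take the l2 norm of each block, then the lq quasi-norm of those block norms. A sketch under the assumption that blocks are contiguous and of equal size (names are illustrative):

```python
import numpy as np

def mixed_l2_lq(x, block_size, q):
    """Mixed l2/lq quasi-norm (0 < q <= 1): the lq norm of the vector of
    per-block l2 norms, for contiguous equal-size blocks."""
    blocks = np.asarray(x, dtype=float).reshape(-1, block_size)
    block_norms = np.linalg.norm(blocks, axis=1)   # l2 norm of each block
    return float(np.sum(block_norms ** q) ** (1.0 / q))
```

For q = 1 this reduces to the familiar mixed l2/l1 norm of group Lasso; taking q < 1 penalizes the number of active blocks more aggressively, which is the regime the paper analyzes.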
APA, Harvard, Vancouver, ISO, and other styles
19

Smith, Andrew Paul. "Fast construction of constant bound functions for sparse polynomials." Journal of Global Optimization 43, no. 2-3 (July 31, 2007): 445–58. http://dx.doi.org/10.1007/s10898-007-9195-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Bjarnason, Ronald, Alan Fern, and Prasad Tadepalli. "Lower Bounding Klondike Solitaire with Monte-Carlo Planning." Proceedings of the International Conference on Automated Planning and Scheduling 19 (October 16, 2009): 26–33. http://dx.doi.org/10.1609/icaps.v19i1.13363.

Full text
Abstract:
Despite its ubiquitous presence, very little is known about the odds of winning the simple card game of Klondike Solitaire. The main goal of this paper is to investigate the use of probabilistic planning to shed light on this issue. Unfortunately, most probabilistic planning techniques are not well suited for Klondike due to the difficulties of representing the domain in standard planning languages and the complexity of the required search. Klondike thus serves as an interesting addition to the complement of probabilistic planning domains. In this paper, we study Klondike using several sampling-based planning approaches including UCT, hindsight optimization, and sparse sampling, and establish lower bounds on their performance. We also introduce novel combinations of these approaches and evaluate them in Klondike. We provide a theoretical bound on the sample complexity of a method that naturally combines sparse sampling and UCT. Our results demonstrate that there is a policy that within tight confidence intervals wins over 35% of Klondike games. This result is the first reported lower bound of an optimal Klondike policy.
APA, Harvard, Vancouver, ISO, and other styles
21

Kim, Jeong Han. "On Brooks' Theorem for Sparse Graphs." Combinatorics, Probability and Computing 4, no. 2 (June 1995): 97–132. http://dx.doi.org/10.1017/s0963548300001528.

Full text
Abstract:
Let G be a graph with maximum degree Δ(G). In this paper we prove that if the girth g(G) of G is greater than 4 then its chromatic number, χ(G), satisfies χ(G) ≤ (1 + o(1)) Δ(G)/ln Δ(G), where o(1) goes to zero as Δ(G) goes to infinity. (Our logarithms are base e.) More generally, we prove the same bound for the list-chromatic (or choice) number, provided g(G) > 4.
APA, Harvard, Vancouver, ISO, and other styles
22

Liu, Tongliang, Dacheng Tao, and Dong Xu. "Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes." Neural Computation 28, no. 10 (October 2016): 2213–49. http://dx.doi.org/10.1162/neco_a_00872.

Full text
Abstract:
The k-dimensional coding schemes refer to a collection of methods that attempt to represent data using a set of representative k-dimensional vectors and include nonnegative matrix factorization, dictionary learning, sparse coding, k-means clustering, and vector quantization as special cases. Previous generalization bounds for the reconstruction error of the k-dimensional coding schemes are mainly dimensionality-independent. A major advantage of these bounds is that they can be used to analyze the generalization error when data are mapped into an infinite- or high-dimensional feature space. However, many applications use finite-dimensional data features. Can we obtain dimensionality-dependent generalization bounds for k-dimensional coding schemes that are tighter than dimensionality-independent bounds when data are in a finite-dimensional feature space? Yes. In this letter, we address this problem and derive a dimensionality-dependent generalization bound for k-dimensional coding schemes by bounding the covering number of the loss function class induced by the reconstruction error. The bound is of order [Formula: see text], where m is the dimension of features, k is the number of columns in the linear implementation of coding schemes, and n is the sample size, [Formula: see text] when n is finite and [Formula: see text] when n is infinite. We show that our bound can be tighter than previous results because it avoids inducing the worst-case upper bound on k of the loss function. The proposed generalization bound is also applied to some specific coding schemes to demonstrate that the dimensionality-dependent bound is an indispensable complement to the dimensionality-independent generalization bounds.
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Zelin, Changli Yao, Yuanman Zheng, Junheng Wang, and Yuwen Zhang. "3D magnetic sparse inversion using an interior-point method." GEOPHYSICS 83, no. 3 (May 1, 2018): J15—J32. http://dx.doi.org/10.1190/geo2016-0652.1.

Full text
Abstract:
Rock susceptibility measurements are sometimes taken on outcrop and borehole rocks, and they provide valuable information for constraining magnetic data inversion. We have developed two approaches for 3D magnetic sparse inversion that effectively take advantage of the rock susceptibility information. Both approaches minimize a total objective function subject to bound constraints using an interior-point method. The first approach directly minimizes an [Formula: see text]-norm of the susceptibility model by keeping the bounds positive, in which case the objective function is differentiable in the feasible region. The second approach minimizes a more generalized [Formula: see text]-like-norm ([Formula: see text]) of the susceptibility model by approximating the [Formula: see text]-like-norm inversion as an iteratively reweighted least-squares problem. Moreover, this approach allows the model values to be either positive or negative. We also reveal the equivalence of our approaches and the binary inversion. The recovered models of both approaches are characterized by sharp boundaries. However, the credibility of recovered boundaries depends on the accuracy and validity of the user-specified upper and lower bounds. Our approaches are tested on synthetic data and field data acquired over a copper-nickel deposit.
APA, Harvard, Vancouver, ISO, and other styles
24

Ben-Haim, Zvika, and Yonina C. Eldar. "The Cramér-Rao Bound for Estimating a Sparse Parameter Vector." IEEE Transactions on Signal Processing 58, no. 6 (June 2010): 3384–89. http://dx.doi.org/10.1109/tsp.2010.2045423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Wu, Bo, Bo Huang, and Liangpei Zhang. "An Error-Bound-Regularized Sparse Coding for Spatiotemporal Reflectance Fusion." IEEE Transactions on Geoscience and Remote Sensing 53, no. 12 (December 2015): 6791–803. http://dx.doi.org/10.1109/tgrs.2015.2448100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Cui, Yunjiang, Jun Ming, Xinlei Shi, Wangwang Yang, Zhansong Zhang, and Chong Zhang. "A New Method for Calculating Reservoir Core-Bound Water Saturation Using the Cast Thin Section." Processes 11, no. 5 (May 5, 2023): 1397. http://dx.doi.org/10.3390/pr11051397.

Full text
Abstract:
The rock coring of the reservoir in the Bohai A field is difficult. The cores of the target section in the study area are loose, making it difficult to accurately measure the core-bound water saturation. The purpose of this research was to develop and validate a method for calculating a reservoir core-bound water saturation ratio using the cast thin section. First, pepper noise denoising and image enhancement were performed on the thin section by median filtering and gamma variation. Based on this, the enhanced sheet images were thresholded for segmentation by the two-dimensional OTSU algorithm, which automatically picked up the thin section pore-specific parameters. Then, the thin section image was equivalent to a capillary cross-section, while the thin film water fused to the pore surface was observed as bound water. For hydrophilic rocks with a strong homogeneity, the area of thin film water in the pore space of the sheet was divided by the total area of the pore space, which produced the bound water saturation. Next, the theoretical relationship between the film water thickness and the critical pore throat radius was derived based on the Young–Laplace equation. The bound water saturation of the rock was calculated by combining the pore perimeter and the area that was automatically picked up from the thin film for a given critical pore throat radius of the rock. Finally, 22 images of thin sections of sparse sandstone from the coring well section of the study area were image processed using the new method proposed in this paper, and the bound water saturation was calculated. The calculated results were compared with 22 NMR-bound water saturations and 11 semi-permeable baffle plate-bound water saturations in the same layer section. The results showed that the bound water saturation values calculated by the three methods produced consistent trends with absolute errors within 5%. The calculated results confirm the reliability of the method proposed in this paper. 
This method can effectively avoid the problem of the inaccurate results of core experiments due to the easy damage of sparse sandstone and provides a new idea for the accurate determination of the bound water saturation of sparse sandstone.
APA, Harvard, Vancouver, ISO, and other styles
27

Kelley, Zander. "Roots of sparse polynomials over a finite field." LMS Journal of Computation and Mathematics 19, A (2016): 196–204. http://dx.doi.org/10.1112/s1461157016000334.

Full text
Abstract:
For a $t$-nomial $f(x)=\sum _{i=1}^{t}c_{i}x^{a_{i}}\in \mathbb{F}_{q}[x]$, we show that the number of distinct, nonzero roots of $f$ is bounded above by $2(q-1)^{1-\epsilon}C^{\epsilon}$, where $\epsilon=1/(t-1)$ and $C$ is the size of the largest coset in $\mathbb{F}_{q}^{\ast}$ on which $f$ vanishes completely. Additionally, we describe a number-theoretic parameter depending only on $q$ and the exponents $a_{i}$ which provides a general and easily computable upper bound for $C$. We thus obtain a strict improvement over an earlier bound of Canetti et al. which is related to the uniformity of the Diffie–Hellman distribution. Finally, we conjecture that $t$-nomials over prime fields have only $O(t\log p)$ roots in $\mathbb{F}_{p}^{\ast}$ when $C=1$.
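For small prime fields, the root count being bounded here can be checked directly by brute force; an illustrative helper (not from the paper):

```python
def tnomial_roots(coeffs, exps, p):
    """Distinct nonzero roots in F_p of f(x) = sum_i c_i * x^{a_i},
    found by exhaustive evaluation (feasible only for small p)."""
    return [x for x in range(1, p)
            if sum(c * pow(x, a, p) for c, a in zip(coeffs, exps)) % p == 0]
```

For example, the binomial x^3 − 1 over F_7 has exactly gcd(3, 6) = 3 nonzero roots, since the nonzero elements form a cyclic group of order 6; enumerations like this are a handy sanity check against the $2(q-1)^{1-\epsilon}C^{\epsilon}$ bound.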
APA, Harvard, Vancouver, ISO, and other styles
28

Banihashem, Kiarash, Mohammad Hajiaghayi, and Max Springer. "Optimal Sparse Recovery with Decision Stumps." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6745–52. http://dx.doi.org/10.1609/aaai.v37i6.25827.

Full text
Abstract:
Decision trees are widely used for their low computational cost, good predictive performance, and ability to assess the importance of features. Though often used in practice for feature selection, the theoretical guarantees of these methods are not well understood. We here obtain a tight finite-sample bound for the feature selection problem in linear regression using single-depth decision trees. We examine the statistical properties of these "decision stumps" for the recovery of the s active features from p total features, where s < p.
APA, Harvard, Vancouver, ISO, and other styles
29

Zaidel, Benjamin M., Ori Shental, and Shlomo Shamai (Shitz). "Beyond Equal-Power Sparse NOMA: Two User Classes and Closed-Form Bounds on the Achievable Region." Entropy 24, no. 2 (January 31, 2022): 227. http://dx.doi.org/10.3390/e24020227.

Full text
Abstract:
Non-orthogonal multiple access (NOMA) is a promising technology for future beyond-5G wireless networks, whose fundamental information-theoretic limits are yet to be fully explored. Considering regular sparse code-domain NOMA (with a fixed and finite number of orthogonal resources allocated to any designated user and vice versa), this paper extends previous results by the authors to a setting comprising two classes of users with different power constraints. Explicit rigorous closed-form analytical inner and outer bounds on the achievable rate (total class throughput) region in the large-system limit are derived and comparatively investigated in extreme-SNR regimes. The inner bound is based on the conditional vector entropy power inequality (EPI), while the outer bound relies on a recent strengthened version of the EPI. Valuable insights are provided into the potential performance gains of regular sparse NOMA in practically oriented settings, comprising, e.g., a combination of low-complexity devices and broadband users with higher transmit power capabilities, or combinations of cell-edge and cell-center users. The conditions for superior performance over dense code-domain NOMA (taking the form of randomly spread code-division multiple access), as well as a relatively small gap to the ultimate performance limits, are identified. The proposed bounds are also applicable for the analysis of interference networks, e.g., Wyner-type cellular models.
APA, Harvard, Vancouver, ISO, and other styles
30

Bu, Yuehua, and Chentao Qi. "Injective edge coloring of sparse graphs." Discrete Mathematics, Algorithms and Applications 10, no. 02 (April 2018): 1850022. http://dx.doi.org/10.1142/s1793830918500222.

Full text
Abstract:
A $k$-injective edge coloring of a graph $G$ is a coloring $c\colon E(G)\to\{1,2,\dots,k\}$ such that whenever $e_1$, $e_2$ and $e_3$ are consecutive edges in $G$, we have $c(e_1)\neq c(e_3)$. The injective edge coloring number $\chi'_{inj}(G)$ is the least $k$ for which $G$ has a $k$-injective edge coloring. In this paper, we consider the upper bound of $\chi'_{inj}(G)$ in terms of the maximum average degree $\mathrm{mad}(G)$, where $\mathrm{mad}(G)=\max\{2|E(H)|/|V(H)| : H\subseteq G\}$.
APA, Harvard, Vancouver, ISO, and other styles
31

Arunachalam, Srinivasan, Sourav Chakraborty, Troy Lee, Manaswi Paraashar, and Ronald de Wolf. "Two new results about quantum exact learning." Quantum 5 (November 24, 2021): 587. http://dx.doi.org/10.22331/q-2021-11-24-587.

Full text
Abstract:
We present two new results about exact learning by quantum computers. First, we show how to exactly learn a k-Fourier-sparse n-bit Boolean function from O(k^{1.5}(log k)^2) uniform quantum examples for that function. This improves over the bound of Θ̃(kn) uniformly random classical examples (Haviv and Regev, CCC'15). Additionally, we provide a possible direction to improve our Õ(k^{1.5}) upper bound by proving an improvement of Chang's lemma for k-Fourier-sparse Boolean functions. Second, we show that if a concept class C can be exactly learned using Q quantum membership queries, then it can also be learned using O(Q^2 (log Q) log|C|) classical membership queries. This improves the previous-best simulation result (Servedio and Gortler, SICOMP'04) by a log Q factor.
APA, Harvard, Vancouver, ISO, and other styles
32

Nigam, H. K., and Saroj Yadav. "Sparse recovery for compressive sensing via weighted Lp−q model." Filomat 36, no. 14 (2022): 4709–16. http://dx.doi.org/10.2298/fil2214709n.

Full text
Abstract:
In this paper, we study a weighted Lp−q minimization model which combines the non-smooth, non-convex and non-Lipschitz quasi-norm Lp (0 < p ≤ 1) with the norm Lq (1 < q ≤ 2) for recovering sparse signals. Based on the restricted isometry property (RIP) condition, we obtain an exact sparse signal recovery result. We also obtain a theoretical bound for the weighted Lp−q minimization model when measurements are corrupted by noise.
APA, Harvard, Vancouver, ISO, and other styles
33

KABANAVA, M., H. RAUHUT, and H. ZHANG. "Robust analysis ℓ1-recovery from Gaussian measurements and total variation minimization." European Journal of Applied Mathematics 26, no. 6 (June 1, 2015): 917–29. http://dx.doi.org/10.1017/s0956792515000236.

Full text
Abstract:
Analysis ℓ1-recovery refers to a technique of recovering a signal that is sparse in some transform domain from incomplete corrupted measurements. This includes total variation minimization as an important special case when the transform domain is generated by a difference operator. In the present paper, we provide a bound on the number of Gaussian measurements required for successful recovery for total variation and for the case that the analysis operator is a frame. The bounds are particularly suitable when the sparsity of the analysis representation of the signal is not very small.
APA, Harvard, Vancouver, ISO, and other styles
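A toy numerical sketch of total variation recovery from Gaussian measurements, assuming a 1-D forward-difference operator and plain projected-free subgradient descent (chosen here for brevity, not one of the solvers analysed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 60
x_true = np.zeros(n)
x_true[30:70] = 1.0                            # piecewise constant: sparse gradient
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
b = A @ x_true

lam, step = 0.005, 0.05
x = np.zeros(n)
for _ in range(5000):
    # subgradient step on 0.5*||Ax - b||^2 + lam*||Dx||_1, D = forward difference
    g = A.T @ (A @ x - b)
    sg = np.sign(np.diff(x))                   # subgradient of the TV term
    sub = np.zeros(n)
    sub[:-1] -= sg
    sub[1:] += sg
    x -= step * (g + lam * sub)
```

With m = 60 Gaussian measurements of a 100-sample signal whose gradient has only two nonzeros, the iterates fit the data closely, in line with the sample-complexity regime the paper quantifies.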
34

Eigel, Martin, Reinhold Schneider, and Philipp Trunschke. "Convergence bounds for empirical nonlinear least-squares." ESAIM: Mathematical Modelling and Numerical Analysis 56, no. 1 (January 2022): 79–104. http://dx.doi.org/10.1051/m2an/2021070.

Full text
Abstract:
We consider best approximation problems in a nonlinear subset ℳ of a Banach space of functions (𝒱, ∥•∥). The norm is assumed to be a generalization of the L2-norm for which only a weighted Monte Carlo estimate ∥•∥_n can be computed. The objective is to obtain an approximation v ∈ ℳ of an unknown function u ∈ 𝒱 by minimizing the empirical norm ∥u − v∥_n. We consider this problem for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and is independent of the specified nonlinear least squares setting. Several model classes are examined and the analytical statements about the RIP are compared to existing sample complexity bounds from the literature. We find that for well-studied model classes our general bound is weaker but exhibits many of the same properties as these specialized bounds. Notably, we demonstrate the advantage of an optimal sampling density (as known for linear spaces) for sets of functions with sparse representations.
APA, Harvard, Vancouver, ISO, and other styles
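The basic setup, minimising an empirical Monte Carlo norm ∥u − v∥_n over a model class, can be sketched in the simplest case of a linear space of polynomials. This toy example (target function, sample size, degree) is mine, not the paper's nonlinear setting:

```python
import numpy as np

rng = np.random.default_rng(4)
u = np.exp                          # "unknown" target function, accessible via samples
n, deg = 200, 5
x = rng.uniform(-1, 1, n)           # i.i.d. sample points define the empirical norm
V = np.vander(x, deg + 1)           # model class: polynomials of degree <= deg
coef, *_ = np.linalg.lstsq(V, u(x), rcond=None)   # minimise ||u - v||_n

grid = np.linspace(-1, 1, 101)
sup_err = np.max(np.abs(np.polyval(coef, grid) - np.exp(grid)))
```

For this well-conditioned linear class, 200 uniform samples already make the empirical minimiser a good approximation of u on the whole interval.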
35

Wei, Dennis, and Alan V. Oppenheim. "A Branch-and-Bound Algorithm for Quadratically-Constrained Sparse Filter Design." IEEE Transactions on Signal Processing 61, no. 4 (February 2013): 1006–18. http://dx.doi.org/10.1109/tsp.2012.2226450.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Akcakaya, M., and V. Tarokh. "A Frame Construction and a Universal Distortion Bound for Sparse Representations." IEEE Transactions on Signal Processing 56, no. 6 (June 2008): 2443–50. http://dx.doi.org/10.1109/tsp.2007.914344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Mianzhi, Zhen Zhang, and Arye Nehorai. "Further Results on the Cramér–Rao Bound for Sparse Linear Arrays." IEEE Transactions on Signal Processing 67, no. 6 (March 15, 2019): 1493–507. http://dx.doi.org/10.1109/tsp.2019.2892021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Yujie Tang, Laming Chen, and Yuantao Gu. "On the Performance Bound of Sparse Estimation With Sensing Matrix Perturbation." IEEE Transactions on Signal Processing 61, no. 17 (September 2013): 4372–86. http://dx.doi.org/10.1109/tsp.2013.2271481.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Cai, T. Tony, and Anru Zhang. "Sharp RIP bound for sparse signal and low-rank matrix recovery." Applied and Computational Harmonic Analysis 35, no. 1 (July 2013): 74–93. http://dx.doi.org/10.1016/j.acha.2012.07.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bauvin, Baptiste, Cécile Capponi, Jean-Francis Roy, and François Laviolette. "Fast greedy $$\mathcal {C}$$-bound minimization with guarantees." Machine Learning 109, no. 9-10 (September 2020): 1945–86. http://dx.doi.org/10.1007/s10994-020-05902-7.

Full text
Abstract:
The $\mathcal{C}$-bound is a tight bound on the true risk of a majority vote classifier that relies on the individual quality and pairwise disagreement of the voters and provides PAC-Bayesian generalization guarantees. Based on this bound, MinCq is a classification algorithm that returns a dense distribution on a finite set of voters by minimizing it. Introduced later and inspired by boosting, CqBoost uses a column generation approach to build a sparse $\mathcal{C}$-bound optimal distribution on a possibly infinite set of voters. However, both approaches have a high computational learning time because they minimize the $\mathcal{C}$-bound by solving a quadratic program. Yet, one advantage of CqBoost is its experimental ability to provide sparse solutions. In this work, we address the problem of accelerating the $\mathcal{C}$-bound minimization process while keeping the sparsity of the solution and without losing accuracy. We present CB-Boost, a computationally efficient classification algorithm relying on a greedy, boosting-based $\mathcal{C}$-bound optimization. An in-depth analysis proves the optimality of the greedy minimization process and quantifies the decrease of the $\mathcal{C}$-bound operated by the algorithm. Generalization guarantees are then drawn from already existing PAC-Bayesian theorems. In addition, we experimentally evaluate the relevance of CB-Boost in terms of the three main properties we expect of it: accuracy, sparsity, and computational efficiency compared to MinCq, CqBoost, Adaboost and other ensemble methods. As observed in these experiments, CB-Boost not only achieves results comparable to the state of the art, but also provides $\mathcal{C}$-bound sub-optimal weights with very little computational demand while keeping the sparsity property of CqBoost.
APA, Harvard, Vancouver, ISO, and other styles
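For reference, the $\mathcal{C}$-bound depends only on the first two moments of the margin of the weighted majority vote: $\mathcal{C} = 1 - \mathbb{E}[M]^2/\mathbb{E}[M^2]$. A minimal computation (an illustrative helper of mine, not code from the paper):

```python
import numpy as np

def c_bound(votes, weights, y):
    """C-bound = 1 - E[M]^2 / E[M^2], where M = y * (weighted vote) is the margin.

    votes:   (n_samples, n_voters) array with entries in {-1, +1}
    weights: (n_voters,) distribution over voters (sums to 1)
    y:       (n_samples,) true labels in {-1, +1}
    """
    margins = y * (votes @ weights)
    m1, m2 = margins.mean(), (margins ** 2).mean()
    assert m1 > 0, "the C-bound requires a positive first margin moment"
    return 1.0 - m1 ** 2 / m2

# a single perfect voter gives margin 1 everywhere, hence C-bound 0
perfect = c_bound(np.ones((4, 1)), np.array([1.0]), np.ones(4))
```

Adding a voter that is right only half the time keeps the mean margin positive but inflates the second moment, which is exactly what the bound penalises.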
41

Gao, Jian, Zhenghang Xu, Ruizhi Li, and Minghao Yin. "An Exact Algorithm with New Upper Bounds for the Maximum k-Defective Clique Problem in Massive Sparse Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10174–83. http://dx.doi.org/10.1609/aaai.v36i9.21257.

Full text
Abstract:
The Maximum k-Defective Clique Problem (MDCP), as a clique relaxation model, has been used to solve various problems. Because it is a hard computational task, previous works can hardly solve the MDCP for massive sparse graphs derived from real-world applications. In this work, we propose a novel branch-and-bound algorithm to solve the MDCP based on several new techniques. First, we propose two new upper bounds of the MDCP as well as corresponding reduction rules to remove redundant vertices and edges. The proposed reduction rules are particularly useful for massive graphs. Second, we present another new upper bound by counting missing edges between fixed vertices and an unfixed vertex for cutting branches. We perform extensive computational experiments to evaluate our algorithm. Experimental results show that our reduction rules are very effective for removing redundant vertices and edges so that graphs are reduced greatly. Also, our algorithm can solve benchmark instances efficiently, and it has significantly better performance than state-of-the-art algorithms.
APA, Harvard, Vancouver, ISO, and other styles
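For intuition about upper bounds based on missing edges, here is a far simpler counting bound than those in the paper: a k-defective clique on v vertices must contain at least C(v, 2) − k edges, so the graph's total edge count already caps v.

```python
from math import comb

def defective_clique_upper_bound(num_edges, k):
    # largest v such that a k-defective clique on v vertices could fit:
    # it needs at least C(v, 2) - k edges
    v = 1
    while comb(v + 1, 2) - k <= num_edges:
        v += 1
    return v
```

For example, a graph with 3 edges and k = 0 can host a clique of at most 3 vertices (a triangle), and 5 edges with k = 1 allow at most a 4-vertex 1-defective clique. The paper's bounds are much sharper because they are local to each branch of the search tree.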
42

Luo, Luo, Cheng Chen, Guangzeng Xie, and Haishan Ye. "Revisiting Co-Occurring Directions: Sharper Analysis and Efficient Algorithm for Sparse Matrices." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8793–800. http://dx.doi.org/10.1609/aaai.v35i10.17065.

Full text
Abstract:
We study the streaming model for approximate matrix multiplication (AMM). We are interested in the scenario that the algorithm can only take one pass over the data with limited memory. The state-of-the-art deterministic sketching algorithm for streaming AMM is the co-occurring directions (COD), which has much smaller approximation errors than randomized algorithms and outperforms other deterministic sketching methods empirically. In this paper, we provide a tighter error bound for COD whose leading term considers the potential approximate low-rank structure and the correlation of input matrices. We prove COD is space optimal with respect to our improved error bound. We also propose a variant of COD for sparse matrices with theoretical guarantees. The experiments on real-world sparse datasets show that the proposed algorithm is more efficient than baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
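COD builds on frequent-directions-style singular value shrinkage, extended to pairs of matrices. As a self-contained illustration of that underlying primitive (the single-matrix frequent directions sketch, not the COD algorithm itself):

```python
import numpy as np

def frequent_directions(A, ell):
    # Streaming sketch with a 2*ell-row buffer; deterministically guarantees
    # 0 <= A^T A - B^T B  and  ||A^T A - B^T B||_2 <= ||A||_F^2 / ell.
    n, d = A.shape
    B = np.zeros((2 * ell, d))
    nxt = 0
    for row in A:
        if nxt == 2 * ell:                       # buffer full: shrink
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell - 1] ** 2
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = np.zeros((2 * ell, d))
            B[: len(s)] = s[:, None] * Vt
            nxt = ell                            # rows ell.. are zero again
        B[nxt] = row
        nxt += 1
    return B

rng = np.random.default_rng(6)
A = rng.standard_normal((200, 40))
B = frequent_directions(A, ell=10)
E = A.T @ A - B.T @ B                            # sketching error (PSD)
```

Each shrink removes at least ell copies of delta from the squared Frobenius mass, which is what yields the ||A||_F^2 / ell spectral error budget; COD applies an analogous shrinkage to a pair of co-maintained sketches.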
43

Razborov, A. A. "More about sparse halves in triangle-free graphs." Sbornik: Mathematics 213, no. 1 (January 1, 2022): 109–28. http://dx.doi.org/10.1070/sm9615.

Full text
Abstract:
One of Erdős's conjectures states that every triangle-free graph on $n$ vertices has an induced subgraph on $n/2$ vertices with at most $n^2/50$ edges. We report several partial results towards this conjecture. In particular, we establish the new bound $27n^2/1024$ on the number of edges in the general case. We completely prove the conjecture for graphs of girth at least 5, for graphs with independence number at least $2n/5$ and for strongly regular graphs. Each of these three classes includes both known (conjectured) extremal configurations, the 5-cycle and the Petersen graph. Bibliography: 21 titles.
APA, Harvard, Vancouver, ISO, and other styles
44

Birdi, Jasleen, Audrey Repetti, and Yves Wiaux. "Sparse interferometric Stokes imaging under the polarization constraint (Polarized SARA)." Monthly Notices of the Royal Astronomical Society 478, no. 4 (July 4, 2018): 4442–63. http://dx.doi.org/10.1093/mnras/sty1182.

Full text
Abstract:
We develop a novel algorithm for sparse imaging of Stokes parameters in radio interferometry under the polarization constraint. The latter is a physical non-linear relation between the Stokes parameters, imposing the polarization intensity as a lower bound on the total intensity. To solve the joint inverse Stokes imaging problem including this bound, we leverage epigraphical projection techniques in convex optimization and we design a primal–dual method offering a highly flexible and parallelizable structure. In addition, we propose to regularize each Stokes parameter map through an average sparsity prior in the context of a reweighted analysis approach (SARA). The resulting method is dubbed Polarized SARA. Using simulated observations of M87 with the Event Horizon Telescope, we demonstrate that imposing the polarization constraint leads to superior image quality. For the considered data sets, the results also indicate better performance of the average sparsity prior in comparison with the widely used Cotton–Schwab clean algorithm and other total variation based priors for polarimetric imaging. Our MATLAB code is available online on GitHub.
APA, Harvard, Vancouver, ISO, and other styles
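The polarization constraint states that the polarized intensity can never exceed the total intensity: sqrt(Q² + U² + V²) ≤ I. A per-pixel Euclidean projection onto this second-order cone uses a standard formula; this sketch is mine and is unrelated to the paper's epigraphical implementation:

```python
import numpy as np

def project_polarization(I, Q, U, V):
    # Euclidean projection of a Stokes 4-tuple onto {sqrt(Q^2+U^2+V^2) <= I}
    p = np.sqrt(Q ** 2 + U ** 2 + V ** 2)
    if p <= I:
        return I, Q, U, V                  # already physical
    if I <= -p:
        return 0.0, 0.0, 0.0, 0.0          # projects to the cone's apex
    t = (I + p) / 2.0                      # standard second-order-cone projection
    scale = t / p
    return t, Q * scale, U * scale, V * scale
```

An unphysical pixel such as (I, Q, U, V) = (1, 2, 0, 0) is mapped to (1.5, 1.5, 0, 0), the nearest point where the polarized intensity equals the total intensity.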
45

Compaleo, Jacob, and Inder J. Gupta. "Application of Sparse Representation to Bartlett Spectra for Improved Direction of Arrival Estimation." Sensors 21, no. 1 (December 25, 2020): 77. http://dx.doi.org/10.3390/s21010077.

Full text
Abstract:
A new technique for high-resolution direction of arrival (DOA) estimation is presented. The method utilizes the traditional Bartlett spectra and sparse representation to locate emitters in single and multiple emitter scenarios. A method for selecting the sparse representation regularization parameter is also presented. Using Monte Carlo simulations, we show that the proposed approach achieves accurate DOA estimates that are unbiased, with a variance that approaches the Cramér–Rao lower bound. We show that our method outperforms the popular MUSIC algorithm, and is slightly better than the sparse representation based L1-SVD algorithm when the angular separation between emitters is small, signal SNR is low, and a small number of snapshots is used in DOA estimation.
APA, Harvard, Vancouver, ISO, and other styles
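The Bartlett spectrum that the method builds on scans a steering vector over candidate angles and evaluates the beamformer output power. A minimal single-emitter sketch for a uniform linear array, with hypothetical parameters of my choosing:

```python
import numpy as np

def bartlett_spectrum(R, angles_deg, m, d=0.5):
    # R: m x m sample covariance; d: element spacing in wavelengths
    out = []
    for th in angles_deg:
        a = np.exp(2j * np.pi * d * np.arange(m) * np.sin(np.radians(th)))
        out.append(np.real(a.conj() @ R @ a) / m ** 2)
    return np.array(out)

rng = np.random.default_rng(2)
m, snaps, true_doa = 8, 200, 20.0
a = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(true_doa)))
sig = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.01 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
X = np.outer(a, sig) + noise                  # array snapshots
R = X @ X.conj().T / snaps                    # sample covariance

angles = np.arange(-90.0, 90.5, 0.5)
est = angles[np.argmax(bartlett_spectrum(R, angles, m))]
```

The peak of the Bartlett spectrum sits at the true DOA for a single emitter; the paper's contribution is to sharpen this broad-beam spectrum via sparse representation so that closely spaced emitters can be resolved.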
46

Anzt, Hartwig, Stanimire Tomov, and Jack Dongarra. "On the performance and energy efficiency of sparse linear algebra on GPUs." International Journal of High Performance Computing Applications 31, no. 5 (October 5, 2016): 375–90. http://dx.doi.org/10.1177/1094342016672081.

Full text
Abstract:
In this paper we unveil some performance and energy efficiency frontiers for sparse computations on GPU-based supercomputers. We compare the resource efficiency of different sparse matrix–vector products (SpMV) taken from libraries such as cuSPARSE and MAGMA for GPU and Intel’s MKL for multicore CPUs, and develop a GPU sparse matrix–matrix product (SpMM) implementation that handles the simultaneous multiplication of a sparse matrix with a set of vectors in block-wise fashion. While a typical sparse computation such as the SpMV reaches only a fraction of the peak of current GPUs, we show that the SpMM succeeds in exceeding the memory-bound limitations of the SpMV. We integrate this kernel into a GPU-accelerated Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) eigensolver. LOBPCG is chosen as a benchmark algorithm for this study as it combines an interesting mix of sparse and dense linear algebra operations that is typical for complex simulation applications, and allows for hardware-aware optimizations. In a detailed analysis we compare the performance and energy efficiency against a multi-threaded CPU counterpart. The reported performance and energy efficiency results are indicative of sparse computations on supercomputers.
APA, Harvard, Vancouver, ISO, and other styles
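The memory-bound nature of SpMV is why blocking multiple vectors into one SpMM pays off: each loaded matrix entry is reused across all columns of the block. In SciPy terms (a sketch of the arithmetic only, not of the paper's GPU kernels):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(3)
A = sparse.random(1000, 1000, density=0.01, format="csr", random_state=3)
V = rng.standard_normal((1000, 8))       # block of 8 vectors

# eight separate SpMVs: A's nonzeros are streamed from memory eight times
svs = np.column_stack([A @ V[:, j] for j in range(8)])

# one SpMM: the same nonzeros are loaded once and reused across all columns
spmm = A @ V
```

Both produce identical results; the SpMM formulation is what lets the blocked LOBPCG kernel in the paper exceed the memory-bandwidth ceiling of repeated SpMVs.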
47

Duan, Huiping, Linxiao Yang, Jun Fang, and Hongbin Li. "Fast Inverse-Free Sparse Bayesian Learning via Relaxed Evidence Lower Bound Maximization." IEEE Signal Processing Letters 24, no. 6 (June 2017): 774–78. http://dx.doi.org/10.1109/lsp.2017.2692217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Xie, Yuan, Shaohan Huang, Tianyu Chen, and Furu Wei. "MoEC: Mixture of Expert Clusters." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13807–15. http://dx.doi.org/10.1609/aaai.v37i11.26617.

Full text
Abstract:
Sparse Mixture of Experts (MoE) models have received great interest due to their promising scaling capability with affordable computational overhead. MoE models convert dense layers into sparse experts, and utilize a gated routing network to make experts conditionally activated. However, as the number of experts grows, MoE with outrageous parameters suffers from overfitting and sparse data allocation. Such problems are especially severe on tasks with limited data, thus hindering progress towards improving performance by scaling up. We verify that there exists a performance upper bound to scaling up sparse MoE. In this work, we propose Mixture of Expert Clusters, a general approach to enable expert layers to learn more diverse and appropriate knowledge by imposing variance-based constraints on the routing stage. Given this, we further propose a cluster-level expert dropout strategy specifically designed for the expert cluster structure. Our experiments reveal that MoEC can improve performance on machine translation and natural language understanding tasks. MoEC plays a positive role in mitigating overfitting and sparse data allocation problems, thus fully releasing the potential of large-scale sparse models.
APA, Harvard, Vancouver, ISO, and other styles
49

Koutris, Paraschos, and Shaleen Deep. "The Fine-Grained Complexity of CFL Reachability." Proceedings of the ACM on Programming Languages 7, POPL (January 9, 2023): 1713–39. http://dx.doi.org/10.1145/3571252.

Full text
Abstract:
Many problems in static program analysis can be modeled as the context-free language (CFL) reachability problem on directed labeled graphs. The CFL reachability problem can generally be solved in time O(n^3), where n is the number of vertices in the graph, with some specific cases that can be solved faster. In this work, we ask the following question: given a specific CFL, what is the exact exponent in the monomial of the running time? In other words, for which cases do we have linear, quadratic or cubic algorithms, and are there problems with intermediate runtimes? This question is inspired by recent efforts to classify classic problems in terms of their exact polynomial complexity, known as fine-grained complexity. Although recent efforts have shown some conditional lower bounds (mostly for the class of combinatorial algorithms), a general picture of the fine-grained complexity landscape for CFL reachability is missing. Our main contribution is lower bound results that pinpoint the exact running time of several classes of CFLs or specific CFLs under widely believed lower bound conjectures (e.g., Boolean Matrix Multiplication, k-Clique, APSP, 3SUM). We particularly focus on the family of Dyck-k languages (strings with well-matched parentheses), a fundamental class of CFL reachability problems. Remarkably, we are able to show an Ω(n^2.5) lower bound for Dyck-2 reachability, which to the best of our knowledge is the first super-quadratic lower bound that applies to all algorithms, and shows that CFL reachability is strictly harder than Boolean Matrix Multiplication. We also present new lower bounds for the case of sparse input graphs, where the number of edges m is the input parameter, a common setting in the database literature. For this setting, we show a cubic lower bound for Andersen's Pointer Analysis, which significantly strengthens prior known results.
APA, Harvard, Vancouver, ISO, and other styles
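Dyck-k reachability asks, for each pair of vertices, whether some path spells a well-matched parenthesis string. A naive fixpoint computation for Dyck-1 (grammar S → S S | ( S ) | ε) makes the problem concrete; this is my illustrative sketch, far slower than the algorithms whose exact exponents the paper pins down:

```python
def dyck1_reach(n, opens, closes):
    # opens / closes: sets of directed edges labeled '(' and ')'
    S = {(v, v) for v in range(n)}            # S -> eps
    while True:
        new = set()
        for (u, x) in opens:                  # S -> ( S )
            for (a, b) in S:
                if a == x:
                    for (y, v) in closes:
                        if y == b:
                            new.add((u, v))
        for (a, b) in S:                      # S -> S S
            for (c, d) in S:
                if b == c:
                    new.add((a, d))
        if new <= S:
            return S
        S |= new

# path 0 -(-> 1 -(-> 2 -)-> 3 -)-> 4: balanced from 0 to 4 and from 1 to 3
R = dyck1_reach(5, {(0, 1), (1, 2)}, {(2, 3), (3, 4)})
```

On this chain, (1, 3) and (0, 4) are Dyck-reachable (the strings "()" and "(())"), while (0, 3) is not, since the path from 0 to 3 spells the unbalanced string "((".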
50

Ah-Fat, Patrick, and Michael Huth. "Protecting Private Inputs: Bounded Distortion Guarantees With Randomised Approximations." Proceedings on Privacy Enhancing Technologies 2020, no. 3 (July 1, 2020): 284–303. http://dx.doi.org/10.2478/popets-2020-0053.

Full text
Abstract:
Computing a function of some private inputs while maintaining the confidentiality of those inputs is an important problem, to which Differential Privacy and Secure Multi-party Computation can offer solutions under specific assumptions. Research in randomised algorithms aims at improving the privacy of such inputs by randomising the output of a computation while ensuring that large distortions of outputs occur with low probability. But use cases such as e-voting or auctions will not tolerate large distortions at all. Thus, we develop a framework for randomising the output of a privacy-preserving computation, while guaranteeing that output distortions stay within a specified bound. We analyse the privacy gains of our approach and characterise them more precisely for our notion of sparse functions. We build randomisation algorithms, running in linearithmic time in the number of possible input values, for this class of functions and we prove that the computed randomisations maximise the inputs' privacy. Experimental work demonstrates significant privacy gains when compared with existing approaches that guarantee distortion bounds, also for non-sparse functions.
APA, Harvard, Vancouver, ISO, and other styles
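The core idea of randomising within a hard distortion bound can be sketched as drawing from the admissible outputs only; a uniform choice maximises entropy over that set. This is a toy illustration of mine, not the authors' linearithmic algorithm:

```python
import random

def randomise_bounded(y, d, output_space, rng):
    # return a random output guaranteed to lie within distortion d of the true output y
    admissible = [o for o in output_space if abs(o - y) <= d]
    return rng.choice(admissible)

rng = random.Random(0)
draws = [randomise_bounded(42, 3, range(100), rng) for _ in range(200)]
```

Every released value stays within distortion 3 of the true output 42, while an adversary observing a release still faces genuine uncertainty about the exact value.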