Journal articles on the topic 'Algorithmes de sieving'

Consult the top 17 journal articles for your research on the topic 'Algorithmes de sieving.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1. Bai, Shi, Thijs Laarhoven, and Damien Stehlé. "Tuple lattice sieving." LMS Journal of Computation and Mathematics 19, A (2016): 146–62. http://dx.doi.org/10.1112/s1461157016000292.

Abstract:
Lattice sieving is asymptotically the fastest approach for solving the shortest vector problem (SVP) on Euclidean lattices. All known sieving algorithms for solving the SVP require space which (heuristically) grows as $2^{0.2075n+o(n)}$, where $n$ is the lattice dimension. In high dimensions, the memory requirement becomes a limiting factor for running these algorithms, making them uncompetitive with enumeration algorithms, despite their superior asymptotic time complexity. We generalize sieving algorithms to solve SVP with less memory. We consider reductions of tuples of vectors rather than pairs of vectors, as existing sieve algorithms do. For triples, we estimate that the space requirement scales as $2^{0.1887n+o(n)}$. The naive algorithm for this triple sieve runs in time $2^{0.5661n+o(n)}$. With appropriate filtering of pairs, we reduce the time complexity to $2^{0.4812n+o(n)}$ while keeping the same space complexity. We further analyze the effects of using larger tuples for reduction, and conjecture how this provides a continuous trade-off between the memory-intensive sieving and the asymptotically slower enumeration.
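
To make the reduction idea concrete, here is a minimal numpy sketch of pairwise (2-)reduction versus triple (3-)reduction of lattice vectors; it only illustrates the operation the abstract describes and is not the authors' implementation:

```python
import numpy as np

def try_reduce_pair(u, v):
    """Pairwise (2-)reduction: return u - v or u + v if either is shorter than u."""
    for cand in (u - v, u + v):
        if np.dot(cand, cand) < np.dot(u, u):
            return cand
    return None

def try_reduce_triple(u, v, w):
    """Triple (3-)reduction: look for a shorter combination u +/- v +/- w."""
    best, best_norm = None, np.dot(u, u)
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            cand = u + s1 * v + s2 * w
            n = np.dot(cand, cand)
            if n < best_norm:
                best, best_norm = cand, n
    return best
```
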
2. Mukhopadhyay, Priyanka. "Faster Provable Sieving Algorithms for the Shortest Vector Problem and the Closest Vector Problem on Lattices in ℓp Norm." Algorithms 14, no. 12 (December 13, 2021): 362. http://dx.doi.org/10.3390/a14120362.

Abstract:
In this work, we give provable sieving algorithms for the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP) on lattices in the ℓp norm (1≤p≤∞). The running time we obtain is better than that of existing provable sieving algorithms. We give a new linear sieving procedure that works for all ℓp norms (1≤p≤∞). The main idea is to divide the space into hypercubes such that each vector can be mapped efficiently to a sub-region. We achieve a time complexity of $2^{2.751n+o(n)}$, which is much less than the $2^{3.849n+o(n)}$ complexity of the previous best algorithm. We also introduce a mixed sieving procedure, where a point is mapped to a hypercube within a ball and then a quadratic sieve is performed within each hypercube. This improves the running time, especially in the ℓ2 norm, where we achieve a time complexity of $2^{2.25n+o(n)}$, while the List Sieve Birthday algorithm has a running time of $2^{2.465n+o(n)}$. We adapt our sieving techniques to approximation algorithms for SVP and CVP in the ℓp norm (1≤p≤∞) and show that our algorithm has a running time of $2^{2.001n+o(n)}$, while previous algorithms have a time complexity of $2^{3.169n+o(n)}$.
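
The hypercube idea can be sketched in a few lines: bucket vectors by the axis-aligned grid cell that contains them, so candidate pairs for reduction are found within a cell rather than by scanning all pairs. The function names and the `cell_width` parameter below are hypothetical, not from the paper:

```python
import numpy as np

def hypercube_bucket(v, cell_width):
    """Map a vector to the index of the grid cell containing it; vectors in
    the same cell are close in l-infinity distance, so they are promising
    candidates for reduction against each other."""
    return tuple(np.floor(v / cell_width).astype(int))

def bucket_all(vectors, cell_width):
    buckets = {}
    for v in vectors:
        buckets.setdefault(hypercube_bucket(v, cell_width), []).append(v)
    return buckets
```
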
3. Grémy, Laurent. "Higher-dimensional sieving for the number field sieve algorithms." Open Book Series 2, no. 1 (January 28, 2019): 275–91. http://dx.doi.org/10.2140/obs.2019.2.275.

4. Satılmış, Hami, Sedat Akleylek, and Cheng-Chi Lee. "Efficient Implementations of Sieving and Enumeration Algorithms for Lattice-Based Cryptography." Mathematics 9, no. 14 (July 8, 2021): 1618. http://dx.doi.org/10.3390/math9141618.

Abstract:
The security of lattice-based cryptosystems rests on solving hard lattice problems such as the shortest vector problem (SVP) and the closest vector problem (CVP). Various cryptanalysis algorithms, such as (Pro)GaussSieve, HashSieve, ENUM, and BKZ, have been proposed to solve these hard problems, and several implementations of these algorithms have been developed. At the same time, these implementations are expected to be efficient in terms of run time and memory space. In this paper, a modular software package/library containing efficient implementations of the GaussSieve, ProGaussSieve, HashSieve, and BKZ algorithms is developed. While constructing this software library, some modifications are made to the algorithms to increase their performance. The run times of these implementations are then compared with others. According to the experimental results, the proposed GaussSieve, ProGaussSieve, and HashSieve implementations are at least 70%, 75%, and 49% more efficient than previous ones, respectively.
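
For orientation, the core GaussSieve loop alternates between reducing a freshly sampled vector against the list and reducing list vectors against the sample, with shortened list vectors returning to a stack for later reprocessing. A simplified sketch under those assumptions, not the paper's optimized code:

```python
import numpy as np

def gauss_sieve_step(list_vectors, stack, sample):
    """One illustrative GaussSieve-style iteration."""
    changed = True
    while changed:                      # reduce sample against the list
        changed = False
        for w in list_vectors:
            for cand in (sample - w, sample + w):
                if np.dot(cand, cand) < np.dot(sample, sample):
                    sample, changed = cand, True
    survivors = []
    for w in list_vectors:              # reduce list vectors against sample
        reduced = None
        for cand in (w - sample, w + sample):
            if np.dot(cand, cand) < np.dot(w, w):
                reduced = cand
        if reduced is None:
            survivors.append(w)
        else:
            stack.append(reduced)       # shortened vectors get reprocessed
    survivors.append(sample)
    return survivors
```
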
5. Shi, Wenhao, Haodong Jiang, and Zhi Ma. "Solving HNP with One Bit Leakage: An Asymmetric Lattice Sieving Algorithm." Entropy 25, no. 1 (December 27, 2022): 49. http://dx.doi.org/10.3390/e25010049.

Abstract:
The Hidden Number Problem (HNP) was introduced by Boneh and Venkatesan to analyze the bit-security of the Diffie–Hellman key exchange scheme. It is often used to mount side-channel attacks on (EC)DSA. The hardness of HNP is mainly determined by the number of nonce leakage bits and the size of the modulus. With the development of lattice reduction algorithms and lattice sieving, the range of practically vulnerable parameters has been extended further. However, 1-bit leakage is still believed to be challenging for lattice attacks. In this paper, we propose an asymmetric lattice sieving algorithm that can solve HNP with 1-bit leakage. The algorithm is composed of a BKZ pre-processing step and a sieving step. The novel part of our lattice sieving algorithm is that the lattices used in these two steps have different dimensions. In particular, in the BKZ step we use more samples to derive a better lattice basis, while we use only a truncated lattice basis for the lattice sieving step. To verify our algorithm, we use it to solve HNP with 1-bit leakage and a 116-bit modulus.
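
As an illustration of the two-step structure, the sketch below assumes the fpylll library for the BKZ pre-processing and simply truncates the reduced basis before handing it to a sieve; it is a schematic of the asymmetric idea, not the authors' code:

```python
# Schematic only: BKZ-reduce the full basis, then keep a truncated
# sub-basis for the (separate) sieving step, as the abstract describes.
from fpylll import IntegerMatrix, BKZ

def preprocess_and_truncate(basis_rows, block_size, keep_dim):
    A = IntegerMatrix.from_matrix(basis_rows)
    BKZ.reduction(A, BKZ.Param(block_size))   # pre-process the full basis
    # keep only the first keep_dim rows for the lower-dimensional sieve
    return [[A[i, j] for j in range(A.ncols)] for i in range(keep_dim)]
```
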
6. Sengupta, Binanda, and Abhijit Das. "Use of SIMD-based data parallelism to speed up sieving in integer-factoring algorithms." Applied Mathematics and Computation 293 (January 2017): 204–17. http://dx.doi.org/10.1016/j.amc.2016.08.019.

7. Budroni, Alessandro, Qian Guo, Thomas Johansson, Erik Mårtensson, and Paul Stankovski Wagner. "Improvements on Making BKW Practical for Solving LWE." Cryptography 5, no. 4 (October 28, 2021): 31. http://dx.doi.org/10.3390/cryptography5040031.

Abstract:
The learning with errors (LWE) problem is one of the main mathematical foundations of post-quantum cryptography. One of the main groups of algorithms for solving LWE is the Blum–Kalai–Wasserman (BKW) algorithm. This paper presents new improvements to BKW-style algorithms for solving LWE instances. We target minimum concrete complexity, and we introduce a new reduction step in which we partially reduce the last position in an iteration and finish the reduction in the next iteration, allowing non-integer step sizes. We also introduce a new procedure in the secret recovery that maps the problem to binary problems and applies the fast Walsh–Hadamard transform. The complexity of the resulting algorithm compares favorably with all other previous approaches, including lattice sieving. We additionally show the steps of implementing the approach for large LWE problem instances. We provide two implementations of the algorithm: one RAM-based approach optimized for speed, and one file-based approach that overcomes RAM limitations by using file-based storage.
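
The binary secret-recovery step hinges on the fast Walsh–Hadamard transform, which is short enough to show in full; this standard in-place version (not the paper's optimized implementation) runs in O(n log n):

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform of a list whose length is a
    power of two; in BKW-style recovery it is used to find the most biased
    binary secret candidate."""
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```
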
8. Purinton, Benjamin, and Bodo Bookhagen. "Introducing PebbleCounts: a grain-sizing tool for photo surveys of dynamic gravel-bed rivers." Earth Surface Dynamics 7, no. 3 (September 17, 2019): 859–77. http://dx.doi.org/10.5194/esurf-7-859-2019.

Abstract:
Grain-size distributions are a key geomorphic metric of gravel-bed rivers. Traditional measurement methods include manual counting or photo sieving, but these are achievable only at the 1–10 m² scale. With the advent of drones and increasingly high-resolution cameras, we can now generate orthoimagery over hectares at millimeter to centimeter resolution. These scales, along with the complexity of high-mountain rivers, necessitate different approaches for photo sieving. As opposed to other image segmentation methods that use a watershed approach, our open-source algorithm, PebbleCounts, relies on k-means clustering in the spatial and spectral domain and rapid manual selection of well-delineated grains. This improves grain-size estimates for complex riverbed imagery, without post-processing. We also develop a fully automated method, PebbleCountsAuto, that relies on edge detection and filtering suspect grains, without the k-means clustering or manual selection steps. The algorithms are tested in controlled indoor conditions on three arrays of pebbles and then applied to 12 × 1 m² orthomosaic clips of high-energy mountain rivers collected with a camera-on-mast setup (akin to a low-flying drone). A 20-pixel b-axis length lower truncation is necessary for attaining accurate grain-size distributions. For the k-means PebbleCounts approach, average percentile bias and precision are 0.03 and 0.09 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, and 0.07 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. The automatic approach has higher bias and precision of 0.13 and 0.15 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, but similar values of −0.06 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. For the automatic approach, only at best 70% of the grains are correct identifications, and typically around 50%. PebbleCounts operates most effectively at the 1 m² patch scale, where it can be applied in ∼5–10 min on many patches to acquire accurate grain-size data over 10–100 m² areas. These data can be used to validate PebbleCountsAuto, which may be applied at the scale of entire survey sites (10²–10⁴ m²). We synthesize results and recommend best practices for image collection, orthomosaic generation, and grain-size measurement using both algorithms.
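
Joint spatial-spectral clustering of this kind can be imitated with scikit-learn's KMeans by concatenating scaled pixel coordinates with colour values; the function name and the `spatial_weight` parameter below are hypothetical illustrations, not PebbleCounts itself:

```python
import numpy as np
from sklearn.cluster import KMeans

def spatial_spectral_labels(image, n_clusters=8, spatial_weight=0.5):
    """Cluster pixels on (scaled row, scaled col, colour channels)."""
    h, w, c = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy, xx], axis=-1).reshape(-1, 2).astype(float)
    coords *= spatial_weight / max(h, w)       # weight spatial vs spectral
    colours = image.reshape(-1, c).astype(float) / 255.0
    features = np.hstack([coords, colours])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return labels.reshape(h, w)
```
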
9. Wang, Shouhua, Shuaihu Wang, and Xiyan Sun. "A Multi-Scale Anti-Multipath Algorithm for GNSS-RTK Monitoring Application." Sensors 23, no. 20 (October 11, 2023): 8396. http://dx.doi.org/10.3390/s23208396.

Abstract:
During short baseline measurements with the Real-Time Kinematic Global Navigation Satellite System (GNSS-RTK), multipath error has a significant impact on the quality of observed data. Targeting the characteristics of multipath error in GNSS-RTK measurements, a novel method that combines improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and adaptive wavelet packet threshold denoising (AWPTD) is proposed to reduce the effects of multipath error through modal function decomposition, effective coefficient sieving, and adaptive threshold denoising. It first utilizes the ICEEMDAN algorithm to decompose the observed data into a series of intrinsic mode functions (IMFs). Then, a novel IMF selection method based on information entropy is designed to accurately locate the IMFs containing multipath error information. Finally, an optimized adaptive denoising method is applied to the selected IMFs to preserve the original signal characteristics as far as possible and improve the accuracy of the multipath error correction model. This study shows that the ICEEMDAN-AWPTD algorithm provides a multipath error correction model with higher accuracy than individual filtering algorithms, based on results from both simulated data and GNSS-RTK data. After the multipath correction, the accuracy of the E, N, and U coordinates increased by 49.2%, 65.1%, and 56.6%, respectively.
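
An entropy-based IMF selection step might look like the following sketch, which scores each IMF by the Shannon entropy of its normalized power spectrum and keeps the low-entropy (structured) components; the paper's exact criterion is not specified in the abstract, so this is only one plausible reading:

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy of a signal's normalized power spectrum."""
    p = np.abs(np.fft.rfft(signal)) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def select_imfs(imfs, threshold):
    """Keep IMFs whose spectral entropy is below a threshold, i.e. the
    structured components most likely to carry multipath error."""
    return [imf for imf in imfs if spectral_entropy(imf) < threshold]
```
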
10. Nowak, Damian, Rafał Adam Bachorz, and Marcin Hoffmann. "Neural Networks in the Design of Molecules with Affinity to Selected Protein Domains." International Journal of Molecular Sciences 24, no. 2 (January 16, 2023): 1762. http://dx.doi.org/10.3390/ijms24021762.

Abstract:
Drug design with machine learning support can speed up new drug discoveries. While current databases of known compounds are smaller in magnitude (approximately 10⁸ entries), the number of small drug-like molecules is estimated to be between 10²³ and 10⁶⁰. The use of molecular docking algorithms can help in new drug development by sieving out the worst drug-receptor complexes. New chemical spaces can be efficiently searched with the application of artificial intelligence, and new structures can be proposed from them. The proposed research aims to create new chemical structures, supported by a deep neural network, that possess an affinity to selected protein domains. Transferring chemical structures into SELFIES codes allowed us to pass chemical information to a neural network, and new chemical structures can be created on the basis of vectorized SELFIES. With the trained neural network, novel compounds that are chemically sensible can be generated. Newly created chemical structures are sieved by the quantitative estimation of drug-likeness descriptor, Lipinski's rule of 5, and the synthetic Bayesian accessibility classifier score. The affinity to selected protein domains was verified with the AutoDock tool. As per the results, we obtained structures that possess an affinity to the selected protein domains, namely PDB IDs 7NPC, 7NP5, and 7KXD.
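
Round-tripping a molecule through SELFIES is straightforward with the `selfies` Python package that this kind of workflow builds on; the integer token encoding at the end is a hypothetical stand-in for the vectorization fed to a network:

```python
import selfies as sf

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"          # aspirin, as SMILES
encoded = sf.encoder(smiles)                  # robust SELFIES token string
decoded = sf.decoder(encoded)                 # back to a SMILES string

# Integer-encode the token sequence over its alphabet (illustrative):
alphabet = sorted(sf.get_alphabet_from_selfies([encoded]))
tokens = list(sf.split_selfies(encoded))
vector = [alphabet.index(t) for t in tokens]
```
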
11. Jiao, Lin, Yongqiang Li, and Yonglin Hao. "A Guess-And-Determine Attack On SNOW-V Stream Cipher." Computer Journal 63, no. 12 (March 6, 2020): 1789–812. http://dx.doi.org/10.1093/comjnl/bxaa003.

Abstract:
The 5G mobile communication system is coming with a main objective, also known as IMT-2020, that intends to increase current data rates up to several gigabits per second. To meet the accompanying demand for super high-speed encryption, the EIA and EEA algorithms face some challenges. The 3GPP standardization organization expects to increase the security level to a 256-bit key length, and the international cryptographic community has responded actively with cipher designs and standard applications. SNOW-V is such a proposal, offered by the SNOW family design team, with a revision of the SNOW 3G architecture in terms of the linear feedback shift register (LFSR) and finite state machine (FSM): the LFSR part is new and operates at eight times the speed of the FSM, consisting of two shift registers each feeding into the other, while the FSM grows to three 128-bit registers and employs two instances of the full AES encryption round function for its update. It takes a 128-bit IV, employs 896 bits of internal state, and produces 128-bit keystream blocks. The result is competitive in a pure software environment, making use of both AES-NI and AVX acceleration instructions. Thus, the security evaluation of SNOW-V is essential and urgent, since there is scarcely any definite security bound for it. In this paper, we propose a byte-based guess-and-determine attack on SNOW-V with complexity $2^{406}$ using only seven keystream blocks. We first improve the heuristic guessing-path auto-searching algorithm based on dynamic programming by adding an initial guessing set, which is iteratively modified by sieving out unnecessary guessing variables, in order to correct the guessing path according to the cipher structure and finally launch from a smaller guessing basis. For the specific design, we split all the computing units into bytes and rewrite all the internal operations correspondingly. We establish a backward-clock linear equation system according to the circular construction of the LFSR part. Then we further simplify the equations to meet the input requirements of the heuristic guessing-path auto-searching algorithm. Finally, the derived guessing path needs modification for the pre-simplification and post-reduction. This is the first complete guess-and-determine attack on SNOW-V, as well as the first dedicated security evaluation of the full cipher.
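
A toy guess-and-determine path search conveys the flavour of the guessing-basis construction: propagate any equation left with a single unknown, and otherwise greedily guess the most-connected variable. This is an illustrative heuristic, not the paper's dynamic-programming auto-search:

```python
def guess_and_determine(equations, variables):
    """equations: list of sets of variable names; each equation lets us
    solve for its last unknown once the others are known. Returns the
    guessing basis found by the greedy strategy."""
    known, guessed = set(), []
    while len(known) < len(variables):
        progress = True
        while progress:                       # propagate determinations
            progress = False
            for eq in equations:
                unknown = eq - known
                if len(unknown) == 1:
                    known |= unknown
                    progress = True
        remaining = set(variables) - known
        if not remaining:
            break
        # greedy choice: guess the variable appearing in most equations
        best = max(remaining, key=lambda v: sum(v in eq for eq in equations))
        guessed.append(best)
        known.add(best)
    return guessed
```
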
12. Doulgerakis, Emmanouil, Thijs Laarhoven, and Benne de Weger. "The irreducible vectors of a lattice." Designs, Codes and Cryptography, October 18, 2022. http://dx.doi.org/10.1007/s10623-022-01119-y.

Abstract:
The main idea behind lattice sieving algorithms is to reduce a sufficiently large number of lattice vectors with each other so that a set of short enough vectors is obtained. It is therefore natural to study vectors which cannot be reduced. In this work we give a concrete definition of an irreducible vector and study the properties of the set of all such vectors. We show that the set of irreducible vectors is a subset of the set of Voronoi relevant vectors and study its properties. For extremal lattices this set may contain as many as $2^n$ vectors, which leads us to define the notion of a complete system of irreducible vectors, whose size can be upper-bounded by the kissing number. One of our main results shows that modified heuristic sieving algorithms heuristically approximate such a set (modulo sign). We provide experiments in low dimensions which support this theory. Finally we give some applications of this set in the study of lattice problems such as SVP, SIVP and CVPP. The introduced notions, as well as various results derived along the way, may provide further insights into lattice algorithms and motivate new research into understanding these algorithms better.
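
One natural reading of reducibility suggested by the abstract is that a vector is reducible if it splits into two strictly shorter lattice vectors; below is a sketch of that test against a finite candidate list, offered as an illustration rather than the paper's definition verbatim:

```python
import numpy as np

def is_reducible(v, lattice_vectors):
    """Test whether v = w + (v - w) for some candidate w with both parts
    strictly shorter than v (checked against a finite list only)."""
    nv = np.dot(v, v)
    for w in lattice_vectors:
        rest = v - w
        if 0 < np.dot(w, w) < nv and np.dot(rest, rest) < nv:
            return True
    return False
```
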
13. Xu, Luyao, Zhengyi Dai, Baofeng Wu, and Dongdai Lin. "Improved Attacks on (EC)DSA with Nonce Leakage by Lattice Sieving with Predicate." IACR Transactions on Cryptographic Hardware and Embedded Systems, March 6, 2023, 568–86. http://dx.doi.org/10.46586/tches.v2023.i2.568-586.

Abstract:
Lattice reduction algorithms have proved to be one of the most powerful and versatile tools in public key cryptanalysis. In this work, we concentrate on lattice attacks against (EC)DSA with nonce leakage obtained via side-channel analysis. Previous works relying on lattice reduction algorithms such as LLL and BKZ eventually hit the "lattice barrier": lattice algorithms become infeasible when fewer nonce bits are known. Recently, Albrecht and Heninger introduced lattice algorithms augmented with a predicate and broke the lattice barrier (Eurocrypt 2021). We improve their work in several aspects. We first propose a more efficient predicate algorithm which aims to search for the target lattice vector in a large database. Then, we combine the sieving-with-predicate algorithm with the "dimensions for free" and "progressive sieving" techniques to further improve the performance of our attacks. Furthermore, we give a theoretical analysis of how to choose the optimal Kannan embedding factor. As a result, our algorithm outperforms the state-of-the-art lattice attacks for existing records, such as 3-bit nonce leakage for a 256-bit curve and 2-bit nonce leakage for a 160-bit curve, in terms of running time, sample numbers, and success probability. We also break the lattice records on the 384-bit curve with 3-bit nonce leakage and the 256-bit curve with 2-bit nonce leakage, which were previously thought infeasible. Finally, we give the first lattice attack against ECDSA with a single-bit nonce leakage, which enables us to break a 112-bit curve with 1-bit nonce leakage in practical time.
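
The predicate step itself is conceptually simple: scan the sieve's database of short vectors for one satisfying an instance-specific test. A minimal sketch, with the `predicate` callable standing in for the key-recovery check:

```python
import numpy as np

def search_with_predicate(database, predicate):
    """Scan short vectors (shortest first) for one satisfying the
    instance-specific predicate, e.g. 'this vector reveals the
    (EC)DSA secret key'; returns None if no vector qualifies."""
    for v in sorted(database, key=lambda u: np.dot(u, u)):
        if predicate(v):
            return v
    return None
```
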
14. Zhu, Jianying, Qi Zhang, Hui Zhang, Zuoqiang Shi, Mingxu Hu, and Chenglong Bao. "A minority of final stacks yields superior amplitude in single-particle cryo-EM." Nature Communications 14, no. 1 (December 10, 2023). http://dx.doi.org/10.1038/s41467-023-43555-x.

Abstract:
Cryogenic electron microscopy (cryo-EM) is widely used to determine near-atomic resolution structures of biological macromolecules. Due to the low signal-to-noise ratio, cryo-EM relies on averaging many images. However, a crucial question in the field of cryo-EM remains unanswered: how close can we get to the minimum number of particles required to reach a specific resolution in practice? The absence of an answer to this question has impeded progress in understanding sample behavior and the performance of sample preparation methods. To address this issue, we develop an iterative particle sorting and/or sieving method called CryoSieve. Extensive experiments demonstrate that CryoSieve outperforms other cryo-EM particle sorting algorithms, revealing that most particles are unnecessary in final stacks. The minority of particles remaining in the final stacks yield superior high-resolution amplitude in reconstructed density maps. For some datasets, the size of the finest subset approaches the theoretical limit.
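
The outer loop of an iterative sieving scheme can be sketched as repeatedly retaining the best-scoring fraction of particles; CryoSieve rescores between rounds, which this fixed-score illustration omits, so treat it only as a schematic of the idea:

```python
import numpy as np

def iterative_sieve(particles, scores, keep_fraction=0.75, rounds=5):
    """Keep the top-scoring fraction of particles in each round
    (illustrative: real pipelines recompute scores between rounds)."""
    kept = np.argsort(scores)[::-1]            # indices, best score first
    for _ in range(rounds):
        kept = kept[: max(1, int(len(kept) * keep_fraction))]
    return [particles[i] for i in kept]
```
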
15. Klausner, Martina. "Calculating Therapeutic Compliance." Science & Technology Studies, September 21, 2018, 30–51. http://dx.doi.org/10.23987/sts.66375.

Abstract:
This article discusses calculation practices in the development of a monitoring device aimed at improving the therapeutic compliance of children and teenagers suffering from a deformation of the spine. In managing the complexities of physical parameters, therapeutic measures, and interventions in everyday life, numbers are central participants in inferring from and interfering into bodies and behaviours. Numbers are input and output of such monitoring systems, translating, circulating, and visualizing body conditions and therapeutic effects, and suggesting action. At the core of this generative process of capturing and interpreting data are algorithms, common participants in managing the complexities of vast amounts of data and providing the basis for interference in people's lives. They process data and provide seemingly unambiguous numerical outcomes based on the mathematical-technological processing of information. Attending to the incremental process of "learning algorithms" as a central part of the system's development allows me to describe the robustness of certain modes of inference. Beyond using this specific case as an exemplar of computer-based numerical inference and interference, the article attempts to probe and complement two theoretical approaches to the numerical management of complexity: Helen Verran's focus on numbers' performative properties and the potential tensions arising from divergent numerical orderings; and Paul Kockelman's sieving of inferential and indexical chains along the generation of meaning and ontological transformativities.

16. Hittmeir, Markus. "Smooth Subsum Search: A Heuristic for Practical Integer Factorization." International Journal of Foundations of Computer Science, December 29, 2023, 1–26. http://dx.doi.org/10.1142/s0129054123500296.

Abstract:
The two currently fastest general-purpose integer factorization algorithms are the Quadratic Sieve and the Number Field Sieve. Both techniques are used to find so-called smooth values of certain polynomials, i.e., values that factor completely over a set of small primes (the factor base). As the names of the methods suggest, a sieving procedure is used for the task of quickly identifying smooth values among the candidates in a certain range. While the Number Field Sieve is asymptotically faster, the Quadratic Sieve is still considered the most efficient factorization technique for numbers up to around 100 digits. In this paper, we challenge the Quadratic Sieve by presenting a novel approach based on representing smoothness candidates as sums that are always divisible by several of the primes in the factor base. The resulting values are generally smaller than those considered in the Quadratic Sieve, increasing the likelihood of them being smooth. Using the fastest implementations of the Self-initializing Quadratic Sieve in Python as benchmarks, a Python implementation of our approach runs consistently 5 to 7 times faster for numbers with 45–100 digits, and around 10 times faster for numbers with 30–40 digits. We discuss several avenues for further improvements and applications of the technique.
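
The smoothness test at the heart of both sieves is easy to state: a value is smooth over the factor base exactly when trial division by the base primes leaves 1. Sieving batches this test across a whole range of candidates, but the criterion is the one below:

```python
def is_smooth(n, factor_base):
    """Trial-divide n by the primes of the factor base; n is smooth
    precisely when nothing is left over afterwards."""
    for p in factor_base:
        while n % p == 0:
            n //= p
    return n == 1

# e.g. is_smooth(9000, [2, 3, 5]) -> True, since 9000 = 2^3 * 3^2 * 5^3;
#      is_smooth(9001, [2, 3, 5]) -> False.
```
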
17. Wu, Can, Ying Cui, Donghui Li, and Defeng Sun. "Convex and Nonconvex Risk-Based Linear Regression at Scale." INFORMS Journal on Computing, April 11, 2023. http://dx.doi.org/10.1287/ijoc.2023.1282.

Abstract:
The value at risk (VaR) and the conditional value at risk (CVaR) are two popular risk measures used to hedge against the uncertainty of data. In this paper, we provide a computational toolbox for solving high-dimensional sparse linear regression problems under either the VaR or the CVaR measure, the former being nonconvex and the latter convex. Unlike empirical risk (neutral) minimization models, in which the overall losses are decomposable across data, the aforementioned risk-sensitive models have nonseparable objective functions, so typical first-order algorithms are not easy to scale. We address this scaling issue by adopting a semismooth Newton-based proximal augmented Lagrangian method for the convex CVaR linear regression problem. The matrix structures of the Newton systems are carefully explored to reduce the computational cost per iteration. The method is further embedded in a majorization-minimization algorithm as a subroutine to tackle the nonconvex VaR-based regression problem. We also discuss an adaptive sieving strategy to iteratively guess and adjust the effective problem dimension, which is particularly useful when a solution path associated with a sequence of tuning parameters is needed. Extensive numerical experiments on both synthetic and real data demonstrate the effectiveness of our proposed methods. In particular, they are about 53 times faster than the commercial package Gurobi for CVaR-based sparse linear regression with 4,265,669 features and 16,087 observations. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms–Continuous. Funding: This work was supported in part by the NSF Division of Computing and Communication Foundations [Grant 2153352], the National Natural Science Foundation of China [Grant 12271187], and the Hong Kong Research Grants Council [Grant 15304019]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1282) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0012) at (http://dx.doi.org/10.5281/zenodo.7483279).
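
For comparison with such specialized solvers, sparse CVaR linear regression can be written directly in cvxpy via the Rockafellar-Uryasev reformulation; this generic formulation (the function name and default parameters are illustrative) is far less scalable than the paper's method but shows the model being solved:

```python
import cvxpy as cp
import numpy as np

def cvar_lasso(X, y, alpha=0.9, lam=0.1):
    """Minimize CVaR_alpha of the absolute residuals plus an l1 penalty,
    using the auxiliary variable t from the Rockafellar-Uryasev form."""
    n, d = X.shape
    beta = cp.Variable(d)
    t = cp.Variable()
    losses = cp.abs(y - X @ beta)              # per-observation loss
    cvar = t + cp.sum(cp.pos(losses - t)) / ((1 - alpha) * n)
    cp.Problem(cp.Minimize(cvar + lam * cp.norm1(beta))).solve()
    return beta.value

# e.g.: beta = cvar_lasso(np.random.randn(100, 10), np.random.randn(100))
```
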
