Journal articles on the topic "Sieving algorithms"

To see the other types of publications on this topic, follow the link: Sieving algorithms.

Consult the top 50 journal articles for your research on the topic "Sieving algorithms".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in PDF format and read its abstract online whenever the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Bai, Shi, Thijs Laarhoven, and Damien Stehlé. "Tuple lattice sieving". LMS Journal of Computation and Mathematics 19, A (2016): 146–62. http://dx.doi.org/10.1112/s1461157016000292.

Abstract:
Lattice sieving is asymptotically the fastest approach for solving the shortest vector problem (SVP) on Euclidean lattices. All known sieving algorithms for solving the SVP require space which (heuristically) grows as $2^{0.2075n+o(n)}$, where $n$ is the lattice dimension. In high dimensions, the memory requirement becomes a limiting factor for running these algorithms, making them uncompetitive with enumeration algorithms, despite their superior asymptotic time complexity. We generalize sieving algorithms to solve SVP with less memory. We consider reductions of tuples of vectors rather than pairs of vectors, as existing sieve algorithms do. For triples, we estimate that the space requirement scales as $2^{0.1887n+o(n)}$. The naive algorithm for this triple sieve runs in time $2^{0.5661n+o(n)}$. With appropriate filtering of pairs, we reduce the time complexity to $2^{0.4812n+o(n)}$ while keeping the same space complexity. We further analyze the effects of using larger tuples for reduction, and conjecture how this provides a continuous trade-off between the memory-intensive sieving and the asymptotically slower enumeration.
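The pairwise reduction step that all such sieves build on can be illustrated with a minimal sketch. This is not the authors' tuple sieve; it is a toy Gauss-sieve-style pair reduction over integer vectors, with all function names chosen here for illustration:

```python
def sq_norm(v):
    """Squared Euclidean norm of a vector given as a tuple of ints."""
    return sum(x * x for x in v)

def reduce_pair(v, w):
    """If v +/- w is shorter than v, return that shorter vector, else v."""
    for s in (1, -1):
        cand = tuple(a + s * b for a, b in zip(v, w))
        if sq_norm(cand) < sq_norm(v):
            return cand
    return v

def pair_sieve(vectors):
    """Repeatedly reduce every vector against every other until no pair
    of vectors can shorten each other any further."""
    vecs = [tuple(v) for v in vectors]
    changed = True
    while changed:
        changed = False
        for i in range(len(vecs)):
            for j in range(len(vecs)):
                if i != j:
                    shorter = reduce_pair(vecs[i], vecs[j])
                    if sq_norm(shorter) < sq_norm(vecs[i]):
                        vecs[i] = shorter
                        changed = True
    return vecs

# The vectors (5, 3) and (2, 1) generate the integer lattice Z^2,
# whose shortest nonzero vectors have squared norm 1.
short_vecs = pair_sieve([(5, 3), (2, 1)])
```

A tuple sieve in the paper's sense reduces sums of three or more vectors at once instead of pairs, trading time for the lower memory footprint quoted above.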
2

Mukhopadhyay, Priyanka. "Faster Provable Sieving Algorithms for the Shortest Vector Problem and the Closest Vector Problem on Lattices in ℓp Norm". Algorithms 14, no. 12 (December 13, 2021): 362. http://dx.doi.org/10.3390/a14120362.

Abstract:
In this work, we give provable sieving algorithms for the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP) on lattices in the ℓp norm (1 ≤ p ≤ ∞). The running time we obtain is better than that of existing provable sieving algorithms. We give a new linear sieving procedure that works for all ℓp norms (1 ≤ p ≤ ∞). The main idea is to divide the space into hypercubes such that each vector can be mapped efficiently to a sub-region. We achieve a time complexity of $2^{2.751n+o(n)}$, which is much less than the $2^{3.849n+o(n)}$ complexity of the previous best algorithm. We also introduce a mixed sieving procedure, where a point is mapped to a hypercube within a ball and then a quadratic sieve is performed within each hypercube. This improves the running time, especially in the ℓ2 norm, where we achieve a time complexity of $2^{2.25n+o(n)}$, while the List Sieve Birthday algorithm has a running time of $2^{2.465n+o(n)}$. We adapt our sieving techniques to approximation algorithms for SVP and CVP in the ℓp norm (1 ≤ p ≤ ∞) and show that our algorithm has a running time of $2^{2.001n+o(n)}$, while previous algorithms have a time complexity of $2^{3.169n+o(n)}$.
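The hypercube-bucketing idea described above (mapping each vector to a cell so that only vectors sharing a cell need to be compared) can be sketched as follows. The cell width and the sample points are illustrative choices, not the paper's parameters:

```python
from collections import defaultdict

def bucket_by_hypercube(vectors, cell_width):
    """Map each vector to the axis-aligned hypercube cell containing it.
    During sieving, candidate reductions are then sought only among
    vectors that fall into the same cell."""
    buckets = defaultdict(list)
    for v in vectors:
        # Integer cell coordinates: floor-divide each coordinate by the width.
        cell = tuple(int(x // cell_width) for x in v)
        buckets[cell].append(v)
    return buckets

# Two points share the unit cell (0, 0); the third lands in cell (1, 0).
buckets = bucket_by_hypercube([(0.2, 0.4), (0.3, 0.1), (1.7, 0.2)], cell_width=1.0)
```

The payoff is that each vector is routed to its cell in time linear in the dimension, instead of being compared against every other vector in the list.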
3

Shi, Wenhao, Haodong Jiang, and Zhi Ma. "Solving HNP with One Bit Leakage: An Asymmetric Lattice Sieving Algorithm". Entropy 25, no. 1 (December 27, 2022): 49. http://dx.doi.org/10.3390/e25010049.

Abstract:
The Hidden Number Problem (HNP) was introduced by Boneh and Venkatesan to analyze the bit-security of the Diffie–Hellman key exchange scheme. It is often used to mount a side-channel attack on (EC)DSA. The hardness of HNP is mainly determined by the number of nonce leakage bits and the size of the modulus. With the development of lattice reduction algorithms and lattice sieving, the range of practically vulnerable parameters has been extended further. However, 1-bit leakage is still believed to be challenging for lattice attacks. In this paper, we propose an asymmetric lattice sieving algorithm that can solve HNP with 1-bit leakage. The algorithm is composed of a BKZ pre-processing step and a sieving step. The novel part of our lattice sieving algorithm is that the lattices used in these two steps have different dimensions. In particular, in the BKZ step we use more samples to derive a better lattice basis, while we use only a truncated lattice basis for the sieving step. To verify our algorithm, we use it to solve HNP with 1-bit leakage and a 116-bit modulus.
4

Grémy, Laurent. "Higher-dimensional sieving for the number field sieve algorithms". Open Book Series 2, no. 1 (January 28, 2019): 275–91. http://dx.doi.org/10.2140/obs.2019.2.275.
5

Satılmış, Hami, Sedat Akleylek, and Cheng-Chi Lee. "Efficient Implementations of Sieving and Enumeration Algorithms for Lattice-Based Cryptography". Mathematics 9, no. 14 (July 8, 2021): 1618. http://dx.doi.org/10.3390/math9141618.

Abstract:
The security of lattice-based cryptosystems is based on solving hard lattice problems such as the shortest vector problem (SVP) and the closest vector problem (CVP). Various cryptanalysis algorithms such as (Pro)GaussSieve, HashSieve, ENUM, and BKZ have been proposed to solve these hard problems. Several implementations of these algorithms have been developed. On the other hand, the implementations of these algorithms are expected to be efficient in terms of run time and memory space. In this paper, a modular software package/library containing efficient implementations of GaussSieve, ProGaussSieve, HashSieve, and BKZ algorithms is developed. These implementations are considered efficient in terms of run time. While constructing this software library, some modifications to the algorithms are made to increase the performance. Then, the run times of these implementations are compared with the others. According to the experimental results, the proposed GaussSieve, ProGaussSieve, and HashSieve implementations are at least 70%, 75%, and 49% more efficient than previous ones, respectively.
6

Budroni, Alessandro, Qian Guo, Thomas Johansson, Erik Mårtensson, and Paul Stankovski Wagner. "Improvements on Making BKW Practical for Solving LWE". Cryptography 5, no. 4 (October 28, 2021): 31. http://dx.doi.org/10.3390/cryptography5040031.

Abstract:
The learning with errors (LWE) problem is one of the main mathematical foundations of post-quantum cryptography. One of the main groups of algorithms for solving LWE is the Blum–Kalai–Wasserman (BKW) algorithm. This paper presents new improvements of BKW-style algorithms for solving LWE instances. We target minimum concrete complexity, and we introduce a new reduction step where we partially reduce the last position in an iteration and finish the reduction in the next iteration, allowing non-integer step sizes. We also introduce a new procedure in the secret recovery by mapping the problem to binary problems and applying the fast Walsh–Hadamard transform. The complexity of the resulting algorithm compares favorably with all other previous approaches, including lattice sieving. We additionally show the steps of implementing the approach for large LWE problem instances. We provide two implementations of the algorithm: one RAM-based approach that is optimized for speed, and one file-based approach which overcomes RAM limitations by using file-based storage.
7

Sengupta, Binanda, and Abhijit Das. "Use of SIMD-based data parallelism to speed up sieving in integer-factoring algorithms". Applied Mathematics and Computation 293 (January 2017): 204–17. http://dx.doi.org/10.1016/j.amc.2016.08.019.
8

Purinton, Benjamin, and Bodo Bookhagen. "Introducing PebbleCounts: a grain-sizing tool for photo surveys of dynamic gravel-bed rivers". Earth Surface Dynamics 7, no. 3 (September 17, 2019): 859–77. http://dx.doi.org/10.5194/esurf-7-859-2019.

Abstract:
Grain-size distributions are a key geomorphic metric of gravel-bed rivers. Traditional measurement methods include manual counting or photo sieving, but these are achievable only at the 1–10 m² scale. With the advent of drones and increasingly high-resolution cameras, we can now generate orthoimagery over hectares at millimeter to centimeter resolution. These scales, along with the complexity of high-mountain rivers, necessitate different approaches for photo sieving. As opposed to other image segmentation methods that use a watershed approach, our open-source algorithm, PebbleCounts, relies on k-means clustering in the spatial and spectral domain and rapid manual selection of well-delineated grains. This improves grain-size estimates for complex riverbed imagery, without post-processing. We also develop a fully automated method, PebbleCountsAuto, that relies on edge detection and filtering of suspect grains, without the k-means clustering or manual selection steps. The algorithms are tested in controlled indoor conditions on three arrays of pebbles and then applied to 12 × 1 m² orthomosaic clips of high-energy mountain rivers collected with a camera-on-mast setup (akin to a low-flying drone). A 20-pixel b-axis length lower truncation is necessary for attaining accurate grain-size distributions. For the k-means PebbleCounts approach, average percentile bias and precision are 0.03 and 0.09 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, and 0.07 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. The automatic approach has higher bias and precision of 0.13 and 0.15 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, but similar values of −0.06 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. For the automatic approach, only 70 % of the grains at best are correct identifications, and typically around 50 %. PebbleCounts operates most effectively at the 1 m² patch scale, where it can be applied in ∼5–10 min on many patches to acquire accurate grain-size data over 10–100 m² areas. These data can be used to validate PebbleCountsAuto, which may be applied at the scale of entire survey sites (10²–10⁴ m²). We synthesize results and recommend best practices for image collection, orthomosaic generation, and grain-size measurement using both algorithms.
9

Wang, Shouhua, Shuaihu Wang, and Xiyan Sun. "A Multi-Scale Anti-Multipath Algorithm for GNSS-RTK Monitoring Application". Sensors 23, no. 20 (October 11, 2023): 8396. http://dx.doi.org/10.3390/s23208396.

Abstract:
During short baseline measurements in the Real-Time Kinematic Global Navigation Satellite System (GNSS-RTK), multipath error has a significant impact on the quality of observed data. Aiming at the characteristics of multipath error in GNSS-RTK measurements, a novel method that combines improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and adaptive wavelet packet threshold denoising (AWPTD) is proposed to reduce the effects of multipath error in GNSS-RTK measurements through modal function decomposition, effective coefficient sieving, and adaptive thresholding denoising. It first utilizes the ICEEMDAN algorithm to decompose the observed data into a series of intrinsic mode functions (IMFs). Then, a novel IMF selection method is designed based on information entropy to accurately locate the IMFs containing multipath error information. Finally, an optimized adaptive denoising method is applied to the selected IMFs to preserve the original signal characteristics to the maximum possible extent and improve the accuracy of the multipath error correction model. This study shows that the ICEEMDAN-AWPTD algorithm provides a multipath error correction model with higher accuracy compared to singular filtering algorithms based on the results of simulation data and GNSS-RTK data. After the multipath correction, the accuracy of the E, N, and U coordinates increased by 49.2%, 65.1%, and 56.6%, respectively.
10

Nowak, Damian, Rafał Adam Bachorz, and Marcin Hoffmann. "Neural Networks in the Design of Molecules with Affinity to Selected Protein Domains". International Journal of Molecular Sciences 24, no. 2 (January 16, 2023): 1762. http://dx.doi.org/10.3390/ijms24021762.

Abstract:
Drug design with machine learning support can speed up new drug discoveries. While current databases of known compounds are smaller in magnitude (approximately 10⁸ entries), the number of small drug-like molecules is estimated to be between 10²³ and 10⁶⁰. The use of molecular docking algorithms can help in new drug development by sieving out the worst drug–receptor complexes. New chemical spaces can be efficiently searched with the application of artificial intelligence, and new structures can be proposed from them. The proposed research aims to create new chemical structures, supported by a deep neural network, that possess an affinity to the selected protein domains. Transferring chemical structures into SELFIES codes helped us pass chemical information to a neural network. On the basis of vectorized SELFIES, new chemical structures can be created. With the use of the created neural network, novel compounds that are chemically sensible can be generated. Newly created chemical structures are sieved by the quantitative estimation of the drug-likeness descriptor, Lipinski's rule of 5, and the synthetic Bayesian accessibility classifier score. The affinity to selected protein domains was verified with the use of the AutoDock tool. As a result, we obtained structures that possess an affinity to the selected protein domains, namely PDB IDs 7NPC, 7NP5, and 7KXD.
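The rule-of-5 "sieving" step mentioned above can be illustrated with a minimal filter. The four criteria and thresholds follow Lipinski's published rule (at most one violation allowed), but the dictionary keys and the candidate molecules are invented here for illustration:

```python
def passes_lipinski(mol):
    """Lipinski's rule of 5: a compound is considered drug-like if it
    violates at most one of the four criteria below."""
    violations = sum([
        mol["mol_weight"] > 500,    # molecular weight <= 500 Da
        mol["logp"] > 5,            # octanol-water partition coefficient <= 5
        mol["h_donors"] > 5,        # <= 5 hydrogen-bond donors
        mol["h_acceptors"] > 10,    # <= 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Illustrative candidates: the first satisfies every criterion,
# the second violates all four and is sieved out.
candidates = [
    {"mol_weight": 320.0, "logp": 2.1, "h_donors": 2, "h_acceptors": 5},
    {"mol_weight": 780.0, "logp": 6.3, "h_donors": 6, "h_acceptors": 12},
]
drug_like = [m for m in candidates if passes_lipinski(m)]
```

In practice these descriptors would be computed from the generated structures by a cheminformatics toolkit rather than supplied by hand.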
11

Jiao, Lin, Yongqiang Li, and Yonglin Hao. "A Guess-And-Determine Attack On SNOW-V Stream Cipher". Computer Journal 63, no. 12 (March 6, 2020): 1789–812. http://dx.doi.org/10.1093/comjnl/bxaa003.

Abstract:
The 5G mobile communication system is coming with a main objective, known also as IMT-2020, that intends to increase the current data rates up to several gigabits per second. To meet the accompanying demand for super-high-speed encryption, the EIA and EEA algorithms face some challenges. The 3GPP standardization organization expects to increase the security level to a 256-bit key length, and the international cryptographic community has responded actively with cipher designs and standard applications. SNOW-V is such a proposal, offered by the SNOW family design team as a revision of the SNOW 3G architecture in terms of its linear feedback shift register (LFSR) and finite state machine (FSM). The LFSR part is new and operates at eight times the speed of the FSM, consisting of two shift registers, each feeding into the other; the FSM grows to three 128-bit registers and employs two instances of the full AES encryption round function for its update. The cipher takes a 128-bit IV, employs an 896-bit internal state, and produces 128-bit keystream blocks. The result is competitive in a pure software environment, making use of both AES-NI and AVX acceleration instructions. The security evaluation of SNOW-V is therefore essential and urgent, since there is scarcely any definite security bound for it. In this paper, we propose a byte-based guess-and-determine attack on SNOW-V with complexity $2^{406}$ using only seven keystream blocks. We first improve the heuristic guessing-path auto-searching algorithm based on dynamic programming by adding an initial guessing set, which is iteratively modified by sieving out unnecessary guessing variables, in order to correct the guessing path according to the cipher structure and finally launch a smaller guessing basis. For the specific design, we split all the computing units into bytes and rewrite all the internal operations correspondingly. We establish a backward-clock linear equation system according to the circular construction of the LFSR part. Then we further simplify the equations to adapt them to the input requirements of the heuristic guessing-path auto-searching algorithm. Finally, the derived guessing path needs modification for the pre-simplification and post-reduction. This is the first complete guess-and-determine attack on SNOW-V, as well as the first specific security evaluation of the full cipher.
12

Zhang, Bin, Jinke Gong, Wenhua Yuan, Jun Fu, and Yi Huang. "Intelligent Prediction of Sieving Efficiency in Vibrating Screens". Shock and Vibration 2016 (2016): 1–7. http://dx.doi.org/10.1155/2016/9175417.

Abstract:
In order to effectively predict the sieving efficiency of a vibrating screen, experiments investigating sieving efficiency were carried out. The relation between sieving efficiency and other working parameters of a vibrating screen, such as mesh aperture size, screen length, inclination angle, vibration amplitude, and vibration frequency, was analyzed. Based on the experiments, a least squares support vector machine (LS-SVM) was established to predict the sieving efficiency, and an adaptive genetic algorithm and a cross-validation algorithm were used to optimize the parameters of the LS-SVM. On the test points, the prediction performance of the least squares support vector machine is better than that of the existing formula and of a neural network, and its average relative error is only 4.2%.
13

Dadush, Daniel. "A Randomized Sieving Algorithm for Approximate Integer Programming". Algorithmica 70, no. 2 (September 20, 2013): 208–44. http://dx.doi.org/10.1007/s00453-013-9834-8.
14

Flanders, Harley, and Alan F. Tomala. "Algorithm of the Bi-Month: Sieving Primes on a Micro". College Mathematics Journal 19, no. 4 (September 1988): 364. http://dx.doi.org/10.2307/2686472.
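The prime sieve behind this entry's title is the classic sieve of Eratosthenes, which can be sketched in a few lines. This is a generic sketch, not the article's microcomputer implementation:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes <= limit by crossing off multiples of each prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Start at p*p: smaller multiples were crossed off by smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

primes_up_to_30 = sieve_of_eratosthenes(30)
```

The crossing-off loop runs in O(n log log n) time overall, which is why sieving remains the standard way to enumerate small primes.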
15

Fekpe, E. S. K., and A. M. Clayton. "Vehicle classification from weigh-in-motion data: the progressive sieving algorithm". Canadian Journal of Civil Engineering 21, no. 2 (April 1, 1994): 195–206. http://dx.doi.org/10.1139/l94-023.

Abstract:
A classification procedure has been developed, based on the concept of progressively sieving weigh-in-motion data through a series of criteria designed to separate, identify, and classify vehicles by their axle arrangement. The procedure is demonstrated to be feasible and capable of identifying all major types of vehicles quite accurately. Together with a nomenclature, the procedure provides a classification system that is open-ended, with an unlimited number of classes. The system is universal and could be used by any jurisdiction. It facilitates evaluation of vehicle weight and dimension policy alternatives and their impacts on the highway system. The procedure also has the capability of translating data from one classification system into another. Key words: classification, sieving, axle spacing, vehicle configuration.
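The progressive-sieving idea (passing each record through successively finer criteria until it is classified) can be sketched as a rule cascade. The axle-count rules, thresholds, and vehicle labels below are invented for illustration and are not the authors' actual criteria:

```python
def classify_vehicle(axle_spacings_m):
    """Progressively sieve a weigh-in-motion record through coarse-to-fine
    criteria based on its axle arrangement (illustrative thresholds).
    axle_spacings_m: spacings in metres between consecutive axles."""
    n_axles = len(axle_spacings_m) + 1
    # Sieve 1: separate by axle count.
    if n_axles == 2:
        # Sieve 2: a short wheelbase suggests a passenger car.
        return "car" if axle_spacings_m[0] < 3.5 else "two-axle truck or bus"
    if n_axles == 3:
        return "three-axle truck"
    # Sieve 3: closely spaced trailing axles indicate a tandem/tridem group.
    if min(axle_spacings_m) < 2.0:
        return "multi-axle combination with axle group"
    return "multi-axle combination"

label = classify_vehicle([2.7])  # a single 2.7 m axle spacing
```

Each sieve only has to distinguish among the records that passed the coarser sieves before it, which is what keeps such a cascade open-ended: new classes are added by appending finer criteria.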
16

Zajac, Pavol. "On the use of the lattice sieve in the 3D NFS". Tatra Mountains Mathematical Publications 45, no. 1 (December 1, 2010): 161–72. http://dx.doi.org/10.2478/v10127-010-0012-y.

Abstract:
An adaptation of the Number Field Sieve (NFS) algorithm to solve a discrete logarithm problem in degree 6 finite fields (DLP6) requires a modified sieving procedure to find smooth elements of the three-dimensional sieve space. In our successful solution [P. Zajac: Discrete Logarithms and Degree Six Number Field Sieve: A Practical Approach. VDM Verlag Dr. Müller, Saarbrücken, 2009] we used a modified line sieving to process a box-shaped region using a large factor base. In this contribution, we compare those results with an alternative approach based on lattice sieving, which was used in most of the classical factorization and DLP record solutions. The results indicate that this approach does not scale to the 3D case, making DLP6 more difficult in practice than comparable classical DLP cases.
17

Li, Meng, Xu Wang, Hao Yao, Henrik Saxén, and Yaowei Yu. "Analysis of Particle Size Distribution of Coke on Blast Furnace Belt Using Object Detection". Processes 10, no. 10 (September 20, 2022): 1902. http://dx.doi.org/10.3390/pr10101902.

Abstract:
Particle size distribution is an important parameter of metallurgical coke for use in blast furnaces. It is usually analyzed by traditional sieving methods, which cause delays and require maintenance. In this paper, a coke particle detection model was developed using a deep learning-based object detection algorithm (YOLOv3). The results were used to estimate the particle size distribution by a statistical method. Images of coke on the main conveyor belt of a blast furnace were acquired for model training and testing, and the particle size distribution determined by sieving was used for verification of the results. The experiment results show that the particle detection model is fast and has a high accuracy; the absolute error of the particle size distribution between the detection method and the sieving method was less than 5%. The detection method provides a new approach for fast analysis of particle size distributions from images and holds promise for a future online application in the plant.
18

Kilburn, Peter. "Discussion: Vehicle classification from weigh-in-motion data: the progressive sieving algorithm". Canadian Journal of Civil Engineering 22, no. 2 (April 1, 1995): 294. http://dx.doi.org/10.1139/l95-040.
19

Fekpe, E. S. K., and A. M. Clayton. "Reply: Vehicle classification from weigh-in-motion data: the progressive sieving algorithm". Canadian Journal of Civil Engineering 22, no. 2 (April 1, 1995): 294–95. http://dx.doi.org/10.1139/l95-041.
20

Incze, Arpad. "Pixel-sieve cryptographic primitives with LSB steganography". Global Journal of Information Technology: Emerging Technologies 7, no. 1 (June 27, 2017): 14–22. http://dx.doi.org/10.18844/gjit.v7i1.1934.

Abstract:
This paper contains a brief description of a new approach to LSB steganography. The novelty of the method resides in the combination of LSB (least significant bits) steganography with some primitives of the pixel-sieve/bit-sieve cryptographic method. In short, we propose to use two or more carrier images and the sieving algorithm, borrowed from the pixel-sieve primitive, to determine which carrier image will receive the next set of bits of the secret message. While in classic LSB steganography the secret message must be encrypted before the information is embedded into the carrier image, in our proposal the message is scrambled between the shares in a pseudo-random way. An attacker will need all the carrier images and the sieving key in order to reconstruct the original message. We also recommend an alternative method in which, instead of simply replacing the last bit/bits, we use them as XOR keys to further enhance security. Keywords: steganography, cryptography, secret sharing, visual cryptography, LSB.
21

Ermilov, Alexander A., Gergely Benkő, and Sándor Baranya. "Automated riverbed composition analysis using deep learning on underwater images". Earth Surface Dynamics 11, no. 6 (November 1, 2023): 1061–95. http://dx.doi.org/10.5194/esurf-11-1061-2023.

Abstract:
The sediment of alluvial riverbeds plays a significant role in river systems, both in engineering and in natural processes. However, the sediment composition can show high spatial and temporal heterogeneity, even on the river-reach scale, making it difficult to sample and assess representatively. Conventional sampling methods are inadequate and time-consuming for effectively capturing the variability of bed surface texture in these situations. In this study, we overcome this issue by adopting an image-based deep-learning (DL) algorithm. The algorithm was trained to recognise the main sediment classes in videos that were taken along cross sections underwater in the Danube. A total of 27 riverbed samples were collected and analysed for validation. The introduced DL-based method is fast, i.e. videos of 300–400 m long sections can be analysed within minutes with a continuous spatial sampling distribution (i.e. the whole riverbed along the path is mapped with images in ca. 0.3–1 m² overlapping windows). The quality of the trained algorithm was evaluated (i) mathematically, by dividing the annotated images into test and validation sets, and (ii) via intercomparison with other direct (sieving of physical samples) and indirect sampling methods (wavelet-based image processing of the riverbed images), focusing on the percentages of the detected sediment fractions. For the final evaluation, the sieving analysis of the collected physical samples was considered the ground truth. After correcting for samples affected by bed armouring, comparison of the DL approach with 14 physical samples yielded a mean classification error of 4.5 %. In addition, based upon visual evaluation of the footage, the spatial trend in the fraction changes was also well captured along the cross sections. Suggestions for performing proper field measurements are also given; furthermore, possibilities for combining the algorithm with other techniques are highlighted, briefly showcasing the multi-purpose nature of underwater videos for hydromorphological assessment.
22

Phillips, P. J., J. Huang, and S. M. Dunn. "Computational Micrograph Registration with Sieve Processes". Proceedings, annual meeting, Electron Microscopy Society of America 54 (August 11, 1996): 440–41. http://dx.doi.org/10.1017/s0424820100164660.

Abstract:
In this paper we present an efficient algorithm for automatically finding the correspondence between pairs of stereo micrographs, the key step in forming a stereo image. The computational burden in this problem lies in solving for the optimal mapping and transformation between the two micrographs. We present a sieve algorithm for efficiently estimating the transformation and correspondence. In a sieve algorithm, a sequence of stages gradually reduces the number of transformations and correspondences that need to be examined; the analogy is sieving through the set of mappings with gradually finer meshes until the answer is found. The set of sieves is derived from an image model, here a planar graph that encodes the spatial organization of the features. In the sieve algorithm, the graph represents the spatial arrangement of objects in the image. The algorithm for finding the correspondence restricts its attention to the graph, with the correspondence being found by a combination of graph matching, point-set matching, and geometric invariants.
23

Zhang, Li Ying, and Xue Jun Zhang. "Measurement of Proppant Geometrical Characteristics Based on Chain Code". Applied Mechanics and Materials 239-240 (December 2012): 708–12. http://dx.doi.org/10.4028/www.scientific.net/amm.239-240.708.

Abstract:
In order to evaluate the geometrical characteristics of proppant scientifically and accurately, both theoretical analysis and experimental methods were adopted. By binarizing the digital image of the particles and using chain-code technology to calibrate the region boundaries, obtaining the vertex chain code and the coordinates of the boundary points, an algorithm was derived for particle size, sphericity, and roundness, and the problem-solving process for the geometrical characteristics was confirmed by experiments. The results indicate that the measurement of geometrical characteristics, together with the algorithm and the analysis for sizing and averaging, provides a rationale for a particle pattern recognition and analysis system. The conclusions go beyond traditional sieving and contrast-plate measurement methods, laying a foundation for the application of computer image analysis technology.
24

Lu, Zhaolin, Xiaojuan Hu, and Yao Lu. "Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method". International Journal of Analytical Chemistry 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/5840690.

Abstract:
Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Considering the great variation of particle size from micrometers to millimeters, the powders greater than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using a scanning electron microscope (SEM) with high image resolution. Actual imaging tests confirmed the excellent effect of the backscattered electron (BSE) imaging mode of the SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, singulated arrangement and ultrasonic dispersion methods were used to separate powders into particles that were larger and smaller than the nominal size of 250 μm. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method is suitable for analysing the particle size and shape distributions of ground biomass materials and resolves the size inconsistencies of sieving analysis.
25

Becker, Anja, Nicolas Gama, and Antoine Joux. "A sieve algorithm based on overlattices". LMS Journal of Computation and Mathematics 17, A (2014): 49–70. http://dx.doi.org/10.1112/s1461157014000229.

Abstract:
In this paper, we present a heuristic algorithm for solving exact, as well as approximate, shortest vector and closest vector problems on lattices. The algorithm can be seen as a modified sieving algorithm for which the vectors of the intermediate sets lie in overlattices or translated cosets of overlattices. The key idea is hence no longer to work with a single lattice but to move the problems around in a tower of related lattices. We initiate the algorithm by sampling very short vectors in an overlattice of the original lattice that admits a quasi-orthonormal basis and hence an efficient enumeration of vectors of bounded norm. Taking sums of vectors in the sample, we construct short vectors in the next lattice. Finally, we obtain solution vector(s) in the initial lattice as a sum of vectors of an overlattice. The complexity analysis relies on the Gaussian heuristic. This heuristic is backed by experiments in low and high dimensions that closely reflect these estimates when solving hard lattice problems in the average case. This new approach allows us to solve not only shortest vector problems, but also closest vector problems, in lattices of dimension $n$ in time $2^{0.3774n}$ using memory $2^{0.2925n}$. Moreover, the algorithm is straightforward to parallelize on most computer architectures.
26

Chen, Ssu-Han, Jer-Huan Jang, Yu-Ru Chang, Chih-Hsiang Kang, Hung-Yi Chen, Kevin Fong-Rey Liu, Fong-Lin Lee, Yang-Shen Hsueh and Meng-Jey Youh. "An Automatic Foreign Matter Detection and Sorting System for PVC Powder". Applied Sciences 12, No. 12 (20.06.2022): 6276. http://dx.doi.org/10.3390/app12126276.

In the present study, an automatic defect detection system has been assembled and introduced for polyvinyl chloride (PVC) powder. The average diameter of PVC powder is approximately 100 μm. The system hardware includes a powder delivery device, a sieving device, a circular platform, an image capture device, and a recycling device. A defect detection algorithm based on YOLOv4 was developed using CSPDarkNet53 as the backbone for feature extraction, spatial pyramid pooling (SPP) and a path aggregation network (PAN) as the neck, and Yoloblock as the head. An auto-annotation algorithm was developed based on a digital image processing algorithm to save time in feature engineering. Several hyper-parameters were tuned to improve the efficiency of detection in the process of training YOLOv4. The Taguchi method was utilized to optimize the performance of detection, with the mean average precision (mAP) as the response. Results show that the optimized YOLOv4 has a test mAP of 0.9385, compared to 0.8653 and 0.7999 for naïve YOLOv4 and Faster RCNN, respectively. Additionally, with the optimized YOLOv4, there are no false alarms for images without any foreign matter.
27

Kaygusuz, Emre. "Design and analysis of vibratory screen for peanuts". Thermal Science 27, No. 4 Part B (2023): 3323–35. http://dx.doi.org/10.2298/tsci2304323k.

The low-cost cleaning and classification of granular agricultural products in a vibrating environment is possible via mathematical modeling of the movement of a grain on the surface of the screen. In this study, the design and analysis of a vibratory screen used for cleaning and classifying peanuts has been realized. To this end, a mathematical model of the process was first developed based on the analysis of a grain translating on a vibrating surface. A 6-bar mechanism was selected as the driving system and a kinematical analysis was performed to obtain the basic inputs of the vibratory screen. Based on the mathematical model, a design algorithm has been formed by which design and operating parameters are selected so as to satisfy the necessary conditions for effective sieving, saving both money and time. The algorithm has been demonstrated on a numerical example and shown to be suitable for sieving peanuts.
28

Li, Yeming, Yidan Xia, Dailiang Xie, Ya Xu, Zhipeng Xu and Yuebing Wang. "Application of artificial bee colony algorithm for particle size distribution measurement of suspended sediment based on focused ultrasonic sensor". Transactions of the Institute of Measurement and Control 43, No. 7 (11.02.2021): 1680–90. http://dx.doi.org/10.1177/0142331221989115.

A new focused ultrasonic sensor is proposed, based on which a measurement system for the particle size distribution of suspended sediment is established. Compared with traditional ultrasonic sensors, the one used in this paper is equipped with a piezoelectric transducer (PZT) on an arc-shaped shell to concentrate ultrasonic beams on one measurement point. The sensor is used to measure the particle size distribution of suspended sediment. The experiments were carried out on water-sediment mixtures with different particle size distributions. Due to the multiple parameters and non-linearity of the ultrasonic attenuation model, the artificial bee colony (ABC) inversion algorithm is used to estimate the particle size distribution, thus improving measurement accuracy. The particle sizes obtained by the sieving method serve as reference values. The results indicate that whether the suspended particles follow a unimodal, uniform or random distribution, the particle size distribution obtained by the ABC inversion algorithm is consistent with the result obtained by the sieve method. The results demonstrate that the method has good utility and accuracy within the low concentration range.
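The artificial bee colony heuristic used for the inversion can be sketched generically. The sketch below minimizes a toy objective rather than the paper's ultrasonic attenuation model (which is not reproduced here); the population size, abandonment limit and iteration count are arbitrary assumptions:

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=1):
    """Minimal artificial bee colony: employed, onlooker and scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbor(i):
        """Perturb one coordinate of food i toward/away from a random other food."""
        k = rng.randrange(n_food - 1)
        if k >= i:
            k += 1
        j = rng.randrange(dim)
        x = foods[i][:]
        x[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        x[j] = min(max(x[j], lo), hi)
        return x

    def try_update(i):
        x = neighbor(i)
        fx = f(x)
        if fx < fit[i]:                       # greedy acceptance
            foods[i], fit[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):               # employed bees
            try_update(i)
        total = sum(1.0 / (1.0 + v) for v in fit)
        for _ in range(n_food):               # onlookers: fitness-proportional choice
            r, acc, i = rng.uniform(0, total), 0.0, 0
            for i, v in enumerate(fit):
                acc += 1.0 / (1.0 + v)
                if acc >= r:
                    break
            try_update(i)
        for i in range(n_food):               # scouts: abandon exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i] = f(foods[i])
                trials[i] = 0
    best = min(range(n_food), key=lambda i: fit[i])
    return foods[best], fit[best]

sphere = lambda x: sum(v * v for v in x)      # toy objective in place of the model
x, fx = abc_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
print(x, fx)
```

In the paper, this kind of search would wrap the forward attenuation model, with the measured spectra supplying the objective to be inverted.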
29

Jiang, Zi-Long, and Chen-Hui Jin. "Multiple Impossible Differentials Cryptanalysis on 7-Round ARIA-192". Security and Communication Networks 2018 (2018): 1–11. http://dx.doi.org/10.1155/2018/7453572.

This paper studies the security of 7-round ARIA-192 against multiple impossible differentials cryptanalysis. We propose six special 4-round impossible differentials which have the same input difference and different output difference with the maximum number of nonzero common bytes. Based on these differentials, we construct six attack trails including the maximum number of common subkey bytes. Under such circumstances, we utilize an efficient sieving process to improve the efficiency of eliminating common subkeys; therefore, both data and time complexities are reduced. Furthermore, we also present an efficient algorithm to recover the master key via guess-and-determine technique. Taking advantage of the above advances, we have obtained the best result so far for impossible differential cryptanalysis of ARIA-192, with time, data, and memory complexities being $2^{189.8}$ 7-round ARIA encryptions, $2^{116.6}$ chosen plaintexts, and $2^{139.3}$ bytes, respectively.
30

D’Addese, Gianluca, Martina Casari, Roberto Serra and Marco Villani. "A Fast and Effective Method to Identify Relevant Sets of Variables in Complex Systems". Mathematics 9, No. 9 (30.04.2021): 1022. http://dx.doi.org/10.3390/math9091022.

In many complex systems one observes the formation of medium-level structures, whose detection could allow a high-level description of the dynamical organization of the system itself, and thus a better understanding of it. We have developed in the past a powerful method to achieve this goal, which however requires a heavy computational cost in several real-world cases. In this work we introduce a modified version of our approach, which reduces the computational burden. The design of the new algorithm allowed the realization of an original suite of methods able to work simultaneously at the micro level (that of the binary relationships of the single variables) and at the meso level (the identification of dynamically relevant groups). We apply this suite to a particularly relevant case, in which we look for the dynamic organization of a gene regulatory network when it is subject to knock-outs. The approach combines information theory, graph analysis, and an iterated sieving algorithm in order to describe rather complex situations. Its application allowed us to derive some general observations on the dynamical organization of gene regulatory networks, and to observe interesting characteristics in an experimental case.
31

Gray, J. M. N. T., and A. R. Thornton. "A theory for particle size segregation in shallow granular free-surface flows". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 461, No. 2057 (26.04.2005): 1447–73. http://dx.doi.org/10.1098/rspa.2004.1420.

Abstract Granular materials composed of a mixture of grain sizes are notoriously prone to segregation during shaking or transport. In this paper, a binary mixture theory is used to formulate a model for kinetic sieving of large and small particles in thin, rapidly flowing avalanches, which occur in many industrial and geophysical free-surface flows. The model is based on a simple percolation idea, in which the small particles preferentially fall into underlying void space and lever large particles upwards. Exact steady-state solutions have been constructed for general steady uniform velocity fields, as well as time-dependent solutions for plug-flow, that exploit the decoupling of material columns in the avalanche. All the solutions indicate the development of concentration shocks, which are frequently observed in experiments. A shock-capturing numerical algorithm is formulated to solve general problems and is used to investigate segregation in flows with weak shear.
32

Liu, Tao, Xingchen Lv and Min Jin. "Research on Image Measurement Method of Flat Parts Based on the Adaptive Chord Inclination Angle Algorithm". Applied Sciences 13, No. 3 (27.01.2023): 1641. http://dx.doi.org/10.3390/app13031641.

To accurately measure the critical dimensions of flat parts using machine vision, an inspection method based on the adaptive chord inclination angle progressively screening the segmentation points of graphic elements was proposed in this study. The method doubled the size of the part image using bicubic interpolation, extracted the single-pixel contour with more detailed information, and designed an adaptive step size to obtain the front and back chord inclination angles of the contour. The method of complementing the front and back chord inclination angles was employed to avoid the negative effects of contour jaggedness, thereby obtaining the contour segmentation points after the initial screening. The segmentation points obtained in the initial sieving were divided into different point clusters according to the distance, and the contour, which was segmented by two segmentation points in different point clusters, was fitted using the least squares. The fitting results were evaluated, and all the fitting results were selected using the improved non-maximum suppression (NMS) algorithm to obtain the precisely selected segmentation points of the graphic elements. Consequently, the segmented individual graphic elements were fitted with the segmentation points as constraints to obtain the key dimensions of the closed part. The developed method could accurately find the contour segmentation points, and the relative error was less than 0.6%.
33

D’Addese, Gianluca, Laura Sani, Luca La Rocca, Roberto Serra and Marco Villani. "Asymptotic Information-Theoretic Detection of Dynamical Organization in Complex Systems". Entropy 23, No. 4 (27.03.2021): 398. http://dx.doi.org/10.3390/e23040398.

The identification of emergent structures in complex dynamical systems is a formidable challenge. We propose a computationally efficient methodology to address such a challenge, based on modeling the state of the system as a set of random variables. Specifically, we present a sieving algorithm to navigate the huge space of all subsets of variables and compare them in terms of a simple index that can be computed without resorting to simulations. We obtain such a simple index by studying the asymptotic distribution of an information-theoretic measure of coordination among variables, when there is no coordination at all, which allows us to fairly compare subsets of variables having different cardinalities. We show that increasing the number of observations allows the identification of larger and larger subsets. As an example of relevant application, we make use of a paradigmatic case regarding the identification of groups in autocatalytic sets of reactions, a chemical situation related to the origin of life problem.
34

Notué Kadjie, Arnaud, and E. B. Tchawou Tchuisseu. "A Multifunction Robot Based on the Slider-Crank Mechanism: Dynamics and Optimal Configuration for Energy Harvesting". International Journal of Robotics and Control Systems 1, No. 3 (10.08.2021): 269–84. http://dx.doi.org/10.31763/ijrcs.v1i3.408.

An electromechanical robot based on the modified slider-crank mechanism with a damped spring hung at its plate terminal is investigated. The robot is first used for actuation and thereafter for energy harvesting purposes. Mathematical modeling of both cases is proposed. As an actuator, the robot is powered by a DC motor, and the effect of the voltage supply on the whole system dynamics is examined. Numerical simulation based on the fourth-order Runge-Kutta algorithm shows various dynamics of the subsystems, including periodicity, multi-periodicity, and chaos, as depicted by the bifurcation diagrams. Applications can be found in industrial processes like sieving, shaking, cutting, pushing, crushing, or grinding. Regarding the robot functioning as an energy harvester, two different configurations of the electrical circuit, with a single and a double loop, are set up. The challenge is to determine the best configuration for high performance of the harvester. Theoretical predictions and experimental data show that the efficiency of the robot depends on the range of the electrical load resistance RL: the double-loop circuit is preferable for low values (RL < 50 Ohm), while the single loop is convenient for high values (RL ≥ 50 Ohm).
35

Villani, Marco, Laura Sani, Riccardo Pecori, Michele Amoretti, Andrea Roli, Monica Mordonini, Roberto Serra and Stefano Cagnoni. "An Iterative Information-Theoretic Approach to the Detection of Structures in Complex Systems". Complexity 2018 (11.11.2018): 1–15. http://dx.doi.org/10.1155/2018/3687839.

Systems that exhibit complex behaviours often contain inherent dynamical structures which evolve over time in a coordinated way. In this paper, we present a methodology based on the Relevance Index method aimed at revealing the dynamical structures hidden in complex systems. The method iterates two basic steps: detection of relevant variable sets based on the computation of the Relevance Index, and application of a sieving algorithm, which refines the results. This approach is able to highlight the organization of a complex system into sets of variables, which interact with one another at different hierarchical levels, detected, in turn, in the different iterations of the sieve. The method can be applied directly to systems composed of a small number of variables, whereas it requires the help of a custom metaheuristic in case of systems with larger dimensions. We have evaluated the potential of the method by applying it to three case studies: synthetic data generated by a nonlinear stochastic dynamical system, a small-sized and well-known system modelling a catalytic reaction, and a larger one, which describes the interactions within a social community, that requires the use of the metaheuristic. The experiments we made to validate the method produced interesting results, effectively uncovering hidden details of the systems to which it was applied.
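The sieving step described above — ranking candidate variable sets by an index and pruning the rest — can be illustrated with a greedy sketch. The overlap-based pruning rule and the example scores below are simplifying assumptions for illustration, not the authors' exact Relevance-Index criterion:

```python
def sieve(candidates):
    """Greedy sieve: repeatedly keep the highest-scoring variable set and
    drop any remaining candidate that shares variables with it."""
    kept = []
    pool = sorted(candidates, key=lambda c: c[1], reverse=True)
    while pool:
        best_set, best_score = pool.pop(0)
        kept.append((best_set, best_score))
        # discard overlapping lower-ranked candidates (the "sieve")
        pool = [(s, sc) for s, sc in pool if not (s & best_set)]
    return kept

# hypothetical candidate sets with made-up relevance scores
cands = [
    (frozenset("AB"), 0.9),
    (frozenset("ABC"), 0.7),
    (frozenset("CD"), 0.8),
    (frozenset("EF"), 0.5),
]
result = sieve(cands)
print(result)
```

Iterating this procedure on the surviving variables, as the paper does, then exposes groupings at successively higher hierarchical levels.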
36

Pu, Hui, Yuhe Wang and Yinghui Li. "How CO2-Storage Mechanisms Are Different in Organic Shale: Characterization and Simulation Studies". SPE Journal 23, No. 03 (04.10.2017): 661–71. http://dx.doi.org/10.2118/180080-pa.

Widely distributed organic-rich shales are being considered as one of the important carbon-storage targets, owing to three differentiators compared with conventional reservoirs and saline aquifers: (1) trapping of a significant amount of carbon dioxide (CO2) permanently; (2) kerogen-rich shale's higher affinity for CO2; and (3) existing well and pipeline infrastructure, especially in the vicinity of existing power or chemical plants. The inability to model capillarity with consideration of the imperative pore-size-distribution (PSD) characteristics using commercial software may lead to inaccurate modeling of CO2 injection in organic shale. We develop a novel approach to examine how PSD would alter phase and flow behavior under nanopore confinement. We incorporate adsorption behavior with a local density-optimization algorithm designed for multicomponent interactions with adsorption sites for a full spectrum of reservoir pressures of interest. This feature removes the limitation of the Langmuir isotherm model, allowing us to understand the storage and sieving capabilities for a CO2/N2 flue-gas system with remaining reservoir fluids. Taking PSD data of Bakken shale, we perform a core-scale simulation study of CO2/N2 flue-gas injection and reveal the differences between CO2 injection/storage in organic shales and conventional rocks on the basis of numerical modeling.
37

Wu, Wen-Hwa, Chien-Chou Chen, Shang-Li Lin and Gwolong Lai. "A Real-Time Monitoring System for Cable Tension with Vibration Signals Based on an Automated Algorithm to Sieve Out Reliable Modal Frequencies". Structural Control and Health Monitoring 2023 (11.07.2023): 1–25. http://dx.doi.org/10.1155/2023/9343343.

Cables or suspenders are the critical force-transmitting components of cable-supported bridges, and their timely tension monitoring is consequently the most important issue in the corresponding structural health monitoring. However, very few works regarding the full automation of vibration-based tension estimation have been reported in the literature. To develop a monitoring system for cable tension based on real-time vibration signals, this research first employs an efficient stochastic subspace identification (SSI) method with tailored parameter selection to continuously identify the three frequencies of adjacent modes for the cables of Mao-Luo-Hsi Bridge. More importantly, an automated sieving algorithm is carefully established to obtain the stable modal frequencies by making the best of the specific modal frequency distribution for cables. The ratios between the frequency values identified from SSI analysis are exhaustively checked to systematically extract the qualified cable frequencies and decide their corresponding mode orders. The tension is finally computed with one available cable frequency according to the priority order predetermined by the statistics of the identification rate. Demonstrated by analyzing the vibration signals measured from the stay cables of Mao-Luo-Hsi Bridge in real time for two full years, the effectiveness and robustness of this real-time monitoring system have been extensively verified. The long-term success rates for the immediate determination of dependable tension are found to be perfect for 15 of the 18 investigated cables. As for the other three cables, their corresponding success rates are still higher than 99.99%, with very few cases of absent or false tension values.
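The final step the annotation mentions — computing tension from one identified modal frequency — commonly relies on the taut-string relation f_n = (n/2L)·sqrt(T/m). The sketch below assigns mode orders by a simple ratio check and inverts that relation; the cable parameters, tolerance and frequencies are hypothetical, and the paper's actual sieving rules are more elaborate:

```python
def assign_orders(freqs, tol=0.05):
    """Assign a mode order n to each frequency by comparing ratios to the
    (assumed) fundamental; keep only near-integer multiples."""
    f1 = min(freqs)
    orders = []
    for f in freqs:
        n = round(f / f1)
        if n >= 1 and abs(f / f1 - n) <= tol * n:
            orders.append((f, n))
    return orders

def taut_string_tension(freq, order, length, mass_per_len):
    """Invert f_n = (n / 2L) * sqrt(T / m) to T = 4 m L^2 (f_n / n)^2."""
    return 4.0 * mass_per_len * length ** 2 * (freq / order) ** 2

freqs = [1.10, 2.21, 3.29]        # hypothetical identified frequencies (Hz)
pairs = assign_orders(freqs)
tensions = [taut_string_tension(f, n, length=100.0, mass_per_len=50.0)
            for f, n in pairs]
print(pairs)
print(tensions)
```

Because every qualified frequency yields a consistent tension estimate, any one of them (chosen by a priority order, as in the paper) suffices once its mode order is known.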
38

Ogurtzov, A. V., Yu V. Khokhlova, V. E. Mizonov and V. A. Ogurtzov. "Identifying morphological characteristics of solid particles based on digital image analysis". Vestnik IGEU, No. 3 (30.06.2020): 64–70. http://dx.doi.org/10.17588/2072-2672.2020.3.064-070.

Enriched coal as an energy source has a number of undeniable environmental and economic advantages and has a higher calorific value. Important characteristics that affect the technological process of coal flotation are the particle size distribution, the shape of the coal particles and their surface texture. Screen analysis is the most common method for determining the granulometric composition of granular media. However, it is not sensitive to the shape of the particles and the nature of their surface. With the sieve method of analysis, there is no direct measurement of any axis of the particle, except when the particle is ball-shaped; in this case, its size coincides with the edge length of the square mesh of the sieve. Thus, two particles of completely different shapes can pass through the same sieve opening. Our task was to develop a simple technique that allows identifying the shape and texture parameters of particles of a bulk material. To solve this problem, an algorithm implemented in MATLAB (Image Processing Toolbox) and the concept of fractals are used. A method for estimating the shape and texture of bulk material particles based on digital image processing has been proposed. Particle size distribution curves obtained by the sieving method and by the image processing method are constructed, and it was found that these curves are in good agreement with each other. Digital image processing is an alternative way of identifying the important quality characteristics of bulk materials.
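A minimal digital analogue of the image-based sizing described above labels connected particles in a binary image and reports equivalent circle diameters. This is a generic sketch in plain Python, not the authors' MATLAB/fractal pipeline, and the tiny test image is made up:

```python
import math
from collections import deque

def label_particles(img):
    """4-connected component labelling of a binary image (lists of 0/1);
    returns the pixel area of each particle found."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                area, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:                      # breadth-first flood fill
                    cy, cx = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return areas

def equivalent_diameters(areas, pixel_size=1.0):
    """Diameter of the circle with the same projected area as each particle."""
    return [2.0 * pixel_size * math.sqrt(a / math.pi) for a in areas]

img = [                                       # hypothetical 5x4 binary image
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
areas = label_particles(img)
print(sorted(areas), equivalent_diameters(sorted(areas)))
```

Shape descriptors (aspect ratio, fractal dimension of the boundary) would be computed per labelled region in the same pass.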
39

Huang, Haohang, Maziar Moaveni, Scott Schmidt, Erol Tutumluer and John M. Hart. "Evaluation of Railway Ballast Permeability Using Machine Vision–Based Degradation Analysis". Transportation Research Record: Journal of the Transportation Research Board 2672, No. 10 (01.08.2018): 62–73. http://dx.doi.org/10.1177/0361198118790849.

Railway ballast degrades progressively as a result of accumulated traffic primarily through abrasion and particle breakage. Degraded ballast may cause reduced lateral and longitudinal stability, ineffective drainage, and excessive settlement of track structures, all of which would adversely affect the performance of ballasted track. Traditional methods of ballast degradation assessment involve time-consuming field sampling and laboratory sieve analysis; moreover, determining the level of track performance deterioration at which ballast maintenance is best considered still remains challenging. This paper investigates the permeability of railway ballast through laboratory testing and provides insight into its field drainage capacity under degraded condition using an innovative approach of field imaging. Constant head permeability tests were conducted on clean and degraded ballast samples which indicated nonlinear power-curve trends, especially for clean ballast, of unit flow amount with its hydraulic gradient. Imaging-based degradation analysis using machine vision technology was also performed on clean and degraded in-service ballast to correlate Fouling Index (FI) from laboratory sieving with Percent Degraded Segments (PDS) obtained from the recently developed image segmentation algorithm. Accordingly, a new Permeability Index (PI) is introduced in this paper to define ballast permeability in the form of a bilinear model developed from the machine vision–based ballast degradation analysis. Based on the findings of this study, a two-stage ballast cleaning process for determining the timeframe of ballasted track maintenance considering its drainage capacity is proposed.
40

Biney, James Kobina Mensah, Luboš Borůvka, Prince Chapman Agyeman, Karel Němeček and Aleš Klement. "Comparison of Field and Laboratory Wet Soil Spectra in the Vis-NIR Range for Soil Organic Carbon Prediction in the Absence of Laboratory Dry Measurements". Remote Sensing 12, No. 18 (20.09.2020): 3082. http://dx.doi.org/10.3390/rs12183082.

Spectroscopy has demonstrated the ability to predict specific soil properties. Consequently, it is a promising avenue to complement the traditional methods that are costly and time-consuming. In the visible-near infrared (Vis-NIR) region, spectroscopy has been widely used for the rapid determination of organic components, especially soil organic carbon (SOC), using laboratory dry (lab-dry) measurement. However, steps such as collecting, grinding, sieving and drying the soil at ambient (room) temperature and humidity for several days, which is a vital process, make the lab-dry preparation slow compared to the field or laboratory wet (lab-wet) measurement. The use of soil spectra measured directly in the field or on a wet sample remains challenging due to uncontrolled soil moisture variations and other environmental conditions. However, for direct and timely prediction and mapping of soil properties, especially SOC, the field or lab-wet measurement could be an option in place of the lab-dry measurement. This study focuses on comparison of field and naturally acquired laboratory measurement of wet samples in the Visible (VIS), Near-Infrared (NIR) and Vis-NIR ranges using several pretreatment approaches including orthogonal signal correction (OSC). The comparison was concluded with the development of validation models for SOC prediction based on partial least squares regression (PLSR) and support vector machine regression (SVMR). Nonetheless, for the OSC implementation, we use principal component regression (PCR) together with PLSR, as SVMR is not appropriate under OSC. For SOC prediction, the field measurement was better in the VIS range with R²cv = 0.47 and RMSEPcv = 0.24, while in the Vis-NIR range the lab-wet measurement was better with R²cv = 0.44 and RMSEPcv = 0.25, both using the SVMR algorithm. However, the prediction accuracy improves with the introduction of OSC on both samples. The highest prediction was obtained with the lab-wet dataset (using PLSR) in the NIR and Vis-NIR ranges with R²cv = 0.54/0.55 and RMSEPcv = 0.24. This result indicates that the field and, in particular, lab-wet measurements, which are not commonly used, can also be useful for SOC prediction, just as the lab-dry method, with some adjustments.
41

Xu, Luyao, Zhengyi Dai, Baofeng Wu and Dongdai Lin. "Improved Attacks on (EC)DSA with Nonce Leakage by Lattice Sieving with Predicate". IACR Transactions on Cryptographic Hardware and Embedded Systems, 06.03.2023, 568–86. http://dx.doi.org/10.46586/tches.v2023.i2.568-586.

Lattice reduction algorithms have been proved to be one of the most powerful and versatile tools in public key cryptanalysis. In this work, we primarily concentrate on lattice attacks against (EC)DSA with nonce leakage via some side-channel analysis. Previous works relying on lattice reduction algorithms such as LLL and BKZ will finally lead to the "lattice barrier": lattice algorithms become infeasible when fewer nonces are known. Recently, Albrecht and Heninger introduced lattice algorithms augmented with a predicate and broke the lattice barrier (Eurocrypt 2021). We improve their work in several aspects. We first propose a more efficient predicate algorithm which aims to search for the target lattice vector in a large database. Then, we combine the sieving-with-predicate algorithm with the "dimensions for free" and "progressive sieving" techniques to further improve the performance of our attacks. Furthermore, we give a theoretical analysis of how to choose the optimal Kannan embedding factor. As a result, our algorithm outperforms the state-of-the-art lattice attacks for existing records such as 3-bit nonce leakage for a 256-bit curve and 2-bit nonce leakage for a 160-bit curve in terms of running time, sample numbers and success probability. We also break the lattice records on the 384-bit curve with 3-bit nonce leakage and the 256-bit curve with 2-bit nonce leakage, which were thought infeasible previously. Finally, we give the first lattice attack against ECDSA with a single-bit nonce leakage, which enables us to break a 112-bit curve with 1-bit nonce leakage in practical time.
42

Doulgerakis, Emmanouil, Thijs Laarhoven and Benne de Weger. "The irreducible vectors of a lattice:". Designs, Codes and Cryptography, 18.10.2022. http://dx.doi.org/10.1007/s10623-022-01119-y.

The main idea behind lattice sieving algorithms is to reduce a sufficiently large number of lattice vectors with each other so that a set of short enough vectors is obtained. It is therefore natural to study vectors which cannot be reduced. In this work we give a concrete definition of an irreducible vector and study the properties of the set of all such vectors. We show that the set of irreducible vectors is a subset of the set of Voronoi relevant vectors and study its properties. For extremal lattices this set may contain as many as $2^n$ vectors, which leads us to define the notion of a complete system of irreducible vectors, whose size can be upper-bounded by the kissing number. One of our main results shows that modified heuristic sieving algorithms heuristically approximate such a set (modulo sign). We provide experiments in low dimensions which support this theory. Finally we give some applications of this set in the study of lattice problems such as SVP, SIVP and CVPP. The introduced notions, as well as various results derived along the way, may provide further insights into lattice algorithms and motivate new research into understanding these algorithms better.
43

Zhu, Jianying, Qi Zhang, Hui Zhang, Zuoqiang Shi, Mingxu Hu and Chenglong Bao. "A minority of final stacks yields superior amplitude in single-particle cryo-EM". Nature Communications 14, No. 1 (10.12.2023). http://dx.doi.org/10.1038/s41467-023-43555-x.

Cryogenic electron microscopy (cryo-EM) is widely used to determine near-atomic resolution structures of biological macromolecules. Due to the low signal-to-noise ratio, cryo-EM relies on averaging many images. However, a crucial question in the field of cryo-EM remains unanswered: how close can we get to the minimum number of particles required to reach a specific resolution in practice? The absence of an answer to this question has impeded progress in understanding sample behavior and the performance of sample preparation methods. To address this issue, we develop an iterative particle sorting and/or sieving method called CryoSieve. Extensive experiments demonstrate that CryoSieve outperforms other cryo-EM particle sorting algorithms, revealing that most particles are unnecessary in final stacks. The minority of particles remaining in the final stacks yield superior high-resolution amplitude in reconstructed density maps. For some datasets, the size of the finest subset approaches the theoretical limit.
44

Klausner, Martina. „Calculating Therapeutic Compliance“. Science & Technology Studies, 21.09.2018, 30–51. http://dx.doi.org/10.23987/sts.66375.

This article discusses calculation practices in the development of a monitoring device aimed at improving the therapeutic compliance of children and teenagers suffering from a deformation of the spine. In managing the complexities of physical parameters, therapeutic measures and interventions in everyday life, numbers are central participants in inferring from and interfering into bodies and behaviours. Numbers are input and output of such monitoring systems, translating, circulating and visualizing body conditions and therapeutic effects, and suggesting action. At the core of this generative process of capturing and interpreting data are algorithms as common participants in managing the complexities of vast amounts of data and providing the basis for interference in people's lives. They process data and provide seemingly unambiguous numerical outcomes, based on mathematical-technological processing of information. Attending to the incremental process of "learning algorithms" as a central part of the system's development allows me to describe the robustness of certain modes of inference. Beyond using the specific case as an exemplar of computer-based numerical inference and interference, the article is an attempt to probe and complement two theoretical approaches to the numerical management of complexity: Helen Verran's focus on numbers' performative properties and the potential tensions arising from divergent numerical orderings; and Paul Kockelman's sieving of inferential and indexical chains along the generation of meaning and ontological transformativities.
APA, Harvard, Vancouver, ISO and other citation styles
45

Wu, Can, Ying Cui, Donghui Li and Defeng Sun. “Convex and Nonconvex Risk-Based Linear Regression at Scale”. INFORMS Journal on Computing, 11.04.2023. http://dx.doi.org/10.1287/ijoc.2023.1282.

Full text of the source
Annotation:
The value at risk (VaR) and the conditional value at risk (CVaR) are two popular risk measures to hedge against the uncertainty of data. In this paper, we provide a computational toolbox for solving high-dimensional sparse linear regression problems under either VaR or CVaR measures, the former being nonconvex and the latter convex. Unlike the empirical risk (neutral) minimization models in which the overall losses are decomposable across data, the aforementioned risk-sensitive models have nonseparable objective functions so that typical first order algorithms are not easy to scale. We address this scaling issue by adopting a semismooth Newton-based proximal augmented Lagrangian method for the convex CVaR linear regression problem. The matrix structures of the Newton systems are carefully explored to reduce the computational cost per iteration. The method is further embedded in a majorization–minimization algorithm as a subroutine to tackle the nonconvex VaR-based regression problem. We also discuss an adaptive sieving strategy to iteratively guess and adjust the effective problem dimension, which is particularly useful when a solution path associated with a sequence of tuning parameters is needed. Extensive numerical experiments on both synthetic and real data demonstrate the effectiveness of our proposed methods. In particular, they are about 53 times faster than the commercial package Gurobi for the CVaR-based sparse linear regression with 4,265,669 features and 16,087 observations. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms–Continuous. Funding: This work was supported in part by the NSF, the Division of Computing and Communication Foundations [Grant 2153352], the National Natural Science Foundation of China [Grant 12271187], and the Hong Kong Research Grant Council [Grant 15304019].
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information ( https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1282 ) as well as from the IJOC GitHub software repository ( https://github.com/INFORMSJoC/2022.0012 ) at ( http://dx.doi.org/10.5281/zenodo.7483279 ).
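The nonseparability the abstract points to is easy to see in the empirical form of CVaR: the average of the worst (1 − α) fraction of losses, where changing a single observation can change which observations enter the tail average. A minimal sketch of this empirical quantity (the function name and discretization are illustrative conventions, not the authors' implementation):

```python
import numpy as np

def empirical_cvar(losses, alpha=0.9):
    """Empirical CVaR at level alpha: the mean of the worst
    ceil((1 - alpha) * n) losses in a sample of size n."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))
    return losses[-k:].mean()

# With alpha = 0.8 and ten losses, the two largest are averaged:
# empirical_cvar([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], alpha=0.8) -> 9.5
```

Because the tail set depends on the whole sample, the objective does not decompose across observations, which is the scaling obstacle for plain first-order methods that the paper addresses.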
APA, Harvard, Vancouver, ISO and other citation styles
46

Hittmeir, Markus. “Smooth Subsum Search: A Heuristic for Practical Integer Factorization”. International Journal of Foundations of Computer Science, 29.12.2023, 1–26. http://dx.doi.org/10.1142/s0129054123500296.

Full text of the source
Annotation:
The two currently fastest general-purpose integer factorization algorithms are the Quadratic Sieve and the Number Field Sieve. Both techniques are used to find so-called smooth values of certain polynomials, i.e., values that factor completely over a set of small primes (the factor base). As the names of the methods suggest, a sieving procedure is used for the task of quickly identifying smooth values among the candidates in a certain range. While the Number Field Sieve is asymptotically faster, the Quadratic Sieve is still considered the most efficient factorization technique for numbers up to around 100 digits. In this paper, we challenge the Quadratic Sieve by presenting a novel approach based on representing smoothness candidates as sums that are always divisible by several of the primes in the factor base. The resulting values are generally smaller than those considered in the Quadratic Sieve, increasing the likelihood of them being smooth. Using the fastest implementations of the Self-initializing Quadratic Sieve in Python as benchmarks, a Python implementation of our approach runs consistently 5 to 7 times faster for numbers with 45–100 digits, and around 10 times faster for numbers with 30–40 digits. We discuss several avenues for further improvements and applications of the technique.
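The smoothness notion both sieves rely on can be stated concretely: a value is smooth over a factor base if repeated division by the base primes reduces it to 1. A minimal membership check (an illustrative sketch only; the sieving procedures the abstract describes identify smooth candidates in bulk rather than trial-dividing them one by one):

```python
def is_smooth(n, factor_base):
    """Return True if n factors completely over factor_base,
    i.e. trial division by the base primes reduces n to 1."""
    for p in factor_base:
        while n % p == 0:
            n //= p
    return n == 1

# 45 = 3^2 * 5 is smooth over {2, 3, 5}; 46 = 2 * 23 is not.
```

The paper's central idea is then visible in this light: constructing smaller candidate values raises the probability that this check succeeds.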
APA, Harvard, Vancouver, ISO and other citation styles
47

Qian, Tao, Lei Dai, Liming Zhang and Zehua Chen. “Granular sieving algorithm for selecting best n parameters”. Mathematical Methods in the Applied Sciences, 30.03.2022. http://dx.doi.org/10.1002/mma.8254.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
48

Zeng, Jiaqi, Yuxiao Wang, Ziyao Wu and Yizhuang Zhou. “FRAGTE2: An Enhanced Algorithm to Pre-Select Closely Related Genomes for Bacterial Species Demarcation”. Frontiers in Microbiology 13 (18.05.2022). http://dx.doi.org/10.3389/fmicb.2022.847439.

Full text of the source
Annotation:
We previously reported on FRAGTE (hereafter termed FRAGTE1), a promising algorithm for sieving (pre-selecting genome pairs for whole-genome species demarcation). However, the overall number of pairs sieved by FRAGTE1 is still large, incurring a prohibitive computing cost, especially for large datasets. Here, we present FRAGTE2. Tests on simulated genomes, real genomes, and metagenome-assembled genomes revealed that (i) FRAGTE2 reduces the overall number of pairs sieved by FRAGTE1 by ~50–60.10%, dramatically decreasing the computing cost required for whole-genome species demarcation afterward; (ii) FRAGTE2 shows higher sensitivity than FRAGTE1; (iii) FRAGTE2 shows higher specificity than FRAGTE1; and (iv) FRAGTE2 is faster than or comparable with FRAGTE1. In addition, FRAGTE2 is independent of genome completeness, the same as FRAGTE1. We therefore recommend FRAGTE2, tailored for sieving, to facilitate species demarcation in prokaryotes.
APA, Harvard, Vancouver, ISO and other citation styles
49

Dai, Lei, Liming Zhang, Zehua Chen and Weiping Ding. “Collaborative Granular Sieving: A Deterministic Multievolutionary Algorithm for Multimodal Optimization Problems”. Information Sciences, September 2022. http://dx.doi.org/10.1016/j.ins.2022.09.007.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
50

Ding, Mingcai, Xiaoliang Song and Bo Yu. “An Inexact Proximal DC Algorithm with Sieving Strategy for Rank Constrained Least Squares Semidefinite Programming”. Journal of Scientific Computing 91, No. 3 (30.04.2022). http://dx.doi.org/10.1007/s10915-022-01845-4.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles