Journal articles on the topic 'Indexed data compression'


Consult the top 50 journal articles for your research on the topic 'Indexed data compression.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Kaneiwa, Ken, and Koji Fujiwara. "The Compression of Indexed Data and Fast Search for Large RDF Graphs." Transactions of the Japanese Society for Artificial Intelligence 33, no. 2 (2018): E-H43_1–10. http://dx.doi.org/10.1527/tjsai.e-h43.

2

Bouza, M. K. "Analysis and modification of graphic data compression algorithms." Artificial Intelligence 25, no. 4 (December 25, 2020): 32–40. http://dx.doi.org/10.15407/jai2020.04.032.

Abstract:
The article examines the algorithms for JPEG and JPEG-2000 compression of various graphic images. The main steps of the operation of both algorithms are given, and their advantages and disadvantages are noted. The main differences between JPEG and JPEG-2000 are analyzed. It is noted that the JPEG-2000 algorithm allows removing visually unpleasant effects, which makes it possible to highlight important areas of the image and improve the quality of their compression. The features of each step of the algorithms are considered and the difficulties of their implementation are compared. The effectiveness of each algorithm is demonstrated by the example of a full-color image of the BSU emblem. The compression ratios obtained with both algorithms are shown in the corresponding tables. Compression ratios are obtained for a wide range of quality values from 1 to 10. We studied various types of images: black and white, business graphics, indexed and full color. A modified LZW (Lempel-Ziv-Welch) algorithm is presented, which is applicable to compressing a variety of information from text to images. The modification is based on limiting the graphic file to 256 colors, which makes it possible to index a color with one byte instead of three. The efficiency of this modification grows with increasing image size. The modified LZW algorithm can be adapted to any image, from single-color to full-color. The prepared test images were indexed to the required number of colors using the FastStone Image Viewer program. For each image, seven copies were obtained, containing 4, 8, 16, 32, 64, 128 and 256 colors, respectively. Testing results showed that the modified version of the LZW algorithm achieves, on average, twice the compression ratio. However, on the class of full-color images, both algorithms showed the same results. The developed modification of the LZW algorithm can be successfully applied in the field of site design, especially in the case of so-called flat design. The comparative characteristics of the basic and modified methods are presented.
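The core of the modification, indexing each pixel as a single palette byte and then applying LZW, can be sketched as follows. This is a minimal illustration of classic LZW over a byte stream of palette indices, not the modified algorithm evaluated in the article:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Classic LZW over a byte stream, e.g. palette indices of a <=256-color image."""
    dictionary = {bytes([i]): i for i in range(256)}  # all single-byte strings
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = next_code
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

# Indexed-color pixels (one byte each) compress well when patterns repeat.
pixels = bytes([3, 3, 3, 7, 7, 3, 3, 3, 7, 7] * 100)
print(len(pixels), "bytes ->", len(lzw_compress(pixels)), "codes")
```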
3

Senthilkumar, Radha, Gomathi Nandagopal, and Daphne Ronald. "QRFXFreeze: Queryable Compressor for RFX." Scientific World Journal 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/864750.

Abstract:
The verbose nature of XML has been mulled over again and again and many compression techniques for XML data have been excogitated over the years. Some of the techniques incorporate support for querying the XML database in its compressed format while others have to be decompressed before they can be queried. XML compression in which querying is directly supported instantaneously with no compromise over time is forced to compromise over space. In this paper, we propose the compressor, QRFXFreeze, which not only reduces the space of storage but also supports efficient querying. The compressor does this without decompressing the compressed XML file. The compressor supports all kinds of XML documents along with insert, update, and delete operations. The forte of QRFXFreeze is that the textual data are semantically compressed and are indexed to reduce the querying time. Experimental results show that the proposed compressor performs much better than other well-known compressors.
4

Hernández-Illera, Antonio, Miguel A. Martínez-Prieto, Javier D. Fernández, and Antonio Fariña. "iHDT++: improving HDT for SPARQL triple pattern resolution." Journal of Intelligent & Fuzzy Systems 39, no. 2 (August 31, 2020): 2249–61. http://dx.doi.org/10.3233/jifs-179888.

Abstract:
RDF self-indexes compress the RDF collection and provide efficient access to the data without prior decompression (via the so-called SPARQL triple patterns). HDT is one of the reference solutions in this scenario, with several applications to lower the barrier of both publication and consumption of Big Semantic Data. However, the simple design of HDT takes a compromise position between compression effectiveness and retrieval speed. In particular, it supports scan and subject-based queries, but it requires additional indexes to resolve predicate and object-based SPARQL triple patterns. A recent variant, HDT++, improves HDT compression ratios, but it does not retain the original HDT retrieval capabilities. In this article, we extend HDT++ with additional indexes to support full SPARQL triple pattern resolution with a lower memory footprint than the original indexed HDT (called HDT-FoQ). Our evaluation shows that the resultant structure, iHDT++, requires 70-85% of the original HDT-FoQ space (and as little as 48-72% for an HDT Community variant). In addition, iHDT++ shows significant performance improvements (up to one order of magnitude) for most triple pattern queries, being competitive with state-of-the-art RDF self-indexes.
5

Moneta, G. L., A. D. Nicoloff, and J. M. Porter. "Compression Treatment of Chronic Venous Ulceration: A Review." Phlebology: The Journal of Venous Disease 15, no. 3-4 (December 2000): 162–68. http://dx.doi.org/10.1177/026835550001500316.

Abstract:
Objective: To review the recent medical literature with regard to the use of compressive therapy in healing and preventing the recurrence of venous ulceration. Methods: Searches of Medline and Embase medical literature databases. Appropriate non-indexed journals and textbooks were also reviewed. Synthesis: Elastic compression therapy is regarded as the ‘gold standard’ treatment for venous ulceration. The benefits of elastic compression therapy in the treatment of venous ulceration may be mediated through favourable alterations in venous haemodynamics, micro-circulatory haemodynamics and/or improvement in subcutaneous Starling forces. Available data indicate compressive therapy is highly effective in healing of the large majority of venous ulcers. Elastic compression stockings, Unna boots, as well as multi-layer elastic wraps, have all been noted to achieve excellent healing rates for venous ulcers. In compliant patients it appears that approximately 75% of venous ulcers can be healed by 6 months, and up to 90% by 1 year. Non-healing of venous ulcers is associated with lack of patient compliance with treatment, large and long-standing venous ulceration and the coexistence of arterial insufficiency. Recurrence of venous ulceration is, however, a significant problem after healing with compressive therapy, even in compliant patients; approximately 20-30% of venous ulcers will recur by 2 years. Conclusions: Compressive therapy is capable of achieving high rates of healing of venous ulceration in compliant patients. Various forms of compression, including elastic, rigid and multi-layer dressings, are available depending on physician preference, the clinical situation and the needs of the individual patient. Compressive therapy, while effective, remains far from ideal. The future goals are to achieve faster healing of venous ulceration, less painful healing and freedom from ulcer recurrence.
6

Selivanova, Irina V. "Limitations of Applying the Data Compression Method to the Classification of Abstracts of Publications Indexed in Scopus." Vestnik NSU. Series: Information Technologies 18, no. 3 (2020): 57–68. http://dx.doi.org/10.25205/1818-7900-2020-18-3-57-68.

Abstract:
The paper describes the limitations of applying the method of classification of scientific texts based on data compression to all categories indicated in the ASJC classification used in the Scopus bibliographic database. It is shown that the automatic generation of learning samples for each category is a rather time-consuming process, and in some cases is impossible due to the restriction on data upload imposed by Scopus and the lack of category names in the Scopus Search API. Another reason is that in many subject areas there are no journals at all, and accordingly no publications, that have only one category. Application of the method to all 26 subject areas is impossible due to their vastness, as well as to the initial classification used in Scopus. Different subject areas often contain terminologically close categories, which makes it difficult to assign a publication to its true area. These findings also indicate that the classification currently used in Scopus and SciVal may not be completely reliable. For example, according to SciVal, the category “Theoretical computer science” ranks second by number of publications in the subject area “Mathematics”. The study showed that this category is in fact one of the smallest, both in terms of journals and of publications carrying only this category. Thus, many studies based on the use of publications classified by ASJC may contain some inaccuracies.
7

Shibuya, Yoshihiro, and Matteo Comin. "Indexing k-mers in linear space for quality value compression." Journal of Bioinformatics and Computational Biology 17, no. 05 (October 2019): 1940011. http://dx.doi.org/10.1142/s0219720019400110.

Abstract:
Many bioinformatics tools heavily rely on k-mer dictionaries to describe the composition of sequences and allow for faster reference-free algorithms or look-ups. Unfortunately, naive k-mer dictionaries are very memory-inefficient, requiring very large amounts of storage space to save each k-mer. This problem is generally worsened by the necessity of an index for fast queries. In this work, we discuss how to build an indexed linear reference containing a set of input k-mers, and its application to the compression of quality scores in FASTQ files. Most of the entropy of sequencing data lies in the quality scores, and thus they are difficult to compress. Here, we present an application to improve the compressibility of quality values while preserving the information for SNP calling. We show how a dictionary of significant k-mers, obtained from SNP databases and multiple genomes, can be indexed in linear space and used to improve the compression of quality values. Availability: The software is freely available at https://github.com/yhhshb/yalff.
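The underlying idea, keeping full quality scores only where a read contains k-mers absent from a trusted dictionary and flattening the rest, can be sketched as follows. This is an illustration of the general approach, not the YALFF tool itself; the dictionary, the value of k and the replacement score are arbitrary choices for the example:

```python
K = 5
QUALITY_FLAT = "I"   # arbitrary high Phred+33 score used for "trusted" positions

def build_kmer_set(references, k=K):
    """Collect every k-mer occurring in the trusted sequences (e.g. SNP contexts)."""
    kmers = set()
    for seq in references:
        for i in range(len(seq) - k + 1):
            kmers.add(seq[i:i + k])
    return kmers

def smooth_qualities(read, quals, trusted, k=K):
    """Keep the original quality only where a covering k-mer is not in the dictionary."""
    keep = [False] * len(read)
    for i in range(len(read) - k + 1):
        if read[i:i + k] not in trusted:
            for j in range(i, i + k):   # positions covered by an unknown k-mer
                keep[j] = True
    return "".join(q if keep[i] else QUALITY_FLAT for i, q in enumerate(quals))

trusted = build_kmer_set(["ACGTACGTGA", "TTGACGTACG"])
print(smooth_qualities("ACGTACGTTT", "FFFFF:::::", trusted))
```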
8

Gupta, Shweta, Sunita Yadav, and Rajesh Prasad. "Document Retrieval using Efficient Indexing Techniques." International Journal of Business Analytics 3, no. 4 (October 2016): 64–82. http://dx.doi.org/10.4018/ijban.2016100104.

Abstract:
Document retrieval plays a crucial role in retrieving relevant documents. Relevancy depends upon the occurrences of query keywords in a document. Several documents include similar key terms, and hence they need to be indexed. Most indexing techniques are based either on an inverted index or on a full-text index. Inverted indexes create lists and support word-based pattern queries, while full-text indexes handle queries comprising any sequence of characters rather than just words. Problems arise when text cannot be separated into words, as in some western languages. There are also difficulties with the space used by compressed versions of full-text indexes. Recently, a data structure called the wavelet tree has become popular in text compression and indexing. It indexes words or characters of the text documents and helps in retrieving top-ranked documents more efficiently. This paper presents a review of the most recent efficient indexing techniques used in document retrieval.
9

Navarro, Gonzalo. "Indexing Highly Repetitive String Collections, Part I." ACM Computing Surveys 54, no. 2 (April 2021): 1–31. http://dx.doi.org/10.1145/3434399.

Abstract:
Two decades ago, a breakthrough in indexing string collections made it possible to represent them within their compressed space while at the same time offering indexed search functionalities. As this new technology permeated through applications like bioinformatics, the string collections experienced a growth that outperforms Moore’s Law and challenges our ability to handle them even in compressed form. It turns out, fortunately, that many of these rapidly growing string collections are highly repetitive, so that their information content is orders of magnitude lower than their plain size. The statistical compression methods used for classical collections, however, are blind to this repetitiveness, and therefore a new set of techniques has been developed to properly exploit it. The resulting indexes form a new generation of data structures able to handle the huge repetitive string collections that we are facing. In this survey, formed by two parts, we cover the algorithmic developments that have led to these data structures. In this first part, we describe the distinct compression paradigms that have been used to exploit repetitiveness, and the algorithmic techniques that provide direct access to the compressed strings. In the quest for an ideal measure of repetitiveness, we uncover a fascinating web of relations between those measures, as well as the limits up to which the data can be recovered, and up to which direct access to the compressed data can be provided. This is the basic aspect of indexability, which is covered in the second part of this survey.
10

Gupta, Pranjal, Amine Mhedhbi, and Semih Salihoglu. "Columnar storage and list-based processing for graph database management systems." Proceedings of the VLDB Endowment 14, no. 11 (July 2021): 2491–504. http://dx.doi.org/10.14778/3476249.3476297.

Abstract:
We revisit column-oriented storage and query processing techniques in the context of contemporary graph database management systems (GDBMSs). Similar to column-oriented RDBMSs, GDBMSs support read-heavy analytical workloads that however have fundamentally different data access patterns than traditional analytical workloads. We first derive a set of desiderata for optimizing storage and query processors of GDBMS based on their access patterns. We then present the design of columnar storage, compression, and query processing techniques based on these desiderata. In addition to showing direct integration of existing techniques from columnar RDBMSs, we also propose novel ones that are optimized for GDBMSs. These include a novel list-based query processor, which avoids expensive data copies of traditional block-based processors under many-to-many joins, a new data structure we call single-indexed edge property pages and an accompanying edge ID scheme, and a new application of Jacobson's bit vector index for compressing NULL values and empty lists. We integrated our techniques into the GraphflowDB in-memory GDBMS. Through extensive experiments, we demonstrate the scalability and query performance benefits of our techniques.
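The NULL-compression idea mentioned above can be illustrated independently of the paper: store the non-NULL values densely plus a presence bit vector, and use a rank query (a popcount over the prefix) to map a logical position to its slot in the dense array. The sketch below uses a plain block-based rank rather than Jacobson's exact structure:

```python
class NullCompressedColumn:
    """Dense values + presence bit vector; rank(i) locates a value in the dense array."""
    BLOCK = 64

    def __init__(self, values):
        self.present = [v is not None for v in values]
        self.dense = [v for v in values if v is not None]
        # Precomputed rank at the start of each block (superblock idea, Jacobson-style).
        self.block_rank = [0]
        for b in range(0, len(self.present), self.BLOCK):
            self.block_rank.append(self.block_rank[-1] + sum(self.present[b:b + self.BLOCK]))

    def rank(self, i):
        """Number of present entries strictly before position i."""
        b, off = divmod(i, self.BLOCK)
        return self.block_rank[b] + sum(self.present[b * self.BLOCK: b * self.BLOCK + off])

    def get(self, i):
        return self.dense[self.rank(i)] if self.present[i] else None

col = NullCompressedColumn([10, None, None, 30, None, 50])
print([col.get(i) for i in range(6)])   # [10, None, None, 30, None, 50]
```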
11

Ono, S. "High-pressure phase transformation in MnCO3: a synchrotron XRD study." Mineralogical Magazine 71, no. 1 (February 2006): 105–11. http://dx.doi.org/10.1180/minmag.2007.071.1.105.

Abstract:
The high-pressure behaviour of manganese carbonate was investigated by in situ synchrotron X-ray powder diffraction up to 54 GPa with a laser-heated diamond anvil cell. A phase transition from rhodochrosite to a new structural form was observed at 50 GPa after laser heating. The diffraction pattern of the new high-pressure form was reasonably indexed with an orthorhombic unit cell with a = 5.361 Å, b = 8.591 Å and c = 9.743 Å. The pressure-induced phase transition implies a unit-cell volume reduction of ∼5%. This result does not support the direct formation of diamond by dissociation of solid-state MnCO3 reported in a previous study. Fitting the compression data of rhodochrosite to a second-order Birch-Murnaghan equation of state (K0′ = 4) gives K0 = 126(±10) GPa. The c axis of the unit cell was more compressible than the a axis.
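For reference, the second-order Birch-Murnaghan equation of state used in such fits (with the pressure derivative K0′ fixed at 4) has the standard form

```latex
P(V) = \frac{3K_0}{2}\left[\left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3}\right],
```

where K_0 is the zero-pressure bulk modulus and V_0 the zero-pressure unit-cell volume.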
12

Navarro, Gonzalo. "Indexing Highly Repetitive String Collections, Part II." ACM Computing Surveys 54, no. 2 (April 2021): 1–32. http://dx.doi.org/10.1145/3432999.

Abstract:
Two decades ago, a breakthrough in indexing string collections made it possible to represent them within their compressed space while at the same time offering indexed search functionalities. As this new technology permeated through applications like bioinformatics, the string collections experienced a growth that outperforms Moore's Law and challenges our ability to handle them even in compressed form. It turns out, fortunately, that many of these rapidly growing string collections are highly repetitive, so that their information content is orders of magnitude lower than their plain size. The statistical compression methods used for classical collections, however, are blind to this repetitiveness, and therefore a new set of techniques has been developed to properly exploit it. The resulting indexes form a new generation of data structures able to handle the huge repetitive string collections that we are facing. In this survey, formed by two parts, we cover the algorithmic developments that have led to these data structures. In this second part, we describe the fundamental algorithmic ideas and data structures that form the base of all the existing indexes, and the various concrete structures that have been proposed, comparing them both in theoretical and practical aspects, and uncovering some new combinations. We conclude with the current challenges in this fascinating field.
13

GALLÉ, MATTHIAS, PIERRE PETERLONGO, and FRANÇOIS COSTE. "IN-PLACE UPDATE OF SUFFIX ARRAY WHILE RECODING WORDS." International Journal of Foundations of Computer Science 20, no. 06 (December 2009): 1025–45. http://dx.doi.org/10.1142/s0129054109007029.

Abstract:
Motivated by grammatical inference and data compression applications, we propose an algorithm to update a suffix array while in the indexed text some occurrences of a given word are substituted by a new character. Compared to other published index update methods, the problem addressed here may require the modification of a large number of distinct positions over the original text. The proposed algorithm uses the specific internal order of suffix arrays in order to update simultaneously groups of indices, and ensures that only indices to be modified are visited. Experiments confirm a significant execution time speedup compared to the construction of suffix array from scratch at each step of the application.
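For context, a suffix array is just the lexicographic order of all suffix start positions, and the rebuild-from-scratch baseline that the update algorithm is compared against can be sketched in a few lines (the in-place update algorithm itself is not reproduced here):

```python
def suffix_array(text: str) -> list[int]:
    """Naive construction: sort suffix start positions lexicographically."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def recode_word(text: str, word: str, symbol: str) -> str:
    """Substitute every occurrence of `word` by a fresh single character, as in the paper's setting."""
    return text.replace(word, symbol)

s = "abracadabra"
print(suffix_array(s))                              # rebuild once
print(suffix_array(recode_word(s, "abra", "#")))    # rebuild again after recoding
```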
14

Liu, Yuansheng, Zuguo Yu, Marcel E. Dinger, and Jinyan Li. "Index suffix–prefix overlaps by (w, k)-minimizer to generate long contigs for reads compression." Bioinformatics 35, no. 12 (November 8, 2018): 2066–74. http://dx.doi.org/10.1093/bioinformatics/bty936.

Abstract:
Motivation: Advanced high-throughput sequencing technologies have produced massive amounts of read data, and algorithms have been specially designed to reduce the size of these datasets for efficient storage and transmission. Reordering reads with regard to their positions in de novo assembled contigs or in explicit reference sequences has been proven to be one of the most effective read-compression approaches. As there is usually no good prior knowledge about the reference sequence, the current focus is on the novel construction of de novo assembled contigs. Results: We introduce a new de novo compression algorithm named minicom. This algorithm uses large k-minimizers to index the reads and subgroup those that have the same minimizer. Within each subgroup, a contig is constructed. Then some pairs of the contigs derived from the subgroups are merged into longer contigs according to a (w, k)-minimizer-indexed suffix–prefix overlap similarity between two contigs. This merging process is repeated after the longer contigs are formed, until no pair of contigs can be merged. We compare the performance of minicom with two reference-based methods and four de novo methods on 18 datasets (13 RNA-seq datasets and 5 whole-genome sequencing datasets). In the compression of single-end reads, minicom obtained the smallest file size for 22 of 34 cases, with significant improvement. In the compression of paired-end reads, minicom achieved 20–80% compression gain over the best state-of-the-art algorithm. Our method also achieved a 10% size reduction of compressed files in comparison with the best algorithm under the reads-order-preserving mode. These excellent performances are mainly attributed to the exploitation of the redundancy of the repetitive substrings in the long contigs. Availability and implementation: https://github.com/yuansliu/minicom. Supplementary information: Supplementary data are available at Bioinformatics online.
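The minimizer notion used above is easy to sketch directly: a (w, k)-minimizer is the smallest k-mer (here lexicographically smallest; tools usually use a hash order) in each window of w consecutive k-mers, so reads sharing a minimizer are likely to overlap and can be bucketed together. A minimal sketch, not the minicom implementation:

```python
def minimizers(seq: str, w: int, k: int) -> set[str]:
    """Return the set of (w, k)-minimizers of seq (lexicographic order for simplicity)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    return {min(kmers[i:i + w]) for i in range(len(kmers) - w + 1)}

def bucket_reads(reads, w=4, k=7):
    """Group reads that share at least one minimizer (candidate overlaps)."""
    buckets = {}
    for r in reads:
        for m in minimizers(r, w, k):
            buckets.setdefault(m, []).append(r)
    return buckets

reads = ["ACGTACGTGGA", "CGTACGTGGAT", "TTTTGCACGTA"]
for m, group in bucket_reads(reads).items():
    if len(group) > 1:
        print(m, group)   # overlapping reads end up in the same bucket
```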
15

Lin, Feng-Fei, Chao-Hui Lin, Bin Chen, and Ke Zheng. "Combination Prophylaxis versus Pharmacologic Prophylaxis Alone for Preventing Deep Vein Thrombosis in Hip Surgery." HIP International 26, no. 6 (August 10, 2016): 561–66. http://dx.doi.org/10.5301/hipint.5000384.

Abstract:
Purpose: To evaluate the comparative efficacy and safety of combination pharmacologic and graduated compression stockings (GCS) prophylaxis versus pharmacological prophylaxis alone for preventing deep vein thrombosis (DVT) and pulmonary embolism (PE) in hip surgery. Methods: Relevant publications indexed in PubMed, Cochrane Library, Embase, Web of Science, Wanfang Data, CNKI and Sinomed (CBM) were identified. Appropriate articles identified from the reference lists of the above searches were also reviewed. Results: Significant differences in the rate of distal DVT were observed between the combination prophylaxis and pharmacological groups. When data from Fredin 1989 were excluded, no significant difference in the rate of distal DVT was seen between groups. No significant difference in the rate of proximal DVT or PE was observed between the combination and pharmacologic prophylaxis groups. Conclusions: A combination of pharmacological prophylaxis and GCS can decrease distal DVT in the lower extremity when compared to pharmacological prophylaxis alone, but it is not useful in decreasing proximal DVT and PE. If currently recommended pharmacologic prophylaxis is used, it is not necessary to combine it with GCS.
16

Öztekin, Ertekin. "ANN based investigations of reliabilities of the models for concrete under triaxial compression." Engineering Computations 33, no. 7 (October 3, 2016): 2019–44. http://dx.doi.org/10.1108/ec-03-2015-0065.

Abstract:
Purpose A lot of triaxial compressive models for different concrete types and different concrete strength classes were proposed to be used in structural analyses. The existence of so many models creates conflicts and confusions during the selection of the models. In this study, reliability analyses were carried out to prevent such conflicts and confusions and to determine the most reliable model for normal- and high-strength concrete (NSC and HSC) under combined triaxial compressions. The paper aims to discuss these issues. Design/methodology/approach An analytical model was proposed to estimate the strength of NSC and HSC under different triaxial loadings. After verifying the validity of the model by making comparisons with the models in the literature, reliabilities of all models were investigated. The Monte Carlo simulation method was used in the reliability studies. Artificial experimental data required for the Monte Carlo simulation method were generated by using artificial neural networks. Findings The validity of the proposed model was verified. Reliability indexes of triaxial compressive models were obtained for the limit states, different concrete strengths and different lateral compressions. Finally, the reliability indexes were tabulated to be able to choose the best model for NSC and HSC under different triaxial compressions. Research limitations/implications Concrete compressive strength and lateral compression were taken as variables in the model. Practical implications The reliability indexes were tabulated to be able to choose the best model for NSC and HSC under different triaxial compressions. Originality/value A new analytical model was proposed to estimate the strength of NSC and HSC under different triaxial loadings. Reliability indexes of triaxial compressive models were obtained for the limit states, different concrete strengths and different lateral compressions. Artificial experimental data were obtained by using artificial neural networks. Four different artificial neural networks were developed to generate artificial experimental data. They can also be used in the estimations of the strength of NSC and HSC under different triaxial loadings.
17

Siboni, Shachar, and Asaf Cohen. "Anomaly Detection for Individual Sequences with Applications in Identifying Malicious Tools." Entropy 22, no. 6 (June 12, 2020): 649. http://dx.doi.org/10.3390/e22060649.

Abstract:
Anomaly detection refers to the problem of identifying abnormal behaviour within a set of measurements. In many cases, one has some statistical model for normal data, and wishes to identify whether new data fit the model or not. However, in others, while there are normal data to learn from, there is no statistical model for this data, and there is no structured parameter set to estimate. Thus, one is forced to assume an individual sequences setup, where there is no given model or any guarantee that such a model exists. In this work, we propose a universal anomaly detection algorithm for one-dimensional time series that is able to learn the normal behaviour of systems and alert for abnormalities, without assuming anything on the normal data, or anything on the anomalies. The suggested method utilizes new information measures that were derived from the Lempel–Ziv (LZ) compression algorithm in order to optimally and efficiently learn the normal behaviour (during learning), and then estimate the likelihood of new data (during operation) and classify it accordingly. We apply the algorithm to key problems in computer security, as well as a benchmark anomaly detection data set, all using simple, single-feature time-indexed data. The first is detecting Botnets Command and Control (C&C) channels without deep inspection. We then apply it to the problems of malicious tools detection via system calls monitoring and data leakage identification. We conclude with the New York City (NYC) taxi data. Finally, while using information theoretic tools, we show that an attacker's attempt to maliciously fool the detection system by trying to generate normal data is bound to fail, either due to a high probability of error or because of the need for huge amounts of resources.
18

Riznyk, V. V. "FORMALIZATION CODING METHODS OF INFORMATION UNDER TOROIDAL COORDINATE SYSTEMS." Radio Electronics, Computer Science, Control, no. 2 (July 7, 2021): 144–53. http://dx.doi.org/10.15588/1607-3274-2021-2-15.

Abstract:
Context. Coding and processing of large information content actualizes the problem of formalizing the interdependence between the information parameters of vector data coding systems on a single mathematical platform. Objective. The formalization of relationships between the information parameters of vector data coding systems in the optimized basis of toroidal coordinate systems, achieving a favorable compromise between contradictory goals. Method. The method involves establishing a harmonious mutual penetration of symmetry and asymmetry, a remarkable property of real space, which allows the use of decoded information for forming the mathematical principle relating to the optimal placement of structural elements in spatially or temporally distributed systems, using novel designs based on the concept of Ideal Ring Bundles (IRBs). IRBs are cyclic sequences of positive integers which divide a symmetric sphere about the center of symmetry. The sums of connected sub-sequences of an IRB enumerate the set of partitions of a sphere exactly R times. Two- and multidimensional IRBs, namely the "Glory to Ukraine Stars", are sets of t-dimensional vectors, each of which, as well as all modular sums of them, enumerates the set of node points of the grid of a toroid coordinate system with the corresponding sizes and dimensionality exactly R times. Moreover, we require each indexed vector data item "category-attribute" to correspond mutually uniquely to the point with the eponymous set of coordinates. Besides, a combination of binary code with vector weight discharges of the database is allowed, and the set of all values of indexed vector data sets is the same as a set of numerical values. The underlying mathematical principle relates to the optimal placement of structural elements in spatially and/or temporally distributed systems, using novel designs based on t-dimensional "star" combinatorial configurations, including the appropriate algebraic theory of cyclic groups, number theory, modular arithmetic, and IRB geometric transformations. Results. The relationship of vector code information parameters (capacity, code size, dimensionality, number of encoding vectors) with the geometric parameters of the coordinate system (dimension, dimensionality, and grid sizes) and with vector data characteristics (number of attributes and number of categories, entity-attribute-value list size) has been formalized. A system of formulas is derived as a functional dependency between the above parameters, which allows achieving a favorable compromise between contradictory goals (for example, the performance and reliability of the coding method). A theorem, with corresponding corollaries, about the maximum vector code size of conversion methods for t-dimensional indexed data sets "category-attribute" is proved. Theoretically, the existence of an infinitely large number of minimized bases is substantiated; these give rise to numerous varieties of multidimensional "star" coordinate systems, which can find practical application in modern and future multidimensional information technologies. Conclusions. The formalization provides, essentially, a new conceptual model of information systems for optimal coding and processing of big vector data, using novel designs based on the remarkable properties and structural perfection of the "Glory to Ukraine Stars" combinatorial configurations. Moreover, the optimization has been embedded in the underlying combinatorial models.
The favorable qualities of the combinatorial structures can be applied to the vector-data-coded design of multidimensional signals, to signal compression and reconstruction for communications and radar, and to other areas to which the GUS model can be useful. There are many opportunities to apply them to numerous branches of science and advanced systems engineering, including information technologies under toroidal coordinate systems. Perfection, harmony and beauty exist not only in abstract models but in the real world also.
19

Han, Bo, and Bolang Li. "Lossless Compression of Data Tables in Mobile Devices by Using Co-clustering." International Journal of Computers Communications & Control 11, no. 6 (October 17, 2016): 776. http://dx.doi.org/10.15837/ijccc.2016.6.2554.

Abstract:
Data tables have been widely used for storing collections of related records in a structured format in many mobile applications. The lossless compression of data tables not only brings benefits for storage, but also reduces network transmission latencies and energy costs in batteries. In this paper, we propose a novel lossless compression approach combining co-clustering and information coding theory. It reorders table columns and rows simultaneously to shape homogeneous blocks, and further optimizes the alignment within a block to expose redundancy, such that standard lossless encoders can significantly improve compression ratios. We tested the approach on a synthetic dataset and ten UCI real-life datasets using the standard compressor 7Z. The extensive experimental results suggest that, compared with direct table compression without co-clustering and within-block alignment, our approach can boost compression rates by at least 21% and up to 68%. The results also show that the compression time cost of the co-clustering approach is linearly proportional to the data table size. In addition, since the inverse transform of co-clustering is just an exchange of rows and columns according to recorded indexes, the decompression procedure runs very fast and its time cost is similar to the counterpart without co-clustering. Our approach is therefore suitable for lossless compression of data tables in mobile devices with constrained resources.
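The effect exploited here, that reordering rows and columns to form homogeneous blocks helps a generic compressor, can be reproduced in a drastically simplified form. The sketch below uses plain row sorting instead of true co-clustering and zlib instead of 7Z, and typically shows a smaller size for the reordered table:

```python
import csv, io, random, zlib

def table_bytes(rows):
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue().encode()

# Synthetic table: the letter determines the numeric range, so grouping rows
# by letter creates homogeneous blocks for the compressor.
random.seed(0)
rows = []
for _ in range(2000):
    group = random.choice("AB")
    rows.append([group, random.randint(0, 9) if group == "A" else random.randint(90, 99)])

direct = len(zlib.compress(table_bytes(rows), 9))
reordered = len(zlib.compress(table_bytes(sorted(rows)), 9))  # crude stand-in for co-clustering
print(f"direct: {direct} bytes, row-sorted: {reordered} bytes")
```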
20

Al-Bahadili, Hussein, and Saif Al-Saab. "Development of a Novel Compressed Index-Query Web Search Engine Model." International Journal of Information Technology and Web Engineering 6, no. 3 (July 2011): 39–56. http://dx.doi.org/10.4018/jitwe.2011070103.

Abstract:
In this paper, the authors present a description of a new Web search engine model, the compressed index-query (CIQ) Web search engine model. This model incorporates two bit-level compression layers implemented at the back-end processor (server) side: one layer resides after the indexer, acting as a second compression layer to generate a double compressed index (index compressor), and the second layer resides after the query parser for query compression (query compressor), enabling bit-level compressed index-query search. The data compression algorithm used in this model is the Hamming codes-based data compression (HCDC) algorithm, an asymmetric, lossless, bit-level algorithm that permits CIQ search. The different components of the new Web model are implemented in a prototype CIQ test tool (CIQTT), which is used as a test bench to validate the accuracy and integrity of the retrieved data and to evaluate the performance of the proposed model. The test results demonstrate that the proposed CIQ model reduces disk space requirements and searching time by more than 24%, and attains 100% agreement when compared with an uncompressed model.
21

Preetha, J., et al. "An Improved Framework for Bitmap Indexes and their Use in Data Warehouse Optimization." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 10, 2021): 1513–20. http://dx.doi.org/10.17762/turcomat.v12i2.1389.

Abstract:
Compression techniques are basically used to reduce the size of a table or the storage area it occupies. Oracle already provides this feature for both table compression and index compression. When an index is created on a particular column of a table, it occupies space, which requires storage on disk; with this technique we can save disk space, because in industry a company has to purchase disk space according to the size of its data and pay accordingly. The aim is to use this disk space for useful record data rather than wasting it. In this paper, the Data Pump utility is used for the compression of bitmap indexes and tables. The Data Pump utility is normally used for logical backups of a database; here it is applied to compression, to release space and change the index pointing location, since space is not released even after records are deleted. This is of special interest for the case of compressing the bitmap index and table space along with DML (Data Manipulation Language) operations.
22

Ferrada, H., T. Gagie, T. Hirvola, and S. J. Puglisi. "Hybrid indexes for repetitive datasets." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 372, no. 2016 (May 28, 2014): 20130137. http://dx.doi.org/10.1098/rsta.2013.0137.

Abstract:
Advances in DNA sequencing mean that databases of thousands of human genomes will soon be commonplace. In this paper, we introduce a simple technique for reducing the size of conventional indexes on such highly repetitive texts. Given upper bounds on pattern lengths and edit distances, we pre-process the text with the lossless data compression algorithm LZ77 to obtain a filtered text, for which we store a conventional index. Later, given a query, we find all matches in the filtered text, then use their positions and the structure of the LZ77 parse to find all matches in the original text. Our experiments show that this also significantly reduces query times.
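A minimal, quadratic-time sketch of the LZ77 factorization such preprocessing relies on, emitting (distance, length, next-character) triples; practical hybrid indexes use far more efficient parsers:

```python
def lz77_parse(text: str):
    """Greedy LZ77 factorization into (distance, length, next_char) triples."""
    i, factors = 0, []
    while i < len(text):
        best_len, best_dist = 0, 0
        for j in range(i):                      # candidate source positions (naive scan)
            l = 0
            while i + l < len(text) - 1 and text[j + l] == text[i + l]:
                l += 1
            if l > best_len:
                best_len, best_dist = l, i - j
        factors.append((best_dist, best_len, text[i + best_len]))
        i += best_len + 1
    return factors

def lz77_decode(factors):
    out = []
    for dist, length, ch in factors:
        start = len(out) - dist
        for k in range(length):                 # char-by-char copy allows overlapping sources
            out.append(out[start + k])
        out.append(ch)
    return "".join(out)

s = "abababababcabababab"
parse = lz77_parse(s)
print(parse)
assert lz77_decode(parse) == s
```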
23

Devillers, P., C. Saix, and M. S. El Youssoufi. "Loi de comportement thermo-hydromécanique pour les sols non saturés : identification in situ des indices de compression thermique." Canadian Geotechnical Journal 33, no. 2 (May 8, 1996): 250–59. http://dx.doi.org/10.1139/t96-004.

Abstract:
This paper deals with thermo-hydromechanical behaviour of nonsaturated soils. The constitutive relationship presented herein allows for the prediction of the settlement or swell of an unsaturated soil under nonisothermal oedometric conditions. Mechanical, hydraulic, and thermal compression indexes are considered for stress, capillary pressure, and temperature variables, respectively. The implementation of the relationship in a prediction scheme requires preliminary characterization of these indexes. The stress–strain and water volume change relationships are first presented from semiempirical point of view for a nonsaturated soil element under nonisothermal conditions. These relationships allow for the expression of a thermo-hydromechanical constitutive law for nonsaturated soils and propose a relationship for the change in the soil water content. The thermal compression indexes are then determined for a clayey silty sand, first using a reverse method and then a direct method. This determination is made from experimental data recorded on a prototype involving heat storage in an aquifer. The values of these thermal compression indexes are finally compared with the laboratory values obtained in a thermal triaxial cell on samples of the same soil. Key words: nonsaturated soils, thermo-hydromechanical, oedometer tests, thermal compression indexes, characterization, reverse method, direct method.[Journal Translation]
24

Wang, Rongjie, Junyi Li, Yang Bai, Tianyi Zang, and Yadong Wang. "BdBG: a bucket-based method for compressing genome sequencing data with dynamic de Bruijn graphs." PeerJ 6 (October 19, 2018): e5611. http://dx.doi.org/10.7717/peerj.5611.

Abstract:
Dramatic increases in the data produced by next-generation sequencing (NGS) technologies demand data compression tools for saving storage space. However, effective and efficient data compression for genome sequencing data has remained an unresolved challenge in NGS data studies. In this paper, we propose a novel alignment-free and reference-free compression method, BdBG, which is the first to compress genome sequencing data with dynamic de Bruijn graphs built on the data after bucketing. Compared with existing de Bruijn graph methods, BdBG stores only a list of bucket indexes and bifurcations for the raw read sequences, and this feature can effectively reduce storage space. Experimental results on several genome sequencing datasets show the effectiveness of BdBG over three state-of-the-art methods. BdBG is written in Python, and it is open-source software distributed under the MIT license, available for download at https://github.com/rongjiewang/BdBG.
25

Zhao, Yakun, Jianhong Chen, Shan Yang, and Zhe Liu. "Game Theory and an Improved Maximum Entropy-Attribute Measure Interval Model for Predicting Rockburst Intensity." Mathematics 10, no. 15 (July 22, 2022): 2551. http://dx.doi.org/10.3390/math10152551.

Abstract:
To improve the accuracy of predicting rockburst intensity, game theory and an improved maximum entropy-attribute measure interval model were established. First, by studying the mechanism of rockburst and typical cases, the rock uniaxial compressive strength σc, rock compression-tension ratio σc/σt, rock shear-compression ratio σθ/σc, rock elastic deformation coefficient Wet, and rock integrity coefficient were selected as indexes for predicting rockburst intensity. Second, by combining the maximum entropy principle with the attribute measure interval and using the minimum distance Di−k between sample and class as the guide, the entropy solution of the attribute measure was obtained, which eliminates the greyness and ambiguity of the rockburst indexes to the maximum extent. Third, the compromise coefficient was used to integrate the comprehensive attribute measure, which avoids ambiguity about the number of attribute measure intervals. Fourth, starting from the essence of measurement theory, the Euclidean distance formula was used to improve the attribute identification mode, which overcomes the effect of the choice of confidence coefficient on the results. Moreover, in order to balance the shortcomings of the subjective weights of the Analytic Hierarchy Process and the objective weights of the CRITIC method, game theory was used to form the combined weights, which balances experts' experience and the amount of data information. Finally, 20 sets of typical rockburst cases from around the world were selected as samples. On the one hand, the reasonableness of the combined index weights was analyzed; on the other hand, the results of this paper's model were compared with three analytical models for predicting rockburst. This paper's model had the lowest number of misjudged samples and an accuracy rate of 80%, better than the other models, verifying its accuracy and applicability.
26

Yan, Wen Yi, and Hong Yuan Liu. "Influence of Z-Pinning on the Buckling of Composite Laminates under Edge-Wise Compression." Key Engineering Materials 312 (June 2006): 127–32. http://dx.doi.org/10.4028/www.scientific.net/kem.312.127.

Abstract:
Z-pinning is a newly developed technique to enhance the strength of composite laminates in the thickness direction. Recent experimental and theoretical studies have shown that z-pins significantly improve mode I and mode II fracture toughness. In practice, buckling accompanying delamination is a typical failure mode in laminated composite structures. For a complete understanding of how the z-pinning technique improves the overall mechanical properties of laminated composites, a numerical model is developed in this paper to investigate the influence of z-pins on the buckling of composite laminates with initial delaminations under edge-wise compression. The numerical results indicate that z-pinning can indeed effectively increase the compressive strength of the composite laminates provided that the initial imperfection is within a certain range. The magnitude of the improvement is consistent with available experimental data.
27

Maarala, Altti Ilari, Ossi Arasalo, Daniel Valenzuela, Veli Mäkinen, and Keijo Heljanko. "Distributed hybrid-indexing of compressed pan-genomes for scalable and fast sequence alignment." PLOS ONE 16, no. 8 (August 3, 2021): e0255260. http://dx.doi.org/10.1371/journal.pone.0255260.

Abstract:
Computational pan-genomics utilizes information from multiple individual genomes in large-scale comparative analysis. Genetic variation between case-controls, ethnic groups, or species can be discovered thoroughly using pan-genomes of such subpopulations. Whole-genome sequencing (WGS) data volumes are growing rapidly, making genomic data compression and indexing methods very important. Despite current space-efficient repetitive sequence compression and indexing methods, the deployed compression methods are often sequential, computationally time-consuming, and do not provide efficient sequence alignment performance on vast collections of genomes such as pan-genomes. For performing rapid analytics with the ever-growing genomics data, data compression and indexing methods have to exploit distributed and parallel computing more efficiently. Instead of strict genome data compression methods, we focus on the efficient construction of a compressed index for pan-genomes. A compressed hybrid-index enables fast sequence alignments to several genomes at once while shrinking the index size significantly compared to traditional indexes. We propose a scalable distributed compressed hybrid-indexing method for large genomic data sets enabling pan-genome-based sequence search and read alignment capabilities. We show the scalability of our tool, DHPGIndex, by executing experiments in a distributed Apache Spark-based computing cluster comprising 448 cores distributed over 26 nodes. The experiments have been performed with both human and bacterial genomes. DHPGIndex built a BLAST index for an n = 250 human pan-genome with an 870:1 compression ratio (CR) in 342 minutes and a Bowtie2 index with a 157:1 CR in 397 minutes. For the n = 1,000 human pan-genome, the BLAST index was built in 1520 minutes with a 532:1 CR and the Bowtie2 index in 1938 minutes with a 76:1 CR. Bowtie2 aligned 14.6 GB of paired-end reads to the compressed (n = 1,000) index in 31.7 minutes on a single node. Compressing the n = 13,375,031-sequence (488 GB) GenBank database to a BLAST index resulted in a CR of 62:1 in 575 minutes. BLASTing 189,864 Crispr-Cas9 gRNA target sequences (23 MB in total) against the compressed index of the human pan-genome (n = 1,000) finished in 45 minutes on a single node. 30 MB of mixed bacterial sequences (n = 599) were BLASTed against the compressed index of the 488 GB GenBank database (n = 13,375,031) in 26 minutes on 25 nodes. 78 MB of mixed sequences (n = 4,167) were BLASTed against the compressed index of the 18 GB E. coli sequence database (n = 745,409) in 5.4 minutes on a single node.
28

Asnaoui, Khalid El. "Image Compression Based on Block SVD Power Method." Journal of Intelligent Systems 29, no. 1 (April 2, 2019): 1345–59. http://dx.doi.org/10.1515/jisys-2018-0034.

Abstract:
In recent years, the important and fast growth in the development of and demand for multimedia products has been contributing to an insufficiency in the bandwidth of devices and in network storage memory. Consequently, the theory of data compression becomes more significant for reducing data redundancy in order to allow more transfer and storage of data. In this context, this paper addresses the problem of lossy image compression. Indeed, the newly proposed method is based on the block singular value decomposition (SVD) power method, which overcomes the disadvantages of MATLAB's SVD function in order to perform lossy image compression. The experimental results show that the proposed algorithm has better compression performance compared with existing compression algorithms that use MATLAB's SVD function. In addition, the proposed approach is simple in terms of implementation and can provide different degrees of error resilience, which gives, in a short execution time, a better image compression.
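The underlying operation, a rank-k truncation of each image block's SVD, can be sketched with numpy; this uses numpy's standard SVD routine rather than the block power method proposed in the paper, and the block size and rank below are arbitrary:

```python
import numpy as np

def svd_compress_block(block: np.ndarray, k: int) -> np.ndarray:
    """Rank-k approximation of one image block via truncated SVD."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

def svd_compress(image: np.ndarray, block=32, k=4) -> np.ndarray:
    """Apply rank-k truncation block by block (image dimensions assumed divisible by `block`)."""
    out = np.empty_like(image, dtype=float)
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            out[r:r + block, c:c + block] = svd_compress_block(
                image[r:r + block, c:c + block].astype(float), k)
    return out

img = np.random.default_rng(0).integers(0, 256, size=(128, 128))
approx = svd_compress(img, block=32, k=4)
# Stored size per block drops from block*block values to k*(2*block + 1).
print("PSNR:", 10 * np.log10(255**2 / np.mean((img - approx) ** 2)))
```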
29

Delaunay, Xavier, Aurélie Courtois, and Flavien Gouillon. "Evaluation of lossless and lossy algorithms for the compression of scientific datasets in netCDF-4 or HDF5 files." Geoscientific Model Development 12, no. 9 (September 23, 2019): 4099–113. http://dx.doi.org/10.5194/gmd-12-4099-2019.

Abstract:
The increasing volume of scientific datasets requires the use of compression to reduce data storage and transmission costs, especially for the oceanographic or meteorological datasets generated by Earth observation mission ground segments. These data are mostly produced in netCDF files. Indeed, the netCDF-4/HDF5 file formats are widely used throughout the global scientific community because of the useful features they offer. HDF5 in particular offers a dynamically loaded filter plugin so that users can write compression/decompression filters, for example, and process the data before reading or writing them to disk. This study evaluates lossy and lossless compression/decompression methods through netCDF-4 and HDF5 tools on analytical and real scientific floating-point datasets. We also introduce the Digit Rounding algorithm, a new relative error-bounded data reduction method inspired by the Bit Grooming algorithm. The Digit Rounding algorithm offers a high compression ratio while keeping a given number of significant digits in the dataset. It achieves a higher compression ratio than the Bit Grooming algorithm with slightly lower compression speed.
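The effect of keeping only a fixed number of significant digits is easy to illustrate. The sketch below rounds in decimal for simplicity, whereas Bit Grooming and Digit Rounding operate directly on the IEEE-754 mantissa bits so that the discarded bits compress away; it only demonstrates that lower precision makes the packed values far more compressible:

```python
import math, random, struct, zlib

def round_significant(x: float, digits: int) -> float:
    """Keep `digits` significant decimal digits of x."""
    if x == 0.0 or not math.isfinite(x):
        return x
    exponent = math.floor(math.log10(abs(x)))
    return round(x, digits - 1 - exponent)

def packed(vals):
    return struct.pack(f"{len(vals)}d", *vals)

random.seed(1)
data = [20.0 + random.random() for _ in range(10000)]   # e.g. a temperature field
for d in (None, 5, 3):
    vals = data if d is None else [round_significant(v, d) for v in data]
    label = "original" if d is None else f"{d} significant digits"
    print(f"{label}: {len(zlib.compress(packed(vals), 9))} bytes compressed")
```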
30

Zhu, Yongjun, Wenbo Liu, Qian Shen, Yin Wu, and Han Bao. "JPEG Lifting Algorithm Based on Adaptive Block Compressed Sensing." Mathematical Problems in Engineering 2020 (July 11, 2020): 1–17. http://dx.doi.org/10.1155/2020/2873830.

Abstract:
This paper proposes a JPEG lifting algorithm based on adaptive block compressed sensing (ABCS), which achieves the fusion of the ABCS algorithm for 1-dimensional vector data processing with the JPEG compression algorithm for 2-dimensional image data processing, and improves the compression rate at the same image quality in comparison with existing JPEG-like image compression algorithms. Specifically, mean information entropy and multi-feature saliency indexes are used to provide a basis for adaptive blocking and observing, respectively; a joint model and curve fitting are adopted for bit-rate control; and a noise analysis model is introduced to improve the anti-noise capability of the current JPEG decoding algorithm. Experimental results show that the proposed method has good fidelity and anti-noise performance, especially at medium compression ratios.
31

Ji, Shi Jun. "Research on Compression Methods for ASCII STL File." Advanced Materials Research 108-111 (May 2010): 1254–58. http://dx.doi.org/10.4028/www.scientific.net/amr.108-111.1254.

Abstract:
Two new types of ASCII-format STL file are presented, based on lossless data compression and lossy implicit data compression, respectively. In these new file formats, a vertex with three coordinates that has degree n is stored only once rather than n times, and the three vertexes of a facet are saved by their indexes instead of by their three coordinates (x, y, z). As a result, the storage space for the vertexes is reduced to about 1/3 of the original space. This is very convenient for file transmission in network manufacturing. Moreover, the reading, writing and pre-treatment times of the data file are shortened greatly because of the decrease in file length.
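The vertex-indexing idea is straightforward to sketch: collect each distinct vertex once, then store every facet as three indexes into the vertex list. This illustrates the lossless variant only (the lossy variant additionally quantizes coordinates) and is not the paper's actual file format:

```python
def index_facets(facets):
    """facets: list of ((x,y,z), (x,y,z), (x,y,z)) triangles from an ASCII STL file.
    Returns (vertex_list, facet_index_triples) with each distinct vertex stored once."""
    vertex_id = {}
    vertices = []
    indexed = []
    for tri in facets:
        ids = []
        for v in tri:
            if v not in vertex_id:
                vertex_id[v] = len(vertices)
                vertices.append(v)
            ids.append(vertex_id[v])
        indexed.append(tuple(ids))
    return vertices, indexed

# Two triangles sharing an edge: 6 stored vertices shrink to 4 plus index triples.
facets = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
          ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
vertices, indexed = index_facets(facets)
print(vertices)   # 4 distinct vertices
print(indexed)    # [(0, 1, 2), (1, 3, 2)]
```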
32

Rosa Righi, Rodrigo da, Vinicius F. Rodrigues, Cristiano A. Costa, and Roberto Q. Gomes. "Exploiting Data-Parallelism on Multicore and SMT Systems for Implementing the Fractal Image Compressing Problem." Computer and Information Science 10, no. 1 (December 25, 2016): 34. http://dx.doi.org/10.5539/cis.v10n1p34.

Abstract:
This paper presents a parallel modeling of a lossy image compression method based on fractal theory and its evaluation over two versions of dual-core processors: with and without simultaneous multithreading (SMT) support. The idea is to observe the speedup in both configurations when changing application parameters and the number of threads at the operating system level. Our target application is particularly relevant in the Big Data era. Huge amounts of data often need to be sent over low/medium-bandwidth networks, and/or to be saved on devices with limited storage capacity, motivating efficient image compression. In particular, fractal compression is a CPU-bound coding method known for offering high file-reduction indexes through highly time-consuming calculations. The structure of the problem allowed us to explore data parallelism by implementing an embarrassingly parallel version of the algorithm. Despite its simplicity, our modeling is useful for fully exploiting and evaluating the considered architectures. When comparing performance on both processors, the results demonstrated that the SMT-based one presented gains of up to 29%. Moreover, they emphasized that a larger number of threads does not always represent a reduction in application time. On average, the results showed a curve in which a strong time reduction is achieved when working with 4 and 8 threads for the pure and SMT dual-core processors, respectively. The trend is a slow growth of the execution time as the number of threads increases, due to both task granularity and thread management.
33

Zhang, Yi, Ephraim Suhir, and Yuan Xu. "Effective Young's modulus of carbon nanofiber array." Journal of Materials Research 21, no. 11 (November 2006): 2948–54. http://dx.doi.org/10.1557/jmr.2006.0363.

Abstract:
We developed a methodology for the evaluation of the effective Young's modulus (EYM) of a vertically aligned carbon nanofiber array (CNFA). The carbon nanofiber array is treated in this study as a continuous structural element, and, for this reason, the determined EYM might be appreciably different from (actually, lower than) the Young's modulus (YM) of the material of an individual carbon nanotube or nanofiber. The developed methodology is based on the application of a compressive load onto the carbon nanofiber array, so that each individual carbon nanofiber experiences axial compression and is expected to buckle under the compressive load. The relationship between the applied compressive stress and the induced displacement of the carbon nanofiber array is measured using a table version of an Instron tester. It has been found that the carbon nanofiber array exhibits nonlinear behavior and that the EYM increases with an increase in the compressive load. The largest measured EYM of the carbon nanofiber array turned out to be about 90 GPa. It has also been found that fragmentary pieces of the lateral graphitic layer in the carbon nanofiber array resulted in a substantial worsening of the quality of the carbon nanofibers. This might be one of the possible reasons why the measured EYM turned out to be much lower than the theoretical predictions reported in the literature. The measured EYM is also much lower than the atomic force microscopy (AFM)-based data reported in the literature for the EYM of multiwalled carbon nanotubes (MWCNTs) that possess uniform and straight graphitic wall structures. Our transmission electron microscope (TEM) observations have indeed revealed the poor structural quality of the plasma-enhanced chemical vapor deposition (PECVD)-grown CNFs.
APA, Harvard, Vancouver, ISO, and other styles
34

Zhu, Yongjun, Wenbo Liu, and Qian Shen. "Adaptive Algorithm on Block-Compressive Sensing and Noisy Data Estimation." Electronics 8, no. 7 (July 3, 2019): 753. http://dx.doi.org/10.3390/electronics8070753.

Full text
Abstract:
In this paper, an altered adaptive algorithm for block-compressive sensing (BCS) is developed using saliency and error analysis. It has been observed that the performance of BCS can be improved by rational block partitioning and an uneven sampling ratio, as well as by adopting error analysis in the reconstruction process. The weighted mean information entropy is adopted as the basis for the partitioning of BCS, which results in a flexible block grouping. Furthermore, a synthetic feature (SF) based on local saliency and variance is introduced for step-less adaptive sampling, which works well in distinguishing and sampling smooth blocks versus detail blocks. The error analysis method is used to estimate the optimal number of iterations in sparse reconstruction. Based on the above points, an altered adaptive block-compressive sensing algorithm with flexible partitioning and error analysis is proposed in this article. On the one hand, it provides a feasible solution for the partitioning and sampling of an image; on the other hand, it also changes the iteration stop condition of the reconstruction and thereby improves the quality of the reconstructed image. The experimental results verify the effectiveness of the proposed algorithm and show a clear improvement in terms of Peak Signal to Noise Ratio (PSNR), Structural Similarity (SSIM), Gradient Magnitude Similarity Deviation (GMSD), and Block Effect Index (BEI).
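A minimal sketch of the kind of saliency/variance-driven rate allocation the abstract describes is given below; the specific synthetic-feature weighting, block size, and clipping bounds are assumptions for illustration, not the paper's exact definitions.

# Illustrative sketch of feature-driven sampling-rate allocation for
# block-compressive sensing: detail-rich blocks receive a larger share of
# the global measurement budget.
import numpy as np

def block_entropy(block, bins=32):
    hist, _ = np.histogram(block, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

def allocate_sampling_ratios(image, block=16, total_ratio=0.3):
    h, w = image.shape
    feats, coords = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = image[i:i + block, j:j + block].astype(float)
            # "Synthetic feature" stand-in: variance weighted by entropy.
            feats.append(b.var() * (1.0 + block_entropy(b)))
            coords.append((i, j))
    feats = np.asarray(feats)
    # Distribute the global budget proportionally to the feature, keeping a
    # minimum rate for smooth blocks and a cap for detail blocks.
    ratios = total_ratio * len(feats) * feats / feats.sum()
    return dict(zip(coords, np.clip(ratios, 0.05, 0.9)))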
APA, Harvard, Vancouver, ISO, and other styles
35

Jiang, Juyu, Dong Wang, Xinping Han, and Shuai Di. "Relationship between Brittleness Index and Crack Initiation Stress Ratio for Different Rock Types." Advances in Civil Engineering 2020 (April 24, 2020): 1–12. http://dx.doi.org/10.1155/2020/8091895.

Full text
Abstract:
Brittleness and crack initiation stress (σci) are important rock mechanical properties and are intrinsically related to rock deformation and failure. We establish the relationship between σci and uniaxial tensile strength (σt) based on the Griffith stress criterion of brittle failure and introduce brittleness indexes B1–B4 based on the ratio of uniaxial compressive strength (σc) to σt. The crack initiation stress ratio (K) is defined as the ratio of σci to the crack damage stress. The relationship between brittleness index and K is obtained from laboratory mechanics tests, including uniaxial compression and Brazilian splitting tests. The results show that B1 and B2 have an inversely proportional and variant inversely proportional relationship with K, respectively, whereas no apparent relationship is observed between B3 and B4 and K. The fitting of experimental data from igneous, metamorphic, and sedimentary rocks shows that B1 and B2 have a power and linear relationship with K, respectively, whereas no functional relationship is observed between B3 and B4 and K. We collected 70 uniaxial compression test data sets for different igneous, metamorphic, and sedimentary rock types and obtained laws that are consistent within each rock type. The experimental data are used to verify estimates of K obtained with a specified constant α. According to the results of the limestone tests, α = 3 for σc < 60 MPa (high porosity), α = 5 for 60 MPa ≤ σc ≤ 90 MPa (moderate porosity), and α = 8 for σc > 90 MPa (low porosity) as well as for igneous and metamorphic rocks. Estimates of K for 127 different rock types using the newly defined brittleness index are in good agreement with the experimental results. This study provides an important new brittleness index calculation method and a simple and reliable method for estimating K.
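In the abstract's notation, the strength-ratio brittleness indexes and the crack initiation stress ratio can be written as below; the forms of B1 and B2 are the ones commonly used in the literature and are assumed here (σ_cd denotes the crack damage stress). A frequently cited consequence of the Griffith criterion, possibly the relation the authors build on, is σ_ci ≈ 8 σ_t.

\[
B_1 = \frac{\sigma_c}{\sigma_t}, \qquad
B_2 = \frac{\sigma_c - \sigma_t}{\sigma_c + \sigma_t}, \qquad
K = \frac{\sigma_{ci}}{\sigma_{cd}}.
\]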
APA, Harvard, Vancouver, ISO, and other styles
36

Han, Jie, Tao Guo, Qiaoqiao Zhou, Wei Han, Bo Bai, and Gong Zhang. "Structural Entropy of the Stochastic Block Models." Entropy 24, no. 1 (January 3, 2022): 81. http://dx.doi.org/10.3390/e24010081.

Full text
Abstract:
With the rapid expansion of graphs and networks and the growing magnitude of data from all areas of science, effective treatment and compression schemes for context-dependent data are extremely desirable. A particularly interesting direction is to compress the data while keeping only the “structural information” and ignoring the concrete labelings. In this direction, Choi and Szpankowski introduced structures (unlabeled graphs), which allowed them to compute the structural entropy of the Erdős–Rényi random graph model. Moreover, they also provided an asymptotically optimal compression algorithm that (asymptotically) achieves this entropy limit and runs in expected linear time. In this paper, we consider stochastic block models with an arbitrary number of parts. Indeed, we define a partitioned structural entropy for stochastic block models, which generalizes the structural entropy for unlabeled graphs and encodes the partition information as well. We then compute the partitioned structural entropy of the stochastic block models and provide a compression scheme that asymptotically achieves this entropy limit.
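For orientation, the Erdős–Rényi structural entropy result of Choi and Szpankowski that this work generalizes can be written, for G(n, p) with binary entropy h(p), roughly as follows (stated from memory, up to vanishing terms and regularity conditions on p; the paper's SBM expression partitions this by blocks):

\[
H_{\mathcal{S}} = \binom{n}{2}\, h(p) \;-\; \log_2 n! \;+\; o(1),
\qquad
h(p) = -p\log_2 p - (1-p)\log_2(1-p).
\]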
APA, Harvard, Vancouver, ISO, and other styles
37

Wu, Yingjie, Kun Du, Chengqing Wu, Ming Tao, and Rui Zhao. "Time-Varying Pattern and Prediction Model for Geopolymer Mortar Performance under Seawater Immersion." Materials 16, no. 3 (February 1, 2023): 1244. http://dx.doi.org/10.3390/ma16031244.

Full text
Abstract:
In this study, immersion experiments were conducted on geopolymer mortar (GPM) using artificial seawater, and the effects of alkali equivalent (AE) and waterglass modulus (WGM) on the resistance of GPM to seawater immersion were analyzed. The test subjected 300 specimens to 270 days of artificial seawater immersion and periodic performance tests. Alkali equivalent (3–15%) and waterglass modulus (1.0–1.8) were employed as influencing factors, and mass loss and uniaxial compressive strength (UCS) were used as the performance evaluation indexes, combined with X-ray diffraction (XRD) and scanning electron microscopy (SEM), to analyze the time-varying pattern of GPM performance under seawater immersion. The findings demonstrated a general trend in which the UCS of GPM under seawater immersion initially grows and then declines. The resistance of GPM to seawater immersion decreased at both higher and lower alkali equivalents, and the ideal range of AE was 9–12%. The diffusion layer of the bilayer structure of the waterglass particles became thinner with an increase in WGM, which ultimately reduced the resistance of the geopolymer structure to seawater immersion. Additionally, a support vector regression (SVR) model was developed based on the experimental data to predict the UCS of GPM under seawater immersion. The model performed well, achieved accurate prediction within 1–2 months, and provides an accurate approach to predicting the strength of geopolymer materials in practical offshore construction projects.
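A minimal sketch of the SVR-based strength prediction described above, using scikit-learn; the feature set (AE, WGM, immersion time) and the numerical values are illustrative placeholders, not the authors' data.

# Fit an RBF-kernel SVR to predict UCS from mix and immersion variables.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: alkali equivalent (%), waterglass modulus, immersion time (days)
X = np.array([[9.0, 1.2, 30], [12.0, 1.4, 90], [6.0, 1.0, 180],
              [9.0, 1.6, 270], [15.0, 1.8, 90], [3.0, 1.2, 270]])
y = np.array([42.1, 45.3, 33.8, 39.5, 36.2, 28.4])   # UCS in MPa (made-up values)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X, y)
print(model.predict([[9.0, 1.2, 60]]))               # predicted UCS for a new case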
APA, Harvard, Vancouver, ISO, and other styles
38

Shirai, Tatsuya, Hiroyuki Yamamoto, Miyuki Matsuo, Mikuri Inatsugu, Masato Yoshida, Saori Sato, KC Sujan, Yoshihito Suzuki, Isao Toyoshima, and Noboru Yamashita. "Negative gravitropism of Ginkgo biloba: growth stress and reaction wood formation." Holzforschung 70, no. 3 (March 1, 2016): 267–74. http://dx.doi.org/10.1515/hf-2015-0005.

Full text
Abstract:
Ginkgo (Ginkgo biloba L.) forms thick, lignified secondary xylem in its cylindrical stem, as conifers (Pinales) do, although it has more phylogenetic affinity to Cycadales than to conifers. Ginkgo forms compression wood-like (CW-like) reaction wood (RW) in its inclined stem, as is the case in conifers. However, the distribution of growth stress has not yet been investigated in the RW of ginkgo, and thus this tissue resulting from negative gravitropism still awaits closer consideration. The present study intended to fill this gap. It has been demonstrated that ginkgo indeed forms RW tissue on the lower side of the inclined stem, where compressive growth stress (CGS) is generated. In the RW, the microfibril angle in the S2 layer, the air-dried density, and the lignin content increased, whereas the cellulose content decreased. These data are quite similar to those of conifer CWs. Multiple linear regression analysis revealed that the CGS is significantly correlated with the changes in the aforementioned parameters. It can be safely concluded that the negative gravitropism of ginkgo is very similar to that of conifers.
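The multiple linear regression reported above can be sketched as follows; the predictor set (microfibril angle, air-dried density, lignin and cellulose contents) follows the abstract, while the numerical values are invented placeholders.

# Ordinary least-squares fit of CGS on the four wood parameters.
import numpy as np

# Columns: MFA (deg), air-dried density (g/cm3), lignin (%), cellulose (%)
X = np.array([[12.0, 0.52, 28.0, 45.0],
              [18.0, 0.55, 30.0, 43.0],
              [22.0, 0.57, 32.0, 41.0],
              [25.0, 0.58, 33.0, 40.0],
              [30.0, 0.61, 35.0, 38.0],
              [35.0, 0.63, 37.0, 36.0]])
cgs = np.array([-2.0, -4.0, -5.5, -6.5, -9.0, -11.0])   # MPa (compressive)

A = np.column_stack([np.ones(len(X)), X])                # add intercept column
coef, *_ = np.linalg.lstsq(A, cgs, rcond=None)
print(dict(zip(["intercept", "MFA", "density", "lignin", "cellulose"], coef)))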
APA, Harvard, Vancouver, ISO, and other styles
39

Wu, Daoxiang, Lei Ye, Huahong Zhao, Leilei Wu, Jiacheng Guo, and Bin Ji. "Study on Correlation of Physical and Mechanical Properties Indexes of Cohesive Soil in Hilly and Plain Region along the Yangtze River in Anhui Province." Journal of Physics: Conference Series 2148, no. 1 (January 1, 2022): 012019. http://dx.doi.org/10.1088/1742-6596/2148/1/012019.

Full text
Abstract:
In this paper, a large number of geotechnical engineering survey data were collected in the hilly and plain region along the Yangtze River in Anhui Province. Based on statistical analysis and calculation of the experimental data on the physical and mechanical properties of cohesive soils (the main Quaternary soil layers in the area), the correlations between the liquidity index and water content on the one hand and other physical and mechanical property indexes on the other are analyzed, and fitting regressions are carried out for each. The results show that the liquidity index (IL) and water content (w) are highly correlated with cohesion (C), compression modulus (Es), compression coefficient (α), natural density (ρ), and void ratio (e), and the regression equations have high goodness of fit. In addition, the fitted regression equations relating water content, void ratio, and natural density are compared with the theoretical calculation formula; the calculated results are close, which shows that the fitted regression equations are reliable and can be used in engineering practice.
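The theoretical formula the fitted equations are compared against is presumably the standard soil phase relation linking natural density, water content, and void ratio; in conventional notation (G_s the specific gravity of solids, ρ_w the density of water), and stated here as an assumption:

\[
\rho = \frac{G_s\,(1+w)\,\rho_w}{1+e}
\quad\Longrightarrow\quad
e = \frac{G_s\,(1+w)\,\rho_w}{\rho} - 1.
\]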
APA, Harvard, Vancouver, ISO, and other styles
40

Fokoué, Ernest. "A Taxonomy of Big Data for Optimal Predictive Machine Learning and Data Mining." Serdica Journal of Computing 8, no. 2 (April 7, 2015): 111–36. http://dx.doi.org/10.55630/sjc.2014.8.111-136.

Full text
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls within in the bigness taxonomy. Large p small n data sets for instance require a different set of tools from the large n small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity in the sense of Ockham’s razor non-plurality principle of parsimony tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
APA, Harvard, Vancouver, ISO, and other styles
41

Jia, Shanpo, Caoxuan Wen, Xiaofei Fu, Tuanhui Liu, and Zengqiang Xi. "A Caprock Evaluation Methodology for Underground Gas Storage in a Deep Depleted Gas Reservoir: A Case Study for the X9 Lithologic Trap of Langgu Sag, Bohai Bay Basin, China." Energies 15, no. 12 (June 14, 2022): 4351. http://dx.doi.org/10.3390/en15124351.

Full text
Abstract:
The evaluation of caprocks’ sealing capacity is exceedingly important for depleted gas reservoirs that are to be converted into gas storage. In this paper, based on the physical sealing mechanism of caprock, ten caprock quality evaluation indexes covering four aspects were first selected, and the related classification standards were established. Secondly, based on the rock mechanical sealing mechanism, elastic and plastic indexes were selected to characterize the mechanical brittleness of caprock, and a brittleness evaluation method for caprock based on complete stress-strain curves was established. Then, a systematic comprehensive evaluation model (covering 5 aspects and 12 evaluation indexes) for the sealing capacity of gas storage caprock was proposed; the analytic hierarchy process (AHP) was used to determine the weights of the 12 indexes in the evaluation model, and a formula for calculating the suitability of the caprock sealing capacity was established. Finally, geological data and laboratory and field test data, including X-ray diffraction, poro-permeability tests, displacement pressure, and tri-axial compression tests, were used to evaluate the caprock sealing capacity of the X9 depleted gas reservoir, and the result from this model showed that the caprock quality is suitable for underground gas storage.
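The AHP weighting step mentioned above can be sketched as follows: the weights are the normalized principal eigenvector of a pairwise-comparison matrix. The 3×3 matrix below is a toy example, not the paper's 12-index comparison.

# Analytic hierarchy process: weights from the principal eigenvector of a
# pairwise-comparison matrix, plus the consistency index.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                        # normalized index weights
CI = (eigvals[k].real - len(A)) / (len(A) - 1)      # consistency index
print(w, CI)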
APA, Harvard, Vancouver, ISO, and other styles
42

Ho Tong Minh, Dinh, and Yen-Nhi Ngo. "Compressed SAR Interferometry in the Big Data Era." Remote Sensing 14, no. 2 (January 14, 2022): 390. http://dx.doi.org/10.3390/rs14020390.

Full text
Abstract:
Modern Synthetic Aperture Radar (SAR) missions provide unprecedented, massive interferometric SAR (InSAR) time series. Processing this Big InSAR Data is challenging for long-term monitoring. Indeed, as most deformation phenomena develop slowly, the processing strategy can operate on reduced-volume data sets. This paper introduces a novel ComSAR algorithm based on a compression technique that reduces computational effort while maintaining robust performance. The algorithm divides the massive data into many mini-stacks and then compresses them. The compressed estimator is close to the theoretical Cramér–Rao lower bound under a realistic C-band Sentinel-1 decorrelation scenario. Both persistent and distributed scatterers (PSDS) are exploited in the ComSAR algorithm. The ComSAR performance is validated via simulation and by application to Sentinel-1 data to map land subsidence of the Vauvert salt mine area, France. The proposed ComSAR yields consistently better performance when compared with the state-of-the-art PSDS technique. We release our PSDS and ComSAR algorithms as an open-source TomoSAR package. To make it more practical, we exploit other open-source projects so that people can apply our PSDS and ComSAR methods in an end-to-end processing chain. To our knowledge, TomoSAR is the first public-domain tool available to jointly handle PS and DS targets.
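A rough sketch of the mini-stack compression idea is shown below: each mini-stack of SLC acquisitions is summarized by the dominant eigenvector of its sample covariance matrix, which then stands in for the original images. This is a generic PCA-style phase-linking illustration under that assumption, not the exact ComSAR estimator.

# Compress one mini-stack of complex SLC samples into a single virtual image.
import numpy as np

def compress_ministack(slc_pixels):
    """slc_pixels: complex array (n_images, n_samples) for one mini-stack."""
    # Sample covariance/coherence matrix across acquisitions
    C = slc_pixels @ slc_pixels.conj().T / slc_pixels.shape[1]
    eigvals, eigvecs = np.linalg.eigh(C)
    v = eigvecs[:, -1]                        # dominant eigenvector
    # Virtual compressed image: projection of the stack onto v
    return v.conj() @ slc_pixels

rng = np.random.default_rng(0)
stack = rng.normal(size=(10, 200)) + 1j * rng.normal(size=(10, 200))
print(compress_ministack(stack).shape)        # one compressed complex image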
APA, Harvard, Vancouver, ISO, and other styles
43

Han, Jianmin. "Texture Image Compression Algorithm Based on Self-Organizing Neural Network." Computational Intelligence and Neuroscience 2022 (April 10, 2022): 1–10. http://dx.doi.org/10.1155/2022/4865808.

Full text
Abstract:
With the rapid development of science and technology, human beings have gradually stepped into a brand-new digital era. Virtual reality technology has brought people an immersive experience. In order to give users a better virtual reality experience, the pictures produced by virtual reality must be realistic enough and support real-time interaction. Interactive real-time photorealistic rendering has therefore become a focus of research. Texture mapping is a technique proposed to resolve the contradiction between real-time performance and realism, and it has been widely studied and used since it was proposed. However, limited bandwidth and memory make it difficult to handle many large texture images, so texture compression is introduced. Texture compression not only improves cache utilization but also greatly reduces the data-transmission pressure on the system, which largely solves the problem of real-time rendering of realistic graphics. Due to the particularities of texture image compression, it is necessary to consider not only the compression ratio and the quality of the texture image after decompression but also whether the algorithm is compatible with mainstream graphics cards. On this basis, we put forward a texture image compression method based on self-organizing maps. The experimental results show that our method achieves good results and is superior to other methods on most performance indexes.
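A minimal sketch of SOM-based codebook learning for texture-block vector quantization follows; the block size, grid size, and learning schedule are assumptions for illustration rather than the paper's settings.

# Train a small self-organizing map as a codebook for 4x4 texture blocks,
# then store the image as codeword indexes plus the codebook.
import numpy as np

def train_som(blocks, grid=8, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = blocks.shape[1]
    codebook = rng.uniform(0, 255, size=(grid * grid, dim))
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing="ij"), axis=-1).reshape(-1, 2)
    for t in range(iters):
        x = blocks[rng.integers(len(blocks))]
        bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))     # best-matching unit
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 1e-3
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))                     # neighborhood function
        codebook += lr * h[:, None] * (x - codebook)
    return codebook                                            # (grid*grid, dim)

def encode(blocks, codebook):
    # Index of the nearest codeword for every block
    d = ((blocks[:, None, :] - codebook[None]) ** 2).sum(axis=2)
    return np.argmin(d, axis=1).astype(np.uint8)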
APA, Harvard, Vancouver, ISO, and other styles
44

Glaysher, Michael A., Alma L. Moekotte, and Jamie Kelly. "Endoscopic sleeve gastroplasty: a modified technique with greater curvature compression sutures." Endoscopy International Open 07, no. 10 (October 2019): E1303—E1309. http://dx.doi.org/10.1055/a-0996-8089.

Full text
Abstract:
Abstract Background Endoscopic sleeve gastroplasty (ESG) is rapidly becoming established as a safe and effective means of achieving substantial weight loss via the transoral route. New ESG suture patterns are emerging. Our aim was to investigate whether superior weight loss outcomes can be achieved by using a unique combination of longitudinal compression sutures and “U”-shaped sutures. Methods This is a retrospective review of prospectively collected data of all patients undergoing ESG by a single operator in a single UK center. Results Between January 2016 and December 2017, 32 patients (23 female) underwent ESG; n = 9 cases were completed utilizing a commonly used triangular suture pattern (“no longitudinal compression”) and n = 23 cases were completed using our unique “longitudinal compression” suture pattern. In the no compression and compression groups, the mean ages were 45 ± 12 years and 43 ± 10 years, the median baseline weights were 113.6 kg (range 82.0 – 156.4) and 107 kg (range 74.0 – 136.0), and the median baseline body mass indexes (BMIs) were 35.9 kg/m2 (range 30.9 – 43.8) and 36.5 kg/m2 (range 29.8 – 42.9), respectively. After 6 months, body weight had decreased by 21.1 kg (range, 12.2 – 34.0) in the compression group (n = 7) versus 10.8 kg (range, 7.0 – 25.8) in the no compression group (n = 5) (P = 0.042). Correspondingly, BMI decreased by 7.8 kg/m2 (range, 4.9 – 11.2) and 4.1 kg/m2 (range, 2.6 – 7.2) in each group, respectively (P = 0.019). Total body weight loss (%TBWL) was greater in the compression group at 19.5 % (range, 12.9 – 30.4 %) compared to 13.2 % (range, 6.2 – 17.1 %) in the non-compression group (P = 0.042). No significant adverse events were reported in this series. Conclusion The technique of ESG is evolving and outcomes from endoscopic bariatric therapies continue to improve. We provide preliminary evidence of superior weight loss achieved through a modified gastroplasty suture pattern.
APA, Harvard, Vancouver, ISO, and other styles
45

Drijkoningen, Guy, Nihed el Allouche, Jan Thorbecke, and Gábor Bada. "Nongeometrically converted shear waves in marine streamer data." GEOPHYSICS 77, no. 6 (November 1, 2012): P45—P56. http://dx.doi.org/10.1190/geo2012-0037.1.

Full text
Abstract:
Under certain circumstances, marine streamer data contain nongeometrical shear body wave arrivals that can be used for imaging. These shear waves are generated via an evanescent compressional wave in the water and convert to propagating shear waves at the water bottom. They are called “nongeometrical” because the evanescent part in the water does not satisfy Snell’s law for real angles, but only for complex angles. The propagating shear waves then undergo reflection and refraction in the subsurface, and arrive at the receivers via an evanescent compressional wave. The required circumstances are that sources and receivers are near the water bottom, irrespective of the total water depth, and that the shear-wave velocity of the water bottom is smaller than the P-wave velocity in the water, most often the normal situation. This claim has been tested during a seismic experiment in the river Danube, south of Budapest, Hungary. To show that the shear-related arrivals are body rather than surface waves, a borehole was drilled and used for multicomponent recordings. The streamer data indeed show evidence of shear waves propagating as body waves, and the borehole data confirm that these arrivals are refracted shear waves. To illustrate the effect, finite-difference modeling has been performed and it confirmed the presence of such shear waves. The streamer data were subsequently processed to obtain a shear-wave refraction section; this was obtained by removing the Scholte wave arrival, separating the wavefield into different refracted arrivals, stacking and depth-converting each refracted arrival before adding the different depth sections together. The obtained section can be compared directly with the standard P-wave reflection section. The comparison shows that this approach can deliver refracted-shear-wave sections from streamer data in an efficient manner, because neither the source nor receivers need to be situated on the water bottom.
APA, Harvard, Vancouver, ISO, and other styles
46

Martini, Maria G., and Chaminda T. E. R. Hewage. "Flexible Macroblock Ordering for Context-Aware Ultrasound Video Transmission over Mobile WiMAX." International Journal of Telemedicine and Applications 2010 (2010): 1–14. http://dx.doi.org/10.1155/2010/127519.

Full text
Abstract:
The most recent network technologies are enabling a variety of new applications, thanks to the provision of increased bandwidth and better management of Quality of Service. Nevertheless, telemedical services involving multimedia data are still lagging behind, due to the concern of the end users, that is, clinicians and also patients, about the low quality provided. Indeed, emerging network technologies should be appropriately exploited by designing the transmission strategy focusing on quality provision for end users. Stemming from this principle, we propose here a context-aware transmission strategy for medical video transmission over WiMAX systems. Context, in terms of regions of interest (ROI) in a specific session, is taken into account for the identification of multiple regions of interest, and compression/transmission strategies are tailored to such context information. We present a methodology based on H.264 medical video compression and Flexible Macroblock Ordering (FMO) for ROI identification. Two different unequal error protection methodologies, providing higher protection to the most diagnostically relevant data, are presented.
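A small sketch of the ROI-to-slice-group mapping idea (in the spirit of an explicit, type-6 FMO map) is given below; the two-group policy and the function name fmo_map_from_roi are illustrative assumptions, not the codec's API.

# Build a macroblock-to-slice-group map from a region-of-interest mask:
# ROI macroblocks go to slice group 0 (to receive stronger protection),
# background macroblocks to group 1.
import numpy as np

def fmo_map_from_roi(roi_mask, mb_size=16):
    """roi_mask: boolean array (height, width); returns one group id per MB."""
    h, w = roi_mask.shape
    mbs_y, mbs_x = h // mb_size, w // mb_size
    groups = np.empty((mbs_y, mbs_x), dtype=np.uint8)
    for y in range(mbs_y):
        for x in range(mbs_x):
            mb = roi_mask[y*mb_size:(y+1)*mb_size, x*mb_size:(x+1)*mb_size]
            groups[y, x] = 0 if mb.any() else 1
    return groups.ravel()            # raster-scan slice-group map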
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Yu-shan, Jian-yong Pang, and Wei-jing Yao. "Effects of High Temperature on Creep Behaviour of Glazed Hollow Bead Insulation Concrete." Materials 13, no. 17 (August 19, 2020): 3658. http://dx.doi.org/10.3390/ma13173658.

Full text
Abstract:
Glazed hollow bead insulation concrete (GHBC) is a promising material owing to its light weight and superior fire resistance. However, only a few studies have focused on the creep behaviour of GHBC exposed to high temperatures. Therefore, in this study, the influence of high temperature on GHBC is analysed through a series of uniaxial compression and multistage creep tests on GHBC exposed to temperatures from room temperature up to 800 °C. The results show a decrease in the weight and compressive strength of GHBC as the temperature rises; after 800 °C, the losses of weight and strength reach 9.67% and 69.84%, respectively. The creep strain and creep rate increase with higher target temperature and higher stress level, while the transient deformation modulus, the creep failure threshold stress, and the creep duration are reduced significantly. Furthermore, the creep of GHBC exhibits a considerable increase above 600 °C, and the creep under the same loading ratio at 600 °C increases by 74.19% compared to the creep at room temperature. Indeed, the higher the temperature, the more sensitive the creep is to stress. Based on our findings, the Burgers model agrees well with the creep test data in the primary and steady-state creep stages, providing a useful reference for fire resistance design calculations of GHBC structures.
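For reference, the Burgers (four-element) creep law used for the fitting has the standard form below under constant stress σ_0, with Maxwell parameters E_1, η_1 and Kelvin parameters E_2, η_2 (the symbols are the conventional ones, not taken from the paper):

\[
\varepsilon(t) = \frac{\sigma_0}{E_1} + \frac{\sigma_0}{\eta_1}\, t
+ \frac{\sigma_0}{E_2}\left(1 - e^{-E_2 t/\eta_2}\right).
\]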
APA, Harvard, Vancouver, ISO, and other styles
48

Meehan, Richard R., and S. J. Burns. "Modeling Cutting: Plastic Deformation of Polymer Samples Indented With a Wedge." Journal of Manufacturing Science and Engineering 129, no. 3 (November 1, 2006): 477–84. http://dx.doi.org/10.1115/1.2716716.

Full text
Abstract:
An experiment was designed to relate force to plastic deformation caused by a wedge indenting the edge surface of a polymer sample. The experiment reveals the primary phenomena observed in industrial converting processes of cutting and slitting of thin polymer films. The thin film was modeled using a polycarbonate rectangular block, which was indented with a metallic half-round wedge that represents the industrial cutter blades. The wedge radius and sample size were selected to scale to the ratio of slitting blade radius and industrial film thickness. A compression test frame impressed wedges into polymer samples with measurements of both force and displacement recorded. These experiments clearly revealed the shape of the plastic deformation zone ahead of and around the wedges. Data from the experiments showed increasing cutting force with wedge displacement until the sample fractured. Plastic deformation of the samples was examined: the out-of-plane plastic volume was shown to equal the volume displaced by the wedge. Cracks that developed from the side of the wedge tip during indention propagated with near steady-state loads under edge surface indention.
APA, Harvard, Vancouver, ISO, and other styles
49

Kortas, Manel, Oussama Habachi, Ammar Bouallegue, Vahid Meghdadi, Tahar Ezzedine, and Jean-Pierre Cances. "Robust Data Recovery in Wireless Sensor Network: A Learning-Based Matrix Completion Framework." Sensors 21, no. 3 (February 2, 2021): 1016. http://dx.doi.org/10.3390/s21031016.

Full text
Abstract:
In this paper, we are interested in data gathering for Wireless Sensor Networks (WSNs). In this context, we assume that only some nodes are active in the network and that these nodes do not transmit all the time, while the inactive nodes are considered nonexistent or idle for a long period. Hence, the sink should be able to recover the entire data matrix from the few received measurements. To this end, we propose a novel technique based on the Matrix Completion (MC) methodology. Indeed, the considered compression pattern, which is composed of structured and random losses, cannot be handled by existing MC techniques: when the received reading matrix contains several missing rows, corresponding to the inactive nodes, MC techniques are unable to recover the missing data. Thus, we propose a clustering technique that takes the inter-node correlation into account, and we present a complementary minimization-problem-based interpolation technique that guarantees the recovery of the inactive nodes’ readings. The proposed reconstruction pattern, combined with the sampling one, is evaluated through extensive simulations. The results confirm the validity of each building block and the efficiency of the whole structured approach, and they show that it outperforms the closest scheme.
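The matrix-completion building block the paper starts from can be sketched with a generic iterative SVD soft-thresholding (SoftImpute-style) routine; the clustering and interpolation steps that recover fully missing rows are not reproduced here, and the threshold and iteration count are arbitrary.

# Generic low-rank matrix completion by iterative singular-value soft-thresholding.
import numpy as np

def soft_impute(M, mask, lam=1.0, iters=100):
    """M: data matrix (arbitrary values where mask == False); mask: observed entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - lam, 0.0)             # soft-threshold singular values
        Z = (U * s) @ Vt                          # low-rank estimate
        X = np.where(mask, M, Z)                  # keep observed entries fixed
    return Z

rng = np.random.default_rng(1)
truth = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 20))   # rank-4 matrix
mask = rng.random(truth.shape) < 0.5                          # 50% observed
print(np.abs(soft_impute(truth, mask) - truth).mean())        # mean reconstruction error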
APA, Harvard, Vancouver, ISO, and other styles
50

Richly, Keven, Rainer Schlosser, and Martin Boissier. "Budget-Conscious Fine-Grained Configuration Optimization for Spatio-Temporal Applications." Proceedings of the VLDB Endowment 15, no. 13 (September 2022): 4079–92. http://dx.doi.org/10.14778/3565838.3565858.

Full text
Abstract:
Based on the performance requirements of modern spatio-temporal data mining applications, in-memory database systems are often used to store and process the data. To efficiently utilize scarce DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional indexes). However, selecting configurations that balance cost and performance is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions. In this paper, we introduce a novel approach to jointly optimize the compression, sorting, indexing, and tiering configuration for spatio-temporal workloads. Further, we consider horizontal data partitioning, which enables the independent application of different tuning options at a fine-grained level. We propose different linear programming (LP) models that address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload and memory budget. To yield maintainable and robust configurations, we extend our LP-based approach to incorporate reconfiguration costs as well as a worst-case optimization for potential workload scenarios. Further, we demonstrate on a real-world dataset that our models make it possible to significantly reduce the memory footprint at equal performance or to increase performance at equal memory size compared to existing tuning heuristics.
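A toy 0/1 linear program in the spirit of the configuration models described above is sketched with PuLP: pick one (compression, indexing, tiering) option per chunk so that estimated workload cost is minimized under a memory budget. The option names, costs, and sizes are invented placeholders, not the paper's formulation.

# Budget-constrained selection of per-chunk configurations as a binary LP.
import pulp

chunks = ["c1", "c2"]
options = {                       # option -> (memory in MB, estimated workload cost)
    "dict_indexed": (120, 10),
    "dict_noindex": (80, 25),
    "lz4_noindex":  (40, 60),
}
budget_mb = 180

prob = pulp.LpProblem("chunk_config", pulp.LpMinimize)
x = {(c, o): pulp.LpVariable(f"x_{c}_{o}", cat="Binary")
     for c in chunks for o in options}

# Objective: total estimated workload cost
prob += pulp.lpSum(options[o][1] * x[c, o] for c in chunks for o in options)
# Exactly one configuration per chunk
for c in chunks:
    prob += pulp.lpSum(x[c, o] for o in options) == 1
# Global memory budget
prob += pulp.lpSum(options[o][0] * x[c, o]
                   for c in chunks for o in options) <= budget_mb

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({c: o for (c, o), var in x.items() if var.value() == 1})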
APA, Harvard, Vancouver, ISO, and other styles