To see the other types of publications on this topic, follow the link: Data compression.

Journal articles on the topic 'Data compression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Data compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Shevchuk, Yury Vladimirovich. "Memory-efficient sensor data compression." Program Systems: Theory and Applications 13, no. 2 (April 4, 2022): 35–63. http://dx.doi.org/10.25209/2079-3316-2022-13-2-35-63.

Full text
Abstract:
We treat scalar data compression in sensor network nodes in streaming mode (compressing data points as they arrive, with no pre-compression buffering). Several experimental algorithms based on linear predictive coding (LPC) combined with run-length encoding (RLE) are considered. In the entropy coding stage we evaluated (a) variable-length coding with dynamic prefixes generated by an MTF transform, (b) adaptive-width binary coding, and (c) adaptive Golomb-Rice coding. We compare known and experimental compression algorithms on 75 sensor data sources. Compression ratios achieved in the tests are about 1.5/4/1000000 (min/med/max), with a compression context size of about 10 bytes.
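As a rough illustration of stage (c), Golomb-Rice coding pairs a predictor with Rice codes whose divisor 2**k is tuned to the residual magnitude. A minimal Python sketch, not the paper's implementation: the order-1 delta predictor and the fixed k are simplifying assumptions (the paper's coder adapts k on the fly).

```python
def zigzag(n: int) -> int:
    """Map signed residuals to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(value: int, k: int) -> str:
    """Rice code: unary quotient (divisor 2**k), then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, f"0{k}b")
    return bits

def compress_stream(samples, k=2):
    """Order-1 prediction (delta) followed by Rice coding, point by point."""
    out, prev = [], 0
    for s in samples:
        out.append(rice_encode(zigzag(s - prev), k))
        prev = s
    return "".join(out)

samples = [100, 101, 103, 103, 102, 104, 104, 105]
bits = compress_stream(samples)
print(len(bits), "bits vs", 16 * len(samples), "bits raw")
```

The first sample costs a long unary run (no useful prediction yet); every later sample costs only a few bits, which is where the streaming gains come from.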
APA, Harvard, Vancouver, ISO, and other styles
2

Saidhbi, Sheik. "An Intelligent Multimedia Data Encryption and Compression and Secure Data Transmission of Public Cloud." Asian Journal of Engineering and Applied Technology 8, no. 2 (May 5, 2019): 37–40. http://dx.doi.org/10.51983/ajeat-2019.8.2.1141.

Full text
Abstract:
Data compression reduces the size of a data file so that it takes less disk space to store. How well a file compresses depends on its encoding. A lossless data compression algorithm loses no data while compressing a file, so confidential data can be reproduced exactly if it is compressed losslessly. Compression reduces redundancy, and an encrypted compressed file offers better security and a faster transfer rate across the network than encrypting and transferring the uncompressed file. Most health-related computer applications are not secure, yet they exchange a lot of confidential health data in different file formats such as HL7, DICOM images, and other audio, image, text, and video formats. Such confidential data needs to be transmitted securely and stored efficiently. This paper therefore proposes a learning compression-encryption model for identifying the files that should be compressed before encrypting and the files that should be encrypted without compressing them.
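The compress-then-encrypt ordering the abstract argues for (encryption output is incompressible, so compression must come first) can be sketched with the standard library. The XOR keystream below is a toy stand-in labeled as such; a real deployment would use a vetted cipher such as AES-GCM, which the stdlib does not provide.

```python
import hashlib
import zlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Illustrative only, NOT a vetted cipher."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def compress_then_encrypt(data: bytes, key: bytes) -> bytes:
    compressed = zlib.compress(data, 9)          # remove redundancy first
    ks = keystream(key, len(compressed))
    return bytes(a ^ b for a, b in zip(compressed, ks))

def decrypt_then_decompress(blob: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(blob))
    return zlib.decompress(bytes(a ^ b for a, b in zip(blob, ks)))

record = b"HL7|patient|confidential record " * 200
blob = compress_then_encrypt(record, b"shared-secret")
assert decrypt_then_decompress(blob, b"shared-secret") == record
print(len(record), "bytes ->", len(blob), "bytes compressed+encrypted")
```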
APA, Harvard, Vancouver, ISO, and other styles
3

Nithya, P., T. Vengattaraman, and M. Sathya. "Survey On Parameters of Data Compression." REST Journal on Data Analytics and Artificial Intelligence 2, no. 1 (March 1, 2023): 1–7. http://dx.doi.org/10.46632/jdaai/2/1/1.

Full text
Abstract:
The rapid development of hardware and software gives rise to data growth. This growth has numerous impacts, including the need for larger storage capacity for storing and transmitting data. Data compression is needed in today's world because it helps minimize the storage space required to store and transmit data. Performance measures in data compression are used to evaluate the efficiency and effectiveness of compression algorithms, and numerous algorithms have been developed in this internet era to reduce data storage and increase transmission speed. This survey analyses how data compression performance is measured for text, image, audio, and video compression, and discusses the important data compression parameters for each data type.
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Xinyu, Jiannan Tian, Ian Beaver, Cynthia Freeman, Yan Yan, Jianguo Wang, and Dingwen Tao. "FCBench: Cross-Domain Benchmarking of Lossless Compression for Floating-Point Data." Proceedings of the VLDB Endowment 17, no. 6 (February 2024): 1418–31. http://dx.doi.org/10.14778/3648160.3648180.

Full text
Abstract:
While both the database and high-performance computing (HPC) communities utilize lossless compression methods to minimize floating-point data size, a disconnect persists between them. Each community designs and assesses methods in a domain-specific manner, making it unclear if HPC compression techniques can benefit database applications or vice versa. With the HPC community increasingly leaning towards in-situ analysis and visualization, more floating-point data from scientific simulations are being stored in databases like Key-Value Stores and queried using in-memory retrieval paradigms. This trend underscores the urgent need for a collective study of these compression methods' strengths and limitations, not only based on their performance in compressing data from various domains but also on their runtime characteristics. Our study extensively evaluates the performance of eight CPU-based and five GPU-based compression methods developed by both communities, using 33 real-world datasets assembled in the Floating-point Compressor Benchmark (FCBench). Additionally, we utilize the roofline model to profile their runtime bottlenecks. Our goal is to offer insights into these compression methods that could assist researchers in selecting existing methods or developing new ones for integrated database and HPC applications.
APA, Harvard, Vancouver, ISO, and other styles
5

Ryabko, Boris. "Time-Universal Data Compression." Algorithms 12, no. 6 (May 29, 2019): 116. http://dx.doi.org/10.3390/a12060116.

Full text
Abstract:
Nowadays, a variety of data compressors (or archivers) is available, each of which has its merits, and it is impossible to single out the best one. Thus, one faces the problem of choosing the best method to compress a given file, and this problem becomes more important the larger the file is. It seems natural to try all the compressors and then choose the one that gives the shortest compressed file, then transfer (or store) the index number of the best compressor (this requires log m bits, if m is the number of compressors available) and the compressed file. The only problem is the time, which increases substantially due to the need to compress the file m times (in order to find the best compressor). We suggest a method of data compression whose performance is close to optimal, but for which the extra time needed is relatively small: the ratio of this extra time to the total calculation time can be limited, asymptotically, by an arbitrary positive constant. In short, the main idea of the suggested approach is as follows: in order to find the best compressor, try them all, but use only a small part of the file when doing so. Then apply the best data compressor to the whole file. Note that there are many situations where it may be necessary to find the best data compressor out of a given set. In such cases, it is often done by comparing compressors empirically. One of the goals of this work is to turn such a selection process into a part of the data compression method, automating and optimizing it.
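The probe-then-commit idea can be sketched with Python's standard-library compressors; the 5% probe fraction below is an arbitrary illustrative choice, not the paper's schedule (the paper bounds the probe cost asymptotically rather than fixing a fraction).

```python
import bz2
import lzma
import zlib

COMPRESSORS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def pick_and_compress(data: bytes, probe_frac: float = 0.05):
    """Probe every compressor on a small prefix; run only the winner on the whole file."""
    probe = data[: max(1, int(len(data) * probe_frac))]
    best = min(COMPRESSORS, key=lambda name: len(COMPRESSORS[name](probe)))
    return best, COMPRESSORS[best](data)

data = b"sensor,timestamp,value\n" * 20000
best, out = pick_and_compress(data)
print(best, ":", len(data), "->", len(out), "bytes")
```

In a full implementation one would also store the winner's index (log m bits) next to the compressed file so the decoder knows which decompressor to apply.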
APA, Harvard, Vancouver, ISO, and other styles
6

McGeoch, Catherine C. "Data Compression." American Mathematical Monthly 100, no. 5 (May 1993): 493. http://dx.doi.org/10.2307/2324310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Helman, D. R., and G. G. Langdon. "Data compression." IEEE Potentials 7, no. 1 (February 1988): 25–28. http://dx.doi.org/10.1109/45.1889.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lelewer, Debra A., and Daniel S. Hirschberg. "Data compression." ACM Computing Surveys 19, no. 3 (September 1987): 261–96. http://dx.doi.org/10.1145/45072.45074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

McGeoch, Catherine C. "Data Compression." American Mathematical Monthly 100, no. 5 (May 1993): 493–97. http://dx.doi.org/10.1080/00029890.1993.11990441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bookstein, Abraham, and James A. Storer. "Data compression." Information Processing & Management 28, no. 6 (November 1992): 675–80. http://dx.doi.org/10.1016/0306-4573(92)90060-d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Mishra, Amit Kumar. "Versatile Video Coding (VVC) Standard: Overview and Applications." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (September 10, 2019): 975–81. http://dx.doi.org/10.17762/turcomat.v10i2.13578.

Full text
Abstract:
Information security includes picture and video compression and encryption, since compressed data is more secure than uncompressed imagery. Another point is that data of smaller size is simpler to handle; efficient, secure, and simple data transport is therefore enabled by effective data compression technology. There are two sorts of compression techniques, lossy and lossless, and they can be applied to any data format, including text, audio, video, and picture files. In this procedure, the Least Significant Bit technique is used to encrypt each frame of the video file in order to increase security. The primary goals are to safeguard the data by encrypting the frames and compressing the video file. Using PSNR to evaluate the process would also enhance data transmission security while reducing data loss.
APA, Harvard, Vancouver, ISO, and other styles
12

Ko, Yousun, Alex Chadwick, Daniel Bates, and Robert Mullins. "Lane Compression." ACM Transactions on Embedded Computing Systems 20, no. 2 (March 2021): 1–26. http://dx.doi.org/10.1145/3431815.

Full text
Abstract:
This article presents Lane Compression, a lightweight lossless compression technique for machine learning that is based on a detailed study of the statistical properties of machine learning data. The proposed technique profiles machine learning data gathered ahead of run-time and partitions values bit-wise into different lanes with more distinctive statistical characteristics. The most appropriate compression technique is then chosen for each lane out of a small number of low-cost compression techniques. Lane Compression's compute and memory requirements are very low, and yet it achieves a compression rate comparable to or better than Huffman coding. We evaluate and analyse Lane Compression on a wide range of machine learning networks for both inference and re-training. We also demonstrate that profiling prior to run-time and the ability to configure the hardware based on the profiling guarantee robust performance across different models and datasets. Hardware implementations are described, and the scheme's simplicity makes it suitable for compressing both on-chip and off-chip traffic.
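The lane idea, splitting words byte-wise so each lane has more uniform statistics, can be approximated with standard-library tools. This is only an analogy: Lane Compression picks its own low-cost coder per lane, whereas zlib here is a generic stand-in, and the example data is synthetic.

```python
import struct
import zlib

def lane_split(words):
    """Partition 32-bit words byte-wise into four lanes with more uniform statistics."""
    raw = b"".join(struct.pack("<I", w) for w in words)
    return raw, [raw[i::4] for i in range(4)]

# Float-like words: low bytes vary quickly, high bytes are nearly constant.
words = [0x3F800000 + i for i in range(10000)]
raw, lanes = lane_split(words)
per_lane = sum(len(zlib.compress(lane)) for lane in lanes)
whole = len(zlib.compress(raw))
print("per-lane total:", per_lane, "vs whole stream:", whole, "vs raw:", len(raw))
```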
APA, Harvard, Vancouver, ISO, and other styles
13

Kuncoro, Adam Prayogo, Dinar Mustofa, Dwi Krisbiantoro, and Tarwoto Tarwoto. "DIGITAL DATA SECURITY WITH APPLICATION OF CRYPTOGRAPHY AND DATA COMPRESSION TECHNIQUES." Jurnal Teknik Informatika (Jutif) 4, no. 5 (October 3, 2023): 995–99. http://dx.doi.org/10.52436/1.jutif.2023.4.5.659.

Full text
Abstract:
Digital data security is needed to ensure that the data and information we hold are confidential, can only be accessed by authorized users, and cannot be changed by anyone else, thus ensuring complete accuracy. The functions of data security are confidentiality, authentication, integrity, and non-repudiation. Compression techniques are used alongside digital data protection because they require less storage space and allow more data to be transferred over the internet. This study aims to demonstrate the application of a combination of two techniques, compression and cryptography, to digital data in order to increase its security. The results show that the Huffman method is the most effective at compressing digital data into the smallest file size compared with the other compression methods tested: it can reduce the data size by around 30% to 40% relative to the original. Coupled with cryptographic encryption, files remain safe when transferred over the network.
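For reference, Huffman coding assigns shorter bit patterns to more frequent symbols. A compact table builder, illustrative rather than the paper's implementation (and assuming at least two distinct byte values in the input):

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a prefix-free Huffman code table from byte frequencies."""
    heap = [[w, i, {sym: ""}] for i, (sym, w) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)                      # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]

text = b"confidential health record, to be compressed then encrypted " * 50
codes = huffman_codes(text)
bits = sum(len(codes[b]) for b in text)
print(f"Huffman payload is {bits / (8 * len(text)):.0%} of the original size")
```

On English-like text this typically lands in the 50-70% range (i.e., a 30-50% reduction), consistent with the figures the abstract reports.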
APA, Harvard, Vancouver, ISO, and other styles
14

Yang, Le, Zhao Yang Guo, Shan Shan Yong, Feng Guo, and Xin An Wang. "A Hardware Implementation of Real Time Lossless Data Compression and Decompression Circuits." Applied Mechanics and Materials 719-720 (January 2015): 554–60. http://dx.doi.org/10.4028/www.scientific.net/amm.719-720.554.

Full text
Abstract:
This paper presents a hardware implementation of real-time data compression and decompression circuits based on the LZW algorithm. LZW is a dictionary-based data compression algorithm with the advantages of high speed, high compression, and small resource occupation. In the compression circuit, the design creatively uses two dictionaries alternately to improve efficiency and compression rate. In the decompression circuit, an integrated state-machine control module is adopted to save hardware resources. Through hardware description language programming, the circuits pass both function simulation and timing simulation. The data sample width is 12 bits, and the dictionary storage capacity is 1K. The simulation results show that the compression and decompression circuits function correctly. Compared to a software implementation, the hardware implementation saves storage and compression time, giving it high practical value.
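The LZW scheme the circuit implements can be summarized in software. This is the plain single-dictionary textbook version; the paper's alternating dual-dictionary optimization and 12-bit sample width are not modeled here.

```python
def lzw_compress(data: bytes):
    """LZW: grow a dictionary of seen strings, emit dictionary indices."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)   # new entry: longest match + next byte
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    """Rebuild the dictionary on the fly; no table needs to be transmitted."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = dictionary[code] if code in dictionary else w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(data)
assert lzw_decompress(codes) == data
print(len(codes), "codes for", len(data), "input bytes")
```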
APA, Harvard, Vancouver, ISO, and other styles
15

A. Sapate, Suchit. "Effective XML Compressor: XMill with LZMA Data Compression." International Journal of Education and Management Engineering 9, no. 4 (July 8, 2019): 1–10. http://dx.doi.org/10.5815/ijeme.2019.04.01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

P, Srividya. "Optimization of Lossless Compression Algorithms using Multithreading." Journal of Information Technology and Sciences 9, no. 1 (March 2, 2023): 36–42. http://dx.doi.org/10.46610/joits.2022.v09i01.005.

Full text
Abstract:
Compression is the process of reducing the number of bits required to represent data. Its advantages include a reduction in the time taken to transfer data from one point to another and in the cost of storage space and network bandwidth. There are two types of compression algorithms: lossy and lossless. Lossy algorithms are used to compress audio and video signals, whereas lossless algorithms are used to compress text. The advent of the internet and its worldwide usage has increased both the use and the storage of text, audio, and video files. These multimedia files demand more storage space than traditional files, giving rise to the need for efficient compression algorithms. Multi-core processors have considerably improved machine computing performance, yet compression algorithms rarely exploit this multi-core architecture. This paper presents implementations of the lossless Lempel-Ziv-Markov, BZip2, and ZLIB algorithms using multithreading. The results show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress text. The comparison covers compression both with and without multithreading.
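A sketch of the multithreading idea with Python's zlib, which in CPython releases the GIL while compressing, so chunks genuinely run in parallel. Chunking the input is an assumption of this sketch (the paper does not specify its partitioning), and compressing chunks independently sacrifices a little ratio because cross-chunk redundancy is lost.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(data: bytes, n_threads: int = 4, chunk: int = 1 << 16):
    """Split into fixed-size chunks and compress them on a thread pool."""
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(lambda c: zlib.compress(c, 6), chunks))

def decompress_chunks(blobs):
    return b"".join(zlib.decompress(b) for b in blobs)

data = b"the quick brown fox jumps over the lazy dog " * 50000
blobs = compress_chunks(data)
assert decompress_chunks(blobs) == data
print(len(data), "->", sum(map(len, blobs)), "bytes in", len(blobs), "chunks")
```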
APA, Harvard, Vancouver, ISO, and other styles
17

P, Srividya. "Optimization of Lossless Compression Algorithms using Multithreading." Journal of Information Technology and Sciences 9, no. 1 (March 1, 2023): 36–42. http://dx.doi.org/10.46610/joits.2023.v09i01.005.

Full text
Abstract:
Compression is the process of reducing the number of bits required to represent data. Its advantages include a reduction in the time taken to transfer data from one point to another and in the cost of storage space and network bandwidth. There are two types of compression algorithms: lossy and lossless. Lossy algorithms are used to compress audio and video signals, whereas lossless algorithms are used to compress text. The advent of the internet and its worldwide usage has increased both the use and the storage of text, audio, and video files. These multimedia files demand more storage space than traditional files, giving rise to the need for efficient compression algorithms. Multi-core processors have considerably improved machine computing performance, yet compression algorithms rarely exploit this multi-core architecture. This paper presents implementations of the lossless Lempel-Ziv-Markov, BZip2, and ZLIB algorithms using multithreading. The results show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress text. The comparison covers compression both with and without multithreading.
APA, Harvard, Vancouver, ISO, and other styles
18

Zirkind, Givon. "AFIS data compression." ACM SIGSOFT Software Engineering Notes 32, no. 6 (November 2007): 8. http://dx.doi.org/10.1145/1317471.1317480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Zirkind, Givon. "AFIS data compression." ACM SIGGRAPH Computer Graphics 41, no. 4 (November 2007): 1–36. http://dx.doi.org/10.1145/1331098.1331103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

McCluskey, E. J., D. Burek, B. Koenemann, S. Mitra, J. Patel, J. Rajski, and J. Waicukauski. "Test data compression." IEEE Design & Test of Computers 20, no. 2 (March 2003): 76–87. http://dx.doi.org/10.1109/mdt.2003.1188267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Hernaez, Mikel, Dmitri Pavlichin, Tsachy Weissman, and Idoia Ochoa. "Genomic Data Compression." Annual Review of Biomedical Data Science 2, no. 1 (July 20, 2019): 19–37. http://dx.doi.org/10.1146/annurev-biodatasci-072018-021229.

Full text
Abstract:
Recently, there has been growing interest in genome sequencing, driven by advances in sequencing technology, in terms of both efficiency and affordability. These developments have allowed many to envision whole-genome sequencing as an invaluable tool for both personalized medical care and public health. As a result, increasingly large and ubiquitous genomic data sets are being generated. This poses a significant challenge for the storage and transmission of these data. Already, it is more expensive to store genomic data for a decade than it is to obtain the data in the first place. This situation calls for efficient representations of genomic information. In this review, we emphasize the need for designing specialized compressors tailored to genomic data and describe the main solutions already proposed. We also give general guidelines for storing these data and conclude with our thoughts on the future of genomic formats and compressors.
APA, Harvard, Vancouver, ISO, and other styles
22

Mattsson, A. Geo. "DATA ON COMPRESSION." Journal of the American Society for Naval Engineers 13, no. 2 (March 18, 2009): 422. http://dx.doi.org/10.1111/j.1559-3584.1901.tb03391.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

McGillis, Peggy, Mina Nichols, and Britt Terry. "[Data] Compression Theory." EDPACS 25, no. 8 (February 1998): 16. http://dx.doi.org/10.1201/1079/43236.25.8.19980201/30193.9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Berger, Jens, Ulrich Frankenfeld, Volker Lindenstruth, Patrick Plamper, Dieter Röhrich, Erich Schäfer, Markus W. Schulz, et al. "TPC data compression." Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 489, no. 1-3 (August 2002): 406–21. http://dx.doi.org/10.1016/s0168-9002(02)00792-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Farruggia, Andrea, Paolo Ferragina, Antonio Frangioni, and Rossano Venturini. "Bicriteria Data Compression." SIAM Journal on Computing 48, no. 5 (January 2019): 1603–42. http://dx.doi.org/10.1137/17m1121457.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

BRAMBLE, JOHN M., H. K. HUANG, and MARK D. MURPHY. "Image Data Compression." Investigative Radiology 23, no. 10 (October 1988): 707–12. http://dx.doi.org/10.1097/00004424-198810000-00001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Tyrygin, I. Ya. "ε-Entropy data compression." Ukrainian Mathematical Journal 44, no. 11 (November 1992): 1473–79. http://dx.doi.org/10.1007/bf01071523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Goldberg, Mark A. "Image data compression." Journal of Digital Imaging 11, S1 (August 1998): 230–32. http://dx.doi.org/10.1007/bf03168323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Goldberg, Mark A. "Image data compression." Journal of Digital Imaging 10, S1 (August 1997): 9–11. http://dx.doi.org/10.1007/bf03168640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Pandey, Anukul, Barjinder Singh Saini, and Butta Singh. "ELECTROCARDIOGRAM DATA COMPRESSION TECHNIQUES IN 1D/2D DOMAIN." Biomedical Engineering: Applications, Basis and Communications 33, no. 02 (January 9, 2021): 2150011. http://dx.doi.org/10.4015/s1016237221500113.

Full text
Abstract:
The electrocardiogram (ECG) is one of the best representative physiological signals, reflecting the state of the autonomic nervous system, which is primarily responsible for cardiac activity. ECG data compression plays a significant role in localized digital storage and efficient communication channel utilization in telemedicine applications. The compressor efficiency of lossless and lossy compression systems depends on the methodologies used for compression and on the quality measure used to evaluate distortion. Depending on the domain, ECG data compression can be performed in one dimension (1D) or two dimensions (2D), exploiting inter-beat correlation alone or inter-beat together with intra-beat correlation, respectively. In this paper, a comparative study of 1D and 2D ECG data compression methods is drawn from the existing literature to provide an update in this regard. ECG data compression techniques and algorithms in the 1D and 2D domains have their own merits and limitations. Recently, numerous techniques for 1D ECG data compression have been developed, in both the direct and transform domains; 2D ECG data compression research based on period normalization and complexity sorting has also been reported. Finally, several practical issues are highlighted concerning the assessment of reconstructed signal quality, with performance comparisons across the existing 1D and 2D ECG compression methods and the digital signal processing systems they use.
APA, Harvard, Vancouver, ISO, and other styles
31

Chandak, Shubham, Kedar Tatwawadi, Idoia Ochoa, Mikel Hernaez, and Tsachy Weissman. "SPRING: a next-generation compressor for FASTQ data." Bioinformatics 35, no. 15 (December 7, 2018): 2674–76. http://dx.doi.org/10.1093/bioinformatics/bty1015.

Full text
Abstract:
Abstract Motivation High-Throughput Sequencing technologies produce huge amounts of data in the form of short genomic reads, associated quality values and read identifiers. Because of the significant structure present in these FASTQ datasets, general-purpose compressors are unable to completely exploit much of the inherent redundancy. Although there has been a lot of work on designing FASTQ compressors, most of them lack in support of one or more crucial properties, such as support for variable length reads, scalability to high coverage datasets, pairing-preserving compression and lossless compression. Results In this work, we propose SPRING, a reference-free compressor for FASTQ files. SPRING supports a wide variety of compression modes and features, including lossless compression, pairing-preserving compression, lossy compression of quality values, long read compression and random access. SPRING achieves substantially better compression than existing tools, for example, SPRING compresses 195 GB of 25× whole genome human FASTQ from Illumina’s NovaSeq sequencer to less than 7 GB, around 1.6× smaller than previous state-of-the-art FASTQ compressors. SPRING achieves this improvement while using comparable computational resources. Availability and implementation SPRING can be downloaded from https://github.com/shubhamchandak94/SPRING. Supplementary information Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
32

Tao, Dingwen, Sheng Di, Hanqi Guo, Zizhong Chen, and Franck Cappello. "Z-checker: A framework for assessing lossy compression of scientific data." International Journal of High Performance Computing Applications 33, no. 2 (November 15, 2017): 285–303. http://dx.doi.org/10.1177/1094342017737147.

Full text
Abstract:
Because of the vast volume of data being produced by today's scientific simulations and experiments, lossy data compressors, which allow user-controlled loss of accuracy during compression, are a relevant solution for significantly reducing data size. However, lossy compressor developers and users lack a tool to explore the features of scientific data sets and understand data alteration after compression in a systematic and reliable way. To address this gap, we have designed and implemented a generic framework called Z-checker. On the one hand, Z-checker combines a battery of data analysis components for data compression. On the other hand, Z-checker is implemented as an open-source community tool to which users and developers can contribute and add new analysis components based on their additional analysis demands. In this article, we present a survey of existing lossy compressors. Then, we describe the design framework of Z-checker, in which we integrated evaluation metrics proposed in prior work as well as other analysis tools. Specifically, for lossy compressor developers, Z-checker can be used to characterize critical properties (such as entropy, distribution, power spectrum, principal component analysis, and autocorrelation) of any data set to improve compression strategies. For lossy compression users, Z-checker can detect the compression quality (compression ratio and bit rate) and provide various global distortion analyses comparing the original data with the decompressed data (peak signal-to-noise ratio, normalized mean squared error, rate-distortion, rate-compression error, spectral, distribution, and derivatives) and statistical analysis of the compression error (maximum, minimum, and average error; autocorrelation; and distribution of errors). Z-checker can perform the analysis with either coarse granularity (throughout the whole data set) or fine granularity (by user-defined blocks), so that users and developers can select the best-fit, adaptive compressors for different parts of the data set. Z-checker features a visualization interface displaying all analysis results in addition to some basic views of the data sets, such as time series. To the best of our knowledge, Z-checker is the first tool designed to assess lossy compression comprehensively for scientific data sets.
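One of the distortion metrics listed, PSNR, is straightforward to compute directly. A minimal sketch, with crude quantization standing in for a real lossy compress-decompress cycle (the 0.01 step and sine signal are illustrative assumptions):

```python
import math

def psnr(orig, recon):
    """Peak signal-to-noise ratio (dB) between original and decompressed values."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    peak = max(orig) - min(orig)                 # value range of the original data
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

orig = [math.sin(0.01 * i) for i in range(1000)]
recon = [round(x, 2) for x in orig]              # quantize to step 0.01
print(f"PSNR: {psnr(orig, recon):.1f} dB")
```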
APA, Harvard, Vancouver, ISO, and other styles
33

Ochoa, Idoia, Mikel Hernaez, and Tsachy Weissman. "Aligned genomic data compression via improved modeling." Journal of Bioinformatics and Computational Biology 12, no. 06 (December 2014): 1442002. http://dx.doi.org/10.1142/s0219720014420025.

Full text
Abstract:
With the release of the latest Next-Generation Sequencing (NGS) machine, the HiSeq X by Illumina, the cost of sequencing the whole genome of a human is expected to drop to a mere $1000. This milestone in sequencing history marks the era of affordable sequencing of individuals and opens the doors to personalized medicine. Accordingly, unprecedented volumes of genomic data will require storage for processing. There will be dire need not only of compressing aligned data, but also of generating compressed files that can be fed directly to downstream applications to facilitate the analysis of and inference on the data. Several approaches to this challenge have been proposed in the literature; however, focus thus far has been on the low-coverage regime, and most of the suggested compressors are not based on effective modeling of the data. We demonstrate the benefit of data modeling for compressing aligned reads. Specifically, we show that, by working with data models designed for the aligned data, we can improve considerably over the best compression ratio achieved by previously proposed algorithms. Our results indicate that the Pareto-optimal barrier for compression rate and speed claimed by Bonfield and Mahoney (2013) [Bonfield JK and Mahoney MV, Compression of FASTQ and SAM format sequencing data, PLOS ONE, 8(3):e59190, 2013.] does not apply for high-coverage aligned data. Furthermore, our improved compression ratio is achieved by splitting the data in a manner conducive to operations in the compressed domain by downstream applications.
APA, Harvard, Vancouver, ISO, and other styles
34

Nemetz, Tibor, and Pál Papp. "Increasing data security by data compression." Studia Scientiarum Mathematicarum Hungarica 42, no. 4 (October 1, 2005): 343–53. http://dx.doi.org/10.1556/sscmath.42.2005.4.1.

Full text
Abstract:
We analyze the effect of data compression on the security of encryption from both a theoretical and a practical point of view. We demonstrate that data compression essentially improves the security of encryption and helps to overcome technical difficulties; on the other hand, it makes cryptanalysis more difficult and causes extra problems. At present, data compression is applied rarely and frequently defectively. We propose a method which eliminates the negative effects. Our aim is to initiate the use of data compression as an aid to data security. To this end, we provide an overview of the most frequently used cryptographic protocols. A comparison with encryption software reveals that even the most frequently used protocols do not support encryption with compression.
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Chun-Hee, and Chin-Wan Chung. "Compression Schemes with Data Reordering for Ordered Data." Journal of Database Management 25, no. 1 (January 2014): 1–28. http://dx.doi.org/10.4018/jdm.2014010101.

Full text
Abstract:
Although there have been many compression schemes for reducing data effectively, most schemes do not consider the reordering of data. In the case of unordered data, if users change the order of the data in a given data set, the compression ratio may improve compared with compressing the original order. However, in the case of ordered data, users need a mapping table from original positions to changed positions in order to recover the original order, so reordering ordered data may be disadvantageous in terms of space. In this paper, the authors consider two compression schemes, run-length encoding and a bucketing scheme, as bases for showing the impact of data reordering in compression schemes. The authors also propose various optimization techniques related to data reordering. Finally, the authors show that compression schemes with data reordering outperform the original compression schemes in terms of compression ratio.
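For unordered (multiset) data, the reordering effect is easy to see with run-length encoding: sorting maximizes run lengths at no cost, because the original order carries no information that needs recovering. A toy illustration (not the paper's bucketing scheme):

```python
def rle(seq):
    """Run-length encode a sequence as [value, run-length] pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

data = [3, 1, 3, 2, 1, 3, 2, 2, 1, 3] * 100
plain = rle(data)
reordered = rle(sorted(data))   # order is irrelevant here, so sorting loses nothing
print(len(plain), "runs ->", len(reordered), "runs after reordering")
```

For ordered data the same trick requires storing a position-mapping table, whose size can wipe out the gain, which is exactly the trade-off the paper studies.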
APA, Harvard, Vancouver, ISO, and other styles
36

Kontoyiannis, I. "Pointwise redundancy in lossy data compression and universal lossy data compression." IEEE Transactions on Information Theory 46, no. 1 (2000): 136–52. http://dx.doi.org/10.1109/18.817514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mansyuri, Umar. "KOMPRESI DATA TEKS DENGAN METODE RUN LENGTH ENCODING." Jurnal Ilmiah Sistem Informasi 1, no. 2 (December 12, 2021): 102–9. http://dx.doi.org/10.46306/sm.v1i2.13.

Full text
Abstract:
One data compression method is Run Length Encoding (RLE), applied here especially to image data. The RLE method is one of the simplest lossless data compression schemes and is based on a simple principle of data encoding. RLE is very suitable for compressing data containing repetitive characters, such as simple graphic images. The compressed data are 28 RGB (Red, Green, Blue) images and 28 grayscale images in jpg, png, bmp, and tiff formats. The image data are compressed with an encoder and decoder program using the RLE algorithm in the MATLAB application. The RLE method is considered effective in compressing image data if the compression ratio is less than 100%, which occurs when there is much color repetition among the pixels, and ineffective if the compression ratio is more than 100%, which occurs when there is little color repetition among the pixels. Of the 28 RGB images tested, the RLE method was effective on 1 image and ineffective on 27 images. Of the 28 grayscale images tested, it was effective on 6 images and ineffective on 22 images.
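The effectiveness criterion in the abstract (a compression ratio under 100%) can be illustrated with a toy byte-pair RLE on grayscale pixel rows; this is a sketch of the general idea, not the authors' MATLAB implementation:

```python
def rle_size(pixels):
    """Size in bytes of a simple RLE encoding that stores each run
    as a (count, value) byte pair, with run lengths capped at 255."""
    size, run = 0, 1
    for prev, cur in zip(pixels, pixels[1:]):
        if cur == prev and run < 255:
            run += 1
        else:
            size += 2
            run = 1
    return size + 2  # close the final run

def ratio_percent(pixels):
    """Compression ratio as used in the abstract: compressed size as a
    percentage of the original; below 100% means RLE was effective."""
    return 100.0 * rle_size(pixels) / len(pixels)

flat = [0] * 100           # many repeated pixels: RLE is effective
noisy = list(range(100))   # no repeats: RLE inflates the data
print(ratio_percent(flat))    # 2.0
print(ratio_percent(noisy))   # 200.0
```

This mirrors the paper's finding: images with long runs of identical pixel values compress well, while images with little repetition grow larger than the original.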
APA, Harvard, Vancouver, ISO, and other styles
38

Hayati, Anis Kamilah, and Haris Suka Dyatmika. "THE EFFECT OF JPEG2000 COMPRESSION ON REMOTE SENSING DATA OF DIFFERENT SPATIAL RESOLUTIONS." International Journal of Remote Sensing and Earth Sciences (IJReSES) 14, no. 2 (January 8, 2018): 111. http://dx.doi.org/10.30536/j.ijreses.2017.v14.a2724.

Full text
Abstract:
The huge size of remote sensing data strains the information technology infrastructure needed to store, manage, deliver, and process the data. Compression is a possible way to compensate for these disadvantages. JPEG2000 provides both lossless and lossy compression, with scalability for lossy compression. As the lossy compression ratio gets higher, the file size is reduced but the information loss increases. This paper investigates the effect of JPEG2000 compression on remote sensing data of different spatial resolutions. Three data sets (Landsat 8, SPOT 6, and Pleiades) were processed with five different levels of JPEG2000 compression. Each data set was then cropped to a certain area and analyzed using unsupervised classification. To estimate the accuracy, this paper utilized the Mean Square Error (MSE) and the Kappa coefficient of agreement. The study shows that scenes compressed with lossless compression are no different from uncompressed scenes. Furthermore, scenes compressed with lossy compression at ratios below 1:10 show no significant difference from uncompressed data, with Kappa coefficients higher than 0.8.
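The two accuracy measures the study relies on, MSE between pixel values and Cohen's Kappa between classification maps, can be sketched in a few lines (a generic illustration, not the paper's processing chain):

```python
def mse(a, b):
    """Mean squared error between two equally sized pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def kappa(map_a, map_b):
    """Cohen's kappa: agreement between two class-label maps,
    corrected for the agreement expected by chance."""
    n = len(map_a)
    labels = set(map_a) | set(map_b)
    observed = sum(x == y for x, y in zip(map_a, map_b)) / n
    expected = sum(
        (map_a.count(c) / n) * (map_b.count(c) / n) for c in labels
    )
    return (observed - expected) / (1 - expected)

truth = [0, 0, 1, 1]        # classes from the uncompressed scene
classified = [0, 0, 1, 0]   # classes from a compressed scene
print(kappa(truth, classified))   # 0.5
```

A Kappa above 0.8, the threshold cited in the abstract, indicates that the classification of the compressed scene agrees almost perfectly with that of the uncompressed one.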
APA, Harvard, Vancouver, ISO, and other styles
39

Dahunsi, F. M., O. A. Somefun, A. A. Ponnle, and K. B. Adedeji. "Compression Techniques of Electrical Energy Data for Load Monitoring: A Review." Nigerian Journal of Technological Development 18, no. 3 (November 5, 2021): 194–208. http://dx.doi.org/10.4314/njtd.v18i3.4.

Full text
Abstract:
In recent years, the electric grid has seen increasing deployment, use, and integration of smart meters and energy monitors. These devices transmit large time-series load data representing consumed electrical energy for load monitoring. However, load monitoring raises issues concerning efficient processing, transmission, and storage. To promote improved efficiency and sustainability of the smart grid, one way to manage this challenge is to apply data-compression techniques. The subject of compressing electrical energy data (EED) has received quite active interest over the past decade. However, quickly grasping the range of appropriate compression techniques remains something of a bottleneck for researchers and developers starting in this domain. In this context, this paper reviews the compression techniques and methods (lossy and lossless) adopted for load monitoring. Metrics of selected top-performing compression techniques are discussed, such as compression efficiency, low reconstruction error, and encoding-decoding speed. The relation between electrical energy data compression and sound compression is also reviewed. This review should motivate further interest in developing standard codecs for the compression of electrical energy data that match those of other domains.
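The three evaluation metrics named in the abstract (compression efficiency, reconstruction error, and encoding-decoding speed) can be measured with a small benchmark harness; `zlib` is used here only as a stand-in lossless codec, and the synthetic sinusoidal load profile is an assumption for illustration:

```python
import math
import struct
import time
import zlib

def benchmark_lossless(samples):
    """Report the three metrics discussed above for a lossless codec:
    compression ratio, codec speed, and reconstruction error
    (zero by definition for a lossless scheme)."""
    raw = struct.pack(f"{len(samples)}d", *samples)
    t0 = time.perf_counter()
    comp = zlib.compress(raw, level=9)
    t1 = time.perf_counter()
    restored = zlib.decompress(comp)
    t2 = time.perf_counter()
    assert restored == raw            # lossless: exact reconstruction
    return {
        "ratio": len(raw) / len(comp),
        "encode_s": t1 - t0,
        "decode_s": t2 - t1,
    }

# Synthetic load profile: a repeating daily sinusoid over a base load.
load = [500 + 100 * math.sin(2 * math.pi * i / 96) for i in range(960)]
print(benchmark_lossless(load))
```

The strongly periodic profile compresses well even with a generic codec; the EED-specific codecs surveyed in the paper aim to do better by exploiting the structure of load data directly.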
APA, Harvard, Vancouver, ISO, and other styles
40

Budiman, Gelar, Andriyan Bayu Suksmono, and Donny Danudirdjo. "Compressive Sampling with Multiple Bit Spread Spectrum-Based Data Hiding." Applied Sciences 10, no. 12 (June 24, 2020): 4338. http://dx.doi.org/10.3390/app10124338.

Full text
Abstract:
We propose a novel method for data hiding in an audio host with a compressive sampling technique. An over-complete dictionary represents a group of watermarks; each row of the dictionary is a Hadamard sequence representing multiple bits of the watermark. The singular values of the segment-based host audio, held in a diagonal matrix, are then multiplied by the over-complete dictionary, producing a matrix of lower size, and at the same time the watermark is embedded into the compressed audio. In the detector, we detect the watermark and reconstruct the audio. The proposed method thus not only hides the information but also compresses the audio host. Applications of the proposed method include broadcast monitoring and biomedical signal recording: we can mark and secure the signal content by hiding the watermark inside the signal while compressing the signal for memory efficiency. We evaluate the performance in terms of payload, compression ratio, audio quality, and watermark quality. The proposed method can hide the data imperceptibly, in the range of 729–5292 bps, with a compression ratio of 1.47–4.84 and a perfectly detected watermark.
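The core idea that one Hadamard row can carry multiple watermark bits at once can be shown with a toy spread-spectrum sketch. This is not the paper's SVD-based compressive scheme: it uses informed detection (the original host is subtracted before correlation), and the segment length and embedding strength are illustrative assumptions.

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power
    of two). Each row is an orthogonal +-1 sequence."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H]
             + [row + [-x for x in row] for row in H])
    return H

def embed(host, symbol, H, alpha=0.05):
    """Add a scaled Hadamard row to the host segment; the row index
    carries log2(n) watermark bits at once."""
    return [h + alpha * c for h, c in zip(host, H[symbol])]

def detect(residual, H):
    """Informed detection: correlate the residual (watermarked minus
    host) against every row and pick the strongest match."""
    scores = [sum(r * c for r, c in zip(residual, row)) for row in H]
    return max(range(len(H)), key=lambda k: abs(scores[k]))

host = [0.2, -0.1, 0.4, 0.0, -0.3, 0.1, 0.25, -0.05]
H = hadamard(8)
wm = embed(host, 5, H)
residual = [w - h for w, h in zip(wm, host)]
print(detect(residual, H))   # 5: three watermark bits recovered
```

Because the rows are mutually orthogonal, the correlation is zero for every row except the embedded one, which is what allows a single sequence choice to encode several bits reliably.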
APA, Harvard, Vancouver, ISO, and other styles
41

Guerra, Aníbal, Jaime Lotero, José Édinson Aedo, and Sebastián Isaza. "Tackling the Challenges of FASTQ Referential Compression." Bioinformatics and Biology Insights 13 (January 2019): 117793221882137. http://dx.doi.org/10.1177/1177932218821373.

Full text
Abstract:
The exponential growth of genomic data has recently motivated the development of compression algorithms to tackle the storage capacity limitations in bioinformatics centers. Referential compressors could theoretically achieve a much higher compression than their non-referential counterparts; however, the latest tools have not been able to harness such potential yet. To reach such goal, an efficient encoding model to represent the differences between the input and the reference is needed. In this article, we introduce a novel approach for referential compression of FASTQ files. The core of our compression scheme consists of a referential compressor based on the combination of local alignments with binary encoding optimized for long reads. Here we present the algorithms and performance tests developed for our reads compression algorithm, named UdeACompress. Our compressor achieved the best results when compressing long reads and competitive compression ratios for shorter reads when compared to the best programs in the state of the art. As an added value, it also showed reasonable execution times and memory consumption, in comparison with similar tools.
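The essence of referential compression, storing a read as its alignment position plus only the differences from the reference, can be sketched as follows. UdeACompress's actual encoding is more elaborate (local alignments with optimized binary encoding); this is a generic illustration with a brute-force alignment search:

```python
def encode_read(read, reference):
    """Referentially encode a read: find the alignment start in the
    reference with the fewest mismatches, then store only
    (position, mismatch list) instead of the full sequence."""
    best_pos, best_mm = 0, None
    for pos in range(len(reference) - len(read) + 1):
        mm = [(i, b) for i, b in enumerate(read)
              if reference[pos + i] != b]
        if best_mm is None or len(mm) < len(best_mm):
            best_pos, best_mm = pos, mm
    return best_pos, best_mm

def decode_read(pos, mismatches, length, reference):
    """Rebuild the read from the reference plus the stored differences."""
    bases = list(reference[pos:pos + length])
    for i, b in mismatches:
        bases[i] = b
    return "".join(bases)

ref = "ACGTACGTTAGGCCATACGT"
read = "TAGGACAT"
pos, mm = encode_read(read, ref)
assert decode_read(pos, mm, len(read), ref) == read
print(pos, mm)   # position plus a single mismatch replaces 8 bases
```

When reads closely match the reference, the mismatch list is short, which is why referential compressors can in principle far outperform non-referential ones.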
APA, Harvard, Vancouver, ISO, and other styles
42

Aji Suryadi, Yanuar, and Gunawan. "Compressor Piping Design Effect on Vibration Data." Journal of Advanced Research in Fluid Mechanics and Thermal Sciences 88, no. 1 (October 11, 2021): 94–108. http://dx.doi.org/10.37934/arfmts.88.1.94108.

Full text
Abstract:
One of the support systems for oil and gas production is the nitrogen compression system. The compressors were found to have high vibration, with maximum overall values of 9.813 mm/s RMS for the first compressor, 7.439 mm/s RMS for the second, 7.430 mm/s RMS for the third, 13.47 mm/s RMS for the fourth, and 13.22 mm/s RMS for the fifth, while the sixth compressor was already damaged. This research discusses the nitrogen compression process in terms of the characteristics of the fluid flow output from the compressors, using computational fluid dynamics. The first piping system shows that the flow at the standby compressor has a higher pressure, reaching 10.72–11.82 Pa, but this is still acceptable. The second piping system, with two compressors in operation, shows flow in the opposite direction in the pipeline at high pressure; flow turbulence occurs, resulting in a higher speed. The highest pressure in the pipeline reaches 44.79 Pa, mostly at the fifth and sixth compressors. The conclusion from this research is that high-pressure backflow occurs when one compressor stops and another starts running. Adding a valve prevents direct pressure on the compressors and keeps condensed fluid from the gas flow from entering them.
APA, Harvard, Vancouver, ISO, and other styles
43

Song, Biao, Yuyang Fang, Runda Guan, Rongjie Zhu, Xiaokang Pan, and Yuan Tian. "Hierarchical Indexing and Compression Method with AI-Enhanced Restoration for Scientific Data Service." Applied Sciences 14, no. 13 (June 25, 2024): 5528. http://dx.doi.org/10.3390/app14135528.

Full text
Abstract:
In the process of data services, compressing and indexing data can reduce storage costs, improve query efficiency, and thus enhance the quality of data services. However, different service requirements have diverse demands for data precision. Traditional lossy compression techniques fail to meet the precision requirements of different data due to their fixed compression parameters and schemes. Additionally, error-bounded lossy compression techniques, due to their tightly coupled design, cannot achieve high compression ratios under high precision requirements. To address these issues, this paper proposes a lossy compression technique based on error control. Instead of imposing precision constraints during compression, this method first uses the JPEG compression algorithm for multi-level compression and then manages data through a tree-based index structure to achieve error control. This approach satisfies error control requirements while effectively avoiding tight coupling. Additionally, this paper enhances data restoration effects using a deep learning network and provides a range query processing algorithm for the tree-based index to improve query efficiency. We evaluated our solution using ocean data. Experimental results show that, while maintaining data precision requirements (PSNR of at least 39 dB), our compression ratio can reach 64, which is twice that of the SZ compression algorithm.
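The precision requirement the experiments enforce (PSNR of at least 39 dB) is a standard reconstruction-quality measure; a minimal sketch of its computation, assuming 8-bit data so that the peak value is 255:

```python
import math

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between original data and its
    lossy reconstruction; higher means a more faithful restoration."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")   # identical data: no distortion
    return 10 * math.log10(peak ** 2 / mse)

orig = [10, 50, 200, 130]
approx = [11, 49, 202, 129]   # small reconstruction errors
print(round(psnr(orig, approx), 1))   # 45.7
```

In the paper's setting, the AI-enhanced restoration raises the PSNR of the JPEG-decompressed data, which is how a compression ratio of 64 can still satisfy the 39 dB bound.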
APA, Harvard, Vancouver, ISO, and other styles
44

Payani, Ali, Afshin Abdi, Xin Tian, Faramarz Fekri, and Mohamed Mohandes. "Advances in Seismic Data Compression via Learning from Data: Compression for Seismic Data Acquisition." IEEE Signal Processing Magazine 35, no. 2 (March 2018): 51–61. http://dx.doi.org/10.1109/msp.2017.2784458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hasan, M. R., M. I. Ibrahimy, S. M. A. Motakabber, M. M. Ferdaus, and M. N. H. Khan. "Comparative data compression techniques and multi-compression results." IOP Conference Series: Materials Science and Engineering 53 (December 20, 2013): 012081. http://dx.doi.org/10.1088/1757-899x/53/1/012081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

A. Al-Khayyat, Kamal, Imad F. Al-Shaikhli, and V. Vijayakuumar. "On Randomness of Compressed Data Using Non-parametric Randomness Tests." Bulletin of Electrical Engineering and Informatics 7, no. 1 (March 1, 2018): 63–69. http://dx.doi.org/10.11591/eei.v7i1.902.

Full text
Abstract:
Four randomness tests were applied to the outputs (compressed files) of four lossless compression algorithms: JPEG-LS and JPEG-2000, which are image-dedicated algorithms, and 7z and Bzip2, which are general-purpose algorithms. The relationship between the results of the randomness tests and the compression ratio was investigated. This paper reports an important relationship between the statistical information behind these tests and the compression ratio, and shows that this statistical information is almost the same, at least for the four lossless algorithms under test. The information shows that 50% of the compressed data consists of groupings of runs, 50% has positive signs when comparing adjacent values, 66% of the files contain turning points, and, under the Cox-Stuart test, 25% of the files give positive signs, which reflects the similarity of compressed data across algorithms. Regarding the relationship between the compression ratio and this statistical information, the paper also shows that the greater the values of these statistics, the greater the compression ratio obtained.
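Two of the non-parametric statistics mentioned in the abstract, turning points and the Cox-Stuart sign count, are simple to compute; the sketch below shows the counting step only, without the significance thresholds a full test would apply:

```python
def turning_points(xs):
    """Count interior points that are local maxima or minima; for a
    random sequence the expected count is 2*(n - 2)/3 of the
    interior points."""
    return sum(
        (xs[i - 1] < xs[i] > xs[i + 1]) or (xs[i - 1] > xs[i] < xs[i + 1])
        for i in range(1, len(xs) - 1)
    )

def cox_stuart_positive(xs):
    """Cox-Stuart trend statistic: pair each value with the one half a
    sequence away and count the positive differences."""
    half = len(xs) // 2
    return sum(xs[i + half] > xs[i] for i in range(half))

data = [5, 1, 4, 2, 6, 3, 7, 0]   # bytes of a hypothetical compressed file
print(turning_points(data))       # 6: every interior point turns
print(cox_stuart_positive(data))  # 3
```

Applied to the bytes of a compressed file, counts near their random-sequence expectations are evidence that the compressor has squeezed out the statistical structure, which is the connection to compression ratio the paper explores.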
APA, Harvard, Vancouver, ISO, and other styles
47

Luzhetskyі, V. A., L. A. Savitska, and V. A. Kaplun. "SPECIALIZED DATA COMPRESSION PROCESSOR." Information technology and computer engineering 54, no. 2 (2022): 15–25. http://dx.doi.org/10.31649/1999-9941-2022-54-2-15-25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Harth‐Kitzerow, Johannes, Reimar H. Leike, Philipp Arras, and Torsten A. Enßlin. "Toward Bayesian Data Compression." Annalen der Physik 533, no. 3 (February 8, 2021): 2000508. http://dx.doi.org/10.1002/andp.202000508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Perkins, M. G. "Data compression of stereopairs." IEEE Transactions on Communications 40, no. 4 (April 1992): 684–96. http://dx.doi.org/10.1109/26.141424.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Crochemore, M., F. Mignosi, A. Restivo, and S. Salemi. "Data compression using antidictionaries." Proceedings of the IEEE 88, no. 11 (November 2000): 1756–68. http://dx.doi.org/10.1109/5.892711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
