Journal articles on the topic "Data compression"


Check out the top 50 journal articles on the topic "Data compression".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided the relevant parameters are available in the metadata.

Browse journal articles from various fields and create appropriate bibliographies.

1

Shevchuk, Yury Vladimirovich. "Memory-efficient sensor data compression". Program Systems: Theory and Applications 13, no. 2 (4.04.2022): 35–63. http://dx.doi.org/10.25209/2079-3316-2022-13-2-35-63.

Abstract:
We treat scalar data compression in sensor network nodes in streaming mode (compressing data points as they arrive, with no pre-compression buffering). Several experimental algorithms based on linear predictive coding (LPC) combined with run-length encoding (RLE) are considered. In the entropy coding stage we evaluated (a) variable-length coding with dynamic prefixes generated with the MTF transform, (b) adaptive-width binary coding, and (c) adaptive Golomb-Rice coding. We provide a comparison of known and experimental compression algorithms on 75 sensor data sources. Compression ratios achieved in the tests are about 1.5/4/1000000 (min/med/max), with a compression context size of about 10 bytes.
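For readers who want to see the shape of such a pipeline, here is a minimal, self-contained sketch (not the paper's algorithms): an order-1 linear predictor, a zigzag map, and Golomb-Rice coding of the residuals with a fixed parameter k. The RLE stage and the adaptive coding variants evaluated in the paper are omitted.

```python
# Minimal streaming predictor + Golomb-Rice sketch in the spirit of the abstract
# above (NOT the paper's algorithms): an order-1 predictor turns each sample into
# a residual, which is zigzag-mapped and Rice-coded with a fixed parameter k.

def zigzag(n: int) -> int:
    """Map signed residuals to unsigned: 0, -1, 1, -2, 2 ... -> 0, 1, 2, 3, 4 ..."""
    return (n << 1) if n >= 0 else ((-n) << 1) - 1

def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice code: unary quotient, a stop bit, then a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def compress_stream(samples, k=4):
    """Encode samples one at a time (streaming): predict-previous, then Rice-code."""
    prev, bits = 0, []
    for x in samples:
        bits.append(rice_encode(zigzag(x - prev), k))
        prev = x
    return "".join(bits)

if __name__ == "__main__":
    data = [100, 101, 101, 103, 102, 102, 102, 110]   # slowly varying sensor readings
    code = compress_stream(data)
    print(len(code), "bits vs", len(data) * 16, "bits raw (assuming 16-bit samples)")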
2

Saidhbi, Sheik. "An Intelligent Multimedia Data Encryption and Compression and Secure Data Transmission of Public Cloud". Asian Journal of Engineering and Applied Technology 8, no. 2 (5.05.2019): 37–40. http://dx.doi.org/10.51983/ajeat-2019.8.2.1141.

Abstract:
Data compression is a method of reducing the size of a data file so that it takes less disk space to store. Compression of a file depends on how the file is encoded. A lossless data compression algorithm loses no data while compressing a file, so confidential data can be reproduced exactly if it is compressed losslessly. Compression reduces redundancy, and a file that is compressed before encryption offers better security and a faster transfer rate across the network than encrypting and transferring the uncompressed file. Most health-related computer applications are not secure, and they exchange a lot of confidential health data in different file formats such as HL7, DICOM images, and other audio, image, text, and video formats. This confidential data needs to be transmitted securely and stored efficiently. This paper therefore proposes a learning compression-encryption model for identifying the files that should be compressed before encryption and the files that should be encrypted without compressing them.
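As a companion to this abstract, a minimal compress-then-encrypt sketch (not the paper's learning model): it probes a sample of the file with zlib to decide whether compression is worthwhile before encrypting. Fernet comes from the third-party cryptography package; the probe size and threshold are arbitrary illustration values.

```python
# Sketch of a compress-then-encrypt decision (NOT the paper's learning model):
# probe a sample of the file with zlib; if it shrinks enough, compress the whole
# file before encrypting, otherwise encrypt it as-is.
import zlib
from cryptography.fernet import Fernet

def pack(data: bytes, key: bytes, probe_size: int = 64 * 1024, threshold: float = 0.9):
    sample = data[:probe_size]
    compressible = len(zlib.compress(sample)) < threshold * max(len(sample), 1)
    payload = zlib.compress(data) if compressible else data
    token = Fernet(key).encrypt(payload)
    return token, compressible          # the flag must be kept to unpack correctly

def unpack(token: bytes, key: bytes, compressed: bool) -> bytes:
    payload = Fernet(key).decrypt(token)
    return zlib.decompress(payload) if compressed else payload

if __name__ == "__main__":
    key = Fernet.generate_key()
    blob = b"confidential health record " * 1000
    token, was_compressed = pack(blob, key)
    assert unpack(token, key, was_compressed) == blob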
3

McGeoch, Catherine C. "Data Compression". American Mathematical Monthly 100, no. 5 (May 1993): 493. http://dx.doi.org/10.2307/2324310.

4

Helman, D. R., and G. G. Langdon. "Data compression". IEEE Potentials 7, no. 1 (February 1988): 25–28. http://dx.doi.org/10.1109/45.1889.

5

Lelewer, Debra A., and Daniel S. Hirschberg. "Data compression". ACM Computing Surveys 19, no. 3 (September 1987): 261–96. http://dx.doi.org/10.1145/45072.45074.

6

McGeoch, Catherine C. "Data Compression". American Mathematical Monthly 100, no. 5 (May 1993): 493–97. http://dx.doi.org/10.1080/00029890.1993.11990441.

7

Bookstein, Abraham, and James A. Storer. "Data compression". Information Processing & Management 28, no. 6 (November 1992): 675–80. http://dx.doi.org/10.1016/0306-4573(92)90060-d.

8

Nithya, P., T. Vengattaraman, and M. Sathya. "Survey On Parameters of Data Compression". REST Journal on Data Analytics and Artificial Intelligence 2, no. 1 (1.03.2023): 1–7. http://dx.doi.org/10.46632/jdaai/2/1/1.

Abstract:
The rapid development of hardware and software gives rise to data growth. This growth has numerous impacts, including the need for larger storage capacity for storing and transmitting data. Data compression is needed in today’s world because it helps to minimize the amount of storage space required to store and transmit data. Performance measures in data compression are used to evaluate the efficiency and effectiveness of data compression algorithms. In recent times, numerous data compression algorithms have been developed to reduce data storage and increase transmission speed in the internet era. To analyze how data compression performance is measured for text, image, audio, and video compression, this survey discusses the important data compression parameters according to their data types.
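The parameters such a survey compares can be computed in a few lines; the sketch below uses zlib as a stand-in codec and reports compression ratio, space savings, and bits per input byte.

```python
# Common data compression performance parameters, computed with zlib as a
# stand-in codec: compression ratio, space savings and bits per input byte.
import zlib

def compression_parameters(data: bytes) -> dict:
    compressed = zlib.compress(data, 9)
    ratio = len(data) / len(compressed)                  # >1 means the data shrank
    savings = 1 - len(compressed) / len(data)            # fraction of space saved
    bits_per_byte = 8 * len(compressed) / len(data)      # average output bits per input byte
    return {"ratio": ratio, "savings": savings, "bits_per_byte": bits_per_byte}

if __name__ == "__main__":
    text = ("data compression reduces storage and transmission cost " * 200).encode()
    print(compression_parameters(text))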
9

Ryabko, Boris. "Time-Universal Data Compression". Algorithms 12, no. 6 (29.05.2019): 116. http://dx.doi.org/10.3390/a12060116.

Abstract:
Nowadays, a variety of data-compressors (or archivers) is available, each of which has its merits, and it is impossible to single out the best ones. Thus, one faces the problem of choosing the best method to compress a given file, and this problem is more important the larger is the file. It seems natural to try all the compressors and then choose the one that gives the shortest compressed file, then transfer (or store) the index number of the best compressor (it requires log m bits, if m is the number of compressors available) and the compressed file. The only problem is the time, which essentially increases due to the need to compress the file m times (in order to find the best compressor). We suggest a method of data compression whose performance is close to optimal, but for which the extra time needed is relatively small: the ratio of this extra time and the total time of calculation can be limited, in an asymptotic manner, by an arbitrary positive constant. In short, the main idea of the suggested approach is as follows: in order to find the best, try all the data compressors, but, when doing so, use for compression only a small part of the file. Then apply the best data compressors to the whole file. Note that there are many situations where it may be necessary to find the best data compressor out of a given set. In such a case, it is often done by comparing compressors empirically. One of the goals of this work is to turn such a selection process into a part of the data compression method, automating and optimizing it.
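The main idea lends itself to a very short sketch; below, the standard-library codecs zlib, bz2 and lzma stand in for the given set of compressors, and the prefix fraction is a free parameter (the paper's contribution is the analysis of how small this extra work can be made, which the sketch does not reproduce).

```python
# Sketch of the idea described above: pick the best compressor by trying every
# candidate on a small prefix only, then compress the whole file with the winner.
import bz2, lzma, zlib

CANDIDATES = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def time_universal_compress(data: bytes, prefix_fraction: float = 0.01):
    prefix = data[: max(1, int(len(data) * prefix_fraction))]
    # Choose the compressor that makes the prefix smallest...
    best = min(CANDIDATES, key=lambda name: len(CANDIDATES[name](prefix)))
    # ...then spend the "real" time compressing the full file only once.
    # The chosen name (log m bits) must be stored alongside the compressed data.
    return best, CANDIDATES[best](data)

if __name__ == "__main__":
    data = open(__file__, "rb").read() * 50
    name, blob = time_universal_compress(data)
    print(f"selected {name}: {len(data)} -> {len(blob)} bytes")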
10

Mishra, Amit Kumar. "Versatile Video Coding (VVC) Standard: Overview and Applications". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (10.09.2019): 975–81. http://dx.doi.org/10.17762/turcomat.v10i2.13578.

Abstract:
Information security includes picture and video compression and encryption since compressed data is more secure than uncompressed imagery. Another point is that handling data of smaller sizes is simple. Therefore, efficient, secure, and simple data transport methods are created through effective data compression technology. Consequently, there are two different sorts of compression algorithm techniques: lossy compressions and lossless compressions. Any type of data format, including text, audio, video, and picture files, may leverage these technologies. In this procedure, the Least Significant Bit technique is used to encrypt each frame of the video file format to be able to increase security. The primary goals of this procedure are to safeguard the data by encrypting the frames and compressing the video file. Using PSNR to enhance process throughput would also enhance data transmission security while reducing data loss.
11

Ko, Yousun, Alex Chadwick, Daniel Bates, and Robert Mullins. "Lane Compression". ACM Transactions on Embedded Computing Systems 20, no. 2 (March 2021): 1–26. http://dx.doi.org/10.1145/3431815.

Abstract:
This article presents Lane Compression, a lightweight lossless compression technique for machine learning that is based on a detailed study of the statistical properties of machine learning data. The proposed technique profiles machine learning data gathered ahead of run-time and partitions values bit-wise into different lanes with more distinctive statistical characteristics. Then the most appropriate compression technique is chosen for each lane out of a small number of low-cost compression techniques. Lane Compression’s compute and memory requirements are very low and yet it achieves a compression rate comparable to or better than Huffman coding. We evaluate and analyse Lane Compression on a wide range of machine learning networks for both inference and re-training. We also demonstrate that the profiling prior to run-time and the ability to configure the hardware based on the profiling guarantee robust performance across different models and datasets. Hardware implementations are described, and the scheme’s simplicity makes it suitable for compressing both on-chip and off-chip traffic.
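Lane Compression itself profiles data ahead of run-time and picks a low-cost coder per lane; the sketch below only illustrates the underlying bit-wise lane idea by splitting bytes into nibble lanes and compressing each lane separately with zlib, so it should be read as an illustration of the data layout rather than the paper's scheme.

```python
# Illustration of the bit-wise "lane" idea only (NOT the paper's scheme): split
# 8-bit values into a high-nibble lane and a low-nibble lane and compress each
# lane separately. For small-magnitude quantized values the high-nibble lane is
# highly skewed and compresses to almost nothing on its own.
import zlib

def split_lanes(values: bytes):
    high = bytes(v >> 4 for v in values)     # lane 0: upper 4 bits of every value
    low = bytes(v & 0x0F for v in values)    # lane 1: lower 4 bits of every value
    return high, low

def lane_compressed_size(values: bytes) -> int:
    return sum(len(zlib.compress(lane)) for lane in split_lanes(values))

if __name__ == "__main__":
    import random
    random.seed(0)
    # Synthetic small-magnitude values standing in for quantized weights.
    weights = bytes(min(255, abs(int(random.gauss(0, 20)))) for _ in range(10000))
    print("whole tensor:", len(zlib.compress(weights)), "bytes,",
          "per-lane total:", lane_compressed_size(weights), "bytes")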
12

A. Sapate, Suchit. "Effective XML Compressor: XMill with LZMA Data Compression". International Journal of Education and Management Engineering 9, no. 4 (8.07.2019): 1–10. http://dx.doi.org/10.5815/ijeme.2019.04.01.

13

Zirkind, Givon. "AFIS data compression". ACM SIGSOFT Software Engineering Notes 32, no. 6 (November 2007): 8. http://dx.doi.org/10.1145/1317471.1317480.

14

Zirkind, Givon. "AFIS data compression". ACM SIGGRAPH Computer Graphics 41, no. 4 (November 2007): 1–36. http://dx.doi.org/10.1145/1331098.1331103.

15

McCluskey, E. J., D. Burek, B. Koenemann, S. Mitra, J. Patel, J. Rajski, and J. Waicukauski. "Test data compression". IEEE Design & Test of Computers 20, no. 2 (March 2003): 76–87. http://dx.doi.org/10.1109/mdt.2003.1188267.

16

Hernaez, Mikel, Dmitri Pavlichin, Tsachy Weissman, and Idoia Ochoa. "Genomic Data Compression". Annual Review of Biomedical Data Science 2, no. 1 (20.07.2019): 19–37. http://dx.doi.org/10.1146/annurev-biodatasci-072018-021229.

Abstract:
Recently, there has been growing interest in genome sequencing, driven by advances in sequencing technology, in terms of both efficiency and affordability. These developments have allowed many to envision whole-genome sequencing as an invaluable tool for both personalized medical care and public health. As a result, increasingly large and ubiquitous genomic data sets are being generated. This poses a significant challenge for the storage and transmission of these data. Already, it is more expensive to store genomic data for a decade than it is to obtain the data in the first place. This situation calls for efficient representations of genomic information. In this review, we emphasize the need for designing specialized compressors tailored to genomic data and describe the main solutions already proposed. We also give general guidelines for storing these data and conclude with our thoughts on the future of genomic formats and compressors.
17

Mattsson, A. Geo. "DATA ON COMPRESSION". Journal of the American Society for Naval Engineers 13, no. 2 (18.03.2009): 422. http://dx.doi.org/10.1111/j.1559-3584.1901.tb03391.x.

18

McGillis, Peggy, Mina Nichols, and Britt Terry. "[Data] Compression Theory". EDPACS 25, no. 8 (February 1998): 16. http://dx.doi.org/10.1201/1079/43236.25.8.19980201/30193.9.

19

Berger, Jens, Ulrich Frankenfeld, Volker Lindenstruth, Patrick Plamper, Dieter Röhrich, Erich Schäfer, Markus W. Schulz, et al. "TPC data compression". Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 489, no. 1-3 (August 2002): 406–21. http://dx.doi.org/10.1016/s0168-9002(02)00792-1.

20

Farruggia, Andrea, Paolo Ferragina, Antonio Frangioni, and Rossano Venturini. "Bicriteria Data Compression". SIAM Journal on Computing 48, no. 5 (January 2019): 1603–42. http://dx.doi.org/10.1137/17m1121457.

21

BRAMBLE, JOHN M., H. K. HUANG, and MARK D. MURPHY. "Image Data Compression". Investigative Radiology 23, no. 10 (October 1988): 707–12. http://dx.doi.org/10.1097/00004424-198810000-00001.

22

Tyrygin, I. Ya. "?-Entropy data compression". Ukrainian Mathematical Journal 44, no. 11 (November 1992): 1473–79. http://dx.doi.org/10.1007/bf01071523.

23

Goldberg, Mark A. "Image data compression". Journal of Digital Imaging 11, S1 (August 1998): 230–32. http://dx.doi.org/10.1007/bf03168323.

24

Goldberg, Mark A. "Image data compression". Journal of Digital Imaging 10, S1 (August 1997): 9–11. http://dx.doi.org/10.1007/bf03168640.

25

Yang, Le, Zhao Yang Guo, Shan Shan Yong, Feng Guo, and Xin An Wang. "A Hardware Implementation of Real Time Lossless Data Compression and Decompression Circuits". Applied Mechanics and Materials 719-720 (January 2015): 554–60. http://dx.doi.org/10.4028/www.scientific.net/amm.719-720.554.

Abstract:
This paper presents a hardware implementation of real-time data compression and decompression circuits based on the LZW algorithm. LZW is a dictionary-based data compression method with the advantages of high speed, high compression, and small resource occupation. In the compression circuit, the design uses two dictionaries alternately to improve efficiency and compression rate. In the decompression circuit, an integrated state-machine control module is adopted to save hardware resources. Through hardware description language programming, the circuits pass functional and timing simulation. The width of a data sample is 12 bits, and the dictionary storage capacity is 1K. The simulation results show that the compression and decompression circuits are fully functional. Compared with a software method, the hardware implementation saves storage and compression time, and it has high practical value.
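For comparison with the circuit described above, here is a compact software version of textbook LZW (the paper's two-dictionary ping-pong and fixed 1K on-chip dictionary are not reproduced; the dictionary below simply stops growing at a cap).

```python
# Textbook LZW in software, for reference alongside the hardware design above.
MAX_DICT = 4096   # stand-in for a fixed dictionary capacity

def lzw_compress(data: bytes):
    table = {bytes([i]): i for i in range(256)}
    out, w = [], b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            if len(table) < MAX_DICT:
                table[wc] = len(table)
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]   # the KwKwK special case
        out.append(entry)
        if len(table) < MAX_DICT:
            table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

if __name__ == "__main__":
    blob = b"TOBEORNOTTOBEORTOBEORNOT" * 100
    codes = lzw_compress(blob)
    assert lzw_decompress(codes) == blob
    print(len(blob), "bytes ->", len(codes), "codes")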
26

Pandey, Anukul, Barjinder Singh Saini, and Butta Singh. "ELECTROCARDIOGRAM DATA COMPRESSION TECHNIQUES IN 1D/2D DOMAIN". Biomedical Engineering: Applications, Basis and Communications 33, no. 02 (9.01.2021): 2150011. http://dx.doi.org/10.4015/s1016237221500113.

Abstract:
The electrocardiogram (ECG) is one of the best representatives of physiological signals, reflecting the state of the autonomic nervous system, which is primarily responsible for cardiac activity. ECG data compression plays a significant role in localized digital storage and efficient communication channel utilization in telemedicine applications. The compressor efficiency of lossless and lossy compression systems depends on the methodologies used for compression and on the quality measure used to evaluate distortion. Depending on the domain, ECG data compression can be performed in one dimension (1D) or two dimensions (2D), exploiting inter-beat correlation or combined inter- and intra-beat correlation, respectively. In this paper, a comparative study of 1D and 2D ECG data compression methods from the existing literature is carried out to provide an update in this regard. ECG data compression techniques and algorithms in the 1D and 2D domains have their own merits and limitations. Recently, numerous techniques for 1D ECG data compression have been developed, in both the direct and transform domains. In addition, 2D ECG data compression research based on period normalization and complexity sorting has been reported in recent times. Finally, several practical issues are highlighted concerning the assessment of reconstructed signal quality and performance comparisons, with an overall comparison of existing 1D and 2D ECG compression methods based on the digital signal processing systems they use.
27

P, Srividya. "Optimization of Lossless Compression Algorithms using Multithreading". Journal of Information Technology and Sciences 9, no. 1 (1.03.2023): 36–42. http://dx.doi.org/10.46610/joits.2023.v09i01.005.

Abstract:
The process of reducing the number of bits required to characterize data is referred to as compression. The advantages of compression include a reduction in the time taken to transfer data from one point to another and a reduction in the cost of storage space and network bandwidth. There are two types of compression algorithms, namely lossy and lossless. Lossy algorithms find utility in compressing audio and video signals, whereas lossless algorithms are used in compressing text messages. The advent of the internet and its worldwide usage has raised not only the use but also the storage of text, audio and video files. These multimedia files demand more storage space than traditional files, which has given rise to the requirement for an efficient compression algorithm. There is a considerable improvement in the computing performance of machines due to the advent of the multi-core processor; however, this multi-core architecture is not exploited by compression algorithms. This paper presents the implementation of lossless compression algorithms, namely the Lempel-Ziv-Markov algorithm, BZip2 and ZLIB, using multithreading. The results obtained show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress the text. The comparison is done for both compression without multithreading and compression with multithreading.
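A minimal sketch of the kind of chunk-parallel compression the paper studies, using the standard-library bindings for the same three codecs (this is not the paper's implementation): each chunk is compressed in its own thread, and the output must be decompressed chunk by chunk.

```python
# Chunk-parallel compression with a thread pool (NOT the paper's code). The C
# codecs behind zlib/bz2/lzma release the GIL while compressing large buffers,
# which is what makes threads useful here despite CPython's GIL. Because chunks
# are compressed independently, the result is a list of members rather than a
# single stream.
import bz2, lzma, zlib
from concurrent.futures import ThreadPoolExecutor

CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def parallel_compress(data: bytes, codec: str = "zlib",
                      chunk_size: int = 1 << 20, workers: int = 4):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(CODECS[codec], chunks))

if __name__ == "__main__":
    data = open(__file__, "rb").read() * 2000
    members = parallel_compress(data, "bz2")
    restored = b"".join(bz2.decompress(m) for m in members)
    assert restored == data
    print(len(data), "->", sum(map(len, members)), "bytes in", len(members), "chunks")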
28

P, Srividya. "Optimization of Lossless Compression Algorithms using Multithreading". Journal of Information Technology and Sciences 9, no. 1 (2.03.2023): 36–42. http://dx.doi.org/10.46610/joits.2022.v09i01.005.

Abstract:
The process of reducing the number of bits required to characterize data is referred to as compression. The advantages of compression include a reduction in the time taken to transfer data from one point to another and a reduction in the cost of storage space and network bandwidth. There are two types of compression algorithms, namely lossy and lossless. Lossy algorithms find utility in compressing audio and video signals, whereas lossless algorithms are used in compressing text messages. The advent of the internet and its worldwide usage has raised not only the use but also the storage of text, audio and video files. These multimedia files demand more storage space than traditional files, which has given rise to the requirement for an efficient compression algorithm. There is a considerable improvement in the computing performance of machines due to the advent of the multi-core processor; however, this multi-core architecture is not exploited by compression algorithms. This paper presents the implementation of lossless compression algorithms, namely the Lempel-Ziv-Markov algorithm, BZip2 and ZLIB, using multithreading. The results obtained show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress the text. The comparison is done for both compression without multithreading and compression with multithreading.
29

Ochoa, Idoia, Mikel Hernaez, and Tsachy Weissman. "Aligned genomic data compression via improved modeling". Journal of Bioinformatics and Computational Biology 12, no. 06 (December 2014): 1442002. http://dx.doi.org/10.1142/s0219720014420025.

Abstract:
With the release of the latest Next-Generation Sequencing (NGS) machine, the HiSeq X by Illumina, the cost of sequencing the whole genome of a human is expected to drop to a mere $1000. This milestone in sequencing history marks the era of affordable sequencing of individuals and opens the doors to personalized medicine. In accord, unprecedented volumes of genomic data will require storage for processing. There will be dire need not only of compressing aligned data, but also of generating compressed files that can be fed directly to downstream applications to facilitate the analysis of and inference on the data. Several approaches to this challenge have been proposed in the literature; however, focus thus far has been on the low coverage regime and most of the suggested compressors are not based on effective modeling of the data. We demonstrate the benefit of data modeling for compressing aligned reads. Specifically, we show that, by working with data models designed for the aligned data, we can improve considerably over the best compression ratio achieved by previously proposed algorithms. Our results indicate that the pareto-optimal barrier for compression rate and speed claimed by Bonfield and Mahoney (2013) [Bonfield JK and Mahoneys MV, Compression of FASTQ and SAM format sequencing data, PLOS ONE, 8(3):e59190, 2013.] does not apply for high coverage aligned data. Furthermore, our improved compression ratio is achieved by splitting the data in a manner conducive to operations in the compressed domain by downstream applications.
30

Chandak, Shubham, Kedar Tatwawadi, Idoia Ochoa, Mikel Hernaez, and Tsachy Weissman. "SPRING: a next-generation compressor for FASTQ data". Bioinformatics 35, no. 15 (7.12.2018): 2674–76. http://dx.doi.org/10.1093/bioinformatics/bty1015.

Abstract:
Motivation: High-Throughput Sequencing technologies produce huge amounts of data in the form of short genomic reads, associated quality values and read identifiers. Because of the significant structure present in these FASTQ datasets, general-purpose compressors are unable to completely exploit much of the inherent redundancy. Although there has been a lot of work on designing FASTQ compressors, most of them lack in support of one or more crucial properties, such as support for variable length reads, scalability to high coverage datasets, pairing-preserving compression and lossless compression. Results: In this work, we propose SPRING, a reference-free compressor for FASTQ files. SPRING supports a wide variety of compression modes and features, including lossless compression, pairing-preserving compression, lossy compression of quality values, long read compression and random access. SPRING achieves substantially better compression than existing tools, for example, SPRING compresses 195 GB of 25× whole genome human FASTQ from Illumina’s NovaSeq sequencer to less than 7 GB, around 1.6× smaller than previous state-of-the-art FASTQ compressors. SPRING achieves this improvement while using comparable computational resources. Availability and implementation: SPRING can be downloaded from https://github.com/shubhamchandak94/SPRING. Supplementary information: Supplementary data are available at Bioinformatics online.
31

Kontoyiannis, I. "Pointwise redundancy in lossy data compression and universal lossy data compression". IEEE Transactions on Information Theory 46, no. 1 (2000): 136–52. http://dx.doi.org/10.1109/18.817514.

32

Nemetz, Tibor, and Pál Papp. "Increasing data security by data compression". Studia Scientiarum Mathematicarum Hungarica 42, no. 4 (1.10.2005): 343–53. http://dx.doi.org/10.1556/sscmath.42.2005.4.1.

Abstract:
We analyze the effect of data compression on the security of encryption from both a theoretical and a practical point of view. It is demonstrated that data compression essentially improves the security of encryption and helps to overcome technical difficulties, since it makes cryptanalysis more difficult. On the other hand, it causes extra problems. At present, data compression is applied rarely and frequently defectively. We propose a method which eliminates these negative effects. Our aim is to promote data compression as an aid to data security. To this end, we provide an overview of the most frequently used cryptographic protocols. A comparison with encryption software reveals that even the most frequently used protocols do not support encryption combined with compression.
33

Tao, Dingwen, Sheng Di, Hanqi Guo, Zizhong Chen, and Franck Cappello. "Z-checker: A framework for assessing lossy compression of scientific data". International Journal of High Performance Computing Applications 33, no. 2 (15.11.2017): 285–303. http://dx.doi.org/10.1177/1094342017737147.

Abstract:
Because of the vast volume of data being produced by today’s scientific simulations and experiments, lossy data compressor allowing user-controlled loss of accuracy during the compression is a relevant solution for significantly reducing the data size. However, lossy compressor developers and users are missing a tool to explore the features of scientific data sets and understand the data alteration after compression in a systematic and reliable way. To address this gap, we have designed and implemented a generic framework called Z-checker. On the one hand, Z-checker combines a battery of data analysis components for data compression. On the other hand, Z-checker is implemented as an open-source community tool to which users and developers can contribute and add new analysis components based on their additional analysis demands. In this article, we present a survey of existing lossy compressors. Then, we describe the design framework of Z-checker, in which we integrated evaluation metrics proposed in prior work as well as other analysis tools. Specifically, for lossy compressor developers, Z-checker can be used to characterize critical properties (such as entropy, distribution, power spectrum, principal component analysis, and autocorrelation) of any data set to improve compression strategies. For lossy compression users, Z-checker can detect the compression quality (compression ratio and bit rate) and provide various global distortion analysis comparing the original data with the decompressed data (peak signal-to-noise ratio, normalized mean squared error, rate–distortion, rate-compression error, spectral, distribution, and derivatives) and statistical analysis of the compression error (maximum, minimum, and average error; autocorrelation; and distribution of errors). Z-checker can perform the analysis with either coarse granularity (throughout the whole data set) or fine granularity (by user-defined blocks), such that the users and developers can select the best fit, adaptive compressors for different parts of the data set. Z-checker features a visualization interface displaying all analysis results in addition to some basic views of the data sets such as time series. To the best of our knowledge, Z-checker is the first tool designed to assess lossy compression comprehensively for scientific data sets.
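Z-checker itself is a C framework; the snippet below only reproduces, with NumPy, the arithmetic behind a few of the headline numbers such tools report (maximum error, normalized RMSE, PSNR, compression ratio, bit rate). The compressed size in the demo is a placeholder, not the output of a real compressor.

```python
# A few global distortion and size metrics for lossy compression assessment,
# computed with NumPy for any original/decompressed pair (this is not the
# Z-checker framework itself, just the formulas behind its headline numbers).
import numpy as np

def lossy_metrics(original: np.ndarray, decompressed: np.ndarray,
                  compressed_bytes: int) -> dict:
    diff = original.astype(np.float64) - decompressed.astype(np.float64)
    value_range = float(original.max() - original.min())
    mse = float(np.mean(diff ** 2))
    return {
        "max_abs_error": float(np.max(np.abs(diff))),
        "nrmse": float(np.sqrt(mse)) / value_range,                  # normalized RMSE
        "psnr_db": 20 * np.log10(value_range) - 10 * np.log10(mse),
        "compression_ratio": original.nbytes / compressed_bytes,
        "bit_rate": 8 * compressed_bytes / original.size,            # bits per value
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    field = np.cumsum(rng.normal(size=100_000))      # a smooth-ish 1D "scientific" field
    quantized = np.round(field, 2)                   # stand-in for a lossy compressor
    print(lossy_metrics(field, quantized, compressed_bytes=field.nbytes // 10))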
34

Lee, Chun-Hee, and Chin-Wan Chung. "Compression Schemes with Data Reordering for Ordered Data". Journal of Database Management 25, no. 1 (January 2014): 1–28. http://dx.doi.org/10.4018/jdm.2014010101.

Abstract:
Although there have been many compression schemes for reducing data effectively, most schemes do not consider the reordering of data. In the case of unordered data, if the users change the data order in a given data set, the compression ratio may be improved compared to the original compression before reordering data. However, in the case of ordered data, the users need a mapping table that maps the original position to the changed position in order to recover the original order. Therefore, reordering ordered data may be disadvantageous in terms of space. In this paper, the authors consider two compression schemes, run-length encoding and bucketing scheme as bases for showing the impact of data reordering in compression schemes. Also, the authors propose various optimization techniques related to data reordering. Finally, the authors show that the compression schemes with data reordering are better than the original compression schemes in terms of the compression ratio.
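The effect the authors exploit is easy to see with plain run-length encoding: sorting (reordering) unordered data collapses it into a handful of runs, while ordered data would additionally need a stored mapping table to undo the permutation. The sketch below shows only the unordered case and none of the paper's optimization techniques.

```python
# Effect of data reordering on run-length encoding: for unordered data, sorting
# before RLE collapses equal values into long runs; for ordered data one would
# also have to keep a mapping table to restore the original order.
from itertools import groupby

def rle(values):
    """Run-length encode: [(value, run_length), ...]."""
    return [(v, len(list(g))) for v, g in groupby(values)]

if __name__ == "__main__":
    import random
    random.seed(1)
    data = [random.choice([10, 20, 30, 40]) for _ in range(10_000)]   # unordered, few distinct values
    print("runs without reordering:", len(rle(data)))
    print("runs after sorting:     ", len(rle(sorted(data))))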
35

Payani, Ali, Afshin Abdi, Xin Tian, Faramarz Fekri, and Mohamed Mohandes. "Advances in Seismic Data Compression via Learning from Data: Compression for Seismic Data Acquisition". IEEE Signal Processing Magazine 35, no. 2 (March 2018): 51–61. http://dx.doi.org/10.1109/msp.2017.2784458.

36

Hasan, M. R., M. I. Ibrahimy, S. M. A. Motakabber, M. M. Ferdaus, and M. N. H. Khan. "Comparative data compression techniques and multi-compression results". IOP Conference Series: Materials Science and Engineering 53 (20.12.2013): 012081. http://dx.doi.org/10.1088/1757-899x/53/1/012081.

37

Baklanova, Olga E., and Vladimir A. Vasilenko. "Data compression with $\Sigma\Pi$-approximations based on splines". Applications of Mathematics 38, no. 6 (1993): 405–10. http://dx.doi.org/10.21136/am.1993.104563.

38

Dahunsi, F. M., O. A. Somefun, A. A. Ponnle, and K. B. Adedeji. "Compression Techniques of Electrical Energy Data for Load Monitoring: A Review". Nigerian Journal of Technological Development 18, no. 3 (5.11.2021): 194–208. http://dx.doi.org/10.4314/njtd.v18i3.4.

Abstract:
In recent years, the electric grid has experienced increasing deployment, use, and integration of smart meters and energy monitors. These devices transmit big time-series load data representing consumed electrical energy for load monitoring. However, load monitoring presents reactive issues concerning efficient processing, transmission, and storage. To promote improved efficiency and sustainability of the smart grid, one approach to manage this challenge is applying data-compression techniques. The subject of compressing electrical energy data (EED) has received quite an active interest in the past decade to date. However, a quick grasp of the range of appropriate compression techniques remains somewhat a bottleneck to researchers and developers starting in this domain. In this context, this paper reviews the compression techniques and methods (lossy and lossless) adopted for load monitoring. Selected top-performing compression techniques metrics were discussed, such as compression efficiency, low reconstruction error, and encoding-decoding speed. Additionally reviewed is the relation between electrical energy, data, and sound compression. This review will motivate further interest in developing standard codecs for the compression of electrical energy data that matches that of other domains.
39

Hayati, Anis Kamilah, and Haris Suka Dyatmika. "THE EFFECT OF JPEG2000 COMPRESSION ON REMOTE SENSING DATA OF DIFFERENT SPATIAL RESOLUTIONS". International Journal of Remote Sensing and Earth Sciences (IJReSES) 14, no. 2 (8.01.2018): 111. http://dx.doi.org/10.30536/j.ijreses.2017.v14.a2724.

Abstract:
The huge size of remote sensing data places demands on the information technology infrastructure needed to store, manage, deliver, and process the data. To compensate for these disadvantages, compression is a possible solution. JPEG2000 provides lossless and lossy compression, with scalability for lossy compression. As the lossy compression ratio gets higher, the file size is reduced but the information loss increases. This paper investigates the effect of JPEG2000 compression on remote sensing data of different spatial resolutions. Three sets of data (Landsat 8, SPOT 6 and Pleiades) were processed with five different levels of JPEG2000 compression. Each set of data was then cropped to a certain area and analyzed using unsupervised classification. To estimate the accuracy, this paper utilized the Mean Square Error (MSE) and the Kappa coefficient of agreement. The study shows that scenes compressed with lossless compression show no difference from uncompressed scenes. Furthermore, scenes compressed with lossy compression at ratios less than 1:10 show no significant difference from uncompressed data, with Kappa coefficients higher than 0.8.
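The two scores used in the study, MSE and Cohen's Kappa, reduce to short formulas; the sketch below computes both with NumPy on synthetic stand-ins for a pixel band and the two classification maps (it is not the study's processing chain).

```python
# MSE between original and degraded pixel values, and Cohen's kappa between two
# classification maps, computed directly with NumPy on synthetic stand-in data.
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def cohens_kappa(labels_a: np.ndarray, labels_b: np.ndarray) -> float:
    classes = np.union1d(labels_a, labels_b)
    n = labels_a.size
    # Confusion matrix between the two class maps.
    cm = np.array([[np.sum((labels_a == i) & (labels_b == j)) for j in classes]
                   for i in classes], dtype=np.float64)
    p_observed = np.trace(cm) / n
    p_expected = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n ** 2
    return float((p_observed - p_expected) / (1 - p_expected))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    band = rng.integers(0, 4096, size=10_000)                # a 12-bit image band
    degraded = band + rng.integers(-3, 4, size=band.size)    # stand-in for lossy compression error
    original_map = rng.integers(0, 5, size=10_000)           # class map of the uncompressed scene
    compressed_map = original_map.copy()
    flip = rng.random(original_map.shape) < 0.05              # 5% of pixels change class
    compressed_map[flip] = rng.integers(0, 5, size=int(flip.sum()))
    print("MSE:", round(mse(band, degraded), 2),
          "kappa:", round(cohens_kappa(original_map, compressed_map), 3))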
40

Budiman, Gelar, Andriyan Bayu Suksmono, and Donny Danudirdjo. "Compressive Sampling with Multiple Bit Spread Spectrum-Based Data Hiding". Applied Sciences 10, no. 12 (24.06.2020): 4338. http://dx.doi.org/10.3390/app10124338.

Abstract:
We propose a novel data hiding method in an audio host with a compressive sampling technique. An over-complete dictionary represents a group of watermarks. Each row of the dictionary is a Hadamard sequence representing multiple bits of the watermark. Then, the singular values of the segment-based host audio in a diagonal matrix are multiplied by the over-complete dictionary, producing a lower size matrix. At the same time, we embed the watermark into the compressed audio. In the detector, we detect the watermark and reconstruct the audio. This proposed method offers not only hiding the information, but also compressing the audio host. The application of the proposed method is broadcast monitoring and biomedical signal recording. We can mark and secure the signal content by hiding the watermark inside the signal while we compress the signal for memory efficiency. We evaluate the performance in terms of payload, compression ratio, audio quality, and watermark quality. The proposed method can hide the data imperceptibly, in the range of 729–5292 bps, with a compression ratio 1.47–4.84, and a perfectly detected watermark.
41

Luzhetskyі, V. A., L. A. Savitska, and V. A. Kaplun. "SPECIALIZED DATA COMPRESSION PROCESSOR". Information technology and computer engineering 54, no. 2 (2022): 15–25. http://dx.doi.org/10.31649/1999-9941-2022-54-2-15-25.

42

Harth‐Kitzerow, Johannes, Reimar H. Leike, Philipp Arras, and Torsten A. Enßlin. "Toward Bayesian Data Compression". Annalen der Physik 533, no. 3 (8.02.2021): 2000508. http://dx.doi.org/10.1002/andp.202000508.

43

Perkins, M. G. "Data compression of stereopairs". IEEE Transactions on Communications 40, no. 4 (April 1992): 684–96. http://dx.doi.org/10.1109/26.141424.

44

Crochemore, M., F. Mignosi, A. Restivo, and S. Salemi. "Data compression using antidictionaries". Proceedings of the IEEE 88, no. 11 (November 2000): 1756–68. http://dx.doi.org/10.1109/5.892711.

45

Bailey, R. L. "Pipelining Data Compression Algorithms". Computer Journal 33, no. 4 (1.04.1990): 308–13. http://dx.doi.org/10.1093/comjnl/33.4.308.

46

Schobben, D. W. E., R. A. Beuker, and W. Oomen. "Dither and data compression". IEEE Transactions on Signal Processing 45, no. 8 (1997): 2097–101. http://dx.doi.org/10.1109/78.611218.

47

Antoniol, G., and P. Tonella. "EEG data compression techniques". IEEE Transactions on Biomedical Engineering 44, no. 2 (1997): 105–14. http://dx.doi.org/10.1109/10.552239.

48

Corda, Roberto. "Digital holography data compression". Telfor Journal 11, no. 1 (2019): 52–57. http://dx.doi.org/10.5937/telfor1901052c.

49

Wylie, F. "Digital audio data compression". Electronics & Communication Engineering Journal 7, no. 1 (1.02.1995): 5–10. http://dx.doi.org/10.1049/ecej:19950103.

50

Keeve, Erwin, Stefan Schaller, Sabine Girod, and Bernd Girod. "Adaptive surface data compression". Signal Processing 59, no. 2 (June 1997): 211–20. http://dx.doi.org/10.1016/s0165-1684(97)00047-9.
