To see other types of publications on this topic, follow the link: Compression scheme.

Journal articles on the topic 'Compression scheme'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Compression scheme.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jha, Mithilesh Kumar, Brejesh Lall, and Sumantra Dutta Roy. "Statistically Matched Wavelet Based Texture Synthesis in a Compressive Sensing Framework." ISRN Signal Processing 2014 (February 17, 2014): 1–18. http://dx.doi.org/10.1155/2014/838315.

Abstract:
This paper proposes a statistically matched wavelet based textured image coding scheme for efficient representation of texture data in a compressive sensing (CS) framework. Statistically matched wavelet based data representation concentrates most of the captured energy in the approximation subspace, while very little information remains in the detail subspace. We encode not the full-resolution statistically matched wavelet subband coefficients but only the approximation subband coefficients (LL), using a standard image compression scheme such as JPEG2000. The detail subband coefficients, that is, HL, LH, and HH, are jointly encoded in a compressive sensing framework. Compressive sensing has shown that a sampling rate lower than the Nyquist rate can be achieved with acceptable reconstruction quality. The experimental results demonstrate that, at a similar compression ratio, the proposed scheme provides better PSNR and MOS than conventional DWT-based image compression schemes in a CS framework and other wavelet based texture synthesis schemes such as HMT-3S.
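The pipeline this abstract describes can be sketched in a few lines of numpy (a reader's illustration, not the authors' code: a plain Haar step stands in for the statistically matched wavelet, and random Gaussian projections stand in for the CS acquisition):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))  # stand-in for a texture image

# Single-level 2-D Haar transform: LL is the approximation band that the
# abstract says would be coded with a standard codec such as JPEG2000;
# LH/HL/HH hold the detail coefficients.
a = (img[0::2, :] + img[1::2, :]) / 2.0
d = (img[0::2, :] - img[1::2, :]) / 2.0
LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
HH = (d[:, 0::2] - d[:, 1::2]) / 2.0

# The detail bands are jointly sampled below the Nyquist rate:
# m random projections of the stacked detail coefficients.
details = np.concatenate([LH.ravel(), HL.ravel(), HH.ravel()])
m = len(details) // 4                      # 4x undersampling (assumed)
Phi = rng.standard_normal((m, len(details)))
y = Phi @ details                          # the CS measurements

print(LL.shape, len(details), y.shape)
```

Reconstruction would then recover the detail bands from `y` with a sparse solver, which is the part the matched wavelet is meant to make easy.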
2

Gao, Jian Qiang, Shao Dong Sun, Li Kun Wang, and Ning Wang. "The Analysis on the Scheme of the Flue Gas Compressor of a 300 MW Oxy-Fuel Pulverized Coal Fired Boiler." Applied Mechanics and Materials 602-605 (August 2014): 638–41. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.638.

Abstract:
The compression of the flue gas is one of the most critical processes of CO2 recycling in oxy-fuel combustion technology. Taking a 300 MW oxy-fuel pulverized-coal-fired boiler as a case study, the selection of compressors for its flue gas compression system is analyzed by thermodynamic calculation. In this paper, Scheme 1 runs two centrifugal compressors of the same specification in parallel, while Scheme 2 operates an axial compressor and a piston compressor in series. The analysis shows that Scheme 2 has advantages over Scheme 1 in energy consumption and in water separation from the flue gas in the intercooler.
3

Xu, Haoran, Petr M. Anisimov, Bruce E. Carlsten, Leanne D. Duffy, Quinn R. Marksteiner, and River R. Robles. "X-ray Free Electron Laser Accelerator Lattice Design Using Laser-Assisted Bunch Compression." Applied Sciences 13, no. 4 (February 10, 2023): 2285. http://dx.doi.org/10.3390/app13042285.

Abstract:
We report the start-to-end modeling of our accelerator lattice design employing a laser-assisted bunch compression (LABC) scheme in an X-ray free electron laser (XFEL), using the proposed Matter-Radiation Interactions in Extremes (MaRIE) XFEL parameters. The accelerator lattice utilized a two-stage bunch compression scheme, with the first bunch compressor performing a conventional bulk compression enhancing the beam current from 20 A to 500 A, at 750 MeV. The second bunch compression was achieved by modulating the beam immediately downstream of the first bunch compressor by a laser with 1-μm wavelength in a laser modulator, accelerating the beam to the final energy of 12 GeV, and compressing the individual 1-μm periods of the modulated beam into a sequence of microbunches with 3-kA current spikes by the second bunch compressor. The LABC architecture presented was developed based on the scheme of enhanced self-amplified spontaneous emission (ESASE), but operates in a disparate regime of parameters. Enabled by the novel technology of the cryogenic normal-conducting radiofrequency photoinjector, we investigated an electron beam with ultra-low emittance at the starting point of the lattice design. Our work aims at mitigating the well-known beam instabilities to preserve the beam emittance and suppress the energy spread growth.
4

Armenta Ramade, Álvaro, and Arturo Serrano Santoyo. "802.11g multilayer header compression for VoIP in remote rural contexts." Revista Facultad de Ingeniería Universidad de Antioquia, no. 71 (March 4, 2014): 101–14. http://dx.doi.org/10.17533/udea.redin.15011.

Abstract:
This article presents a multilayer header compression scheme for VoIP packets in 802.11 wireless networks that does not require context synchronization between the compressor and decompressor units. The reduction of the header size is accomplished through static-field compression of the RTP/UDP/IP headers in conjunction with 8-bit virtual addressing in the 802.11 MAC layer. Considering the RTP/UDP/IP/MAC header fields, this scheme achieves 48% header compression efficiency, compared with the 41% provided by the CRTP and RoHC schemes. Because our proposal does not require context synchronization, the decompressor can regenerate the headers even under high packet-loss conditions.
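The quoted efficiency figure is straightforward to reproduce arithmetically once header sizes are fixed; the byte counts below are typical protocol values assumed for illustration, not numbers taken from the article:

```python
# Illustrative per-packet header sizes in bytes (typical values, assumed):
IP, UDP, RTP, MAC_80211 = 20, 8, 12, 24
original = IP + UDP + RTP + MAC_80211            # 64 bytes per packet

# Suppose static-field compression plus 8-bit virtual MAC addressing
# shrinks the combined header to `compressed` bytes (hypothetical figure).
compressed = 33
efficiency = 1 - compressed / original
print(f"header compression efficiency: {efficiency:.0%}")  # -> 48%
```

Efficiency here means the fraction of header bytes removed, which is how a 48% figure can beat CRTP/RoHC's 41% on the same field set.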
5

Waleed, Muhammad, Tai-Won Um, Aftab Khan, and Ali Khan. "On the Utilization of Reversible Colour Transforms for Lossless 2-D Data Compression." Applied Sciences 10, no. 3 (January 31, 2020): 937. http://dx.doi.org/10.3390/app10030937.

Abstract:
This study demonstrates that Reversible Colour Transforms (RCTs) in conjunction with the Bi-level Burrows–Wheeler Compression Algorithm (BBWCA) allow high-level lossless image compression. The RCT transformation yields far more correlated image information among neighbouring pixels than the RGB colour space, which aids the Burrows–Wheeler Transform (BWT) based compression scheme and achieves high compression ratios in the subsequent stages. Validation was performed by comparing the proposed scheme against a range of benchmark schemes, and its performance exceeds theirs. The proposed compression outperforms techniques developed exclusively for 2-D electrocardiogram (ECG), RASTER map, and Color Filter Array (CFA) image compression. The proposed system shows no dependency on parameters such as image size, image type, or the medium in which the image is captured. A comprehensive analysis concludes that the proposed scheme achieves a significant increase in compression while having complexity comparable to the various benchmark schemes.
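For readers unfamiliar with RCTs, the integer reversible colour transform from lossless JPEG 2000 is a concrete example of the idea (whether the paper uses this exact variant is not stated here); its floor arithmetic makes it exactly invertible:

```python
def rct_forward(r, g, b):
    """JPEG 2000 reversible colour transform (integer, lossless)."""
    y = (r + 2 * g + b) // 4   # luma-like component (floor division)
    u = b - g                  # chroma difference B - G
    v = r - g                  # chroma difference R - G
    return y, u, v

def rct_inverse(y, u, v):
    g = y - (u + v) // 4
    return v + g, g, u + g     # r, g, b

# Round-trip check over a few sample pixels.
for rgb in [(0, 0, 0), (255, 0, 128), (12, 200, 99)]:
    assert rct_inverse(*rct_forward(*rgb)) == rgb
print("lossless round trip OK")
```

Because the inverse recovers every pixel bit-exactly, the transform costs nothing in fidelity while decorrelating the channels ahead of the BWT stage.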
6

Moreno-Alvarado, Rodolfo, Eduardo Rivera-Jaramillo, Mariko Nakano, and Hector Perez-Meana. "Simultaneous Audio Encryption and Compression Using Compressive Sensing Techniques." Electronics 9, no. 5 (May 22, 2020): 863. http://dx.doi.org/10.3390/electronics9050863.

Abstract:
The development of coding schemes with the capacity to simultaneously encrypt and compress audio signals is a subject of active research because of the increasing necessity of transmitting sensitive audio information over insecure communication channels. Several schemes have been developed that first compress the digital information and subsequently encrypt the result. These schemes compress and encrypt the information efficiently; however, they may compromise the information, as it can be accessed before encryption. To overcome this problem, a compressive sensing-based system that simultaneously compresses and encrypts audio signals is proposed, in which the audio signal is segmented into frames of 1024 samples and transformed into a sparse frame using the discrete cosine transform (DCT). Each frame is then multiplied by a different sensing matrix generated using a chaotic mixing scheme, which allows the proposed scheme to satisfy the extended Wyner secrecy (EWS) criterion. The evaluation results obtained using several genres of audio signals show that the proposed system simultaneously compresses and encrypts audio signals while satisfying the EWS criterion.
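A minimal numpy sketch of the described frame/DCT/chaotic-sensing pipeline, with assumed logistic-map parameters and toy frame content (illustrative only, not the authors' construction):

```python
import numpy as np

N, M = 1024, 256          # frame length, measurements (4x compression)

def logistic_bits(x0, n, r=3.99):
    """Chaotic logistic-map sequence thresholded to +/-1 entries."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = 1.0 if x > 0.5 else -1.0
    return out

# Unnormalized DCT-II matrix used to sparsify each audio frame.
k = np.arange(N)
D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))

frame = np.sin(2 * np.pi * 440 * np.arange(N) / 8000)  # toy audio frame
s = D @ frame                                          # sparse-ish coeffs

# A fresh chaotic +/-1 sensing matrix per frame: the seed acts as the key.
Phi = logistic_bits(x0=0.3141, n=M * N).reshape(M, N) / np.sqrt(M)
y = Phi @ s               # measurements: compressed and keyed at once
print(y.shape)
```

Only a holder of the seed can rebuild `Phi` and run a sparse recovery, which is what lets one operation serve as both compression and encryption.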
7

Chen, Yang, and Zheng Qin. "Region-Based Image-Fusion Framework for Compressive Imaging." Journal of Applied Mathematics 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/219540.

Abstract:
A novel region-based image-fusion framework for compressive imaging (CI) and its implementation scheme are proposed. Unlike previous works on conventional image fusion, we consider both the compression capability on the sensor side and intelligent understanding of the image contents in the image fusion. Firstly, compressed sensing theory and normalized cut theory are introduced. Then a region-based image-fusion framework for compressive imaging is proposed and its corresponding fusion scheme is constructed. Experimental results demonstrate that the proposed scheme delivers superior performance over traditional compressive image-fusion schemes in terms of both objective metrics and visual quality.
8

Wang, Zi Long, Tu Ji, Mei Song Zheng, Jun Ye Wang, and Li Jian Li. "An Efficient Two-Dimension BIST Compression Approach." Applied Mechanics and Materials 713-715 (January 2015): 552–57. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.552.

Abstract:
In this paper a two-dimensional BIST compression scheme is presented; the proposed scheme is used to drive down the number of deterministic vectors needed to achieve complete fault coverage in BIST applications. By introducing shifting compression and input reduction, vertical and horizontal compression are realized, respectively, to achieve two-dimensional BIST compression. Experimental results show that BIST shifting compression based on input reduction can achieve compression rates as high as 99%. Comparisons with previously presented competitive schemes indicate that the proposed scheme provides an efficient BIST compression approach with lower storage overhead.
9

Acharya, Tinku, and Amar Mukherjee. "High-Speed Parallel VLSI Architectures for Image Decorrelation." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 02 (April 1995): 343–65. http://dx.doi.org/10.1142/s021800149500016x.

Abstract:
We present a new high-speed parallel architecture and its VLSI implementation to design special-purpose hardware for real-time lossless image compression/decompression using a decorrelation scheme. The proposed architecture can easily be implemented using state-of-the-art VLSI technology. The hardware yields a high compression rate. A prototype 1-micron VLSI chip based on this architectural idea has been designed. The scheme compares favourably with the lossless JPEG standard image compression schemes. We also discuss the parallelization issues of the lossless JPEG standard still-image compression schemes and their difficulties.
10

Hu, Yu-Chen. "Low Bit-Rate Image Compression Schemes Based on Vector Quantization." International Journal of Image and Graphics 05, no. 04 (October 2005): 745–64. http://dx.doi.org/10.1142/s0219467805001963.

Abstract:
Three image compression schemes based on vector quantization are proposed in this paper. The block similarity property among neighboring image blocks is exploited in these schemes to cut down the bit rate of the vector quantization scheme. The first scheme exploits the correlation between the encoded block to the left and the encoded block directly above the current processing block. In the second scheme, the relative addressing technique is incorporated into the encoding procedure. Finally, the third scheme introduces a simple technique to reduce the required bit rate with only a slight reduction in image quality. According to the experimental results, these proposed schemes not only reduce storage costs but also achieve good reconstructed image quality. Furthermore, the computational cost of the encoding/decoding procedures of these schemes is less than that of the conventional vector quantization scheme. In other words, these schemes are suitable for the compression of digital images.
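The first scheme's neighbour-correlation idea can be sketched as follows (a hypothetical simplification: one flag bit when the current block reuses the left or above neighbour's codeword index, otherwise a flag plus a full index):

```python
import numpy as np

rng = np.random.default_rng(1)
codebook = rng.random((256, 16))            # 256 codewords for 4x4 blocks

def vq_index(block):
    """Index of the nearest codeword (plain VQ spends 8 bits on this)."""
    return int(np.argmin(((codebook - block.ravel()) ** 2).sum(axis=1)))

def encode_row(row_above, row):
    """Spend 1 flag bit when a block reuses the left/above neighbour's
    index, otherwise 1 flag bit plus a full 8-bit index."""
    total, prev = 0, None
    for j, blk in enumerate(row):
        idx = vq_index(blk)
        total += 1 if idx in {prev, vq_index(row_above[j])} else 1 + 8
        prev = idx
    return total

row_above = [rng.random((4, 4)) for _ in range(8)]
row = [b + 0.01 * rng.random((4, 4)) for b in row_above]  # similar blocks
bits = encode_row(row_above, row)
print(bits, "bits for 8 blocks vs", 8 * 8, "bits for plain VQ")
```

When neighbouring blocks really are similar, most blocks cost a single bit, which is where the bit-rate reduction over plain VQ comes from.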
11

Abliz, Wayit, Hao Wu, Maihemuti Maimaiti, Jiamila Wushouer, Kahaerjiang Abiderexiti, Tuergen Yibulayin, and Aishan Wumaier. "A Syllable-Based Technique for Uyghur Text Compression." Information 11, no. 3 (March 23, 2020): 172. http://dx.doi.org/10.3390/info11030172.

Abstract:
To improve the utilization of text storage resources and the efficiency of data transmission, we propose two syllable-based Uyghur text compression coding schemes. First, according to statistics of the syllable coverage of the corpus text, we constructed 12-bit and 16-bit syllable code tables and added commonly used symbols, such as punctuation marks and ASCII characters, to the code tables. To enable the coding scheme to process Uyghur texts mixed with other language symbols, we introduced a flag code in the compression process to distinguish the Unicode encodings that were not in the code table. The experiments showed that the 12-bit coding scheme had an average compression ratio of 0.3 on Uyghur text less than 4 KB in size and that the 16-bit coding scheme had an average compression ratio of 0.5 on text less than 2 KB in size. Our compression schemes outperformed GZip, BZip2, and the LZW algorithm on short text and can be effectively applied to the compression of Uyghur short text for storage and applications.
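The reported ratios are consistent with ratio = compressed size / original size; the per-syllable figures below are assumptions for illustration, not measurements from the article:

```python
# Back-of-the-envelope check of the reported ratios (all figures below
# are illustrative assumptions, not measurements from the article).
utf8_bytes_per_char = 2          # Uyghur Arabic-script letters in UTF-8
chars_per_syllable = 2.5         # assumed average syllable length
code_bits = 12                   # 12-bit syllable code table

original = chars_per_syllable * utf8_bytes_per_char   # bytes per syllable
compressed = code_bits / 8                            # bytes per syllable
print(f"compression ratio ~ {compressed / original:.1f}")  # -> 0.3
```

Under these assumed figures a 12-bit code per multi-byte syllable lands right at the reported 0.3 ratio; the 16-bit table trades a larger syllable inventory for a higher ratio.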
12

Sriraam, N., and C. Eswaran. "EEG Signal Compression Using Radial Basis Neural Networks." Journal of Mechanics in Medicine and Biology 04, no. 02 (June 2004): 143–52. http://dx.doi.org/10.1142/s0219519404000928.

Abstract:
This paper describes a two-stage lossless compression scheme for electroencephalographic (EEG) signals using radial basis neural network predictors. Two variants of the radial basis network, namely, the radial basis function network and the generalized regression neural network, are used in the first stage, and their performances are evaluated in terms of compression ratio. The network is trained using two training schemes, namely, the single block scheme and the block adaptive scheme. The compression ratios achieved by these networks when used along with arithmetic encoders in a two-stage compression scheme are obtained for different EEG test files. It is found that the generalized regression neural network performs better than other neural network models, such as multilayer perceptrons and the Elman network, and linear predictors such as FIR filters.
13

Bajpai, Shrish, Harsh Vikram Singh, and Naimur Rahman Kidwai. "3D modified wavelet block tree coding for hyperspectral images." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 2 (August 1, 2019): 1001. http://dx.doi.org/10.11591/ijeecs.v15.i2.pp1001-1008.

Abstract:
A novel wavelet-based efficient hyperspectral image compression scheme for low-memory sensors is proposed. The proposed scheme uses the 3D dyadic wavelet transform to exploit inter-subband and intra-subband correlation among the wavelet coefficients. By reconstructing the transformed image cube and taking the difference between frames, it increases coding efficiency and reduces the memory requirement and complexity of hyperspectral compression schemes in comparison with other state-of-the-art compression schemes.
14

Hu, Yu-Chen, and Chin-Chen Chang. "A new lossless compression scheme based on Huffman coding scheme for image compression." Signal Processing: Image Communication 16, no. 4 (November 2000): 367–72. http://dx.doi.org/10.1016/s0923-5965(99)00064-8.

15

Yang, Chen, Ping Pan, and Qun Ding. "Image Encryption Scheme Based on Mixed Chaotic Bernoulli Measurement Matrix Block Compressive Sensing." Entropy 24, no. 2 (February 14, 2022): 273. http://dx.doi.org/10.3390/e24020273.

Abstract:
Many image encryption schemes based on compressive sensing have poor reconstructed image quality when the compression ratio is low, as well as difficulty in hardware implementation. To address these problems, we propose an image encryption algorithm based on a mixed chaotic Bernoulli measurement matrix and block compressive sensing. A new chaotic measurement matrix was designed using the Chebyshev map and logistic map, and the image was compressed in blocks to obtain the measurement values. The Chebyshev map and logistic map were also used to generate encryption sequences, and the measurement values were encrypted by non-repetitive scrambling as well as a two-way diffusion algorithm based on GF(257) applied to the measurement value matrix. The security of the encryption system was further improved by generating the Secure Hash Algorithm-256 (SHA-256) hash of the original image to calculate the initial values of the chaotic maps for the encryption process. The scheme uses two one-dimensional maps and is easier to implement in hardware. Simulation and performance analysis showed that the proposed image compression–encryption scheme improves the peak signal-to-noise ratio of the reconstructed image at a low compression ratio and offers good resistance against various attacks.
16

Guerra, Aníbal, Jaime Lotero, José Édinson Aedo, and Sebastián Isaza. "Tackling the Challenges of FASTQ Referential Compression." Bioinformatics and Biology Insights 13 (January 2019): 117793221882137. http://dx.doi.org/10.1177/1177932218821373.

Abstract:
The exponential growth of genomic data has recently motivated the development of compression algorithms to tackle the storage capacity limitations in bioinformatics centers. Referential compressors could theoretically achieve much higher compression than their non-referential counterparts; however, the latest tools have not yet been able to harness that potential. To reach that goal, an efficient encoding model to represent the differences between the input and the reference is needed. In this article, we introduce a novel approach for referential compression of FASTQ files. The core of our compression scheme is a referential compressor based on the combination of local alignments with binary encoding optimized for long reads. Here we present the algorithms and performance tests developed for our read compression algorithm, named UdeACompress. Our compressor achieved the best results when compressing long reads and competitive compression ratios for shorter reads when compared to the best programs in the state of the art. As an added value, it also showed reasonable execution times and memory consumption in comparison with similar tools.
17

Kolo, Jonathan Gana, S. Anandan Shanmugam, David Wee Gin Lim, Li-Minn Ang, and Kah Phooi Seng. "An Adaptive Lossless Data Compression Scheme for Wireless Sensor Networks." Journal of Sensors 2012 (2012): 1–20. http://dx.doi.org/10.1155/2012/539638.

Abstract:
Energy is an important consideration in the design and deployment of wireless sensor networks (WSNs) since sensor nodes are typically powered by batteries with limited capacity. Since the communication unit on a wireless sensor node is the major power consumer, data compression is one of the possible techniques that can help reduce the amount of data exchanged between wireless sensor nodes, resulting in power savings. However, wireless sensor networks possess significant limitations in communication, processing, storage, bandwidth, and power. Thus, any data compression scheme proposed for WSNs must be lightweight. In this paper, we present an adaptive lossless data compression (ALDC) algorithm for wireless sensor networks. Our proposed ALDC scheme performs compression losslessly using multiple code options. Adaptive compression schemes allow compression to adjust dynamically to a changing source. The data sequence to be compressed is partitioned into blocks, and the optimal compression scheme is applied to each block. Using various real-world sensor datasets, we demonstrate the merits of our proposed compression algorithm in comparison with other recently proposed lossless compression algorithms for WSNs.
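The per-block "multiple code options" idea can be sketched with Rice codes standing in for ALDC's actual option set (a hypothetical simplification: cost every option, keep the cheapest, and signal the choice):

```python
def rice_cost(block, k):
    """Total bits to Rice-code non-negative residuals with parameter k:
    unary quotient (v >> k, plus a stop bit) and k remainder bits."""
    return sum((v >> k) + 1 + k for v in block)

def choose_option(block, ks=(0, 1, 2, 3)):
    """Per-block adaptive selection: try every code option, keep the
    cheapest, and spend 2 bits to signal the chosen option."""
    k = min(ks, key=lambda k: rice_cost(block, k))
    return k, 2 + rice_cost(block, k)

samples = [0, 1, 3, 2, 40, 37, 41, 39]      # toy sensor residuals
for block in (samples[:4], samples[4:]):
    k, bits = choose_option(block)
    print(block, "-> option k =", k, ",", bits, "bits")
```

The small-residual block picks a short-code option while the large-residual block picks a longer one, which is exactly the adaptation to a changing source that the abstract describes.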
18

Kryshtopa, S. I., L. I. Kryshtopa, M. М. Hnyp, I. M. Mykytiy, and M. M. Tseber. "Development of energy efficient system of gas cooling of mobile diesel compressor stations of oil and gas industry." Oil and Gas Power Engineering, no. 1(33) (September 3, 2020): 81–89. http://dx.doi.org/10.31471/1993-9868-2020-1(33)-81-89.

Abstract:
The developments of domestic and foreign specialists in the field of improving the energy efficiency of mobile diesel compressor stations in the oil and gas industry are reviewed. The disadvantages of existing mobile diesel compressor stations in terms of energy efficiency are identified. Theoretical studies of energy-efficiency measures and of the designs of existing mobile diesel compressor stations have been carried out. It is shown that, for efficient operation of a compressor station, it is rational to increase the load of the compressors and to operate the equipment in energy-efficient modes. Ways to improve the energy efficiency of mobile diesel compressor stations through various options are proposed. It is established that the optimal way to increase energy efficiency is to cool the gas to lower temperatures than the existing gas cooling systems of compressor stations provide. It is found that as the pressure ratio increases, the savings in gas compression for a compressor with the proposed intermediate gas cooling scheme also increase. An indicator diagram of compressor stations with various cooling schemes is constructed. The scheme of the existing multi-stage gas cooling system of mobile diesel compressor stations is investigated. An improved energy-efficient cooling system for compressed gas, using the heat of compression of the gases, is proposed. It is concluded that there is a reserve for reducing the energy costs of compression through the use of a coolant with a temperature significantly lower than the ambient temperature. The working processes of mobile diesel compressor stations under the existing and proposed schemes are described.
19

Amir Sultan, Duha. "Better Adaptive Text Compression Scheme." JOURNAL OF EDUCATION AND SCIENCE 27, no. 2 (March 1, 2018): 48–57. http://dx.doi.org/10.33899/edusj.2018.147575.

20

A. Ahmed Al-Obaidi, Yasir, and Hadeel N. Abdullah. "Image Compression Using Lifting Scheme." Engineering and Technology Journal 28, no. 17 (August 1, 2010): 5455–67. http://dx.doi.org/10.30684/etj.28.17.5.

21

Sultana, Tangina, and Young-Koo Lee. "gRDF: An Efficient Compressor with Reduced Structural Regularities That Utilizes gRePair." Sensors 22, no. 7 (March 26, 2022): 2545. http://dx.doi.org/10.3390/s22072545.

Abstract:
The explosive volume of semantic data published in the Resource Description Framework (RDF) data model demands efficient management and compression with a better compression ratio and runtime. Although extensive work has been carried out on compressing RDF datasets, existing compressors do not perform well in all dimensions and rarely exploit the graph patterns and structural regularities of real-world datasets. Moreover, a variety of existing approaches reduce the size of a graph by using a grammar-based graph compression algorithm. In this study, we introduce a novel approach named gRDF (graph repair for RDF) that uses gRePair, one of the most efficient grammar-based graph compression schemes, to compress RDF datasets. In addition, we have improved the performance of HDT (header-dictionary-triple), an efficient approach for compressing RDF datasets based on structural properties, by introducing modified HDT (M-HDT), which can detect frequent graph patterns in a single pass over the dataset by employing a data-structure-oriented approach. In our proposed system, we use M-HDT for indexing the nodes and edge labels. We then employ the gRePair algorithm to identify a grammar from the RDF graph. Afterward, the system improves the performance of k2-trees by introducing a more efficient algorithm to create the trees and serialize the RDF datasets. Our experiments affirm that the proposed gRDF scheme achieves approximately 26.12%, 13.68%, 6.81%, 2.38%, and 12.76% better compression ratios than the most prominent state-of-the-art schemes, namely HDT, HDT++, k2-trees, RDF-TR, and gRePair, on real-world datasets. Moreover, the processing efficiency of our proposed scheme also outperforms the others.
22

Eldstål-Ahrens, Albin, Angelos Arelakis, and Ioannis Sourdis. "L2C: Combining Lossy and Lossless Compression on Memory and I/O." ACM Transactions on Embedded Computing Systems 21, no. 1 (January 31, 2022): 1–27. http://dx.doi.org/10.1145/3481641.

Abstract:
In this article, we introduce L2C, a hybrid lossy/lossless compression scheme applicable both to the memory subsystem and to the I/O traffic of a processor chip. L2C employs general-purpose lossless compression and combines it with state-of-the-art lossy compression to achieve compression ratios up to 16:1 and to improve the utilization of the chip's bandwidth resources. Compressing memory traffic yields lower memory access time, improving system performance and energy efficiency. Compressing I/O traffic offers several benefits for resource-constrained systems, including more efficient storage and networking. We evaluate L2C as a memory compressor in simulation with a set of approximation-tolerant applications. L2C improves baseline execution time by an average of 50% and total system energy consumption by 16%. Compared to the lossy and lossless current state-of-the-art memory compression approaches, L2C improves execution time by 9% and 26%, respectively, and reduces system energy costs by 3% and 5%, respectively. I/O compression efficacy is evaluated using a set of real-life datasets. L2C achieves compression ratios of up to 10.4:1 for a single dataset and on average about 4:1, while introducing no more than 0.4% error.
23

Rabiul Islam, Sheikh Md, Xu Huang, and Keng Liang Ou. "Image Compression Based on Compressive Sensing Using Wavelet Lifting Scheme." International journal of Multimedia & Its Applications 7, no. 1 (February 28, 2015): 01–16. http://dx.doi.org/10.5121/ijma.2015.7101.

24

Jha, Mithilesh Kumar, Brejesh Lall, and Sumantra Roy. "DEMD-Based Image Compression Scheme in a Compressive Sensing Framework." Journal of Pattern Recognition Research 9, no. 1 (2014): 64–78. http://dx.doi.org/10.13176/11.580.

25

Sheikh Akbari, A., and P. Bagheri Zadeh. "Compressive sampling and wavelet-based multi-view image compression scheme." Electronics Letters 48, no. 22 (2012): 1403. http://dx.doi.org/10.1049/el.2012.2613.

26

alZahir, Saif, and Syed M. Naqvi. "A Hybrid Lossless-Lossy Binary Image Compression Scheme." International Journal of Computer Vision and Image Processing 3, no. 4 (October 2013): 37–50. http://dx.doi.org/10.4018/ijcvip.2013100103.

Abstract:
In this paper, the authors present a binary image compression scheme that can be used for either lossless or lossy compression requirements. This scheme contains five new contributions. The lossless component of the scheme partitions the input image into a number of non-overlapping rectangles using a new line-by-line method. The upper-left and lower-right vertices of each rectangle are identified, and their coordinates are efficiently encoded using three methods of representation and compression. The lossy component, on the other hand, provides higher compression through two techniques. (1) It reduces the number of rectangles in the input image using our mathematical regression models; these models guarantee image quality so that the rectangle reduction does not produce visual distortion in the image. The models were obtained through subjective tests and regression analysis on a large set of binary images. (2) Further compression gain is achieved by discarding isolated pixels and 1-pixel rectangles from the image. Simulation results show that the proposed schemes provide significant improvements over previously published work for both the lossy and lossless components.
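The lossless component's line-by-line rectangle partition can be sketched greedily (a simplified stand-in for the authors' method, which additionally optimizes how the vertex coordinates are encoded):

```python
def partition_rectangles(img):
    """Greedy line-by-line partition of a binary image into
    non-overlapping all-ones rectangles; each rectangle is kept as its
    upper-left and lower-right vertices."""
    img = [row[:] for row in img]          # work on a copy
    h, w = len(img), len(img[0])
    rects = []
    for r in range(h):
        c = 0
        while c < w:
            if img[r][c] == 1:
                c2 = c                     # grow right along the run
                while c2 + 1 < w and img[r][c2 + 1] == 1:
                    c2 += 1
                r2 = r                     # grow down while rows match
                while r2 + 1 < h and all(
                    img[r2 + 1][j] == 1 for j in range(c, c2 + 1)
                ):
                    r2 += 1
                for i in range(r, r2 + 1):  # consume the rectangle
                    for j in range(c, c2 + 1):
                        img[i][j] = 0
                rects.append(((r, c), (r2, c2)))
                c = c2 + 1
            else:
                c += 1
    return rects

img = [[1, 1, 0, 1],
       [1, 1, 0, 0],
       [0, 0, 1, 1]]
print(partition_rectangles(img))
```

Each rectangle is then representable by just two vertices, which is what makes the subsequent coordinate encoding compact.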
27

Juan, Chia-Chen, and Chin-Chen. "Reversible Steganographic Scheme for AMBTC-Compressed Image Based on (7,4) Hamming Code." Symmetry 11, no. 10 (October 3, 2019): 1236. http://dx.doi.org/10.3390/sym11101236.

Abstract:
In recent years, compression steganography technology has attracted the attention of many scholars. Among image compression methods, absolute moment block truncation coding (AMBTC) is a simple and effective one. Most AMBTC-based reversible data hiding (RDH) schemes do not guarantee that the stego AMBTC compression codes can be decoded by a conventional AMBTC decoder; in other words, they do not belong to the Type I class of AMBTC-based RDH schemes and easily attract malicious users' attention. To solve this problem and enhance the hiding capacity, we used the (7,4) Hamming code to design a Type I AMBTC-based RDH scheme in this paper. To provide the reversibility feature, we designed a prediction method and a judgement mechanism to successfully select the embeddable blocks during the data embedding phase and the data extraction and recovery phase. Comparing our approach with other BTC-based schemes confirms that our hiding capacity is increased while maintaining the limited size of the compression codes and acceptable image quality of the stego AMBTC-compressed images.
28

Motaung, William B., Kingsley A. Ogudo, and Chabalala S. Chabalala. "Optimal Video Compression Parameter Tuning for Digital Video Broadcasting (DVB) using Deep Reinforcement Learning." International Conference on Intelligent and Innovative Computing Applications 2022 (December 31, 2022): 270–76. http://dx.doi.org/10.59200/iconic.2022.030.

Full text
Abstract:
DVB (digital video broadcasting) has undergone an enormous paradigm shift, especially through internet streaming that utilizes multiple channels (i.e., secured hypertext transfer protocols). However, due to the limitations of the current communication network infrastructure, video signals need to be compressed before transmission. Whereas most recent research has focused on assessing video quality, little to no work has addressed improving the compression processes of digital video signals in lightweight DVB setups. This study provides a video compression strategy (DRL-VC) that employs deep reinforcement learning to learn the suitable parameters for digital video signal compression. The problem is formulated as a multi-objective one, considering the structural similarity index metric (SSIM), the delay time, and the peak signal-to-noise ratio (PSNR). Based on the findings of the experiments, our proposed scheme increases bitrate savings at a constant PSNR. Results also show that our scheme performs better than the benchmarked compression schemes. Finally, the root-mean-square error values show a consistent rate across different video streams, indicating the validity of our proposed compression scheme.
APA, Harvard, Vancouver, ISO, and other styles
29

Lee, S. "Density ratios in compressions driven by radiation pressure." Laser and Particle Beams 6, no. 3 (August 1988): 597–606. http://dx.doi.org/10.1017/s026303460000553x.

Full text
Abstract:
It has been recently suggested (Hora & Miley 1984) that in the cannonball scheme of laser compression the pellet may be considered to be compressed by the ‘brute force’ of the radiation pressure. For such a radiation-driven compression, this paper applies an energy balance method to give an equation fixing the radius compression ratio κ, which is a key parameter for such intense compressions. A shock model is used to yield specific results. For a square-pulse driving power compressing a spherical pellet with a specific heat ratio of 5/3, a density compression ratio Γ of 27 is computed. Double (stepped) pulsing with linearly rising power enhances Γ to 1750. The value of Γ does not depend on the absolute magnitude of the piston power, as long as this is large enough. Further enhancement of compression by multiple (stepped) pulsing becomes obvious. The enhanced compression increases the energy gain factor G for a 100 μm DT pellet driven by a radiation power of 10^16 W from 6 for a square pulse with 0.5 MJ absorbed energy to 90 for a double (stepped) linearly rising pulse with an absorbed energy of 0.4 MJ, assuming perfect coupling efficiency.
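The quoted square-pulse figure is consistent with the cubic relation between the radius and density compression ratios in spherical geometry (an inference from the abstract, not an explicit statement in it):

\[
\Gamma = \frac{\rho_{\text{final}}}{\rho_0} = \left(\frac{r_0}{r_{\text{final}}}\right)^3 = \kappa^3,
\qquad \kappa = 3 \;\Rightarrow\; \Gamma = 27,
\]

so the reported Γ = 27 corresponds to a radius compression ratio κ of 3.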
APA, Harvard, Vancouver, ISO, and other styles
30

Cai, Changchun, Enjian Bai, Xue-Qin Jiang, and Yun Wu. "Simultaneous Audio Encryption and Compression Using Parallel Compressive Sensing and Modified Toeplitz Measurement Matrix." Electronics 10, no. 23 (November 24, 2021): 2902. http://dx.doi.org/10.3390/electronics10232902.

Full text
Abstract:
With the explosive growth of voice information interaction, there is an urgent need for safe and effective compression and transmission methods. In this paper, compressive sensing is used to realize the compression and encryption of speech signals. Firstly, a scheme is proposed in which a linear feedback shift register, combined with inner products, generates the measurement matrix. Secondly, we adopt a new parallel compressive sensing technique to tremendously improve the processing efficiency. Further, the two communicating parties adopt a public-key cryptosystem to share the key safely and select a different measurement matrix for each frame of the voice signal to ensure security. This scheme greatly reduces the difficulty of generating the measurement matrix in hardware and improves the processing efficiency. Compared with the existing scheme by Moreno-Alvarado et al., our scheme reduces the execution time by approximately 8%, and the mean square error (MSE) is also reduced by approximately 5%.
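As an illustration of the abstract's idea of deriving a structured measurement matrix from a linear feedback shift register, here is a minimal sketch; the tap polynomial, register width, and Toeplitz construction below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def lfsr_bits(seed, taps, n):
    """Fibonacci LFSR: emit n pseudo-random bits.
    seed: nonzero initial register state; taps: feedback bit positions."""
    state = seed
    msb = max(taps)  # feedback re-enters at the register's top bit
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << msb)
    return out

def toeplitz_measurement_matrix(m, n, seed=0b1011011):
    """Build an m x n Toeplitz measurement matrix from a bipolar LFSR sequence."""
    # map bits {0,1} to {-1,+1}; one sequence of length m+n-1 defines the matrix
    seq = np.array(lfsr_bits(seed, taps=[6, 5], n=m + n - 1)) * 2 - 1
    Phi = np.empty((m, n))
    for i in range(m):
        Phi[i] = seq[i:i + n][::-1]  # each row is a shifted window of the sequence
    return Phi / np.sqrt(m)          # scale for near-unit column norms
```

Because the whole matrix is determined by one short seed, only the seed (not the matrix) needs to be shared, which matches the abstract's point about easing hardware generation.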
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Yong Jie, Dong Jiao Xu, Xin Wang, and Ying Chang. "Signal Compression Reconstruction with Narrow-Band Interference." Applied Mechanics and Materials 668-669 (October 2014): 1110–13. http://dx.doi.org/10.4028/www.scientific.net/amm.668-669.1110.

Full text
Abstract:
Compressive sensing (CS) performs sampling and compression of sparse or compressible signals simultaneously. Compressive signal processing is a new signal processing scheme based on compressive sensing theory. In this paper, the problem of compressive signal reconstruction under narrow-band interference is studied. The reconstruction performance of the BP, MP, and OMP algorithms under narrow-band interference is analyzed through computer simulations.
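For readers unfamiliar with the reconstruction step, the OMP algorithm mentioned in the abstract can be sketched as a minimal greedy implementation; the dictionary and sparsity level below are illustrative, and the paper's narrow-band interference model is not included:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    residual = y.astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # greedily pick the column most correlated with the residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit y on all selected columns by least squares
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# demo dictionary: identity plus a normalized 16x16 Hadamard basis.
# Its mutual coherence is 1/4 < 1/3, which guarantees exact OMP
# recovery of any 2-sparse signal.
H = np.array([[1.0]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])
Phi = np.hstack([np.eye(16), H / 4.0])

x = np.zeros(32)
x[3], x[20] = 2.0, -1.0
x_hat = omp(Phi, Phi @ x, k=2)   # recovers x exactly
```

BP would instead solve an l1-minimization program, and MP skips the least-squares re-fit; OMP sits between the two in cost and accuracy.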
APA, Harvard, Vancouver, ISO, and other styles
32

Lee, Chun-Hee, and Chin-Wan Chung. "Compression Schemes with Data Reordering for Ordered Data." Journal of Database Management 25, no. 1 (January 2014): 1–28. http://dx.doi.org/10.4018/jdm.2014010101.

Full text
Abstract:
Although there have been many compression schemes for reducing data effectively, most schemes do not consider the reordering of data. In the case of unordered data, if users change the data order in a given data set, the compression ratio may improve compared to compressing the data in its original order. However, in the case of ordered data, users need a mapping table that maps each original position to its changed position in order to recover the original order, so reordering ordered data may be disadvantageous in terms of space. In this paper, the authors consider two compression schemes, run-length encoding and the bucketing scheme, as bases for showing the impact of data reordering on compression schemes. The authors also propose various optimization techniques related to data reordering. Finally, the authors show that the compression schemes with data reordering achieve better compression ratios than the original compression schemes.
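The benefit of reordering for run-length encoding can be illustrated with a small sketch (the data values are hypothetical; the paper's bucketing scheme and mapping-table optimizations are not modeled here):

```python
def rle(data):
    """Run-length encode a sequence into (value, run_length) pairs."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([v, 1])     # start a new run
    return [(v, n) for v, n in runs]

# For unordered data, reordering is free: sorting groups equal values
# into long runs, shrinking the encoded output.
unordered = [3, 1, 3, 2, 1, 3, 2, 1, 3]
len(rle(unordered))          # 9 runs: no two neighbours match
len(rle(sorted(unordered)))  # 3 runs: [(1, 3), (2, 2), (3, 4)]
```

For ordered data, the space cost of the position-mapping table needed to undo the sort must be charged against the runs saved, which is exactly the trade-off the paper studies.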
APA, Harvard, Vancouver, ISO, and other styles
33

Pathak, Vinay, Karan Singh, Radha Raman Chandan, Sachin Kumar Gupta, Manoj Kumar, Shashi Bhushan, and Sujith Jayaprakash. "Efficient Compression Sensing Mechanism Based WBAN System Using Blockchain." Security and Communication Networks 2023 (May 11, 2023): 1–12. http://dx.doi.org/10.1155/2023/8468745.

Full text
Abstract:
Hybrid wireless sensor networks include the Wireless Body Area Network (WBAN). Generally, many hospitals use cellular networks to support telemedicine. Timely treatment of a patient requires an early diagnosis, and WBANs make it easy to collect and transmit the essential biomedical data needed to monitor human health. Compressive Sensing (CS) is an emerging signal compression/acquisition methodology that offers a prominent alternative to traditional signal acquisition. The proposed mechanism reduces message exchange overhead and enhances trust value estimation via response time and computational resources. It reduces cost and makes the system affordable to the patient. According to the results, the proposed scheme is 18.18% to 88.11% better in terms of Compression Ratio (CR) than existing schemes. In terms of the Percentage Root-Mean-Squared Difference (PRD) value, the proposed scheme is 18.18% to 34.21% better than existing schemes. Consensus for any new block is achieved in 24% less time than the Proof-of-Work (PoW) approach, and the leader election mechanism requires only shallow CPU usage. CPU utilization during the experiment lies between 0.9% and 14%; over a simulated one-hour run, peak CPU utilization is 21%.
APA, Harvard, Vancouver, ISO, and other styles
34

Tsai, Chao-Jen, Huan-Chih Wang, and Ja-Ling Wu. "Three Techniques for Enhancing Chaos-Based Joint Compression and Encryption Schemes." Entropy 21, no. 1 (January 9, 2019): 40. http://dx.doi.org/10.3390/e21010040.

Full text
Abstract:
In this work, three techniques for enhancing various chaos-based joint compression and encryption (JCAE) schemes are proposed. They respectively improve the execution time, compression ratio, and estimation accuracy of three different chaos-based JCAE schemes. The first uses auxiliary data structures to significantly accelerate an existing chaos-based JCAE scheme. The second solves the problem of huge multidimensional lookup table overheads by sieving out a small number of important sub-tables. The third increases the accuracy of the frequency distribution estimations used for compressing streaming data by weighting symbols in the plaintext stream according to their positions in the stream. Finally, two modified JCAE schemes leveraging the above three techniques are obtained, one applicable to static files and the other working for streaming data. Experimental results show that the proposed schemes do run faster and generate smaller files than existing JCAE schemes, verifying the effectiveness of the three newly proposed techniques.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Yong, Bing Li, Yan Zhang, and Xia Zhao. "A Huffman-Based Joint Compression and Encryption Scheme for Secure Data Storage Using Physical Unclonable Functions." Electronics 10, no. 11 (May 25, 2021): 1267. http://dx.doi.org/10.3390/electronics10111267.

Full text
Abstract:
With the development of Internet of Things (IoT) and cloud-computing technologies, cloud servers need to store huge volumes of IoT data with high throughput and robust security. Joint Compression and Encryption (JCAE) schemes based on the Huffman algorithm have been regarded as a promising technology for enhancing data storage. Existing JCAE schemes still have the following limitations: (1) the keys in the JCAE can be cracked by physical and cloning attacks; (2) rebuilding the Huffman tree reduces operational efficiency; (3) the compression ratio should be further improved. In this paper, a Huffman-based JCAE scheme using Physical Unclonable Functions (PUFs) is proposed. It provides physically secure keys with PUFs, efficient Huffman tree mutation without rebuilding, and a practical compression ratio by incorporating the Lempel-Ziv-Welch (LZW) algorithm. The performance of the instanced PUFs and the derived keys was evaluated. Moreover, our scheme was demonstrated in a file protection system with an average throughput of 473 Mbps and an average compression ratio of 0.5586. Finally, the security analysis shows that our scheme resists physical and cloning attacks as well as several classic attacks, improving the security level of existing data protection methods.
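As background for the Huffman component, a minimal code-table construction looks like this (the PUF-derived keys, tree mutation, and LZW stage from the paper are not sketched here):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(text)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap entries are (frequency, tiebreak, tree); a tree is either a
    # symbol (leaf) or a (left, right) tuple of subtrees
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:                    # merge the two rarest trees
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):                 # leaves collect their bit paths
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("aaaabbc")
# the most frequent symbol gets the shortest codeword: a -> 1 bit, b/c -> 2 bits
```

The paper's limitation (2) arises because a table like this normally has to be rebuilt whenever the frequency profile changes; the proposed mutation avoids that rebuild.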
APA, Harvard, Vancouver, ISO, and other styles
36

Kang, Heung-Sik, Haeryong Yang, Gyujin Kim, Hoon Heo, Inhyuk Nam, Chang-Ki Min, Changbum Kim, et al. "FEL performance achieved at PAL-XFEL using a three-chicane bunch compression scheme." Journal of Synchrotron Radiation 26, no. 4 (June 19, 2019): 1127–38. http://dx.doi.org/10.1107/s1600577519005861.

Full text
Abstract:
PAL-XFEL utilizes a three-chicane bunch compression (3-BC) scheme (the very first of its kind in operation) for free-electron laser (FEL) operation. The addition of a third bunch compressor allows for more effective mitigation of coherent synchrotron radiation during bunch compression and an increased flexibility of system configuration. Start-to-end simulations of the effects of radiofrequency jitter on the electron beam performance show that using the 3-BC scheme leads to better performance compared with the two-chicane bunch compression scheme. Together with the high performance of the linac radiofrequency system, it enables reliable operation of PAL-XFEL with unprecedented stability in terms of arrival timing, pointing and intensity; an arrival timing jitter of better than 15 fs, a transverse position jitter of smaller than 10% of the photon beam size, and an FEL intensity jitter of smaller than 5% are consistently achieved.
APA, Harvard, Vancouver, ISO, and other styles
37

Chung, Kuo-Liang, Hsuan-Ying Chen, Tsung-Lun Hsieh, and Yen-Bo Chen. "Compression for Bayer CFA Images: Review and Performance Comparison." Sensors 22, no. 21 (October 31, 2022): 8362. http://dx.doi.org/10.3390/s22218362.

Full text
Abstract:
Bayer color filter array (CFA) images are captured by a single-chip image sensor covered with a Bayer CFA pattern, which has been widely used in modern digital cameras. In the past two decades, many compression methods have been proposed to compress Bayer CFA images. These compression methods can be roughly divided into the compression-first-based (CF-based) scheme and the demosaicing-first-based (DF-based) scheme. However, no review article covering the two compression schemes and their compression performance has been reported in the literature. In this article, the related CF-based and DF-based compression works are reviewed first. Then, testing Bayer CFA images created from the Kodak, IMAX, screen content image, video, and classical image datasets are compressed on the Joint Photographic Experts Group-2000 (JPEG-2000) platform and the newly released Versatile Video Coding (VVC) platform VTM-16.2. In terms of commonly used objective quality metrics, perceptual quality metrics, the perceptual effect, and the quality–bitrate tradeoff metric, the compression performance comparison of the CF-based compression methods, in particular the reversible color transform-based compression methods, and the DF-based compression methods is reported and discussed.
APA, Harvard, Vancouver, ISO, and other styles
38

Ramalho, Leonardo, Maria Nilma Fonseca, Aldebaro Klautau, Chenguang Lu, Miguel Berg, Elmar Trojer, and Stefan Host. "An LPC-Based Fronthaul Compression Scheme." IEEE Communications Letters 21, no. 2 (February 2017): 318–21. http://dx.doi.org/10.1109/lcomm.2016.2624296.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sayood, K., and K. Anderson. "A differential lossless image compression scheme." IEEE Transactions on Signal Processing 40, no. 1 (1992): 236–41. http://dx.doi.org/10.1109/78.157204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ekman, Magnus, and Per Stenstrom. "A Robust Main-Memory Compression Scheme." ACM SIGARCH Computer Architecture News 33, no. 2 (May 2005): 74–85. http://dx.doi.org/10.1145/1080695.1069978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bentley, Jon Louis, Daniel D. Sleator, Robert E. Tarjan, and Victor K. Wei. "A locally adaptive data compression scheme." Communications of the ACM 29, no. 4 (April 1986): 320–30. http://dx.doi.org/10.1145/5684.5688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Moffat, A. "Implementing the PPM data compression scheme." IEEE Transactions on Communications 38, no. 11 (1990): 1917–21. http://dx.doi.org/10.1109/26.61469.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Najeeb, Fathima, and Rahul R Nair. "Hybrid Compression Scheme for Scanned Documents." International Journal of Engineering Trends and Technology 17, no. 10 (November 25, 2014): 490–94. http://dx.doi.org/10.14445/22315381/ijett-v17p296.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Haibo. "Fractal-based image sequence compression scheme." Optical Engineering 32, no. 7 (1993): 1588. http://dx.doi.org/10.1117/12.139803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Zhen, Changgen Peng, Weijie Tan, and Liangrong Li. "An Efficient Plaintext-Related Chaotic Image Encryption Scheme Based on Compressive Sensing." Sensors 21, no. 3 (January 23, 2021): 758. http://dx.doi.org/10.3390/s21030758.

Full text
Abstract:
With the development of mobile communication networks, especially 5G today and 6G in the future, the security and privacy of digital images are important in network applications. Meanwhile, high-resolution images take up a lot of bandwidth and storage space in cloud applications. Facing these demands, an efficient and secure plaintext-related chaotic image encryption scheme is proposed based on compressive sensing, achieving compression and encryption simultaneously. In the proposed scheme, the internal keys controlling the whole process of compression and encryption are first generated from the plain image and an initial key. Subsequently, the discrete wavelet transform is used to convert the plain image into a coefficient matrix. After that, permutation, controlled by the two-dimensional Sine improved Logistic iterative chaotic map (2D-SLIM), is performed on the coefficient matrix to disperse its energy. Furthermore, plaintext-related compressive sensing is performed using a measurement matrix generated by 2D-SLIM. To give the cipher image low correlation and a uniform distribution, the measurement results are quantized to 0∼255, and permutation and diffusion operations are performed under the control of the two-dimensional Logistic-Sine-coupling map (2D-LSCM). Finally, common compression and security performance analysis methods are used to test our scheme. The test and comparison results show that our proposed scheme has both excellent security and compression performance compared with other recent works, ensuring its applicability to digital images in the network.
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Xiao Ning, Zhang Hong Wu, and Hao Yang Xing. "A Compression Sensing Sampling Scheme Based on the Golden Section Principle." Applied Mechanics and Materials 644-650 (September 2014): 4567–72. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4567.

Full text
Abstract:
In the classic radial sampling scheme, the angle intervals between adjacent straight sampling lines are identical. Consequently, the coherence between the sampling matrix and the sparse matrix is high, and many pseudo-shadow artifacts occur in the reconstructed image. To address this problem, we put forward a novel sampling scheme in which the scan lines are placed at golden-section angle intervals. In the resulting sampling matrix, the scan lines are scattered randomly on the whole. Experimental results demonstrate that the proposed algorithm yields a lower correlation coefficient between the sampling matrix and the sparse matrix than the classic radial and random radial sampling schemes, and that the image can be accurately reconstructed from less data than the other two schemes require.
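The line-placement rule can be sketched as follows (the exact angle increment used in the paper is not specified in the abstract; the common golden-angle value of about 111.25°, i.e. 180°/φ, is assumed here):

```python
import math

PHI = (1 + math.sqrt(5)) / 2        # golden ratio
GOLDEN_STEP = math.pi / PHI         # ~111.25 degrees between successive lines

def radial_angles(n, scheme="golden"):
    """Angles of n radial scan lines, in radians modulo pi
    (a straight line through the origin covers both directions)."""
    if scheme == "uniform":
        return [i * math.pi / n for i in range(n)]
    return [(i * GOLDEN_STEP) % math.pi for i in range(n)]

# Uniform spacing fixes the line count in advance; with golden-angle
# steps every prefix of the sequence is already nearly evenly spread,
# so the number of lines can be chosen (or changed) after the fact.
angles = radial_angles(8)
```

Because π/φ is an irrational fraction of π, the angles never repeat and never cluster, which is what lowers the coherence between the sampling and sparse matrices.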
APA, Harvard, Vancouver, ISO, and other styles
47

Sundarakrishnan, B. Jaison, and S. P. Raja. "Secured Color Image Compression based on Compressive Sampling and Lü System." Information Technology And Control 49, no. 3 (September 23, 2020): 346–69. http://dx.doi.org/10.5755/j01.itc.49.3.25901.

Full text
Abstract:
An effective and secure approach is vital for the transmission of sensitive and secret images over the unsecure public Internet. In this paper, a secured color image compression method based on compressive sampling and the Lü system is proposed. Initially, the plain image is sparsely represented in a transform basis. Compressive sampling measurements are obtained from these sparse transform coefficients by employing an incoherent sensing matrix. To upgrade the security level, permutation-substitution operations are performed on the pixels based on the Lü system. To introduce input sensitivity into the scheme, the keys are derived from the input image. Lastly, a fast and efficient greedy algorithm is utilized for sparse signal reconstruction. To evaluate the performance of the proposed scheme, Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM), Average Difference (AD), Structural Content (SC), Normalized Cross Correlation (NCC), Normalized Absolute Error (NAE), Edge Strength Similarity (ESSIM), Maximum Difference (MD), Correlation Coefficient, Unified Average Changing Intensity (UACI), Key Sensitivity, Number of Pixel Change Rate (NPCR), Key Space, and Histogram metrics are used. Experimental results demonstrate that the proposed scheme provides highly satisfactory security.
APA, Harvard, Vancouver, ISO, and other styles
48

Wu, Chun Ming, Lang Fang Su, and Tao Yang. "An Improved Image Compression Algorithm Based on Block and Classification Scheme." Applied Mechanics and Materials 711 (December 2014): 282–85. http://dx.doi.org/10.4028/www.scientific.net/amm.711.282.

Full text
Abstract:
To improve compression performance and enable the automatic selection of compression algorithms for images without prior information, the relationship between compression algorithms and image features is studied, and a preprocessing scheme of block partitioning and classification based on image features is proposed in this paper. Compression algorithms with and without the preprocessing scheme are investigated on the same 100 images. Our scheme achieves a larger peak signal-to-noise ratio (PSNR), which demonstrates the effectiveness of the proposed block-and-classification preprocessing scheme in improving compression performance and selecting a suitable algorithm for processing images without prior information.
APA, Harvard, Vancouver, ISO, and other styles
49

Abed, Qutaiba K., and Waleed A. Mahmoud Al-Jawher. "A Robust Image Encryption Scheme Based on Block Compressive Sensing and Wavelet Transform." International Journal of Innovative Computing 13, no. 1-2 (September 13, 2023): 7–13. http://dx.doi.org/10.11113/ijic.v13n1-2.413.

Full text
Abstract:
In this paper, a modified robust image encryption scheme is developed by combining block compressive sensing (BCS) and the wavelet transform, achieving a balanced performance of security, compression, robustness, and running efficiency. First, the plain image is divided equally and sparsely represented in the discrete wavelet transform (DWT) domain; the coefficient vectors are confused using a coefficient random-permutation strategy and encrypted into a secret image by compressive sensing. In pursuit of superior security, the hyper-chaotic Lorenz system is utilized to generate the updated secret code streams for encryption and embedding, assisted by counter mode. This scheme is suitable for processing medium and large images in parallel, and it exhibits superior robustness and efficiency compared with existing related schemes. Simulation results and comprehensive performance analyses demonstrate the effectiveness, secrecy, and robustness of the proposed scheme. The compressive encryption model, using BCS with the Walsh transform as the sensing matrix and the WAM chaotic system together with the scrambling and diffusion techniques, succeeds in enhancing security performance.
APA, Harvard, Vancouver, ISO, and other styles
50

Song, Yanjie, Zhiliang Zhu, Wei Zhang, Li Guo, Xue Yang, and Hai Yu. "Joint image compression–encryption scheme using entropy coding and compressive sensing." Nonlinear Dynamics 95, no. 3 (December 14, 2018): 2235–61. http://dx.doi.org/10.1007/s11071-018-4689-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
