Journal articles on the topic 'Image compression level'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Image compression level.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Celik, Mehmet Utku, Gaurav Sharma, and A. Murat Tekalp. "Gray-level-embedded lossless image compression." Signal Processing: Image Communication 18, no. 6 (July 2003): 443–54. http://dx.doi.org/10.1016/s0923-5965(03)00023-7.

2

Di Martino, Ferdinando, and Salvatore Sessa. "Multi-level fuzzy transforms image compression." Journal of Ambient Intelligence and Humanized Computing 10, no. 7 (August 17, 2018): 2745–56. http://dx.doi.org/10.1007/s12652-018-0971-4.

3

Cahya Dewi, Dewa Ayu Indah, and I. Made Oka Widyantara. "Usage analysis of SVD, DWT and JPEG compression methods for image compression." Jurnal Ilmu Komputer 14, no. 2 (September 30, 2021): 99. http://dx.doi.org/10.24843/jik.2021.v14.i02.p04.

Abstract:
Image compression can save bandwidth on telecommunication networks, accelerate image file transmission, and save memory in image file storage, so techniques that reduce image size through compression are needed. Image compression is an image processing technique performed on digital images with the aim of reducing the redundancy of the data contained in the image so that it can be stored or transmitted efficiently. This research analyzed the results of image compression and measured the error level of the compressed images, covering JPEG, DWT, and SVD compression techniques on various types of images. Compression results were measured using MSE and PSNR, while the percentage level of compression was determined using the compression ratio. The average ratio for JPEG compression was 0.08605, a compression rate of 91.39%; the average ratio for the DWT method was 0.133090833, a compression rate of 86.69%; and the average ratio for the SVD method was 0.101938833, a compression rate of 89.80%.
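
As an aside for readers reproducing such comparisons: the three measurements this abstract relies on are short computations. Below is a minimal numpy sketch (function names are ours) following the abstract's convention, where a ratio of 0.086 corresponds to a compression rate of about 91.4%:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of equal shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit images."""
    err = mse(original, reconstructed)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def compression_ratio(compressed_bytes, original_bytes):
    """Compressed size / original size; 1 - ratio is the 'compression rate'."""
    return compressed_bytes / original_bytes
```
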
4

Marcelo, Alvin, Paul Fontelo, Miguel Farolan, and Hernani Cualing. "Effect of Image Compression on Telepathology." Archives of Pathology & Laboratory Medicine 124, no. 11 (November 1, 2000): 1653–56. http://dx.doi.org/10.5858/2000-124-1653-eoicot.

Abstract:
Context.—For practitioners deploying store-and-forward telepathology systems, optimization methods such as image compression need to be studied. Objective.—To determine if Joint Photographic Expert Group (JPG or JPEG) compression, a lossy image compression algorithm, negatively affects the accuracy of diagnosis in telepathology. Design.—Double-blind, randomized, controlled trial. Setting.—University-based pathology departments. Participants.—Resident and staff pathologists at the University of Illinois, Chicago, and University of Cincinnati, Cincinnati, Ohio. Intervention.—Compression of raw images using the JPEG algorithm. Main Outcome Measures.—Image acceptability, accuracy of diagnosis, confidence level of pathologist, image quality. Results.—There was no statistically significant difference in the diagnostic accuracy between noncompressed (bit map) and compressed (JPG) images. There were also no differences in the acceptability, confidence level, and perception of image quality. Additionally, rater experience did not significantly correlate with degree of accuracy. Conclusions.—For providers practicing telepathology, JPG image compression does not negatively affect the accuracy and confidence level of diagnosis. The acceptability and quality of images were also not affected.
5

Gollu, Vimala Kumari, Ganta Usha Sravani, Mandru Sunil Prakash, and Ganta Srikanth. "Pipeline of Optimization Techniques for Multi-Level Thresholding in Medical Image Compression Using 2D Histogram." Traitement du Signal 38, no. 4 (August 31, 2021): 993–1006. http://dx.doi.org/10.18280/ts.380409.

Abstract:
In recent times, medical scan images have become crucial for accurate diagnosis by medical professionals. Due to the increasing size of medical images, their transfer and storage require huge bandwidth and storage space, and hence they need compression. In this paper, multilevel thresholding using a 2-D histogram is proposed for compressing the images. In the proposed work, a hybridization of optimization techniques, viz. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Symbiotic Organisms Search (SOS), is used to optimize the multilevel thresholding process, with Renyi entropy as the objective function. Optimal threshold values yield meaningful clusters, which lead to better image compression. For performance evaluation, the proposed work has been examined on six Magnetic Resonance (MR) images of the brain and compared with the individual optimization techniques as well as with a 1-D histogram. Recent studies reveal that the peak signal-to-noise ratio (PSNR) fails to measure the visual quality of the reconstructed image because of its mismatch with mean opinion scores (MOS). We therefore incorporate weighted PSNR (WPSNR) and visual PSNR (VPSNR) as performance measures for the proposed method. Experimental results reveal that the hGAPSO-SOS method can be accurately and efficiently used in the problem of multilevel thresholding for image compression.
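
For readers unfamiliar with the objective being optimized, here is a hedged 1-D histogram sketch of a Renyi-entropy criterion for multilevel thresholding; the paper itself optimizes a 2-D histogram variant with GA/PSO/SOS hybrids, and the function name and `alpha` below are our illustrative choices:

```python
import numpy as np

def renyi_objective(hist, thresholds, alpha=0.5):
    """Sum of Renyi entropies of the gray-level classes induced by the
    thresholds; metaheuristics (GA/PSO/SOS) search for the threshold
    set that maximizes this value."""
    p = hist / hist.sum()
    edges = [0] + sorted(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        q = p[lo:hi] / w                     # within-class distribution
        total += np.log((q[q > 0] ** alpha).sum()) / (1.0 - alpha)
    return total
```
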
6

Moëll, Mattias K., and Minoru Fujita. "Fourier Transform Methods in Image Analysis of Compression Wood at the Cellular Level." IAWA Journal 25, no. 3 (2004): 311–24. http://dx.doi.org/10.1163/22941932-90000368.

Abstract:
Compression wood affects the overall quality of construction timber and paper quality. We have investigated the microscopic features of lumen shape and tracheid shape for compression wood studies and detection in softwoods. In this paper, we describe a method for directly analyzing tracheid and lumen shape over an entire image. The method uses the Fast Fourier Transform (FFT) and reduces the two-dimensional image data to one-dimensional data, from which lumen and tracheid shape can be evaluated. We illustrate the method by comparison of compression wood images to normal wood images. The results of detecting severe compression wood were successful, while the detection of weak compression wood was not satisfactory.
7

Barrios, Yubal, Alfonso Rodríguez, Antonio Sánchez, Arturo Pérez, Sebastián López, Andrés Otero, Eduardo de la Torre, and Roberto Sarmiento. "Lossy Hyperspectral Image Compression on a Reconfigurable and Fault-Tolerant FPGA-Based Adaptive Computing Platform." Electronics 9, no. 10 (September 26, 2020): 1576. http://dx.doi.org/10.3390/electronics9101576.

Abstract:
This paper describes a novel hardware implementation of a lossy multispectral and hyperspectral image compressor for on-board operation in space missions. The compression algorithm is a lossy extension of the Consultative Committee for Space Data Systems (CCSDS) 123.0-B-1 lossless standard that includes a bit-rate control stage, which in turn manages the losses the compressor may introduce to achieve higher compression ratios without compromising the recovered image quality. The algorithm has been implemented using High-Level Synthesis (HLS) techniques to increase design productivity by raising the abstraction level. The proposed lossy compression solution is deployed onto ARTICo3, a dynamically reconfigurable multi-accelerator architecture, obtaining a run-time adaptive solution that enables user-selectable performance (i.e., load more hardware accelerators to transparently increase throughput), power consumption, and fault tolerance (i.e., group hardware accelerators to transparently enable hardware redundancy). The whole compression solution is tested on a Xilinx Zynq UltraScale+ Field-Programmable Gate Array (FPGA)-based MPSoC using different input images, from multispectral to ultraspectral. For images acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the proposed implementation renders an execution time of approximately 36 s when 8 accelerators are compressing concurrently at 100 MHz, which in turn uses around 20% of the LUTs and 17% of the dedicated memory blocks available in the target device. In this scenario, a speedup of 15.6× is obtained in comparison with a pure software version of the algorithm running in an ARM Cortex-A53 processor.
8

Singh Katre, Surjeet. "Image Compression based on 4 Level AMBTC." International Journal of Computer Applications 95, no. 4 (June 18, 2014): 7–9. http://dx.doi.org/10.5120/16580-6275.

9

Sayed, Mohamed H., and Talaat M. Wahby. "Multi-Level Image Steganography Using Compression Techniques." International Journal of Computer Applications Technology and Research 6, no. 11 (November 4, 2017): 441–50. http://dx.doi.org/10.7753/ijcatr0611.1001.

10

Li Donghui, 李东晖. "Context-Based Bi-Level Speckle Image Compression." Laser & Optoelectronics Progress 55, no. 12 (2018): 121010. http://dx.doi.org/10.3788/lop55.121010.

11

Iren, Sami, and Paul D. Amer. "Application level framing applied to image compression." Annales Des Télécommunications 57, no. 5-6 (May 2002): 502–19. http://dx.doi.org/10.1007/bf02995173.

12

Low, Yin Fen, and Rosli Besar. "Wavelet-Based Medical Image Compression Using EZW: Objective and Subjective Evaluations." Journal of Mechanics in Medicine and Biology 04, no. 01 (March 2004): 93–110. http://dx.doi.org/10.1142/s0219519404000795.

Abstract:
Recently, the wavelet transform has emerged as a cutting edge technology within the field of image compression research. Wavelet methods involve overlapping transforms with varying-length basis functions. This overlapping nature of the transform alleviates blocking artifacts, while the multi-resolution character of the wavelet decomposition leads to superior energy compaction and perceptual quality of the decompressed image. Embedded zerotree wavelet (EZW) coder is the first algorithm to show the full power of wavelet-based image compression. The main purpose of this paper is to investigate the impact and quality of orthogonal wavelet filter in compressing medical image by using EZW. Meanwhile, we also look into the effect of the level of wavelet decomposition towards compression efficiency. The wavelet filters used are Haar and Daubechies. The compression simulations are done on three modalities of medical images. The objective (based on PSNR) and subjective (perceived image quality) results of these simulations are presented.
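
As a reminder of the decomposition EZW operates on, here is a single level of the 2-D Haar transform in plain numpy (our sketch, assuming even image dimensions); applying it recursively to `ll` yields the multi-level pyramid whose depth the paper studies:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the orthonormal 2-D Haar transform: returns the
    approximation band and the three detail bands that zerotree coders scan."""
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)   # row lowpass
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)   # row highpass
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)       # approximation
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)
```
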
13

Joseph, Sanjith Sathya, and R. Ganesan. "Vector Quantization for Satellite Image Compression." Journal of Communications Technology, Electronics and Computer Science 5 (April 30, 2016): 22. http://dx.doi.org/10.22385/jctecs.v5i0.72.

Abstract:
Image compression is the process of reducing the size of a file without degrading the quality of the image to a level unacceptable to the Human Visual System. The reduction in file size allows us to store more data in less memory and speeds up transmission over low-bandwidth links; in the case of satellite images, it also reduces the time required for the image to reach the ground station. Compression therefore plays an important role in remote sensing. This paper presents a coding scheme for satellite images using Vector Quantization, a well-known technique for signal compression that generalizes scalar quantization. The given satellite image is compressed using the VCDemo software by creating codebooks for vector quantization, and the quality of the compressed and decompressed image is assessed using the Mean Square Error, Signal-to-Noise Ratio, and Peak Signal-to-Noise Ratio values.
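
The codebook idea in a nutshell: the sketch below (a toy numpy/scipy illustration, not the VCDemo tool the authors use) learns a codebook over 4 × 4 pixel blocks with k-means and stores only block indices. At 256 codewords this costs 8 bits per 16 pixels, i.e. 0.5 bits per pixel before entropy coding.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def vq_compress(image, block=4, codebook_size=256):
    """Split the image into block x block vectors, learn a codebook with
    k-means, and return per-block codeword indices plus the codebook."""
    h = (image.shape[0] // block) * block
    w = (image.shape[1] // block) * block
    vecs = (image[:h, :w].astype(np.float64)
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2).reshape(-1, block * block))
    codebook, _ = kmeans(vecs, codebook_size)
    indices, _ = vq(vecs, codebook)
    return indices, codebook, (h // block, w // block)

def vq_decompress(indices, codebook, grid, block=4):
    """Rebuild the image by pasting the codeword of each block back in place."""
    tiles = codebook[indices].reshape(grid[0], grid[1], block, block)
    return tiles.swapaxes(1, 2).reshape(grid[0] * block, grid[1] * block)
```
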
14

El Ayachi, R., M. Gouskir, and M. Baslam. "Application of Haar Wavelets on Medical Images." Journal of Electronic Commerce in Organizations 13, no. 2 (April 2015): 41–49. http://dx.doi.org/10.4018/jeco.2015040104.

Abstract:
Recently, information processing approaches have multiplied. These methods can be used for several purposes: compression, restoration, and information encoding. Raw data representations are used less and less and are gradually replaced by other formats that are better in terms of space or speed of access. This paper focuses on compression, specifically image compression using Haar wavelets, which allow compression to be applied at several levels. The subject is to analyze the compression levels to find the optimal level. This study is conducted on medical images.
15

Wong, Alexander. "PECSI: A Practical Perceptually-Enhanced Compression Framework for Still Images." International Journal of Image and Graphics 09, no. 04 (October 2009): 511–29. http://dx.doi.org/10.1142/s0219467809003551.

Abstract:
This paper presents PECSI, a perceptually-enhanced image compression framework designed to provide high compression rates for still images while preserving visual quality. PECSI utilizes important human perceptual characteristics during image encoding stages (e.g. downsampling and quantization) and image decoding stages (e.g. upsampling and deblocking) to find a better balance between image compression and the perceptual quality of an image. The proposed framework is computationally efficient and easy to integrate into existing block-based still image compression standards. Experimental results show that the PECSI framework provides improved perceptual quality at the same compression rate as existing still image compression methods. Alternatively, the framework can be used to achieve higher compression ratios while maintaining the same level of perceptual quality.
16

Kato, Shigeo. "Introduction to image data compression techniques. (10). Coding of bi-level/multi-level images." Journal of the Institute of Television Engineers of Japan 44, no. 3 (1990): 265–74. http://dx.doi.org/10.3169/itej1978.44.265.

17

Chung, Kuo-Liang, and Kuo-Bao Hong. "Level compression-based image representation and its applications." Pattern Recognition 31, no. 3 (March 1998): 327–32. http://dx.doi.org/10.1016/s0031-3203(97)00051-4.

18

Sahami, S., and M. G. Shayesteh. "Bi-level image compression technique using neural networks." IET Image Processing 6, no. 5 (2012): 496. http://dx.doi.org/10.1049/iet-ipr.2011.0079.

19

Cho, Moonki, and Yungsup Yoon. "BTC Algorithm Utilizing Multi-Level Quantization Method for Image Compression." Journal of the Institute of Electronics and Information Engineers 50, no. 6 (June 25, 2013): 114–21. http://dx.doi.org/10.5573/ieek.2013.50.6.114.

20

Oliveira, Fernanda D. V. R., Hugo L. Haas, José Gabriel R. C. Gomes, and Antonio Petraglia. "CMOS Image Sensor Featuring Current-Mode Focal-Plane Image Compression." Journal of Integrated Circuits and Systems 8, no. 1 (December 27, 2013): 14–21. http://dx.doi.org/10.29292/jics.v8i1.369.

Abstract:
The interest in focal-plane processing techniques, by which image processing is carried out at the pixel level, has increased since the advent of active pixel sensors in the mid-1990s. By sharing processing circuitry among a group of neighboring pixels, such techniques enable high-speed imaging operation and massive parallel computation. Focal-plane image compression is particularly interesting because it allows for further reduction in data rates. The proposed approach also benefits from processing currents rather than voltages, which not only suits current-mode APS imagers, but also enables the circuits to operate at low supply voltages and achieve high speed. Moreover, arithmetic computations such as additions and scaling are easily implemented in current mode. Whereas current-mode imaging architectures produce higher fixed pattern noise (FPN) figures than their voltage-mode counterparts, low FPN can be achieved by applying correlated double sampling (CDS) and gain correction techniques. This work presents a 32 × 32 gray-level imaging integrated circuit featuring focal-plane image compression, such that for each 4 × 4 pixel block, analog circuits implement differential pulse-code modulation, a linear transform, and vector quantization. Other processing functions implemented in the chip are CDS and A/D conversion. Theoretical details are described, as well as the test setup of the chip, fabricated in a 0.35 μm CMOS process. To validate the proposed technique, experimental results and captured photographs are shown. The CMOS imager compresses captured images at 0.94 bits/pixel for an overall power consumption below 40 mW (white image), equivalent to approximately 36 μW per pixel. Using photographs taken from bar-target pattern inputs, it is shown that details up to 2 cycles/cm are preserved in the decoded images.
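
The chip realizes differential pulse-code modulation in analog current-mode circuitry per 4 × 4 block; purely as an illustration of the principle, here is a one-row digital DPCM in numpy (our simplification, not the chip's circuit):

```python
import numpy as np

def dpcm_encode(row):
    """Transmit the first sample, then successive differences; the small
    residuals are what make the signal cheap to quantize and code."""
    row = row.astype(np.int16)
    codes = np.empty_like(row)
    codes[0] = row[0]
    codes[1:] = np.diff(row)
    return codes

def dpcm_decode(codes):
    """Running sum restores the original 8-bit samples exactly."""
    return np.cumsum(codes).astype(np.uint8)
```
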
21

Mat Jizat, Jessnor Arif, Ahmad Fakhri Ab. Nasir, Anwar P. P. Abdul Majeed, and Edmund Yuen. "Effect of Image Compression using Fast Fourier Transformation and Discrete Wavelet Transformation on Transfer Learning Wafer Defect Image Classification." MEKATRONIKA 2, no. 1 (June 5, 2020): 16–22. http://dx.doi.org/10.15282/mekatronika.v2i1.6704.

Abstract:
Automated inspection machines for wafer defects usually capture thousands of large-scale images to preserve the detail of defect features. However, most transfer learning architectures require smaller input images, so proper compression is needed to preserve the defect features whilst maintaining acceptable classification accuracy. This paper reports on the effect of image compression using the Fast Fourier Transformation and the Discrete Wavelet Transformation on transfer learning wafer defect image classification. A total of 500 images spanning 5 classes (4 defect classes and 1 non-defect class) were split in a 60:20:20 ratio for training, validation, and testing using InceptionV3 and a Logistic Regression classifier. The input images were compressed using the Fast Fourier Transformation and the Discrete Wavelet Transformation with 4-level decomposition and the Daubechies 4 wavelet family, at compression levels of 50%, 75%, 90%, 95%, and 99%. The Fast Fourier Transformation shows an increase from 89% to 94% in classification accuracy up to 95% compression, while the Discrete Wavelet Transformation shows consistent classification accuracy throughout, despite diminishing image quality. From the experiment, it can be concluded that FFT and DWT image compression can be reliable methods of image compression for grayscale image classification: image memory space dropped 56.1% while classification accuracy increased by 5.6% with 95% FFT compression, and memory space dropped 55.6% while classification accuracy increased 2.2% with 50% DWT compression.
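
A sketch of what "compressing by N% with the FFT" can mean in practice, assuming it denotes discarding all but the largest-magnitude frequency coefficients (our reading of the abstract; the authors' exact recipe may differ):

```python
import numpy as np

def fft_compress(image, keep=0.05):
    """Zero all but the largest-magnitude `keep` fraction of 2-D FFT
    coefficients (keep=0.05 ~ '95% compression'), then invert."""
    coeffs = np.fft.fft2(image.astype(np.float64))
    mags = np.abs(coeffs).ravel()
    k = max(1, int(keep * mags.size))
    threshold = np.partition(mags, -k)[-k]   # k-th largest magnitude
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return np.real(np.fft.ifft2(coeffs)).clip(0, 255).astype(np.uint8)
```
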
22

Bonyadi, Mohammad Reza, and Mohsen Ebrahimi Moghaddam. "A Nonuniform High-Quality Image Compression Method to Preserve User-Specified Compression Ratio." International Journal of Image and Graphics 11, no. 03 (July 2011): 355–75. http://dx.doi.org/10.1142/s0219467811004123.

Abstract:
Most image compression methods are based on frequency-domain transforms, followed by quantization and rounding to discard some coefficients. The quality of compressed images therefore depends strongly on how these coefficients are discarded, and finding a good balance between image quality and compression ratio is an important issue. In this paper, a new lossy compression method called linear mapping image compression (LMIC) is proposed to compress images with high quality while satisfying a user-specified compression ratio. The method is based on the discrete cosine transform (DCT) and an adaptive zonal mask. It divides the image into equal-size blocks and determines the structure of the zonal mask for each block independently by considering its gray-level distance (GLD). The experimental results showed that the presented method had a higher peak signal-to-noise ratio (PSNR) than related works at a specified compression ratio. In addition, the results were comparable with JPEG2000.
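
To picture what a zonal mask does, here is a fixed-radius sketch over one block (LMIC adapts the zone per block using the gray-level distance; the fixed triangular zone below is only illustrative):

```python
import numpy as np
from scipy.fft import dctn, idctn

def zonal_dct_block(block, radius=8):
    """Keep only the low-frequency DCT coefficients inside the triangular
    zone u + v < radius and discard the rest, then invert."""
    c = dctn(block.astype(np.float64), norm="ortho")
    u, v = np.indices(block.shape)
    c[u + v >= radius] = 0.0                 # discard high-frequency zone
    return idctn(c, norm="ortho")
```
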
23

Omran Alkaam, Nora. "Image Compression by Wavelet Packets." Oriental journal of computer science and technology 11, no. 1 (March 20, 2018): 24–28. http://dx.doi.org/10.13005/ojcst11.01.05.

Abstract:
This research applies image processing to reduce the size of an image without losing important information. The paper aims to determine the best wavelet for compressing a still image at a particular decomposition level using wavelet packet transforms.
24

Popoola, Jide Julius, and Michael Elijah Adekanye. "Comparative Performance Evaluation of Three Image Compression Algorithms." Journal of Applied Science & Process Engineering 4, no. 1 (April 28, 2017): 113–26. http://dx.doi.org/10.33736/jaspe.371.2017.

Abstract:
The advent of the computer and the internet has brought massive change to the way images are managed. This revolution has changed image processing and management, and created a huge space requirement for uploading, downloading, transferring, and storing images. To guard against this huge space requirement, images need to be compressed before being stored or transmitted. Several image compression algorithms have been developed in the literature. In this study, three of these algorithms were implemented using MATLAB: the discrete cosine transform (DCT), the discrete wavelet transform (DWT), and set partitioning in hierarchical trees (SPIHT). To ascertain which of them is most appropriate for image storage and transmission, comparative performance evaluations were conducted on the three algorithms using five performance indices. The results show that all three algorithms are effective in image compression but with different efficiency rates. DWT has the highest compression ratio and distortion level, SPIHT the lowest, with DCT falling in between. The results also show that the lower the mean square error and the higher the peak signal-to-noise ratio, the lower the distortion level in the compressed image.
25

Sundarakrishnan, B. Jaison, and S. P. Raja. "Secured Color Image Compression based on Compressive Sampling and Lü System." Information Technology And Control 49, no. 3 (September 23, 2020): 346–69. http://dx.doi.org/10.5755/j01.itc.49.3.25901.

Abstract:
An effective and secure approach is vital for transmitting sensitive and secret images over the insecure public Internet. In this paper, a secured color image compression method based on compressive sampling and the Lü system is proposed. Initially, the plain image is sparsely represented in a transform basis. Compressive sampling measurements are obtained from these sparse transform coefficients by employing an incoherent sensing matrix. To upgrade the security level, permutation-substitution operations are performed on pixels based on the Lü system. To introduce input sensitivity into the scheme, the keys are derived from the input image. Lastly, a fast and efficient greedy algorithm is utilized for sparse signal reconstruction. To evaluate the performance of the proposed scheme, Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM), Average Difference (AD), Structural Content (SC), Normalized Cross Correlation (NCC), Normalized Absolute Error (NAE), Edge Strength Similarity (ESSIM), Maximum Difference (MD), Correlation Coefficient, Unified Average Changing Intensity (UACI), Key Sensitivity, Number of Pixel Change Rate (NPCR), Key Space, and Histogram metrics are used. Experimental results demonstrate that the proposed scheme provides highly satisfactory security.
26

Mondal, Munmun, and Md Rafiqul Islam. "Fingerprint Image De-Noising Using Wavelet Transform with the Comparison of Filtered and After Compression Filtered Noise Image." Advanced Science, Engineering and Medicine 11, no. 11 (November 1, 2019): 1125–33. http://dx.doi.org/10.1166/asem.2019.2467.

Abstract:
Fingerprints are becoming part of our day-to-day life, from home to the workplace, and are now given prime importance for security and safety purposes. Fingerprint identification is one of the most popular biometric technologies and is widely used in criminal investigations, commercial applications, and so on. The performance of a fingerprint image-matching algorithm depends heavily on the quality of the input fingerprint images, so it is very important to acquire good-quality images. The use of the wavelet transform improves the quality of an image and reduces the noise level. In this research, different compression techniques are used to address this problem, and different wavelet transforms are applied to compress fingerprint images. Image quality before and after compression is measured by Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), and Peak Signal-to-Noise Ratio (PSNR). This work is done in MATLAB using the DSP and wavelet toolboxes. Finally, we compare the filtered noise image method with the compression filtered noise image method.
27

Pabi, D. J. Ashpin, P. Aruna, and N. Puviarasan. "Tri-mode dual level 3-D image compression over medical MRI images." International Journal of Advanced Computer Research 7, no. 28 (December 21, 2016): 8–14. http://dx.doi.org/10.19101/ijacr.2017.728007.

28

Jain, Reema, and Manish Jain. "Digital Image Watermarking using 3-Level DWT and FFT via Image Compression." International Journal of Computer Applications 124, no. 16 (August 18, 2015): 35–38. http://dx.doi.org/10.5120/ijca2015905808.

29

Taha El-Omari, Nidhal Kamel. "An Efficient Two-Level Dictionary-Based Technique for Segmentation and Compression Compound Images." Modern Applied Science 14, no. 4 (March 27, 2020): 52. http://dx.doi.org/10.5539/mas.v14n4p52.

Abstract:
Image data compression algorithms are essential for reducing storage space and, perhaps more importantly, for increasing transfer rates, in terms of space-time complexity. Considering that no encoder gives good results across all image types and contents, this paper proposes an evolvable lossless statistical block-based technique for segmentation and compression of compound or mixed documents that have different content types, such as pictures, graphics, and/or text. Derived from the number of detected colors, and to achieve better compression ratios, a new well-defined representation of the image is created that nonetheless retains the same image components. To reduce noise and other variations inside the scanned image, some primary operations are applied first. Thereafter, the proposed algorithm breaks the compound document image down into equal-size square blocks. Next, based on the number of colors detected in each block, these blocks are categorized into a set of six image objects, called classes, where each one contains a set of closely interrelated pixels that share the same relevant attributes, such as color gamut and number, color occurrence, and grey level. After that, a new representation of these coherent classes is formed using a Lookup Dictionary Table (LUD), which is the real essence of the proposed algorithm. To form distinguishable labeled regions sharing the same attributes, adjacent blocks with similar color features are consolidated into single coherent entities, called segments or regions. After each region is encoded by one of the most applicable off-the-shelf compression techniques, the regions are fused into a single data file, which is then subjected to another compression stage to ensure better compression ratios. Applied and tested on a database of 3151 24-bit RGB bitmap document images, the algorithm proves efficient and achieves superior storage space reduction compared with other existing algorithms: a 71.039% relative reduction in data storage space.
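
The block classification step can be pictured with a few lines of numpy: label each block by its number of distinct colors (the class boundaries below are our illustrative choices, not the paper's exact definition):

```python
import numpy as np

def classify_blocks(image, block=16):
    """Label each block x block tile of an H x W x 3 uint8 image by how many
    distinct colors it contains, binned into six classes."""
    h = (image.shape[0] // block) * block
    w = (image.shape[1] // block) * block
    labels = np.empty((h // block, w // block), dtype=np.int32)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = image[i:i + block, j:j + block].reshape(-1, 3)
            n_colors = len(np.unique(tile, axis=0))
            labels[i // block, j // block] = int(np.digitize(n_colors, [2, 9, 33, 129, 513]))
    return labels
```
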
30

Sirota, A. A., M. A. Dryuchenko, and E. Yu Mitrofanova. "Digital watermarking method based on heteroassociative image compression and its realization with artificial neural networks." Computer Optics 42, no. 3 (July 25, 2018): 483–94. http://dx.doi.org/10.18287/2412-6179-2018-42-3-483-494.

Abstract:
In this paper, we present a digital watermarking method and associated algorithms that use a heteroassociative compressive transformation to embed a digital watermark bit sequence into blocks (fragments) of container images. A principal feature of the proposed method is the use of the heteroassociative compressing transformation – a mutual mapping with the compression of two neighboring image regions of an arbitrary shape. We also present the results of our experiments, namely the dependencies of quality indicators of thus created digital watermarks, which show the container distortion level, and the probability of digital watermark extraction error. In the final section, we analyze the performance of the proposed digital watermarking algorithms under various distortions and transformations aimed at destroying the hidden data, and compare these algorithms with the existing ones.
31

Oshikiri, Masahiro, Koichiro Deguchi, Yasutaka Tamura, and Takao Akatsuka. "Image Data Compression Using 2-Level Weighted Vector Quantization." Transactions of the Society of Instrument and Control Engineers 26, no. 2 (1990): 211–18. http://dx.doi.org/10.9746/sicetr1965.26.211.

32

Tian, Tao, Hanli Wang, Lingxuan Zuo, C. C. Jay Kuo, and Sam Kwong. "Just Noticeable Difference Level Prediction for Perceptual Image Compression." IEEE Transactions on Broadcasting 66, no. 3 (September 2020): 690–700. http://dx.doi.org/10.1109/tbc.2020.2977542.

33

Kim, Se Hun, Saibal Mukhopadhyay, and Marilyn Wolf. "System-Level Energy Optimization for Error-Tolerant Image Compression." IEEE Embedded Systems Letters 2, no. 3 (September 2010): 81–84. http://dx.doi.org/10.1109/les.2010.2060467.

34

Tian, Tao, Hanli Wang, Sam Kwong, and C. C. Jay Kuo. "Perceptual Image Compression with Block-Level Just Noticeable Difference Prediction." ACM Transactions on Multimedia Computing, Communications, and Applications 16, no. 4 (January 28, 2021): 1–15. http://dx.doi.org/10.1145/3408320.

Abstract:
A block-level perceptual image compression framework is proposed in this work, including a block-level just noticeable difference (JND) prediction model and a preprocessing scheme. Specifically speaking, block-level JND values are first deduced by utilizing the OTSU method based on the variation of block-level structural similarity values between two adjacent picture-level JND values in the MCL-JCI dataset. After the JND value for each image block is generated, a convolutional neural network–based prediction model is designed to forecast block-level JND values for a given target image. Then, a preprocessing scheme is devised to modify the discrete cosine transform coefficients during JPEG compression on the basis of the distribution of block-level JND values of the target test image. Finally, the test image is compressed by the max JND value across all of its image blocks in the light of the initial quality factor setting. The experimental results demonstrate that the proposed block-level perceptual image compression method is able to achieve 16.75% bit saving as compared to the state-of-the-art method with similar subjective quality. The project page can be found at https://mic.tongji.edu.cn/43/3f/c9778a148287/page.htm.
35

Gandor, Tomasz, and Jakub Nalepa. "First Gradually, Then Suddenly: Understanding the Impact of Image Compression on Object Detection Using Deep Learning." Sensors 22, no. 3 (February 1, 2022): 1104. http://dx.doi.org/10.3390/s22031104.

Abstract:
Video surveillance systems process high volumes of image data. To enable long-term retention of recorded images and because of the data transfer limitations in geographically distributed systems, lossy compression is commonly applied to images prior to processing, but this causes a deterioration in image quality due to the removal of potentially important image details. In this paper, we investigate the impact of image compression on the performance of object detection methods based on convolutional neural networks. We focus on Joint Photographic Expert Group (JPEG) compression and thoroughly analyze a range of the performance metrics. Our experimental study, performed over a widely used object detection benchmark, assessed the robustness of nine popular object-detection deep models against varying compression characteristics. We show that our methodology can allow practitioners to establish an acceptable compression level for specific use cases; hence, it can play a key role in applications that process and store very large image data.
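
The experimental knob in such studies is simply the encoder quality factor; here is a minimal Pillow sketch of the kind of controlled re-encoding sweep that produces the degraded inputs (the file path and quality grid are ours):

```python
import io
from PIL import Image

def jpeg_quality_sweep(path, qualities=(90, 70, 50, 30, 10)):
    """Re-encode one image at several JPEG quality factors and report the
    resulting sizes before feeding the variants to a detector."""
    img = Image.open(path).convert("RGB")
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        print(f"quality={q:3d}  size={buf.tell():8d} bytes")
```
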
36

Kavitha, T., and K. Jayasankar. "Ideal Huffman Code for Lossless Image Compression for Ubiquitous Access." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 2 (November 1, 2018): 765. http://dx.doi.org/10.11591/ijeecs.v12.i2.pp765-774.

Abstract:
Compression techniques are adopted to solve big data problems such as storage and transmission. The growth of the cloud computing and smartphone industries has led to the generation of huge volumes of digital data in various forms: audio, video, images, and documents. These digital data are generally compressed and stored in cloud storage environments, and efficient storage and retrieval of digital data using a good compression technique reduces cost. Compression techniques divide into lossy and lossless; here we consider lossless image compression, where minimizing the number of bits for encoding improves coding efficiency and yields high compression. Fixed-length coding cannot guarantee minimal bit length, so variable-length, prefix-free codes are preferred. However, existing compression models incur high computational overhead. To address this issue, this work presents an ideal and efficient modified Huffman technique that improves the compression factor by up to 33.44% for bi-level images and 32.578% for half-tone images. The average computation time for both encoding and decoding shows an improvement of 20.73% for bi-level images and 28.71% for half-tone images. The proposed work achieves an overall 2% increase in coding efficiency and reduces memory usage by 0.435% for bi-level images and 0.19% for half-tone images. The overall results show that the proposed model can be adopted to support ubiquitous access to digital data.
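
Since the paper's contribution is a modified Huffman code, here is a compact sketch of the classic construction it starts from (the textbook algorithm only; the paper's modification is not reproduced):

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix-free (Huffman) code table {symbol: bitstring}."""
    heap = [[count, i, {sym: ""}]
            for i, (sym, count) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol input
        return {sym: "0" for sym in heap[0][2]}
    while len(heap) > 1:
        lo = heapq.heappop(heap)            # two least frequent subtrees
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]
```
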
37

Dalui, Indrani, Surajit Goon, and Avisek Chatterjee. "A New Approach of Fractal Compression Using Color Image." International Journal of Engineering Technologies and Management Research 6, no. 6 (March 25, 2020): 74–71. http://dx.doi.org/10.29121/ijetmr.v6.i6.2019.395.

Abstract:
Fractal image compression depends on self-similarity, where one segment of an image resembles another segment of the same image. Fractal coding is usually applied to grey-level images. The simplest way to encode a color image with a gray-scale fractal image coding algorithm is to split the RGB color image into three channels, red, green and blue, and compress them independently by treating each color component as a gray-scale image. The colorimetric organization of RGB color images is examined through the calculation of the correlation integral of their three-dimensional histogram. For natural color images, as a typical behavior, the correlation integral is found to follow a power law, with a non-integer exponent characteristic of a given image. This behavior identifies a fractal or multiscale self-similar distribution of the colors contained in typical natural images. This finding of a possible fractal structure in the colorimetric organization of natural images complements other fractal properties previously observed in their spatial organization. Such fractal colorimetric properties may be useful for the characterization and modeling of natural images, and may contribute to advances in vision. The results obtained demonstrate that fractal-based compression works for color images as well as it does for grey-level images.
38

Pan, Chen, Guodong Ye, Xiaoling Huang, and Junwei Zhou. "Novel Meaningful Image Encryption Based on Block Compressive Sensing." Security and Communication Networks 2019 (November 30, 2019): 1–12. http://dx.doi.org/10.1155/2019/6572105.

Abstract:
This paper proposes a new image compression-encryption algorithm based on a meaningful image encryption framework. In block compressed sensing, the plain image is divided into blocks, and subsequently, each block is rendered sparse. The zigzag scrambling method is used to scramble pixel positions in all the blocks, and subsequently, dimension reduction is undertaken via compressive sensing. To ensure the robustness and security of our algorithm and the convenience of subsequent embedding operations, each block is merged, quantized, and disturbed again to obtain the secret image. In particular, landscape paintings have a characteristic hazy beauty, and secret images can be camouflaged in them to some extent. For this reason, in this paper, a landscape painting is selected as the carrier image. After a 2-level discrete wavelet transform (DWT) of the carrier image, the low-frequency and high-frequency coefficients obtained are further subjected to a discrete cosine transform (DCT). The DCT is simultaneously applied to the secret image as well to split it. Next, it is embedded into the DCT coefficients of the low-frequency and high-frequency components, respectively. Finally, the encrypted image is obtained. The experimental results show that, under the same compression ratio, the proposed image compression-encryption algorithm has better reconstruction effect, stronger security and imperceptibility, lower computational complexity, shorter time consumption, and lesser storage space requirements than the existing ones.
39

Irawati, Indrarini Dyah, Sugondo Hadiyoso, and Yuli Sun Hariyani. "Multi-wavelet level comparison on compressive sensing for MRI image reconstruction." Bulletin of Electrical Engineering and Informatics 9, no. 4 (August 1, 2020): 1461–67. http://dx.doi.org/10.11591/eei.v9i4.2347.

Abstract:
In this study, we propose compressive sampling for MRI reconstruction based on sparse representation using multi-wavelet transformation, comparing the performance of wavelet decomposition levels 1 through 4. A Gaussian random process is used to generate the measurement matrix. The algorithm used to reconstruct the image is . The experimental results show that using more wavelet levels can yield a higher compression ratio but requires longer processing time. MRI reconstruction results based on the peak signal to noise ratio (PSNR) and structural similarity index measure (SSIM) show that both values decrease as the wavelet decomposition level increases.
40

Siddeq, Mohammed M., and Marcos A. Rodrigues. "A novel Hexa data encoding method for 2D image crypto-compression." Multimedia Tools and Applications 79, no. 9-10 (December 12, 2019): 6045–59. http://dx.doi.org/10.1007/s11042-019-08405-3.

Abstract:
We propose a novel method for 2D image compression-encryption whose quality is demonstrated through accurate 2D image reconstruction at higher compression ratios. The method is based on the Discrete Wavelet Transform (DWT), where high-frequency sub-bands are processed by a novel Hexadata crypto-compression algorithm at the compression stage and a new fast matching search algorithm at the decoding stage. The novel crypto-compression method consists of four main steps: 1) a five-level DWT is applied to an image to zoom out the low-frequency sub-band and increase the number of high-frequency sub-bands, facilitating the compression process; 2) the Hexadata compression algorithm is applied to each high-frequency sub-band independently, using five different keys to reduce each sub-band to 1/6 of its original size; 3) a lookup table of probability data is built to enable decoding of the original high-frequency sub-bands; and 4) arithmetic coding is applied to the outputs of steps (2) and (3). At the decompression stage, a fast matching search algorithm is used to reconstruct all high-frequency sub-bands. We have tested the technique on 2D images, including frames taken from streaming videos (YouTube). Results show that the proposed crypto-compression method yields compression ratios of up to 99% with high perceptual image quality.
41

Beevi, Shabila, Mariya Thomas, Madhu S. Nair, and M. Wilscy. "Lossless Color Image Compression using Double Level RCT in BBWCA." Procedia Computer Science 93 (2016): 513–20. http://dx.doi.org/10.1016/j.procs.2016.07.242.

42

Alcaraz-Corona, Sergio, and Ramon M. Rodriguez-Dagnino. "Bi-Level Image Compression Estimating the Markov Order of Dependencies." IEEE Journal of Selected Topics in Signal Processing 4, no. 3 (June 2010): 605–11. http://dx.doi.org/10.1109/jstsp.2010.2048232.

43

Khan, Aftab, and Ashfaq Khan. "Lossless colour image compression using RCT for bi-level BWCA." Signal, Image and Video Processing 10, no. 3 (May 24, 2015): 601–7. http://dx.doi.org/10.1007/s11760-015-0783-3.

44

Yang, Guoan, and Nanning Zheng. "An Optimization Algorithm for Biorthogonal Wavelet Filter Banks Design." International Journal of Wavelets, Multiresolution and Information Processing 06, no. 01 (January 2008): 51–63. http://dx.doi.org/10.1142/s0219691308002215.

Abstract:
A new approach for designing Biorthogonal Wavelet Filter Banks (BWFBs) for image compression is presented in this paper. The approach has two steps. First, an optimal filter bank is designed in the theoretical sense, based on Vaidyanathan's coding gain criterion for SubBand Coding (SBC) systems. Then, this filter bank is optimized using the Peak Signal-to-Noise Ratio (PSNR) criterion in the JPEG2000 image compression system, resulting in a BWFB in the practical application sense. With this approach, a series of BWFBs for a specific class of image compression applications, such as grey-level images, can be designed quickly. New 7/5 BWFBs obtained with the approach are presented for image compression applications. Experiments show that the 7/5 BWFBs not only have excellent compression performance but are also computationally simple and well suited to VLSI hardware implementations, performing on par with the 9/7 filters of the JPEG2000 standard.
45

Kuznetsova, Polina, Vicente Ordonez, Tamara L. Berg, and Yejin Choi. "TreeTalk: Composition and Compression of Trees for Image Descriptions." Transactions of the Association for Computational Linguistics 2 (December 2014): 351–62. http://dx.doi.org/10.1162/tacl_a_00188.

Abstract:
We present a new tree based approach to composing expressive image descriptions that makes use of naturally occurring web images with captions. We investigate two related tasks: image caption generalization and generation, where the former is an optional subtask of the latter. The high-level idea of our approach is to harvest expressive phrases (as tree fragments) from existing image descriptions, then to compose a new description by selectively combining the extracted (and optionally pruned) tree fragments. Key algorithmic components are tree composition and compression, both integrating tree structure with sequence structure. Our proposed system attains significantly better performance than previous approaches for both image caption generalization and generation. In addition, our work is the first to show the empirical benefit of automatically generalized captions for composing natural image descriptions.
46

Zelmati, Omar, Boban Bondžulić, Boban Pavlović, Ivan Tot, and Saad Merrouche. "Study of subjective and objective quality assessment of infrared compressed images." Journal of Electrical Engineering 73, no. 2 (April 1, 2022): 73–87. http://dx.doi.org/10.2478/jee-2022-0011.

Abstract:
Given the lack of accessible benchmarks of compressed infrared images annotated by human subjects, this work presents a new database for studying both subjective and objective image quality assessment (IQA) on compressed long wavelength infrared (LWIR) images. The database contains 20 reference (pristine) images and 200 distorted (degraded) images obtained with the best-known compression algorithms used in the multimedia and communication fields, namely JPEG and JPEG-2000. Each compressed image is evaluated by 31 subjects having different levels of experience with LWIR images. Mean opinion scores (MOS) and natural scene statistics (NSS) of pristine and compressed images are elaborated to study the performance of the database. Five analyses are conducted on the collected images and subjective scores: by compression type, by file size, by reference image, by quality level, and by subject. Moreover, a wide set of objective IQA metrics is applied to the images, and the obtained scores are compared with the collected subjective scores. Results show that objective IQA measures correlate with human subjective results with a degree of agreement of up to 95%, so this benchmark is promising for improving existing IQA measures and developing new ones for compressed LWIR images. Because it is built from real-world surveillance images, on which we analyze how compression and quality level affect image quality, this database is primarily suitable for (military and civilian) surveillance applications. The database is accessible via the link: https://github.com/azedomar/compressed-LWIR-images-IQA-database. As a follow-up to this work, an extension of the database is underway to study other types of distortion in addition to compression.
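
The reported "degree of agreement" between objective metrics and subjective scores is conventionally computed as a correlation; here is a minimal scipy sketch of the two usual indices (our choice of PLCC and SROCC; the paper may report others as well):

```python
from scipy.stats import pearsonr, spearmanr

def iqa_agreement(objective_scores, mos):
    """Agreement between an objective IQA metric and mean opinion scores."""
    plcc, _ = pearsonr(objective_scores, mos)    # linear correlation
    srocc, _ = spearmanr(objective_scores, mos)  # rank-order correlation
    return plcc, srocc
```
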
47

Ayachi, R. El, B. Bouikhalene, and M. Fakir. "New Image Compression Algorithm using Haar Wavelet Transform." International Journal of Informatics and Communication Technology (IJ-ICT) 6, no. 1 (June 22, 2017): 43. http://dx.doi.org/10.11591/ijict.v6i1.pp43-48.

Abstract:
Compression is an image processing operation that changes the representation of information in order to reduce storage requirements and transmission time. In this work we propose a new image compression algorithm based on Haar wavelets, introducing a compression coefficient that controls the compression levels. This method reduces the complexity of obtaining the desired compression level: it works from the original image only, without computing intermediate levels.
48

Dudhagara, Chetan R., and Mayur M. Patel. "A Comparative Study and Analysis of EZW and SPIHT methods for Wavelet based Image Compression." Oriental journal of computer science and technology 10, no. 3 (August 10, 2017): 669–73. http://dx.doi.org/10.13005/ojcst/10.03.17.

Abstract:
In recent years, the use of digital media has increased widely, bringing huge problems of storage, manipulation, and transmission of data over the internet. Digital media such as images, audio, and video require large memory space, so it is necessary to compress digital data so that it requires less memory and less bandwidth for transmission over a network. Image compression techniques reduce the storage requirement and play an important role in transferring data such as images over the network. Two methods are applied in this paper to the Barbara image: Set Partitioning In Hierarchical Trees (SPIHT) and Embedded Zerotree Wavelet (EZW) compression. Several parameters are used to compare these techniques: Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Compression Ratio (CR) at different levels of decomposition.
49

Han, Chong, Songtao Zhang, Biao Zhang, Jian Zhou, and Lijuan Sun. "A Distributed Image Compression Scheme for Energy Harvesting Wireless Multimedia Sensor Networks." Sensors 20, no. 3 (January 25, 2020): 667. http://dx.doi.org/10.3390/s20030667.

Abstract:
As an emerging technology, edge computing will make traditional sensor networks more effective and motivate a series of new applications. Meanwhile, limited battery power directly affects the performance and survival time of sensor networks, and as an extension of traditional sensor networks, Wireless Multimedia Sensor Networks (WMSNs) have even more prominent energy consumption. For image compression and transmission in WMSNs, considering the use of solar energy to replenish node energy, a distributed image compression scheme based on solar energy harvesting is proposed. Two-level cluster management is adopted: the camera-node/normal-node cluster lets camera nodes gather and send collected raw images to the corresponding normal nodes for compression, and the normal-node cluster lets the normal nodes send the compressed images to the corresponding cluster head node. Re-clustering and dynamic adjustment methods for normal nodes are proposed to adaptively adjust the operation mode in the working chain. Simulation results show that the proposed distributed image compression scheme can effectively balance the energy consumption of the network. Compared with existing image transmission schemes, the proposed scheme can transmit more and higher-quality images and ensure the survival of the network.
50

Al-Maadeed, Somaya, Afnan Al-Ali, and Turki Abdalla. "A New Chaos-Based Image-Encryption and Compression Algorithm." Journal of Electrical and Computer Engineering 2012 (2012): 1–11. http://dx.doi.org/10.1155/2012/179693.

Abstract:
We propose a new and efficient method to develop secure image-encryption techniques. The new algorithm combines two techniques: encryption and compression. In this technique, a wavelet transform was used to decompose the image and decorrelate its pixels into approximation and detail components. The more important component (the approximation component) is encrypted using a chaos-based encryption algorithm. This algorithm produces a cipher of the test image that has good diffusion and confusion properties. The remaining components (the detail components) are compressed using a wavelet transform. This proposed algorithm was verified to provide a high security level. A complete specification for the new algorithm is provided. Several test images are used to demonstrate the validity of the proposed algorithm. The results of several experiments show that the proposed algorithm for image cryptosystems provides an efficient and secure approach to real-time image encryption and transmission.
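
As a toy illustration of the encrypt-the-approximation idea, here is a logistic-map keystream XORed with the quantized approximation band (parameters and names are ours; the paper's actual chaos cipher differs). Because XOR is its own inverse, the same routine with the same key decrypts.

```python
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.9999):
    """Byte keystream from the logistic map x <- r*x*(1-x); (x0, r) is the key."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt_approximation(approx):
    """XOR the quantized wavelet approximation band with the keystream."""
    flat = approx.astype(np.uint8).ravel()
    return (flat ^ logistic_keystream(flat.size)).reshape(approx.shape)
```
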