
Journal articles on the topic 'Image compression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Image compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Saini, Neha, Naveen Dhillion, and Manit Kapoor. "A PAPER ON A COMPARATIVE STUDY BLOCK TRUNCATING CODING, WAVELET, FRACTAL IMAGE COMPRESSION & EMBEDDED ZERO TREE." INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY 5, no. 7 (July 15, 2016): 1052–61. https://doi.org/10.5281/zenodo.57987.

Abstract:
Many different image compression techniques currently exist for the compression of different types of images. Image compression is fundamental to the efficient and cost-effective use of digital imaging technology and applications. In this study, image compression was applied to compress and decompress images at various compression ratios. Compressing an image is significantly different from compressing raw binary data, so dedicated compression algorithms are used for images. Fractal image compression, in particular, has been widely used. We undertake a study of the performance differences among transform coding techniques, i.e., Block Truncating Coding, wavelet, fractal, and Embedded Zero Tree image compression. This paper focuses on important features of transform coding in the compression of still images, including the extent to which image quality is degraded by the process of compression and decompression. The above techniques have been successfully used in many applications, and images obtained with them yield very good results. The numerical analysis of such algorithms is carried out by measuring Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR). For the implementation of this work we use the Image Processing Toolbox in MATLAB.
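The PSNR and CR figures used in this and many of the following studies are straightforward to reproduce; a minimal NumPy sketch (the 8-bit peak of 255 and the byte-count-based ratio are our assumptions, not taken from the paper):

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal to Noise Ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """CR > 1 means the coder actually shrank the data."""
    return original_bytes / compressed_bytes
```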
2

Saudagar, Abdul Khader Jilani. "Biomedical Image Compression Techniques for Clinical Image Processing." International Journal of Online and Biomedical Engineering (iJOE) 16, no. 12 (October 19, 2020): 133. http://dx.doi.org/10.3991/ijoe.v16i12.17019.

Abstract:
Image processing is widely used in the domain of biomedical engineering, especially for the compression of clinical images. Clinical diagnosis receives high importance, as it involves handling patients' data more accurately and wisely when treating patients remotely. Many researchers have proposed different methods for the compression of medical images using Artificial Intelligence techniques. Developing efficient automated systems for the compression of medical images in telemedicine is the focal point of this paper. Three major approaches are proposed for medical image compression: image compression using neural networks, fuzzy logic and neuro-fuzzy logic to preserve a higher spectral representation and maintain finer edge information, and relational coding of inter-band coefficients to achieve high compression. The developed image coding model is evaluated over various quality factors. From the simulation results it is observed that the proposed image coding system can achieve efficient compression performance compared with existing block coding and JPEG coding approaches, even in resource-constrained environments.
3

Kanimozhirajasekaran, and P. D. Sathya. "AN EFFICIENT OPTIMIZATION TECHNIQUE FOR FRACTAL IMAGE COMPRESSION OF MEDICAL IMAGE." GLOBAL JOURNAL OF ENGINEERING SCIENCE AND RESEARCHES 5, no. 2 (August 27, 2018): 97–103. https://doi.org/10.5281/zenodo.1404141.

Abstract:
Medical images play a vital role in the area of medicine. It is important to store medical images for future reference, so there is a need to compress medical images for storage and communication purposes. Over the last few decades, many image compression methods have been introduced; they give high compression ratios at the cost of image quality. Medical images should always be stored in a lossless format, and there are several lossless compression techniques from which the original images can be restored. The objective of image compression is to reduce the redundancy of the image and to store or transmit data in an efficient form. Among the compression algorithms currently in use in medical imaging is Fractal Image Compression (FIC). FIC techniques commonly use optimization techniques to find the optimal solution. The aim of FIC is to divide the image into pieces or sections and then find self-similar ones. It produces a high compression ratio and fast decompression in a short amount of time. In this paper, a Flower Pollination based optimization approach is used for fractal image compression. This optimization technique effectively reduces the encoding time while retaining the quality of the image. Here, the Flower Pollination Algorithm (FPA) is compared with the Genetic Algorithm (GA), and their performances are analyzed in terms of compression ratio, encoding time and PSNR (Peak Signal-to-Noise Ratio) value.
4

Khan, Sulaiman, Shah Nazir, Anwar Hussain, Amjad Ali, and Ayaz Ullah. "An efficient JPEG image compression based on Haar wavelet transform, discrete cosine transform, and run length encoding techniques for advanced manufacturing processes." Measurement and Control 52, no. 9-10 (October 19, 2019): 1532–44. http://dx.doi.org/10.1177/0020294019877508.

Abstract:
Image compression plays a key role in image transmission and storage. Image compression aims to reduce the size of the image with no loss of significant information and no loss of quality in the image; it offers a compact representation of the information contained in the image. Image compression exists in lossy and lossless forms. Even though image compression has a prominent role, certain shortcomings still exist in the available techniques. This paper presents an approach combining Haar wavelet transform, discrete cosine transform, and run length encoding techniques for advanced manufacturing processes with high image compression rates. These techniques work by converting an image (signal) into half of its length, known as "detail levels"; then the compression process is done. For simulation purposes of the proposed research, the images are segmented into 8 × 8 blocks and the inverse (decoding) operation is then performed on each processed 8 × 8 block to reconstruct the original image. The same experiments were done with two other algorithms, that is, the discrete cosine transform and run length encoding schemes. The proposed system is tested by comparing the results of all three algorithms on different images. The comparison among these techniques is drawn on the basis of peak signal to noise ratio and compression ratio. The results obtained from the experiments show that the Haar wavelet transform performs very well, with an accuracy of 97.8%, and speeds up the compression and decompression process of the image with no loss of information or image quality. The proposed study can easily be implemented in industries for the compression of images. These compressed images are suggested for multiple purposes, such as image compression for metrology as measurement material in advanced manufacturing processes, low storage and bandwidth requirements, and compressing multimedia data such as audio and video formats.
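The "half length / detail levels" idea the authors describe is a single Haar analysis step; a hedged sketch on one 8 × 8 block (the averaging normalization is our choice, since the paper does not specify one):

```python
import numpy as np

def haar2d_level(block: np.ndarray) -> np.ndarray:
    """One 2-D Haar analysis step: LL, LH, HL, HH quadrants."""
    x = block.astype(np.float64)
    # Rows: pairwise averages (approximation) and differences (detail).
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    x = np.hstack([lo, hi])
    # Columns: the same split, yielding the four subbands.
    lo = (x[0::2, :] + x[1::2, :]) / 2.0
    hi = (x[0::2, :] - x[1::2, :]) / 2.0
    return np.vstack([lo, hi])

block = np.arange(64, dtype=np.float64).reshape(8, 8)  # a toy 8x8 block
coeffs = haar2d_level(block)  # LL quadrant in the top-left, details elsewhere
```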
5

David S, Alex, Almas Begum, and Ravikumar S. "Content clustering for MRI Image compression using PPAM." International Journal of Engineering & Technology 7, no. 1.7 (February 5, 2018): 126. http://dx.doi.org/10.14419/ijet.v7i1.7.10631.

Abstract:
Image compression helps to save memory and data utilization when transferring images between nodes. Compression is one of the key techniques in medical imaging; both lossy and lossless compression are used depending on the application. In medical imaging, each and every pixel component is very important, hence it is natural to choose lossless compression for medical images. MRI images are compressed after processing. In this paper we have used the PPAM method to compress MRI images; for retrieval of the compressed image, a content clustering method is used.
6

Abd-Elhafiez, Walaa M., Wajeb Gharibi, and Mohamed Heshmat. "An efficient color image compression technique." TELKOMNIKA Telecommunication, Computing, Electronics and Control 18, no. 5 (November 17, 2020): 2371–77. https://doi.org/10.12928/TELKOMNIKA.v18i5.8632.

Abstract:
We present a new image compression method to improve the visual perception of decompressed images and achieve a higher image compression ratio. This method balances compression rate against image quality by compressing the essential parts of the image, the edges, at higher quality; the key subject (edges) is of more significance than the background (non-edge) parts of the image. Taking into consideration the value of image components and the effect of smoothness in image compression, this method classifies image components as edge or non-edge. Low-quality lossy compression is applied to non-edge components, whereas high-quality lossy compression is applied to edge components. The outcomes show that our suggested method is efficient in terms of compression ratio, bits per pixel and peak signal to noise ratio.
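The abstract does not give the edge detector; below is a sketch of the block classification step under assumed choices (gradient-magnitude energy, 8 × 8 blocks, a hand-picked threshold), not the authors' exact classifier:

```python
import numpy as np

def classify_blocks(gray: np.ndarray, block: int = 8, thresh: float = 12.0) -> np.ndarray:
    """Boolean map: True where a block contains significant edges."""
    gy, gx = np.gradient(gray.astype(np.float64))
    energy = np.abs(gx) + np.abs(gy)  # crude gradient-magnitude proxy
    h, w = gray.shape
    edge_map = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            tile = energy[i*block:(i+1)*block, j*block:(j+1)*block]
            edge_map[i, j] = tile.mean() > thresh  # edge block -> compress gently
    return edge_map
```

Blocks flagged True would then be routed to the high-quality coder, the rest to the low-quality one.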
7

Katayama, O., S. Ishihama, K. Namiki, and I. Ohi. "Color Changes in Electronic Endoscopic Images Caused by Image Compression." Diagnostic and Therapeutic Endoscopy 4, no. 1 (January 1, 1997): 43–50. http://dx.doi.org/10.1155/dte.4.43.

Abstract:
In recent years, the recording of color still images on magneto-optical video disks has been increasingly used as a method for recording electronic endoscopic images. In this case, image compression is often used to reduce the volume and cost of recording media and also to minimize the time required for image recording and playback. With this in mind, we recorded 8 images onto a magneto-optical video disk in 4 image compression modes (no compression, weak compression, moderate compression, and strong compression) using the Joint Photographic Experts Group (JPEG) system, a widely used and representative method for compressing color still images, in order to determine the relationship between the degree of image compression and the color information in electronic endoscopic images. The acquired images were transferred to an image processor using an offline system. A total of 10 regions of interest (ROIs) were selected, and red (R), green (G), and blue (B) images were obtained using the different compression modes. From histograms generated for these images, the mean densities of R, G, and B in each ROI were measured and analyzed. The results revealed that color changes were greater for B, which had the lowest density, than for R or G as the degree of compression was increased.
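This measurement protocol is easy to emulate with Pillow; a sketch in which JPEG quality settings stand in for the four recording modes and the ROI coordinates are hypothetical:

```python
import io
import numpy as np
from PIL import Image

def roi_means(path: str, rois, qualities=(95, 75, 50, 25)):
    """Mean R, G, B inside each ROI after re-encoding at several JPEG qualities.

    rois: list of (left, top, right, bottom) pixel boxes (hypothetical coordinates).
    """
    src = Image.open(path).convert("RGB")
    results = {}
    for q in qualities:
        buf = io.BytesIO()
        src.save(buf, format="JPEG", quality=q)  # simulate a compression mode
        buf.seek(0)
        arr = np.asarray(Image.open(buf))
        results[q] = [arr[t:b, l:r].reshape(-1, 3).mean(axis=0)
                      for (l, t, r, b) in rois]
    return results
```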
8

Kaur, Gaganpreet, Hitashi, and Gurdev Singh. "PERFORMANCE EVALUATION OF IMAGE QUALITY BASED ON FRACTAL IMAGE COMPRESSION." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 2, no. 1 (February 2, 2012): 20–27. http://dx.doi.org/10.24297/ijct.v2i1.2608.

Abstract:
Fractal techniques for image compression have recently attracted a great deal of attention. Fractal image compression is a relatively recent technique based on the representation of an image by a contractive transform, on the space of images, for which the fixed point is close to the original image. This broad principle encompasses a very wide variety of coding schemes, many of which have been explored in the rapidly growing body of published research. Unfortunately, little in the way of practical algorithms or techniques has been published. Here we present a technique for image compression that is based on a very simple type of iterative fractal. In our algorithm a wavelet transform (quadrature mirror filter pyramid) is used to decompose an image into bands containing information from different scales (spatial frequencies) and orientations. The conditional probabilities between these different scale bands are then determined and used as the basis for a predictive coder. We undertake a study of the performance of fractal image compression. This paper focuses on important features of compression of still images, including the extent to which image quality is degraded by the process of compression and decompression. The numerical experiment is done by considering various types of images and applying fractal image compression to compress them. It was found that fractal compression yields better results than other compression techniques: it provides a better peak signal to noise ratio, but takes a higher encoding time. The numerical results are calculated in Matlab.
9

Cardone, Barbara, Ferdinando Di Martino, and Salvatore Sessa. "Fuzzy Transform Image Compression in the YUV Space." Computation 11, no. 10 (October 1, 2023): 191. http://dx.doi.org/10.3390/computation11100191.

Abstract:
This research proposes a new image compression method based on the F1-transform which improves the quality of the reconstructed image without increasing the coding/decoding CPU time. The advantage of compressing color images in the YUV space is due to the fact that while the three bands Red, Green and Blue are equally perceived by the human eye, in YUV space most of the image information perceived by the human eye is contained in the Y band, as opposed to the U and V bands. Using this advantage, we construct a new color image compression algorithm based on F1-transform in which the image compression is accomplished in the YUV space, so that better-quality compressed images can be obtained without increasing the execution time. The results of tests performed on a set of color images show that our color image compression method improves the quality of the decoded images with respect to the image compression algorithms JPEG, F1-transform on the RGB color space and F-transform on the YUV color space, regardless of the selected compression rate and with comparable CPU times.
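The RGB-to-YUV step the method relies on is a fixed linear transform; a sketch using the ITU-R BT.601 analog coefficients (the paper does not state which YUV variant it uses, so this is an assumption):

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """BT.601 RGB -> YUV; the Y plane carries most perceptual information."""
    m = np.array([[ 0.299,    0.587,    0.114   ],
                  [-0.14713, -0.28886,  0.436   ],
                  [ 0.615,   -0.51499, -0.10001 ]])
    return rgb.astype(np.float64) @ m.T

# The compression idea: code Y with fine parameters, U and V coarsely.
```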
10

Kryvenko, Sergii, Vladimir Lukin, Boban Bondžulić, and Nenad Stojanović. "COMPRESSION OF NOISY GRAYSCALE IMAGES WITH COMPRESSION RATIO ANALYSIS." Advanced Information Systems 9, no. 2 (April 30, 2025): 68–74. https://doi.org/10.20998/2522-9052.2025.2.09.

Abstract:
The object of the study is the process of lossy compression of noisy images by the better portable graphics (BPG) encoder. The subject of the study is a method for adaptive selection of the coder parameter Q depending on noise intensity and image complexity. The goal of the study is to consider the basic characteristics of lossy compression of remote sensing images contaminated by additive white Gaussian noise and to give recommendations on preferable settings of Q. Methods used: numerical simulation and verification on test images. Results obtained: 1) the dependencies of compression ratio on Q are monotonically increasing functions; 2) their characteristics are strongly dependent on noise intensity and image complexity; 3) dependencies of the logarithm of CR on Q contain information on the possible existence and position of an optimal operation point for compressed noisy images; 4) compression ratios for large Q contain information on image complexity with low sensitivity to noise presence and intensity; 5) it is possible to get useful information from the dependences of compression ratio on Q. Conclusions: the results of this research allow: 1) estimating image complexity; 2) adapting Q to noise intensity and image complexity.
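A CR-versus-Q curve of the kind analyzed here can be gathered with the reference bpgenc tool; a sketch assuming bpgenc is on PATH and taking the PNG file size as the "original" size (the paper's exact baseline and Q grid may differ):

```python
import os
import subprocess

def cr_versus_q(png_path: str, qs=range(1, 52, 5)):
    """Sweep the BPG quantizer Q and record the compression ratio."""
    raw = os.path.getsize(png_path)
    curve = []
    for q in qs:
        out = f"/tmp/test_q{q}.bpg"
        subprocess.run(["bpgenc", "-q", str(q), "-o", out, png_path], check=True)
        curve.append((q, raw / os.path.getsize(out)))
    return curve  # per the paper, CR(Q) should be monotonically increasing
```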
11

Khatun, Shamina, and Anas Iqbal. "A Review of Image Compression Using Fractal Image Compression with Neural Network." International Journal of Innovative Research in Computer Science & Technology 6, no. 2 (March 31, 2018): 9–11. http://dx.doi.org/10.21276/ijircst.2018.6.2.1.

12

Vamsikrishna, Mangalapalli, Oggi Sudhakar, Bhagya Prasad Bugge, Asileti Suneel Kumar, Blessy Thankachan, K. B. V. S. R. Subrahmanyam, Natha Deepthi, and Praveen Mande. "Region based lossless compression for digital images using entropy coding." Indonesian Journal of Electrical Engineering and Computer Science 38, no. 3 (June 1, 2025): 1870. https://doi.org/10.11591/ijeecs.v38.i3.pp1870-1879.

Abstract:
Image compression is a method for reducing video and image storage space; moreover, enhancing the performance of the transmission and storage processes is important. Region based coding is an important technique for compressing and sending medical images. In the medical field, lossless compression can help telemedicine applications achieve high efficiency, but it affects image quality and takes a long time to encode. As a result, this study proposes region-based lossless compression for digital images using entropy coding, where the best performance is achieved by segmenting the important areas. In this case, an integer wavelet transform (IWT) is applied after the ROI of the image is manually selected. The IWT compression method reversibly reconstructs the original image to the required quality, and entropy coding is used to enhance the compression quality. Various quantitative metrics are determined by passing images of varying sizes and formats. The simulation results demonstrate that the region based lossless compression technique utilizing range blocks and iterations resulted in reduced encoding time and improved quality.
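The reversibility that makes an IWT suitable for lossless ROI coding is easiest to see in the lifting form of the integer Haar (S-) transform; a minimal 1-D sketch (the 2-D version applies the same step to rows and columns):

```python
import numpy as np

def iwt_haar_fwd(x: np.ndarray):
    """Integer Haar via lifting: exactly invertible, hence lossless."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = b - a            # detail coefficients
    s = a + (d >> 1)     # approximation (floor average)
    return s, d

def iwt_haar_inv(s: np.ndarray, d: np.ndarray) -> np.ndarray:
    a = s - (d >> 1)
    b = d + a
    out = np.empty(s.size + d.size, dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

x = np.array([12, 15, 200, 198, 7, 7, 90, 64])
s, d = iwt_haar_fwd(x)
assert np.array_equal(iwt_haar_inv(s, d), x)  # perfect reconstruction
```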
13

Paul, Okuwobi Idowu, and Yong Hua Lu. "A New Approach in Digital Image Compression Using Unequal Error Protection (UEP)." Applied Mechanics and Materials 704 (December 2014): 403–7. http://dx.doi.org/10.4028/www.scientific.net/amm.704.403.

Abstract:
This paper proposes new algorithms for the compression of digital images, especially at the encoding stage of compressive sensing. The research considers the fact that a certain region of a given image is more important in most applications. The first algorithm, proposed for the encoding stage of Compressive Sensing (CS), exploits the known structure of transform image coefficients. It makes use of the unequal error protection (UEP) principle, which is widely used in the area of error control coding. The second algorithm exploits the UEP principle to recover the more important part of an image with higher quality while the rest of the image is not significantly degraded. The proposed algorithms are shown to be successful in digital image compression where images are represented in the spatial and transform domains, and are recommended for use in image compression.
14

Sujatha, T., and K. Selvam. "LOSSLESS IMAGE COMPRESSION USING DIFFERENT ENCODING ALGORITHM FOR VARIOUS MEDICAL IMAGES." ICTACT Journal on Image and Video Processing 12, no. 4 (May 1, 2022): 2704–9. https://doi.org/10.21917/ijivp.2022.0384.

Abstract:
In the medical industry, the amount of data that can be collected and kept is currently increasing. As a result, in order to handle these large amounts of data efficiently, compression methods must be re-examined while taking algorithm complexity into account. An image processing strategy should be explored to eliminate duplicate image content, thus boosting the capability to retain or transport data in the best possible manner. Image Compression (IC) is a method of compressing images as they are stored and processed. In a lossless image compression technique, the information is preserved, which allows exact image reconstruction from the compressed data and retains image quality to the highest possible extent, but it does not significantly decrease the size of the image. In this research work, encoding algorithms are applied to various medical images, such as brain images, dental X-ray images, hand X-ray images, breast mammogram images and skin images, to minimize the bit size of the image pixels using different encoding algorithms, namely Huffman, Lempel-Ziv-Welch (LZW) and Run Length Encoding (RLE), for effective compression and decompression without any quality loss in the reconstructed image. The image processing toolbox is used to apply the compression algorithms in MATLAB. The compression efficiency for the various medical images is assessed using the different encoding techniques and performance indicators such as Compression Ratio (CR) and Compression Factor (CF). The LZW technique compresses binary images; however, it fails to generate a lossless image in this implementation. The Huffman and RLE algorithms have a lower CR value, which means they compress data more efficiently than LZW, although RLE has a larger CF value than LZW and Huffman. When a lower CR and a higher CF are recorded, RLE coding becomes more viable. Finally, using state-of-the-art methodologies for the sample medical images, performance measures such as PSNR and MSE are computed and assessed.
15

Di Martino, Ferdinando, and Salvatore Sessa. "A Multilevel Fuzzy Transform Method for High Resolution Image Compression." Axioms 11, no. 10 (October 13, 2022): 551. http://dx.doi.org/10.3390/axioms11100551.

Abstract:
The Multilevel Fuzzy Transform technique (MF-tr) is a hierarchical image compression method based on Fuzzy Transform, which is successfully used to compress images and manage the information loss of the reconstructed image. Unlike other lossy image compression methods, it ensures that the quality of the reconstructed image is not lower than a prefixed threshold. However, this method is not suitable for compressing massive images due to the high processing times and memory usage. In this paper, we propose a variation of MF-tr for the compression of massive images. The image is divided into tiles, each of which is individually compressed using MF-tr; thereafter, the image is reconstructed by merging the decompressed tiles. Comparative tests performed on remote sensing images show that the proposed method provides better performance than MF-tr in terms of compression rate and CPU time. Moreover, comparison tests show that our method reconstructs the image with CPU times that are at least two times less than those obtained using the MF-tr algorithm.
16

Mohammed, Hind Rostom, and Ameer Abd Al-Razaq. "SWF Image Compression by Evaluating objects compression ratio." Journal of Kufa for Mathematics and Computer 1, no. 2 (October 30, 2010): 105–18. http://dx.doi.org/10.31642/jokmc/2018/010209.

Abstract:
This work discusses object compression ratios for Macromedia Flash (SWF) images using wavelet functions and their effect on SWF image compression. We classify the objects in a Macromedia Flash (SWF) image into nine types: Action, Font, Image, Sound, Text, Button, Frame, Shape and Sprite. The work is particularly targeted towards the best case of wavelet image compression, using the Haar wavelet transform with the idea of minimizing computational requirements by applying different compression thresholds to the wavelet coefficients; results are obtained in fractions of a second, improving the quality of the reconstructed image. The results obtained are promising with respect to reconstructed image quality and the preservation of significant image details while achieving high compression rates, whereas the DB4 wavelet transform achieves higher compression ratios without preserving image quality.
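The coefficient-thresholding step described here can be sketched with PyWavelets (the hard-threshold mode and decomposition depth below are our assumptions, not the paper's settings):

```python
import numpy as np
import pywt

def haar_compress(gray: np.ndarray, threshold: float, level: int = 3) -> np.ndarray:
    """Zero out small Haar coefficients, then reconstruct the image."""
    coeffs = pywt.wavedec2(gray.astype(np.float64), "haar", level=level)
    kept = [coeffs[0]] + [
        tuple(pywt.threshold(c, threshold, mode="hard") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(kept, "haar")
```

A larger threshold zeros more coefficients (higher compression after entropy coding) at the cost of reconstruction quality, which is exactly the trade-off the paper explores.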
17

Mohammed, Sajaa G., Safa S. Abdul-Jabbar, and Faisel G. Mohammed. "Art Image Compression Based on Lossless LZW Hashing Ciphering Algorithm." Journal of Physics: Conference Series 2114, no. 1 (December 1, 2021): 012080. http://dx.doi.org/10.1088/1742-6596/2114/1/012080.

Abstract:
Color image compression is a good way to encode digital images by decreasing the number of bits needed to represent the image. The main objectives are to reduce storage space, reduce transportation costs and maintain good quality. In the current research work, a simple, effective methodology is proposed for compressing color art digital images and obtaining a low bit rate: the matrix resulting from a scalar quantization process (reducing the number of bits from 24 to 8 bits) is compressed using displacement coding, and the remainder is then compressed using the Lempel-Ziv-Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image. Macroscopic and quantitative experimental results on color art images show that the proposed methodology gives reconstructed images with a high PSNR value compared to standard image compression techniques.
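The 24-to-8-bit scalar quantization step can be sketched as a fixed bit allocation per channel; the 3-3-2 split below is a common choice and an assumption on our part, not the paper's exact quantizer:

```python
import numpy as np

def quantize_332(rgb: np.ndarray) -> np.ndarray:
    """Scalar-quantize 24-bit RGB (uint8 per channel) to one byte per pixel."""
    r = (rgb[..., 0] >> 5).astype(np.uint8)   # keep 3 most significant bits
    g = (rgb[..., 1] >> 5).astype(np.uint8)   # 3 bits for green
    b = (rgb[..., 2] >> 6).astype(np.uint8)   # 2 bits for blue
    return (r << 5) | (g << 2) | b            # index image, ready for LZW/RLE
```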
18

Lalithambigai, B., and S. Chitra. "Segment Based Compressive Sensing (SBCS) of Color Images for Internet of Multimedia Things Applications." Journal of Medical Imaging and Health Informatics 12, no. 1 (January 1, 2022): 1–6. http://dx.doi.org/10.1166/jmihi.2022.3848.

Abstract:
Telemedicine is one of the IoMT applications, transmitting medical images from hospitals to remote medical centers for diagnosis and treatment. To share this multimedia content across the internet, storage and transmission become a challenge because of its huge volume, and new compression techniques are being continuously introduced to circumvent this issue. Compressive sensing (CS) is a new paradigm in signal compression. Block based compressive sensing (BCS) is a standard and commonly used technique in color image compression. However, BCS suffers from block artifacts, and during transmission mistakes can be introduced that affect the BCS coefficients, degrading the reconstructed image's quality. The performance of BCS at low compression ratios is also poor. To overcome these limitations, without dividing the image into blocks, the image matrix is considered as a whole and compressively sensed by segment based compressive sensing (SBCS). This novel strategy, offered in this article, enables efficient compression of digital color images at low compression ratios. Performance metrics, namely the peak signal to noise ratio (PSNR), the mean structural similarity index (MSSIM), and the colour perception metric delta E, are computed and compared to those obtained using block-based compressive sensing (BBCS). The results show that SBCS performs better than BBCS.
19

Wang, Yan Wei, and Hui Li Yu. "Wavelet Transforms of Image Reconstruction Based on Compressed Sampling." Applied Mechanics and Materials 58-60 (June 2011): 1920–25. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1920.

Abstract:
A compressive sensing technique for image signals is adopted in this paper to cope with image compression and restoration. First of all, a wavelet transform is applied in image compression to preserve the image structure; secondly, a sparse matrix is obtained using the required wavelet ratio; thirdly, the compressed image is used to restore the original image. Experimental results show that the proposed algorithm is effective and compares favorably with existing techniques.
20

Mansyuri, Umar. "KOMPRESI DATA TEKS DENGAN METODE RUN LENGTH ENCODING" [Text Data Compression Using the Run Length Encoding Method]. Jurnal Ilmiah Sistem Informasi 1, no. 2 (December 12, 2021): 102–9. http://dx.doi.org/10.46306/sm.v1i2.13.

Abstract:
One data compression method is Run Length Encoding (RLE), used here especially for image data. The RLE method is one of the simplest lossless data compression schemes and is based on a simple principle of data encoding; it is very suitable for compressing data containing repeated values, such as simple graphic images. The compressed data are 28 RGB (Red, Green, Blue) images and 28 grayscale images in jpg, png, bmp, and tiff formats. The image data are compressed with an encoder and decoder program using the RLE algorithm in MATLAB. The RLE method is said to be effective in compressing image data if the compression ratio is less than 100%, which happens when there is a lot of color repetition in the pixels, and ineffective if the compression ratio is more than 100%, when there is little repetition of colors in the pixels. Of the 28 RGB images tested, the RLE method was effective on 1 image and not effective on 27 images. For the 28 grayscale images tested, the RLE method was effective on 6 images and not effective on 22 images.
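The effectiveness criterion the author uses (ratio below 100%) is easy to check with a toy RLE coder; a sketch assuming each run is stored as a (length, value) byte pair:

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """(run_length, value) pairs; runs are capped at 255 to fit one byte."""
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs

def rle_ratio(data: bytes) -> float:
    """Compression ratio in percent, as in the paper: < 100% means RLE helped."""
    return 100.0 * (2 * len(rle_encode(data))) / len(data)

print(rle_ratio(b"\x00" * 500))        # long runs -> far below 100%
print(rle_ratio(bytes(range(256))))    # no repetition -> 200%, ineffective
```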
21

Shyamala, N., and S. Geetha. "Compression of Medical Images Using Wavelet Transform and Metaheuristic Algorithm for Telemedicine Applications." International Journal of Electrical and Electronics Research 10, no. 2 (June 30, 2022): 161–66. http://dx.doi.org/10.37391/ijeer.100219.

Abstract:
Medical image compression becomes necessary to efficiently handle huge number of medical images for storage and transmission purposes. Wavelet transform is one of the popular techniques widely used for medical image compression. However, these methods have some limitations like discontinuity which occurs when reducing image size employing thresholding method. To overcome this, optimization method is considered with the available compression methods. In this paper, a method is proposed for efficient compression of medical images based on integer wavelet transform and modified grasshopper optimization algorithm. Medical images are pre-processed using hybrid median filter to discard noise and then decomposed using integer wavelet transform. The proposed method employed modified grasshopper optimization algorithm to select the optimal coefficients for efficient compression and decompression. Four different imaging techniques, particularly magnetic resonance imaging, computed tomography, ultrasound, and X-ray, were used in a series of tests. The suggested method's compressing performance is proven by comparing it to well-known approaches in terms of mean square error, peak signal to noise ratio, and mean structural similarity index at various compression ratios. The findings showed that the proposed approach provided effective compression with high decompression image quality.
22

Anandita, Ida Bagus Gede, I. Gede Aris Gunadi, and Gede Indrawan. "Analisis Kinerja Dan Kualitas Hasil Kompresi Pada Citra Medis Sinar-X Menggunakan Algoritma Huffman, Lempel Ziv Welch Dan Run Length Encoding" [Performance and Quality Analysis of Compression Results on X-ray Medical Images Using the Huffman, Lempel Ziv Welch and Run Length Encoding Algorithms]. SINTECH (Science and Information Technology) Journal 1, no. 1 (February 9, 2018): 7–15. http://dx.doi.org/10.31598/sintechjournal.v1i1.179.

Abstract:
Technological progress in the medical area has meant that medical images such as X-rays are stored in digital files. Medical image files are relatively large, so the images need to be compressed. Lossless compression is image compression where the decompression result is identical to the original, i.e., no information is lost in the compression process. Existing algorithms for lossless compression include Run Length Encoding (RLE), Huffman, and Lempel Ziv Welch (LZW). This study compared the performance of the three algorithms in compressing medical images. The decompression results were compared objectively on ratio, compression time, MSE (Mean Square Error) and PSNR (Peak Signal to Noise Ratio). MSE and PSNR are used for quantitative image quality measurement; for the subjective assessment, three experts compared the original images with the decompressed images. Based on the objective assessment of compression performance, the RLE algorithm showed the best performance, yielding ratio, time, MSE and PSNR of 86.92%, 3.11 ms, 0 and 0 dB respectively. For Huffman, the results were 12.26%, 96.94 ms, 0, and 0 dB respectively, while the LZW results were -63.79%, 160 ms, 0.3 and 58.955 dB. For the subjective assessment, the experts agreed that all images could be analyzed well.
23

Hatem, Hiyam, Raed Majeed, and Jumana Waleed. "A singular value decomposition based approach for the compression of encrypted images." International Journal of Engineering & Technology 7, no. 3 (July 8, 2018): 1332. http://dx.doi.org/10.14419/ijet.v7i3.12707.

Abstract:
Image compression is a process which supplies a good solution to current data storage problems by reducing redundancy and irrelevance within images. This paper provides an effective encryption-then-compression technique, with compression applied entirely in the encrypted domain. The application of Singular Value Decomposition (SVD) is described for compressing an image encrypted using the discrete wavelet transform (DWT). Initially, the original image is decomposed into a wavelet pyramid using the DWT. The DWT subbands are enciphered via pseudo-random numbers and a pseudo-random permutation. The encrypted images are then compressed by the SVD method, which retains the corresponding singular values and singular vectors. The performance is evaluated on several images, and the experimental results and security evaluation are given to validate the stated goals of high security and good compression performance.
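The SVD compression step, taken on its own and independent of the encryption wrapper, is the usual rank-k truncation; a minimal sketch:

```python
import numpy as np

def svd_compress(gray: np.ndarray, k: int) -> np.ndarray:
    """Rank-k approximation: keep only the k largest singular values/vectors.

    Storage drops from m*n values to k*(m + n + 1), which is the source
    of the compression when k is small.
    """
    u, s, vt = np.linalg.svd(gray.astype(np.float64), full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]
```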
24

Bao, Xuecai, Chen Ye, Longzhe Han, and Xiaohua Xu. "Image Compression for Wireless Sensor Network: A Model Segmentation-Based Compressive Autoencoder." Wireless Communications and Mobile Computing 2023 (October 25, 2023): 1–12. http://dx.doi.org/10.1155/2023/8466088.

Abstract:
Aiming at the problems of image quality, compression performance, and transmission efficiency of image compression in wireless sensor networks (WSN), a model segmentation-based compressive autoencoder (MS-CAE) is proposed. In the proposed algorithm, we first divide each image in the dataset into pixel blocks and design a novel deep image compression network with a compressive autoencoder to form a compressed feature map by encoding the pixel blocks. Then, the reconstructed image is obtained by using the quantized coefficients of the quantizer and splicing the decoded feature maps in order. Finally, the deep network model is segmented into two parts: the encoding network and the decoding network. The weight parameters of the encoding network are deployed to the edge device for image compression in the sensor network, while, for high-quality reconstructed images, the weight parameters of the decoding network are deployed to the cloud system. Experimental results demonstrate that the proposed MS-CAE obtains a high peak signal-to-noise ratio (PSNR) for image details, and that its compression ratio at the same bit per pixel (bpp) is significantly higher than that of the compared image compression algorithms. They also indicate that the MS-CAE not only greatly relieves the pressure on the hardware system in the sensor network but also effectively improves image transmission efficiency and solves the deployment problem of image monitoring in remote and energy-poor areas.
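The encoder/decoder split at the heart of MS-CAE can be illustrated with a toy convolutional autoencoder in PyTorch; the layer sizes and the 32 × 32 block are illustrative assumptions, and the quantizer between the two halves is omitted:

```python
import torch
import torch.nn as nn

# Encoder runs on the sensor-edge device, decoder in the cloud (the model split).
encoder = nn.Sequential(                       # 1x32x32 pixel block -> compact code
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 8, 3, stride=2, padding=1),  # -> 8 channels of 8x8 features
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(8, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
)

block = torch.rand(1, 1, 32, 32)
code = encoder(block)       # transmit (after quantization) over the WSN
restored = decoder(code)    # reconstruct in the cloud
assert restored.shape == block.shape
```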
25

Singh Samra, Hardeep. "Image Compression Techniques." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 2, no. 2 (April 30, 2012): 49–52. http://dx.doi.org/10.24297/ijct.v2i1.2616.

Abstract:
Digital images require a large number of bits to represent them and, in their canonical representation, generally contain a significant amount of redundancy. Image compression techniques reduce the number of bits required to represent an image by taking advantage of these redundancies. Several image compression techniques that exploit this redundancy are discussed in this paper, along with their benefits.
26

Wen, Cathlyn Y., and Robert J. Beaton. "Subjective Image Quality Evaluation of Image Compression Techniques." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 40, no. 23 (October 1996): 1188–92. http://dx.doi.org/10.1177/154193129604002309.

Abstract:
Image compression reduces the amount of data in digital images and, therefore, allows efficient storage, processing, and transmission of pictorial information. However, compression algorithms can degrade image quality by introducing artifacts, which may be unacceptable for users' tasks. This work examined the subjective effects of JPEG and wavelet compression algorithms on a series of medical images. Six digitized chest images were processed by each algorithm at various compression levels. Twelve radiologists rated the perceived image quality of the compressed images relative to the corresponding uncompressed images, as well as rated the acceptability of the compressed images for diagnostic purposes. The results indicate that subjective image quality and acceptability decreased with increasing compression levels; however, all images remained acceptable for diagnostic purposes. At high compression ratios, JPEG compressed images were judged less acceptable for diagnostic purposes than the wavelet compressed images. These results contribute to emerging system design guidelines for digital imaging workstations.
27

Kaur, Harjit. "Image Compression Techniques with LZW method." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 1773–77. http://dx.doi.org/10.22214/ijraset.2022.39999.

Abstract:
Image compression is a technique used to reduce the size of data. In other words, it removes extra data from an image by applying techniques that make the data easier to store and to transmit over a transmission medium. Compression techniques are broadly divided into two categories: lossy compression, in which some of the data is lost during compression, and lossless compression, in which no data is lost. These compression techniques can be applied to different image formats. This review paper compares the different compression techniques. Keywords: lossy, lossless, image formats, compression techniques.
28

Gashnikov, M. "Statistical encoding for image compression based on hierarchical grid interpolation." Computer Optics 41, no. 6 (2017): 905–12. http://dx.doi.org/10.18287/2412-6179-2017-41-6-905-912.

Abstract:
Algorithms of statistical encoding for image compression are investigated. An approach is proposed to increase the efficiency of variable-length codes when compressing images with losses. An algorithm of statistical encoding is developed for use as part of image compression methods that encode a de-correlated signal with an uneven probability distribution. An experimental comparison of the proposed algorithm with the algorithms ZIP and ARJ is performed while encoding the specific data of the hierarchical compression method. In addition, an experimental comparison of the hierarchical method of image compression, including the developed coding algorithm, with the JPEG method and the method based on the wavelet transform is carried out.
29

Anitha, C. Yoga. "Performance Evaluation of Hybrid Method for Securing and Compressing Images." International Journal of Computing Algorithm 9, no. 1 (2020): 1–9. http://dx.doi.org/10.20894/ijcoa.101.009.001.001.

Abstract:
Security is a most important field of research for sending and receiving data secretly over a network. Cryptography is a method for securing data such as images, audio, video and text against hacking, with encryption and decryption being the two methods used to secure the data, while image compression techniques are used to reduce the size of an image for effective data communication. A variety of algorithms have been proposed in the literature for securing images using encryption/decryption techniques and for reducing the size of images using compression techniques, but these techniques still need improvement to overcome their issues, challenges and limitations. Hence, this research work proposes a hybrid method which combines securing an image using RSA, Hill cipher and 2-bit rotation with compression using a lossless compression algorithm. The method is compared with the execution time of the existing method. It secures the image and reduces its size for data communication over the internet, and is suitable for various applications that use images, such as remote sensing, medical and spatio-temporal applications.
30

Sawaneh, Ibrahim Abdulai. "DWT Based Image Compression for Health Systems." Journal of Advance Research in Medical & Health Science (ISSN: 2208-2425) 4, no. 9 (September 30, 2018): 01–67. http://dx.doi.org/10.53555/nnmhs.v4i9.603.

Abstract:
There are calls for enhancing present healthcare sectors when it comes to handling the huge data size of patients' records. These huge files contain many duplicate copies, and this is where compression comes into play. Image data compression removes redundant copies (multiple unnecessary copies) that increase storage space and transmission bandwidth. Image data compression is pivotal as it helps reduce image file size and speeds up file transmission over the internet through multiple wavelet analysis methods without loss in the transmitted medical image data. This report therefore presents a data compression implementation for healthcare systems using a proposed scheme of discrete wavelet transform (DWT), Fourier transform (FT) and fast Fourier transform capable of compressing and recovering medical image data without data loss. Healthcare images such as those of the human heart and brain need fast transmission for reliable and efficient results. Using the DWT, which has optimal reconstruction quality, greatly improves compression. Enabling innovations in communication technologies with big data for health monitoring is achievable through effective data compression techniques. Our experimental implementation shows that using the Haar wavelet with parametric determination of MSE and PSNR achieves our aims. Several imaging techniques, such as image compression and image de-noising, were also deployed to further ascertain the DWT method's efficiency. The proposed compression of medical images performed excellently. It is essential to reduce the size of data sets by employing compression procedures to shrink storage space, reduce transmission rates, and limit massive energy usage in health monitoring systems. The motivation for this work was to implement a compression method that modifies the traditional healthcare platform to lower file sizes and reduce the cost of operation. Image compression aims at reconstructing images from substantially fewer measurements than were previously thought necessary in terms of non-zero coefficients; rationally, fewer well-chosen measurements are adequate to reproduce the sample close to the source image. We use the DWT to implement our compression method.
31

Al-Saleem, Riyadh M., Yasameen A. Ghani, and Shihab A. Shawkat. "Improvement of Image Compression by Changing the Mathematical Equation Style in Communication Systems." International Journal of Digital Multimedia Broadcasting 2022 (November 4, 2022): 1–7. http://dx.doi.org/10.1155/2022/3231533.

Abstract:
Compression is an essential process to reduce the amount of information by reducing the number of bits; it is necessary for uploading images, audio and video, for storage services, and for TV transmission. In this paper, lossy image compression is examined for some common patterns. The compression process uses different mathematical equations that have different methods and efficiencies, so some common mathematical methods for each style are presented, taking into consideration the pros and cons of each method. It is demonstrated that there is a quality improvement when applying anisotropic interpolation to edge enhancement, for its ability to satisfy the dispersed data of the propagation process, which leads to faster compression due to concern for optimum quality rather than fast algorithms. The test images for these patterns showed a discrepancy in image resolution when the compression coefficient was increased, and the results using three types of image compression methods proved a clear superiority when using partial differential equations (PDE).
32

Kovalenko, Bogdan, Volodymyr Rebrov, and Volodymyr Lukin. "Analysis of the potential efficiency of post-filtering noisy images after lossy compression." Ukrainian journal of remote sensing 10, no. 1 (April 3, 2023): 11–16. http://dx.doi.org/10.36023/ujrs.2023.10.1.231.

Abstract:
An increase in the number of images and their average size is the general trend nowadays. This increase leads to certain problems with data storage and transfer via communication lines. A common way to solve this problem is to apply lossy compression, which provides sufficiently larger compression ratios compared to lossless compression approaches. However, lossy compression has several peculiarities, especially if a compressed image is corrupted by quite intensive noise. First, a specific noise-filtering effect is observed. Second, an optimal operational point (OOP) might exist where the quality of a compressed image is closer to the corresponding noise-free image than the quality of the original image according to a chosen quality metric. In this case, it is worth compressing the image at the OOP or in its closest neighborhood. These peculiarities have been studied earlier and their positive impact on image quality improvement has been demonstrated. Filtering of noisy images by lossy compression is not perfect, so it is worth checking whether additional quality improvement can be reached using post-filtering. In this study, we attempt to answer the questions: is it worth post-filtering an image after lossy compression, especially in the OOP's neighborhood, and what benefit can it bring in the sense of image quality? The study is carried out for the better portable graphics (BPG) coder and the DCT-based filter, focusing mainly on one-component (grayscale) images. The quality of images is characterized by several metrics such as PSNR, PSNR-HVS-M, and FSIM. Possible image quality increases via post-filtering are demonstrated, and recommendations for filter parameter settings are given.
33

Naumenko, Victoriia, Bogdan Kovalenko, and Volodymyr Lukin. "BPG-based compression analysis of Poisson-noisy medical images." Radioelectronic and Computer Systems, no. 3 (September 29, 2023): 91–100. http://dx.doi.org/10.32620/reks.2023.3.08.

Abstract:
The subject matter is lossy compression using the BPG encoder for medical images with varying levels of visual complexity, which are corrupted by Poisson noise. The goal of this study is to determine the optimal parameters for image compression and select the most suitable metric for identifying the optimal operational point. The tasks addressed include: selecting test images sized 512x512 in grayscale with varying degrees of visual complexity, encompassing visually intricate images rich in edges and textures, moderately complex images with edges and textures adjacent to homogeneous regions, and visually simple images primarily composed of homogeneous regions; establishing image quality evaluation metrics and assessing their performance across different encoder compression parameters; choosing one or multiple metrics that distinctly identify the position of the optimal operational point; and providing recommendations based on the attained results regarding the compression of medical images corrupted by Poisson noise using a BPG encoder, with the aim of maximizing the restored image’s quality resemblance to the original. The employed methods encompass image quality assessment techniques employing MSE, PSNR, MSSIM, and PSNR-HVS-M metrics, as well as software modeling in Python without using the built-in Poisson noise generator. The ensuing results indicate that optimal operational points (OOP) can be discerned for all these metrics when the compressed image quality surpasses that of the corresponding original image, accompanied by a sufficiently high compression ratio. Moreover, striking a suitable balance between the compression ratio and image quality leads to partial noise reduction without introducing notable distortions in the compressed image. This study underscores the significance of employing appropriate metrics for evaluating the quality of compressed medical images and provides insights into determining the compression parameter Q to attain the BPG encoder’s optimal operational point for specific images. Conclusions. The scientific novelty of the findings encompasses the following: 1) the capability of all metrics to determine the OOP for images of moderate visual complexity or those dominated by homogeneous areas; MSE and PSNR metrics demonstrating superior results for images rich in textures and edges; 2) the research highlights the dependency of Q in the OOP on the average image intensity, which can be reasonably established for a given image earmarked for compression based on our outcomes. The compression ratios for images compressed at the OOP are sufficiently high, further substantiating the rationale for compressing images in close proximity to the OOP.
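The paper notes that the Poisson noise was modeled in Python without a built-in generator; one textbook way to do that is Knuth's product-of-uniforms sampler, sketched below (slow and purely illustrative, and not necessarily the authors' implementation):

```python
import math
import random

def poisson_knuth(lam: float) -> int:
    """Draw one Poisson sample via Knuth's product-of-uniforms method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def add_poisson_noise(gray):
    """Each noisy pixel is Poisson-distributed with mean equal to the clean
    intensity, so the noise variance grows with brightness (signal-dependent)."""
    return [[poisson_knuth(float(v)) for v in row] for row in gray]
```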
34

Ma, Shaowen. "Comparison of image compression techniques using Huffman and Lempel-Ziv-Welch algorithms." Applied and Computational Engineering 5, no. 1 (June 14, 2023): 793–801. http://dx.doi.org/10.54254/2755-2721/5/20230705.

Abstract:
Image compression technology is very popular in the field of image analysis because the compressed image is convenient for storage and transmission. In this paper, the Huffman algorithm and Lempel-Ziv-Welch (LZW) algorithm are introduced. They are widely used in the field of image compression, and the compressed image results of the two algorithms are calculated and compared. Based on the four dimensions of Compression Ratio (CR), Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Bits Per Pixel (BPP), the applicable conditions of the two algorithms in compressing small image files are analysed. The results illustrated that when the source image files are less than 300kb, the Compression Ratio (CR) of Huffman algorithm was better than that of LZW algorithm. However, for Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Bits Per Pixel (BPP), which are used to represent the compressed images qualities, LZW algorithm gave more satisfactory results.
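The Huffman side of such a comparison reduces to building the code and measuring bits per pixel; a compact min-heap sketch (the BPP accounting ignores the header/table overhead a real coder would also store):

```python
import heapq
from collections import Counter

def huffman_lengths(data: bytes) -> dict[int, int]:
    """Code length in bits per symbol, from a Huffman tree built with a min-heap."""
    heap = [[freq, [sym, 0]] for sym, freq in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:] + hi[1:]:
            pair[1] += 1                      # every merge pushes symbols one level deeper
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: max(bits, 1) for sym, bits in heap[0][1:]}

def bits_per_pixel(data: bytes) -> float:
    lengths = huffman_lengths(data)
    return sum(lengths[b] for b in data) / len(data)  # compare against 8 bpp raw
```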
35

Mahdi, Reyadh, Faisal Ghazi Abdiwi, and Abd Abrahim Mosslah. "Enhanced Compression Medical Image Speed for Computed Radiography." Wasit Journal for Pure sciences 3, no. 2 (June 30, 2024): 146–49. http://dx.doi.org/10.31185/wjps.298.

Abstract:
In recent years, there has been a significant surge in the volume of medical imaging data. This surge poses challenges for the functioning of PACS communication systems and image archiving. The most effective solution to address this issue involves compressing images through digital encryption, which optimally utilizes storage space. This process involves reformatting the imaging data by reducing redundancy, leading to image compression. While this reduction in redundancy is readily apparent in individual images, there is a vulnerability in these methods that tends to overlook a source of repetition present in similar stored images. To emphasize this common occurrence, we introduce the term "redundancy group." Similar images are frequently encountered within medical image databases, resulting in a considerable redundancy reduction. In this paper, our focus is on enhancing the control of redundancy extraction in the data used, specifically medical images. To enhance the compression efficiency of standard image compression, we employ improved methods, namely MinMax Predictive (MMP) and Min-Max Differential (MMD). Our experiments demonstrate that these methods lead to a substantial enhancement in brain CT compression, with potential improvements of up to 130% when using Huffman coding. Similar improvements are observed in arithmetic coding, with a 94% improvement compared to the number-arithmetic code and a 37% improvement compared to the Lempel-Ziv compression. These improvements occur when combining the MMP technology with the MMD technology, utilizing inverse operations that result in lossless compression.
36

Cahya Dewi, Dewa Ayu Indah, and I. Made Oka Widyantara. "Usage analysis of SVD, DWT and JPEG compression methods for image compression." Jurnal Ilmu Komputer 14, no. 2 (September 30, 2021): 99. http://dx.doi.org/10.24843/jik.2021.v14.i02.p04.

Abstract:
Image compression can save bandwidth on telecommunication networks, accelerate image file transmission and save memory in image file storage, so techniques to reduce image size through compression are needed. Image compression is one of the image processing techniques performed on digital images with the aim of reducing the redundancy of the data contained in the image so that it can be stored or transmitted efficiently. This research analyzed the results of image compression and measured the error level of the compressed images. The analysis covers JPEG, DWT and SVD compression methods on various types of images. The compression results are measured using the MSE and PSNR methods, while the percentage level of compression is determined using the compression ratio. The average ratio for JPEG compression was 0.08605, a compression rate of 91.39%. The average compression ratio for the DWT method was 0.133090833, a compression rate of 86.69%. The average compression ratio of the SVD method was 0.101938833, a compression rate of 89.80%.
37

Zhou, Xichuan, Lang Xu, Shujun Liu, Yingcheng Lin, Lei Zhang, and Cheng Zhuo. "An Efficient Compressive Convolutional Network for Unified Object Detection and Image Compression." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5949–56. http://dx.doi.org/10.1609/aaai.v33i01.33015949.

Abstract:
This paper addresses the challenge of designing efficient framework for real-time object detection and image compression. The proposed Compressive Convolutional Network (CCN) is basically a compressive-sensing-enabled convolutional neural network. Instead of designing different components for compressive sensing and object detection, the CCN optimizes and reuses the convolution operation for recoverable data embedding and image compression. Technically, the incoherence condition, which is the sufficient condition for recoverable data embedding, is incorporated in the first convolutional layer of the CCN model as regularization; Therefore, the CCN convolution kernels learned by training over the VOC and COCO image set can be used for data embedding and image compression. By reusing the convolution operation, no extra computational overhead is required for image compression. As a result, the CCN is 3.1 to 5.0 fold more efficient than the conventional approaches. In our experiments, the CCN achieved 78.1 mAP for object detection and 3.0 dB to 5.2 dB higher PSNR for image compression than the examined compressive sensing approaches.
APA, Harvard, Vancouver, ISO, and other styles
38

Haque, Ershadul, Manoranjan Paul, Faranak Tohidi, and Anwaar Ulhaq. "An Overview of Quantum Circuit Design Focusing on Compression and Representation." Electronics 14, no. 1 (December 27, 2024): 72. https://doi.org/10.3390/electronics14010072.

Full text
Abstract:
Quantum image computing has attracted attention due to its vast storage capacity and faster image data processing, leveraging unique properties such as parallelism, superposition, and entanglement, surpassing classical computers. Although classical computing power has grown substantially over the last decade, its rate of improvement has slowed, struggling to meet the demands of massive datasets. Several approaches have emerged for encoding and compressing classical images on quantum processors. However, a significant limitation is the complexity of preparing the quantum state, which translates pixel coordinates into corresponding quantum circuits. Current approaches for representing large-scale images require more quantum resources, such as qubits and connection gates, presenting significant hurdles. This article surveys pixel-intensity and state-preparation circuits that require fewer quantum resources and explores effective compression techniques for medium- and high-resolution images. It also conducts a comprehensive study of quantum image representation and compression techniques, categorizing methods by grayscale and color image types and evaluating their strengths and weaknesses. Moreover, the efficacy of each model's compression can guide future research toward efficient circuit designs for medium- to high-resolution images. Furthermore, it is a valuable reference for advancing quantum image processing research by providing a systematic framework for evaluating quantum image compression and representation algorithms.
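To make the storage claim concrete: in a widely used representation such as NEQR (one of the schemes surveys like this cover), a 2^n × 2^n grayscale image needs 2n position qubits plus one qubit per intensity bit. A small Python sketch of that accounting (function name illustrative):

```python
import math

def neqr_qubit_count(width, height, bit_depth=8):
    # Position register: ceil(log2 W) + ceil(log2 H) qubits;
    # intensity register: one qubit per bit of grey level.
    pos = math.ceil(math.log2(width)) + math.ceil(math.log2(height))
    return pos + bit_depth

# A 1024x1024 8-bit image needs only 10 + 10 + 8 = 28 qubits,
# though the state-preparation circuit depth remains the bottleneck.
```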
APA, Harvard, Vancouver, ISO, and other styles
39

Syuhada, Ibnu. "Implementasi Algoritma Arithmetic Coding dan Sannon-Fano Pada Kompresi Citra PNG." TIN: Terapan Informatika Nusantara 2, no. 9 (February 25, 2022): 527–32. http://dx.doi.org/10.47065/tin.v2i9.1027.

Full text
Abstract:
The rapid development of technology plays an important role in the fast exchange of information. Sending information in the form of images still poses problems, chief among them the large size of image files, so compression is the natural solution. This paper implements and compares the performance of the Arithmetic Coding and Shannon-Fano algorithms by measuring compression ratio, compressed file size, and compression and decompression speed. Across all tests, the Arithmetic Coding algorithm produced an average compression ratio of 62.88% against Shannon-Fano's 61.73%; Arithmetic Coding compressed an image in 0.072449 seconds on average against Shannon-Fano's 0.077838 seconds. Shannon-Fano averaged 0.028946 seconds for decompression against 0.034169 seconds for Arithmetic Coding. The decompressed images from both algorithms match the originals. It can be concluded from the test results that Arithmetic Coding is more efficient at compressing *.png images than Shannon-Fano, although Shannon-Fano is slightly faster at decompression.
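As background for the comparison, Shannon-Fano builds its codes top-down by repeatedly splitting the frequency-sorted symbol list into two halves of roughly equal weight. A minimal Python sketch (illustrative, not the paper's implementation):

```python
from collections import Counter

def shannon_fano(symbols):
    # Sort symbols by descending frequency, then recursively split the
    # list where the cumulative weight first reaches half the total,
    # appending '0' to one half's codes and '1' to the other's.
    freq = sorted(Counter(symbols).items(), key=lambda kv: -kv[1])
    codes = {}

    def split(items, prefix):
        if len(items) == 1:
            codes[items[0][0]] = prefix or "0"
            return
        total, acc, cut = sum(w for _, w in items), 0, 1
        for i, (_, w) in enumerate(items):
            acc += w
            if acc >= total / 2:
                cut = i + 1
                break
        split(items[:cut], prefix + "0")
        split(items[cut:], prefix + "1")

    split(freq, "")
    return codes

# codes = shannon_fano(open("image.png", "rb").read())
```

Because Shannon-Fano's greedy split is not always optimal, while arithmetic coding approaches the entropy limit, the slightly higher ratio reported for Arithmetic Coding is the expected outcome.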
APA, Harvard, Vancouver, ISO, and other styles
40

Dubey, R. B., and Parul. "Visually Lossless JPEG2000 Image Compression." Indian Journal of Applied Research 3, no. 9 (September 2013): 211–16. http://dx.doi.org/10.15373/2249555x/sept2013/66.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Gunasheela, Keragodu Shivanna, and Shreenivasamurthy Prasantha Haranahalli. "Two-dimensional satellite image compression using compressive sensing." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 1 (February 1, 2022): 311–19. https://doi.org/10.11591/ijece.v12i1.pp311-319.

Full text
Abstract:
Compressive sensing is receiving a lot of attention from the image processing research community as a promising technique for image recovery from very few samples. It is very useful in applications where acquiring many samples is not feasible, and especially in satellite imaging, since it drastically reduces the number of input samples and thereby the storage and communication bandwidth required to store and transmit the data to the ground station. In this paper, an interior-point method is used to recover the entire satellite image from compressive sensing samples. The compression results obtained are compared with those of conventional satellite image compression algorithms. The results demonstrate both higher reconstruction accuracy and a higher compression rate for the compressive sensing-based technique.
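The recovery step described here is typically posed as basis pursuit, min ||x||_1 subject to Ax = y, which can be recast as a linear program and handed to an interior-point (or other LP) solver. A hedged SciPy sketch under that assumption; the paper's own solver and measurement matrix are not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    # min ||x||_1  s.t.  A x = y, rewritten over variables [x, t]
    # with the constraints -t <= x <= t and t >= 0.
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(t)
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])            # |x| <= t
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # A x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n,
                  method="highs")
    return res.x[:n]
```

For image-scale problems a dense LP like this is only a toy; practical interior-point recovery exploits fast transforms instead of explicit matrices.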
APA, Harvard, Vancouver, ISO, and other styles
42

Mateika, Darius, and Romanas Martavicius. "ANALYSIS OF THE COMPRESSION RATIO AND QUALITY IN AERIAL IMAGES." Aviation 11, no. 4 (December 31, 2007): 24–28. http://dx.doi.org/10.3846/16487788.2007.9635973.

Full text
Abstract:
In modern photomap systems, images are stored in centralized storage, so choosing a proper compression format for archiving aerial images is an important problem. This paper analyses aerial image compression in popular compression formats. For the comparison of formats, an image quality evaluation algorithm based on the mean exponent error value is proposed, and an image quality evaluation experiment is presented. The distribution of errors in aerial images is analysed and the causes of worse-than-usual compression performance are explained. An integrated solution to the aerial image compression problem is proposed, and the compression format most suitable for aerial images is identified.
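The format-comparison side of such a study can be reproduced in a few lines with Pillow by encoding the same tile in each candidate format in memory and comparing byte counts; a small sketch under that assumption (file name illustrative):

```python
import io
from PIL import Image

def encoded_size(img, fmt, **save_kwargs):
    # Encode in memory and return the byte count for the given format.
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format=fmt, **save_kwargs)
    return buf.tell()

# img = Image.open("aerial_tile.png")
# for fmt in ("JPEG", "PNG", "TIFF"):
#     print(fmt, encoded_size(img, fmt))
```

A quality metric such as the paper's mean exponent error would then be computed between the original and each decoded result to weigh size against fidelity.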
APA, Harvard, Vancouver, ISO, and other styles
43

Mohamed, Basma A., and Heba M. Afify. "MAMMOGRAM COMPRESSION TECHNIQUES USING HAAR WAVELET AND QUADTREE DECOMPOSITION-BASED IMAGE ENHANCEMENT." Biomedical Engineering: Applications, Basis and Communications 29, no. 05 (October 2017): 1750038. http://dx.doi.org/10.4015/s1016237217500387.

Full text
Abstract:
Biomedical image compression plays an important role in the medical field. Mammograms are medical images used in the early detection of breast cancer. Mammogram compression is a challenging task because these images occupy a huge amount of storage. The aim of image compression is to reduce the image size and the time taken to recover the original image without loss. In this paper, two different mammogram compression techniques are introduced. The proposed algorithm has two main steps: a preprocessing step to enhance the image, followed by a compression step applied to the enhanced image. The algorithm is tested on 322 mammogram images from the online MIAS database. Three parameters are used to evaluate the performance of the compression techniques: compression ratio (CR), peak signal-to-noise ratio (PSNR), and processing time. According to the results, Haar wavelet-based compression of the enhanced images performs better, with a CR of 26.25% and a PSNR of 47.27 dB.
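A generic Haar-wavelet compression step of the kind evaluated here (decompose, keep only the largest coefficients, reconstruct) can be sketched with PyWavelets; this is a stand-in illustration, not the paper's pipeline, which also includes quadtree-based enhancement:

```python
import numpy as np
import pywt

def haar_compress(img, keep=0.05):
    # 2-D Haar decomposition; zero all but the largest `keep` fraction
    # of coefficients by magnitude, then reconstruct the image.
    coeffs = pywt.wavedec2(img.astype(float), "haar", level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(kept, "haar")
```

The sparse coefficient array (here 95% zeros) is what an entropy coder then stores compactly; PSNR against the original quantifies the loss.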
APA, Harvard, Vancouver, ISO, and other styles
44

Malevé, Nicolas. "Lost in Compression." Media Theory 8, no. 1 (June 11, 2024): 205–28. http://dx.doi.org/10.70064/mt.v8i1.1074.

Full text
Abstract:
Recent developments of image generators have introduced a new point of contention in the already contested field of artificial intelligence: the ownership of images. In 2023 Getty Images sued the company Stability AI, accusing it of illegally appropriating photographs for the purpose of training its models. Analysing image generators and stock agencies as probabilistic systems, this text argues that their significant difference lies in their model of appropriation. Where Stability AI proceeds through direct appropriation, the stock agency proceeds through contractual appropriation using its dominant position in the market. The article discusses Getty Images’ release of an image generator trained on its own image collection and critically reflects on the stock agency’s attempt to insert its contractual engine in the core of the generative AI technology, making copyright the regulating principle of the relations of ownership of the system and the principle that constrains the range of images it can produce.
APA, Harvard, Vancouver, ISO, and other styles
45

Avinash, Gopal B. "Image compression and data integrity in confocal microscopy." Proceedings, annual meeting, Electron Microscopy Society of America 51 (August 1, 1993): 206–7. http://dx.doi.org/10.1017/s0424820100146874.

Full text
Abstract:
In confocal microscopy, one method of managing large data volumes is to store the data in compressed form using image compression algorithms. These algorithms can be either lossless or lossy. Lossless algorithms compress images without losing any information, with modest compression ratios (memory for the original / memory for the compressed) that usually lie between 1 and 2 for typical confocal 2-D images. Lossy algorithms can provide higher compression ratios (3 to 8) at the expense of information content. The main purpose of this study is to empirically demonstrate the use of lossy compression techniques on images obtained from a confocal microscope while retaining qualitative and quantitative image integrity under certain criteria. A fluorescent pollen specimen was imaged using ODYSSEY, a real-time laser scanning confocal microscope from NORAN Instruments, Inc. The images (128 by 128) consisted of a single frame (scanned in 33 ms), a 4-frame average, a 64-frame average, and an edge-preserving smoothed version of the single frame.
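The lossless ratio quoted above (original memory over compressed memory) is easy to measure with a generic coder standing in for whatever algorithm the archive uses; a minimal sketch, assuming 8-bit image data:

```python
import zlib
import numpy as np

def lossless_ratio(img):
    # Compression ratio in the abstract's convention:
    # original bytes / compressed bytes, using zlib as a
    # generic lossless coder.
    raw = np.asarray(img, dtype=np.uint8).tobytes()
    return len(raw) / len(zlib.compress(raw, 9))
```

Frame averaging suppresses acquisition noise, so one would expect the 64-frame average to compress losslessly better than the noisy single frame, which is part of why the study compares these variants.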
APA, Harvard, Vancouver, ISO, and other styles
46

Chaturvedi, Soumya. "Different Type of Image Compression using Various techniques, Highlighting Segmentation based image Compression." International Journal for Research in Applied Science and Engineering Technology 10, no. 2 (February 28, 2022): 171–77. http://dx.doi.org/10.22214/ijraset.2022.40207.

Full text
Abstract:
Abstract: Image compression (IC) plays an important part in digital image processing (DIP) and is essential for the efficient transmission and storage of images. Image compression reduces the size of an image without unacceptable degradation of its quality; it is the application of data compression to digital images. The objective is to reduce the redundancy of the image data so that the information can be stored or transmitted in an efficient form. This paper reviews the types of images and their compression strategies. An image in its raw form carries a large amount of data, which not only demands a large amount of memory for storage but also makes transmission over a limited-bandwidth channel difficult. Image compression is therefore a critical factor for image storage and transmission over any exchange medium, making file sizes practicable, storable, and communicable. Keywords: image compression; segmentation-based image compression; lossless compression; lossy compression; techniques.
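As a minimal illustration of the lossless side of the lossless/lossy divide this survey covers, run-length encoding removes redundancy while keeping the data exactly recoverable; a short illustrative sketch:

```python
def rle_encode(row):
    # Run-length encoding: collapse repeated values into
    # (value, run_length) pairs. Lossless, because the original
    # sequence can be reproduced exactly from the pairs.
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decode(runs):
    # Exact inverse of rle_encode.
    return [v for v, n in runs for _ in range(n)]
```

Lossy schemes, by contrast, discard information (quantized transform coefficients, for example) and trade exact recovery for much higher compression.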
APA, Harvard, Vancouver, ISO, and other styles
47

AYENI, Ayobami Gabriel, Sunday, AGHOLOR, and Godwin Oluseyi ODULAJA. "PERFORMANCE EVALUATION OF LOSSLESS COMPRESSION FOR IMAGES IN MOBILE RESPONSIVE WEB USING PARTICIPANTS’ OBSERVATION AND THURSTON RATING." Lagos Journal of Contemporary Studies in Education 2, no. 01 (July 30, 2024): 24–32. https://doi.org/10.36349/lajocse.2024.v02i01.003.

Full text
Abstract:
Compressing graphic content and multimedia elements such as logos, banners, and image data is undoubtedly necessary for a mobile responsive website. In image compression, redundant and/or irrelevant information is eliminated while the remainder is efficiently encoded. However, selecting a compression technique for images requires caution, so that non-redundant information and relevant fragments of image files are not discarded in the attempt to compress. This study investigates the functional performance and variation of a lossless method for compressing images on the mobile responsive web. Participant-based experimental observation was adopted for research instrumentation, using the Thurston rating scale to design a close-ended instrument for data collection. A simple clustering technique was used to select seventy-five (75) information technology and computing practitioners specialising in web development and graphic design from the Ogun East Senatorial District in Nigeria; fifty (50) of them were available as expert judges to rate the electronically administered instrument. The results show average mean values of 3.78, 3.46, and 3.05, validated with a decision rule in SPSS for the three research questions respectively, indicating distinct aesthetic effects and graphic quality of lossless compression, as well as notable improvements in web page size and browser loading time when lossless compression is used for all images on a mobile responsive website.
APA, Harvard, Vancouver, ISO, and other styles
48

Dr., Aziz Makandar Ms. Rekha Biradar. "Performance analysis of Medical Image Compression using DCT and FFT Transforms." LC International Journal of STEM 3, no. 4 (January 6, 2023): 51–60. https://doi.org/10.5281/zenodo.7607120.

Full text
Abstract:
There is high demand for image compression because it reduces storage and transmission costs and shortens computational time. Image compression involves reducing excessive and irrelevant data while maintaining reasonable image quality. The Discrete Cosine Transform (DCT) and Fast Fourier Transform (FFT) compression techniques are the focus of this study. These tools were selected because of their wide application in image processing; one example is JPEG (Joint Photographic Experts Group), which uses the DCT for compression. The two methods are implemented and compared in MATLAB. CT and MRI images are used in the experiments, and the quality of the reconstructed images is assessed with several parameters. For the DCT a filter mask is applied, while for the FFT a threshold is used to keep the largest coefficient values. The experimental findings are compared and evaluated in terms of Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR).
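The FFT thresholding step described here (keep only the largest-magnitude coefficients, zero the rest, invert) translates directly into a few lines of NumPy; a hedged sketch rather than the study's MATLAB code:

```python
import numpy as np

def fft_compress(img, keep=0.1):
    # Keep the `keep` fraction of largest-magnitude FFT coefficients
    # and zero the rest, then invert the transform.
    F = np.fft.fft2(img.astype(float))
    thresh = np.quantile(np.abs(F), 1.0 - keep)
    F[np.abs(F) < thresh] = 0.0
    return np.real(np.fft.ifft2(F))
```

The DCT variant follows the same pattern with scipy.fft.dctn / idctn in place of the FFT pair (or, as in the study, with a filter mask that keeps a low-frequency block of coefficients).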
APA, Harvard, Vancouver, ISO, and other styles
49

Barrios, Yubal, Alfonso Rodríguez, Antonio Sánchez, Arturo Pérez, Sebastián López, Andrés Otero, Eduardo de la Torre, and Roberto Sarmiento. "Lossy Hyperspectral Image Compression on a Reconfigurable and Fault-Tolerant FPGA-Based Adaptive Computing Platform." Electronics 9, no. 10 (September 26, 2020): 1576. http://dx.doi.org/10.3390/electronics9101576.

Full text
Abstract:
This paper describes a novel hardware implementation of a lossy multispectral and hyperspectral image compressor for on-board operation in space missions. The compression algorithm is a lossy extension of the Consultative Committee for Space Data Systems (CCSDS) 123.0-B-1 lossless standard that includes a bit-rate control stage, which in turn manages the losses the compressor may introduce to achieve higher compression ratios without compromising the recovered image quality. The algorithm has been implemented using High-Level Synthesis (HLS) techniques to increase design productivity by raising the abstraction level. The proposed lossy compression solution is deployed onto ARTICo3, a dynamically reconfigurable multi-accelerator architecture, obtaining a run-time adaptive solution that enables user-selectable performance (i.e., load more hardware accelerators to transparently increase throughput), power consumption, and fault tolerance (i.e., group hardware accelerators to transparently enable hardware redundancy). The whole compression solution is tested on a Xilinx Zynq UltraScale+ Field-Programmable Gate Array (FPGA)-based MPSoC using different input images, from multispectral to ultraspectral. For images acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the proposed implementation renders an execution time of approximately 36 s when 8 accelerators are compressing concurrently at 100 MHz, which in turn uses around 20% of the LUTs and 17% of the dedicated memory blocks available in the target device. In this scenario, a speedup of 15.6× is obtained in comparison with a pure software version of the algorithm running in an ARM Cortex-A53 processor.
APA, Harvard, Vancouver, ISO, and other styles
50

Shivanna, Gunasheela Keragodu, and Haranahalli Shreenivasamurthy Prasantha. "Two-dimensional satellite image compression using compressive sensing." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 1 (February 1, 2022): 311. http://dx.doi.org/10.11591/ijece.v12i1.pp311-319.

Full text
Abstract:
Compressive sensing is receiving a lot of attention from the image processing research community as a promising technique for image recovery from very few samples. It is very useful in applications where acquiring many samples is not feasible, and especially in satellite imaging, since it drastically reduces the number of input samples and thereby the storage and communication bandwidth required to store and transmit the data to the ground station. In this paper, an interior-point method is used to recover the entire satellite image from compressive sensing samples. The compression results obtained are compared with those of conventional satellite image compression algorithms. The results demonstrate both higher reconstruction accuracy and a higher compression rate for the compressive sensing-based technique.
APA, Harvard, Vancouver, ISO, and other styles