
Journal articles on the topic 'Image coding standard'


Consult the top 50 journal articles for your research on the topic 'Image coding standard.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Dufaux, Frederic, Gary J. Sullivan, and Touradj Ebrahimi. "The JPEG XR image coding standard [Standards in a Nutshell]." IEEE Signal Processing Magazine 26, no. 6 (November 2009): 195–204. http://dx.doi.org/10.1109/msp.2009.934187.

2

Li, Ren Chong, Yi Long You, and Feng Xiang You. "Research of Image Processing Based on Lifting Wavelet Transform." Applied Mechanics and Materials 263-266 (December 2012): 2502–9. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2502.

Abstract:
This paper studies image processing problems based on the lifting wavelet transform. A complete digital image is coded and decoded using the W97-2 wavelet basis transform, combined with embedded zerotree wavelet coding and binary arithmetic coding, and lossless compression is carried out on the international standard test images. Experimental results show that combining wavelet analysis with image processing raises graphics and image processing to a higher level.
3

Götting, Detlef, Achim Ibenthal, and Rolf-Rainer Grigat. "Fractal Image Coding and Magnification Using Invariant Features." Fractals 05, supp01 (April 1997): 65–74. http://dx.doi.org/10.1142/s0218348x97000644.

Abstract:
Fractal image coding has significant potential for the compression of still and moving images and also for scaling up images. The objective of our investigations was twofold. First, compression ratios of factor 60 and more for still images have been achieved, yielding a better quality of the decoded picture material than standard methods like JPEG. Second, image enlargement up to factors of 16 per dimension has been realized by means of fractal zoom, leading to natural and sharp representation of the scaled image content. Quality improvements were achieved due to the introduction of an extended luminance transform. In order to reduce the computational complexity of the encoding process, a new class of simple and suited invariant features is proposed, facilitating the search in the multidimensional space spanned by image domains and affine transforms.
4

Tanaka, Midori, Tomoyuki Takanashi, and Takahiko Horiuchi. "Glossiness-aware Image Coding in JPEG Framework." Journal of Imaging Science and Technology 64, no. 5 (September 1, 2020): 50409–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2020.64.5.050409.

Abstract:
In images, the representation of glossiness, translucency, and roughness of material objects (Shitsukan) is essential for realistic image reproduction. To date, image coding has been developed considering various indices of the quality of the encoded image, for example, the peak signal-to-noise ratio. Consequently, image coding methods that preserve subjective impressions of qualities such as Shitsukan have not been studied. In this study, the authors focus on the property of glossiness and propose a method of glossiness-aware image coding. Their purpose is to develop an encoding algorithm that produces images that can be decoded by standard JPEG decoders, which are commonly used worldwide. The proposed method consists of three procedures: block classification, glossiness enhancement, and non-glossiness information reduction. In block classification, the types of glossiness in a target image are classified using block units. In glossiness enhancement, the glossiness in each type of block is emphasized to reduce the amount of degradation of glossiness during JPEG encoding. The third procedure, non-glossiness information reduction, further compresses the information while maintaining the glossiness by reducing the information in each block that does not represent the glossiness in the image. To test the effectiveness of the proposed method, the authors conducted a subjective evaluation experiment using paired comparison of images coded by the proposed method and JPEG images with the same data size. The glossiness was found to be better preserved in images coded by the proposed method than in the JPEG images.
5

Man, Hong, Alen Docef, and Faouzi Kossentini. "Performance Analysis of the JPEG 2000 Image Coding Standard." Multimedia Tools and Applications 26, no. 1 (May 2005): 27–57. http://dx.doi.org/10.1007/s11042-005-6848-5.

6

Tsang, Sik-Ho, Yui-Lam Chan, and Wei Kuang. "Standard compliant light field lenslet image coding model using enhanced screen content coding framework." Journal of Electronic Imaging 28, no. 05 (October 23, 2019): 1. http://dx.doi.org/10.1117/1.jei.28.5.053027.

7

Khaitu, Shree Ram, and Sanjeeb Prasad Panday. "Fractal Image Compression Using Canonical Huffman Coding." Journal of the Institute of Engineering 15, no. 1 (February 16, 2020): 91–105. http://dx.doi.org/10.3126/jie.v15i1.27718.

Abstract:
Image compression techniques have become a very important subject with the rapid growth of multimedia applications. The main motivations behind image compression are efficient, lossless transmission and storage of digital data. Image compression techniques are of two types: lossless and lossy. Lossy compression techniques are applied to natural images, where a minor loss of data is acceptable. Entropy encoding is a lossless compression scheme that is independent of the particular features of the media, as it has its own unique codes and symbols. Huffman coding is an entropy coding approach for efficient transmission of data. This paper highlights a fractal image compression method based on fractal features and on searching for the best replacement blocks for the original image. Canonical Huffman coding, which provides better fractal compression than arithmetic coding, is used in this paper. The results obtained show that the canonical-Huffman-based fractal compression technique increases compression speed and has better PSNR as well as a better compression ratio than standard Huffman coding.
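For readers unfamiliar with canonical Huffman coding, the minimal Python sketch below (a generic illustration with made-up symbol frequencies, not the implementation from the paper above) shows how canonical codewords are assigned from Huffman code lengths: symbols are sorted by (length, symbol) and consecutive binary values are issued, so a decoder only needs the length table.

    from collections import Counter
    import heapq

    def huffman_code_lengths(freqs):
        """Standard Huffman tree build; returns {symbol: code length}."""
        heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        counter = len(heap)
        if len(heap) == 1:                      # degenerate single-symbol source
            return {next(iter(freqs)): 1}
        while len(heap) > 1:
            f1, _, d1 = heapq.heappop(heap)
            f2, _, d2 = heapq.heappop(heap)
            merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
            heapq.heappush(heap, (f1 + f2, counter, merged))
            counter += 1
        return heap[0][2]

    def canonical_codes(lengths):
        """Assign canonical codewords: sort by (length, symbol), count upward."""
        code, prev_len, out = 0, 0, {}
        for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
            code <<= (length - prev_len)        # append zeros when the length grows
            out[sym] = format(code, f'0{length}b')
            code += 1
            prev_len = length
        return out

    freqs = Counter("abracadabra")              # hypothetical symbol frequencies
    print(canonical_codes(huffman_code_lengths(freqs)))

Because the codewords are fully determined by the sorted lengths, only the length table needs to be transmitted, which is what makes the canonical form attractive for fast decoding.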
8

Noll, Peter, and Davis Pan. "ISO/MPEG Audio Coding." International Journal of High Speed Electronics and Systems 08, no. 01 (March 1997): 69–118. http://dx.doi.org/10.1142/s0129156497000044.

Abstract:
The Moving Pictures Expert Group within the International Organization of Standardization (ISO/MPEG) has developed, and is presently developing, a series of audiovisual standards. Its audio coding standard MPEG Phase 1 is the first international standard in the field of high quality digital audio compression and has been applied in many areas, both for consumer and professional audio. Typical application areas for digital audio are in the fields of audio production, program distribution and exchange, digital sound broadcasting, digital storage, and various multimedia applications. This paper will describe in some detail the main features of MPEG Phase 1 coders. As a logical further step in digital audio a multichannel audio standard MPEG Phase 2 is being standardized to provide an improved stereophonic image for audio-only applications including teleconferencing and for improved television systems. The status of this standardization process will be covered briefly.
9

ZHANG, YAN, and HAI-MING GU. "CONTOUR BASED MULTI-ROI MULTI-QUALITY ROI CODING FOR STILL IMAGE." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 01 (February 2011): 135–45. http://dx.doi.org/10.1142/s0218001411008488.

Abstract:
Region-of-interest (ROI) image coding is one of the new features included in the JPEG2000 image coding standard. Two methods are defined in the standard: the Maxshift method and the generic scaling based method. In this paper, a new region-of-interest coding method called Contour-based Multi-ROI Multi-quality Image Coding (CMM) is proposed. Unlike other existing methods, the CMM method takes the contour and texture of the whole image as a special ROI, so that the visually most important parts (in both the ROI and the background) are coded first. Experimental results indicate that the proposed method significantly outperforms previous ROI coding schemes in overall ROI coding performance.
10

Hussain, Ikram, Oh-Jin Kwon, and Seungcheol Choi. "Evaluating the Coding Performance of 360° Image Projection Formats Using Objective Quality Metrics." Symmetry 13, no. 1 (January 5, 2021): 80. http://dx.doi.org/10.3390/sym13010080.

Abstract:
Recently, 360° content has emerged as a new method for offering real-life interaction. Ultra-high resolution 360° content is mapped to the two-dimensional plane to adjust to the input of existing generic coding standards for transmission. Many formats have been proposed, and tremendous work is being done to investigate 360° videos in the Joint Video Exploration Team using projection-based coding. However, the standardization activities for quality assessment of 360° images are limited. In this study, we evaluate the coding performance of various projection formats, including recently-proposed formats adapting to the input of JPEG and JPEG 2000 content. We present an overview of the nine state-of-the-art formats considered in the evaluation. We also propose an evaluation framework for reducing the bias toward the native equi-rectangular (ERP) format. We consider the downsampled ERP image as the ground truth image. Firstly, format conversions are applied to the ERP image. Secondly, each converted image is subjected to the JPEG and JPEG 2000 image coding standards, then decoded and converted back to the downsampled ERP to find the coding gain of each format. The quality metrics designed for 360° content and conventional 2D metrics have been used for both end-to-end distortion measurement and codec level, in two subsampling modes, i.e., YUV (4:2:0 and 4:4:4). Our evaluation results prove that the hybrid equi-angular format and equatorial cylindrical format achieve better coding performance among the compared formats. Our work presents evidence to find the coding gain of these formats over ERP, which is useful for identifying the best image format for a future standard.
11

Tian, Hua, Ming Jun Li, and Huan Huan Liu. "Research and Exploration on Static Image Compression Technology Based on JPEG2000." Applied Mechanics and Materials 644-650 (September 2014): 4182–86. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4182.

Abstract:
This article introduces GPU-accelerated parallel image processing into the core coding system of the JPEG2000 still image compression standard and designs an accelerated compression process based on CUDA. It also establishes an algorithm for layered and reconstruction coding of the image pixel array and implements this algorithm in VC software. To verify the effectiveness and general applicability of the algorithm and program, this paper compresses four images of different sizes and pixel counts into the JPEG2000 still image format. Comparison of the compression times shows that the GPU-based image processing system achieves a higher speedup ratio, and that the speedup ratio grows as pixel count and image size increase, which indicates that GPU acceleration adapts well.
12

PRASAD, MUNAGA V. N. K., and K. K. SHUKLA. "TREE TRIANGULAR CODING IMAGE COMPRESSION ALGORITHMS." International Journal of Image and Graphics 01, no. 04 (October 2001): 591–603. http://dx.doi.org/10.1142/s0219467801000414.

Abstract:
This paper presents new algorithms for image compression that are an improvement on the recently published Binary Tree Triangular Coding (BTTC). These algorithms are based on recursive decomposition of the image domain into triangles where the new triangle vertex is located at the point of maximum prediction error and does not require the constraints of right-angled isosceles triangle and square image as in previous algorithm. These algorithms execute in O(n log n) for encoding and θ(n) for decoding, where n is the number of image pixels. Simulation results show that the new algorithms have a significant execution time advantage over conventional BTTC while providing a quality of the reconstructed image as good as BTTC. This improvement is obtained by eliminating a major weakness of the standard BTTC, wherein the algorithm does not utilize the point of maximum error for domain decomposition despite performing an exhaustive search (in the worst case) over the triangular domain.
13

ZHANG, Ting, Huihui BAI, Mengmeng ZHANG, and Yao ZHAO. "Standard-Compliant Multiple Description Image Coding Based on Convolutional Neural Networks." IEICE Transactions on Information and Systems E101.D, no. 10 (October 1, 2018): 2543–46. http://dx.doi.org/10.1587/transinf.2018edl8028.

14

YUEN, CHING-HUNG, and KWOK-WO WONG. "CRYPTANALYSIS ON SECURE FRACTAL IMAGE CODING BASED ON FRACTAL PARAMETER ENCRYPTION." Fractals 20, no. 01 (March 2012): 41–51. http://dx.doi.org/10.1142/s0218348x12500041.

Abstract:
The vulnerabilities of the selective encryption scheme for fractal image coding proposed by Lian et al.1 are identified. By comparing multiple cipher-images of the same plain-image encrypted with different keys, the positions of unencrypted parameters in each encoded block are located. This allows the adversary to recover the encrypted depth of the quadtree by observing the length of each matched domain block. With this depth information and the unencrypted parameters, the adversary is able to reconstruct an intelligent image. Experimental results show that some standard test images can be successfully decoded and recognized by replacing the encrypted contrast scaling factor and brightness offset with specific values. Some remedial approaches are suggested to enhance the security of the scheme.
15

Salih, Yusra Ahmed, Aree Ali Mohammed, and Loay Edwar George. "Improved Image Compression Scheme Using Hybrid Encoding Algorithm." Kurdistan Journal of Applied Research 4, no. 2 (October 31, 2019): 90–101. http://dx.doi.org/10.24017/science.2019.2.9.

Abstract:
Color image compression is among the most challenging tasks in the field of multimedia. Over the last decades, several techniques have been developed to improve quality, coding time, and compression ratio using different coding strategies. In this work, an effective compression method for color images is proposed based on the discrete wavelet transform and a hybrid encoding algorithm (Huffman and SPIHT). The paper's primary contribution is to take advantage of the hybrid encoding technique to maintain the quality of the reconstructed image while reducing time complexity. The sample test images are taken both from a standard image database and from high-quality (SD and HD) images. The performance of the proposed scheme is evaluated using different metrics (PSNR, compression ratio, and encoding time). Test results indicate that encoding time and compression ratio are improved at the expense of image quality.
16

Pinheiro, Antonio. "JPEG column: 82nd JPEG meeting in Lisbon, Portugal." ACM SIGMultimedia Records 11, no. 1 (March 2019): 1. http://dx.doi.org/10.1145/3458462.3458468.

Abstract:
JPEG has been the most common representation format for digital images for more than 25 years. Other image representation formats, such as JPEG 2000 or, more recently, JPEG XS, have been standardised by the JPEG committee. Furthermore, JPEG has been extended with new functionalities, such as HDR and alpha plane coding, through the JPEG XT standard, and more recently with a reference software. Other solutions have also been proposed by different players, with limited success. The JPEG committee decided it was time to create a new work item, named JPEG XL, which aims to develop an image coding standard with increased quality and flexibility combined with better compression efficiency. The evaluation of the responses to the call for proposals has already confirmed industry interest, and the development of core experiments has now begun. Several functionalities will be considered, such as support for lossless transcoding of images represented in the JPEG standard.
17

Coelho, Diego F. G., Renato J. Cintra, Fábio M. Bayer, Sunera Kulasekera, Arjuna Madanayake, Paulo Martinez, Thiago L. T. Silveira, Raíza S. Oliveira, and Vassil S. Dimitrov. "Low-Complexity Loeffler DCT Approximations for Image and Video Coding." Journal of Low Power Electronics and Applications 8, no. 4 (November 22, 2018): 46. http://dx.doi.org/10.3390/jlpea8040046.

Abstract:
This paper introduced a matrix parametrization method based on the Loeffler discrete cosine transform (DCT) algorithm. As a result, a new class of 8-point DCT approximations was proposed, capable of unifying the mathematical formalism of several 8-point DCT approximations archived in the literature. Pareto-efficient DCT approximations are obtained through multicriteria optimization, where computational complexity, proximity, and coding performance are considered. Efficient approximations and their scaled 16- and 32-point versions are embedded into image and video encoders, including a JPEG-like codec and H.264/AVC and H.265/HEVC standards. Results are compared to the unmodified standard codecs. Efficient approximations are mapped and implemented on a Xilinx VLX240T FPGA and evaluated for area, speed, and power consumption.
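As a rough illustration of what an 8-point DCT approximation is, the sketch below (it uses the well-known signed-DCT idea as a stand-in, not the Loeffler-based approximations proposed in the paper) builds the exact orthonormal DCT-II matrix, replaces its entries by their signs to obtain a multiplierless transform, renormalizes the rows, and measures how far the result is from the exact transform.

    import numpy as np

    N = 8
    # Exact 8-point orthonormal DCT-II matrix C[k, n]
    k = np.arange(N).reshape(-1, 1)
    n = np.arange(N).reshape(1, -1)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)

    # Low-complexity approximation: keep only the signs (multiplier-free),
    # then rescale each row to unit norm (the "signed DCT" idea).
    T = np.sign(C)
    D = np.diag(1.0 / np.sqrt(np.sum(T * T, axis=1)))
    C_hat = D @ T

    print("exact DCT orthonormal:", np.allclose(C @ C.T, np.eye(N)))
    print("Frobenius distance of the approximation to the exact DCT:",
          np.linalg.norm(C - C_hat))

Proximity measures of this kind, together with arithmetic complexity and coding gain, are the criteria that multicriteria optimization of approximations trades off.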
18

Cho, Sang-Gyu, Zoran Bojkovic, Dragorad Milovanovic, Jungsik Lee, and Jae-Jeong Hwang. "Image quality evaluation: JPEG 2000 versus intra-only H.264/AVC High Profile." Facta universitatis - series: Electronics and Energetics 20, no. 1 (2007): 71–83. http://dx.doi.org/10.2298/fuee0701071c.

Abstract:
The objective of this work is to provide an image quality evaluation of the intra-only H.264/AVC High Profile (HP) standard versus the JPEG2000 standard. Here, we review the structure of the two standards and their coding algorithms in the context of subjective and objective assessments. Simulations were performed on a test set of monochrome and color images. As a result of the simulations, we observed that the subjective and objective image quality of H.264/AVC is superior to JPEG2000, except for the blocking artifact, which is inherent because H.264/AVC uses block transforms rather than a whole-image transform. Thus, we propose a unified measurement system to properly define image quality.
19

Sowmithri, K. "An Iterative Lifting Scheme on DCT Coefficients for Image Coding." International Journal of Students' Research in Technology & Management 3, no. 4 (September 27, 2015): 317–19. http://dx.doi.org/10.18510/ijsrtm.2015.341.

Abstract:
Image coding is effective because it reduces the number of bits required to store and/or transmit image data. Transform-based image coders play a significant role, as they decorrelate spatial low-level information, and they are used in international compression standards such as JPEG, JPEG 2000, MPEG, and H.264. The choice of transform is an important issue in all of these transform coding schemes; most of the literature suggests either the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT). In the proposed work, the energy preservation of DCT coefficients is analysed, and a lifting scheme is iteratively applied to downsample these coefficients, compensating for the artifacts that appear in the reconstructed picture and yielding a higher compression ratio. This is followed by scalar quantization and entropy coding, as in JPEG. The performance of the proposed iterative lifting scheme, applied to decorrelated DCT coefficients, is measured with the standard Peak Signal-to-Noise Ratio (PSNR), and the results are encouraging.
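For context on what a single lifting step looks like, here is a minimal one-dimensional sketch of the classic reversible 5/3 predict/update pair used in JPEG 2000; it is only a generic illustration, not the iterative lifting-on-DCT scheme proposed in the paper.

    def lifting_53_forward(x):
        """One level of the reversible 5/3 lifting wavelet on a 1-D integer signal.
        Returns (approximation, detail). Border samples are clamped."""
        assert len(x) % 2 == 0 and len(x) >= 2
        even, odd = list(x[0::2]), list(x[1::2])
        ext = lambda seq, i: seq[min(max(i, 0), len(seq) - 1)]  # clamp at borders
        # Predict: detail = odd sample minus the average of its even neighbours
        d = [odd[i] - ((even[i] + ext(even, i + 1)) >> 1) for i in range(len(odd))]
        # Update: smooth the even samples using the new details
        s = [even[i] + ((ext(d, i - 1) + d[i] + 2) >> 2) for i in range(len(even))]
        return s, d

    def lifting_53_inverse(s, d):
        """Exact inverse: undo update, then predict, then interleave."""
        ext = lambda seq, i: seq[min(max(i, 0), len(seq) - 1)]
        even = [s[i] - ((ext(d, i - 1) + d[i] + 2) >> 2) for i in range(len(s))]
        odd = [d[i] + ((even[i] + ext(even, i + 1)) >> 1) for i in range(len(d))]
        return [v for pair in zip(even, odd) for v in pair]

    x = [12, 14, 13, 90, 91, 92, 10, 11]         # toy integer signal
    s, d = lifting_53_forward(x)
    assert lifting_53_inverse(s, d) == x         # lifting is perfectly invertible

Because each step only adds a value computed from samples that are still available at the decoder, any lifting chain of this form is exactly invertible, which is what makes iterating such steps attractive.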
20

SREELEKHA, G., and P. S. SATHIDEVI. "A WAVELET-BASED PERCEPTUAL IMAGE CODER INCORPORATING A NEW MODEL FOR COMPRESSION OF COLOR IMAGES." International Journal of Wavelets, Multiresolution and Information Processing 07, no. 05 (September 2009): 675–92. http://dx.doi.org/10.1142/s0219691309003197.

Abstract:
A wavelet-based perceptual image coder for the compression of color images is proposed here, in which the coding structure is coupled with Human Visual System models to produce high-quality images. The major contribution is the development of a new model for the compression of the color components based on psychovisual experiments, which quantifies the optimum amount of compression that can be applied to the color components for a given rate. The model is developed for the YCbCr color space and the perceptually uniform CIE Lab color space. A complete coding structure for the compression of color images is developed by incorporating the new perceptual model. The performance of the proposed coder is compared with a wavelet-based coder that uses the quantization stage of the JPEG2000 standard. The perceptual quality of the compressed images is tested using wavelet-based subjective and objective perceptual quality metrics such as Mean Opinion Score, Visual Information Fidelity, and Visual Signal to Noise Ratio. Though the model is developed for perceptually lossless, high-quality image compression, the results obtained reveal that the proposed structure gives very good perceptual quality compared to existing schemes at lower bit rates. These advantages make the proposed coder a candidate for replacing the encoder stage of current image compression standards.
21

Farzamnia, Ali, Sharifah Syed-Yusof, Norsheila Fisal, and Syed Abu-Bakar. "Investigation of Error Concealment Using Different Transform Codings and Multiple Description Codings." Journal of Electrical Engineering 63, no. 3 (May 1, 2012): 171–79. http://dx.doi.org/10.2478/v10187-012-0025-7.

Abstract:
There has been increasing use of Multiple Description Coding (MDC) for error concealment over non-ideal channels, and many approaches to MDC have been proposed to date. This paper describes attempts to conceal errors and reconstruct lost descriptions by combining MDC with the lapped orthogonal transform (LOT). In this work, LOT and other transform codings (DCT and wavelet) are used to decorrelate the image pixels in the transform domain; LOT performs better at low bit rates than the DCT and the wavelet transform. The results show that the MSE of the proposed methods decreases significantly in comparison to DCT and wavelet, the PSNR values of the reconstructed images are high, and the subjective quality of the images is very good and clear. Furthermore, the standard deviations of the reconstructed images are very small, especially over low-capacity channels.
22

Cho, Seunghyun, Dong-Wook Kim, and Seung-Won Jung. "Quality enhancement of VVC intra-frame coding for multimedia services over the Internet." International Journal of Distributed Sensor Networks 16, no. 5 (May 2020): 155014772091764. http://dx.doi.org/10.1177/1550147720917647.

Abstract:
In this article, versatile video coding, the next-generation video coding standard, is combined with a deep convolutional neural network to achieve state-of-the-art image compression efficiency. The proposed hierarchical grouped residual dense network exhaustively exploits hierarchical features in each architectural level to maximize the image quality enhancement capability. The basic building block employed for hierarchical grouped residual dense network is residual dense block which exploits hierarchical features from internal convolutional layers. Residual dense blocks are then combined into a grouped residual dense block exploiting hierarchical features from residual dense blocks. Finally, grouped residual dense blocks are connected to comprise a hierarchical grouped residual dense block so that hierarchical features from grouped residual dense blocks can also be exploited for quality enhancement of versatile video coding intra-coded images. Various non-architectural and architectural aspects affecting the training efficiency and performance of hierarchical grouped residual dense network are explored. The proposed hierarchical grouped residual dense network respectively obtained 10.72% and 14.3% of Bjøntegaard-delta-rate gains against versatile video coding in the experiments conducted on two public image datasets with different characteristics to verify the image compression efficiency.
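For a concrete picture of the basic building block described above, the following minimal PyTorch sketch implements a generic residual dense block; the channel counts and depth are arbitrary assumptions, and the paper's hierarchical grouped arrangement and training setup are not reproduced.

    import torch
    import torch.nn as nn

    class ResidualDenseBlock(nn.Module):
        """Generic residual dense block: densely connected convs plus a local residual."""
        def __init__(self, channels=64, growth=32, num_layers=4):
            super().__init__()
            self.layers = nn.ModuleList()
            for i in range(num_layers):
                self.layers.append(nn.Sequential(
                    nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True)))
            # 1x1 conv fuses all hierarchical features back to `channels`
            self.fusion = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

        def forward(self, x):
            feats = [x]
            for layer in self.layers:
                feats.append(layer(torch.cat(feats, dim=1)))   # dense connections
            return x + self.fusion(torch.cat(feats, dim=1))    # local residual connection

    rdb = ResidualDenseBlock()
    y = rdb(torch.randn(1, 64, 32, 32))    # same shape in and out: (1, 64, 32, 32)

Grouping several such blocks and adding residual connections across the groups is the general idea behind the hierarchical arrangement the abstract describes.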
23

Rahmat, Romi Fadillah, T. S. M. Andreas, Fahmi Fahmi, Muhammad Fermi Pasha, Mohammed Yahya Alzahrani, and Rahmat Budiarto. "Analysis of DICOM Image Compression Alternative Using Huffman Coding." Journal of Healthcare Engineering 2019 (June 17, 2019): 1–11. http://dx.doi.org/10.1155/2019/5810540.

Abstract:
Compression, in general, aims to reduce file size, with or without decreasing the data quality of the original file. Digital Imaging and Communications in Medicine (DICOM) is a medical imaging file standard used to store multiple kinds of information such as patient data, imaging procedures, and the image itself. With the rising use of medical imaging in clinical diagnosis, there is a need for a fast and secure method to share large numbers of medical images between healthcare practitioners, and compression has always been an option. This work analyses the Huffman coding compression method, one of the lossless compression techniques, as an alternative method to compress a DICOM file in open PACS settings. The idea of the Huffman coding compression method is to assign codewords with fewer bits to the symbols that have a higher byte frequency distribution. Experiments using different types of DICOM images are conducted, and an analysis of the performance in terms of compression ratio, compression/decompression time, and security is provided. The experimental results showed that the Huffman coding technique has the capability to compress the DICOM file up to a 1:3.7010 ratio and up to 72.98% space savings.
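As a quick sanity check on how the two figures quoted above relate, a compression ratio of 1:r corresponds to space savings of 1 - 1/r; the tiny sketch below, with hypothetical file sizes, reproduces the 1:3.7010 to 72.98% correspondence.

    def compression_stats(original_bytes: int, compressed_bytes: int):
        ratio = original_bytes / compressed_bytes          # e.g. 3.7010 means 1:3.7010
        savings = 1.0 - compressed_bytes / original_bytes  # fraction of space saved
        return ratio, savings

    # Hypothetical DICOM file sizes chosen to match the ratio reported above.
    ratio, savings = compression_stats(original_bytes=37_010_000,
                                       compressed_bytes=10_000_000)
    print(f"ratio 1:{ratio:.4f}, space savings {savings:.2%}")   # 1:3.7010, 72.98%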
24

KITAURA, Y., M. MUNEYASU, and K. NAKANISHI. "A New Progressive Image Quality Control Method for ROI Coding in JPEG2000 Standard." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E91-A, no. 4 (April 1, 2008): 998–1005. http://dx.doi.org/10.1093/ietfec/e91-a.4.998.

25

Descampe, Antonin, Thomas Richter, Touradj Ebrahimi, Siegfried Foessel, Joachim Keinert, Tim Bruylants, Pascal Pellegrin, Charles Buysschaert, and Gael Rouvroy. "JPEG XS—A New Standard for Visually Lossless Low-Latency Lightweight Image Coding." Proceedings of the IEEE 109, no. 9 (September 2021): 1559–77. http://dx.doi.org/10.1109/jproc.2021.3080916.

26

Zhang, Lu. "The Design of Static Image Compression System." Advanced Materials Research 1042 (October 2014): 150–53. http://dx.doi.org/10.4028/www.scientific.net/amr.1042.150.

Abstract:
In this paper, a DSP-based static image compression system is introduced. The compression standard used by this system is JPEG2000, the image compression standard issued by ISO. First, the basic algorithm and key technologies of JPEG2000 are explained. Then, the realization of the static image compression system based on JPEG2000 coding is analyzed. Finally, the DSP-based hardware of the system is described. Verification shows that the system achieves very good static image compression results.
27

Liu, Hai Bo, and Xiao Sheng Huang. "An Improved Error Concealment Technique Based on Multi-View Video Coding." Applied Mechanics and Materials 599-601 (August 2014): 1383–86. http://dx.doi.org/10.4028/www.scientific.net/amm.599-601.1383.

Abstract:
In this paper, we propose an improved error concealment technique based on multi-view video coding to recover damaged video images. First, the Boundary Matching Algorithm (BMA) is used to recover lost or erroneously received motion vectors or disparity vectors; then inter-view, temporal, and spatial correlations are combined to recover the lost blocks. The JM12.0 reference model of the H.264 standard is used to evaluate the algorithm, and the experimental results show that our algorithm achieves better image reconstruction.
28

Zhang, Xi, and Noriaki Fukuda. "Lossy to lossless image coding based on wavelets using a complex allpass filter." International Journal of Wavelets, Multiresolution and Information Processing 12, no. 04 (July 2014): 1460002. http://dx.doi.org/10.1142/s0219691314600029.

Abstract:
Wavelet-based image coding has been adopted in the international standard JPEG 2000 for its efficiency. It is well-known that the orthogonality and symmetry of wavelets are two important properties for many applications of signal processing and image processing. Both can be simultaneously realized by the wavelet filter banks composed of a complex allpass filter, thus, it is expected to get a better coding performance than the conventional biorthogonal wavelets. This paper proposes an effective implementation of orthonormal symmetric wavelet filter banks composed of a complex allpass filter for lossy to lossless image compression. First, irreversible real-to-real wavelet transforms are realized by implementing a complex allpass filter for lossy image coding. Next, reversible integer-to-integer wavelet transforms are proposed by incorporating the rounding operation into the filtering processing to obtain an invertible complex allpass filter for lossless image coding. Finally, the coding performance of the proposed orthonormal symmetric wavelets is evaluated and compared with the D-9/7 and D-5/3 biorthogonal wavelets. It is shown from the experimental results that the proposed allpass-based orthonormal symmetric wavelets can achieve a better coding performance than the conventional D-9/7 and D-5/3 biorthogonal wavelets both in lossy and lossless coding.
29

YANG, GUOAN, and NANNING ZHENG. "AN OPTIMIZATION ALGORITHM FOR BIORTHOGONAL WAVELET FILTER BANKS DESIGN." International Journal of Wavelets, Multiresolution and Information Processing 06, no. 01 (January 2008): 51–63. http://dx.doi.org/10.1142/s0219691308002215.

Abstract:
A new approach for designing the Biorthogonal Wavelet Filter Bank (BWFB) for the purpose of image compression is presented in this paper. The approach is broken into two steps. First, an optimal filter bank is designed in the theoretical sense, based on Vaidyanathan's coding gain criterion in the SubBand Coding (SBC) system. Then, the above filter bank is optimized based on the criterion of Peak Signal-to-Noise Ratio (PSNR) in the JPEG2000 image compression system, resulting in a BWFB in practical application sense. With the approach, a series of BWFBs for a specific class of applications related to image compression, such as gray-level images, can be quickly designed. Here, new 7/5 BWFBs are presented based on the above approach for image compression applications. Experiments show that the 7/5 BWFBs not only have excellent compression performance, but also easy computation and are more suitable for VLSI hardware implementations. They perform equally well with respect to 7/5 filters in the JPEG2000 standard.
30

Song, Hong Mei, Hai Wei Mu, and Dong Yan Zhao. "Study on Nearly Lossless Compression with Progressive Decoding." Advanced Materials Research 926-930 (May 2014): 1751–54. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.1751.

Abstract:
A nearly lossless compression algorithm with progressive transmission and decoding is proposed. The image data are grouped by frequency based on the DCT, and the JPEG-LS core algorithm (texture prediction and Golomb coding) is then applied to each group of data in order to achieve progressive image transmission and decoding. Experiments on the standard test images, compared with JPEG-LS, show that the compression ratio of this algorithm is very similar to that of JPEG-LS; the algorithm loses a little image information but gains the ability to transmit and decode progressively.
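For orientation, the JPEG-LS core mentioned above rests on two well-known building blocks: the median edge detector (MED) predictor and Golomb-Rice coding of mapped prediction residuals. The sketch below illustrates those two pieces generically (fixed Rice parameter, no context modelling); it is not the grouped-DCT scheme proposed in the paper.

    def med_predict(a, b, c):
        """JPEG-LS median edge detector: a=left, b=above, c=upper-left neighbour."""
        if c >= max(a, b):
            return min(a, b)
        if c <= min(a, b):
            return max(a, b)
        return a + b - c

    def rice_encode(value, k):
        """Golomb-Rice code of a non-negative integer with parameter k:
        unary quotient followed by a k-bit remainder."""
        q, r = value >> k, value & ((1 << k) - 1)
        return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

    def zigzag_map(e):
        """Map a signed residual to a non-negative integer: 0,-1,1,-2,2,... -> 0,1,2,3,4,..."""
        return 2 * e if e >= 0 else -2 * e - 1

    # Toy example: predict one pixel and Rice-code its residual (k chosen by hand).
    a, b, c, actual = 100, 104, 101, 103
    residual = actual - med_predict(a, b, c)
    print(med_predict(a, b, c), residual, rice_encode(zigzag_map(residual), k=2))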
31

Blanes, Ian, Aaron Kiely, Miguel Hernández-Cabronero, and Joan Serra-Sagristà. "Performance Impact of Parameter Tuning on the CCSDS-123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression Standard." Remote Sensing 11, no. 11 (June 11, 2019): 1390. http://dx.doi.org/10.3390/rs11111390.

Abstract:
This article studies the performance impact related to different parameter choices for the new CCSDS-123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression standard. This standard supersedes CCSDS-123.0-B-1 and extends it by incorporating a new near-lossless compression capability, as well as other new features. This article studies the coding performance impact of different choices for the principal parameters of the new extensions, in addition to reviewing related parameter choices for existing features. Experimental results include data from 16 different instruments with varying detector types, image dimensions, number of spectral bands, bit depth, level of noise, level of calibration, and other image characteristics. Guidelines are provided on how to adjust the parameters in relation to their coding performance impact.
32

Murakami, Yuri. "Lossless and lossy coding for multispectral image based on sRGB standard and residual components." Journal of Electronic Imaging 20, no. 2 (April 1, 2011): 023003. http://dx.doi.org/10.1117/1.3574104.

33

Narayanan, M. Rajaram, and S. Gowri. "Intelligent Vision Based Technique Using ANN for Surface Finish Assessment of Machined Components." Key Engineering Materials 364-366 (December 2007): 1251–56. http://dx.doi.org/10.4028/www.scientific.net/kem.364-366.1251.

Abstract:
In this work, an FPGA hardware based image processing algorithm for preprocessing the images and enhance the image quality has been developed. The captured images were processed using a FPGA chip to remove the noise and then using a neural network, the surface roughness of machined parts produced by the grinding process was estimated. To ensure the effectiveness of this approach the roughness values quantified using these image vision techniques were then compared with widely accepted standard mechanical stylus instrument values. Quantification of digital images for surface roughness was performed by extracting key image features using Fourier transform and the standard deviation of gray level intensity values. A VLSI chip belonging to the Xilinx family Spartan-IIE FPGA board was used for the hardware based filter implementation. The coding was done using the popular VHDL language with the algorithms developed so as to exploit the implicit parallel processing capability of the chip. Thus, in this work an exhaustive analysis was done with comparison studies wherever required to make sure that the present approach of estimating surface finish based on the computer vision processing of image is more accurate and could be implemented in real time on a chip.
34

Singh, Kulwinder, Ming Ma, Dong Won Park, and Syungog An. "Image Indexing Based On Mpeg-7 Scalable Color Descriptor." Key Engineering Materials 277-279 (January 2005): 375–82. http://dx.doi.org/10.4028/www.scientific.net/kem.277-279.375.

Abstract:
The MPEG-7 standard defines a set of descriptors that extract low-level features such as color, texture, and object shape from an image and generate metadata that represents the extracted information. In this paper we propose a new image retrieval technique for image indexing based on the MPEG-7 scalable color descriptor (SCD). We use some specifications of the SCD for the implementation of the color histograms. The MPEG-7 standard defines l1-norm-based matching in the SCD, but in our approach we achieve a better result by using the cosine similarity coefficient as the distance measure for color histograms. This approach significantly increases the accuracy of image retrieval results. Experiments based on scalable color descriptors are illustrated. We also present the color spaces supported by different image and video coding standards such as JPEG-2000, MPEG-1, 2, 4, and MPEG-7. In addition, this paper outlines the broad details of the MPEG-7 color descriptors.
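To make the two matching rules mentioned above concrete, the sketch below compares the l1 distance defined by the standard with the cosine similarity the authors substitute for it, using made-up 8-bin color histograms.

    import numpy as np

    def l1_distance(h1, h2):
        """l1-norm matching between two color histograms (lower = more similar)."""
        return np.abs(h1 - h2).sum()

    def cosine_similarity(h1, h2):
        """Cosine of the angle between histogram vectors (higher = more similar)."""
        return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

    # Hypothetical 8-bin color histograms of a query image and two database images.
    query = np.array([10, 40, 25, 5, 0, 8, 7, 5], dtype=float)
    img_a = np.array([12, 38, 22, 6, 1, 9, 6, 6], dtype=float)
    img_b = np.array([30, 5, 5, 30, 10, 5, 10, 5], dtype=float)

    for name, h in [("img_a", img_a), ("img_b", img_b)]:
        print(name, l1_distance(query, h), round(cosine_similarity(query, h), 4))

The cosine measure is insensitive to overall histogram scale, which is one reason it can rank results differently from an absolute-difference rule.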
35

Kim, Seonjae, Dongsan Jun, Byung-Gyu Kim, Seungkwon Beack, Misuk Lee, and Taejin Lee. "Two-Dimensional Audio Compression Method Using Video Coding Schemes." Electronics 10, no. 9 (May 6, 2021): 1094. http://dx.doi.org/10.3390/electronics10091094.

Abstract:
As video compression is one of the core technologies that enables seamless media streaming within the available network bandwidth, it is crucial to employ media codecs to support powerful coding performance and higher visual quality. Versatile Video Coding (VVC) is the latest video coding standard developed by the Joint Video Experts Team (JVET) that can compress original data hundreds of times in the image or video; the latest audio coding standard, Unified Speech and Audio Coding (USAC), achieves a compression rate of about 20 times for audio or speech data. In this paper, we propose a pre-processing method to generate a two-dimensional (2D) audio signal as an input of a VVC encoder, and investigate the applicability to 2D audio compression using the video coding scheme. To evaluate the coding performance, we measure both signal-to-noise ratio (SNR) and bits per sample (bps). The experimental result shows the possibility of researching 2D audio encoding using video coding schemes.
36

Baghdad Science Journal. "Image Compression Using Tap 9/7 Wavelet Transform and Quadtree Coding Scheme." Baghdad Science Journal 8, no. 2 (June 12, 2011): 676–83. http://dx.doi.org/10.21123/bsj.8.2.676-683.

Abstract:
This paper is concerned with the design and implementation of an image compression method based on the biorthogonal tap-9/7 discrete wavelet transform (DWT) and a quadtree coding method. As a first step, the color correlation is handled using the YUV color representation instead of RGB. Then, the chromatic sub-bands are downsampled, and the data of each color band are transformed using the wavelet transform. The produced wavelet sub-bands are quantized using a hierarchical scalar quantization method. The quantized detail coefficients are coded using quadtree coding followed by Lempel-Ziv-Welch (LZW) encoding, while the approximation coefficients are coded using delta coding followed by LZW encoding. The test results indicate that the compression results are comparable to those obtained by standard compression schemes.
37

RAITTINEN, HARRI, and KIMMO KASKI. "CRITICAL REVIEW OF FRACTAL IMAGE COMPRESSION." International Journal of Modern Physics C 06, no. 01 (February 1995): 47–66. http://dx.doi.org/10.1142/s0129183195000058.

Abstract:
In this paper, fractal compression methods are reviewed. Three new methods are developed and their results are compared with the results obtained using four previously published fractal compression methods. Furthermore, we have compared the results of these methods with the standard JPEG method. For comparison, we have used an extensive set of image quality measures. According to these tests, fractal methods do not yield significantly better compression results when compared with conventional methods. This is especially the case when high coding accuracy (small compression ratio) is desired.
38

NADARAJAH, SARALEES. "LAPLACIAN DCT COEFFICIENT MODELS." International Journal of Wavelets, Multiresolution and Information Processing 06, no. 04 (July 2008): 553–73. http://dx.doi.org/10.1142/s0219691308002483.

Abstract:
It is well known that the distribution of the discrete cosine transform (DCT) coefficients of most natural images follows a Laplace distribution. In this note, a collection of formulas is derived for the distribution of the actual DCT coefficient. The corresponding estimation procedures are derived by the method of moments and the method of maximum likelihood. Finally, the superior performance of the derived distributions over the standard Laplace model is illustrated. It is expected that this work could serve as a useful reference and lead to improved modeling with respect to image analysis and image coding.
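To illustrate the Laplacian model in question, the sketch below computes 8x8 block DCT coefficients and fits a Laplace distribution to one AC coefficient by maximum likelihood (the location is the sample median and the scale is the mean absolute deviation from it). The image here is synthetic random data that merely exercises the code; the Laplacian behaviour itself is a property of natural images, as the abstract states.

    import numpy as np

    def dct_matrix(N=8):
        """Orthonormal DCT-II matrix."""
        k = np.arange(N).reshape(-1, 1)
        n = np.arange(N).reshape(1, -1)
        C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
        C[0, :] /= np.sqrt(2.0)
        return C

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)   # synthetic "image"

    C = dct_matrix(8)
    coeffs = []
    for i in range(0, 64, 8):
        for j in range(0, 64, 8):
            block = C @ img[i:i+8, j:j+8] @ C.T      # 2-D DCT of one 8x8 block
            coeffs.append(block[0, 1])               # collect one AC coefficient
    coeffs = np.array(coeffs)

    # Maximum-likelihood Laplace fit: location = median, scale = mean |x - median|
    mu = np.median(coeffs)
    b = np.mean(np.abs(coeffs - mu))
    print(f"Laplace fit for DCT coefficient (0,1): mu={mu:.2f}, b={b:.2f}")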
39

Christopoulos, C., J. Askelof, and M. Larsson. "Efficient methods for encoding regions of interest in the upcoming JPEG2000 still image coding standard." IEEE Signal Processing Letters 7, no. 9 (September 2000): 247–49. http://dx.doi.org/10.1109/97.863146.

40

Santos, Lucana, Sebastián Lopez, Gustavo M. Callico, José F. Lopez, and Roberto Sarmiento. "Performance Evaluation of the H.264/AVC Video Coding Standard for Lossy Hyperspectral Image Compression." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 5, no. 2 (April 2012): 451–61. http://dx.doi.org/10.1109/jstars.2011.2173906.

41

Chen, Xiao, and Xiaoqing Xu. "A Fast and Efficient Adaptive Threshold Rate Control Scheme for Remote Sensing Images." Scientific World Journal 2012 (2012): 1–6. http://dx.doi.org/10.1100/2012/691413.

Abstract:
The JPEG2000 image compression standard is ideal for processing remote sensing images. However, its algorithm is complex and it requires large amounts of memory, making it difficult to adapt to the limited transmission and storage resources necessary for remote sensing images. In the present study, an improved rate control algorithm for remote sensing images is proposed. The required coded blocks are sorted downward according to their numbers of bit planes prior to entropy coding. An adaptive threshold computed from the combination of the minimum number of bit planes, along with the minimum rate-distortion slope and the compression ratio, is used to truncate passes of each code block during Tier-1 encoding. This routine avoids the encoding of all code passes and improves the coding efficiency. The simulation results show that the computational cost and working buffer memory size of the proposed algorithm reach only 18.13 and 7.81%, respectively, of the same parameters in the postcompression rate distortion algorithm, while the peak signal-to-noise ratio across the images remains almost the same. The proposed algorithm not only greatly reduces the code complexity and buffer requirements but also maintains the image quality.
42

Poornima, G. R., and S. C. Prasanna Kumar. "Efficient H.264 Decoder Architecture using External Memory and Pipelining." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 3 (December 1, 2018): 995. http://dx.doi.org/10.11591/ijeecs.v12.i3.pp995-1002.

Abstract:
The H.264 standard is one of the most popular coding standards, bringing significant improvements to video broadcasting and streaming applications. It compresses well but requires heavy computation and complex algorithms to provide better image quality and compression rate. In the H.264 coding technique, decoder design is a key factor for efficient coding. In this paper we design a decoder for complex input and introduce several improvements, including loop arrangement, buffer upgrades, buffer supplementation, memory reuse, and a pipelined architecture; the memory structure is also modified. The designed decoder achieves better frame decoding efficiency than state-of-the-art methods. The proposed approach also provides good area optimization, with a maximum frequency of 355 MHz.
43

PhiCong, Huy, Stuart Perry, and Xiem HoangVan. "Adaptive Content Frame Skipping for Wyner–Ziv-Based Light Field Image Compression." Electronics 9, no. 11 (October 29, 2020): 1798. http://dx.doi.org/10.3390/electronics9111798.

Abstract:
Light field (LF) imaging introduces attractive possibilities for digital imaging, such as digital focusing, post-capture changing of the focal plane or view point, and scene depth estimation, by capturing both spatial and angular information of incident light rays. However, LF image compression is still a great challenge, not only due to light field imagery requiring a large amount of storage space and a large transmission bandwidth, but also due to the complexity requirements of various applications. In this paper, we propose a novel LF adaptive content frame skipping compression solution by following a Wyner–Ziv (WZ) coding approach. In the proposed coding approach, the LF image is firstly converted into a four-dimensional LF (4D-LF) data format. To achieve good compression performance, we select an efficient scanning mechanism to generate a 4D-LF pseudo-sequence by analyzing the content of the LF image with different scanning methods. In addition, to further explore the high frame correlation of the 4D-LF pseudo-sequence, we introduce an adaptive frame skipping algorithm followed by decision tree techniques based on the LF characteristics, e.g., the depth of field and angular information. The experimental results show that the proposed WZ-LF coding solution achieves outstanding rate distortion (RD) performance while having less computational complexity. Notably, a bit rate saving of 53% is achieved compared to the standard high-efficiency video coding (HEVC) Intra codec.
44

Anaraki, Marjan Sedighi, Fangyan Dong, Hajime Nobuhara, and Kaoru Hirota. "Dyadic Curvelet Transform (DClet) for Image Noise Reduction." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 6 (July 20, 2007): 641–47. http://dx.doi.org/10.20965/jaciii.2007.p0641.

Abstract:
Dyadic Curvelet transform (DClet) is proposed as a tool for image processing and computer vision. It is an extended curvelet transform that solves the problem of conventional curvelet, of decomposition into components at different scales. It provides simplicity, dyadic scales, and absence of redundancy for analysis and synthesis objects with discontinuities along curves, i.e., edges via directional basis functions. The performance of the proposed method is evaluated by removing Gaussian, Speckles, and Random noises from different noisy standard images. Average 26.71 dB Peak Signal to Noise Ratio (PSNR) compared to 25.87 dB via the wavelet transform is evidence that the DClet outperforms the wavelet transform for removing noise. The proposed method is robust, which makes it suitable for biomedical applications. It is a candidate for gray and color image enhancement and applicable for compression or efficient coding in which critical sampling might be relevant.
45

Vasiljevic, Ivana, Dinu Dragan, Ratko Obradovic, and Veljko Petrović. "Analysis of Compression Techniques for Stereoscopic Images." SPIIRAS Proceedings 6, no. 61 (November 26, 2018): 197–220. http://dx.doi.org/10.15622/sp.61.8.

Abstract:
Virtual Reality (VR) and Augmented Reality (AR) Head-Mounted Displays (HMDs) have been emerging in recent years and are gaining increased popularity in many industries. HMDs are generally used in entertainment, social interaction, and education, but their use for work is also increasing in domains such as medicine, modeling, and simulation. Despite the recent release of many types of HMDs, two major problems are hindering their widespread adoption in the mainstream market: the extremely high costs and the user experience issues [1]. The illusion of a 3D display in HMDs is achieved with a technique called stereoscopy. Applications of stereoscopic imaging are such that data transfer rates and, in mobile applications, storage quickly become a bottleneck. Therefore, efficient image compression techniques are required. Standard image compression techniques are not suitable for stereoscopic images due to the discrete differences that occur between the compressed and uncompressed images. The issue is that the loss in lossy image compression may blur the minute differences between the left-eye and right-eye images that are crucial in establishing the illusion of 3D perception. However, in order to achieve more efficient coding, there are various coding techniques that can be adapted to stereoscopic images. Stereo image compression techniques that can be found in the literature utilize the discrete wavelet transform and the morphological compression algorithm applied to the transform coefficients. This paper provides an overview and comparison of available techniques for the compression of stereoscopic images, as there is still no technique that is accepted as best for all criteria. We want to test the techniques with users who would actually be potential users of HMDs and would therefore be exposed to these techniques. We also focused our research on low-priced, consumer-grade HMDs, which should be available to a larger population.
46

Jerbi, Khaled, Mickaël Raulet, Olivier Déforges, and Mohamed Abid. "Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs." VLSI Design 2012 (August 16, 2012): 1–14. http://dx.doi.org/10.1155/2012/298396.

Abstract:
In this paper, we introduce the Reconfigurable Video Coding (RVC) standard based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called Cal Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of the CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still image decoders are summarized. We show that the obtained results can largely satisfy the real time constraints for an embedded design on FPGA as we obtain a throughput of 73 FPS for MPEG 4 decoder and 34 FPS for coding and decoding process of the LAR coder using a video of CIF image size. This work resolves the main limitation of hardware generation from CAL designs.
47

NAGARAJ, NITHIN. "HUFFMAN CODING AS A NONLINEAR DYNAMICAL SYSTEM." International Journal of Bifurcation and Chaos 21, no. 06 (June 2011): 1727–36. http://dx.doi.org/10.1142/s0218127411029392.

Abstract:
In this paper, source coding or data compression is viewed as a measurement problem. Given a measurement device with fewer states than the observable of a stochastic source, how can one capture their essential information? We propose modeling stochastic sources as piecewise-linear discrete chaotic dynamical systems known as Generalized Luröth Series (GLS) which has its roots in Georg Cantor's work in 1869. These GLS are special maps with the property that their Lyapunov exponent is equal to the Shannon's entropy of the source (up to a constant of proportionality). By successively approximating the source with GLS having fewer states (with the nearest Lyapunov exponent), we derive a binary coding algorithm which turns out to be a rediscovery of Huffman coding, the popular lossless compression algorithm used in the JPEG international standard for still image compression.
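The key property quoted above, that the Lyapunov exponent of a GLS equals the Shannon entropy of the source, can be checked numerically. In the sketch below (made-up probabilities, not the authors' coding construction), Lebesgue measure is invariant for a GLS with full linear branches, so the exponent is estimated as the expansion rate averaged over uniformly drawn points.

    import numpy as np

    # GLS for a source with probabilities p: partition [0,1) into pieces of width p[i];
    # on piece i the map is linear onto [0,1) with slope 1/p[i].
    p = np.array([0.6, 0.3, 0.1])                 # hypothetical source probabilities
    edges = np.concatenate(([0.0], np.cumsum(p)))

    rng = np.random.default_rng(1)
    x = rng.random(500_000)                        # samples from the uniform invariant measure
    i = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(p) - 1)

    lyapunov_bits = np.mean(np.log2(1.0 / p[i]))   # average expansion rate of the GLS map
    shannon_bits = -np.sum(p * np.log2(p))         # Shannon entropy of the source
    print(f"Lyapunov exponent (estimate): {lyapunov_bits:.4f} bits/iteration")
    print(f"Shannon entropy:              {shannon_bits:.4f} bits/symbol")

Both numbers come out close to 1.2955 bits for these probabilities, which is the proportionality the abstract refers to.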
48

Radosavljević, Miloš, Branko Brkljač, Predrag Lugonja, Vladimir Crnojević, Željen Trpovski, Zixiang Xiong, and Dejan Vukobratović. "Lossy Compression of Multispectral Satellite Images with Application to Crop Thematic Mapping: A HEVC Comparative Study." Remote Sensing 12, no. 10 (May 16, 2020): 1590. http://dx.doi.org/10.3390/rs12101590.

Abstract:
Remote sensing applications have gained in popularity in recent years, which has resulted in vast amounts of data being produced on a daily basis. Managing and delivering large sets of data becomes extremely difficult and resource demanding for the data vendors, but even more for individual users and third party stakeholders. Hence, research in the field of efficient remote sensing data handling and manipulation has become a very active research topic (from both storage and communication perspectives). Driven by the rapid growth in the volume of optical satellite measurements, in this work we explore the lossy compression technique for multispectral satellite images. We give a comprehensive analysis of the High Efficiency Video Coding (HEVC) still-image intra coding part applied to the multispectral image data. Thereafter, we analyze the impact of the distortions introduced by the HEVC’s intra compression in the general case, as well as in the specific context of crop classification application. Results show that HEVC’s intra coding achieves better trade-off between compression gain and image quality, as compared to standard JPEG 2000 solution. On the other hand, this also reflects in the better performance of the designed pixel-based classifier in the analyzed crop classification task. We show that HEVC can obtain up to 150:1 compression ratio, when observing compression in the context of specific application, without significantly losing on classification performance compared to classifier trained and applied on raw data. In comparison, in order to maintain the same performance, JPEG 2000 allows compression ratio up to 70:1.
49

Ko, Hyung-Hwa. "Enhanced Binary MQ Arithmetic Coder with Look-Up Table." Information 12, no. 4 (March 26, 2021): 143. http://dx.doi.org/10.3390/info12040143.

Abstract:
Binary MQ arithmetic coding is widely used as a basic entropy coder in multimedia coding systems. The MQ coder is valued for its high compression efficiency and is used in JBIG2 and JPEG2000, and the importance of arithmetic coding has grown since it was adopted as the sole entropy coder in the HEVC standard. In the binary MQ coder, a multiplication-free arithmetic approximation is used in the recursive subdivision of the range interval. Because of the MPS/LPS exchange activity that happens in the MQ coder, the output bytes tend to increase. This paper proposes an enhanced binary MQ arithmetic coder that uses a look-up table (LUT) for (A × Qe), built by quantization, to improve coding efficiency. Multi-level quantization using 2-level, 4-level, and 8-level look-up tables is proposed. Experimental results on binary documents show about 3% improvement for basic context-free binary arithmetic coding. For the JBIG2 bi-level image compression standard, compression efficiency improves by about 0.9%. In addition, for lossless JPEG2000 compression, the compressed size decreases by 1.5% using the 8-level LUT. For lossy JPEG2000 coding, the figure is a little lower, about 0.3% improvement in PSNR at the same rate.
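To give a feel for the approximation being refined here, the MQ coder keeps the interval width A renormalized into [0.75, 1.5) and replaces the product A × Qe by Qe alone; a small look-up table that quantizes A into a few levels and stores precomputed products reduces that error. The sketch below is a simplified floating-point illustration with an arbitrary Qe value, not the integer-arithmetic coder proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.uniform(0.75, 1.5, 100_000)      # interval width after renormalization
    Qe = 0.054                               # an example LPS probability estimate

    exact = A * Qe
    approx_std = np.full_like(A, Qe)         # standard MQ shortcut: A*Qe ~ Qe (A ~ 1)

    def lut_approx(A, Qe, levels):
        """Quantize A into `levels` bins over [0.75, 1.5) and read midpoint*Qe from a table."""
        edges = np.linspace(0.75, 1.5, levels + 1)
        table = (edges[:-1] + edges[1:]) / 2 * Qe          # precomputed products
        idx = np.clip(np.digitize(A, edges) - 1, 0, levels - 1)
        return table[idx]

    for levels in (2, 4, 8):
        err = np.mean(np.abs(lut_approx(A, Qe, levels) - exact)) / np.mean(exact)
        print(f"{levels}-level LUT: mean relative error {err:.3%}")
    print(f"no LUT (A*Qe ~ Qe): {np.mean(np.abs(approx_std - exact)) / np.mean(exact):.3%}")

Finer quantization of A shrinks the subdivision error, which is the mechanism behind the coding-efficiency gains reported for the 2-, 4-, and 8-level tables.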
50

Eseholi, Tarek, François-Xavier Coudoux, Patrick Corlay, Rahmad Sadli, and Maxence Bigerelle. "A Multiscale Topographical Analysis Based on Morphological Information: The HEVC Multiscale Decomposition." Materials 13, no. 23 (December 7, 2020): 5582. http://dx.doi.org/10.3390/ma13235582.

Abstract:
In this paper, we evaluate the effect of scale analysis as well as the filtering process on the performances of an original compressed-domain classifier in the field of material surface topographies classification. Each surface profile is multiscale analyzed by using a Gaussian Filter analyzing method to be decomposed into three multiscale filtered image types: Low-pass (LP), Band-pass (BP), and High-pass (HP) filtered versions, respectively. The complete set of filtered image data constitutes the collected database. First, the images are lossless compressed using the state-of-the art High-efficiency video coding (HEVC) video coding standard. Then, the Intra-Prediction Modes Histogram (IPHM) feature descriptor is computed directly in the compressed domain from each HEVC compressed image. Finally, we apply the IPHM feature descriptors as an input of a Support Vector Machine (SVM) classifier. SVM is introduced here to strengthen the performances of the proposed classification system thanks to the powerful properties of machine learning tools. We evaluate the proposed solution we called “HEVC Multiscale Decomposition” (HEVC-MD) on a huge database of nearly 42,000 multiscale topographic images. A simple preliminary version of the algorithm reaches an accuracy of 52%. We increase this accuracy to 70% by using the multiscale analysis of the high-frequency range HP filtered image data sets. Finally, we verify that considering only the highest-scale analysis of low-frequency range LP was more appropriate for classifying our six surface topographies with an accuracy of up to 81%. To compare these new topographical descriptors to those conventionally used, SVM is applied on a set of 34 roughness parameters defined on the International Standard GPS ISO 25178 (Geometrical Product Specification), and one obtains accuracies of 38%, 52%, 65%, and 57% respectively for Sa, multiscale Sa, 34 roughness parameters, and multiscale ones. Compared to conventional roughness descriptors, the HEVC-MD descriptors increase surfaces discrimination from 65% to 81%.