Journal articles on the topic 'JPEG (Image coding standard); image processing'

Consult the top 50 journal articles for your research on the topic 'JPEG (Image coding standard); image processing.'

1

Zhang, Xi, and Noriaki Fukuda. "Lossy to lossless image coding based on wavelets using a complex allpass filter." International Journal of Wavelets, Multiresolution and Information Processing 12, no. 04 (July 2014): 1460002. http://dx.doi.org/10.1142/s0219691314600029.

Abstract:
Wavelet-based image coding has been adopted in the international standard JPEG 2000 for its efficiency. It is well-known that the orthogonality and symmetry of wavelets are two important properties for many applications of signal processing and image processing. Both can be simultaneously realized by the wavelet filter banks composed of a complex allpass filter, thus, it is expected to get a better coding performance than the conventional biorthogonal wavelets. This paper proposes an effective implementation of orthonormal symmetric wavelet filter banks composed of a complex allpass filter for lossy to lossless image compression. First, irreversible real-to-real wavelet transforms are realized by implementing a complex allpass filter for lossy image coding. Next, reversible integer-to-integer wavelet transforms are proposed by incorporating the rounding operation into the filtering processing to obtain an invertible complex allpass filter for lossless image coding. Finally, the coding performance of the proposed orthonormal symmetric wavelets is evaluated and compared with the D-9/7 and D-5/3 biorthogonal wavelets. It is shown from the experimental results that the proposed allpass-based orthonormal symmetric wavelets can achieve a better coding performance than the conventional D-9/7 and D-5/3 biorthogonal wavelets both in lossy and lossless coding.
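The reversible integer-to-integer idea described above (folding a rounding operation into each lifting step so that the inverse can undo it bit-exactly) can be illustrated with the simpler D-5/3 wavelet that the paper uses as a comparison point. A minimal Python sketch, assuming the standard LeGall 5/3 lifting steps and periodic boundary extension rather than the authors' complex allpass filter bank:

```python
import numpy as np

def legall53_forward(x):
    """Reversible integer-to-integer LeGall 5/3 lifting on a 1-D signal of even length."""
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()                      # even / odd samples
    # Predict: detail = odd - floor(mean of neighbouring evens); periodic extension via roll
    d -= np.floor((s + np.roll(s, -1)) / 2).astype(np.int64)
    # Update: approximation = even + floor((neighbouring details + 2) / 4)
    s += np.floor((d + np.roll(d, 1) + 2) / 4).astype(np.int64)
    return s, d

def legall53_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order with identical rounding."""
    s = s - np.floor((d + np.roll(d, 1) + 2) / 4).astype(np.int64)
    d = d + np.floor((s + np.roll(s, -1)) / 2).astype(np.int64)
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

x = np.random.randint(0, 256, size=32)
s, d = legall53_forward(x)
assert np.array_equal(legall53_inverse(s, d), x)               # bit-exact reconstruction
```

Because every lifting step adds or subtracts a rounded quantity that the inverse recomputes from the same inputs, the transform maps integers to integers and reconstructs losslessly, which is the property the invertible allpass filter provides in the paper.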
2

Kliuchenia, V. V. "Design of a discrete cosine transformation processor for image compression systems on a lossless-to-lossy circuit." Doklady BGUIR 19, no. 3 (June 2, 2021): 5–13. http://dx.doi.org/10.35596/1729-7648-2021-19-3-5-13.

Abstract:
Today, mobile multimedia systems that use the H.261/3/4/5, MPEG-1/2/4 and JPEG standards for encoding/decoding video, audio and images are widely spread [1–4]. The core of these standards is the discrete cosine transform (DCT) of types I, II, III … VIII. Wide support of the JPEG format in a huge number of multimedia applications by circuitry and software solutions, and the need for image coding according to the L2L scheme, determine the relevance of the problem of creating a decorrelated transformation based on the DCT and of methods for rapid prototyping of processors for computing an integer DCT on programmable systems on an FPGA chip. At the same time, such characteristics as structural regularity, modularity, high computational parallelism, low latency and power consumption are taken into account. Direct and inverse transformation should be carried out according to the “whole-to-whole” processing scheme with preservation of the perfect reconstruction of the original image (the coefficients are represented by integer or binary rational numbers; the number of multiplication operations is minimal and, if possible, they are excluded from the algorithm). The well-known integer DCTs (BinDCT, IntDCT) do not give a completely reversible bit-to-bit conversion. To encode an image according to the L2L scheme, the decorrelated transform must be reversible and implemented in integer arithmetic, i.e., the conversion should follow an “integer-to-integer” processing scheme with a minimum number of rounding operations affecting the compactness of energy in the equivalent conversion subbands. This article shows how, on the basis of integer forward and inverse DCTs, to create a new universal architecture of decorrelated transform on FPGAs for transform image coding systems that operate on the principle of “lossless-to-lossy” (L2L), and to obtain the best experimental results for objective and subjective performance compared with comparable compression systems.
3

Saudagar, Abdul Khader Jilani. "Biomedical Image Compression Techniques for Clinical Image Processing." International Journal of Online and Biomedical Engineering (iJOE) 16, no. 12 (October 19, 2020): 133. http://dx.doi.org/10.3991/ijoe.v16i12.17019.

Abstract:
Image processing is widely used in the domain of biomedical engineering, especially for the compression of clinical images. Clinical diagnosis receives high importance, which involves handling patients' data more accurately and wisely when treating patients remotely. Many researchers have proposed different methods for the compression of medical images using Artificial Intelligence techniques. Developing efficient automated systems for the compression of medical images in telemedicine is the focal point of this paper. Three major approaches are proposed here for medical image compression: image compression using neural networks, fuzzy logic and neuro-fuzzy logic to preserve a higher spectral representation and maintain finer edge information, and relational coding of inter-band coefficients to achieve high compression. The developed image coding model is evaluated over various quality factors. From the simulation results it is observed that the proposed image coding system can achieve efficient compression performance compared with existing block coding and JPEG coding approaches, even under resource-constrained environments.
4

Man, Hong, Alen Docef, and Faouzi Kossentini. "Performance Analysis of the JPEG 2000 Image Coding Standard." Multimedia Tools and Applications 26, no. 1 (May 2005): 27–57. http://dx.doi.org/10.1007/s11042-005-6848-5.

5

Tanaka, Midori, Tomoyuki Takanashi, and Takahiko Horiuchi. "Glossiness-aware Image Coding in JPEG Framework." Journal of Imaging Science and Technology 64, no. 5 (September 1, 2020): 50409–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2020.64.5.050409.

Abstract:
In images, the representation of glossiness, translucency, and roughness of material objects (Shitsukan) is essential for realistic image reproduction. To date, image coding has been developed considering various indices of the quality of the encoded image, for example, the peak signal-to-noise ratio. Consequently, image coding methods that preserve subjective impressions of qualities such as Shitsukan have not been studied. In this study, the authors focus on the property of glossiness and propose a method of glossiness-aware image coding. Their purpose is to develop an encoding algorithm that produces images that can be decoded by standard JPEG decoders, which are commonly used worldwide. The proposed method consists of three procedures: block classification, glossiness enhancement, and non-glossiness information reduction. In block classification, the types of glossiness in a target image are classified using block units. In glossiness enhancement, the glossiness in each type of block is emphasized to reduce the amount of degradation of glossiness during JPEG encoding. The third procedure, non-glossiness information reduction, further compresses the information while maintaining the glossiness by reducing the information in each block that does not represent the glossiness in the image. To test the effectiveness of the proposed method, the authors conducted a subjective evaluation experiment using paired comparison of images coded by the proposed method and JPEG images with the same data size. The glossiness was found to be better preserved in images coded by the proposed method than in the JPEG images.
6

Pinheiro, Antonio. "JPEG column: 82nd JPEG meeting in Lisbon, Portugal." ACM SIGMultimedia Records 11, no. 1 (March 2019): 1. http://dx.doi.org/10.1145/3458462.3458468.

Abstract:
JPEG has been the most common representation format of digital images for more than 25 years. Other image representation formats have been standardised by the JPEG committee, like JPEG 2000 or, more recently, JPEG XS. Furthermore, JPEG has been extended with new functionalities like HDR or alpha plane coding with the JPEG XT standard, and more recently with a reference software. Other solutions have also been proposed by different players, with limited success. The JPEG committee decided it is time to create a new work item, named JPEG XL, that aims to develop an image coding standard with increased quality and flexibility combined with better compression efficiency. The evaluation of the call-for-proposals responses has already confirmed the industry's interest, and the development of core experiments has now begun. Several functionalities will be considered, like support for lossless transcoding of images represented with the JPEG standard.
7

Dufaux, Frederic, Gary J. Sullivan, and Touradj Ebrahimi. "The JPEG XR image coding standard [Standards in a Nutshell]." IEEE Signal Processing Magazine 26, no. 6 (November 2009): 195–204. http://dx.doi.org/10.1109/msp.2009.934187.

8

Skodras, A., C. Christopoulos, and T. Ebrahimi. "The JPEG 2000 still image compression standard." IEEE Signal Processing Magazine 18, no. 5 (2001): 36–58. http://dx.doi.org/10.1109/79.952804.

9

Sowmithri, K. "An Iterative Lifting Scheme on DCT Coefficients for Image Coding." International Journal of Students' Research in Technology & Management 3, no. 4 (September 27, 2015): 317–19. http://dx.doi.org/10.18510/ijsrtm.2015.341.

Abstract:
Image coding is considered to be more effective, as it reduces the number of bits required to store and/or transmit image data. Transform-based image coders play a significant role as they decorrelate the spatial low-level information. They have found utilization in international compression standards such as JPEG, JPEG 2000, MPEG and H.264. The choice of transform is an important issue in all these transform coding schemes. Most of the literature suggests either the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT). In this proposed work, the energy preservation of DCT coefficients is analysed and, to downsample these coefficients, a lifting scheme is iteratively applied so as to compensate for the artifacts that appear in the reconstructed picture and to yield a higher compression ratio. This is followed by scalar quantization and entropy coding, as in JPEG. The performance of the proposed iterative lifting scheme, employed on decorrelated DCT coefficients, is measured with the standard Peak Signal to Noise Ratio (PSNR) and the results are encouraging.
10

Mechouek, Khaoula, Nasreddine Kouadria, Noureddine Doghmane, and Nadia Kaddeche. "Low Complexity DCT Approximation for Image Compression in Wireless Image Sensor Networks." Journal of Circuits, Systems and Computers 25, no. 08 (May 17, 2016): 1650088. http://dx.doi.org/10.1142/s0218126616500882.

Abstract:
Energy consumption is a critical problem affecting the lifetime of wireless image sensor networks (WISNs). In such systems, images are usually compressed using JPEG standard to save energy during transmission. And since DCT transformation is the most computationally intensive part in the JPEG technique, several approximation techniques have been proposed to further decrease the energy consumption. In this paper, we propose a low-complexity DCT approximation method which is based on the combination of the rounded DCT with a pruned approach. Experimental comparison with recently proposed schemes, using Atmel Atmega128L platform, shows that our method requires less arithmetic operations, and hence less processing time and/or the energy consumption while providing better performance in terms of PSNR metric.
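The pruning half of the idea is easy to illustrate: only the low-frequency coefficients that the application needs are computed, so the remaining rows of the transform are never evaluated. The sketch below uses the exact DCT matrix and an illustrative keep=4 cut-off; the authors additionally replace the matrix with a rounded, multiplication-free approximation, which is not reproduced here:

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix (exact reference version)."""
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / N)

def pruned_dct2(block, keep=4):
    """Pruned 2-D DCT of an 8x8 block: only the keep x keep low-frequency
    coefficients are computed; the discarded rows cost no operations at all."""
    C = dct_matrix(8)[:keep, :]
    return C @ block @ C.T

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128
coeffs = pruned_dct2(block, keep=4)        # 16 coefficients instead of 64
```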
11

Yan, Gao, Yan Liang, Xin Zhou, and Chun Xia Qi. "A Digital Watermarking Algorithm Based on the Wavelet Bit Plane Coding." Advanced Materials Research 821-822 (September 2013): 1438–41. http://dx.doi.org/10.4028/www.scientific.net/amr.821-822.1438.

Abstract:
In this paper, an algorithm for digital image watermarking based on wavelet bit planes is introduced, in which the original image is not required for detecting the watermark. The digital watermark is embedded by changing the information of some bit planes in DWT images at different resolutions. The watermark can be extracted from the difference of bit plane values of subimages of the decomposed watermarked image, which is then mapped to an image with a few shades of gray. Experimental results show that the watermark is robust to several signal processing techniques, including JPEG compression and some image processing operations.
12

Jia, Hong Li, and Qiang Liu. "JPEG DCT Compression." Advanced Materials Research 712-715 (June 2013): 2542–45. http://dx.doi.org/10.4028/www.scientific.net/amr.712-715.2542.

Abstract:
With the rapid spread of image processing applications and the further development of multimedia technologies, compression standards become more and more important. This paper explains JPEG (Joint Photographic Experts Group) compression, which is currently a worldwide standard for digital image compression and is based on the discrete cosine transform (DCT). Based on this research, the paper describes the theory and algorithms of JPEG DCT compression and implements a baseline JPEG codec (encoder/decoder) with MATLAB.
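As a companion to this summary, here is a minimal sketch of the core of a baseline JPEG encoder and decoder for one luminance block (level shift, 8x8 DCT, quantization with the Annex K luminance table). It is written in Python/NumPy rather than MATLAB, and zig-zag scanning and entropy coding are omitted:

```python
import numpy as np
from scipy.fftpack import dct, idct

# JPEG luminance quantization table (ITU-T T.81, Annex K)
Q = np.array([[16, 11, 10, 16,  24,  40,  51,  61],
              [12, 12, 14, 19,  26,  58,  60,  55],
              [14, 13, 16, 24,  40,  57,  69,  56],
              [14, 17, 22, 29,  51,  87,  80,  62],
              [18, 22, 37, 56,  68, 109, 103,  77],
              [24, 35, 55, 64,  81, 104, 113,  92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def encode_block(block):
    """Level-shift, transform and quantize one 8x8 luminance block."""
    return np.round(dct2(block.astype(float) - 128) / Q).astype(int)

def decode_block(coeffs):
    """Dequantize, inverse-transform and undo the level shift."""
    return np.clip(np.round(idct2(coeffs * Q)) + 128, 0, 255).astype(np.uint8)

block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
reconstructed = decode_block(encode_block(block))        # lossy round trip
```

Scaling Q up or down by a quality factor is what trades file size against distortion in a full codec.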
13

Song, Hong Mei, Hai Wei Mu, and Dong Yan Zhao. "Study on Nearly Lossless Compression with Progressive Decoding." Advanced Materials Research 926-930 (May 2014): 1751–54. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.1751.

Abstract:
A nearly lossless compression algorithm with progressive transmission and decoding is proposed. The image data are grouped according to different frequencies based on the DCT transform; the JPEG-LS core algorithm (texture prediction and Golomb coding) is then applied to each group of data in order to achieve progressive image transmission and decoding. Experiments on the standard test images comparing this algorithm with JPEG-LS show that its compression ratio is very similar to that of JPEG-LS; the algorithm loses a little image information, but it has the ability of progressive transmission and decoding.
14

Li, Shi Jun, Xi Long Qu, and Qiang Li. "Implementation of the JPEG on DSP Processors." Applied Mechanics and Materials 34-35 (October 2010): 1536–39. http://dx.doi.org/10.4028/www.scientific.net/amm.34-35.1536.

Abstract:
This paper introduces the design and implementation of JPEG image compression based on the high-speed DSP TMS320VC5416 available from Texas Instruments. In particular, the realization and optimization of the DCT transform is discussed, and the image Lena is compressed in different ways. Experiments show that the reconstructed images have a PSNR above 34 dB. The JPEG algorithm is a digital image compression algorithm with a high compression ratio and little distortion, and it has been adopted as an international standard. This standard has been widely used in digital cameras, surveillance systems, mobile phones, video phones, and many other areas. It is important to research and realize a real-time image compression system using JPEG. DSP is used in real-time processing and portable applications with a special hardware structure. With high processing speed and excellent operation performance, DSP is particularly adapted to image processing. This article introduces a DSP-based implementation of JPEG [1].
15

Hussain, Ikram, Oh-Jin Kwon, and Seungcheol Choi. "Evaluating the Coding Performance of 360° Image Projection Formats Using Objective Quality Metrics." Symmetry 13, no. 1 (January 5, 2021): 80. http://dx.doi.org/10.3390/sym13010080.

Abstract:
Recently, 360° content has emerged as a new method for offering real-life interaction. Ultra-high resolution 360° content is mapped to the two-dimensional plane to adjust to the input of existing generic coding standards for transmission. Many formats have been proposed, and tremendous work is being done to investigate 360° videos in the Joint Video Exploration Team using projection-based coding. However, the standardization activities for quality assessment of 360° images are limited. In this study, we evaluate the coding performance of various projection formats, including recently-proposed formats adapting to the input of JPEG and JPEG 2000 content. We present an overview of the nine state-of-the-art formats considered in the evaluation. We also propose an evaluation framework for reducing the bias toward the native equi-rectangular (ERP) format. We consider the downsampled ERP image as the ground truth image. Firstly, format conversions are applied to the ERP image. Secondly, each converted image is subjected to the JPEG and JPEG 2000 image coding standards, then decoded and converted back to the downsampled ERP to find the coding gain of each format. The quality metrics designed for 360° content and conventional 2D metrics have been used for both end-to-end distortion measurement and codec level, in two subsampling modes, i.e., YUV (4:2:0 and 4:4:4). Our evaluation results prove that the hybrid equi-angular format and equatorial cylindrical format achieve better coding performance among the compared formats. Our work presents evidence to find the coding gain of these formats over ERP, which is useful for identifying the best image format for a future standard.
16

RAITTINEN, HARRI, and KIMMO KASKI. "CRITICAL REVIEW OF FRACTAL IMAGE COMPRESSION." International Journal of Modern Physics C 06, no. 01 (February 1995): 47–66. http://dx.doi.org/10.1142/s0129183195000058.

Abstract:
In this paper, fractal compression methods are reviewed. Three new methods are developed and their results are compared with the results obtained using four previously published fractal compression methods. Furthermore, we have compared the results of these methods with the standard JPEG method. For comparison, we have used an extensive set of image quality measures. According to these tests, fractal methods do not yield significantly better compression results when compared with conventional methods. This is especially the case when high coding accuracy (small compression ratio) is desired.
17

Götting, Detlef, Achim Ibenthal, and Rolf-Rainer Grigat. "Fractal Image Coding and Magnification Using Invariant Features." Fractals 05, supp01 (April 1997): 65–74. http://dx.doi.org/10.1142/s0218348x97000644.

Abstract:
Fractal image coding has significant potential for the compression of still and moving images and also for scaling up images. The objective of our investigations was twofold. First, compression ratios of factor 60 and more for still images have been achieved, yielding a better quality of the decoded picture material than standard methods like JPEG. Second, image enlargement up to factors of 16 per dimension has been realized by means of fractal zoom, leading to natural and sharp representation of the scaled image content. Quality improvements were achieved due to the introduction of an extended luminance transform. In order to reduce the computational complexity of the encoding process, a new class of simple and suited invariant features is proposed, facilitating the search in the multidimensional space spanned by image domains and affine transforms.
18

Descampe, Antonin, Thomas Richter, Touradj Ebrahimi, Siegfried Foessel, Joachim Keinert, Tim Bruylants, Pascal Pellegrin, Charles Buysschaert, and Gael Rouvroy. "JPEG XS—A New Standard for Visually Lossless Low-Latency Lightweight Image Coding." Proceedings of the IEEE 109, no. 9 (September 2021): 1559–77. http://dx.doi.org/10.1109/jproc.2021.3080916.

19

El-said, Shaimaa A., Khalid F. A. Hussein, and Mohamed M. Fouad. "Image Compression Technique for Low Bit Rate Transmission." International Journal of Computer Vision and Image Processing 1, no. 4 (October 2011): 1–18. http://dx.doi.org/10.4018/ijcvip.2011100101.

Abstract:
A novel Adaptive Lossy Image Compression (ALIC) technique is proposed to achieve high compression ratio by reducing the number of source symbols through the application of an efficient technique. The proposed algorithm is based on processing the discrete cosine transform (DCT) of the image to extract the highest energy coefficients in addition to applying one of the novel quantization schemes proposed in the present work. This method is straightforward and simple. It does not need complicated calculation; therefore the hardware implementation is easy to attach. Experimental comparisons are carried out to compare the performance of the proposed technique with those of other standard techniques such as the JPEG. The experimental results show that the proposed compression technique achieves high compression ratio with higher peak signal to noise ratio than that of JPEG at low bit rate without the visual degradation that appears in case of JPEG.
20

Cho, Sang-Gyu, Zoran Bojkovic, Dragorad Milovanovic, Jungsik Lee, and Jae-Jeong Hwang. "Image quality evaluation: JPEG 2000 versus intra-only H.264/AVC High Profile." Facta universitatis - series: Electronics and Energetics 20, no. 1 (2007): 71–83. http://dx.doi.org/10.2298/fuee0701071c.

Abstract:
The objective of this work is to provide image quality evaluation for the intra-only H.264/AVC High Profile (HP) standard versus the JPEG2000 standard. Here, we review the structure of the two standards and the coding algorithms in the context of subjective and objective assessments. Simulations were performed on a test set of monochrome and color images. As a result of the simulations, we observed that the subjective and objective image quality of H.264/AVC is superior to JPEG2000, except for the inherent blocking artifact, since it uses a block transform rather than a whole-image transform. Thus, we propose a unified measurement system to properly define image quality.
21

OKUDA, M., and N. ADAMI. "JPEG Compatible Raw Image Coding Based on Polynomial Tone Mapping Model." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E91-A, no. 10 (October 1, 2008): 2928–33. http://dx.doi.org/10.1093/ietfec/e91-a.10.2928.

22

Coelho, Diego F. G., Renato J. Cintra, Fábio M. Bayer, Sunera Kulasekera, Arjuna Madanayake, Paulo Martinez, Thiago L. T. Silveira, Raíza S. Oliveira, and Vassil S. Dimitrov. "Low-Complexity Loeffler DCT Approximations for Image and Video Coding." Journal of Low Power Electronics and Applications 8, no. 4 (November 22, 2018): 46. http://dx.doi.org/10.3390/jlpea8040046.

Abstract:
This paper introduced a matrix parametrization method based on the Loeffler discrete cosine transform (DCT) algorithm. As a result, a new class of 8-point DCT approximations was proposed, capable of unifying the mathematical formalism of several 8-point DCT approximations archived in the literature. Pareto-efficient DCT approximations are obtained through multicriteria optimization, where computational complexity, proximity, and coding performance are considered. Efficient approximations and their scaled 16- and 32-point versions are embedded into image and video encoders, including a JPEG-like codec and H.264/AVC and H.265/HEVC standards. Results are compared to the unmodified standard codecs. Efficient approximations are mapped and implemented on a Xilinx VLX240T FPGA and evaluated for area, speed, and power consumption.
23

Rabbani, Majid, and Rajan Joshi. "An overview of the JPEG 2000 still image compression standard." Signal Processing: Image Communication 17, no. 1 (January 2002): 3–48. http://dx.doi.org/10.1016/s0923-5965(01)00024-8.

24

Гаврилов, Дмитро Сергійович, Сергій Степанович Бучік, Юрій Михайлович Бабенко, Сергій Сергійович Шульгін, and Олександр Васильович Слободянюк. "Метод обробки вiдеодaних з можливістю їх захисту після квaнтувaння" [A method for processing video data with the possibility of protecting them after quantization]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 2 (June 2, 2021): 64–77. http://dx.doi.org/10.32620/reks.2021.2.06.

Abstract:
The subject of research in the article is the video processing processes based on the JPEG platform for data transmission in the information and telecommunication network. The aim is to build a method for processing a video image with the possibility of protecting it at the quantization stage with subsequent arithmetic coding. That will allow, while preserving the structural and statistical regularity, to ensure the necessary level of accessibility, reliability, and confidentiality when transmitting video data. Task: research of known methods of selective video image processing with the subsequent formalization of the video image processing procedure at the quantization stage and statistical coding of significant blocks based on the JPEG platform. The methods used are an algorithm based on the JPEG platform, methods for selecting significant informative blocks, and arithmetic coding. The following results were obtained. A method for processing a video image with the possibility of its protection at the stage of quantization with subsequent arithmetic coding has been developed. This method will allow, while preserving the structural and statistical regularity, to fulfill the set requirements for an accessible, reliable, and confidential transmission of video data. Ensuring the required level of availability is associated with a 30% reduction in the video image volume compared to the original volume. Simultaneously, the provision of the required level of confidence is confirmed by an estimate of the peak signal-to-noise ratio for an authorized user, which is dB. Ensuring the required level of confidentiality is confirmed by an estimate of the peak signal-to-noise ratio in case of unauthorized access, which is equal to dB. Conclusions. The scientific novelty of the results obtained is as follows: for the first time, two methods of processing video images at the quantization stage have been proposed. The proposed technologies fulfill the assigned tasks to ensure the required level of confidentiality at a given level of confidence. Simultaneously, the method of using encryption tables has a higher level of cryptographic stability than the method of using the key matrix. This is due to a more complex mathematical apparatus, which, in turn, increases the time for processing the data. To fulfill the requirement of data availability, it is proposed to use arithmetic coding for informative blocks, which should be more efficient compared with the methods of code tables. So, the method of using encryption tables has greater cryptographic stability, and the method of using the key matrix has higher performance. Simultaneously, the use of arithmetic coding will satisfy the need for accessibility by reducing the initial volume.
25

Puchala, Dariusz, Kamil Stokfiszewski, and Mykhaylo Yatsymirskyy. "Image Statistics Preserving Encrypt-then-Compress Scheme Dedicated for JPEG Compression Standard." Entropy 23, no. 4 (March 31, 2021): 421. http://dx.doi.org/10.3390/e23040421.

Abstract:
In this paper, the authors analyze in more details an image encryption scheme, proposed by the authors in their earlier work, which preserves input image statistics and can be used in connection with the JPEG compression standard. The image encryption process takes advantage of fast linear transforms parametrized with private keys and is carried out prior to the compression stage in a way that does not alter those statistical characteristics of the input image that are crucial from the point of view of the subsequent compression. This feature makes the encryption process transparent to the compression stage and enables the JPEG algorithm to maintain its full compression capabilities even though it operates on the encrypted image data. The main advantage of the considered approach is the fact that the JPEG algorithm can be used without any modifications as a part of the encrypt-then-compress image processing framework. The paper includes a detailed mathematical model of the examined scheme allowing for theoretical analysis of the impact of the image encryption step on the effectiveness of the compression process. The combinatorial and statistical analysis of the encryption process is also included and it allows to evaluate its cryptographic strength. In addition, the paper considers several practical use-case scenarios with different characteristics of the compression and encryption stages. The final part of the paper contains the additional results of the experimental studies regarding general effectiveness of the presented scheme. The results show that for a wide range of compression ratios the considered scheme performs comparably to the JPEG algorithm alone, that is, without the encryption stage, in terms of the quality measures of reconstructed images. Moreover, the results of statistical analysis as well as those obtained with generally approved quality measures of image cryptographic systems, prove high strength and efficiency of the scheme’s encryption stage.
26

Wang, Zhe, Trung-Hieu Tran, Ponnanna Kelettira Muthappa, and Sven Simon. "A JND-Based Pixel-Domain Algorithm and Hardware Architecture for Perceptual Image Coding." Journal of Imaging 5, no. 5 (April 26, 2019): 50. http://dx.doi.org/10.3390/jimaging5050050.

Abstract:
This paper presents a hardware efficient pixel-domain just-noticeable difference (JND) model and its hardware architecture implemented on an FPGA. This JND model architecture is further proposed to be part of a low complexity pixel-domain perceptual image coding architecture, which is based on downsampling and predictive coding. The downsampling is performed adaptively on the input image based on regions-of-interest (ROIs) identified by measuring the downsampling distortions against the visibility thresholds given by the JND model. The coding error at any pixel location can be guaranteed to be within the corresponding JND threshold in order to obtain excellent visual quality. Experimental results show the improved accuracy of the proposed JND model in estimating visual redundancies compared with classic JND models published earlier. Compression experiments demonstrate improved rate-distortion performance and visual quality over JPEG-LS as well as reduced compressed bit rates compared with other standard codecs such as JPEG 2000 at the same peak signal-to-perceptible-noise ratio (PSPNR). FPGA synthesis results targeting a mid-range device show very moderate hardware resource requirements and over 100 Megapixel/s throughput of both the JND model and the perceptual encoder.
27

Wang, Yuer, Zhong Jie Zhu, and Wei Dong Chen. "HVS-Based Low Bit-Rate Image Compression." Applied Mechanics and Materials 511-512 (February 2014): 441–46. http://dx.doi.org/10.4028/www.scientific.net/amm.511-512.441.

Abstract:
Image coding and compression is one of the key techniques in the area of image signal processing. However, most of the existing coding methods, such as JPEG, employ a similar hybrid architecture to compress images and videos. After many years of development, it is difficult to further improve the coding performance. In addition, most of the existing image compression algorithms are designed to minimize the difference between the original and decompressed images based on pixel-wise distortion metrics, such as MSE and PSNR, which do not consider HVS features and are not able to guarantee good perceptual quality of reconstructed images, especially in low bit-rate scenarios. In this paper, we propose a novel scheme for low bit-rate image compression. Firstly, the original image is quantized to a binary image based on heat transfer theory. Secondly, the bit sequence of the binary image is divided into several sub-sets and each one is designated a priority based on the rate-distortion principle. Thirdly, the sub-sets with high priorities are selected based on the given bit-rate. Finally, context-based binary arithmetic coding is employed to encode the selected sub-sets to produce the final compressed stream. At the decoder, the image is decoded and reconstructed based on anisotropic diffusion. Experiments are conducted and provide convincing results.
28

Li, Ren Chong, Yi Long You, and Feng Xiang You. "Research of Image Processing Based on Lifting Wavelet Transform." Applied Mechanics and Materials 263-266 (December 2012): 2502–9. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2502.

Abstract:
This paper studies image processing problems based on the lifting wavelet transform. A complete digital image is coded and decoded using the W97-2 wavelet basis, combined with embedded zerotree wavelet coding and binary arithmetic coding, and lossless compression is completed on the international standard test images. Experimental results show that graphics and image processing reach a higher level when wavelet analysis is combined with image processing.
29

Jiao, Shuming, Zhi Jin, Chenliang Chang, Changyuan Zhou, Wenbin Zou, and Xia Li. "Compression of Phase-Only Holograms with JPEG Standard and Deep Learning." Applied Sciences 8, no. 8 (July 30, 2018): 1258. http://dx.doi.org/10.3390/app8081258.

Abstract:
It is a critical issue to reduce the enormous amount of data in the processing, storage and transmission of a hologram in digital format. In photograph compression, the JPEG standard is commonly supported by almost every system and device. It will be favorable if JPEG standard is applicable to hologram compression, with advantages of universal compatibility. However, the reconstructed image from a JPEG compressed hologram suffers from severe quality degradation since some high frequency features in the hologram will be lost during the compression process. In this work, we employ a deep convolutional neural network to reduce the artifacts in a JPEG compressed hologram. Simulation and experimental results reveal that our proposed “JPEG + deep learning” hologram compression scheme can achieve satisfactory reconstruction results for a computer-generated phase-only hologram after compression.
30

LIAN, SHIGUO. "IMAGE AUTHENTICATION BASED ON FRACTAL FEATURES." Fractals 16, no. 04 (December 2008): 287–97. http://dx.doi.org/10.1142/s0218348x08004034.

Abstract:
In this paper, the fractal features of natural images are used to construct an image authentication scheme, which can detect whether an image is maliciously tampered (cutting, wiping, modification, etc.) or not and can even locate the tampered regions. For the original image, the fractal transformation is applied to each of the image blocks, and some of the transformation parameters are quantized and used as the authentication code. The authentication code can be stored or transmitted secretly. To authenticate an image, the new authentication code is computed from the image with the similar method, and then compared with the stored or received code. A metric is proposed to decide whether an image block is tampered or not. Comparative experiments show that the authentication scheme can detect malicious tampering, is robust against such common signal processing as JPEG compression, fractal coding, adding noise or filtering, and thus, obtains competent performances compared with existing image authentication schemes.
31

Sultan, Bushra A., and Loay E. George. "Color image compression based on spatial and magnitude signal decomposition." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 5 (October 1, 2021): 4069. http://dx.doi.org/10.11591/ijece.v11i5.pp4069-4081.

Abstract:
In this paper, a simple color image compression system has been proposed using image signal decomposition, where the RGB image color bands are converted to the less correlated YUV color model and the pixel value (magnitude) in each band is decomposed into two values: most and least significant. According to the importance of the most significant value (MSV), which is influenced by any simple modification, an adaptive lossless image compression system is proposed using bit plane (BP) slicing, delta pulse code modulation (Delta PCM), and adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. On the other hand, a lossy compression system is introduced to handle the least significant value (LSV); it is based on an adaptive, error-bounded coding system and uses the DCT compression scheme. The performance of the developed compression system was analyzed and compared with that attained from the universal standard JPEG, and the results of applying the proposed system indicated that its performance is comparable to or better than that of the JPEG standard.
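The magnitude decomposition step described above, splitting each pixel into a most significant value (coded losslessly) and a least significant value (coded lossily), can be sketched as follows. The 4/4 bit split is an illustrative assumption, not a parameter stated in the abstract:

```python
import numpy as np

def split_magnitude(channel, msb_bits=4):
    """Split an 8-bit channel into a most significant value (MSV) and a
    least significant value (LSV); the 4/4 split is an assumption here."""
    shift = 8 - msb_bits
    msv = channel >> shift                  # coded losslessly in the paper's scheme
    lsv = channel & ((1 << shift) - 1)      # coded lossily (DCT-based) in the paper
    return msv, lsv

def merge_magnitude(msv, lsv, msb_bits=4):
    """Recombine the two parts into the original 8-bit value."""
    return (msv << (8 - msb_bits)) | lsv

channel = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
msv, lsv = split_magnitude(channel)
assert np.array_equal(merge_magnitude(msv, lsv), channel)
```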
32

Journal, Baghdad Science. "An Embedded Data Using Slantlet Transform." Baghdad Science Journal 8, no. 3 (September 4, 2011): 840–48. http://dx.doi.org/10.21123/bsj.8.3.840-848.

Abstract:
Data hiding is the process of encoding extra information in an image by making small modifications to its pixels. To be practical, the hidden data must be perceptually invisible yet robust to common signal processing operations. This paper introduces a scheme for hiding a signature image that could be as much as 25% of the host image data and hence could be used both in digital watermarking as well as image/data hiding. The proposed algorithm uses an orthogonal discrete wavelet transform with two zero moments and improved time localization, called the discrete slantlet transform, for both the host and signature image. A scaling factor in the frequency domain controls the quality of the watermarked images. Experimental results of signature image recovery after applying JPEG coding to the watermarked image are included.
33

Singh, Kulwinder, Ming Ma, Dong Won Park, and Syungog An. "Image Indexing Based On Mpeg-7 Scalable Color Descriptor." Key Engineering Materials 277-279 (January 2005): 375–82. http://dx.doi.org/10.4028/www.scientific.net/kem.277-279.375.

Abstract:
The MPEG-7 standard defines a set of descriptors that extract low-level features such as color, texture and object shape from an image and generate metadata that represents the extracted information. In this paper we propose a new image retrieval technique for image indexing based on the MPEG-7 scalable color descriptor. We use some specifications of the scalable color descriptor (SCD) for the implementation of the color histograms. The MPEG-7 standard defines l1-norm-based matching in the SCD. But in our approach, for distance measurement, we achieve a better result by using cosine similarity coefficient for color histograms. This approach has significantly increased the accuracy of obtaining results for image retrieval. Experiments based on scalable color descriptors are illustrated. We also present the color spaces supported by the different image and video coding standards such as JPEG-2000, MPEG-1, 2, 4 and MPEG-7. In addition, this paper outlines the broad details of MPEG-7 Color Descriptors.
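The two matching rules compared above, the l1-norm distance defined by MPEG-7 and the cosine similarity coefficient used instead, can be written in a few lines; the 16-bin histograms below are toy data, not actual SCD output:

```python
import numpy as np

def l1_distance(h1, h2):
    """l1-norm matching, as defined for the SCD in MPEG-7 (lower means more similar)."""
    return np.abs(h1 - h2).sum()

def cosine_similarity(h1, h2):
    """Cosine similarity between two color histograms (higher means more similar)."""
    return float(h1 @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12)

h_query = np.random.rand(16)                # toy 16-bin histograms, not SCD output
h_database = np.random.rand(16)
print(l1_distance(h_query, h_database), cosine_similarity(h_query, h_database))
```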
34

OH, TICK HUI, and ROSLI BESAR. "JPEG2000 AND JPEG: A STATISTICAL APPROACH FOR LOSSILY COMPRESSED MEDICAL IMAGES QUALITY EVALUATION." International Journal of Wavelets, Multiresolution and Information Processing 02, no. 03 (September 2004): 249–67. http://dx.doi.org/10.1142/s0219691304000500.

Abstract:
Image compression will always reduce the image fidelity, especially when the image is compressed at lower bit rates, which cannot be tolerated, especially in the medical field. But compression is necessary due to the constraint of transmission bandwidth and the limited storage capacity. In this paper, the bad compression performance of the new JPEG2000 and the more conventional JPEG is studied. Besides using the common objective measures such as Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE) as the standard quality measurements, we focus more on the statistical and frequency measures as the image characteristics evaluation. Four types of medical modalities are used: X-ray, Magnetic Resonance Imaging (MRI), Ultrasound and Computed Tomography (CT). From these various measurements, we found the generally acceptable compression bit rate for each of the medical modalities.
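The two standard objective measures named above, RMSE and PSNR, are computed as follows for 8-bit images (a generic sketch, not code from the paper):

```python
import numpy as np

def rmse(ref, test):
    """Root Mean Square Error between two images."""
    return np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    e = rmse(ref, test)
    return float('inf') if e == 0 else 20 * np.log10(peak / e)

ref = np.random.randint(0, 256, (64, 64))
test = np.clip(ref + np.random.randint(-3, 4, ref.shape), 0, 255)
print(rmse(ref, test), psnr(ref, test))
```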
35

Radosavljević, Miloš, Branko Brkljač, Predrag Lugonja, Vladimir Crnojević, Željen Trpovski, Zixiang Xiong, and Dejan Vukobratović. "Lossy Compression of Multispectral Satellite Images with Application to Crop Thematic Mapping: A HEVC Comparative Study." Remote Sensing 12, no. 10 (May 16, 2020): 1590. http://dx.doi.org/10.3390/rs12101590.

Abstract:
Remote sensing applications have gained in popularity in recent years, which has resulted in vast amounts of data being produced on a daily basis. Managing and delivering large sets of data becomes extremely difficult and resource demanding for the data vendors, but even more for individual users and third party stakeholders. Hence, research in the field of efficient remote sensing data handling and manipulation has become a very active research topic (from both storage and communication perspectives). Driven by the rapid growth in the volume of optical satellite measurements, in this work we explore the lossy compression technique for multispectral satellite images. We give a comprehensive analysis of the High Efficiency Video Coding (HEVC) still-image intra coding part applied to the multispectral image data. Thereafter, we analyze the impact of the distortions introduced by the HEVC’s intra compression in the general case, as well as in the specific context of crop classification application. Results show that HEVC’s intra coding achieves better trade-off between compression gain and image quality, as compared to standard JPEG 2000 solution. On the other hand, this also reflects in the better performance of the designed pixel-based classifier in the analyzed crop classification task. We show that HEVC can obtain up to 150:1 compression ratio, when observing compression in the context of specific application, without significantly losing on classification performance compared to classifier trained and applied on raw data. In comparison, in order to maintain the same performance, JPEG 2000 allows compression ratio up to 70:1.
36

Tian, Hua, Ming Jun Li, and Huan Huan Liu. "Research and Exploration on Static Image Compression Technology Based on JPEG2000." Applied Mechanics and Materials 644-650 (September 2014): 4182–86. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4182.

Abstract:
This article introduces GPU-accelerated parallel computing technology into the standard core coding system of JPEG2000 static image compression and designs an accelerated image compression process using the CUDA acceleration principle. It also establishes an algorithm for layered and reconstruction coding of the image pixel array and realizes the coding of this algorithm using VC software. In order to verify the effectiveness and universal applicability of the algorithm and procedures, this paper compresses four images of different sizes and pixel counts in the static JPEG2000 form. Through a comparison of the compression times, we find that the GPU hardware image processing system has a higher speedup ratio. As pixel count and size increase, the speedup ratio gradually increases, which means that GPU acceleration has good adaptability.
37

NAGARAJ, NITHIN. "HUFFMAN CODING AS A NONLINEAR DYNAMICAL SYSTEM." International Journal of Bifurcation and Chaos 21, no. 06 (June 2011): 1727–36. http://dx.doi.org/10.1142/s0218127411029392.

Abstract:
In this paper, source coding or data compression is viewed as a measurement problem. Given a measurement device with fewer states than the observable of a stochastic source, how can one capture their essential information? We propose modeling stochastic sources as piecewise-linear discrete chaotic dynamical systems known as Generalized Luröth Series (GLS) which has its roots in Georg Cantor's work in 1869. These GLS are special maps with the property that their Lyapunov exponent is equal to the Shannon's entropy of the source (up to a constant of proportionality). By successively approximating the source with GLS having fewer states (with the nearest Lyapunov exponent), we derive a binary coding algorithm which turns out to be a rediscovery of Huffman coding, the popular lossless compression algorithm used in the JPEG international standard for still image compression.
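A compact sketch of the Huffman procedure referred to above, using the textbook greedy merge of the two least frequent subtrees (the paper's GLS-based derivation arrives at the same codes by successively approximating the source):

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code book {symbol: bitstring} by repeatedly merging
    the two least frequent subtrees."""
    freq = Counter(data)
    heap = [(w, i, {sym: ''}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                                   # degenerate one-symbol source
        return {sym: '0' for _, _, codes in heap for sym in codes}
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}     # lighter subtree gets prefix 0
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_code("abracadabra")
bits = ''.join(codes[ch] for ch in "abracadabra")        # compressed bitstring
print(codes, len(bits), "bits")
```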
38

Mikhailiuk, Aliaksei, Nanyang Ye, and Rafał K. Mantiuk. "The effect of display brightness and viewing distance: a dataset for visually lossless image compression." Electronic Imaging 2021, no. 11 (January 18, 2021): 152–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.11.hvei-152.

Abstract:
Visibility of image artifacts depends on the viewing conditions, such as display brightness and distance to the display. However, most image and video quality metrics operate under the assumption of a single standard viewing condition, without considering luminance or distance to the display. To address this limitation, we isolate brightness and distance as the components impacting the visibility of artifacts and collect a new dataset for visually lossless image compression. The dataset includes images encoded with JPEG and WebP at the quality level that makes compression artifacts imperceptible to an average observer. The visibility thresholds are collected under two luminance conditions: 10 cd/m2, simulating a dimmed mobile phone, and 220 cd/m2, which is a typical peak luminance of modern computer displays; and two distance conditions: 30 and 60 pixels per visual degree. The dataset was used to evaluate existing image quality and visibility metrics in their ability to consider display brightness and the distance to the viewer. We include two deep neural network architectures, proposed to control image compression for visually lossless coding, in our experiments.
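The two distance conditions are expressed in pixels per visual degree, a quantity that follows from display geometry with a short calculation; the display width, resolution and viewing distance below are hypothetical:

```python
import math

def pixels_per_degree(width_px, width_m, viewing_distance_m):
    """Pixels per degree of visual angle for a display of a given width,
    resolution and viewing distance (hypothetical values in the example)."""
    pixel_pitch = width_m / width_px                            # metres per pixel
    deg_per_pixel = math.degrees(2 * math.atan(pixel_pitch / (2 * viewing_distance_m)))
    return 1.0 / deg_per_pixel

# A 0.6 m wide, 3840-pixel display viewed from 0.9 m gives roughly 100 ppd
print(pixels_per_degree(3840, 0.6, 0.9))
```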
39

Kang, Li-Wei, and Jin-Jang Leou. "An error resilient coding scheme for JPEG image transmission based on data embedding and side-match vector quantization." Journal of Visual Communication and Image Representation 17, no. 4 (August 2006): 876–91. http://dx.doi.org/10.1016/j.jvcir.2005.08.003.

40

Chiang, Chen-Hua, Chi-Lun Weng, and Hung-Wen Chiu. "Automatic classification of medical image modality and anatomical location using convolutional neural network." PLOS ONE 16, no. 6 (June 11, 2021): e0253205. http://dx.doi.org/10.1371/journal.pone.0253205.

Abstract:
Modern radiologic images comply with DICOM (digital imaging and communications in medicine) standard, which, upon conversion to other image format, would lose its image detail and information such as patient demographics or type of image modality that DICOM format carries. As there is a growing interest in using large amount of image data for research purpose and acquisition of large amount of medical image is now a standard practice in the clinical setting, efficient handling and storage of large amount of image data is important in both the clinical and research setting. In this study, four classes of images were created, namely, CT (computed tomography) of abdomen, CT of brain, MRI (magnetic resonance imaging) of brain and MRI of spine. After converting these images into JPEG (Joint Photographic Experts Group) format, our proposed CNN architecture could automatically classify these 4 groups of medical images by both their image modality and anatomic location. We achieved excellent overall classification accuracy in both validation and test sets (> 99.5%), specificity and F1 score (> 99%) in each category of this dataset which contained both diseased and normal images. Our study has shown that using CNN for medical image classification is a promising methodology and could work on non-DICOM images, which could potentially save image processing time and storage space.
41

Zhu, Dan Dan, Xiu Ping Zhang, You Liang Zhang, and Jun Bo Dai. "A New Image Watermarking Algorithm Using NSCT and Harris Detector in Green Manufacturing." Applied Mechanics and Materials 340 (July 2013): 277–82. http://dx.doi.org/10.4028/www.scientific.net/amm.340.277.

Abstract:
Based on multistage theory and the NSCT, a new feature-based image watermarking scheme is proposed in this paper. Firstly, the multistage Harris detector is utilized to extract steady feature points from the host image; then, the local feature regions (LFR) are ascertained adaptively according to feature scale theory and scaled to a standard size; finally, the digital watermark is embedded into the nonsubsampled contourlet low-frequency area, and pseudo-Zernike moment calculation is performed in the low-frequency regions. Experimental results show that the proposed scheme is not only invisible and robust against common signal processing such as median filtering, sharpening, noise adding and JPEG compression, but also robust against geometric attacks such as rotation, translation, scaling, row or column removal, shearing, local geometric distortion and combination attacks.
42

Setyono, Andik, Md Jahangir Alam, and C. Eswaran. "Exploration and Development of the JPEG Compression for Mobile Communications System." International Journal of Mobile Computing and Multimedia Communications 5, no. 1 (January 2013): 25–46. http://dx.doi.org/10.4018/jmcmc.2013010103.

Abstract:
The JPEG compression algorithm is widely used in communications technology for multimedia data transmission. This algorithm is also very efficient for mobile applications since it can achieve compression ratios more than 100:1, thus greatly facilitating the storage and transmission processes for images. Though lossless JPEG compression is an ideal solution, the compression ratio achieved with this technique is relatively very small. JPEG2000 provides higher compression ratio and quality compared to JPEG but the main problem with this compression technique is its complexity resulting in longer processing time thus making it unsuitable for mobile communications. In this study, the authors explore methods for enhancing the performance of JPEG compression standard for mobile applications. They show that by using a splitting technique along with the JPEG compression, one can transmit data files of size larger than the maximum capacity which is possible with the existing mobile network. To evaluate the performance of proposed method, the authors perform some simulations using the emulator on desktop computer and mobile phone. The parameters used for performance evaluation are the speed of the compression process, the compression ratio and the compressed image quality. The simulation results presented in this paper will be very useful for developing a practical mobile communication system for multimedia data using JPEG compression.
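The splitting technique described above amounts to chunking the compressed byte stream before transmission and reassembling it at the receiver. A minimal sketch, with a hypothetical 60,000-byte per-message limit standing in for the carrier's actual maximum:

```python
def split_for_transmission(jpeg_bytes, max_chunk=60_000):
    """Split a compressed byte stream into numbered chunks so that each message
    stays below a (hypothetical) per-message size limit of the carrier."""
    return [(seq, jpeg_bytes[off:off + max_chunk])
            for seq, off in enumerate(range(0, len(jpeg_bytes), max_chunk))]

def reassemble(chunks):
    """Receiver side: order the chunks by sequence number and concatenate."""
    return b''.join(payload for _, payload in sorted(chunks))

data = bytes(200_000)                       # stand-in for a JPEG-compressed image
assert reassemble(split_for_transmission(data)) == data
```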
43

Li, Guang Ming, and Zhen Qi He. "Design of Image Acquisition System Based on ARM&FPGA." Applied Mechanics and Materials 65 (June 2011): 363–66. http://dx.doi.org/10.4028/www.scientific.net/amm.65.363.

Abstract:
An integrated ARM and FPGA design idea is proposed in this paper. The cost of the integrated ARM and FPGA design is not large, but it can be powerful and can expand the scope of application of ARM and FPGA. The system can be divided into two parts: image acquisition and image processing (image compression and storage). The image acquisition part is implemented by the FPGA, and the image processing part by the ARM. In the image acquisition part, the system uses a high-speed SRAM switching mode, the “ping-pong operation”, to improve image acquisition speed. In addition, the paper describes the method of image compression and image storage: the JPEG2000 coding standard is used for image compression, and the SQLite embedded database is used for image storage. The operation of the whole system is stable and reliable through integrated debugging. The results show that the system can meet the requirements of an industrial site for high acquisition speed. Because of the use of the “ping-pong operation”, the acquisition speed is improved significantly and reaches a high-speed standard.
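A software analogue of the “ping-pong operation” mentioned above: two buffers alternate between being filled by the acquisition side and drained by the processing side, so neither side waits on a single shared buffer. This is an illustrative sketch, not the FPGA/SRAM implementation:

```python
import queue

class PingPongBuffer:
    """Two buffers alternate roles: while the producer fills one, the consumer
    drains the other (a software analogue of the SRAM ping-pong operation)."""
    def __init__(self, size):
        self.buffers = [bytearray(size), bytearray(size)]
        self.ready = queue.Queue(maxsize=1)     # index of the buffer ready to read
        self.write_idx = 0

    def produce(self, data):
        buf = self.buffers[self.write_idx]
        buf[:len(data)] = data                  # acquisition writes the active buffer
        self.ready.put(self.write_idx)          # hand it over for processing
        self.write_idx ^= 1                     # swap to the other buffer

    def consume(self):
        idx = self.ready.get()                  # blocks until a filled buffer exists
        return bytes(self.buffers[idx])

pp = PingPongBuffer(16)
pp.produce(b'frame-0')
print(pp.consume()[:7])                         # b'frame-0'
```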
44

Zhu, Ye, Xiaoqian Shen, Shikun Liu, Xiaoli Zhang, and Gang Yan. "Image Splicing Location Based on Illumination Maps and Cluster Region Proposal Network." Applied Sciences 11, no. 18 (September 11, 2021): 8437. http://dx.doi.org/10.3390/app11188437.

Abstract:
Splicing is the most common operation in image forgery, where the tampered background regions are imported from different images. Illumination maps are inherent attribute of images and provide significant clues when searching for splicing locations. This paper proposes an end-to-end dual-stream network for splicing location, where the illumination stream, which includes Grey-Edge (GE) and Inverse-Intensity Chromaticity (IIC), extract the inconsistent features, and the image stream extracts the global unnatural tampered features. The dual-stream feature in our network is fused through Multiple Feature Pyramid Network (MFPN), which contains richer context information. Finally, a Cluster Region Proposal Network (C-RPN) with spatial attention and an adaptive cluster anchor are proposed to generate potential tampered regions with greater retention of location information. Extensive experiments, which were evaluated on the NIST16 and CASIA standard datasets, show that our proposed algorithm is superior to some state-of-the-art algorithms, because it achieves accurate tampered locations at the pixel level, and has great robustness in post-processing operations, such as noise, blur and JPEG recompression.
45

KITAURA, Y., M. MUNEYASU, and K. NAKANISHI. "A New Progressive Image Quality Control Method for ROI Coding in JPEG2000 Standard." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E91-A, no. 4 (April 1, 2008): 998–1005. http://dx.doi.org/10.1093/ietfec/e91-a.4.998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Narayanan, M. Rajaram, and S. Gowri. "Intelligent Vision Based Technique Using ANN for Surface Finish Assessment of Machined Components." Key Engineering Materials 364-366 (December 2007): 1251–56. http://dx.doi.org/10.4028/www.scientific.net/kem.364-366.1251.

Full text
Abstract:
In this work, an FPGA hardware based image processing algorithm for preprocessing images and enhancing image quality has been developed. The captured images were processed using an FPGA chip to remove noise, and a neural network was then used to estimate the surface roughness of machined parts produced by the grinding process. To verify the effectiveness of this approach, the roughness values quantified using these image vision techniques were compared with values from a widely accepted standard mechanical stylus instrument. Digital images were quantified for surface roughness by extracting key image features using the Fourier transform and the standard deviation of the gray-level intensity values. A VLSI chip from the Xilinx Spartan-IIE FPGA family was used for the hardware based filter implementation. The coding was done in the popular VHDL language, with the algorithms developed to exploit the implicit parallel processing capability of the chip. Thus, an exhaustive analysis was carried out, with comparison studies wherever required, to confirm that the present approach of estimating surface finish based on computer vision processing of images is more accurate and can be implemented in real time on a chip.
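As a sketch of the two image features named in this abstract (a Fourier-domain descriptor and the standard deviation of the gray-level intensities), the snippet below extracts both from a grey-level image of a machined surface; the particular spectral descriptor chosen here is an illustrative assumption, not the paper's exact definition.

```python
# Sketch: extract a simple spectral descriptor and the grey-level standard
# deviation from a machined-surface image; the resulting feature vector would
# then be fed to a trained ANN regressor to predict surface roughness.
import numpy as np

def surface_features(gray):
    """gray: 2-D float array of grey-level intensities."""
    sigma = gray.std()  # roughness-related spread of intensities
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray - gray.mean())))
    # Average spectral magnitude away from the DC term as a texture descriptor.
    spectral_energy = spectrum.sum() / gray.size
    return np.array([sigma, spectral_energy])
```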
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Guoan, and Nanning Zheng. "An Optimization Algorithm for Biorthogonal Wavelet Filter Banks Design." International Journal of Wavelets, Multiresolution and Information Processing 06, no. 01 (January 2008): 51–63. http://dx.doi.org/10.1142/s0219691308002215.

Full text
Abstract:
A new approach for designing a Biorthogonal Wavelet Filter Bank (BWFB) for image compression is presented in this paper. The approach consists of two steps. First, a filter bank that is optimal in the theoretical sense is designed, based on Vaidyanathan's coding gain criterion for the SubBand Coding (SBC) system. Then, this filter bank is optimized against the Peak Signal-to-Noise Ratio (PSNR) criterion in the JPEG2000 image compression system, resulting in a BWFB that is optimal in a practical sense. With this approach, a series of BWFBs for a specific class of image compression applications, such as gray-level images, can be designed quickly. New 7/5 BWFBs obtained with the approach are presented for image compression applications. Experiments show that the 7/5 BWFBs not only deliver excellent compression performance but are also computationally simple and well suited to VLSI hardware implementation, performing on a par with the 9/7 filters of the JPEG2000 standard.
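The second design step optimizes the filter bank against PSNR measured on coded images. For reference, a minimal PSNR helper is sketched below, assuming 8-bit images; this is the standard definition, not code from the paper.

```python
# Sketch: Peak Signal-to-Noise Ratio between an original and a reconstructed
# image, assuming an 8-bit peak value of 255.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```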
APA, Harvard, Vancouver, ISO, and other styles
48

Anaraki, Marjan Sedighi, Fangyan Dong, Hajime Nobuhara, and Kaoru Hirota. "Dyadic Curvelet Transform (DClet) for Image Noise Reduction." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 6 (July 20, 2007): 641–47. http://dx.doi.org/10.20965/jaciii.2007.p0641.

Full text
Abstract:
The Dyadic Curvelet transform (DClet) is proposed as a tool for image processing and computer vision. It is an extension of the curvelet transform that addresses the conventional curvelet's problem of decomposition into components at different scales. It offers simplicity, dyadic scales, and freedom from redundancy for the analysis and synthesis of objects with discontinuities along curves, i.e., edges, via directional basis functions. The performance of the proposed method is evaluated by removing Gaussian, speckle, and random noise from several noisy standard images. An average Peak Signal-to-Noise Ratio (PSNR) of 26.71 dB, compared with 25.87 dB for the wavelet transform, is evidence that the DClet outperforms the wavelet transform at noise removal. The proposed method is robust, which makes it suitable for biomedical applications. It is a candidate for gray-scale and color image enhancement and is applicable to compression and efficient coding, in which critical sampling may be relevant.
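The wavelet baseline against which the DClet is compared is typically a thresholding of DWT detail coefficients. A minimal sketch of such a baseline is given below, assuming the PyWavelets package; the wavelet, decomposition level and threshold rule are illustrative choices, not those of the paper.

```python
# Sketch of a wavelet-denoising baseline: soft-threshold the detail subbands of
# a 2-D DWT (universal threshold estimated from the finest diagonal band) and
# reconstruct the image.
import numpy as np
import pywt

def wavelet_denoise(noisy, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # noise estimate
    thr = sigma * np.sqrt(2 * np.log(noisy.size))            # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in band) for band in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```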
APA, Harvard, Vancouver, ISO, and other styles
49

Khalaf, Walaa, Dhafer Zaghar, and Noor Hashim. "Enhancement of Curve-Fitting Image Compression Using Hyperbolic Function." Symmetry 11, no. 2 (February 23, 2019): 291. http://dx.doi.org/10.3390/sym11020291.

Full text
Abstract:
Image compression is one of the most interesting fields of image processing and is used to reduce image size. 2D curve fitting is a method that converts the image data (pixel values) into a set of mathematical equations that represent the image. These equations have a fixed form with a few coefficients estimated from the image, which has been divided into several blocks. Since the number of coefficients is lower than the number of pixels in a block, the method can be used as a tool for image compression. In this paper, a new curve-fitting model with only three coefficients, derived from the symmetric hyperbolic tangent function, is proposed. The main disadvantages of previous approaches were the additional errors and degradation of edges in the reconstructed image, as well as the blocking effect. To overcome these deficiencies, it is proposed that this symmetric hyperbolic tangent (tanh) function be used, instead of the classical 1st- and 2nd-order curve-fitting functions, which are asymmetric, to reformulate the blocks of the image. Owing to the symmetry of the hyperbolic tangent function, this reduces the reconstruction error and improves the fine details and texture of the reconstructed image. The results of this work have been tested and compared against 1st-order curve fitting and the standard JPEG image compression method. The main advantages of the proposed approach are strengthening the edges of the image, removing the blocking effect, improving the Structural SIMilarity (SSIM) index, and increasing the Peak Signal-to-Noise Ratio (PSNR) by up to 20 dB. Simulation results show that the proposed method significantly improves the objective and subjective quality of the reconstructed image.
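To make the block-wise fitting concrete, the sketch below fits a three-coefficient hyperbolic-tangent surface to one image block with non-linear least squares; the exact parameterisation used in the paper is not reproduced here, and a*tanh(b*x + c*y) over centred block coordinates is purely an illustrative choice.

```python
# Sketch: fit a three-coefficient tanh surface to a single image block; only
# the three fitted numbers would be stored per block in a curve-fitting coder.
import numpy as np
from scipy.optimize import curve_fit

def tanh_surface(coords, a, b, c):
    x, y = coords
    return a * np.tanh(b * x + c * y)

def fit_block(block):
    """block: 2-D array of pixel values; returns the three fitted coefficients."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    xy = np.vstack([x.ravel() - w / 2, y.ravel() - h / 2])   # centre the block
    popt, _ = curve_fit(tanh_surface, xy, block.ravel().astype(float),
                        p0=[block.mean(), 0.1, 0.1], maxfev=2000)
    return popt
```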
APA, Harvard, Vancouver, ISO, and other styles
50

Jha, Mithilesh Kumar, Brejesh Lall, and Sumantra Dutta Roy. "Statistically Matched Wavelet Based Texture Synthesis in a Compressive Sensing Framework." ISRN Signal Processing 2014 (February 17, 2014): 1–18. http://dx.doi.org/10.1155/2014/838315.

Full text
Abstract:
This paper proposes a statistically matched wavelet based coding scheme for the efficient representation of texture data in a compressive sensing (CS) framework. A statistically matched wavelet representation concentrates most of the captured energy in the approximation subspace, while very little information remains in the detail subspace. Rather than encoding the full-resolution statistically matched wavelet subband coefficients, only the approximation subband coefficients (LL) are encoded using a standard image compression scheme such as JPEG2000. The detail subband coefficients, that is, HL, LH, and HH, are jointly encoded in a compressive sensing framework. Compressive sensing has shown that a sampling rate lower than the Nyquist rate can be achieved with acceptable reconstruction quality. The experimental results demonstrate that, at a similar compression ratio, the proposed scheme provides better PSNR and MOS than conventional DWT-based image compression schemes in a CS framework and than other wavelet based texture synthesis schemes such as HMT-3S.
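A minimal sketch of this hybrid representation is given below: the approximation subband is kept for a standard coder, while the stacked detail subbands are reduced to random compressive-sensing measurements. It assumes PyWavelets, uses a generic wavelet in place of the statistically matched one designed in the paper, and the measurement ratio is an illustrative assumption.

```python
# Sketch: keep the LL subband for a JPEG2000-style coder and take random CS
# measurements y = Phi * x of the concatenated detail subbands (HL, LH, HH).
import numpy as np
import pywt

def encode(texture, wavelet="db2", measurement_ratio=0.25, seed=0):
    ll, (hl, lh, hh) = pywt.dwt2(texture, wavelet)
    details = np.concatenate([hl.ravel(), lh.ravel(), hh.ravel()])
    m = int(measurement_ratio * details.size)
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, details.size)) / np.sqrt(m)  # CS measurement matrix
    y = phi @ details                                           # compressed measurements
    # ll would go to a standard image coder; (y, seed) to a CS reconstruction solver.
    return ll, y
```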
APA, Harvard, Vancouver, ISO, and other styles