Journal articles on the topic "JPEG (Image coding standard)"


Below are the top 50 journal articles for research on the topic "JPEG (Image coding standard)".


1

Pinheiro, Antonio. "JPEG column: 82nd JPEG meeting in Lisbon, Portugal". ACM SIGMultimedia Records 11, no. 1 (March 2019): 1. http://dx.doi.org/10.1145/3458462.3458468.

Abstract
JPEG has been the most common representation format for digital images for more than 25 years. Other image representation formats have been standardised by the JPEG committee, such as JPEG 2000 or, more recently, JPEG XS. Furthermore, JPEG has been extended with new functionalities such as HDR and alpha-plane coding through the JPEG XT standard, and more recently with a reference software. Other solutions have also been proposed by different players, with limited success. The JPEG committee decided it was time to create a new work item, named JPEG XL, which aims to develop an image coding standard with increased quality and flexibility combined with better compression efficiency. The evaluation of the responses to the call for proposals has already confirmed the industry's interest, and the development of core experiments has now begun. Several functionalities will be considered, such as support for lossless transcoding of images represented with the JPEG standard.
2

Ishikawa, Takaaki. "Lightweight Image Coding Technology in JPEG Standard". Journal of The Institute of Image Information and Television Engineers 74, no. 1 (2020): 87–92. http://dx.doi.org/10.3169/itej.74.87.

3

Dufaux, Frederic, Gary J. Sullivan, and Touradj Ebrahimi. "The JPEG XR image coding standard [Standards in a Nutshell]". IEEE Signal Processing Magazine 26, no. 6 (November 2009): 195–204. http://dx.doi.org/10.1109/msp.2009.934187.

4

Tanaka, Midori, Tomoyuki Takanashi, and Takahiko Horiuchi. "Glossiness-aware Image Coding in JPEG Framework". Journal of Imaging Science and Technology 64, no. 5 (September 1, 2020): 50409–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2020.64.5.050409.

Abstract
In images, the representation of glossiness, translucency, and roughness of material objects (Shitsukan) is essential for realistic image reproduction. To date, image coding has been developed considering various indices of the quality of the encoded image, for example, the peak signal-to-noise ratio. Consequently, image coding methods that preserve subjective impressions of qualities such as Shitsukan have not been studied. In this study, the authors focus on the property of glossiness and propose a method of glossiness-aware image coding. Their purpose is to develop an encoding algorithm that produces images that can be decoded by standard JPEG decoders, which are commonly used worldwide. The proposed method consists of three procedures: block classification, glossiness enhancement, and non-glossiness information reduction. In block classification, the types of glossiness in a target image are classified using block units. In glossiness enhancement, the glossiness in each type of block is emphasized to reduce the amount of degradation of glossiness during JPEG encoding. The third procedure, non-glossiness information reduction, further compresses the information while maintaining the glossiness by reducing the information in each block that does not represent the glossiness in the image. To test the effectiveness of the proposed method, the authors conducted a subjective evaluation experiment using paired comparison of images coded by the proposed method and JPEG images with the same data size. The glossiness was found to be better preserved in images coded by the proposed method than in the JPEG images.
5

Hussain, Ikram, Oh-Jin Kwon, and Seungcheol Choi. "Evaluating the Coding Performance of 360° Image Projection Formats Using Objective Quality Metrics". Symmetry 13, no. 1 (January 5, 2021): 80. http://dx.doi.org/10.3390/sym13010080.

Abstract
Recently, 360° content has emerged as a new method for offering real-life interaction. Ultra-high resolution 360° content is mapped to the two-dimensional plane to adjust to the input of existing generic coding standards for transmission. Many formats have been proposed, and tremendous work is being done to investigate 360° videos in the Joint Video Exploration Team using projection-based coding. However, the standardization activities for quality assessment of 360° images are limited. In this study, we evaluate the coding performance of various projection formats, including recently-proposed formats adapting to the input of JPEG and JPEG 2000 content. We present an overview of the nine state-of-the-art formats considered in the evaluation. We also propose an evaluation framework for reducing the bias toward the native equi-rectangular (ERP) format. We consider the downsampled ERP image as the ground truth image. Firstly, format conversions are applied to the ERP image. Secondly, each converted image is subjected to the JPEG and JPEG 2000 image coding standards, then decoded and converted back to the downsampled ERP to find the coding gain of each format. The quality metrics designed for 360° content and conventional 2D metrics have been used for both end-to-end distortion measurement and codec level, in two subsampling modes, i.e., YUV (4:2:0 and 4:4:4). Our evaluation results prove that the hybrid equi-angular format and equatorial cylindrical format achieve better coding performance among the compared formats. Our work presents evidence to find the coding gain of these formats over ERP, which is useful for identifying the best image format for a future standard.
6

Pinheiro, Antonio. "JPEG column: 89th JPEG meeting". ACM SIGMultimedia Records 12, no. 4 (December 2020): 1. http://dx.doi.org/10.1145/3548580.3548583.

Abstract
JPEG initiates standardisation of image compression based on AI. The 89th JPEG meeting was held online from 5 to 9 October 2020. During this meeting, multiple JPEG standardisation activities and explorations were discussed and progressed. Notably, the call for evidence on learning-based image coding was successfully completed and evidence was found that this technology promises several new functionalities while offering at the same time superior compression efficiency, beyond the state of the art. A new work item, JPEG AI, that will use learning-based image coding as core technology has been proposed, enlarging the already wide families of JPEG standards.
7

Schiopu, Ionut, and Adrian Munteanu. "Deep Learning Post-Filtering Using Multi-Head Attention and Multiresolution Feature Fusion for Image and Intra-Video Quality Enhancement". Sensors 22, no. 4 (February 10, 2022): 1353. http://dx.doi.org/10.3390/s22041353.

Abstract
The paper proposes a novel post-filtering method based on convolutional neural networks (CNNs) for quality enhancement of RGB/grayscale images and video sequences. The lossy images are encoded using common image codecs, such as JPEG and JPEG2000. The video sequences are encoded using previous and ongoing video coding standards, high-efficiency video coding (HEVC) and versatile video coding (VVC), respectively. A novel deep neural network architecture is proposed to estimate fine refinement details for full-, half-, and quarter-patch resolutions. The proposed architecture is built using a set of efficient processing blocks designed based on the following concepts: (i) the multi-head attention mechanism for refining the feature maps, (ii) the weight sharing concept for reducing the network complexity, and (iii) novel block designs of layer structures for multiresolution feature fusion. The proposed method provides substantial performance improvements compared with both common image codecs and video coding standards. Experimental results on high-resolution images and standard video sequences show that the proposed post-filtering method provides average BD-rate savings of 31.44% over JPEG and 54.61% over HEVC (x265) for RGB images, Y-BD-rate savings of 26.21% over JPEG and 15.28% over VVC (VTM) for grayscale images, and 15.47% over HEVC and 14.66% over VVC for video sequences.
8

Man, Hong, Alen Docef, and Faouzi Kossentini. "Performance Analysis of the JPEG 2000 Image Coding Standard". Multimedia Tools and Applications 26, no. 1 (May 2005): 27–57. http://dx.doi.org/10.1007/s11042-005-6848-5.

9

Song, Hong Mei, Hai Wei Mu, and Dong Yan Zhao. "Study on Nearly Lossless Compression with Progressive Decoding". Advanced Materials Research 926-930 (May 2014): 1751–54. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.1751.

Abstract
A nearly lossless compression algorithm with progressive transmission and decoding is proposed. The image data are grouped by frequency based on the DCT; the JPEG-LS core algorithm (texture prediction and Golomb coding) is then applied to each group of data in order to achieve progressive image transmission and decoding. Experiments on the standard test images, compared with JPEG-LS, show that the compression ratio of this algorithm is very similar to that of JPEG-LS; the algorithm loses a little image information but gains the ability of progressive transmission and decoding.
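
As background for the JPEG-LS components mentioned above, the following minimal Python sketch shows median-edge-detector (MED) prediction and Golomb–Rice coding of mapped residuals; it is an illustration of how those two pieces fit together under assumed parameters, not the authors' grouped-DCT scheme.

```python
# Minimal sketch of two JPEG-LS building blocks: MED prediction and Golomb-Rice coding.
# Illustrative only; this is not the progressive DCT-grouping scheme of the paper.

def med_predict(a, b, c):
    """Median edge detector used by JPEG-LS.
    a = left neighbour, b = above neighbour, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def golomb_rice_encode(value, k):
    """Encode a non-negative integer with Golomb-Rice parameter 2**k."""
    q, r = value >> k, value & ((1 << k) - 1)
    unary = "1" * q + "0"
    return unary + (format(r, f"0{k}b") if k > 0 else "")

def encode_image(img, k=2):
    """img: list of rows of non-negative ints. Out-of-image neighbours are treated as 0."""
    bits = []
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            a = row[x - 1] if x > 0 else 0
            b = img[y - 1][x] if y > 0 else 0
            c = img[y - 1][x - 1] if x > 0 and y > 0 else 0
            residual = v - med_predict(a, b, c)
            mapped = 2 * residual if residual >= 0 else -2 * residual - 1  # to non-negative
            bits.append(golomb_rice_encode(mapped, k))
    return "".join(bits)

print(encode_image([[52, 55, 61], [59, 79, 61]]))
```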
10

Sowmithri, K. "An Iterative Lifting Scheme on DCT Coefficients for Image Coding". International Journal of Students' Research in Technology & Management 3, no. 4 (September 27, 2015): 317–19. http://dx.doi.org/10.18510/ijsrtm.2015.341.

Abstract
Image coding is effective because it reduces the number of bits required to store and/or transmit image data. Transform-based image coders play a significant role, as they decorrelate low-level spatial information, and they are used in international compression standards such as JPEG, JPEG 2000, MPEG and H.264. The choice of transform is an important issue in all these transform coding schemes; most of the literature suggests either the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT). In this work, the energy preservation of DCT coefficients is analysed, and a lifting scheme is iteratively applied to downsample these coefficients, compensating for the artifacts that appear in the reconstructed picture and yielding a higher compression ratio. This is followed by scalar quantization and entropy coding, as in JPEG. The performance of the proposed iterative lifting scheme, applied to decorrelated DCT coefficients, is measured with the standard Peak Signal-to-Noise Ratio (PSNR), and the results are encouraging.
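
For reference, a compact sketch of the separable 8×8 DCT that such transform coders build on (the orthonormal DCT-II and its inverse) is given below; the paper's iterative lifting step is not reproduced here.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: C[k, i] = s(k) * cos(pi * (2i + 1) * k / (2n))."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] *= 1.0 / np.sqrt(2.0)
    return c * np.sqrt(2.0 / n)

def block_dct2(block):
    """Separable 2-D DCT of an 8x8 block: C @ B @ C.T."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = block_dct2(block)
recon = dct_matrix(8).T @ coeffs @ dct_matrix(8)   # inverse, since C is orthonormal
print(np.allclose(recon, block))                   # True
```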
11

Götting, Detlef, Achim Ibenthal, and Rolf-Rainer Grigat. "Fractal Image Coding and Magnification Using Invariant Features". Fractals 05, supp01 (April 1997): 65–74. http://dx.doi.org/10.1142/s0218348x97000644.

Abstract
Fractal image coding has significant potential for the compression of still and moving images and also for scaling up images. The objective of our investigations was twofold. First, compression ratios of factor 60 and more for still images have been achieved, yielding a better quality of the decoded picture material than standard methods like JPEG. Second, image enlargement up to factors of 16 per dimension has been realized by means of fractal zoom, leading to natural and sharp representation of the scaled image content. Quality improvements were achieved due to the introduction of an extended luminance transform. In order to reduce the computational complexity of the encoding process, a new class of simple and suited invariant features is proposed, facilitating the search in the multidimensional space spanned by image domains and affine transforms.
12

Iqbal, Yasir, and Oh-Jin Kwon. "Improved JPEG Coding by Filtering 8 × 8 DCT Blocks". Journal of Imaging 7, no. 7 (July 15, 2021): 117. http://dx.doi.org/10.3390/jimaging7070117.

Abstract
The JPEG format, consisting of a set of image compression techniques, is one of the most commonly used image coding standards for both lossy and lossless image encoding. In this format, various techniques are used to improve image transmission and storage. In the final step of lossy image coding, JPEG uses either arithmetic or Huffman entropy coding modes to further compress data processed by lossy compression. Both modes encode all the 8 × 8 DCT blocks without filtering empty ones. An end-of-block marker is coded for empty blocks, and these empty blocks cause an unnecessary increase in file size when they are stored with the rest of the data. In this paper, we propose a modified version of the JPEG entropy coding. In the proposed version, instead of storing an end-of-block code for empty blocks with the rest of the data, we store their location in a separate buffer and then compress the buffer with an efficient lossless method to achieve a higher compression ratio. The size of the additional buffer, which keeps the information of location for the empty and non-empty blocks, was considered during the calculation of bits per pixel for the test images. In image compression, peak signal-to-noise ratio versus bits per pixel has been a major measure for evaluating the coding performance. Experimental results indicate that the proposed modified algorithm achieves lower bits per pixel while retaining quality.
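
The idea of moving empty-block information into a separately compressed buffer can be pictured roughly as follows; the helper names and the use of zlib are assumptions for this sketch, not the authors' implementation.

```python
import zlib
import numpy as np

def split_empty_blocks(quantized_blocks):
    """quantized_blocks: iterable of 8x8 integer arrays after quantization.
    Returns (zlib-compressed location bitmap, list of non-empty blocks)."""
    flags, non_empty = [], []
    for blk in quantized_blocks:
        empty = not np.any(blk)
        flags.append(0 if empty else 1)
        if not empty:
            non_empty.append(blk)
    # Pack one flag per block into bytes, then compress the buffer losslessly.
    packed = np.packbits(np.array(flags, dtype=np.uint8)).tobytes()
    return zlib.compress(packed), non_empty

blocks = [np.zeros((8, 8), int), np.eye(8, dtype=int), np.zeros((8, 8), int)]
location_buffer, kept = split_empty_blocks(blocks)
print(len(kept), len(location_buffer))   # 1 non-empty block, small side buffer
```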
13

Cho, Sang-Gyu, Zoran Bojkovic, Dragorad Milovanovic, Jungsik Lee, and Jae-Jeong Hwang. "Image quality evaluation: JPEG 2000 versus intra-only H.264/AVC High Profile". Facta universitatis - series: Electronics and Energetics 20, no. 1 (2007): 71–83. http://dx.doi.org/10.2298/fuee0701071c.

Abstract
The objective of this work is to provide an image quality evaluation of the intra-only H.264/AVC High Profile (HP) standard versus the JPEG 2000 standard. We review the structure of the two standards and their coding algorithms in the context of subjective and objective assessments. Simulations were performed on a test set of monochrome and color images. We observed that the subjective and objective image quality of H.264/AVC is superior to that of JPEG 2000, except for the blocking artifact, which is inherent because it uses a block transform rather than a whole-image transform. We therefore propose a unified measurement system to properly define image quality.
14

Coelho, Diego F. G., Renato J. Cintra, Fábio M. Bayer, Sunera Kulasekera, Arjuna Madanayake, Paulo Martinez, Thiago L. T. Silveira, Raíza S. Oliveira, and Vassil S. Dimitrov. "Low-Complexity Loeffler DCT Approximations for Image and Video Coding". Journal of Low Power Electronics and Applications 8, no. 4 (November 22, 2018): 46. http://dx.doi.org/10.3390/jlpea8040046.

Abstract
This paper introduced a matrix parametrization method based on the Loeffler discrete cosine transform (DCT) algorithm. As a result, a new class of 8-point DCT approximations was proposed, capable of unifying the mathematical formalism of several 8-point DCT approximations archived in the literature. Pareto-efficient DCT approximations are obtained through multicriteria optimization, where computational complexity, proximity, and coding performance are considered. Efficient approximations and their scaled 16- and 32-point versions are embedded into image and video encoders, including a JPEG-like codec and H.264/AVC and H.265/HEVC standards. Results are compared to the unmodified standard codecs. Efficient approximations are mapped and implemented on a Xilinx VLX240T FPGA and evaluated for area, speed, and power consumption.
15

Descampe, Antonin, Thomas Richter, Touradj Ebrahimi, Siegfried Foessel, Joachim Keinert, Tim Bruylants, Pascal Pellegrin, Charles Buysschaert, and Gael Rouvroy. "JPEG XS—A New Standard for Visually Lossless Low-Latency Lightweight Image Coding". Proceedings of the IEEE 109, no. 9 (September 2021): 1559–77. http://dx.doi.org/10.1109/jproc.2021.3080916.

16

Endoh, Toshiaki. "International Standard Coding Scheme for Color Still Images. JPEG Algorithm". Journal of the Institute of Television Engineers of Japan 46, no. 8 (1992): 1021–24. http://dx.doi.org/10.3169/itej1978.46.1021.

17

Raittinen, Harri, and Kimmo Kaski. "Critical Review of Fractal Image Compression". International Journal of Modern Physics C 06, no. 01 (February 1995): 47–66. http://dx.doi.org/10.1142/s0129183195000058.

Abstract
In this paper, fractal compression methods are reviewed. Three new methods are developed and their results are compared with the results obtained using four previously published fractal compression methods. Furthermore, we have compared the results of these methods with the standard JPEG method. For comparison, we have used an extensive set of image quality measures. According to these tests, fractal methods do not yield significantly better compression results when compared with conventional methods. This is especially the case when high coding accuracy (small compression ratio) is desired.
18

Sultan, Bushra A., and Loay E. George. "Color image compression based on spatial and magnitude signal decomposition". International Journal of Electrical and Computer Engineering (IJECE) 11, no. 5 (October 1, 2021): 4069. http://dx.doi.org/10.11591/ijece.v11i5.pp4069-4081.

Abstract
In this paper, a simple color image compression system based on image signal decomposition is proposed. The RGB color bands are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: most and least significant. Because the most significant value (MSV) is strongly affected by even simple modifications, an adaptive lossless compression system is proposed for it, using bit-plane (BP) slicing, delta pulse code modulation (Delta PCM), and adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. A lossy compression system is introduced to handle the least significant value (LSV); it is based on an adaptive, error-bounded coding system and uses the DCT compression scheme. The performance of the developed compression system was analyzed and compared with that of the universal JPEG standard, and the results indicate that its performance is comparable to or better than that of the JPEG standards.
19

Wang, Zhe, Trung-Hieu Tran, Ponnanna Kelettira Muthappa, and Sven Simon. "A JND-Based Pixel-Domain Algorithm and Hardware Architecture for Perceptual Image Coding". Journal of Imaging 5, no. 5 (April 26, 2019): 50. http://dx.doi.org/10.3390/jimaging5050050.

Abstract
This paper presents a hardware efficient pixel-domain just-noticeable difference (JND) model and its hardware architecture implemented on an FPGA. This JND model architecture is further proposed to be part of a low complexity pixel-domain perceptual image coding architecture, which is based on downsampling and predictive coding. The downsampling is performed adaptively on the input image based on regions-of-interest (ROIs) identified by measuring the downsampling distortions against the visibility thresholds given by the JND model. The coding error at any pixel location can be guaranteed to be within the corresponding JND threshold in order to obtain excellent visual quality. Experimental results show the improved accuracy of the proposed JND model in estimating visual redundancies compared with classic JND models published earlier. Compression experiments demonstrate improved rate-distortion performance and visual quality over JPEG-LS as well as reduced compressed bit rates compared with other standard codecs such as JPEG 2000 at the same peak signal-to-perceptible-noise ratio (PSPNR). FPGA synthesis results targeting a mid-range device show very moderate hardware resource requirements and over 100 Megapixel/s throughput of both the JND model and the perceptual encoder.
20

Hsieh, Ping Ang, and Ja-Ling Wu. "A Review of the Asymmetric Numeral System and Its Applications to Digital Images". Entropy 24, no. 3 (March 7, 2022): 375. http://dx.doi.org/10.3390/e24030375.

Abstract
The Asymmetric Numeral System (ANS) is a new entropy compression method that the industry has highly valued in recent years. ANS is valued by the industry precisely because it captures the benefits of both Huffman Coding and Arithmetic Coding. Surprisingly, compared with Huffman and Arithmetic coding, systematic descriptions of ANS are relatively rare. In 2017, JPEG proposed a new image compression standard—JPEG XL, which uses ANS as its entropy compression method. This fact implies that the ANS technique is mature and will play a kernel role in compressing digital images. However, because the realization of ANS involves combination optimization and the process is not unique, only a few members in the compression academia community and the domestic industry have noticed the progress of this powerful entropy compression approach. Therefore, we think a thorough overview of ANS is beneficial, and this idea brings our contributions to the first part of this work. In addition to providing compact representations, ANS has the following prominent feature: just like its Arithmetic Coding counterpart, ANS has Chaos characteristics. The chaotic behavior of ANS is reflected in two aspects. The first one is that the corresponding compressed output will change a lot if there is a tiny change in the original input; moreover, the reverse is also applied. The second is that ANS compressing an image will produce two intertwined outcomes: a positive integer (aka. state) and a bitstream segment. Correct ANS decompression is possible only when both can be precisely obtained. Combining these two characteristics helps process digital images, e.g., art collection images and medical images, to achieve compression and encryption simultaneously. In the second part of this work, we explore the characteristics of ANS in depth and develop its applications specific to joint compression and encryption of digital images.
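
A toy range-ANS (rANS) round trip conveys the state-update rule the review builds on; this sketch uses an unbounded big-integer state and omits the renormalization and table construction that production codecs such as JPEG XL employ.

```python
def build_tables(freqs):
    """freqs: dict symbol -> integer frequency. Returns (total, cumulative dict)."""
    total, cum, acc = sum(freqs.values()), {}, 0
    for s in sorted(freqs):
        cum[s] = acc
        acc += freqs[s]
    return total, cum

def rans_encode(message, freqs):
    total, cum = build_tables(freqs)
    x = 1                                # initial state
    for s in reversed(message):          # rANS encodes symbols in reverse order
        f, c = freqs[s], cum[s]
        x = (x // f) * total + c + (x % f)
    return x

def rans_decode(x, n, freqs):
    total, cum = build_tables(freqs)
    out = []
    for _ in range(n):
        slot = x % total                 # identifies the symbol's sub-interval
        s = next(sym for sym in sorted(freqs) if cum[sym] <= slot < cum[sym] + freqs[sym])
        out.append(s)
        x = freqs[s] * (x // total) + slot - cum[s]
    return out

freqs = {"a": 3, "b": 1}
msg = list("aababaaa")
state = rans_encode(msg, freqs)
print(rans_decode(state, len(msg), freqs) == msg)   # True
```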
21

Radosavljević, Miloš, Branko Brkljač, Predrag Lugonja, Vladimir Crnojević, Željen Trpovski, Zixiang Xiong, and Dejan Vukobratović. "Lossy Compression of Multispectral Satellite Images with Application to Crop Thematic Mapping: A HEVC Comparative Study". Remote Sensing 12, no. 10 (May 16, 2020): 1590. http://dx.doi.org/10.3390/rs12101590.

Abstract
Remote sensing applications have gained in popularity in recent years, which has resulted in vast amounts of data being produced on a daily basis. Managing and delivering large sets of data becomes extremely difficult and resource demanding for the data vendors, but even more for individual users and third party stakeholders. Hence, research in the field of efficient remote sensing data handling and manipulation has become a very active research topic (from both storage and communication perspectives). Driven by the rapid growth in the volume of optical satellite measurements, in this work we explore the lossy compression technique for multispectral satellite images. We give a comprehensive analysis of the High Efficiency Video Coding (HEVC) still-image intra coding part applied to the multispectral image data. Thereafter, we analyze the impact of the distortions introduced by the HEVC’s intra compression in the general case, as well as in the specific context of crop classification application. Results show that HEVC’s intra coding achieves better trade-off between compression gain and image quality, as compared to standard JPEG 2000 solution. On the other hand, this also reflects in the better performance of the designed pixel-based classifier in the analyzed crop classification task. We show that HEVC can obtain up to 150:1 compression ratio, when observing compression in the context of specific application, without significantly losing on classification performance compared to classifier trained and applied on raw data. In comparison, in order to maintain the same performance, JPEG 2000 allows compression ratio up to 70:1.
22

Singh, Kulwinder, Ming Ma, Dong Won Park, and Syungog An. "Image Indexing Based On MPEG-7 Scalable Color Descriptor". Key Engineering Materials 277-279 (January 2005): 375–82. http://dx.doi.org/10.4028/www.scientific.net/kem.277-279.375.

Abstract
The MPEG-7 standard defines a set of descriptors that extract low-level features such as color, texture and object shape from an image and generate metadata that represents the extracted information. In this paper we propose a new image retrieval technique for image indexing based on the MPEG-7 scalable color descriptor. We use some specifications of the scalable color descriptor (SCD) for the implementation of the color histograms. The MPEG-7 standard defines l1-norm-based matching in the SCD. But in our approach, for distance measurement, we achieve a better result by using cosine similarity coefficient for color histograms. This approach has significantly increased the accuracy of obtaining results for image retrieval. Experiments based on scalable color descriptors are illustrated. We also present the color spaces supported by the different image and video coding standards such as JPEG-2000, MPEG-1, 2, 4 and MPEG-7. In addition, this paper outlines the broad details of MPEG-7 Color Descriptors.
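
The two matching rules compared in the abstract can be sketched as follows (l1 distance versus cosine similarity between color histograms); the five-bin histograms are made-up example values.

```python
import math

def l1_distance(h1, h2):
    """L1-norm matching as defined for the MPEG-7 scalable color descriptor."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def cosine_similarity(h1, h2):
    """Cosine similarity coefficient between two histograms (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(h1, h2))
    norm = math.sqrt(sum(a * a for a in h1)) * math.sqrt(sum(b * b for b in h2))
    return dot / norm if norm else 0.0

query = [12, 0, 3, 7, 1]
candidate = [10, 1, 4, 6, 0]
print(l1_distance(query, candidate), round(cosine_similarity(query, candidate), 3))
```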
23

Zhang, Xi, and Noriaki Fukuda. "Lossy to lossless image coding based on wavelets using a complex allpass filter". International Journal of Wavelets, Multiresolution and Information Processing 12, no. 04 (July 2014): 1460002. http://dx.doi.org/10.1142/s0219691314600029.

Abstract
Wavelet-based image coding has been adopted in the international standard JPEG 2000 for its efficiency. It is well-known that the orthogonality and symmetry of wavelets are two important properties for many applications of signal processing and image processing. Both can be simultaneously realized by the wavelet filter banks composed of a complex allpass filter, thus, it is expected to get a better coding performance than the conventional biorthogonal wavelets. This paper proposes an effective implementation of orthonormal symmetric wavelet filter banks composed of a complex allpass filter for lossy to lossless image compression. First, irreversible real-to-real wavelet transforms are realized by implementing a complex allpass filter for lossy image coding. Next, reversible integer-to-integer wavelet transforms are proposed by incorporating the rounding operation into the filtering processing to obtain an invertible complex allpass filter for lossless image coding. Finally, the coding performance of the proposed orthonormal symmetric wavelets is evaluated and compared with the D-9/7 and D-5/3 biorthogonal wavelets. It is shown from the experimental results that the proposed allpass-based orthonormal symmetric wavelets can achieve a better coding performance than the conventional D-9/7 and D-5/3 biorthogonal wavelets both in lossy and lossless coding.
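
The reversible integer-to-integer idea, rounding inside the lifting steps, can be illustrated with the familiar LeGall 5/3 lifting used for JPEG 2000 lossless coding; this is a generic sketch of that rounding trick under simple boundary handling, not the authors' allpass-filter construction.

```python
def lift_53_forward(x):
    """Reversible integer 5/3 lifting (JPEG 2000 lossless style) on an even-length list."""
    s, d = x[0::2], x[1::2]                       # even and odd samples
    for i in range(len(d)):                       # predict step with rounding
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] -= (s[i] + right) // 2
    for i in range(len(s)):                       # update step with rounding
        left = d[i - 1] if i > 0 else d[i]
        s[i] += (left + d[i] + 2) // 4
    return s, d

def lift_53_inverse(s, d):
    s, d = s[:], d[:]
    for i in range(len(s)):                       # undo update
        left = d[i - 1] if i > 0 else d[i]
        s[i] -= (left + d[i] + 2) // 4
    for i in range(len(d)):                       # undo predict
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] += (s[i] + right) // 2
    out = [0] * (len(s) + len(d))
    out[0::2], out[1::2] = s, d
    return out

x = [10, 12, 9, 7, 14, 13, 2, 0]
print(lift_53_inverse(*lift_53_forward(x)) == x)   # True: bit-exact reconstruction
```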
24

Podilchuk, Christine I., and Robert J. Safranek. "Image and Video Compression: A Review". International Journal of High Speed Electronics and Systems 08, no. 01 (March 1997): 119–77. http://dx.doi.org/10.1142/s0129156497000056.

Abstract
The area of image and video compression has made tremendous progress over the last several decades. The successes in image compression are due to advances and better understanding of waveform coding methods which take advantage of the signal statistics, perceptual methods which take advantage of psychovisual properties of the human visual system (HVS) and object-based models especially for very low bit rate work. Recent years have produced several image coding standards—JPEG for still image compression and H.261, MPEG-I and MPEG-II for video compression. While we have devoted a special section in this paper to cover international coding standards because of their practical value, we have also covered a large class of nonstandard coding technology in the interest of completeness and potential future value. Very low bit rate video coding remains a challenging problem as does our understanding of the human visual system for perceptually optimum compression. The wide range of applications and bit rates, from video telephony at rates as low as 9.6 kbps to HDTV at 20 Mbps and higher, has acted as a catalyst for generating new ideas in tackling the different challenges characterized by the particular application. The area of image compression will remain an interesting and fruitful area of research as we focus on combining source coding with channel coding and multimedia networking.
25

Nagaraj, Nithin. "Huffman Coding as a Nonlinear Dynamical System". International Journal of Bifurcation and Chaos 21, no. 06 (June 2011): 1727–36. http://dx.doi.org/10.1142/s0218127411029392.

Abstract
In this paper, source coding or data compression is viewed as a measurement problem. Given a measurement device with fewer states than the observable of a stochastic source, how can one capture their essential information? We propose modeling stochastic sources as piecewise-linear discrete chaotic dynamical systems known as Generalized Luröth Series (GLS) which has its roots in Georg Cantor's work in 1869. These GLS are special maps with the property that their Lyapunov exponent is equal to the Shannon's entropy of the source (up to a constant of proportionality). By successively approximating the source with GLS having fewer states (with the nearest Lyapunov exponent), we derive a binary coding algorithm which turns out to be a rediscovery of Huffman coding, the popular lossless compression algorithm used in the JPEG international standard for still image compression.
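
For readers who want the baseline being rediscovered here, a minimal Huffman table construction looks roughly like this (a standard textbook sketch, not the GLS formulation of the paper).

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code table {symbol: bitstring} from sample data."""
    freq = Counter(data)
    if len(freq) == 1:                           # degenerate single-symbol source
        return {next(iter(freq)): "0"}
    # Heap items: (frequency, tie-breaker, tree); a tree is a symbol or a pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

table = huffman_code("abracadabra")
print(table)
print("".join(table[c] for c in "abracadabra"))   # compressed bitstring
```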
26

Koryciak, Sebastian, Agnieszka Dąbrowska-Boruch, and Kazimierz Wiatr. "Hardware Implementation of IDCT Fast Algorithms for Still Images Decompression in the Jpeg Standard". Image Processing & Communications 17, no. 4 (December 1, 2012): 103–8. http://dx.doi.org/10.2478/v10248-012-0035-x.

Abstract
Many algorithms are used in the JPEG standard for compression of still images, but the most demanding one is the DCT. The fast discrete cosine transform is the basic transform that occurs in most coding algorithms; in the case of images it is performed on 8×8 pixel blocks. The paper presents a comparison of IDCT algorithms focused on the number of arithmetic operations, multiplications, and pipelined steps. The results are obtained by implementing each algorithm in a programmable FPGA device (xc6vlx240t).
27

Mikhailiuk, Aliaksei, Nanyang Ye, and Rafał K. Mantiuk. "The effect of display brightness and viewing distance: a dataset for visually lossless image compression". Electronic Imaging 2021, no. 11 (January 18, 2021): 152–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.11.hvei-152.

Abstract
Visibility of image artifacts depends on the viewing conditions, such as display brightness and distance to the display. However, most image and video quality metrics operate under the assumption of a single standard viewing condition, without considering luminance or distance to the display. To address this limitation, we isolate brightness and distance as the components impacting the visibility of artifacts and collect a new dataset for visually lossless image compression. The dataset includes images encoded with JPEG and WebP at the quality level that makes compression artifacts imperceptible to an average observer. The visibility thresholds are collected under two luminance conditions: 10 cd/m2, simulating a dimmed mobile phone, and 220 cd/m2, which is a typical peak luminance of modern computer displays; and two distance conditions: 30 and 60 pixels per visual degree. The dataset was used to evaluate existing image quality and visibility metrics in their ability to account for display brightness and the viewer's distance to the display. We include two deep neural network architectures, proposed to control image compression for visually lossless coding, in our experiments.
28

Kliuchenia, V. V. "Design of a discrete cosine transformation processor for image compression systems on a lossless-to-lossy circuit". Doklady BGUIR 19, no. 3 (June 2, 2021): 5–13. http://dx.doi.org/10.35596/1729-7648-2021-19-3-5-13.

Abstract
Today, mobile multimedia systems that use the H.261/3/4/5, MPEG-1/2/4 and JPEG standards for encoding and decoding video, audio and images are widespread [1–4]. The core of these standards is the discrete cosine transform (DCT) of types I through VIII. The wide support of the JPEG format in a huge number of multimedia applications, in both hardware and software, together with the need for image coding according to the L2L scheme, makes it relevant to create a decorrelating transform based on the DCT and methods for rapid prototyping of integer-DCT processors on programmable systems on an FPGA chip. Characteristics such as structural regularity, modularity, high computational parallelism, low latency and low power consumption are taken into account. The forward and inverse transforms should follow a “whole-to-whole” processing scheme that preserves perfect reconstruction of the original image (the coefficients are represented by integer or binary rational numbers, and the number of multiplication operations is minimal or, if possible, they are excluded from the algorithm). The well-known integer DCTs (BinDCT, IntDCT) do not provide a completely reversible bit-to-bit conversion. To encode an image according to the L2L scheme, the decorrelating transform must be reversible and implemented in integer arithmetic, i.e., the conversion must follow an “integer-to-integer” processing scheme with a minimum number of rounding operations affecting the compactness of energy in the corresponding transform subbands. This article shows how, on the basis of integer forward and inverse DCTs, to create a new universal decorrelating-transform architecture on FPGAs for transform image coding systems that operate on the “lossless-to-lossy” (L2L) principle, and to obtain the best experimental results for objective and subjective performance compared with comparable compression systems.
29

Sii, Alan, Simying Ong, and KokSheik Wong. "Improved Coefficient Recovery and Its Application for Rewritable Data Embedding". Journal of Imaging 7, no. 11 (November 18, 2021): 244. http://dx.doi.org/10.3390/jimaging7110244.

Abstract
JPEG is the most commonly utilized image coding standard for storage and transmission purposes. It achieves a good rate–distortion trade-off, and it has been adopted by many, if not all, handheld devices. However, often information loss occurs due to transmission error or damage to the storage device. To address this problem, various coefficient recovery methods have been proposed in the past, including a divide-and-conquer approach to speed up the recovery process. However, the segmentation technique considered in the existing method operates with the assumption of a bi-modal distribution for the pixel values, but most images do not satisfy this condition. Therefore, in this work, an adaptive method was employed to perform more accurate segmentation, so that the real potential of the previous coefficient recovery methods can be unleashed. In addition, an improved rewritable adaptive data embedding method is also proposed that exploits the recoverability of coefficients. Discrete cosine transformation (DCT) patches and blocks for data hiding are judiciously selected based on the predetermined precision to control the embedding capacity and image distortion. Our results suggest that the adaptive coefficient recovery method is able to improve on the conventional method up to 27% in terms of CPU time, and it also achieved better image quality with most considered images. Furthermore, the proposed rewritable data embedding method is able to embed 20,146 bits into an image of dimensions 512×512.
30

Nayak, Dibyalekha, Kananbala Ray, Tejaswini Kar, and Chiman Kwan. "Walsh–Hadamard Kernel Feature-Based Image Compression Using DCT with Bi-Level Quantization". Computers 11, no. 7 (July 4, 2022): 110. http://dx.doi.org/10.3390/computers11070110.

Abstract
To meet the high bit rate requirements in many multimedia applications, a lossy image compression algorithm based on Walsh–Hadamard kernel-based feature extraction, discrete cosine transform (DCT), and bi-level quantization is proposed in this paper. The selection of the quantization matrix of the block is made based on a weighted combination of the block feature strength (BFS) of the block extracted by projecting the selected Walsh–Hadamard basis kernels on an image block. The BFS is compared with an automatically generated threshold for applying the specific quantization matrix for compression. In this paper, higher BFS blocks are processed via DCT and high Q matrix, and blocks with lower feature strength are processed via DCT and low Q matrix. So, blocks with higher feature strength are less compressed and vice versa. The proposed algorithm is compared to different DCT and block truncation coding (BTC)-based approaches based on the quality parameters, such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) at constant bits per pixel (bpp). The proposed method shows significant improvements in performance over standard JPEG and recent approaches at lower bpp. It achieved an average PSNR of 35.61 dB and an average SSIM of 0.90 at a bpp of 0.5 and better perceptual quality with lower visual artifacts.
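
A rough sketch of the block-feature-strength idea follows: project a block onto a few Walsh–Hadamard kernels and pick a quantization matrix by threshold. The kernel selection, threshold, and matrices below are illustrative assumptions, not the authors' tuned values.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix, n a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def block_feature_strength(block, kernels=((0, 1), (1, 0), (1, 1))):
    """Mean absolute response of an 8x8 block projected onto selected WH basis kernels."""
    h = hadamard(8) / np.sqrt(8.0)
    responses = [abs(np.sum(np.outer(h[r], h[c]) * block)) for r, c in kernels]
    return float(np.mean(responses))

def choose_quantizer(block, threshold, q_low, q_high):
    """Bi-level quantization: strong-feature blocks get the finer matrix."""
    return q_high if block_feature_strength(block) >= threshold else q_low

flat = np.full((8, 8), 128.0)                 # weak feature response
edgy = np.tile([0.0, 255.0], (8, 4))          # strong vertical-edge response
q_low = np.full((8, 8), 40)                   # coarse quantization (assumed values)
q_high = np.full((8, 8), 10)                  # finer quantization (assumed values)
print(block_feature_strength(flat), block_feature_strength(edgy))
print(choose_quantizer(edgy, threshold=100.0, q_low=q_low, q_high=q_high)[0, 0])   # 10
```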
31

Krotova, Elena, Andrey Chekmenev, and Aleksandr Bolgov. "The Method of Extracting the Steganography Watermarks by Key Using Haar Wavelets". Applied Mathematics and Control Sciences, no. 4 (December 30, 2019): 59–70. http://dx.doi.org/10.15593/2499-9873/2019.4.04.

Abstract
This article describes a method for applying an encoded digital steganographic watermark to a digital image and subsequently extracting it using Haar wavelets. The method of applying a digital steganographic watermark by key, and of extracting the mark with that key, is considered, along with the relevance of this approach. The method of splitting a signal into sub-signals using the Haar algorithm, and how it applies to digital images, is briefly described. The results of testing the embedded watermark for resistance to various transformations are presented: blurring with a 3×3 or 5×5 kernel, JPEG compression with a compression ratio of 50 and 70%, and deletion of 1, 2 and 4 LSBs. Corresponding images illustrate the results of these robustness tests. A small, illustrative and easy-to-implement example is given of applying a digital steganographic watermark and extracting it with a previously created key, using a simple coding in which the columns of pixels of the original image are shifted by a certain number of positions. The article also gives a brief description of the LSB algorithm and compares the main advantages and disadvantages of the developed algorithm with the standard LSB algorithm. In conclusion, the applicability, shortcomings and advantages of the developed algorithm are summarized.
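
The Haar sub-signal split and the keyed column shift mentioned above can be sketched as follows; the watermark embedding and extraction steps themselves are not reproduced, and the cyclic-shift helper is a simplified stand-in.

```python
import numpy as np

def haar2d_level1(img):
    """One level of the 2-D Haar transform: returns LL, LH, HL, HH sub-bands.
    img: 2-D float array with even height and width."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0      # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0      # horizontal detail
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def shift_columns(img, key):
    """Toy version of the keyed column shift: cyclically rotate columns by the key."""
    return np.roll(img, shift=key, axis=1)

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d_level1(shift_columns(img, key=3))
print(ll.shape, hh.shape)    # (4, 4) (4, 4)
```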
32

Huang, Bormin, Alok Ahuja, and Hung-Lung Huang. "Optimal Compression of High Spectral Resolution Satellite Data via Adaptive Vector Quantization with Linear Prediction". Journal of Atmospheric and Oceanic Technology 25, no. 6 (June 1, 2008): 1041–47. http://dx.doi.org/10.1175/2007jtecha917.1.

Abstract
Contemporary and future high spectral resolution sounders represent a significant technical advancement for environmental and meteorological prediction and monitoring. Given their large volume of spectral observations, the use of robust data compression techniques will be beneficial to data transmission and storage. In this paper, a novel adaptive vector quantization (VQ)-based linear prediction (AVQLP) method for lossless compression of high spectral resolution sounder data is proposed. The AVQLP method optimally adjusts the quantization codebook sizes to yield the maximum compression on prediction residuals and side information. The method outperforms the state-of-the-art compression methods [Joint Photographic Experts Group (JPEG)-LS, JPEG2000 Parts 1 and 2, Consultative Committee for Space Data Systems (CCSDS) Image Data Compression (IDC) 5/3, Context-Based Adaptive Lossless Image Coding (CALIC), and 3D Set Partitioning in Hierarchical Trees (SPIHT)] and achieves a new high in lossless compression for the standard test set of 10 NASA Atmospheric Infrared Sounder (AIRS) granules. It also compares favorably in terms of computational efficiency and compression gain to recently reported adaptive clustering methods for lossless compression of high spectral resolution data. Given its superior compression performance, the AVQLP method is well suited to ground operation of high spectral resolution satellite data compression for rebroadcast and archiving purposes.
33

Puchala, D., and M. M. Yatsymirskyy. "Joint compression and encryption of visual data using orthogonal parametric transforms". Bulletin of the Polish Academy of Sciences Technical Sciences 64, no. 2 (June 1, 2016): 373–82. http://dx.doi.org/10.1515/bpasts-2016-0042.

Abstract
In this paper, we introduce a novel method of joint compression and encryption of visual data. In the proposed approach the compression stage is based on block quantization while the encryption uses fast parametric orthogonal transforms of arbitrary forms in combination with a novel scheme of intra-block mixing of data vectors. Theoretical analysis of the method indicates no impact of the encryption stage on the effectiveness of block quantization with an additional step of first-order entropy coding. Moreover, a series of experimental studies involving natural images and the JPEG lossy compression standard were performed. The obtained results indicate a high level of visual content concealment with only a small reduction of compression performance. An additional analysis of security shows that the proposed method is resistant to cryptanalytic attacks known for visual data encryption schemes, including the most efficient NZCA attack. The proposed method can also be characterized by high computational efficiency and feasibility of hardware realizations.
34

Ishikawa, Takaaki. "JPEG Still Image Coding and Standardization". Journal of the Institute of Image Information and Television Engineers 67, no. 6 (2013): 477–81. http://dx.doi.org/10.3169/itej.67.477.

35

Jókay, Matúš, and Tomáš Moravćík. "Image-based jpeg steganography". Tatra Mountains Mathematical Publications 45, no. 1 (December 1, 2010): 65–74. http://dx.doi.org/10.2478/v10127-010-0006-9.

Abstract
This paper deals with the LSB (modification of the Least Significant Bits) steganographic algorithm in JPEG images. The focus is on minimizing the number of modified DCT coefficients using (2^k − 1, 2^k − k − 1) Hamming codes. The experimental part of the paper examines the dependencies between the coding, efficiency and saturation.
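
Matrix embedding with the k = 3 member of that Hamming family, the (7, 4) code, hides three message bits in seven cover bits while flipping at most one of them, roughly as follows (an illustrative sketch, not the authors' exact embedding).

```python
import numpy as np

# Parity-check matrix of the (7, 4) Hamming code; column j is the binary expansion of j+1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover_bits, message_bits):
    """Matrix embedding: hide 3 message bits in 7 cover bits by flipping at most one bit."""
    cover = np.array(cover_bits) % 2
    syndrome = H @ cover % 2
    diff = (syndrome + message_bits) % 2          # syndrome change still needed (GF(2))
    if diff.any():
        # The column of H equal to diff gives the single position to flip (1-based index).
        pos = int(diff[0]) * 4 + int(diff[1]) * 2 + int(diff[2]) - 1
        cover[pos] ^= 1
    return cover

def extract(stego_bits):
    """The receiver recovers the message as the syndrome of the stego bits."""
    return H @ (np.array(stego_bits) % 2) % 2

cover = [1, 0, 1, 1, 0, 0, 1]      # e.g. LSBs of 7 non-zero quantized DCT coefficients
msg = np.array([1, 0, 1])
stego = embed(cover, msg)
print(extract(stego), int(np.sum(stego != np.array(cover))))   # message recovered, <=1 change
```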
36

Shan, Rongyang, Chengyou Wang, Wei Huang, and Xiao Zhou. "DCT-JPEG Image Coding Based on GPU". International Journal of Hybrid Information Technology 8, no. 5 (May 31, 2015): 293–302. http://dx.doi.org/10.14257/ijhit.2015.8.5.32.

37

In, J., S. Shirani, and F. Kossentini. "On RD optimized progressive image coding using JPEG". IEEE Transactions on Image Processing 8, no. 11 (1999): 1630–38. http://dx.doi.org/10.1109/83.799890.

38

Sebestyen, Istvan. "JPEG: Still image data compression standard". Computer Standards & Interfaces 15, no. 4 (September 1993): 365–66. http://dx.doi.org/10.1016/0920-5489(93)90038-s.

39

Pinheiro, Antonio. "JPEG Column: 93rd JPEG Meeting". ACM SIGMultimedia Records 13, no. 4 (December 2021): 1. http://dx.doi.org/10.1145/3578508.3578512.

Abstract
The 93rd JPEG meeting was held online from 18 to 22 October 2021. The JPEG Committee continued its work on the development of new standardised solutions for the representation of visual information. Notably, the JPEG Committee has decided to release a new call for proposals on point cloud coding based on machine learning technologies that targets both compression efficiency and effective performance for 3D processing as well as machine and computer vision tasks. This activity will be conducted in parallel with JPEG AI standardization. Furthermore, it was also decided to pursue the development of a new standard in the context of the exploration on JPEG Fake News activity.
40

Zadeh, Pooneh Bagheri, Akbar Sheikh Akbari, and Tom Buggy. "DCT image codec using variance of sub-regions". Open Computer Science 5, no. 1 (August 11, 2015): 13–21. http://dx.doi.org/10.1515/comp-2015-0003.

Abstract
This paper presents a novel image-coding scheme based on the variance of sub-regions and the discrete cosine transform. The proposed encoder divides the input image into a number of non-overlapping blocks. The coefficients in each block are then transformed into their spatial frequencies using a discrete cosine transform. Coefficients with the same spatial frequency index at different blocks are put together, generating a number of matrices, where each matrix contains coefficients of a particular spatial frequency index. The matrix containing DC coefficients is losslessly coded to preserve its visually important information. Matrices containing high-frequency coefficients are coded using the variance-of-sub-regions-based encoding algorithm proposed in this paper. Perceptual weights are used to regulate the threshold value required in the coding process of the high-frequency matrices. An extension of the system to progressive image transmission is also developed. The proposed coding scheme, JPEG and JPEG 2000 were applied to a number of test images. Results show that the proposed coding scheme outperforms JPEG and JPEG 2000 subjectively and objectively at low compression ratios. Results also indicate that images decoded by the proposed codec exhibit superior subjective quality at high compression ratios compared with JPEG, while offering satisfactory results compared with JPEG 2000.
41

Ullah, Faiz, Oh-Jin Kwon, and Seungcheol Choi. "Generation of a Panorama Compatible with the JPEG 360 International Standard Using a Single PTZ Camera". Applied Sciences 11, no. 22 (November 21, 2021): 11019. http://dx.doi.org/10.3390/app112211019.

Abstract
Recently, the JPEG working group (ISO/IEC JTC1 SC29 WG1) developed an international standard, JPEG 360, that specifies the metadata and functionalities for saving and sharing 360-degree images efficiently to create a more realistic environment in various virtual reality services. We surveyed the metadata formats of existing 360-degree images and compared them to the JPEG 360 metadata format. We found that existing omnidirectional cameras and stitching software packages use formats that are incompatible with the JPEG 360 standard to embed metadata in JPEG image files. This paper proposes an easy-to-use tool for embedding JPEG 360 standard metadata for 360-degree images in JPEG image files using a JPEG-defined box format: the JPEG universal metadata box format. The proposed implementation will help 360-degree cameras and software vendors provide immersive services to users in a standardized manner for various markets, such as entertainment, education, professional training, navigation, and virtual and augmented reality applications. We also propose and develop an economical JPEG 360 standard compatible panoramic image acquisition system from a single PTZ camera with a special-use case of a wide field of view image of a conference or meeting. A remote attendee of the conference/meeting can see the realistic and immersive environment through our PTZ panorama in virtual reality.
42

Wang, Fang Chao, Sen Bai, Bo Zhao, and Nan He. "Grayscale Image Compression and Encryption Based on Format Conversion". Applied Mechanics and Materials 411-414 (September 2013): 1193–96. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.1193.

Abstract
In this paper, we describe a novel encryption algorithm, which converts a greyscale image into a colored JPEG image. Firstly, it creates MCU (Minimum Coding Unit) of the colored JPEG image from the DU (Data Unit) of the greyscale image by the 8x8 construction matrix randomly. Secondly, it shuffles all the DUs with quantized DCT (Discrete Cosine Transform) coefficients according to a random ergodic matrix. Lastly, it rearranges the DUs as the format of the colored JPEG image and proceeds with the normal compression and encoding. The results show that the encryption speed of the algorithm is fast enough for real-time transmission and the encrypted image has almost the same size as original image after direct compression.
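
The block-shuffling step can be pictured with a key-seeded permutation of quantized 8×8 blocks, sketched below; the grayscale-to-color MCU packing and the actual JPEG entropy coding are omitted, and numpy's generator stands in for the paper's random ergodic matrix.

```python
import numpy as np

def shuffle_blocks(blocks, key):
    """Permute a list of quantized 8x8 DCT blocks with a key-seeded permutation."""
    perm = np.random.default_rng(key).permutation(len(blocks))
    return [blocks[i] for i in perm], perm

def unshuffle_blocks(shuffled, perm):
    """Invert the permutation: shuffled[new_pos] came from position perm[new_pos]."""
    restored = [None] * len(shuffled)
    for new_pos, old_pos in enumerate(perm):
        restored[old_pos] = shuffled[new_pos]
    return restored

blocks = [np.full((8, 8), v) for v in range(6)]
scrambled, perm = shuffle_blocks(blocks, key=2024)
restored = unshuffle_blocks(scrambled, perm)
print(all(np.array_equal(a, b) for a, b in zip(blocks, restored)))   # True
```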
43

Skodras, A., C. Christopoulos, and T. Ebrahimi. "The JPEG 2000 still image compression standard". IEEE Signal Processing Magazine 18, no. 5 (2001): 36–58. http://dx.doi.org/10.1109/79.952804.

44

Xu, Bo Hao, and Yong Sheng Hao. "Research on Image Technology with Progressive Image Compression Based on JPEG and Laplacian Pyramid". Advanced Materials Research 886 (January 2014): 650–54. http://dx.doi.org/10.4028/www.scientific.net/amr.886.650.

Abstract
Progressive image transmission is an image technology that has been widely used in various fields; it can not only save bandwidth but also improve the user experience by meeting user demand for different image qualities. By adapting the compression coding flow to the user's demand for image quality, progressive coding can meet the needs of users. This article mainly introduces how to implement progressive image compression by means of JPEG and the Laplacian pyramid coding principle.
45

Saudagar, Abdul Khader Jilani. "Biomedical Image Compression Techniques for Clinical Image Processing". International Journal of Online and Biomedical Engineering (iJOE) 16, no. 12 (October 19, 2020): 133. http://dx.doi.org/10.3991/ijoe.v16i12.17019.

Abstract
Image processing is widely used in the domain of biomedical engineering, especially for the compression of clinical images. Clinical diagnosis is highly important and involves handling patient data accurately and carefully when treating patients remotely. Many researchers have proposed different methods for compressing medical images using artificial intelligence techniques. Developing efficient automated systems for the compression of medical images in telemedicine is the focal point of this paper. Three major approaches are proposed for medical image compression: image compression using neural networks, fuzzy logic and neuro-fuzzy logic to preserve a higher spectral representation and maintain finer edge information, and relational coding of inter-band coefficients to achieve high compression. The developed image coding model is evaluated over various quality factors. The simulation results show that the proposed image coding system can achieve efficient compression performance compared with existing block coding and JPEG coding approaches, even in resource-constrained environments.
46

Bi, Sheng, and Qiang Wang. "Fractal Image Coding Based on a Fitting Surface". Journal of Applied Mathematics 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/634848.

Abstract
A no-search fractal image coding method based on a fitting surface is proposed. In our research, an improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks and one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the previous fitting planes; this can result in smaller block matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also imply less number of blocks matching which results in a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is also proposed to speed up the fractal decoding. Experiments show that our proposed method can yield superior performance over the other three methods. Relative to range-averaged image, FSI can provide faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed which can provide higher PSNR than JPEG while maintaining the same Bpp.
47

S, Bhavani, and Thanushkodi K. "Adaptive Mesh Based 3D MR Image Compression Using JPEG Coding". International Journal on Intelligent Electronic Systems 4, no. 2 (2010): 7–13. http://dx.doi.org/10.18000/ijies.30073.

48

Takezawa, Megumi, and Miki Haseyama. "Quality Improvement of JPEG Images Based on Fractal Image Coding". Journal of the Institute of Image Information and Television Engineers 58, no. 9 (2004): 1317–23. http://dx.doi.org/10.3169/itej.58.1317.

49

Ouled Zaid, Azza, Christian Olivier, Olivier Alata, and Francois Marmoiton. "Transform image coding with global thresholding: Application to baseline JPEG". Pattern Recognition Letters 24, no. 7 (April 2003): 959–64. http://dx.doi.org/10.1016/s0167-8655(02)00219-2.

50

Vander Kam, R. A., Ping Wah Wong, and R. M. Gray. "JPEG-compliant perceptual coding for a grayscale image printing pipeline". IEEE Transactions on Image Processing 8, no. 1 (1999): 1–14. http://dx.doi.org/10.1109/83.736675.
