Journal articles on the topic 'Image coding'

To see the other types of publications on this topic, follow the link: Image coding.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Image coding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever they are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Saudagar, Abdul Khader Jilani. "Biomedical Image Compression Techniques for Clinical Image Processing." International Journal of Online and Biomedical Engineering (iJOE) 16, no. 12 (October 19, 2020): 133. http://dx.doi.org/10.3991/ijoe.v16i12.17019.

Full text
Abstract:
Image processing is widely used in biomedical engineering, especially for the compression of clinical images. Clinical diagnosis is highly important and involves handling patient data accurately and carefully when treating patients remotely. Many researchers have proposed methods for compressing medical images using artificial intelligence techniques. The focal point of this paper is developing efficient automated systems for compressing medical images in telemedicine. Three major approaches are proposed for medical image compression: image compression using neural networks, fuzzy logic, and neuro-fuzzy logic to preserve a richer spectral representation and maintain finer edge information, and relational coding of inter-band coefficients to achieve high compression. The developed image coding model is evaluated over various quality factors. The simulation results show that the proposed image coding system achieves efficient compression performance compared with existing block coding and JPEG coding approaches, even in resource-constrained environments.
APA, Harvard, Vancouver, ISO, and other styles
2

Takezawa, Takuma, and Yukihiko Yamashita. "Wavelet Based Image Coding via Image Component Prediction Using Neural Networks." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 137–42. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1026.

Full text
Abstract:
In wavelet-based image coding, performance can be enhanced by applying prediction. However, it is difficult to apply prediction using a decoded image to the 2D DWT used in JPEG2000, because the decoded pixels are far from the pixels to be predicted. Therefore, DWT coefficients, rather than images, have been predicted. To address this problem, predictive coding is applied to the one-dimensional transform part of the 2D DWT. Zhou and Yamashita proposed using half-pixel line-segment matching for the prediction in wavelet-based image coding. In this research, convolutional neural networks are used as the predictor, estimating a pair of target pixels from the values of already-decoded pixels adjacent to the target row. This reduces redundancy by sending only the error between the real value and its predicted value. We also show the method's advantage through experimental results.
APA, Harvard, Vancouver, ISO, and other styles
3

M.A.P., Manimekalai. "Efficient Image Compression Using Improved Huffman Coding With Enhanced Lempel ZIV CODING Approach." Journal of Advanced Research in Dynamical and Control Systems 12, no. 01-Special Issue (February 13, 2020): 359–68. http://dx.doi.org/10.5373/jardcs/v12sp1/20201082.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tanaka, Midori, Tomoyuki Takanashi, and Takahiko Horiuchi. "Glossiness-aware Image Coding in JPEG Framework." Journal of Imaging Science and Technology 64, no. 5 (September 1, 2020): 50409–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2020.64.5.050409.

Full text
Abstract:
In images, the representation of glossiness, translucency, and roughness of material objects (Shitsukan) is essential for realistic image reproduction. To date, image coding has been developed considering various indices of the quality of the encoded image, for example, the peak signal-to-noise ratio. Consequently, image coding methods that preserve subjective impressions of qualities such as Shitsukan have not been studied. In this study, the authors focus on the property of glossiness and propose a method of glossiness-aware image coding. Their purpose is to develop an encoding algorithm that produces images that can be decoded by standard JPEG decoders, which are commonly used worldwide. The proposed method consists of three procedures: block classification, glossiness enhancement, and non-glossiness information reduction. In block classification, the types of glossiness in a target image are classified using block units. In glossiness enhancement, the glossiness in each type of block is emphasized to reduce the amount of degradation of glossiness during JPEG encoding. The third procedure, non-glossiness information reduction, further compresses the information while maintaining the glossiness by reducing the information in each block that does not represent the glossiness in the image. To test the effectiveness of the proposed method, the authors conducted a subjective evaluation experiment using paired comparison of images coded by the proposed method and JPEG images with the same data size. The glossiness was found to be better preserved in images coded by the proposed method than in the JPEG images.
APA, Harvard, Vancouver, ISO, and other styles
5

Pearlman, William A., and Amir Said. "Image Wavelet Coding Systems: Part II of Set Partition Coding and Image Wavelet Coding Systems." Foundations and Trends® in Signal Processing 2, no. 3 (2007): 181–246. http://dx.doi.org/10.1561/2000000014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kumar, Vikas. "Compression Techniques Vs Huffman Coding." International Journal of Informatics and Communication Technology (IJ-ICT) 4, no. 1 (April 1, 2015): 29. http://dx.doi.org/10.11591/ijict.v4i1.pp29-37.

Full text
Abstract:
Interest in image compression techniques has been increasing because raw images need large amounts of disk space, which is a significant disadvantage for image transmission and storage. Although many compression techniques already exist, there is a need for techniques that are faster, more memory efficient, simple, and compatible with user requirements. In this paper we propose a method for image compression and decompression using a simple coding technique called Huffman coding, and show why it is more efficient than other techniques. The technique is simple to implement and uses less memory than the alternatives. A software algorithm has been developed and implemented to compress and decompress a given image using Huffman coding techniques on a MATLAB platform.
APA, Harvard, Vancouver, ISO, and other styles
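Since several entries in this list benchmark Huffman coding, a minimal textbook sketch of the algorithm may be useful. This is a generic Python illustration, not the MATLAB implementation the paper describes; all function names are ours.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, tree); a tree is a symbol or a pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # merge the two rarest subtrees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):                  # assign 0/1 along each branch
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

def encode(data, codes):
    return "".join(codes[s] for s in data)
```

Frequent symbols receive shorter codewords, which is where the compression comes from; the resulting code is prefix-free, so the bitstream can be decoded unambiguously.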
7

Reid, M. M., R. J. Millar, and N. D. Black. "Second-generation image coding." ACM Computing Surveys 29, no. 1 (March 1997): 3–29. http://dx.doi.org/10.1145/248621.248622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nohre, R. "Fragmentation-based image coding." Electronics Letters 31, no. 11 (May 25, 1995): 870–71. http://dx.doi.org/10.1049/el:19950583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, D., and A. C. Bovik. "Visual pattern image coding." IEEE Transactions on Communications 38, no. 12 (1990): 2137–46. http://dx.doi.org/10.1109/26.64656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pardàs, Montse. "Object-based image coding." Vistas in Astronomy 41, no. 3 (January 1997): 455–61. http://dx.doi.org/10.1016/s0083-6656(97)00051-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Wang, Zhou, and A. C. Bovik. "Embedded foveation image coding." IEEE Transactions on Image Processing 10, no. 10 (2001): 1397–410. http://dx.doi.org/10.1109/83.951527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Tsai, M. J., J. D. Villasenor, and F. Chen. "Stack-run image coding." IEEE Transactions on Circuits and Systems for Video Technology 6, no. 5 (1996): 519–21. http://dx.doi.org/10.1109/76.538934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Silva, V., L. Cruz, F. Lopes, A. Rodrigues, and L. de Sá. "Multiprocessor based image coding." Microprocessing and Microprogramming 32, no. 1-5 (August 1991): 343–48. http://dx.doi.org/10.1016/0165-6074(91)90368-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Qiu, G., and G. D. Finlayson. "Image Coding for Classification." Color and Imaging Conference 7, no. 1 (January 1, 1999): 278–82. http://dx.doi.org/10.2352/cic.1999.7.1.art00053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Fowler, J. E., M. R. Carbonara, and S. C. Ahalt. "Image coding using differential vector quantization." IEEE Transactions on Circuits and Systems for Video Technology 3, no. 5 (1993): 350–67. http://dx.doi.org/10.1109/76.246087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Sheng, Zhong, Xiao Yu Jiang, and Wei Zhen. "Pseudo-Color Coding with Phase-Modulated Image Density." Advanced Materials Research 403-408 (November 2011): 1618–21. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.1618.

Full text
Abstract:
Traditional pseudo-color coding of gray images, based on image enhancement techniques, cannot adequately handle some of the detail information in the image. In this paper, an enhanced approach to pseudo-color coding with phase-modulated image density is presented. The method yields distinct levels and richer colors, is well suited to human color perception, and provides a better algorithm for pseudo-color coding of gray images. The method has great potential in research and applications, is highly general, and can process images with high gray-level resolution.
APA, Harvard, Vancouver, ISO, and other styles
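The core of pseudo-color coding is a mapping from gray levels to colors. The abstract does not specify the paper's exact phase-modulation scheme, so the sketch below uses a generic mapping via phase-shifted sinusoids, purely as an illustration of the idea; all names are ours.

```python
import math

def pseudocolor(gray, levels=256):
    """Map a gray level in [0, levels-1] to an (R, G, B) triple using three
    sinusoids offset by 120-degree phases (a generic illustration, not the
    paper's phase-modulated-density scheme)."""
    t = 2 * math.pi * gray / (levels - 1)
    r = int(127.5 * (1 + math.sin(t)))
    g = int(127.5 * (1 + math.sin(t + 2 * math.pi / 3)))
    b = int(127.5 * (1 + math.sin(t + 4 * math.pi / 3)))
    return (r, g, b)
```

Because the three channels peak at different phases, nearby gray levels map to visibly different hues, which is what makes fine density differences easier to perceive.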
17

Pankiraj, Jeya Bright, Vishnuvarthanan Govindaraj, Yudong Zhang, Pallikonda Rajasekaran Murugan, and Anisha Milton. "Development of Scalable Coding of Encrypted Images Using Enhanced Block Truncation Code." Webology 19, no. 1 (January 20, 2022): 1620–39. http://dx.doi.org/10.14704/web/v19i1/web19109.

Full text
Abstract:
Few researchers have reported on scalable coding of encrypted images, and it is an important area of research. In this paper, a novel method for scalable coding of encrypted images using the Enhanced Block Truncation Code (EBTC) is proposed. The raw image is compressed using EBTC and then encrypted using a pseudo-random number (PSRN) at the transmitter, and the key is disseminated to the receiver. The transmitted image is decrypted at the receiver using the PSRN key. Finally, the output image is reconstructed using EBTC, scaled by a factor of 2 using the bilinear interpolation technique. The proposed system gives a better PSNR, compression ratio, and storage requirement than existing techniques such as Hadamard, DMMBTC, and BTC.
APA, Harvard, Vancouver, ISO, and other styles
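The Block Truncation Code that EBTC builds on can be sketched compactly. This is the classic textbook BTC, which represents each block by a bitmap plus two levels chosen to preserve the block's mean and variance; the paper's enhanced variant adds further steps not shown here.

```python
def btc_block(block):
    """Classic Block Truncation Coding of one block, given as a flat pixel list.
    Returns (low, high, bitmap) preserving the block mean and variance.
    (A textbook BTC sketch, not the paper's Enhanced BTC.)"""
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    bitmap = [1 if p >= mean else 0 for p in block]
    q = sum(bitmap)                 # number of pixels mapped to the high level
    if q in (0, n):                 # uniform block: one level suffices
        return round(mean), round(mean), bitmap
    sd = var ** 0.5
    low = round(mean - sd * (q / (n - q)) ** 0.5)
    high = round(mean + sd * ((n - q) / q) ** 0.5)
    return low, high, bitmap

def btc_decode(low, high, bitmap):
    """Reconstruct the block from its two levels and bitmap."""
    return [high if b else low for b in bitmap]
```

Each block thus costs one bit per pixel plus two level values, which is where BTC's fixed compression ratio comes from.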
18

Li, Ren Chong, Yi Long You, and Feng Xiang You. "Research of Image Processing Based on Lifting Wavelet Transform." Applied Mechanics and Materials 263-266 (December 2012): 2502–9. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2502.

Full text
Abstract:
This paper studies image processing problems based on the lifting wavelet transform. A complete digital image is coded and decoded using the W97-2 wavelet basis for the wavelet transform, combined with embedded zerotree wavelet coding and binary arithmetic coding, and lossless compression is carried out on the international standard test images. Experimental results show that graphics and image processing reach a higher level when wavelet analysis is combined with image processing.
APA, Harvard, Vancouver, ISO, and other styles
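The lifting scheme mentioned above can be illustrated with its simplest instance, the Haar wavelet, whose predict and update steps are each a one-line pass. The paper's W97-2 basis uses additional lifting steps; this sketch only shows the mechanism, with names of our choosing.

```python
def haar_lift(signal):
    """One level of the Haar wavelet via lifting (predict + update).
    `signal` must have even length."""
    evens = signal[0::2]
    odds = signal[1::2]
    detail = [o - e for o, e in zip(odds, evens)]        # predict: odd from even
    approx = [e + d / 2 for e, d in zip(evens, detail)]  # update: preserve the mean
    return approx, detail

def haar_unlift(approx, detail):
    """Invert the lifting steps in reverse order to recover the signal exactly."""
    evens = [a - d / 2 for a, d in zip(approx, detail)]
    odds = [e + d for e, d in zip(evens, detail)]
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out
```

Because each lifting step is trivially invertible, the transform reconstructs the input exactly, which is what makes lifting suitable for the lossless compression described in the abstract.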
19

HU, XIYUAN, SILONG PENG, and WEN-LIANG HWANG. "MULTIPLE COMPONENT PREDICTIVE CODING OF IMAGES." International Journal of Wavelets, Multiresolution and Information Processing 11, no. 02 (March 2013): 1350012. http://dx.doi.org/10.1142/s0219691313500124.

Full text
Abstract:
The conventional multiple component image compression approach separates the input image into several components, each of which is predicted and encoded independently. This approach creates redundancy because the prediction methods as well as the residual subcomponents must be transmitted. In this paper, we propose a new multiple-component predictive coding framework. First, we separate the reconstructed image into several subcomponents. Then, we use the previously encoded subcomponent to predict the current block, and then combine the prediction residuals of each subcomponent. To separate an image into multiple subcomponents, we designed a fast operator-based image separation algorithm. The numerical results demonstrate that the algorithm outperforms the H.264/AVC intra-frame prediction algorithm and the JPEG2000 algorithm on images with ample textures.
APA, Harvard, Vancouver, ISO, and other styles
20

Ibáñez-Berganza, Miguel, Carlo Lucibello, Luca Mariani, and Giovanni Pezzulo. "Information-theoretical analysis of the neural code for decoupled face representation." PLOS ONE 19, no. 1 (January 26, 2024): e0295054. http://dx.doi.org/10.1371/journal.pone.0295054.

Full text
Abstract:
Processing faces accurately and efficiently is a key capability of humans and other animals that engage in sophisticated social tasks. Recent studies reported a decoupled coding for faces in the primate inferotemporal cortex, with two separate neural populations coding for the geometric position of (texture-free) facial landmarks and for the image texture at fixed landmark positions, respectively. Here, we formally assess the efficiency of this decoupled coding by appealing to the information-theoretic notion of description length, which quantifies the amount of information that is saved when encoding novel facial images with a given precision. We show that although decoupled coding describes the facial images in terms of two sets of principal components (of landmark shape and image texture), it is more efficient (i.e., yields more information compression) than the encoding in terms of the image principal components only, which corresponds to the widely used eigenface method. The advantage of decoupled coding over eigenface coding increases with image resolution and is especially prominent when coding variants of training set images that only differ in facial expressions. Moreover, we demonstrate that decoupled coding entails better performance in three different tasks: the representation of facial images, the (daydream) sampling of novel facial images, and the recognition of facial identities and gender. In summary, our study provides a first-principles perspective on the efficiency and accuracy of the decoupled coding of facial stimuli reported in the primate inferotemporal cortex.
APA, Harvard, Vancouver, ISO, and other styles
21

Frajka, Tamás. "Residual image coding for stereo image compression." Optical Engineering 42, no. 1 (January 1, 2003): 182. http://dx.doi.org/10.1117/1.1526492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Li, Feng. "Simulation of Video Image Fault Tolerant Coding Transmission in Digital Multimedia." Mathematical Problems in Engineering 2022 (September 13, 2022): 1–7. http://dx.doi.org/10.1155/2022/4657091.

Full text
Abstract:
To effectively improve the quality of video image transmission, this paper proposes a method for fault-tolerant coding of digital multimedia video images. Fault-tolerant coded transmission requires sparse decomposition of the video image to obtain a linear representation of the image. Traditional fault-tolerant coding methods are based on human visual characteristics but ignore this linear representation, which leads to unsatisfactory coding and transmission. Here, a fault-tolerant coding method based on the wavelet transform and vector quantization is proposed to decompose and reconstruct digital multimedia video images. The smoothness of the wavelet transform removes visual redundancy, and the decomposed image is vector quantized. The mean-square-deviation method and a similar scalar optimal quantization method are used to select and calculate the image vectors, construct an overcomplete dictionary for the video image, and normalize it; the image is then sparsely decomposed with asymmetric atoms to obtain its linear representation. From these operations, the distribution range and law of the pixels are obtained and fault-tolerant coding is realized. The experimental results show that at 15 iterations, with the same CR, the PSNR increases by 8.7%, coding is 23.7% faster, and decoding is 15% faster. The proposed method improves not only the speed of fault-tolerant coding but also the quality of video image transmission.
APA, Harvard, Vancouver, ISO, and other styles
23

Prof. Sathish. "Light Field Image Coding with Image Prediction in Redundancy." Journal of Soft Computing Paradigm 2, no. 3 (July 21, 2020): 160–67. http://dx.doi.org/10.36548/jscp.2020.3.003.

Full text
Abstract:
The proposed work involves a hybrid data representation using efficient light field coding. Existing light field coding solutions are implemented using sub-aperture or micro-images; however, the intrinsic redundancy in light field images is not completely exploited. This paper presents a hybrid data representation that exploits four major types of redundancy. For each coding block, the most predominant redundancy is exploited to find the optimal coding solution with maximum flexibility. To show how efficient the hybrid representation is, we propose a combination of a pseudo-video-sequence coding approach with pixel prediction methods. The experimental results show a positive bit-rate saving compared to similar methods. The proposed method also outperforms other coding algorithms, such as WaSP and MuLE, on an HEVC-based benchmark.
APA, Harvard, Vancouver, ISO, and other styles
24

Sadeeq, Haval Tariq, Thamer Hassan Hameed, Abdo Sulaiman Abdi, and Ayman Nashwan Abdulfatah. "Image Compression Using Neural Networks: A Review." International Journal of Online and Biomedical Engineering (iJOE) 17, no. 14 (December 14, 2021): 135–53. http://dx.doi.org/10.3991/ijoe.v17i14.26059.

Full text
Abstract:
Computer images consist of huge amounts of data and thus require substantial memory space. A compressed image requires less memory space and less transmission time. Image and video coding technology has evolved steadily in recent years. However, given the popularization of image and video acquisition systems, the growth rate of image data far exceeds the growth of compression ratios. It is generally accepted, in particular, that further improvement of coding efficiency within the conventional hybrid coding framework is increasingly challenging. An exciting new image compression solution is offered by deep convolutional neural networks (CNNs), which in recent years have revived neural networks and achieved significant success both in artificial intelligence and in signal processing. In this paper we present a systematic, detailed, and current analysis of neural-network-based image compression techniques, covering the evolution and growth of these methods. In particular, end-to-end frameworks based on neural networks are reviewed, revealing fascinating explorations of frameworks/standards for next-generation image coding. The most important studies are highlighted, and future trends in image coding using neural networks are envisaged.
APA, Harvard, Vancouver, ISO, and other styles
25

Grimes, David B., and Rajesh P. N. Rao. "Bilinear Sparse Coding for Invariant Vision." Neural Computation 17, no. 1 (January 1, 2005): 47–73. http://dx.doi.org/10.1162/0899766052530893.

Full text
Abstract:
Recent algorithms for sparse coding and independent component analysis (ICA) have demonstrated how localized features can be learned from natural images. However, these approaches do not take image transformations into account. We describe an unsupervised algorithm for learning both localized features and their transformations directly from images using a sparse bilinear generative model. We show that from an arbitrary set of natural images, the algorithm produces oriented basis filters that can simultaneously represent features in an image and their transformations. The learned generative model can be used to translate features to different locations, thereby reducing the need to learn the same feature at multiple locations, a limitation of previous approaches to sparse coding and ICA. Our results suggest that by explicitly modeling the interaction between local image features and their transformations, the sparse bilinear approach can provide a basis for achieving transformation-invariant vision.
APA, Harvard, Vancouver, ISO, and other styles
26

KAMAL, A. R. NADIRA BANU, S. THAMARAI SELVI, and HENRY SELVARAJ. "ITERATION-FREE FRACTAL CODING FOR IMAGE COMPRESSION USING GENETIC ALGORITHM." International Journal of Computational Intelligence and Applications 07, no. 04 (December 2008): 429–46. http://dx.doi.org/10.1142/s1469026808002399.

Full text
Abstract:
An iteration-free fractal coding method for image compression is proposed using a genetic algorithm (GA) with an elitist model. The proposed methodology reduces coding time by minimizing intensive computations. The technique uses the GA to greatly decrease the search space for finding self-similarities in the given image. The performance of the proposed method is compared with iteration-free fractal image coding using vector quantization, for both single-block and quadtree partitions, on benchmark images, in terms of image quality and coding time. The proposed method is observed to achieve excellent image quality with reduced computing time.
APA, Harvard, Vancouver, ISO, and other styles
27

Tarchouli, Marwa, Marc Riviere, Thomas Guionnet, Wassim Hamidouche, Meriem Outtas, and Olivier Deforges. "Patch-Based Image Learned Codec using Overlapping." Signal & Image Processing : An International Journal 14, no. 1 (February 27, 2023): 1–21. http://dx.doi.org/10.5121/sipij.2023.14101.

Full text
Abstract:
End-to-end learned image and video codecs, based on auto-encoder architectures, adapt naturally to image resolution thanks to their convolutional structure. However, when coding high-resolution images, these codecs face hardware problems such as memory saturation. This paper proposes a patch-based image coding solution based on an end-to-end learned model, which aims to remedy the hardware limitation while maintaining the same quality as full-resolution image coding. Our method consists of coding overlapping patches of the image and reconstructing them into a decoded image using a weighting function. This approach is on par with full-resolution image coding using an end-to-end learned model, and even slightly outperforms it, while being adaptable to different memory sizes. Moreover, this work presents a full study of the effect of patch size on the solution's performance, and consequently determines the best patch resolution in terms of coding time and coding efficiency. Finally, the method introduced in this work is compatible with any learned codec based on a convolutional/deconvolutional autoencoder architecture, without retraining the model.
APA, Harvard, Vancouver, ISO, and other styles
28

Götting, Detlef, Achim Ibenthal, and Rolf-Rainer Grigat. "Fractal Image Coding and Magnification Using Invariant Features." Fractals 05, supp01 (April 1997): 65–74. http://dx.doi.org/10.1142/s0218348x97000644.

Full text
Abstract:
Fractal image coding has significant potential for the compression of still and moving images and also for scaling up images. The objective of our investigations was twofold. First, compression ratios of factor 60 and more for still images have been achieved, yielding a better quality of the decoded picture material than standard methods like JPEG. Second, image enlargement up to factors of 16 per dimension has been realized by means of fractal zoom, leading to natural and sharp representation of the scaled image content. Quality improvements were achieved due to the introduction of an extended luminance transform. In order to reduce the computational complexity of the encoding process, a new class of simple and suited invariant features is proposed, facilitating the search in the multidimensional space spanned by image domains and affine transforms.
APA, Harvard, Vancouver, ISO, and other styles
29

Hu, Yu-Chen, Chun-Chi Lo, Wu-Lin Chen, and Chia-Hsien Wen. "Joint image coding and image authentication based on absolute moment block truncation coding." Journal of Electronic Imaging 22, no. 1 (January 22, 2013): 013012. http://dx.doi.org/10.1117/1.jei.22.1.013012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Hashimoto, Hideo. "Introduction of image data compression. (5). Image data coding algorithm. II. Transform coding." Journal of the Institute of Television Engineers of Japan 43, no. 10 (1989): 1145–52. http://dx.doi.org/10.3169/itej1978.43.1145.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hatori, Yoshinori. "Introduction to image data compression. (4). Image data coding algorithm. I. Predictive coding." Journal of the Institute of Television Engineers of Japan 43, no. 9 (1989): 949–56. http://dx.doi.org/10.3169/itej1978.43.949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Mahajan, Vipul R., and Alka Khade. "A Survey: Content Based Image Retrieval using Block Truncation Coding." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 12 (January 3, 2018): 46. http://dx.doi.org/10.23956/ijarcsse.v7i12.495.

Full text
Abstract:
A new approach to indexing color images uses features extracted from error-diffusion block truncation coding (EDBTC). EDBTC produces two color quantizers and a bitmap image, which are further processed using vector quantization (VQ) to create the image feature descriptor. Two features are presented, namely the color histogram feature (CHF) and the bit pattern histogram feature (BHF), to measure the similarity between a query image and a target image in the database. The CHF and BHF are calculated from the VQ-indexed color quantizers and the VQ-indexed bitmap image, respectively. The distances calculated from the CHF and BHF can be used to measure the similarity between two images.
APA, Harvard, Vancouver, ISO, and other styles
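The CHF/BHF similarity measurement described above ultimately reduces to comparing normalized feature histograms. A minimal sketch, assuming a simple L1 distance since the abstract does not name the exact metric; function names are ours.

```python
def normalize(hist):
    """Normalize a raw count histogram so its bins sum to 1."""
    total = sum(hist)
    return [h / total for h in hist]

def l1_distance(hist_a, hist_b):
    """Compare two normalized feature histograms (e.g. CHF or BHF vectors).
    Smaller distance means more similar images."""
    return sum(abs(a - b) for a, b in zip(hist_a, hist_b))
```

In a retrieval setting, the query image's CHF and BHF would be compared against every database image's histograms, and the images with the smallest combined distance returned as matches.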
33

Wu, Hao. "Image Self-Coding Algorithm Based on IoT Perception Layer." Mobile Information Systems 2022 (August 3, 2022): 1–11. http://dx.doi.org/10.1155/2022/9910655.

Full text
Abstract:
With the quick growth of IoT-related industries in recent years, multimedia content such as digital images and videos has also shown explosive growth. In the sensing layer of the three-layer IoT architecture, sensors are the most critical part; they mainly sense the state of the environment. In this paper, an image self-coding algorithm based on the IoT perception layer is proposed. There is currently no specific encoding algorithm for the pictures collected by the IoT perception layer, which results in poor search results for the network images collected by perception-layer sensors. A deep convolutional neural network image self-coding algorithm based on the IoT perception layer combines prior knowledge with deep convolutional neural networks to increase the discriminative ability of images while preserving the information of the images themselves, improving image search accuracy. In the experiments, a block search algorithm is used for image registration to reduce energy consumption, an improved absolute-difference-sum algorithm improves the accuracy of image registration, and a progressive weighted-average algorithm is used to stitch the images. After image stitching, the communication volume with the base station is reduced, which can effectively reduce the network load by 60%.
APA, Harvard, Vancouver, ISO, and other styles
34

LU, JIAN, JIAPENG TIAN, CHEN XU, and YURU ZOU. "A DICTIONARY LEARNING APPROACH FOR FRACTAL IMAGE CODING." Fractals 27, no. 02 (March 2019): 1950020. http://dx.doi.org/10.1142/s0218348x19500208.

Full text
Abstract:
In recent years, sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, this paper investigates incorporating a dictionary learning approach into fractal image coding, which leads to a new model containing three terms: a patch-based sparse representation prior over a learned dictionary, a quadratic term measuring the closeness of the underlying image to a fractal image, and a data-fidelity term capturing the statistics of Gaussian noise. After the dictionary is learned, the resulting optimization problem with fractal coding can be solved effectively. The new method can not only efficiently recover noisy images but also admirably achieve noiseless fractal image coding/compression. Experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, structural similarity index, and mean absolute error, the proposed method significantly outperforms the state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
35

Abdul-Amir, Saied O., and Khamies K. Hasan. "DCT/DPCM Hybrid Coding for Interlaced Image Compression." Tikrit Journal of Engineering Sciences 16, no. 1 (March 31, 2009): 121–32. http://dx.doi.org/10.25130/tjes.16.1.09.

Full text
Abstract:
By their nature, picture elements in local regions of images are highly correlated with one another. In such cases, image compression techniques are introduced to reduce the amount of data needed to represent the same information, either exactly or approximately. In this work a DCT/DPCM hybrid approach has been designed and implemented for interlaced images. The image signal is first transformed row-wise using the discrete cosine transform (DCT), and a differential pulse code modulation (DPCM) scheme is then used column-wise to obtain the difference signal. For still images the same 3-bit quantizer is employed, which makes the quantization process easier. For interlaced images a 3-bit quantizer is used for the odd field and a 2-bit quantizer for the even field, since the difference signal of the even field is very small. A compression ratio of about 13:1 is obtained for interlaced images. Objective measurements showed a high peak signal-to-noise ratio without noticeable impairment.
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Fu, Wen Wei Fu, and Hui Tang. "Encoding and Reconstruction about Video Image via Compressed Sensing." Advanced Materials Research 765-767 (September 2013): 2617–20. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.2617.

Full text
Abstract:
A new method for encoding and reconstructing high-quality video images using the theory of compressed sensing (CS) is given in this paper. First, each video frame is transformed into the DCT domain. An image coding and decoding process using CS theory is then given: I-frames in the image sequence are coded in frame coding mode after CS sampling of the DCT coefficients, and for P-frames the difference vector dv of the t-th frame is sampled. CS reconstruction and the IDCT are performed during decoding, and finally the high-quality reconstructed image is obtained. The experimental results show that, for images with sparseness, an image coding and decoding system integrating CS theory and its methods can be used to obtain reconstructed images of high quality, and compared with the plain DCT/IDCT method it offers some improvement in PSNR for general images.
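The CS recovery step in this kind of scheme — reconstructing sparse (e.g. DCT-domain) coefficients from a small number of random measurements — can be illustrated with a generic greedy solver. This is a minimal sketch of orthogonal matching pursuit under stated assumptions (Gaussian measurement matrix with unit-norm columns, sparsity k known in advance), not the reconstruction algorithm of the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily estimate a k-sparse x with y ≈ A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # re-fit on the support
        residual = y - sub @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

When the measurement matrix is sufficiently incoherent and the signal is truly sparse, the support is identified exactly and the least-squares re-fit recovers the coefficient values.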
APA, Harvard, Vancouver, ISO, and other styles
37

Syuhada, Ibnu. "Implementasi Algoritma Arithmetic Coding dan Sannon-Fano Pada Kompresi Citra PNG." TIN: Terapan Informatika Nusantara 2, no. 9 (February 25, 2022): 527–32. http://dx.doi.org/10.47065/tin.v2i9.1027.

Full text
Abstract:
The rapid development of technology plays an important role in the rapid exchange of information. Sending information in the form of images still poses problems, in part because of the large size of images, so one solution is compression. In this work, we implement and compare the performance of the Arithmetic Coding and Shannon-Fano algorithms by calculating the compression ratio, compressed file size, and the speed of the compression and decompression processes. Based on all test results, the Arithmetic Coding algorithm produces an average compression ratio of 62.88% versus 61.73% for Shannon-Fano, and Arithmetic Coding averages 0.072449 seconds for image compression against 0.077838 seconds for Shannon-Fano. The Shannon-Fano algorithm averages 0.028946 seconds for decompression and the Arithmetic Coding algorithm 0.034169 seconds. The decompressed images from both algorithms match the original images. It can be concluded from the test results that the Arithmetic Coding algorithm is more efficient at compressing *.png images than the Shannon-Fano algorithm, although in decompression Shannon-Fano is slightly faster than Arithmetic Coding.
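The classical Shannon-Fano construction benchmarked in this abstract can be sketched as follows — a generic illustration of the code-assignment step (symbols sorted by frequency, then recursively split into halves of roughly equal weight), not the implementation the authors compared:

```python
def shannon_fano(freqs):
    """Assign binary codewords by recursively splitting the symbols,
    sorted by frequency, into two groups of roughly equal total weight."""
    codes = {}

    def split(group, prefix):
        if len(group) == 1:
            codes[group[0]] = prefix or "0"
            return
        total, acc, cut = sum(freqs[s] for s in group), 0, 0
        for s in group[:-1]:
            acc += freqs[s]
            cut += 1
            if acc >= total / 2:     # first point where the halves balance
                break
        split(group[:cut], prefix + "0")
        split(group[cut:], prefix + "1")

    split(sorted(freqs, key=freqs.get, reverse=True), "")
    return codes
```

A compression ratio like the ones reported above is then just the total encoded bit count divided by the original bit count (8 bits per symbol for byte data).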
APA, Harvard, Vancouver, ISO, and other styles
38

Salman, Nassir H. "New Image Compression/Decompression Technique Using Arithmetic Coding Algorithm." Journal of Zankoy Sulaimani - Part A 19, no. 1 (October 16, 2016): 263–72. http://dx.doi.org/10.17656/jzs.10604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Khaitu, Shree Ram, and Sanjeeb Prasad Panday. "Fractal Image Compression Using Canonical Huffman Coding." Journal of the Institute of Engineering 15, no. 1 (February 16, 2020): 91–105. http://dx.doi.org/10.3126/jie.v15i1.27718.

Full text
Abstract:
Image compression techniques have become a very important subject with the rapid growth of multimedia applications. The main motivations behind image compression are efficient, lossless transmission as well as storage of digital data. Image compression techniques are of two types: lossless and lossy. Lossy compression techniques are applied to natural images, where a minor loss of data is acceptable. Entropy encoding is a lossless compression scheme that is independent of the particular features of the medium, as it has its own unique codes and symbols. Huffman coding is an entropy coding approach for efficient transmission of data. This paper presents a fractal image compression method that searches for the best replacement blocks for the original image based on fractal features. Canonical Huffman coding, which provides better fractal compression than arithmetic coding, is used in this paper. The results obtained show that the canonical-Huffman-based fractal compression technique increases the speed of compression and achieves better PSNR as well as a better compression ratio than standard Huffman coding.
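Canonical Huffman coding, as used in the abstract above, discards the Huffman tree and derives the codewords from the code lengths alone. A minimal sketch of the standard canonical assignment (illustrative, not the paper's implementation):

```python
def canonical_codes(lengths):
    """Given {symbol: code length} satisfying Kraft's inequality, assign
    canonical Huffman codewords: symbols are ordered by (length, symbol);
    consecutive codewords increment, shifting left when the length grows."""
    code, prev, out = 0, 0, {}
    for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= ln - prev               # append zeros for longer codes
        out[sym] = format(code, f"0{ln}b")
        code += 1
        prev = ln
    return out
```

Because a decoder can rebuild the same table from the lengths alone, canonical codes reduce the side-information that must be stored with the compressed stream.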
APA, Harvard, Vancouver, ISO, and other styles
40

Abdelwahab, Ahmed A. "Inter-Image Similarity-Based Fast Adaptive Block Size Vector Quantizer for Image Coding." International Journal of Image and Graphics 17, no. 03 (July 2017): 1750017. http://dx.doi.org/10.1142/s0219467817500176.

Full text
Abstract:
Block coding is well known in the digital image coding literature; vector quantization and transform coding are examples of well-known block coding techniques. Different images have many similar spatial blocks, introducing inter-image similarity, and the smaller the block size, the higher the inter-image similarity. In this paper, a new block coding algorithm based on inter-image similarity is proposed, where it is claimed that any original image can be reconstructed from the blocks of any other image. The proposed algorithm is simply a vector quantizer without the need for a codebook design algorithm; a fast full-search algorithm based on matrix operations minimizes the root-mean-square error distortion measure to find the code block most similar to the input block. The proposed algorithm is applied in both the spatial and transform domains with adaptive code block size. In the spatial domain, the encoding process achieves fidelity as high as 36.07 dB at a bit rate of 2.22 bpp, while in the transform domain the encoded image has good fidelity of 34.94 dB at a bit rate as low as 0.72 bpp on average. Moreover, the code image can be used as a secret key to provide secure communications.
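The matrix-operations full search described above can be sketched in a few lines — a generic nearest-code-block search under the RMSE measure, with the block size and data chosen purely for illustration:

```python
import numpy as np

def nearest_code_block(block, code_blocks):
    """Full search over all code blocks: return the index of (and distance to)
    the code block with minimum RMSE, computed in one vectorized operation."""
    diffs = code_blocks - block[None, :]          # (N, k) residuals
    rmse = np.sqrt(np.mean(diffs ** 2, axis=1))   # one distance per code block
    best = int(np.argmin(rmse))
    return best, float(rmse[best])
```

Vectorizing the search this way trades memory for speed: all N distances are computed in a single pass instead of a Python-level loop over code blocks.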
APA, Harvard, Vancouver, ISO, and other styles
41

LIAN, SHIGUO, XI CHEN, and DENGPAN YE. "SECURE FRACTAL IMAGE CODING BASED ON FRACTAL PARAMETER ENCRYPTION." Fractals 17, no. 02 (June 2009): 149–60. http://dx.doi.org/10.1142/s0218348x09004405.

Full text
Abstract:
In recent work, various fractal image coding methods have been reported, which exploit the self-similarity of images to reduce their size. However, until now, no solutions for the security of fractal-encoded images have been provided. In this paper, a secure fractal image coding scheme is proposed and evaluated, which encrypts some of the fractal parameters during fractal encoding and thus produces an encrypted and encoded image that can be recovered only with the correct key. To balance security and efficiency, only suitable parameters are selected and encrypted, based on an investigation of the properties of the various fractal parameters, including parameter space, parameter distribution and parameter sensitivity. The encryption process does not change the file format, remains perceptually secure, and costs little time or computational resources. These properties make it suitable for secure image encoding or transmission.
APA, Harvard, Vancouver, ISO, and other styles
42

Et. al., S. Anitha,. "Image Compression based on Octagon Based Intra Prediction." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 10 (June 7, 2021): 6144–51. http://dx.doi.org/10.17762/turcomat.v12i10.5452.

Full text
Abstract:
Recently image coding has been an important research area in many fields. Various compression algorithms have been developed in different ways for image compression. One of the ways in image coding is prediction based image coding. This paper proposes a novel technique for finding the prediction of a current pixel. Instead of traditional four mode prediction, this paper proposes an eight mode prediction scheme. The proposed method is tested with nine traditional images and compared with four recent methods. Experimental results substantially proved that the proposed method is better than recent methods..
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Yuer, Zhong Jie Zhu, and Wei Dong Chen. "HVS-Based Low Bit-Rate Image Compression." Applied Mechanics and Materials 511-512 (February 2014): 441–46. http://dx.doi.org/10.4028/www.scientific.net/amm.511-512.441.

Full text
Abstract:
Image coding and compression is one of the key techniques in image signal processing. However, most existing coding methods, such as JPEG, employ a similar hybrid architecture to compress images and videos, and after many years of development it is difficult to improve coding performance further. In addition, most existing image compression algorithms are designed to minimize the difference between the original and decompressed images under pixel-wise distortion metrics, such as MSE and PSNR, which do not consider HVS features and cannot guarantee good perceptual quality of reconstructed images, especially at low bit rates. In this paper, we propose a novel scheme for low bit-rate image compression. First, the original image is quantized to a binary image based on heat transfer theory. Second, the bit sequence of the binary image is divided into several subsets, each of which is assigned a priority based on the rate-distortion principle. Third, the subsets with the highest priorities are selected subject to the given bit rate. Finally, context-based binary arithmetic coding is employed to encode the selected subsets and produce the final compressed stream. At the decoder, the image is decoded and reconstructed based on anisotropic diffusion. Experiments are conducted and provide convincing results.
APA, Harvard, Vancouver, ISO, and other styles
44

Xin, Gangtao, and Pingyi Fan. "Soft Compression for Lossless Image Coding Based on Shape Recognition." Entropy 23, no. 12 (December 14, 2021): 1680. http://dx.doi.org/10.3390/e23121680.

Full text
Abstract:
Soft compression is a lossless image compression method that aims to eliminate coding redundancy and spatial redundancy simultaneously by adopting shapes to encode an image. In this paper, we propose a compressible indicator function for images, which gives a threshold on the average number of bits required to represent a location and can be used to illustrate the working principle. We investigate and analyze soft compression for binary, gray and multi-component images with specific algorithms and compressible indicator values. In terms of compression ratio, the soft compression algorithm outperforms the popular classical standards PNG and JPEG2000 in lossless image compression. It is expected that the bandwidth and storage space needed when transmitting and storing the same kind of images (such as medical images) can be greatly reduced by applying soft compression.
APA, Harvard, Vancouver, ISO, and other styles
45

Hara, Yuki, and Tomonori Kawano. "Run-Length Encoding Graphic Rules Applied to DNA-Coded Images and Animation Editable by Polymerase Chain Reactions." Journal of Advanced Computational Intelligence and Intelligent Informatics 19, no. 1 (January 20, 2015): 5–10. http://dx.doi.org/10.20965/jaciii.2015.p0005.

Full text
Abstract:
We previously proposed novel designs for artificial genes as media for storing digitally compressed image data, specifically for biocomputing, by analogy to natural genes mainly used to encode proteins. A run-length encoding (RLE) rule was applied in DNA-based image data processing to form coding regions, and noncoding regions were created as space for designing biochemical editing. In the present study, we apply the RLE-based image-coding rule to the creation of DNA-based animation. This article consists of three parts: (i) a theoretical review of RLE-based image coding by DNA, (ii) a technical proposal for biochemical editing of DNA-coded images using the polymerase chain reaction, and (iii) a minimal demonstration of DNA-based animation using simple model images encoded on short DNA molecules.
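The RLE rule underlying the DNA image coding above can be illustrated generically. This is a minimal sketch of run-length encoding on a row of pixel values, not the authors' DNA encoding itself:

```python
def rle_encode(pixels):
    """Collapse a pixel sequence into (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the pixel sequence."""
    return [v for v, n in runs for _ in range(n)]
```

The encoding is lossless and works best on images with long constant runs, which is what makes it a natural fit for simple model images.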
APA, Harvard, Vancouver, ISO, and other styles
46

Fan Zhang, Lin Ma, Songnan Li, and King Ngi Ngan. "Practical Image Quality Metric Applied to Image Coding." IEEE Transactions on Multimedia 13, no. 4 (August 2011): 615–24. http://dx.doi.org/10.1109/tmm.2011.2134079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Butera, W., and V. M. Bove. "The coding ecology: image coding via competition among experts." IEEE Transactions on Circuits and Systems for Video Technology 10, no. 7 (2000): 1049–58. http://dx.doi.org/10.1109/76.875509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Saito, Takahiro, and Cha Keon Cheong. "Toward Formation of a New Concept of Image Coding. 'Structure Image Modeling'+'Image Reconstruction from Compressed Partial Information'='Image Coding'?" Journal of the Institute of Television Engineers of Japan 46, no. 9 (1992): 1123–33. http://dx.doi.org/10.3169/itej1978.46.1123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Bocharova, I. E., A. V. Porov, T. S. Bondarev, and O. V. Finkelshteyn. "Low-complexity lossless image coding." Automatic Control and Computer Sciences 48, no. 5 (September 2014): 303–11. http://dx.doi.org/10.3103/s0146411614050034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Abhayaratne, G. C. K. "Scalable near-lossless image coding." Journal of Electronic Imaging 15, no. 4 (October 1, 2006): 043008. http://dx.doi.org/10.1117/1.2360694.

Full text
APA, Harvard, Vancouver, ISO, and other styles