Academic literature on the topic 'Content Based Texture Coding (CBTC)'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Content Based Texture Coding (CBTC).'


Journal articles on the topic "Content Based Texture Coding (CBTC)"

1

Hosur, Prabhudev, and Rolando Carrasco. "Enhanced Frame-Based Video Coding to Support Content-Based Functionalities." International Journal of Computational Intelligence and Applications 6, no. 2 (June 2006): 161–75. http://dx.doi.org/10.1142/s1469026806001939.

Abstract:
This paper presents the enhanced frame-based video coding scheme. The input source video to the enhanced frame-based video encoder consists of a rectangular-sized video and shapes of arbitrarily shaped objects on video frames. The rectangular frame texture is encoded by the conventional frame-based coding technique and the video object's shape is encoded using the contour-based vertex coding. It is possible to achieve several useful content-based functionalities by utilizing the shape information in the bitstream at the cost of a very small overhead to the bit-rate.
2

Zhang, Qiuwen, Shuaichao Wei, and Rijian Su. "Low-Complexity Texture Video Coding Based on Motion Homogeneity for 3D-HEVC." Scientific Programming 2019 (January 15, 2019): 1–13. http://dx.doi.org/10.1155/2019/1574081.

Abstract:
Three-dimensional extension of the high efficiency video coding (3D-HEVC) is an emerging international video compression standard for multiview video system applications. Similar to HEVC, a computationally expensive mode decision is performed using all depth levels and prediction modes to select the least rate-distortion (RD) cost for each coding unit (CU). In addition, new tools and intercomponent prediction techniques have been introduced to 3D-HEVC for improving the compression efficiency of the multiview texture videos. These techniques, despite achieving the highest texture video coding efficiency, involve extremely high-complex procedures, thus limiting 3D-HEVC encoders in practical applications. In this paper, a fast texture video coding method based on motion homogeneity is proposed to reduce 3D-HEVC computational complexity. Because the multiview texture videos instantly represent the same scene at the same time (considering that the optimal CU depth level and prediction modes are highly multiview content dependent), it is not efficient to use all depth levels and prediction modes in 3D-HEVC. The motion homogeneity model of a CU is first studied according to the motion vectors and prediction modes from the corresponding CUs. Based on this model, we present three efficient texture video coding approaches, such as the fast depth level range determination, early SKIP/Merge mode decision, and adaptive motion search range adjustment. Experimental results demonstrate that the proposed overall method can save 56.6% encoding time with only trivial coding efficiency degradation.
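The screening idea behind this kind of fast mode decision can be sketched in a few lines. The threshold, the spread measure, and the restricted depth range below are illustrative assumptions, not the paper's actual criteria:

```python
import numpy as np

def motion_homogeneous(mvs, threshold=1.0):
    """Decide whether a CU's neighbourhood has homogeneous motion.

    `mvs` is an (N, 2) array of motion vectors gathered from spatially or
    inter-view corresponding CUs; motion is called homogeneous when the
    vectors' maximum deviation from their mean is below `threshold`
    (an assumed criterion for illustration).
    """
    mvs = np.asarray(mvs, dtype=np.float64)
    spread = np.abs(mvs - mvs.mean(axis=0)).max()
    return spread < threshold

def candidate_depths(mvs, max_depth=3):
    """Restrict the CU depth-level search: regions with homogeneous motion
    rarely benefit from fine CU splitting, so only shallow depths are
    tried; otherwise the full range is searched."""
    if motion_homogeneous(mvs):
        return range(0, 2)             # shallow depths 0-1 only
    return range(0, max_depth + 1)     # full search, depths 0-3
```

The real method additionally uses the neighbours' prediction modes for early SKIP/Merge decisions and adapts the motion search range, which this sketch omits.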
3

Chen, Yan-Hong, Chin-Chen Chang, Chia-Chen Lin, and Cheng-Yi Hsu. "Content-Based Color Image Retrieval Using Block Truncation Coding Based on Binary Ant Colony Optimization." Symmetry 11, no. 1 (December 27, 2018): 21. http://dx.doi.org/10.3390/sym11010021.

Abstract:
In this paper, we propose a content-based image retrieval (CBIR) approach using color and texture features extracted from block truncation coding based on binary ant colony optimization (BACOBTC). First, we present a near-optimized common bitmap scheme for BTC. Then, we convert the image to two color quantizers and a bitmap image utilizing BACOBTC. Subsequently, the color and texture features, i.e., the color histogram feature (CHF) and the bit pattern histogram feature (BHF), are extracted to measure the similarity between a query image and the target images in the database and retrieve the desired image. The performance of the proposed approach was compared with several earlier image-retrieval schemes. The results were evaluated in terms of precision-recall and average retrieval rate, and they showed that our approach outperformed the referenced approaches.
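The BTC building block this abstract rests on is the classic two-level quantizer: each block is reduced to a bitmap plus two gray levels chosen to preserve the block's mean and standard deviation. A minimal sketch (the binary ant colony bitmap optimization itself is not reproduced here):

```python
import numpy as np

def btc_encode_block(block):
    """Classic Block Truncation Coding of one grayscale block.

    Returns a binary bitmap plus two quantizer levels (a, b) chosen so
    that the reconstructed block preserves the sample mean and standard
    deviation of the original block.
    """
    mean = block.mean()
    std = block.std()
    bitmap = block >= mean          # True -> "high" pixels, False -> "low"
    q = int(bitmap.sum())           # number of high pixels
    m = block.size
    if q == 0 or q == m:            # flat block: both levels are the mean
        return bitmap, mean, mean
    a = mean - std * np.sqrt(q / (m - q))      # low quantizer level
    b = mean + std * np.sqrt((m - q) / q)      # high quantizer level
    return bitmap, a, b

def btc_decode_block(bitmap, a, b):
    """Reconstruct the block from its bitmap and quantizer pair."""
    return np.where(bitmap, b, a)
```

In a CBIR setting like the one above, the per-block quantizer pairs feed a color histogram feature and the bitmaps feed a bit-pattern histogram feature; those descriptor details are paper-specific and omitted here.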
4

Dumitras, A., and B. G. Haskell. "An Encoder–Decoder Texture Replacement Method With Application to Content-Based Movie Coding." IEEE Transactions on Circuits and Systems for Video Technology 14, no. 6 (June 2004): 825–40. http://dx.doi.org/10.1109/tcsvt.2004.828336.

5

Wang, Xing-Yuan, and Yahui Lang. "A Fast Fractal Encoding Method Based on Fractal Dimension." Fractals 17, no. 4 (December 2009): 459–65. http://dx.doi.org/10.1142/s0218348x09004491.

Abstract:
In this paper a fast fractal coding method based on fractal dimension is proposed. Image texture is an important element of image analysis and processing that describes the degree of surface irregularity. The fractal dimension of fractal theory can characterize image texture in a way consistent with the human visual system: the higher the fractal dimension, the rougher the corresponding surface, and vice versa. During encoding, all blocks of the given image are first classified into three classes using their fractal dimension, estimated by differential box counting, which is chosen specifically for texture analysis. Each range block then searches for its best match within the corresponding class. Since the search space is reduced and the classification operation is simple and computationally efficient, encoding speed is improved while the quality of the decoded image is preserved. Experiments show that, compared with full search, the proposed method greatly reduces the encoding time, obtains a good decoded image, and achieves a stable speedup ratio.
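Differential box counting, the estimator this method uses for block classification, can be sketched as follows. The grid sizes and the 256 gray-level range are assumptions for illustration, not the paper's exact parameters:

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16)):
    """Differential box-counting estimate of an image's fractal dimension.

    For each box size s the image is tiled into s x s cells; in each cell
    the number of gray-level boxes of height h spanned between the min and
    max pixel is counted. The dimension is the least-squares slope of
    log N(s) versus log(1/s).
    """
    img = np.asarray(img, dtype=np.float64)
    M = min(img.shape)
    G = 256.0                                    # assumed gray-level range
    logN, loginv = [], []
    for s in sizes:
        h = max(1.0, s * G / M)                  # box height in gray levels
        count = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                cell = img[i:i + s, j:j + s]
                count += int(np.ceil(cell.max() / h)
                             - np.ceil(cell.min() / h)) + 1
        logN.append(np.log(count))
        loginv.append(np.log(1.0 / s))
    slope, _ = np.polyfit(loginv, logN, 1)       # least-squares fit
    return slope
```

A perfectly flat image behaves like a smooth surface and comes out with dimension 2; rougher textures score higher, which is what makes the value usable as a coarse block classifier.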
6

Han, Xinying, Yang Wu, and Rui Wan. "A Method for Style Transfer from Artistic Images Based on Depth Extraction Generative Adversarial Network." Applied Sciences 13, no. 2 (January 8, 2023): 867. http://dx.doi.org/10.3390/app13020867.

Abstract:
Depth extraction generative adversarial network (DE-GAN) is designed for artistic work style transfer. Traditional style transfer models focus on extracting texture features and color features from style images through an autoencoding network by mixing texture features and color features using high-dimensional coding. In the aesthetics of artworks, the color, texture, shape, and spatial features of the artistic object together constitute the artistic style of the work. In this paper, we propose a multi-feature extractor to extract color features, texture features, depth features, and shape masks from style images with U-net, multi-factor extractor, fast Fourier transform, and MiDas depth estimation network. At the same time, a self-encoder structure is used as the content extraction network core to generate a network that shares style parameters with the feature extraction network and finally realizes the generation of artwork images in three-dimensional artistic styles. The experimental analysis shows that compared with other advanced methods, DE-GAN-generated images have higher subjective image quality, and the generated style pictures are more consistent with the aesthetic characteristics of real works of art. The quantitative data analysis shows that images generated using the DE-GAN method have better performance in terms of structural features, image distortion, image clarity, and texture details.
7

Deep, G., J. Kaur, Simar Preet Singh, Soumya Ranjan Nayak, Manoj Kumar, and Sandeep Kautish. "MeQryEP: A Texture Based Descriptor for Biomedical Image Retrieval." Journal of Healthcare Engineering 2022 (April 11, 2022): 1–20. http://dx.doi.org/10.1155/2022/9505229.

Abstract:
Image texture analysis is a dynamic area of research in computer vision and image processing, with applications ranging from medical image analysis to image segmentation to content-based image retrieval and beyond. “Quinary encoding on mesh patterns (MeQryEP)” is a new approach to extracting texture features for indexing and retrieval of biomedical images, which is implemented in this work. An extension of the previous study, this research investigates the use of local quinary patterns (LQP) on mesh patterns in three different orientations. To encode the gray scale relationship between the central pixel and its surrounding neighbors in a two-dimensional (2D) local region of an image, binary and nonbinary coding, such as local binary patterns (LBP), local ternary patterns (LTP), and LQP, are used, while the proposed strategy uses three selected directions of mesh patterns to encode the gray scale relationship between the surrounding neighbors for a given center pixel in a 2D image. An innovative aspect of the proposed method is that it makes use of mesh image structure quinary pattern features to encode additional spatial structure information, resulting in better retrieval. On three different kinds of benchmark biomedical data sets, analyses have been completed to assess the viability of MeQryEP. LIDC-IDRI-CT and VIA/I–ELCAP-CT are the lung image databases based on computed tomography (CT), while OASIS-MRI is a brain database based on magnetic resonance imaging (MRI). This method outperforms state-of-the-art texture extraction methods, such as LBP, LQEP, LTP, LMeP, LMeTerP, DLTerQEP, LQEQryP, and so on in terms of average retrieval precision (ARP) and average retrieval rate (ARR).
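The baseline local binary pattern (LBP) descriptor that this family of methods extends (not the proposed MeQryEP itself) can be sketched as:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary patterns on a 2D grayscale array.

    Each interior pixel gets an 8-bit code: a neighbour whose value is
    >= the centre contributes a 1 bit, scanned clockwise starting from
    the top-left neighbour.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:h - 1, 1:w - 1]
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes, the usual retrieval feature."""
    hist, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Ternary (LTP) and quinary (LQP) variants replace the single >= comparison with multi-threshold comparisons, and the mesh-pattern approach above compares selected neighbours with each other rather than with the centre; those extensions are not shown here.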
8

Xu, Jianming, Weichun Liu, Yang Qin, and Guangrong Xu. "Image Super-Resolution Reconstruction Method for Lung Cancer CT-Scanned Images Based on Neural Network." BioMed Research International 2022 (July 18, 2022): 1–10. http://dx.doi.org/10.1155/2022/3543531.

Abstract:
The super-resolution (SR) reconstruction of a single image is an important image synthesis task, especially for medical applications. This paper studies the application of image segmentation to lung cancer images and utilizes the power of deep learning for resolution reconstruction of lung cancer images. At present, the neural networks used for image segmentation and classification suffer from loss of information as it passes from one layer to a deeper layer. Commonly used loss functions include content-based reconstruction loss and the generative adversarial network. Sparse-coding single-image super-resolution reconstruction algorithms can easily produce incorrect geometric structure in the reconstructed image. In order to solve the problem of excessive smoothness and blurring of reconstructed image edges caused by the introduction of this self-similarity constraint, a two-layer reconstruction framework based on a smooth layer and a texture layer is proposed for the medical application of lung cancer. This method uses a reconstruction model constrained by a global nonzero gradient count to reconstruct the smooth layer. The proposed sparse coding method is used to reconstruct high-resolution texture images. Finally, global and local optimization models are used to further improve the quality of the reconstructed image. An adaptive multiscale remote sensing image super-resolution reconstruction network is designed. A selective kernel network and adaptive gating unit are integrated to extract and fuse features to obtain a preliminary reconstruction. Through the proposed dual-drive module, the feature-prior-driven loss and the task-driven loss are transmitted to the super-resolution network. The proposed work not only improves the subjective visual effect; robustness is also enhanced, with more accurate construction of edges. Statistical evaluators are used to test the viability of the proposed scheme.
9

Xing, Qiang, Jie Chen, Jieyu Liu, and Baifeng Song. "A Double Random Matrix Design Model for Fractal Art Patterns Based on Visual Characteristics." Mathematical Problems in Engineering 2022 (August 31, 2022): 1–11. http://dx.doi.org/10.1155/2022/5376587.

Abstract:
This paper adopts the method of visual characteristics of the double random matrix to conduct in-depth research and analysis on the design of fractal art patterns. For the practical application needs in fractal graphic design, a method is proposed to automatically extract the core base pattern based on fractal graphic content and generate a four-sided continuous pattern. The method first uses the Canny operator for edge detection to analyze the area of the main pattern. Then, it uses a grayscale cogeneration matrix to extract and analyze the graphic texture features, based on which the best splicing method is selected to splice the extracted pattern, and then, it achieves two splicing methods of flat row and staggered four-sided continuous pattern. The method has good practicality with low complexity and high versatility under the premise of ensuring the beauty of the generated four-sided continuous pattern. It can assist designers to design patterns, improve efficiency, and save design costs. In this paper, we improve the existing image segmentation methods, adopt two segmentation methods, namely quadratic tree segmentation and HV segmentation, propose a new local codebook selection strategy, and study the degree of self-adaptation of different methods to images in terms of segmentation methods and local codebook selection strategies. It makes the network unable to train efficiently on long text prediction problems. Finally, the improved algorithm of this paper is tested on the standard and living image libraries, and the experimental results show that the use of local codebooks makes the image coding speed significantly improved compared with the fixed fractal. In the face of the image to be retrieved, it is only necessary to perform the coding operation to obtain the fractal code to perform similarity matching, which can meet the requirement of real-time retrieval. Applying the improved distance formula, the search accuracy obtained on the test gallery is significantly better than that of the grayscale histogram algorithm.
10

Jenks, Robert A., Ashkan Vaziri, Ali-Reza Boloori, and Garrett B. Stanley. "Self-Motion and the Shaping of Sensory Signals." Journal of Neurophysiology 103, no. 4 (April 2010): 2195–207. http://dx.doi.org/10.1152/jn.00106.2009.

Abstract:
Sensory systems must form stable representations of the external environment in the presence of self-induced variations in sensory signals. It is also possible that the variations themselves may provide useful information about self-motion relative to the external environment. Rats have been shown to be capable of fine texture discrimination and object localization based on palpation by facial vibrissae, or whiskers, alone. During behavior, the facial vibrissae brush against objects and undergo deflection patterns that are influenced both by the surface features of the objects and by the animal's own motion. The extent to which behavioral variability shapes the sensory inputs to this pathway is unknown. Using high-resolution, high-speed videography of unconstrained rats running on a linear track, we measured several behavioral variables including running speed, distance to the track wall, and head angle, as well as the proximal vibrissa deflections while the distal portions of the vibrissae were in contact with periodic gratings. The measured deflections, which serve as the sensory input to this pathway, were strongly modulated both by the properties of the gratings and the trial-to-trial variations in head-motion and locomotion. Using presumed internal knowledge of locomotion and head-rotation, gratings were classified using short-duration trials (<150 ms) from high-frequency vibrissa motion, and the continuous trajectory of the animal's own motion through the track was decoded from the low frequency content. Together, these results suggest that rats have simultaneous access to low- and high-frequency information about their environment, which has been shown to be parsed into different processing streams that are likely important for accurate object localization and texture coding.

Dissertations / Theses on the topic "Content Based Texture Coding (CBTC)"

1

Jain, Anurag. "Content-Based Texture Analysis and Synthesis for Low Bit-Rate Video Coding Using Perceptual Models." Thesis, 2006. https://etd.iisc.ac.in/handle/2005/4990.

Abstract:
Determining perceptually irrelevant and redundant information from human point of view is one of the fundamental problems today that is limiting the performance of current video compression algorithms. The performance of the existing video compression standards is based on minimizing the cumulative sum of objective distortion, namely mean squared error (MSE), measured for each pixel. Recently there have been quite a few advancements made to understand human visual models and apply them for a compact representation at very low bitrates. However, most of these approaches offer advantages over a very limited range of input sequences using predefined models for analysis of static scene, human head, and human body. The existing video compression standards typically aim to increase the spectral flatness measure of the residue signal, by increasing the number of both spatial and temporal predictors. With the increase in the choices of predictors, the corresponding bits, required to convey the choice of the predictor to the decoder, also increases. This mandates the need for jointly optimizing the distortion and the required side information for a given quantization factor using special rate distortion measures. This thesis is aimed at suggesting alternative solution of removing perceptual redundancy without increasing the number of predictors using two approaches. The first one is to increase the spectral flatness measure by removing perceptually irrelevant residual information. The second one is to model the perceptually relevant residual information loss due to quantization and parameterize the same for synthesizing it at the decoder end. This basically evolves around two analytical and estimation problems. The first problem is to identify the perceptually irrelevant quantization noise and remove it from the resulting source. The second problem is to model the perceptually relevant quantization noise. 
The first contribution of this dissertation is to classify regions into homogenous / non-homogenous and rigid / non-rigid, based on different perceptual ques like variance, edge, color, and motion. Quantization noise for each region is shaped differently to ensure minimal perceptual quality degradation. At very low bitrates, the rigid regions with small residue errors results in AC coefficients which are small in magnitude. These coefficients, which typically get quantized to zero value, are regenerated / synthesized at the decoder end using statistical characteristics of the temporal predictors. The regions are coarsely segmented based on edge, color, and motion descriptors. Regions with rigid texture are more optimized for rate compared to distortion using higher values of quantization parameter. The second contribution of this dissertation is identification and representation of non-rigid textured regions like grass, flowing water etc. with a dense motion vector field (DMVF) instead of conventional motion compensated signal. The analysis part contains identification of such regions and classification of macroblocks into rigid and non-rigid homogenous textures. The DMVF is computed only for the macroblocks classified under non-rigid textured regions. A replacement technique is used to substitute a block of texture pixels with a block of motion vectors which are then differentially coded using causal neighbors and context adaptive binary arithmetic coding (CABAC). As a part of texture synthesis, the decoder then simply decodes these motion vectors, regenerates the DMVF and compensates each pixel individually using the regenerated DMVF. The remaining macroblocks which are not classified as homogenous texture (rigid or non-rigid) are coded using conventional H.264 encoder. 
Although the underlying techniques are generic enough to be augmented with any video standard, we specifically picked H.264 video compression standard considering it is the current state-of-the-art. We compare coding approaches using NTIA model for objective measure of subjective quality. Comparing our techniques with H.264 standard compliant JM encoder developed by JVT (Joint Video Technology) committee members, we got a bit-rate savings of around 15%. The chapters of this dissertation are organized as follows. An introduction to the H.264 standard features and improvements made over several years over existing video standards like MPEG-2 and H.263 are presented in Chapter 1. It also consists of highlighting some of the techniques published to reduce the computation complexity for enabling real-time implementation of encoders. A literature survey of existing techniques which use perceptual criterion for video coding is presented in Chapter 2. Chapter 3 highlights some of the limitations of schemes mentioned in the literature and is followed by the contributions made in the present work to overcome these limitations. Experimental results are presented in Chapter 4 and the thesis is concluded in Chapter 5 highlighting some of the future work which could be carried out in this direction.
2

Yeh, Li-chun, and 葉立群. "Content-Based Video Coding Using Texture Analysis and Synthesis." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/18870018206430283483.

Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (Master's and Doctoral Program)
Academic year 95 (ROC calendar, 2006/2007)
With the arrival of the digital-content era, research has long focused on improving image and video compression. In the pursuit of higher compression ratios, visual quality is often sacrificed; lossy compression must balance compression ratio against visual quality. Can both be achieved at once? Here we present a method built on the existing H.264 video compression framework that goes beyond the usual goal of improving the compression ratio alone. At the encoder, the background texture of the input video is analyzed and then removed; at the decoder, the texture is synthesized to recover the background. In this way we can increase the compression ratio while producing better visual quality. In this thesis we present an effective algorithm for images or video in which texture occupies a significant portion of the frame. First, we segment the texture in the image and analyze its color and texture characteristics, then remove the background texture, discarding broken and small regions while retaining texture descriptors such as location and color. At the decoder, we propose a new synthesis method: whereas most prior work synthesizes background texture from parameters alone, our method can more effectively synthesize structured background textures, improving the effectiveness of texture replacement.
3

Ndjiki-Nya, Patrick [Verfasser]. "Mid-level content based video coding using texture analysis and synthesis / von Patrick Ndjiki-Nya." 2008. http://d-nb.info/989637751/34.


Conference papers on the topic "Content Based Texture Coding (CBTC)"

1

Shrinivasacharya, Purohit, and M. V. Sudhamani. "Content Based Image Retrieval System Using Texture and Modified Block Truncation Coding." In 2013 International Conference on Advanced Computing & Communication Systems (ICACCS). IEEE, 2013. http://dx.doi.org/10.1109/icaccs.2013.6938770.

2

Machhour, Naoufal, and M'barek Nasri. "New Color and Texture Features Coding Method Combined to the Simulated Annealing Algorithm for Content Based Image Retrieval." In 2020 Fourth International Conference On Intelligent Computing in Data Sciences (ICDS). IEEE, 2020. http://dx.doi.org/10.1109/icds50568.2020.9268679.
