Dissertations / Theses on the topic 'Image coding'

Consult the top 50 dissertations / theses for your research on the topic 'Image coding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Streit, Juergen Stefan. "Digital image coding." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361092.

2

Chowdhury, Md Mahbubul Islam. "Image segmentation for coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0017/MQ55494.pdf.

3

VASCONCELLOS, EDMAR DA COSTA. "SUB-BAND IMAGE CODING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1994. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8635@1.

Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
This work focuses on the problem of image compression by exploring subband coding (SBC) techniques. The basic structure, used in the first part of the thesis, is the uniform decomposition of the image into 16 subbands, aiming to reproduce the results of Woods [1]. The components of the 16 subbands are quantized and coded, and bits are allocated among the subbands so as to minimize the mean-squared error. The quantizers are designed for a Generalized Gaussian Distribution, which models the subband components. In the coding process, the lowest-frequency subband is DPCM coded while the higher subbands are coded with PCM. As an innovation, the use of the Lempel-Ziv algorithm is proposed for lossless coding (compaction) of the quantized subbands; both the Huffman and LZW algorithms are employed. Simulation results are presented in terms of rate (bits/pixel) versus peak signal-to-noise ratio and in terms of subjective quality of the reconstructed images. The subband decomposition scheme performs about 2 dB better with Huffman coding than with LZW. The universality of the Lempel-Ziv algorithm is, however, an advantage that suggests that implementations different from the one explored in this work should still be investigated.
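The MSE-minimizing allocation of bits among subbands that the abstract describes has a standard closed-form solution under high-rate assumptions: each band receives the average rate plus half the log-ratio of its variance to the geometric mean of all variances. A minimal sketch of that rule (an illustration of the general technique, with made-up variances, not the thesis's actual procedure):

```python
import numpy as np

def allocate_bits(variances, avg_rate):
    """High-rate MSE-optimal bit allocation:
    b_k = R + 0.5*log2(var_k / geometric_mean(var)), computed over the
    currently active bands; bands driven to b_k <= 0 are dropped and the
    freed rate is redistributed among the rest."""
    variances = np.asarray(variances, dtype=float)
    active = np.ones(len(variances), dtype=bool)
    while True:
        gm = np.exp(np.log(variances[active]).mean())    # geometric mean of active bands
        rate = avg_rate * len(variances) / active.sum()  # budget per active band
        bits = rate + 0.5 * np.log2(variances / gm)
        if not (active & (bits <= 0)).any():
            break
        active &= bits > 0
    return np.where(active, bits, 0.0)

# Example: 16 subbands with decaying variances, 1 bit/pixel average budget
variances = 100.0 / 2.0 ** np.arange(16)
bits = allocate_bits(variances, avg_rate=1.0)
print(np.round(bits, 2), round(float(bits.mean()), 2))  # mean allocation stays 1.0
```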
4

Andersson, Tomas. "On error-robust source coding with image coding applications." Licentiate thesis, Stockholm : Department of Signals, Sensors and Systems, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4046.

5

Bergström, Peter. "Eye-movement controlled image coding /." Linköping : Univ, 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/tek831s.pdf.

6

Silva, Eduardo Antonio Barros da. "Wavelet transforms for image coding." Thesis, University of Essex, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282495.

7

Kubrick, Aharon H. "Image coding employing vector quantisation." Thesis, City University London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357009.

8

Morgan, Pamela Sheila. "Medical image coding and segmentation :." Thesis, University of Bristol, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442206.

9

Desai, Ujjaval Yogesh. "Coding of segmented image sequences." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/11984.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 72-74).
by Ujjaval Yogesh Desai.
M.Eng.
10

Frajka, Tamás. "Image coding subject to constraints /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3090437.

11

Chaiyaboonthanit, Thanit. "Image coding using wavelet transform and adaptive block truncation coding /." Online version of thesis, 1991. http://hdl.handle.net/1850/10913.

12

Gauthier, Francois. "Sparse image coding using wavelet packets." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0004/MQ44010.pdf.

13

Khataie, Manijeh. "Structured vector quantizers in image coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0020/NQ47711.pdf.

14

Gauthier, François 1955. "Sparse image coding using wavelet packets." Thesis, McGill University, 1997. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20206.

Abstract:
Using a perceptual approach to image analysis, the process from image capture to perceptual feature extraction is modeled as a digital communications system. Within this novel approach, image features become information-carrying symbols which are transmitted by the scene to the observer through a band-limited channel. The principal effect of the band-limited channel is to cause severe inter-symbol interference in image regions where the feature density cannot be resolved by the modeled sensor. The output of the band-limited channel is then fed into an image analyzer whose purpose is to reliably recover as much of the originally transmitted information as possible.
Using this qualitative model as a guiding framework, a novel sparse perceptual coding method is developed based on spatial frequency analysis using wavelet packets. The goal is to extract the recoverable subset of the symbols, or image features, originally transmitted from the scene. Each visual symbol or feature is quantitatively modeled using simple complex polynomials, which can in turn be decomposed into shape and positional terms.
The parameters of the elementary polynomials are estimated using harmonic wavelet packets, a spatial frequency analysis technique that borrows equally from the Fourier and wavelet domains. The obtained parameters can then be used to select a representative feature set which sparsely describes the image data.
15

Soryani, Mohsen. "Segmented coding of digital image sequences." Thesis, Heriot-Watt University, 1990. http://hdl.handle.net/10399/864.

16

Wang, Qi. "Motion compensation for image sequence coding." Thesis, Heriot-Watt University, 1991. http://hdl.handle.net/10399/821.

17

Sampson, Demetrios G. "Lattice vector quantization for image coding." Thesis, University of Essex, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282525.

18

Ngwa-Ndifor, Ngwason John Nde. "Segmental image coding and fidelity measures." Thesis, City University London, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306033.

19

Cubiss, Christopher. "Low bit-rate image sequence coding." Thesis, University of Edinburgh, 1994. http://hdl.handle.net/1842/13506.

Abstract:
Digital video, by its very nature, contains vast amounts of data. Indeed, the storage and transmission requirements of digital video frequently far exceed practical storage and transmission capacity, and much research has therefore been dedicated to developing compression algorithms for digital video. This research has recently culminated in the introduction of several standards for image compression. The CCITT H.261 and Moving Picture Experts Group (MPEG) standards both target full-motion video and are based upon a hybrid architecture which combines motion-compensated prediction with transform coding. Although motion-compensated transform coding has been shown to produce reasonable quality reconstructed images, it has also been shown that as the compression ratio is progressively increased the quality of the reconstructed image rapidly degrades. The reasons for this degradation are twofold: firstly, the transform coder is optimised for encoding real-world images, not prediction errors; and secondly, the motion-estimation and transform-coding algorithms both decompose the image into a regular array of blocks which, as the coding distortion is progressively increased, results in the well-known 'blocking' effect. The regular structure of this coding artifact makes the error particularly disturbing. This research investigates motion estimation and motion-compensated prediction with the aim of characterising the prediction error so that more suitable spatial coding algorithms can be chosen. Motion-compensated prediction was considered in detail. Simple theoretical models of the prediction error were developed and it was shown that, for sufficiently accurate motion estimates, motion-compensated prediction can be considered as a non-ideal spatial band-pass filtering operation. Rate-distortion theory was employed to show that the inverse spectral flatness measure of the prediction error provides a direct indication of the expected coding gain of an optimal hybrid motion-compensated prediction algorithm.
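As a concrete anchor for the discussion, block-based motion estimation in such hybrid coders is typically an exhaustive search for the displacement minimising a block difference measure. A minimal full-search sketch (toy frames, not material from the thesis):

```python
import numpy as np

def full_search_me(ref, cur, block=16, search=8):
    """Exhaustive block-matching motion estimation minimising the sum of
    absolute differences (SAD); returns one (dy, dx) vector per block."""
    H, W = cur.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(int)
            best_sad, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate block leaves the reference frame
                    sad = np.abs(target - ref[y:y + block, x:x + block].astype(int)).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors

rng = np.random.default_rng(1)
f0 = rng.integers(0, 256, (48, 48))
f1 = np.roll(f0, 2, axis=1)   # current frame = reference shifted 2 px right (with wrap)
mv = full_search_me(f0, f1, block=16, search=4)
print(mv[..., 1])             # dx per block: blocks away from the wrapped border give -2
```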
20

Li, Yun. "Coding of three-dimensional video content : Depth image coding by diffusion." Licentiate thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19087.

Abstract:
Three-dimensional (3D) movies in theaters have become a massive commercial success in recent years, and it is likely that, with the advancement of display technologies and the production of 3D content, TV broadcasting in 3D will play an important role in home entertainment in the not too distant future. 3D video content contains at least two views of a scene, one for each eye of the viewer, and the amount of coded information is doubled if these views are encoded separately. Moreover, for multi-view displays (where different perspectives of a scene are presented to the viewer simultaneously at different angles), either video streams of all the required views must be transmitted to the receiver, or the display must synthesize the missing views from a subset of the views. The latter approach has been widely proposed to reduce the amount of data being transmitted. The virtual views can be synthesized by the Depth Image Based Rendering (DIBR) approach from textures and associated depth images. However, the amount of information in the textures plus the depths still presents a significant challenge to network transmission capacity, so efficient compression will increase the availability of content and provide better video quality under the same network capacity constraints. This thesis addresses the compression of depth images, which can be assumed to be piece-wise smooth. Starting from the properties of depth images, a novel depth image model based on edges and sparse samples is presented, which may also be utilized for depth image post-processing. Based on this model, a depth image coding scheme that explicitly encodes the locations of depth edges is proposed; the coding scheme has a scalable structure. Furthermore, a compression scheme for block-based 3D-HEVC is devised, in which diffusion is used for intra prediction. In addition to the proposed schemes, the thesis illustrates several evaluation methodologies, in particular the subjective stimulus-comparison test, which is suitable for evaluating the quality of two impaired images, since objective metrics are inaccurate with respect to synthesized views. The MPEG test sequences were used for the evaluation. The results showed that virtual views synthesized from depth images post-processed with the proposed model are better than those synthesized from the original depth images. More importantly, the proposed coding schemes using this model produced better synthesized views than state-of-the-art schemes. As a result, the outcome of the thesis can lead to a better quality 3DTV experience.
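As a rough illustration of the diffusion idea behind such edge-plus-samples depth models (not the thesis's actual algorithm), a piece-wise smooth depth map can be recovered from pixels along the edges and a sparse sample grid by homogeneous diffusion, i.e., by relaxing the discrete Laplace equation while holding the known pixels fixed:

```python
import numpy as np

def diffuse_depth(samples, mask, iters=3000):
    """Reconstruct a piece-wise smooth depth map from sparse known pixels by
    homogeneous diffusion: Jacobi sweeps of the discrete Laplace equation,
    holding the known pixels (mask == True) fixed. np.roll wraps at the image
    border, which is acceptable for a sketch."""
    d = samples.astype(float).copy()
    d[~mask] = samples[mask].mean()            # neutral initial guess
    for _ in range(iters):
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                      np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d[~mask] = avg[~mask]                  # relax only the unknown pixels
    return d

# Toy depth map: two flat layers; keep a sparse grid plus pixels along the edge
H, W = 64, 64
depth = np.where(np.arange(W)[None, :] < W // 2, 50.0, 120.0) * np.ones((H, 1))
mask = np.zeros((H, W), dtype=bool)
mask[::8, ::8] = True                          # sparse samples
mask[:, W // 2 - 1:W // 2 + 1] = True          # both sides of the depth edge
rec = diffuse_depth(depth * mask, mask)
print(round(float(np.abs(rec - depth).mean()), 2))   # small mean error
```

Because the edge pixels are held fixed on both sides of the discontinuity, the diffusion fills each layer smoothly without blurring across the depth edge, which is exactly the property DIBR view synthesis depends on.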
21

Oh, Han, and Hariharan G. Lalgudi. "Scalable Perceptual Image Coding for Remote Sensing Systems." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606208.

Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
In this work, a scalable perceptual JPEG2000 encoder that exploits properties of the human visual system (HVS) is presented. The algorithm modifies the final three stages of a conventional JPEG2000 encoder. In the first stage, the quantization step size for each subband is chosen to be the inverse of the contrast sensitivity function (CSF). In bit-plane coding, two masking effects are considered during distortion calculation. In the final bitstream formation step, quality layers are formed corresponding to desired perceptual distortion thresholds. This modified encoder exhibits superior visual performance for remote sensing images compared to conventional JPEG2000 encoders. Additionally, it is completely JPEG2000 Part-1 compliant, and therefore can be decoded by any JPEG2000 decoder.
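A minimal sketch of the CSF-driven step-size choice described above, using the Mannos-Sakrison CSF model as a stand-in (the paper does not specify its exact CSF parameterization here, so the formula and viewing assumptions below are illustrative):

```python
import numpy as np

def csf(f):
    """Mannos-Sakrison contrast sensitivity function, f in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def subband_steps(levels, f_max, base_step=1.0):
    """Quantization step per wavelet decomposition level, proportional to the
    inverse CSF at a representative frequency of that level (level l of a
    dyadic decomposition sits around f_max / 2**l)."""
    return {l: base_step / csf(f_max / 2 ** l) for l in range(1, levels + 1)}

# 5 levels, assuming 32 cycles/degree at the display's Nyquist frequency
for lvl, q in subband_steps(5, f_max=32.0).items():
    print(f"level {lvl}: step {q:.2f}")   # coarser steps where sensitivity is low
```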
22

Sun, Yong. "Source-channel coding for robust image transmission and for dirty-paper coding." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4800.

Abstract:
In this dissertation, we studied two seemingly uncorrelated, but conceptually related problems in terms of source-channel coding: 1) wireless image transmission and 2) Costa ("dirty-paper") code design. In the first part of the dissertation, we consider progressive image transmission over a wireless system employing space-time coded OFDM. The space-time coded OFDM system, based on a newly built broadband MIMO fading model, is theoretically evaluated by assuming perfect channel state information (CSI) at the receiver for coherent detection. Then an adaptive modulation scheme is proposed to pick the constellation size that offers the best reconstructed image quality for each average signal-to-noise ratio (SNR). A more practical scenario is also considered without the assumption of perfect CSI. We employ low-complexity decision-feedback decoding for differentially space-time coded OFDM systems to exploit transmitter diversity. For joint source-channel coding (JSCC), we adopt a product channel code structure that is proven to provide powerful error protection and bursty error correction. To further improve the system performance, we also apply powerful iterative (turbo) coding techniques and propose the iterative decoding of differentially space-time coded multiple descriptions of images. The second part of the dissertation deals with practical dirty-paper code designs. We first invoke an information-theoretical interpretation of algebraic binning and motivate the code design guidelines in terms of source-channel coding. Then two dirty-paper code designs are proposed. The first is a nested turbo construction based on soft-output trellis-coded quantization (SOTCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. A novel procedure is devised to balance the dimensionalities of the equivalent lattice codes corresponding to SOTCQ and TTCM. The second dirty-paper code design employs TCQ and IRA codes for near-capacity performance. This is done by synergistically combining TCQ with IRA codes so that they work together as well as they do individually. Our TCQ/IRA design approaches the dirty-paper capacity limit in the low-rate regime (e.g., < 1.0 bit/sample), while our nested SOTCQ/TTCM scheme provides the best performance so far at medium-to-high rates (e.g., >= 1.0 bit/sample). Thus the two proposed practical code designs are complementary to each other.
23

Yeung, Yick Ming. "Fast rate control for JPEG2000 image coding /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20YEUNG.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 63-65). Also available in electronic version. Access restricted to campus users.
24

Casas, Pla Josep Ramon. "Image Compression based on Perceptual Coding Techniques." Doctoral thesis, Universitat Politècnica de Catalunya, 1996. http://hdl.handle.net/10803/6920.

Abstract:
This thesis studies image and video sequence coding methods from the point of view of the way the human visual system perceives and understands visual information. The relevance of such a study is due, on the one hand, to the important role that visual signals have in our civilization and, on the other, to the problem of representing the large amount of data that image and video processing systems have to deal with.
Three different approaches have been investigated for the coding of image textures in an advanced compression scheme relying on aspects of visual perception. The first approach is based on image transitions and the interpolation of smooth areas from such transitions. The second considers the extraction, selection and coding of meaningful image details.
Finally, the third approach studies the efficient representation of homogeneous fine textures that give a natural appearance to the reconstructed images at high compression levels. In order to apply these techniques to still image and video coding, a three-component model of the image, matched to the perceptual properties of human vision, is put forward.
The coding approaches under study have led to the design of three new image analysis and coding techniques, developed using non-linear tools from the framework of Mathematical Morphology. In particular:

- A "morphological" image interpolation method aimed at the problem of scattered data interpolation.
- An empirical subjective criterion for the ranking and selection of image details according to visual perception.
- The application of a conventional image coding technique, subband coding, to the coding of arbitrarily shaped image regions (region-based subband coding).

These are new texture coding techniques in the field of object-oriented and Second Generation image and video coding schemes. Furthermore, the image model that has been investigated follows the line of the latest proposals in the framework of MPEG4, the forthcoming coding standard for low bit-rate visual communications, which considers the possibility of content-based manipulation and coding of visual information.
25

Eklund, Anders. "Image coding with H.264 I-frames." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8920.

Abstract:

In this thesis work, part of the video coding standard H.264 has been implemented. The part of the video coder that is used to code the I-frames has been implemented to see how well suited it is for regular still-image coding. The big difference compared with other image coding standards, such as JPEG and JPEG2000, is that this video coder uses both a predictor and a transform to compress the I-frames, while JPEG and JPEG2000 only use a transform. Since the prediction error is sent instead of the actual pixel values, many of the values are zero or close to zero before the transformation and quantization. The method thus closely resembles a video encoder, with the difference that blocks of an image are predicted instead of frames in a video sequence.

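A sketch of the prediction step described above, using three of the nine H.264 4x4 intra modes (vertical, horizontal, DC); the encoder transforms and quantizes only the residual left after the best-scoring mode. The arrays below are made-up toy data:

```python
import numpy as np

def intra4_modes(row_above, col_left):
    """Three of the H.264 4x4 intra prediction modes, built from the
    reconstructed row above and column left of the current block."""
    vertical = np.tile(row_above, (4, 1))                                  # mode 0
    horizontal = np.tile(col_left[:, None], (1, 4))                        # mode 1
    dc = np.full((4, 4), (row_above.sum() + col_left.sum() + 4) // 8)      # mode 2
    return {'V': vertical, 'H': horizontal, 'DC': dc}

def best_prediction(block, row_above, col_left):
    """Pick the mode with the smallest SAD; the residual is what gets coded."""
    preds = intra4_modes(row_above, col_left)
    mode = min(preds, key=lambda m: np.abs(block - preds[m]).sum())
    return mode, block - preds[mode]

above = np.array([100, 102, 104, 106])
left = np.array([100, 98, 96, 94])
block = np.tile(above, (4, 1)) + np.arange(4)[:, None]   # nearly 'vertical' content
mode, residual = best_prediction(block, above, left)
print(mode, int(np.abs(residual).sum()))   # small residual is cheap to code
```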
26

Moussa, Badi M. S. "Adaptive transform coding for digital image communication." Thesis, Loughborough University, 1985. https://dspace.lboro.ac.uk/2134/27360.

Abstract:
The performance of transform image coding schemes can be improved substantially by adapting to changes in image statistics. Essentially, this is accomplished through adaptation of the transform, bit allocation, and/or quantization parameters according to time-varying image statistics. Additionally, adaptation can be used to achieve transmission rate reduction whilst maintaining a given picture quality.
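One classic instance of such adaptation (in the spirit of Chen and Smith's adaptive transform coding, offered here as an illustration rather than the thesis's own scheme) classifies DCT blocks by their AC energy so that each activity class can receive its own bit allocation:

```python
import numpy as np
from scipy.fft import dctn

def classify_blocks(img, block=8, classes=4):
    """DCT each block, measure its AC energy, and sort blocks into equally
    populated activity classes; each class would later receive its own
    bit-allocation map."""
    H, W = img.shape
    blocks = img.reshape(H // block, block, W // block, block).swapaxes(1, 2)
    coeffs = dctn(blocks, axes=(-2, -1), norm='ortho')
    ac_energy = (coeffs ** 2).sum(axis=(-2, -1)) - coeffs[..., 0, 0] ** 2
    order = np.argsort(ac_energy, axis=None)          # rank blocks by activity
    labels = np.empty(order.size, dtype=int)
    labels[order] = np.arange(order.size) * classes // order.size
    return labels.reshape(ac_energy.shape)            # 0 = least active

rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish field
print(np.bincount(classify_blocks(img).ravel()))               # ~equal class sizes
```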
27

Sun, Huifang. "Interframe image coding by adaptive vector quantization." Thesis, University of Ottawa (Canada), 1986. http://hdl.handle.net/10393/5538.

28

Caron, Steven. "Progressive image transmission by segmentation-based coding." Thesis, University of Ottawa (Canada), 1996. http://hdl.handle.net/10393/9933.

Abstract:
Progressive image transmission, where an image builds up gradually, is gaining popularity in image database browsing applications. In these applications, a user might have to reject a large number of unwanted images before selecting the desired one. In such a case, the time required to identify the image contents becomes very important. In this thesis, we present a novel progressive image transmission technique based on a representation by segmentation. The technique preserves edges at low bit rates and is biased toward fast identification. An image is segmented into regions having constant intensity by applying a morphological operator: the watershed. The segmented image is gradually simplified using a graph. The simplifications are transmitted in the reverse order. At the decoder, the image is dynamically divided into an increasing number of regions as the transmission progresses. A subjective experiment was designed, and the recognition times of images transmitted with the proposed algorithm were compared with the recognition times of the same images transmitted with JPEG. The proposed method was found to result in faster recognition of image contents for almost all the images. Some work on the objective evaluation of coarse images is also presented.
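A rough sketch of the first stage, constant-intensity regions obtained from a watershed of the gradient (using scikit-image as a stand-in for the thesis's implementation; the graph-based simplification and progressive ordering are omitted):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.filters import sobel

def region_approximation(img):
    """Watershed on the gradient yields regions; each is then represented by
    its mean intensity, as in a segmentation-based coder's coarsest layer."""
    gradient = sobel(img)
    markers = ndi.label(gradient < 0.05)[0]      # flat areas seed the regions
    labels = watershed(gradient, markers)
    approx = np.zeros_like(img, dtype=float)
    for r in np.unique(labels):
        m = labels == r
        approx[m] = img[m].mean()                # constant-intensity regions
    return labels, approx

# Toy image: two flat patches separated by a soft boundary
img = np.zeros((64, 64))
img[:, 32:] = 1.0
img = ndi.gaussian_filter(img, 2)
labels, approx = region_approximation(img)
print(len(np.unique(labels)), round(float(np.abs(img - approx).mean()), 3))
```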
29

Andersson, Kenneth. "Motion estimation for perceptual image sequence coding /." Linköping : Univ, 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/tek794s.pdf.

30

Coben, Muhammed Z. "Region-based subband coding of image sequences." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/15500.

31

Redmill, David Wallace. "Image and video coding for noisy channels." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294977.

32

Ali, Maaruf. "Fractal image coding techniques and their applications." Thesis, King's College London (University of London), 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265858.

33

Yin, Xiaowei. "Wavelet techniques for colour document image coding." Thesis, University of Essex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446463.

34

Goh, Kwong Huang. "Wavelet transform based image and video coding." Thesis, University of Strathclyde, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387720.

35

GORNSZTEJN, JAIME. "IMAGE COMPRESSION TECHNIQUES BASED ON SUBBAND CODING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1993. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8758@1.

Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Image compression techniques based on subband coding are studied in this work. The analysis/synthesis algorithm is implemented using first-order all-pass recursive filters, which significantly reduces the computational complexity and reconstructs the input with neither aliasing nor phase distortion. Processing techniques specific to these filters are discussed. Limitations of direct subband coding show the convenience of initially splitting the image into its low-frequency and high-frequency components. The low-frequency image represents brightness and texture and is block coded in the domain of the Discrete Cosine Transform. The error image, essentially high-pass in character and emphasizing transitions, is divided into subbands which are vector quantized. Further improvement of this technique results from the study of subband characteristics and correlation. The objective quality of each technique is measured by the peak signal-to-noise ratio, and the subjective quality results from visual inspection of the reconstructed images. Both are comparable or superior to those of existing coders of similar complexity, for rates between 0.6 and 0.7 bits/pixel.
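The all-pass analysis structure mentioned above can be sketched in a few lines: two first-order all-pass branches acting on the even and odd polyphase components produce a power-complementary low/high band pair. The coefficients below are illustrative placeholders, not the thesis's designed filters:

```python
import numpy as np
from scipy.signal import lfilter

def allpass1(a, x):
    """First-order all-pass section A(z) = (a + z^-1) / (1 + a*z^-1)."""
    return lfilter([a, 1.0], [1.0, a], x)

def analysis_split(x, a0=0.25, a1=0.75):
    """Two-band split in the polyphase all-pass structure: the low band is the
    average, and the high band the difference, of two all-pass branches acting
    on the even/odd samples. Any stable a0, a1 give a power-complementary pair."""
    b0 = allpass1(a0, x[0::2])   # A0 on the even polyphase component
    b1 = allpass1(a1, x[1::2])   # A1 on the odd polyphase component
    return 0.5 * (b0 + b1), 0.5 * (b0 - b1)

n = np.arange(4096)
for name, w in [("low tone", 0.05 * np.pi), ("high tone", 0.95 * np.pi)]:
    lo, hi = analysis_split(np.cos(w * n))
    print(name, round(float((lo ** 2).sum() / (hi ** 2).sum()), 1))  # lo/hi energy
```

A low-frequency tone lands almost entirely in the low band and a near-Nyquist tone in the high band, while each branch costs only one multiplier per sample, which is the complexity advantage the abstract points to.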
36

Chan, Ming-Hong. "Image coding algorithms for video-conferencing applications." Thesis, Imperial College London, 1989. http://hdl.handle.net/10044/1/47376.

37

Tender, Neil H. (Neil Howard). "Content-adaptive bi-level (facsimile) image coding." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/34079.

38

Lee, Chong U. "Contour motion compensation for image sequence coding." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14515.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989.
Includes bibliographical references (leaves 124-130).
by Chong Uk Lee.
Ph.D.
39

Wang, Wei-Zhi, and 王偉志. "Fractal image coding for still-image." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/16354565487043672434.

Abstract:
Master's thesis, Feng Chia University, Department of Electrical Engineering, academic year 89 (2000-2001).
Recently, due to the limitations of physical transmission capacity and bandwidth, data compression has become a very important problem, and many compression methods have been developed. One usually distinguishes between different coding schemes: transform coding, multiresolution coding, VQ, predictive methods, and more recent schemes such as fractal image coding. In recent years, much research has been devoted to the properties of fractal image coding because of its ability to generate high-resolution reconstructed images at very high compression ratios. However, it suffers from very high encoding complexity, and several methods for reducing the search space have been presented to overcome this problem. In Chapter 3 of this thesis, a novel and simple idea based on range block rotation, rather than the domain block rotation of Jacquin's original approach, is proposed; range block rotation can effectively reduce the total number of domain block rotations. In Chapter 4, a new fractal coding method, called Fractal Dimension-based Fast Fractal Coding (FDFFC), is proposed. The method is based on the fact that two equal-sized image blocks cannot be closely matched unless their fractal dimensions are close, which implies that domain blocks whose fractal dimension differs greatly from that of the range block may be eliminated from the domain pool before matching. Finally, this thesis proposes an effective fractal coding scheme, called 'Two-Layer Classified Fractal Coding using Isometry Prediction (2CFCIP),' to improve the rate-distortion and speed performance of fractal coding. The proposed fractal encoder incorporates the schemes of Chapters 3 and 4 and can be seen as a bottom-up approach using block merging; it can be used for any still-image coding.
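For context, the basic range-domain matching step that all of these speed-up schemes accelerate looks roughly like this (isometries and the proposed rotation/classification ideas are omitted; the toy image and pool are made up):

```python
import numpy as np

def fractal_match(range_block, domain_pool):
    """Core fractal coding step: for one range block, find the contracted
    domain block D and affine map s*D + o minimising squared error."""
    r = range_block.astype(float).ravel()
    best = (np.inf, None, 0.0, 0.0)
    for idx, dom in enumerate(domain_pool):
        # contract the 2Bx2B domain block to BxB by 2x2 averaging
        d = dom.reshape(dom.shape[0] // 2, 2, dom.shape[1] // 2, 2).mean(axis=(1, 3)).ravel()
        # least-squares scale s and offset o for r ~ s*d + o
        var = ((d - d.mean()) ** 2).sum()
        s = ((d - d.mean()) * (r - r.mean())).sum() / var if var > 0 else 0.0
        s = np.clip(s, -0.9, 0.9)          # keep the map contractive
        o = r.mean() - s * d.mean()
        err = ((s * d + o - r) ** 2).sum()
        if err < best[0]:
            best = (err, idx, s, o)
    return best  # (error, domain index, scale, offset)

rng = np.random.default_rng(4)
img = rng.uniform(0, 255, (32, 32))
pool = [img[y:y + 16, x:x + 16] for y in range(0, 17, 8) for x in range(0, 17, 8)]
err, idx, s, o = fractal_match(img[0:8, 0:8], pool)
print(idx, round(s, 3), round(o, 2), round(err, 1))
```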
40

"Transform coding of image." Chinese University of Hong Kong, 1988. http://library.cuhk.edu.hk/record=b5885929.

41

Luo, Rui Lin, and 羅瑞琳. "Segmented image sequence coding." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/42932768256430683678.

42

YANG, WEI-TING, and 楊偉廷. "Adaptive DPCM image coding." Thesis, 1986. http://ndltd.ncl.edu.tw/handle/88005852571421305602.

43

Jagannathan, S. "Coding of satellite image data." Thesis, 1998. http://hdl.handle.net/2009/2715.

44

Wang, Yue-Jonq, and 王羽仲. "Entropy coding on halftone image." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/78231519051920106006.

45

Chang, Hsuan Ting, and 張軒庭. "Studies on fractal image coding." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/24921211209646733232.

46

Wu, Kun-da, and 吳坤達. "PCA-ANN for Image Coding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/37089023513063970533.

Abstract:
Master's thesis, I-Shou University, Department of Information Engineering, academic year 97 (2008-2009).
Principal component analysis (PCA) is a linear transformation, grounded in linear algebra, that explains the original data with fewer data and least error. It is usually used in signal processing to reduce the dimensionality of the information. Because the PCA algorithm requires the computation of eigenvalues and eigenvectors, it is time consuming. In this thesis, artificial neural networks (ANN) are used to estimate the principal eigenvectors by learning the characteristics of the data. Traditional vector quantization (VQ) uses a codebook to represent all possible image blocks; its advantage is a high compression ratio, and its shortcoming is the time-consuming codebook design. In PCA, the eigenvectors, also called eigenimages, likewise form a codebook in another sense, similar to the VQ scheme. This thesis uses the eigenimages as a codebook and performs a VQ-like coding method in order to combine the advantages of both PCA and VQ.
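A minimal sketch of the eigenimage-codebook idea (computing the principal components directly by eigendecomposition, whereas the thesis estimates them with an ANN; the training blocks below are synthetic):

```python
import numpy as np

def pca_codebook(blocks, k):
    """Build an 'eigenimage' codebook: the top-k principal components of a
    set of vectorised image blocks."""
    X = blocks - blocks.mean(axis=0)
    cov = X.T @ X / len(X)
    w, V = np.linalg.eigh(cov)               # eigenvalues in ascending order
    return V[:, ::-1][:, :k], blocks.mean(axis=0)

def code_decode(block, basis, mean):
    coeffs = basis.T @ (block - mean)        # analysis: project onto eigenimages
    return coeffs, basis @ coeffs + mean     # synthesis: reconstruct from k coeffs

rng = np.random.default_rng(5)
# Correlated 8x8 blocks (vectorised to length 64): smooth random signals
blocks = rng.normal(size=(500, 64)).cumsum(axis=1)
basis, mean = pca_codebook(blocks, k=8)
coeffs, rec = code_decode(blocks[0], basis, mean)
print(coeffs.shape, round(float(np.abs(rec - blocks[0]).mean()), 3))  # 64 -> 8 numbers
```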
47

"Entropy coding and post-processing for image and video coding." 2010. http://library.cuhk.edu.hk/record=b5896644.

Abstract:
Fong, Yiu Leung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 83-87).
Abstracts in English and Chinese.
Table of contents: Abstract; Acknowledgement; 1. Introduction; 2. Background and Motivation (context-based arithmetic coding; video post-processing); 3. Context-Based Arithmetic Coding for JPEG (Huffman coding; context-based arithmetic coding; redundancy in quantized DCT coefficients; the proposed scheme, including coding of non-zero coefficient flags and EOB decisions, coding of 'LEVEL', and separate coding of color planes; experimental results; discussions); 4. Video Post-processing for H.264 (proposed method; experimental results, including deblocking on compressed frames and on the residue of compressed frames; discussions); 5. Conclusions; References.
48

Yu, Chih-Jung, and 余芝融. "Image Coding and Watermarking Using Block Truncation Coding and Holography." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/09556120186122193317.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Communication Engineering, academic year 99 (2010-2011).
With the advancement of computers and the Internet, multimedia content is very popular and easy to obtain. However, it often requires a large amount of storage to record the complex information contained in such content, so transmitting uncompressed digital media through the Internet may be impractical. Block truncation coding (BTC) is a simple and efficient compression technique, but the annoying blocking effect and false contours that accompany its high coding gain limit its application compared to more modern compression techniques. In order to improve the image quality, halftoning techniques have been combined with BTC. In this study, we modify the schemes of two existing BTC-based compression techniques and try to enhance the image quality. In addition, the surge of digital media is creating a pressing need for copyright protection and content authentication. To prevent counterfeiting or the use of unauthorized documents, a set of auxiliary data, called a digital watermark, can be hidden in the original content. Watermarking can be achieved by holography: while traditional holograms are mostly obtained with optical interferometric equipment that requires extensive setup, calibration, and additional recording materials, computer-generated holography is readily accomplished by computer manipulation. Hence, in this thesis, we also discuss several watermark embedding techniques based on the computer-generated holographic approach.
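For reference, the basic BTC step the thesis builds on quantizes each block to two levels chosen so that the block's mean and standard deviation are preserved; a minimal sketch with a random toy block:

```python
import numpy as np

def btc_block(block):
    """Classic block truncation coding of one block: keep the mean, the
    standard deviation and a 1 bit/pixel mask, then reconstruct two levels
    that preserve the block's first two sample moments."""
    mean, std = block.mean(), block.std()
    mask = block >= mean                  # the 1 bit/pixel part of the code
    q, n = mask.sum(), block.size         # q = number of 'high' pixels
    if q in (0, n):                       # flat block: a single level suffices
        return np.full_like(block, mean, dtype=float)
    low = mean - std * np.sqrt(q / (n - q))
    high = mean + std * np.sqrt((n - q) / q)
    return np.where(mask, high, low)

rng = np.random.default_rng(6)
block = rng.integers(0, 256, (4, 4)).astype(float)
rec = btc_block(block)
print(round(block.mean(), 2), round(rec.mean(), 2))  # first moment preserved
print(round(block.std(), 2), round(rec.std(), 2))    # second moment preserved
```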
49

"Stereoscopic video coding." Chinese University of Hong Kong, 1995. http://library.cuhk.edu.hk/record=b5895597.

Abstract:
by Roland Siu-kwong Ip.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1995.
Includes bibliographical references (leaves 101-[105]).
Table of contents: 1. Introduction (motivation; image compression; video compression; stereoscopic video compression; organization of the thesis); 2. Motion Video Coding Theory (representations, including temporal and spatial processing; scalar and vector quantization; code word assignment; selection of a video coding standard); 3. MPEG Compatible Stereoscopic Coding (MPEG compatibility; coding by stereoscopic differences; I-pictures-only disparity coding; the stereoscopic MPEG encoder, with stereo disparity estimation and bitstream multiplexing; generic implementation, including the macroblock converter, DCT functional block and rate control; the stereoscopic MPEG decoder for mono and stereo playback); 4. Performance Evaluation (test sequence generation; simulation environment; objective and subjective results); 5. Conclusions; Appendix A: MPEG - An International Standard (preprocessing; data structure of pictures; picture coding, including coding of motion vectors and of quantized coefficients); References.
50

Khataie, Manijeh. "Structured vector quantizers in image coding." Thesis, 1999. http://spectrum.library.concordia.ca/993/1/NQ47711.pdf.

Abstract:
Image data compression is concerned with minimizing the volume of data used to represent an image. In recent years, image compression algorithms using Vector Quantization (VQ) have been receiving considerable attention. Unstructured vector quantizers, i.e., those with no restriction on the geometrical structure of the codebook, suffer from two basic drawbacks, viz., the codebook search complexity and the large storage requirement. This explains the interest in structured VQ schemes, such as lattice-based VQ and multi-stage VQ. The objective of this thesis is to devise techniques to reduce the complexity of vector quantizers. In order to reduce the codebook search complexity and memory requirement, a universal Gaussian codebook is used in a residual VQ or a lattice-based VQ. To achieve better performance, part of the work is carried out in the frequency domain. Specifically, in order to retain the high-frequency coefficients in transform coding, two methods are suggested: one developed for moderate to high rate data compression, the other effective for low to moderate data rates. In the first part of this thesis, a residual VQ using a low-rate optimal VQ in the first stage and a Gaussian codebook in the other stages is introduced. From rate-distortion theory, for most memoryless sources and many Gaussian sources with memory, the quantization error under the MSE criterion, for small distortion, is memoryless and Gaussian. For VQ with a realistic rate, the error signal has a non-Gaussian distribution; it is shown, however, that the distribution of locally normalized error signals becomes close to Gaussian. In the second part, a new two-stage quantizer is proposed. The function of the first stage is to encode the more important low-pass components of the image, and that of the second is to do the same for the high-frequency components ignored in the first stage. In one scheme, a high-rate lattice-based vector quantizer is used in both stages; in another, standard JPEG at a low rate is used in the first stage and a lattice-based VQ in the second. The resulting bit rate of the two-stage lattice-based VQ in either scheme is found to be considerably better than that of JPEG for moderate to high bit rates. In the third part of the thesis, a method to retain the high-frequency coefficients is proposed, using a relatively huge codebook obtained by truncating the lattice at a large radius. As a result, a large number of points fall inside the boundary of the codebook, and the images are encoded with high quality and low complexity. To reduce the bit rate, a shorter representation is assigned to the more frequently used lattice points. To index the large number of lattice points which fall inside the boundary, two methods based on grouping the lattice points according to their frequencies of occurrence are proposed. For most of the test images, the proposed method of retaining high-frequency coefficients is found to outperform JPEG.
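As a small illustration of why lattice-based VQ needs neither a codebook search nor codebook storage, the sketch below quantizes 4-vectors to the D4 lattice with Conway and Sloane's nearest-point rule (an example of the general technique, not the specific quantizers designed in the thesis):

```python
import numpy as np

def nearest_d4(x):
    """Nearest point of the D4 lattice (integer 4-vectors with even sum):
    round every coordinate, and if the coordinate sum is odd, re-round the
    coordinate with the largest rounding error the other way."""
    f = np.round(x)
    if int(f.sum()) % 2 == 0:
        return f.astype(int)
    i = np.argmax(np.abs(x - f))          # worst-rounded coordinate
    f[i] += 1.0 if x[i] > f[i] else -1.0  # push it to the next nearest integer
    return f.astype(int)

def lattice_vq(vec, scale):
    """Scale, quantize to D4, rescale: structured VQ with no stored codebook."""
    return scale * nearest_d4(np.asarray(vec, dtype=float) / scale)

rng = np.random.default_rng(7)
v = rng.normal(size=4)
for scale in (1.0, 0.5, 0.25):
    q = lattice_vq(v, scale)
    print(scale, q, round(float(np.abs(v - q).max()), 3))  # finer scale, less error
```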