Dissertations / Theses on the topic 'Image coding standard'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Image coding standard.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Pao, I.-Ming. "Improved standard-conforming video coding techniques /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/5936.
Yeung, Yick Ming. "Fast rate control for JPEG2000 image coding /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20YEUNG.
Includes bibliographical references (leaves 63-65). Also available in electronic version. Access restricted to campus users.
Xin, Jun. "Improved standard-conforming video transcoding techniques /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/5871.
Giakoumakis, Michail D. "Refinements in a DCT based non-uniform embedding watermarking scheme." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FGiakoumakis.pdf.
Thesis advisor(s): Roberto Cristi, Ron Pieper, Craig Rasmussen. Includes bibliographical references (p. 119-121). Also available online.
Kamaras, Konstantinos. "JPEG2000 image compression and error resilience for transmission over wireless channels." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://sirsi.nps.navy.mil/uhtbin/hyperion-image/02Mar%5FKamaras.pdf.
Thorpe, Christopher. "Compression aided feature based steganalysis of perturbed quantization steganography in JPEG images." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 114 p, 2008. http://proquest.umi.com/pqdweb?did=1459914021&sid=6&Fmt=2&clientId=8331&RQT=309&VName=PQD.
Gupta, Amit Kumar Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Hardware optimization of JPEG2000." Awarded by:University of New South Wales. School of Electrical Engineering and Telecommunications, 2006. http://handle.unsw.edu.au/1959.4/30581.
Abraham, Arun S. "Bandwidth-aware video transmission with adaptive image scaling." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0001221.
Gatica, Perez Daniel. "Extensive operators in lattices of partitions for digital video analysis /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/5874.
Uehara, Takeyuki. "Contributions to image encryption and authentication." Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20040920.124409/index.html.
Dyer, Michael Ian Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Hardware Implementation Techniques for JPEG2000." Awarded by:University of New South Wales. Electrical Engineering and Telecommunications, 2007. http://handle.unsw.edu.au/1959.4/30510.
Meng, Bojun. "Efficient intra prediction algorithm in H.264 /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20MENG.
Full textIncludes bibliographical references (leaves 66-68). Also available in electronic version. Access restricted to campus users.
Frandina, Peter. "VHDL modeling and synthesis of the JPEG-XR inverse transform /." Online version of thesis, 2009. http://hdl.handle.net/1850/10755.
Lin, Li-Yang. "VLSI implementation for MPEG-1/Audio Layer III chip : bitstream processor - low power design /." [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18396.pdf.
Nolte, Ernst Hendrik. "Image compression quality measurement : a comparison of the performance of JPEG and fractal compression on satellite images." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51796.
Full textENGLISH ABSTRACT: The purpose of this thesis is to investigate the nature of digital image compression and the calculation of the quality of the compressed images. The work is focused on greyscale images in the domain of satellite images and aerial photographs. Two compression techniques are studied in detail namely the JPEG and fractal compression methods. Implementations of both these techniques are then applied to a set of test images. The rest of this thesis is dedicated to investigating the measurement of the loss of quality that was introduced by the compression. A general method for quality measurement (signal To Noise Ratio) is discussed as well as a technique that was presented in literature quite recently (Grey Block Distance). Hereafter, a new measure is presented. After this, a means of comparing the performance of these measures is presented. It was found that the new measure for image quality estimation performed marginally better than the SNR algorithm. Lastly, some possible improvements on this technique are mentioned and the validity of the method used for comparing the quality measures is discussed.
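As background to the quality measures named in this abstract, the Signal to Noise Ratio baseline can be sketched in a few lines. This is a generic illustration for greyscale images, not the thesis's implementation, and the function names are made up:

```python
import numpy as np

def snr_db(original: np.ndarray, compressed: np.ndarray) -> float:
    """Signal-to-noise ratio in dB: signal power over error power."""
    orig = original.astype(np.float64)
    noise = orig - compressed.astype(np.float64)
    return 10.0 * np.log10(np.mean(orig ** 2) / np.mean(noise ** 2))

def psnr_db(original: np.ndarray, compressed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit greyscale images."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

PSNR, the peak-referenced variant more common in compression work, is included only for comparison with the plain SNR the abstract mentions.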
Shao, Wenbin. "Automatic annotation of digital photos." Access electronically, 2007. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20080403.120857/index.html.
Muller, Rikus. "Applying the MDCT to image compression." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/1197.
Full textThe replacement of the standard discrete cosine transform (DCT) of JPEG with the windowed modifed DCT (MDCT) is investigated to determine whether improvements in numerical quality can be achieved. To this end, we employ an existing algorithm for optimal quantisation, for which we also propose improvements. This involves the modelling and prediction of quantisation tables to initialise the algorithm, a strategy that is also thoroughly tested. Furthermore, the effects of various window functions on the coding results are investigated, and we find that improved quality can indeed be achieved by modifying JPEG in this fashion.
Natu, Ambarish Shrikrishna Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Error resilience in JPEG2000." Awarded by:University of New South Wales. Electrical Engineering and Telecommunications, 2003. http://handle.unsw.edu.au/1959.4/18835.
Choi, Kai-san. "Automatic source camera identification by lens aberration and JPEG compression statistics." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B38902345.
Choi, Kai-san, and 蔡啟新. "Automatic source camera identification by lens aberration and JPEG compression statistics." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B38902345.
Muller, Rikus. "A study of image compression techniques, with specific focus on weighted finite automata." Thesis, Link to the online version, 2005. http://hdl.handle.net/10019/1128.
Shah, Syed Irtiza Ali. "Single camera based vision systems for ground and aerial robots." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37143.
Jamrozik, Michele Lynn. "Spatio-temporal segmentation in the compressed domain." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/15681.
Chien, Chi-Hao. "A comparison study of the implementation of digital camera's RAW and JPEG and scanner's TIFF file formats, and color management procedures for inkjet textile printing applications /." Online version of thesis, 2009. http://hdl.handle.net/1850/10886.
Kailasanathan, Chandrapal. "Securing digital images." Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041026.150935/index.html.
Samuel, Sindhu. "Digital rights management (DRM) : watermark encoding scheme for JPEG images." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-09122008-182920/.
Kang, James M. "A query engine of novelty in video streams /." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/977.
Pevný, Tomáš. "Kernel methods in steganalysis." Diss., Online access via UMI:, 2008.
Lucero, Aldo. "Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.
Zitzmann, Cathel. "Détection statistique d'information cachée dans des images naturelles." Thesis, Troyes, 2013. http://www.theses.fr/2013TROY0012/document.
The need for secure communication is not new: since ancient times, methods have existed to conceal communication. Cryptography makes a message unintelligible through encryption; steganography hides the very fact that a message is being exchanged. This thesis is part of the "Hidden Information Research" project funded by the French National Research Agency, in which the Troyes University of Technology worked on the mathematical modeling of natural images and the creation of detectors of hidden information in digital pictures. The thesis studies steganalysis in natural images in terms of parametric statistical decision theory. For JPEG images, a detector based on modeling the quantized DCT coefficients is proposed, and its detection probabilities are established theoretically. In addition, a study of the number of shrinkages occurring during embedding by the F3 and F4 algorithms is presented. Finally, for uncompressed images, the proposed tests are optimal under certain constraints; one difficulty overcome is the quantization of the data.
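The shrinkage phenomenon studied in this thesis arises because F3-style embedding decrements a coefficient's magnitude when its LSB disagrees with the message bit; a coefficient that reaches zero no longer carries the bit, which must then be re-embedded. A toy sketch of counting shrinkage follows; it is a simplification of F3 with made-up coefficient values, not the thesis's detector:

```python
def f3_embed(coeffs, bits):
    """Embed bits into non-zero coefficients F3-style; return (stego, shrinkage count)."""
    stego = list(coeffs)
    shrinkage = 0
    bit_iter = iter(bits)
    bit = next(bit_iter, None)
    for i, c in enumerate(stego):
        if c == 0 or bit is None:
            continue  # zeros carry no payload; stop once all bits are placed
        if abs(c) % 2 == bit:
            bit = next(bit_iter, None)      # LSB already matches: bit embedded
        else:
            c = c - 1 if c > 0 else c + 1   # decrement magnitude toward zero
            stego[i] = c
            if c == 0:
                shrinkage += 1              # shrinkage: bit must be re-embedded
            else:
                bit = next(bit_iter, None)
    return stego, shrinkage
```

A steganalyst can exploit exactly this statistic: shrinkage changes the histogram of quantized DCT coefficients in a detectable way.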
Beltrão, Gabriel Tedgue. "Rápida predição da direção do bloco para aplicação com transformadas direcionais." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260075.
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: DCT-based transforms are widely adopted for video compression. Recently, many authors have highlighted that prediction residuals usually have directional structures that cannot be efficiently represented by the conventional DCT. In this context, many directional transforms have been proposed as a way to overcome the DCT's deficiency in dealing with such structures. Although directional transforms have superior performance over the conventional DCT, their application to video compression requires evaluating the increase in coding time and implementation complexity. This work proposes a fast algorithm for estimating block directions before applying directional transforms. The encoder identifies the predominant direction in each block and applies only the transform corresponding to that direction. The algorithm can be used in conjunction with any directional-transform proposal that uses rate-distortion optimization (RDO) to select the direction to be explored, reducing implementation complexity to levels similar to those obtained when only the conventional DCT is used.
Master's
Telecommunications and Telematics
Master in Electrical Engineering
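The abstract above does not give the thesis's direction-estimation algorithm. As a generic sketch of the underlying idea (estimating a block's dominant gradient orientation, here via the 2x2 structure tensor, which is an assumption rather than the author's method), one might write:

```python
import numpy as np

def dominant_angle(block: np.ndarray) -> float:
    """Dominant gradient orientation of a block, in radians.

    Edges run perpendicular to the returned orientation. Uses the
    structure tensor [[Jxx, Jxy], [Jxy, Jyy]] built from image gradients.
    """
    gy, gx = np.gradient(block.astype(np.float64))
    jxx = np.sum(gx * gx)
    jyy = np.sum(gy * gy)
    jxy = np.sum(gx * gy)
    # Orientation of the tensor's dominant eigenvector.
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
```

A directional-transform encoder could quantize this angle to the nearest supported transform direction instead of trying every direction under RDO.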
Zhou, Zhi. "Standards conforming video coding optimization /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/5984.
Zhang, Kui. "Knowledge based image sequence compression." Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/843195/.
Full textAraujo, André Filgueiras de. "Uma proposta de estimação de movimento para o codificador de vídeo Dirac." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261689.
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: The main purpose of this work is to design a new algorithm that enhances motion estimation in the Dirac video codec. Motion estimation is a critical stage in video coding, in which most of the processing lies. The recently released Dirac codec is based on techniques different from those usually employed (as in MPEG-based codecs), and aims at achieving efficiency comparable to the best current codecs (such as H.264/AVC). This work initially presents comparative studies of state-of-the-art motion estimation techniques and of the Dirac codec, which support the conception of the algorithm proposed in the sequel. The proposal is the Modified Hierarchical Enhanced Adaptive Rood Pattern Search (MHEARPS) algorithm. It presents superior performance compared to other relevant algorithms in every analysed case, providing on average 79% fewer computations with similar video reconstruction quality.
Master's
Telecommunications and Telematics
Master in Electrical Engineering
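MHEARPS itself is not reproduced in the abstract. The baseline that such fast algorithms approximate is exhaustive block matching by sum of absolute differences (SAD), which can be sketched as follows (a generic illustration; argument names and the search layout are assumptions):

```python
import numpy as np

def full_search(ref: np.ndarray, block: np.ndarray, top: int, left: int, radius: int):
    """Exhaustive block matching: best (dy, dx) within +/-radius minimizing SAD."""
    bh, bw = block.shape
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                continue  # candidate window falls outside the reference frame
            cand = ref[y:y + bh, x:x + bw]
            sad = np.sum(np.abs(cand.astype(np.int64) - block.astype(np.int64)))
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

Fast pattern searches such as MHEARPS visit only a small, adaptively chosen subset of these candidate positions, which is where the reported computation savings come from.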
Almeida, Junior Jurandy Gomes de 1983. "Recuperação de vídeos comprimidos por conteúdo." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275706.
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Recent advances in technology have enabled the increased availability of video data, creating large digital video collections. This has spurred great interest in systems that are able to manage those data in an efficient way. Making efficient use of video information requires the development of powerful tools to extract high-level semantics from low-level features of the video content. Due to the complexity of the video material, there are five main challenges in designing such systems: (1) to divide the video stream into manageable segments according to its organization structure; (2) to implement algorithms for encoding the low-level features of each video segment into feature vectors; (3) to develop similarity measures for comparing these segments by using their feature vectors; (4) to quickly answer similarity queries over a huge amount of video sequences; and (5) to present the list of results in a user-friendly way. Numerous techniques have been proposed to support such requirements. Most existing works involve algorithms and methods that are computationally expensive, in terms of both time and space, limiting their application to the academic world and/or big companies. Contrary to this trend, the market has shown a growing demand for mobile and embedded devices. In this scenario, it is imperative to develop techniques that are as effective as they are efficient, in order to allow more people to have access to modern technologies. In this context, this work presents five novel approaches for the analysis, indexing, and retrieval of digital videos. All these contributions are combined to create a computationally fast system for content-based video management, which is able to achieve a quality level similar, or even superior, to current solutions.
Doctorate
Computer Science
Doctor in Computer Science
Silva, Cauane Blumenberg. "Adaptive tiling algorithm based on highly correlated picture regions for the HEVC standard." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/96040.
This Master's thesis proposes an adaptive algorithm that dynamically chooses suitable tile partitions for intra- and inter-predicted frames in order to reduce the impact on coding efficiency arising from such partitioning. Tiles are novel parallelism-oriented tools in the High Efficiency Video Coding (HEVC) standard that divide the frame into independent rectangular regions that can be processed in parallel. To enable this parallelism, tiles break the coding dependencies across their boundaries, at a cost in coding efficiency. This cost can be even higher if tile boundaries split highly correlated picture regions, because most coding tools use context information during the encoding process. Hence, the proposed algorithm clusters highly correlated picture regions inside the same tile to reduce the inherent coding efficiency impact of using tiles. To locate the highly correlated picture regions, image characteristics and encoding information are analyzed, generating partitioning maps that serve as the algorithm's input. Based on these maps, the algorithm locates the natural context breaks of the picture and defines the tile boundaries on these key regions. This way, the dependency breaks caused by tile boundaries match the natural context breaks of a picture, minimizing the coding efficiency losses caused by the use of tiles. The proposed adaptive tiling algorithm, in some cases, provides over 0.4% and over 0.5% BD-rate savings for intra- and inter-predicted frames respectively, when compared to uniform-spaced tiles, an approach that does not consider the picture context when defining tile partitions.
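In its simplest one-dimensional form, the idea of aligning tile boundaries with natural context breaks reduces to placing a boundary where the inter-column context is weakest. A hedged sketch of that idea follows (the per-column cost map and the minimum tile width are assumptions for illustration, not the thesis's actual inputs):

```python
import numpy as np

def tile_boundary(cost_per_column: np.ndarray, min_width: int) -> int:
    """Pick a vertical tile boundary at the column with the weakest context.

    cost_per_column: assumed map of cross-column correlation strength,
    e.g. gradient energy; lower cost means a more natural break point.
    min_width: smallest allowed tile width on either side of the boundary.
    """
    n = len(cost_per_column)
    valid = cost_per_column[min_width:n - min_width]
    return int(np.argmin(valid)) + min_width
```

A full 2-D version would repeat this over rows and columns per the partitioning maps; the sketch only shows why a context-aware boundary beats a uniform split.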
Nalluri, Purnachand. "A fast motion estimation algorithm and its VLSI architecture for high efficiency video coding." Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/15442.
Video coding has been used in applications like video surveillance, video conferencing, video streaming, video broadcasting and video storage. In a typical video coding standard, many algorithms are combined to compress a video. However, one of those algorithms, motion estimation, is the most complex task. Hence, it is necessary to implement this task in real time by using appropriate VLSI architectures. This thesis proposes a new fast motion estimation algorithm and its real-time implementation. The results show that the proposed algorithm and its motion estimation hardware architecture outperform the state of the art. The proposed architecture operates at a maximum frequency of 241.6 MHz and is able to process 1080p@60Hz video with all variable block sizes specified in the HEVC standard, as well as a motion vector search range of up to ±64 pixels.
"Object-based scalable wavelet image and video coding." Thesis, 2008. http://library.cuhk.edu.hk/record=b6074669.
The objective of this thesis is to develop an object-based coding framework built upon a family of wavelet coding techniques for a variety of arbitrarily shaped visual object scalable coding applications. Two kinds of arbitrarily shaped visual object scalable coding techniques are investigated in this thesis: one is object-based scalable wavelet still image coding; the other is object-based scalable wavelet video coding.
The second part of this thesis investigates various components of object-based scalable wavelet video coding. A generalized 3-D object-based directional threading, which unifies the concepts of temporal motion threading and spatial directional threading, is seamlessly incorporated into 3-D shape-adaptive directional wavelet transform to exploit the spatio-temporal correlation inside the 3-D video object. To improve the computational efficiency of multi-resolution motion estimation (MRME) in shift-invariant wavelet domain, two fast MRME algorithms are proposed for wavelet-based scalable video coding. As demonstrated in the experiments, the proposed 3-D object-based wavelet video coding techniques consistently outperform MPEG-4 and other wavelet-based schemes for coding arbitrarily shaped video object, while providing full spatio-temporal-quality scalability with non-redundant 3-D subband decomposition.
Liu, Yu.
Adviser: King Ngi Ngan.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3693.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 166-173).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
"Error-resilient coding tools in MPEG-4." 1998. http://library.cuhk.edu.hk/record=b5889563.
Thesis submitted in: July 1997.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 70-71).
Abstract also in Chinese.
Chapter 1 --- Introduction
1.1 --- Image Coding Standard: JPEG
1.2 --- Video Coding Standard: MPEG
1.2.1 --- MPEG history
1.2.2 --- MPEG video compression algorithm overview
1.2.3 --- More MPEG features
1.3 --- Summary
Chapter 2 --- Error Resiliency
2.1 --- Introduction
2.2 --- Traditional approaches
2.2.1 --- Channel coding
2.2.2 --- ARQ
2.2.3 --- Multi-layer coding
2.2.4 --- Error Concealment
2.3 --- MPEG-4 work on error resilience
2.3.1 --- Resynchronization
2.3.2 --- Data Recovery
2.3.3 --- Error Concealment
2.4 --- Summary
Chapter 3 --- Fixed length codes
3.1 --- Introduction
3.2 --- Tunstall code
3.3 --- Lempel-Ziv code
3.3.1 --- LZ-77
3.3.2 --- LZ-78
3.4 --- Simulation
3.4.1 --- Experiment Setup
3.4.2 --- Results
3.4.3 --- Concluding Remarks
Chapter 4 --- Self-Synchronizable codes
4.1 --- Introduction
4.2 --- Scholtz synchronizable code
4.2.1 --- Definition
4.2.2 --- Construction procedure
4.2.3 --- Synchronizer
4.2.4 --- Effects of errors
4.3 --- Simulation
4.3.1 --- Experiment Setup
4.3.2 --- Results
4.4 --- Concluding Remarks
Chapter 5 --- Conclusions
References
"Model- and image-based scene representation." 1999. http://library.cuhk.edu.hk/record=b5889926.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1999.
Includes bibliographical references (leaves 97-101).
Abstracts in English and Chinese.
Chapter 1 --- Introduction
1.1 --- Video representation using panorama mosaic and 3D face model
1.2 --- Mosaic-based Video Representation
1.3 --- 3D Human Face modeling
Chapter 2 --- Background
2.1 --- Video Representation using Mosaic Image
2.1.1 --- Traditional Video Compression
2.2 --- 3D Face model Reconstruction via Multiple Views
2.2.1 --- Shape from Silhouettes
2.2.2 --- Head and Face Model Reconstruction
2.2.3 --- Reconstruction using Generic Model
Chapter 3 --- System Overview
3.1 --- Panoramic Video Coding Process
3.2 --- 3D Face model Reconstruction Process
Chapter 4 --- Panoramic Video Representation
4.1 --- Mosaic Construction
4.1.1 --- Cylindrical Panorama Mosaic
4.1.2 --- Cylindrical Projection of Mosaic Image
4.2 --- Foreground Segmentation and Registration
4.2.1 --- Segmentation Using Panorama Mosaic
4.2.2 --- Determination of Background by Local Processing
4.2.3 --- Segmentation from Frame-Mosaic Comparison
4.3 --- Compression of the Foreground Regions
4.3.1 --- MPEG-1 Compression
4.3.2 --- MPEG Coding Method: I/P/B Frames
4.4 --- Video Stream Reconstruction
Chapter 5 --- Three Dimensional Human Face modeling
5.1 --- Capturing Images for 3D Face modeling
5.2 --- Shape Estimation and Model Deformation
5.2.1 --- Head Shape Estimation and Model deformation
5.2.2 --- Face organs shaping and positioning
5.2.3 --- Reconstruction with both intrinsic and extrinsic parameters
5.2.4 --- Reconstruction with only Intrinsic Parameter
5.2.5 --- Essential Matrix
5.2.6 --- Estimation of Essential Matrix
5.2.7 --- Recovery of 3D Coordinates from Essential Matrix
5.3 --- Integration of Head Shape and Face Organs
5.4 --- Texture-Mapping
Chapter 6 --- Experimental Result & Discussion
6.1 --- Panoramic Video Representation
6.1.1 --- Compression Improvement from Foreground Extraction
6.1.2 --- Video Compression Performance
6.1.3 --- Quality of Reconstructed Video Sequence
6.2 --- 3D Face model Reconstruction
Chapter 7 --- Conclusion and Future Direction
Bibliography
"The effects of evaluation and rotation on descriptors and similarity measures for a single class of image objects." Thesis, 2008. http://hdl.handle.net/10210/564.
Full text
Dr. W.A. Clarke
Amiri, Delaram. "Bilateral and adaptive loop filter implementations in 3D-high efficiency video coding standard." Thesis, 2015. http://hdl.handle.net/1805/7983.
Full text
In this thesis, we describe a different implementation of an in-loop filtering method for 3D-HEVC. First, we propose the use of the adaptive loop filtering (ALF) technique for 3D-HEVC standard in-loop filtering. This filter uses a Wiener-based method to minimize the mean squared error between the filtered and original pixels. The performance of the adaptive loop filter at the picture level is evaluated. Results show up to 0.2 dB PSNR improvement in the luminance component for the texture and 2.1 dB for the depth. In addition, we obtain up to 0.1 dB improvement in the chrominance components for the texture view after applying this filter in picture-based filtering. Moreover, a design of in-loop filtering with a fast bilateral filter for the 3D-HEVC standard is proposed. The bilateral filter smooths an image while preserving strong edges, and it can remove artifacts in an image. The performance of the bilateral filter at the picture level for 3D-HEVC is evaluated. Test model HTM-6.2 is used to demonstrate the results. Results show up to 20 percent reduction in the processing time of 3D-HEVC, with little effect on the PSNR of the encoded 3D video, when using the fast bilateral filter.
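The bilateral filter named in this abstract admits a compact illustration. Below is a minimal sketch of a (non-fast) bilateral filter in Python/NumPy, an illustrative example only, not the HTM-6.2 implementation; the parameter names `sigma_s` and `sigma_r` are the usual spatial and range bandwidths, chosen here for clarity:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=10.0):
    """Smooth `img` while preserving edges: each output pixel is a
    weighted mean of its neighbours, where the weights fall off with
    both spatial distance (sigma_s) and intensity difference (sigma_r)."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian over the (2r+1) x (2r+1) window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2*radius + 1, x:x + 2*radius + 1]
            # Range kernel: down-weight neighbours with a very
            # different intensity, so strong edges are preserved.
            rng = np.exp(-(window - img[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * window) / np.sum(wgt)
    return out
```

With a small `sigma_r`, pixels across a sharp step edge receive near-zero weight, so the edge survives filtering while flat regions are smoothed; this is the artifact-removal behaviour the abstract describes.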
"Novel error resilient techniques for the robust transport of MPEG-4 video over error-prone networks." 2004. http://library.cuhk.edu.hk/record=b6073624.
Full text"May 2004."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (p. 117-131).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
"Multiplexing video traffic using frame-skipping aggregation technique." 1998. http://library.cuhk.edu.hk/record=b5889564.
Full text
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 53-[56]).
Abstract also in Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- MPEG Overview --- p.5
Chapter 3 --- Framework of Frame-Skipping Lossy Aggregation --- p.10
Chapter 3.1 --- Video Frames Delivery using Round-Robin Scheduling --- p.10
Chapter 3.2 --- Underflow Safety Margin on Receiver Buffers --- p.12
Chapter 3.3 --- Algorithm in Frame-Skipping Aggregation Controller --- p.13
Chapter 4 --- Replacement of Skipped Frames in MPEG Sequence --- p.17
Chapter 5 --- Subjective Assessment Test on Frame-Skipped Video --- p.21
Chapter 5.1 --- Test Settings and Material --- p.22
Chapter 5.2 --- Choice of Test Methods --- p.23
Chapter 5.3 --- Test Procedures --- p.25
Chapter 5.4 --- Test Results --- p.26
Chapter 6 --- Performance Study --- p.29
Chapter 6.1 --- Experiment 1: Number of Supportable Streams --- p.31
Chapter 6.2 --- Experiment 2: Frame-Skipping Rate When Multiplexing on a Leased T3 Link --- p.33
Chapter 6.3 --- Experiment 3: Bandwidth Usage --- p.35
Chapter 6.4 --- Experiment 4: Optimal USMT --- p.38
Chapter 7 --- Implementation Considerations --- p.41
Chapter 8 --- Conclusions --- p.45
Chapter A --- The Construction of Stuffed Artificial B Frame --- p.48
Bibliography --- p.53
Sevcenco, Ana-Maria. "Adaptive strategies and optimization techniques for JPEG-based low bit-rate image coding." Thesis, 2007. http://hdl.handle.net/1828/2282.
Full text
Natu, Ambarish Shrikrishna. "Error resilience in JPEG2000 /." 2003. http://www.library.unsw.edu.au/~thesis/adt-NUN/public/adt-NUN20030519.163058/index.html.
Full text
Kesireddy, Akitha. "A new adaptive trilateral filter for in-loop filtering." Thesis, 2014. http://hdl.handle.net/1805/5927.
Full text
HEVC has achieved significant coding efficiency improvement beyond existing video coding standards by employing many new coding tools. The deblocking filter, sample adaptive offset and adaptive loop filter for in-loop filtering are currently introduced for HEVC standardization. However, these filters are implemented in the spatial domain despite the temporal correlation within video sequences. To reduce artifacts and better align object boundaries in video, a new in-loop filtering algorithm is proposed. The proposed algorithm is implemented in the HM-11.0 software. It allows an average bitrate reduction of about 0.7% and improves the PSNR of the decoded frame by 0.05%, 0.30% and 0.35% in the luminance and chroma components.
"Arbitrary block-size transform video coding." Thesis, 2011. http://library.cuhk.edu.hk/record=b6075117.
Full text
In this thesis, the development of simple but efficient order-16 transforms is shown. Analysis and comparison with existing order-16 transforms have been carried out. The proposed order-16 transforms were individually integrated into the existing coding standard reference software so as to achieve a new ABT system. In the proposed ABT system, order-4, order-8 and order-16 transforms coexist. The selection of the most appropriate transform is based on the rate-distortion performance of these transforms. A remarkable improvement in coding performance is shown in the experimental results. A significant bit rate reduction can be achieved with the proposed ABT system while both subjective and objective qualities remain unchanged.
Prior knowledge of the coefficient distribution is key to achieving better coding performance. This is very useful in many areas of coding, such as rate control, rate-distortion optimization, etc. It is also shown that the coefficient distribution of the predicted residue is closer to a Cauchy distribution than to the traditionally assumed Laplace distribution. This can effectively improve existing processing techniques.
Three kinds of order-16 orthogonal DCT-like integer transforms are proposed in this thesis. The first one is the simple integer transform, which is expanded from an existing order-8 ICT. The second one is the hybrid integer transform derived from the Dyadic Weighted Walsh Transform (DWWT); it is shown to perform better than the simple integer transform. The last one is a recursive transform: an order-2N transform can be derived from an order-N one. It is very close to the DCT. This recursive transform can be implemented in two different ways, denoted LLMICT and CSFICT. They have excellent coding performance. These proposed transforms are investigated and implemented in the reference software of H.264 and AVS. They are also compared with other order-16 orthogonal integer transforms. Experimental results show that the proposed transforms give excellent coding performance and are easy to compute.
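The order-2N-from-order-N recursion can be illustrated with the Walsh-Hadamard construction, the simplest orthogonal transform with this property. This is an analogy only: the LLMICT/CSFICT transforms in the thesis use DCT-like butterflies, not the plain Hadamard kernel sketched here:

```python
import numpy as np

def hadamard(n):
    """Build an order-n orthonormal Walsh-Hadamard matrix recursively:
    the order-2N matrix is assembled from two copies of the order-N one,
    analogous to deriving an order-2N transform from an order-N one."""
    if n == 1:
        return np.array([[1.0]])
    h = hadamard(n // 2)
    # Butterfly step: stack [h, h; h, -h] and renormalize.
    return np.block([[h, h], [h, -h]]) / np.sqrt(2)
```

Each recursion level doubles the order while preserving orthogonality, so the order-16 matrix still satisfies H Hᵀ = I; integer DCT-like designs follow the same doubling pattern with more elaborate butterfly coefficients.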
Transform is a very important coding tool in video coding. It decorrelates the pixel data and removes the redundancy among pixels so as to achieve compression. Traditionally, the order-8 transform is used in video and image coding. The latest video coding standards, such as H.264/AVC, adopt both order-4 and order-8 transforms. The adaptive use of more than one transform of different sizes is known as Arbitrary Block-size Transform (ABT). Transforms other than order-4 and order-8 can also be used in ABT. It is expected that larger transform sizes such as order-16 will benefit more in video sequences with higher resolutions, such as 720p and 1080p sequences. As a result, the order-16 transform is introduced into the ABT system.
Fong, Chi Keung.
Adviser: Wai Kuen Cham.
Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references.
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Ya-Wen Chao and 趙雅雯. "Improved Image Resampling Filters for Spatial Scalable Video Coding Standards." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/86307557163892127402.
Full text
National Cheng Kung University
Department of Electrical Engineering
98
This thesis proposes a downsampling filter and an upsampling filter for spatial scalable video coding. The bilateral filter and adaptive filter length concepts are used in the downsampling filter to reduce the loss of edge information in images. By smoothing homogeneous areas and preserving details in non-homogeneous areas of images, the coding bits are reduced in base layer coding. At the same time, the edge-preserving property in the base layer also provides a better prediction to save coding bits in the enhancement layer. For the upsampling filter, the direction information of an image is used: the local gradient determines the edges of an image, and the missing pixels on the edges are obtained by performing directional interpolation. Experimental results show that, for the proposed downsampling filter, a 1.5% bit-rate reduction is achieved in the enhancement layer while decreasing bit-rates by about 20% on average in the base layer. For the proposed directional upsampling filter, the PSNR improvement and bit-rate reduction are 0.01 dB~0.26 dB and 0.2%~16.3%, respectively.
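The directional-interpolation idea can be sketched with the classic edge-based line averaging (ELA) heuristic, which picks, per missing pixel, the direction with the smallest intensity difference. This is a simplified stand-in for illustration, not the thesis's actual gradient-driven upsampling filter:

```python
import numpy as np

def ela_interpolate_rows(img):
    """Double the vertical resolution: insert a new row between every
    pair of rows, estimating each missing pixel by averaging along the
    direction (45-degree, vertical, or 135-degree) whose two known
    samples differ the least, so interpolation follows local edges."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros((2 * h - 1, w))
    out[::2] = img  # keep the original rows
    for y in range(h - 1):
        top, bot = img[y], img[y + 1]
        for x in range(w):
            xl, xr = max(x - 1, 0), min(x + 1, w - 1)
            # Candidate (top sample, bottom sample) pairs per direction.
            cands = [(top[xl], bot[xr]),   # 45-degree edge
                     (top[x],  bot[x]),    # vertical
                     (top[xr], bot[xl])]   # 135-degree edge
            a, b = min(cands, key=lambda p: abs(p[0] - p[1]))
            out[2 * y + 1, x] = (a + b) / 2
    return out
```

Averaging along the lowest-difference direction keeps diagonal edges sharp, whereas plain vertical averaging would blur them; a production upsampler would add gradient estimation and longer adaptive taps, as the abstract describes.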
"Analysis, coding, and processing for high-definition videos." Thesis, 2010. http://library.cuhk.edu.hk/record=b6074847.
Full text
Secondly, two techniques for HD video coding are developed based on the aforementioned analysis results. To exploit the spatial property, 2D order-16 transforms are proposed to code the more highly correlated signals more efficiently. Specifically, two series of 2D order-16 integer transforms, named modified integer cosine transform (MICT) and non-orthogonal integer cosine transform (NICT), are studied and developed to provide different trade-offs between performance and complexity. Based on the property of special PSD, a parametric interpolation filter (PIF) is proposed for motion-compensated prediction (MCP). Not only can PIF track the non-stationary statistics of video signals as the related work shows, but it also represents interpolation filters by parameters instead of individual coefficients, thus resolving the conflict between the accuracy of the coefficients and the size of the side information. The experimental results show the proposed two coding techniques significantly outperform their equivalents in the state-of-the-art international video coding standards.
Thirdly, interlaced HD videos are studied, and to satisfy different delay constraints, two real-time de-interlacing algorithms are proposed specifically for H.264-coded videos. They adapt to local activities according to the syntax element (SE) values. Accuracy analysis is also introduced to deal with the disparity between the SE values and the real motions and textures. The de-interlacers provide better visual quality than the commonly used ones and can de-interlace 1080i sequences in real time on PCs.
Today, High-Definition (HD) videos are becoming more and more popular, with many applications. This thesis analyzes the characteristics of HD videos and accordingly develops the appropriate coding and processing techniques for hybrid video coding.
Dong, Jie.
Adviser: King Ngi Ngan.
Source: Dissertation Abstracts International, Volume: 72-01, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 153-158).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.