Doctoral dissertations on the topic "Image coding standard"


Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Check out the 50 best doctoral dissertations on the topic "Image coding standard".

An "Add to bibliography" button is available next to each work. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, when these are available in the metadata.

Browse doctoral dissertations from many disciplines and assemble an accurate bibliography.

1

Pao, I.-Ming. "Improved standard-conforming video coding techniques /". Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/5936.

2

Yeung, Yick Ming. "Fast rate control for JPEG2000 image coding /". View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20YEUNG.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 63-65). Also available in electronic version. Access restricted to campus users.
3

Xin, Jun. "Improved standard-conforming video transcoding techniques /". Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/5871.

4

Giakoumakis, Michail D. "Refinements in a DCT based non-uniform embedding watermarking scheme". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FGiakoumakis.pdf.

Abstract:
Thesis (M.S. in Applied Math and M.S. in Systems Engineering)--Naval Postgraduate School, March 2003.
Thesis advisor(s): Roberto Cristi, Ron Pieper, Craig Rasmussen. Includes bibliographical references (p. 119-121). Also available online.
5

Kamaras, Konstantinos. "JPEG2000 image compression and error resilience for transmission over wireless channels". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://sirsi.nps.navy.mil/uhtbin/hyperion-image/02Mar%5FKamaras.pdf.

6

Thorpe, Christopher. "Compression aided feature based steganalysis of perturbed quantization steganography in JPEG images". Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 114 p, 2008. http://proquest.umi.com/pqdweb?did=1459914021&sid=6&Fmt=2&clientId=8331&RQT=309&VName=PQD.

7

Gupta, Amit Kumar (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). "Hardware optimization of JPEG2000". Awarded by: University of New South Wales, School of Electrical Engineering and Telecommunications, 2006. http://handle.unsw.edu.au/1959.4/30581.

Abstract:
The key algorithms of JPEG2000, the new image compression standard, have high computational complexity and thus present challenges for efficient implementation. This has led to research on the hardware optimization of JPEG2000 for its efficient realization. Fortunately, advances in microelectronics allow us to realize dedicated ASIC solutions as well as hardware/software FPGA-based solutions for complex algorithms such as JPEG2000. But an efficient implementation within hard constraints of area and throughput demands investigation of the key dependencies within the JPEG2000 system. This work presents algorithms and VLSI architectures to realize a high-performance JPEG2000 compression system. The embedded block coding algorithm, which lies at the heart of a JPEG2000 compression system, is a main contributor to its complexity. This work first concentrates on algorithms to realize a low-cost, high-throughput Block Coder (BC) system. For this purpose, a Bit Plane Coder architecture capable of concurrent symbol processing is presented. Further, an optimal two-sub-bank memory and an efficient buffer architecture are designed to keep the hardware cost low. The proposed BC system presents the highest figure of merit (FOM) in terms of throughput versus hardware cost in comparison to existing BC solutions. This work also investigates the challenges involved in efficient integration of the BC with the overall JPEG2000 system. A novel low-cost distortion estimation approach with near-optimal performance is proposed, which is necessary for accurate rate-control performance in JPEG2000. Additionally, low-bandwidth data storage and transfer techniques are proposed for efficient transfer of subband samples to the BC. Simulation results show that the proposed techniques require approximately 4 times less bandwidth than existing architectures.
In addition, an efficient high-throughput block decoder architecture based on the proposed selective sample-skipping algorithm is presented. The proposed architectures are designed and analyzed on both ASIC and FPGA platforms. Thus, the proposed algorithms, architectures and BC integration strategies are useful for realizing a dedicated ASIC JPEG2000 system as well as a hardware/software FPGA-based JPEG2000 solution. Overall, this work presents algorithms and architectures to realize a high-performance JPEG2000 system without imposing any restrictions in terms of coding modes or block size for the BC system.
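The bit-plane coding idea at the heart of the embedded block coder can be pictured with a minimal sketch (plain Python; this omits the EBCOT context modelling and arithmetic coding entirely): coefficient magnitudes are signalled one bit-plane at a time, MSB first, so a stream truncated after a few planes still decodes to a usable approximation.

```python
def encode_bitplanes(coeffs, num_planes):
    """Split coefficient magnitudes into bit-planes, MSB first."""
    planes = []
    for p in range(num_planes - 1, -1, -1):
        planes.append([(abs(c) >> p) & 1 for c in coeffs])
    return planes

def decode_bitplanes(planes, signs, num_planes):
    """Progressively rebuild magnitudes from however many planes arrived."""
    mags = [0] * len(signs)
    for i, plane in enumerate(planes):
        p = num_planes - 1 - i
        for j, bit in enumerate(plane):
            mags[j] |= bit << p
    return [m if s >= 0 else -m for m, s in zip(mags, signs)]

coeffs = [13, -7, 0, 21, -2, 5]
signs = [1 if c >= 0 else -1 for c in coeffs]
planes = encode_bitplanes(coeffs, 5)

# Decoding all 5 planes is lossless; fewer planes give a coarser result.
full = decode_bitplanes(planes, signs, 5)
coarse = decode_bitplanes(planes[:2], signs, 5)
```

This progressive structure is also why distortion estimation and rate control, discussed above, can operate per coding pass.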
8

Abraham, Arun S. "Bandwidth-aware video transmission with adaptive image scaling". [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0001221.

9

Gatica, Perez Daniel. "Extensive operators in lattices of partitions for digital video analysis /". Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/5874.

10

Uehara, Takeyuki. "Contributions to image encryption and authentication". Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20040920.124409/index.html.

11

Dyer, Michael Ian (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). "Hardware Implementation Techniques for JPEG2000". Awarded by: University of New South Wales, Electrical Engineering and Telecommunications, 2007. http://handle.unsw.edu.au/1959.4/30510.

Abstract:
JPEG2000 is a recently standardized image compression system that provides substantial improvements over the existing JPEG compression scheme. This improvement in performance comes with an associated cost in increased implementation complexity, such that a purely software implementation is inefficient. This work identifies the arithmetic coder as a bottleneck in efficient hardware implementations, and explores various design options to improve arithmetic coder speed and size. The designs produced improve the critical path of existing arithmetic coder designs, and then extend coder throughput to 2 or more symbols per clock cycle. Subsequent work examines system-level implementation issues: the communication between hardware blocks is studied, and certain modes of operation are exploited to add flexibility to buffering solutions. It becomes possible to significantly reduce the amount of intermediate buffering between blocks while maintaining loose synchronization. Full hardware implementations of the standard are necessarily limited in the number of features they can offer, in order to constrain complexity and cost. To circumvent this, a hardware/software codesign is produced using the Altera NIOS II softcore processor. By keeping the majority of the standard implemented in software and using hardware to accelerate the time-consuming functions, generality of implementation is retained while implementation speed is improved. In addition, there is the opportunity to exploit parallelism by providing multiple identical hardware blocks to code multiple data units simultaneously.
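To see why the arithmetic coder is the bottleneck, here is a toy interval coder (pure Python, using exact `Fraction` arithmetic rather than the integer renormalization of JPEG2000's real MQ-coder): each symbol narrows an interval, and every step depends on the interval produced by the previous one, which is precisely the sequential dependence that multi-symbol-per-clock hardware designs must break.

```python
from fractions import Fraction

def ac_encode(bits, p0):
    """Narrow [low, low+width) once per bit; return a number inside it."""
    low, width = Fraction(0), Fraction(1)
    for b in bits:
        split = width * p0
        if b == 0:
            width = split            # keep the lower sub-interval
        else:
            low += split             # keep the upper sub-interval
            width -= split
    return low + width / 2           # any point in the final interval works

def ac_decode(tag, p0, n):
    """Replay the same interval subdivision to recover n bits."""
    low, width = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        split = width * p0
        if tag < low + split:
            out.append(0)
            width = split
        else:
            out.append(1)
            low += split
            width -= split
    return out

msg = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
tag = ac_encode(msg, Fraction(2, 3))
```

Because `low` and `width` carry state across every symbol, the decoder (and encoder) cannot trivially process symbols independently, motivating the critical-path and throughput work described above.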
12

Meng, Bojun. "Efficient intra prediction algorithm in H.264 /". View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20MENG.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 66-68). Also available in electronic version. Access restricted to campus users.
13

Frandina, Peter. "VHDL modeling and synthesis of the JPEG-XR inverse transform /". Online version of thesis, 2009. http://hdl.handle.net/1850/10755.

14

Lin, Li-Yang. "VLSI implementation for MPEG-1/Audio Layer III chip : bitstream processor - low power design /". [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18396.pdf.

15

Nolte, Ernst Hendrik. "Image compression quality measurement : a comparison of the performance of JPEG and fractal compression on satellite images". Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51796.

Abstract:
Thesis (MEng)--Stellenbosch University, 2000.
ENGLISH ABSTRACT: The purpose of this thesis is to investigate the nature of digital image compression and the measurement of the quality of compressed images. The work focuses on greyscale images in the domain of satellite images and aerial photographs. Two compression techniques are studied in detail, namely the JPEG and fractal compression methods. Implementations of both techniques are applied to a set of test images. The rest of the thesis is dedicated to measuring the loss of quality introduced by the compression. A general method for quality measurement (the Signal-to-Noise Ratio) is discussed, as well as a technique presented in the literature quite recently (Grey Block Distance). A new measure is then presented, followed by a means of comparing the performance of these measures. It was found that the new measure for image quality estimation performed marginally better than the SNR algorithm. Lastly, some possible improvements to this technique are mentioned and the validity of the method used for comparing the quality measures is discussed.
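The general quality measure mentioned above is computed directly from the pixel-wise error; a minimal sketch for 8-bit greyscale data (the peak-signal variant, PSNR, in plain Python):

```python
import math

def mse(original, compressed):
    """Mean squared error between two equally sized pixel lists."""
    return sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)

def psnr(original, compressed, peak=255):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    e = mse(original, compressed)
    if e == 0:
        return float("inf")          # identical images
    return 10 * math.log10(peak ** 2 / e)

ref = [52, 55, 61, 66, 70, 61, 64, 73]   # original pixels (toy example)
deg = [54, 55, 60, 66, 68, 61, 65, 73]   # after lossy compression
```

A global average like this is exactly what block-based alternatives such as Grey Block Distance try to improve on, since it ignores where in the image the error sits.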
16

Shao, Wenbin. "Automatic annotation of digital photos". Access electronically, 2007. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20080403.120857/index.html.

17

Muller, Rikus. "Applying the MDCT to image compression". Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/1197.

Abstract:
Thesis (DSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2009.
The replacement of the standard discrete cosine transform (DCT) of JPEG with the windowed modified DCT (MDCT) is investigated to determine whether improvements in numerical quality can be achieved. To this end, we employ an existing algorithm for optimal quantisation, for which we also propose improvements. This involves the modelling and prediction of quantisation tables to initialise the algorithm, a strategy that is also thoroughly tested. Furthermore, the effects of various window functions on the coding results are investigated, and we find that improved quality can indeed be achieved by modifying JPEG in this fashion.
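The windowed MDCT at the centre of this proposal can be sketched in a few lines. The block below (plain Python, illustrative only, not the thesis's implementation) uses the standard MDCT/IMDCT pair with a sine window and checks the time-domain alias cancellation property that makes 50%-overlapped blocks losslessly invertible:

```python
import math

def mdct(block, window):
    """MDCT of a 2N-sample windowed block -> N coefficients."""
    N = len(block) // 2
    return [sum(window[n] * block[n] *
                math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def imdct(coeffs, window):
    """Inverse MDCT: N coefficients -> 2N windowed, aliased samples."""
    N = len(coeffs)
    return [(2.0 / N) * window[n] *
            sum(coeffs[k] *
                math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for k in range(N))
            for n in range(2 * N)]

N = 4
# Sine window satisfies the Princen-Bradley condition w[n]^2 + w[n+N]^2 = 1.
window = [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]
x = [0.3, -1.2, 0.7, 0.1, 2.0, -0.5, 0.9, 1.4,
     -0.8, 0.2, 1.1, -1.7, 0.6, 0.0, -0.4, 1.3]

# Three 50%-overlapping blocks; overlap-add their inverse transforms.
y = [0.0] * len(x)
for start in (0, N, 2 * N):
    out = imdct(mdct(x[start:start + 2 * N], window), window)
    for n in range(2 * N):
        y[start + n] += out[n]

# Fully overlapped interior samples are reconstructed exactly (TDAC).
err = max(abs(x[i] - y[i]) for i in range(N, 3 * N))
```

In a JPEG-style codec the quantisation tables discussed above would act on the `mdct` coefficients of each overlapped block.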
18

Natu, Ambarish Shrikrishna (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). "Error resilience in JPEG2000". Awarded by: University of New South Wales, Electrical Engineering and Telecommunications, 2003. http://handle.unsw.edu.au/1959.4/18835.

Abstract:
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss, in such a way as to maximize the received image quality. This thesis addresses this problem and provides an investigation of a forward error correction (FEC) technique evaluated in the context of the emerging JPEG2000 standard. Relatively little effort in the JPEG2000 project was devoted to error resilience. The only standardized techniques are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy to the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
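The unequal-protection idea can be illustrated without a full Reed-Solomon implementation. In the toy below (pure Python; a rate-1/3 repetition code stands in for the RS codes used in the thesis), the most important quality layer is protected more strongly than the refinement layer, so the same channel error pattern leaves the base layer decodable:

```python
def protect(bits, repeat):
    """Simple repetition code: each bit is sent `repeat` times."""
    return [b for b in bits for _ in range(repeat)]

def recover(coded, repeat):
    """Majority vote over each group of `repeat` received bits."""
    return [int(sum(coded[i:i + repeat]) * 2 > repeat)
            for i in range(0, len(coded), repeat)]

base_layer = [1, 0, 1, 1, 0, 0, 1, 0]   # most important quality layer
enh_layer  = [0, 1, 1, 0, 1, 0, 0, 1]   # refinement layer, sent unprotected

channel = protect(base_layer, 3) + enh_layer
# Flip every 5th channel bit: a crude, deterministic stand-in for
# wireless bit errors.
received = [b ^ (1 if i % 5 == 0 else 0) for i, b in enumerate(channel)]

base_rx = recover(received[:len(base_layer) * 3], 3)
enh_rx = received[len(base_layer) * 3:]
```

With RS codes the redundancy is far cheaper for the same correction power, but the allocation principle, i.e. spending more parity on earlier layers, is the same.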
19

Choi, Kai-san. "Automatic source camera identification by lens aberration and JPEG compression statistics". Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B38902345.

20

Choi, Kai-san, and 蔡啟新. "Automatic source camera identification by lens aberration and JPEG compression statistics". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B38902345.

21

Muller, Rikus. "A study of image compression techniques, with specific focus on weighted finite automata". Thesis, Link to the online version, 2005. http://hdl.handle.net/10019/1128.

22

Shah, Syed Irtiza Ali. "Single camera based vision systems for ground and aerial robots". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37143.

Abstract:
Efficient and effective vision systems are proposed in this work for object detection by ground and aerial robots venturing into unknown environments with minimal vision aids, i.e., a single camera. The first problem attempted is that of object search and identification in a situation similar to a disaster site. Based on image analysis, typical pixel-based characteristics of a visual marker are established and searched for using a block-based search algorithm, along with a noise and interference filter. The proposed algorithm was successfully used in the International Aerial Robotics Competition 2009. The second problem deals with object detection for collision avoidance in 3D environments. It is shown that a 3D model of the scene can be generated from the 2D image information of a single camera flying through a very small arc of lateral flight around the object, without the need to capture images from all sides. Forward-flight simulations show that the depth extracted from forward motion is usable for a large part of the image. After analyzing various constraints associated with this and other existing approaches, motion estimation is proposed. Implementing motion estimation on videos from onboard cameras produced various undesirable and noisy vectors; an in-depth analysis of such vectors is presented, and solutions are proposed and implemented, demonstrating motion estimation suitable for the collision avoidance task.
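The block-based search in the first part of the abstract can be sketched simply (plain Python; the marker predicate and threshold here are invented for illustration, not the thesis's values): scan the frame in fixed-size blocks, count marker-like pixels per block, and keep the best block, which also rejects isolated noise pixels.

```python
def find_marker_block(image, block, is_marker, min_hits):
    """Return (row, col) of the block with most marker-like pixels,
    or None if no block reaches min_hits (noise rejection)."""
    h, w = len(image), len(image[0])
    best, best_hits = None, min_hits - 1
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            hits = sum(1 for i in range(r, r + block)
                         for j in range(c, c + block)
                         if is_marker(image[i][j]))
            if hits > best_hits:
                best, best_hits = (r, c), hits
    return best

# Toy 8x8 greyscale frame: a 4-pixel bright marker plus one noise pixel.
img = [[0] * 8 for _ in range(8)]
for i, j in [(4, 4), (4, 5), (5, 4), (5, 5)]:
    img[i][j] = 250
img[0][7] = 250                      # isolated bright noise pixel
loc = find_marker_block(img, 4, lambda p: p > 200, min_hits=2)
```

Requiring `min_hits` pixels per block is the simplest form of the noise and interference filtering mentioned above.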
23

Jamrozik, Michele Lynn. "Spatio-temporal segmentation in the compressed domain". Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/15681.

24

Chien, Chi-Hao. "A comparison study of the implementation of digital camera's RAW and JPEG and scanner's TIFF file formats, and color management procedures for inkjet textile printing applications /". Online version of thesis, 2009. http://hdl.handle.net/1850/10886.

25

Kailasanathan, Chandrapal. "Securing digital images". Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041026.150935/index.html.

26

Samuel, Sindhu. "Digital rights management (DRM) : watermark encoding scheme for JPEG images". Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-09122008-182920/.

27

Kang, James M. "A query engine of novelty in video streams /". Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/977.

28

Pevný, Tomáš. "Kernel methods in steganalysis". Diss., Online access via UMI, 2008.

29

Lucero, Aldo. "Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework". To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

30

Zitzmann, Cathel. "Détection statistique d'information cachée dans des images naturelles". Thesis, Troyes, 2013. http://www.theses.fr/2013TROY0012/document.

Abstract:
The need for secure communication is not new: since antiquity, methods have existed to conceal communication. Cryptography renders a message unintelligible by encrypting it, while steganography hides the very fact that a message is exchanged. This thesis is part of the "Hidden Information Research" project funded by the French National Research Agency; the Troyes University of Technology worked on the mathematical modeling of natural images and on building detectors of information hidden in digital pictures. The thesis studies steganalysis in natural images in terms of parametric statistical decision theory. For JPEG images, a detector based on modeling the quantized DCT coefficients is proposed, and the detector's error probabilities are established theoretically. In addition, a study of the average number of shrinkages occurring during embedding with the F3 and F4 algorithms is given. Finally, for uncompressed images, the proposed tests are optimal under certain constraints, one of the difficulties overcome being the quantized nature of the data.
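The "shrinkage" counted above comes from the F3 embedding rule: a message bit is carried in the LSB of a non-zero quantized DCT coefficient's magnitude, and on a mismatch the magnitude is decremented; when it reaches zero the receiver can no longer distinguish it from an unused coefficient, so the bit must be re-embedded. A toy sketch (plain Python, with invented coefficient values):

```python
def f3_embed(coeffs, message):
    """Embed bits in non-zero coefficients' magnitude LSBs (F3 rule)."""
    out, shrinkage, m = list(coeffs), 0, 0
    for i, c in enumerate(out):
        if m >= len(message) or c == 0:
            continue
        if abs(c) % 2 == message[m]:
            m += 1                      # LSB already matches the bit
        else:
            mag = abs(c) - 1            # decrement magnitude to flip the LSB
            out[i] = mag if c > 0 else -mag
            if mag == 0:
                shrinkage += 1          # coefficient vanished: re-embed bit
            else:
                m += 1
    return out, shrinkage

def f3_extract(coeffs, n):
    """Read magnitude LSBs of the first n non-zero coefficients."""
    return [abs(c) % 2 for c in coeffs if c != 0][:n]

cover = [3, 1, -2, 5, 1, -1, 4, 2]
msg = [0, 0, 1, 1, 0]
stego, shrunk = f3_embed(cover, msg)
```

The surplus of zero coefficients created by shrinkage is one of the statistical traces a parametric detector of the kind proposed here can model.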
31

Beltrão, Gabriel Tedgue. "Rápida predição da direção do bloco para aplicação com transformadas direcionais". [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260075.

Abstract:
Advisors: Yuzo Iano, Rangel Arthur
Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: DCT-based transforms are widely adopted for video compression. Recently, many authors have highlighted that prediction residuals usually have directional structures that cannot be efficiently represented by the conventional DCT. In this context, many directional transforms have been proposed as a way to overcome the DCT's deficiency in dealing with such structures. Although directional transforms outperform the conventional DCT, applying them in video compression requires evaluating the increase in coding time and implementation complexity. This work proposes a fast algorithm for estimating block directions before applying directional transforms. The encoder identifies the predominant direction in each block and applies only the transform corresponding to that direction. The algorithm can be used in conjunction with any directional-transform proposal that uses rate-distortion optimization (RDO) to select the direction to be explored, reducing implementation complexity to levels similar to those when only the conventional DCT is used.
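The fast pre-selection idea can be illustrated with a simple gradient test (plain Python; a crude stand-in for the dissertation's actual algorithm): classify the dominant edge orientation of a block from finite differences, then evaluate only the directional transform associated with that class instead of running RDO over all candidates.

```python
def dominant_direction(block):
    """Classify a block as having 'vertical', 'horizontal', or 'flat'
    structure by comparing summed horizontal vs vertical differences."""
    gx = sum(abs(row[j + 1] - row[j])
             for row in block for j in range(len(row) - 1))
    gy = sum(abs(block[i + 1][j] - block[i][j])
             for i in range(len(block) - 1) for j in range(len(block[0])))
    if gx == 0 and gy == 0:
        return "flat"
    return "vertical" if gx > gy else "horizontal"  # strong gx = vertical edges

# Vertical stripes: large horizontal differences, zero vertical ones.
stripes = [[255 if j % 2 else 0 for j in range(8)] for _ in range(8)]
```

Only the transform for the winning class would then be fed to the RDO loop, which is where the claimed complexity reduction comes from.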
Master's degree in Electrical Engineering (Telecommunications and Telematics)
32

Zhou, Zhi. "Standards conforming video coding optimization /". Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/5984.

33

Zhang, Kui. "Knowledge based image sequence compression". Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/843195/.

Abstract:
In this thesis, the most commonly encountered video compression techniques and international coding standards are studied. The study leads to the idea of a reconfigurable codec which can adapt itself to the specific requirements of diverse applications so as to achieve improved performance. Firstly, we propose a multiple-layer affine motion compensated codec which acts as a basic building block of the reconfigurable multiple-tool video codec. A detailed investigation of the properties of the proposed codec is carried out. The experimental results reveal that the gain in coding efficiency from improved motion prediction and segmentation is proportional to the spatial complexity of the sequence being encoded. Secondly, a framework for the reconfigurable multiple-tool video codec is developed and its key parts are discussed in detail. Two important concepts, the virtual codec and the virtual tool, are introduced. A prototype of the proposed codec is implemented, and its structure and constituent tools are extensively tested and evaluated to prove the concept. The results confirm that different applications require different codec configurations to achieve optimum performance. Thirdly, a knowledge-based tool selection system for the reconfigurable codec is proposed and developed. Human knowledge as well as sequence properties are taken into account in the tool selection procedure. It is shown that the proposed tool selection mechanism gives promising results. Finally, concluding remarks are offered and future research directions are suggested.
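Knowledge-based tool selection of this kind can be pictured as a small rule table mapping measured sequence properties to a codec configuration. The sketch below (plain Python; the property names, thresholds, and tools are invented for illustration, not the thesis's rules) shows the shape of such a system:

```python
def select_tools(props):
    """Map measured sequence properties to a codec configuration
    using simple hand-written rules (illustrative thresholds)."""
    config = {"motion_model": "translational", "segmentation": False}
    if props.get("motion_complexity", 0.0) > 0.6:
        config["motion_model"] = "affine"     # complex motion: richer model
    if props.get("spatial_complexity", 0.0) > 0.5:
        config["segmentation"] = True         # busy scenes: object layers
    return config

cfg = select_tools({"motion_complexity": 0.8, "spatial_complexity": 0.2})
```

The finding quoted above, that the affine/segmentation gain tracks spatial complexity, is exactly the kind of knowledge such rules encode.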
34

Araujo, André Filgueiras de. "Uma proposta de estimação de movimento para o codificador de vídeo Dirac". [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261689.

Abstract:
Advisor: Yuzo Iano
Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: The main purpose of this work is to design a new algorithm that makes motion estimation in the Dirac video codec more efficient. Motion estimation is a critical stage in video coding, in which most of the processing lies. The recently released Dirac codec is based on techniques different from those usually employed (as in MPEG-based codecs) and aims at efficiency comparable to the best current codecs (such as H.264/AVC). This work initially presents comparative studies of state-of-the-art motion estimation techniques and of the Dirac codec, which support the design of the proposed algorithm: Modified Hierarchical Enhanced Adaptive Rood Pattern Search (MHEARPS). It shows superior performance compared to other relevant algorithms in every analysed case, providing on average 79% fewer computations with similar video reconstruction quality.
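Motion estimation of this kind minimises a block-matching cost, typically the sum of absolute differences (SAD). A minimal exhaustive search is sketched below (plain Python; this is the brute-force baseline, not MHEARPS itself, which prunes the search hierarchically):

```python
def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between a current block and a
    displaced reference block."""
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(bs) for j in range(bs))

def full_search(cur, ref, bx, by, bs, sr):
    """Exhaustive search over a +/-sr window; returns best (dx, dy)."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            if not (0 <= by + dy and by + dy + bs <= h
                    and 0 <= bx + dx and bx + dx + bs <= w):
                continue                     # displaced block out of frame
            cost = sad(cur, ref, bx, by, dx, dy, bs)
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best

# Reference frame with a smooth ramp; current frame is it shifted
# down by 1 and right by 2.
ref = [[3 * i + 5 * j for j in range(16)] for i in range(16)]
cur = [[ref[i - 1][j - 2] if 1 <= i and 2 <= j else 0
        for j in range(16)] for i in range(16)]
mv = full_search(cur, ref, bx=4, by=4, bs=4, sr=3)
```

Fast algorithms such as rood-pattern searches evaluate only a handful of these candidate displacements, which is where the reported 79% reduction in computation comes from.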
Master's degree in Electrical Engineering (Telecommunications and Telematics)
35

Almeida, Junior Jurandy Gomes de 1983. "Recuperação de vídeos comprimidos por conteúdo". [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275706.

Abstract:
Advisor: Ricardo da Silva Torres
Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação
Abstract: Recent advances in technology have enabled the increased availability of video data, creating large digital video collections. This has spurred great interest in systems that are able to manage those data in an efficient way. Making efficient use of video information requires the development of powerful tools to extract high-level semantics from low-level features of the video content. Due to the complexity of the video material, there are five main challenges in designing such systems: (1) to divide the video stream into manageable segments according to its organization structure; (2) to implement algorithms for encoding the low-level features of each video segment into feature vectors; (3) to develop similarity measures for comparing these segments by using their feature vectors; (4) to quickly answer similarity queries over a huge amount of video sequences; and (5) to present the list of results in a user-friendly way. Numerous techniques have been proposed to support such requirements. Most existing works involve algorithms and methods which are computationally expensive, in terms of both time and space, limiting their application to the academic world and/or big companies. Contrary to this trend, the market has shown a growing demand for mobile and embedded devices. In this scenario, the development of techniques that are as effective as they are efficient is imperative in order to allow more people to have access to modern technologies. In this context, this work presents five novel approaches for the analysis, indexing, and retrieval of digital videos. All these contributions are combined to create a computationally fast system for content-based video management, which is able to achieve a quality level similar, or even superior, to that of current solutions.
Doctorate
Computer Science
Doctor of Computer Science
Style APA, Harvard, Vancouver, ISO itp.
36

Silva, Cauane Blumenberg. "Adaptive tiling algorithm based on highly correlated picture regions for the HEVC standard". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/96040.

Pełny tekst źródła
Streszczenie:
Esta dissertação de mestrado propõe um algoritmo adaptativo que é capaz de dinamicamente definir partições tile para quadros intra- e inter-preditos com o objetivo de reduzir o impacto na eficiência de codificação. Tiles são novas ferramentas orientadas ao paralelismo que integram o padrão de codificação de vídeos de alta eficiência (HEVC – High Efficiency Video Coding standard), as quais dividem o quadro em regiões retangulares independentes que podem ser processadas paralelamente. Para viabilizar o paralelismo, os tiles quebram as dependências de codificação através de suas bordas, gerando impactos na eficiência de codificação. Este impacto pode ser ainda maior caso os limites dos tiles dividam regiões altamente correlacionadas do quadro, porque a maior parte das ferramentas de codificação usam informações de contexto durante o processo de codificação. Assim, o algoritmo proposto agrupa as regiões do quadro que são altamente correlacionadas dentro de um mesmo tile para reduzir o impacto na eficiência de codificação que é inerente ao uso de tiles. Para localizar as regiões altamente correlacionadas do quadro de uma maneira inteligente, as características da imagem e também as informações de codificação são analisadas, gerando mapas de particionamento que servem como parâmetro de entrada para o algoritmo. Baseado nesses mapas, o algoritmo localiza as quebras naturais de contexto presentes nos quadros do vídeo e define os limites dos tiles nessas regiões. Dessa maneira, as quebras de dependência causadas pelas bordas dos tiles coincidem com as quebras de contexto naturais do quadro, minimizando as perdas na eficiência de codificação causadas pelo uso dos tiles. O algoritmo proposto é capaz de reduzir mais de 0.4% e mais de 0.5% o impacto na eficiência de codificação causado pelos tiles em quadros intra-preditos e inter-preditos, respectivamente, quando comparado com tiles uniformes.
This Master's thesis proposes an adaptive algorithm that is able to dynamically choose suitable tile partitions for intra- and inter-predicted frames in order to reduce the impact on coding efficiency arising from such partitioning. Tiles are novel parallelism-oriented tools in the High Efficiency Video Coding (HEVC) standard, which divide the frame into independent rectangular regions that can be processed in parallel. To enable this parallelism, tiles break the coding dependencies across their boundaries, leading to coding efficiency impacts. These impacts can be even higher if tile boundaries split highly correlated picture regions, because most of the coding tools use context information during the encoding process. Hence, the proposed algorithm clusters the highly correlated picture regions inside the same tile to reduce the inherent coding efficiency impact of using tiles. To wisely locate the highly correlated picture regions, image characteristics and encoding information are analyzed, generating partitioning maps that serve as the algorithm input. Based on these maps, the algorithm locates the natural context breaks of the picture and defines the tile boundaries on these key regions. This way, the dependency breaks caused by the tile boundaries match the natural context breaks of the picture, thus minimizing the coding efficiency losses caused by the use of tiles. The proposed adaptive tiling algorithm, in some cases, provides over 0.4% and over 0.5% of BD-rate savings for intra- and inter-predicted frames, respectively, when compared to uniformly spaced tiles, an approach which does not consider the picture context when defining the tile partitions.
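The boundary-placement idea in the abstract above can be illustrated with a minimal sketch: place a vertical tile boundary at the column seam with the strongest gradient energy, so the dependency break coincides with a natural context break. All names are hypothetical, the frame is grayscale, and only a single vertical boundary is chosen; the thesis itself derives partitioning maps from image characteristics and encoding information.

```python
import numpy as np

def vertical_tile_boundary(frame, margin=8):
    """Pick the column seam with the highest gradient energy, i.e. the
    natural context break between two correlated regions."""
    grad = np.abs(np.diff(frame.astype(np.int32), axis=1))  # |I[x+1] - I[x]|
    col_energy = grad.sum(axis=0)                           # energy per seam
    inner = col_energy[margin:-margin]                      # keep off the borders
    return margin + int(np.argmax(inner))

# Two flat regions separated by a sharp vertical edge at column 32:
frame = np.zeros((64, 64), dtype=np.uint8)
frame[:, 32:] = 200
```

Splitting at the returned seam leaves each tile homogeneous, so little context is lost across the boundary.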
Style APA, Harvard, Vancouver, ISO itp.
37

Nalluri, Purnachand. "A fast motion estimation algorithm and its VLSI architecture for high efficiency video coding". Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/15442.

Pełny tekst źródła
Streszczenie:
PhD in Electrical Engineering
Video coding has been used in applications like video surveillance, video conferencing, video streaming, video broadcasting and video storage. In a typical video coding standard, many algorithms are combined to compress a video; among them, motion estimation is the most complex task. Hence, it is necessary to implement this task in real time by using appropriate VLSI architectures. This thesis proposes a new fast motion estimation algorithm and its real-time implementation. The results show that the proposed algorithm and its motion estimation hardware architecture outperform the state of the art. The proposed architecture operates at a maximum frequency of 241.6 MHz and is able to process 1080p@60Hz video with all the variable block sizes specified in the HEVC standard, as well as with a motion vector search range of up to ±64 pixels.
A codificação de vídeo tem sido usada em aplicações tais como, vídeovigilância, vídeo-conferência, video streaming e armazenamento de vídeo. Numa norma de codificação de vídeo, diversos algoritmos são combinados para comprimir o vídeo. Contudo, um desses algoritmos, a estimação de movimento é a tarefa mais complexa. Por isso, é necessário implementar esta tarefa em tempo real usando arquiteturas de hardware apropriadas. Esta tese propõe um algoritmo de estimação de movimento rápido bem como a sua implementação em tempo real. Os resultados mostram que o algoritmo e a arquitetura de hardware propostos têm melhor desempenho que os existentes. A arquitetura proposta opera a uma frequência máxima de 241.6 MHz e é capaz de processar imagens de resolução 1080p@60Hz, com todos os tamanhos de blocos especificados na norma HEVC, bem como um domínio de pesquisa de vetores de movimento até ±64 pixels.
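As a baseline for the fast search discussed above, exhaustive block-matching motion estimation can be sketched as follows (hypothetical function names; the thesis's contribution is precisely to prune this full search):

```python
import numpy as np

def full_search_me(ref, cur, bx, by, bsize=8, srange=4):
    """Exhaustive block matching: return the motion vector (dy, dx)
    minimising the sum of absolute differences (SAD) between the
    current block at (by, bx) and candidate blocks in the reference."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = int(np.abs(block - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# A bright 8x8 square moves 2 pixels to the right between frames:
ref = np.zeros((32, 32), dtype=np.uint8); ref[8:16, 8:16] = 255
cur = np.zeros((32, 32), dtype=np.uint8); cur[8:16, 10:18] = 255
mv, sad = full_search_me(ref, cur, bx=10, by=8)
```

The cost of visiting every candidate grows quadratically with the search range, which is why fast algorithms and dedicated VLSI architectures matter at HEVC block sizes and ±64-pixel ranges.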
Style APA, Harvard, Vancouver, ISO itp.
38

"Object-based scalable wavelet image and video coding". Thesis, 2008. http://library.cuhk.edu.hk/record=b6074669.

Pełny tekst źródła
Streszczenie:
The first part of this thesis studies advanced wavelet transform techniques for scalable still-image object coding. In order to adapt to the content of a given signal and obtain a more flexible adaptive representation, two advanced wavelet transform techniques, the wavelet packet transform and the directional wavelet transform, are developed for object-based image coding. Extensive experiments demonstrate that the new wavelet image coding systems perform comparably to or better than the state of the art in image compression while possessing attractive features such as object-based coding functionality and high coding scalability.
The objective of this thesis is to develop an object-based coding framework built upon a family of wavelet coding techniques for a variety of arbitrarily shaped visual object scalable coding applications. Two kinds of arbitrarily shaped visual object scalable coding techniques are investigated in this thesis. One is object-based scalable wavelet still image coding; another is object-based scalable wavelet video coding.
The second part of this thesis investigates various components of object-based scalable wavelet video coding. A generalized 3-D object-based directional threading, which unifies the concepts of temporal motion threading and spatial directional threading, is seamlessly incorporated into 3-D shape-adaptive directional wavelet transform to exploit the spatio-temporal correlation inside the 3-D video object. To improve the computational efficiency of multi-resolution motion estimation (MRME) in shift-invariant wavelet domain, two fast MRME algorithms are proposed for wavelet-based scalable video coding. As demonstrated in the experiments, the proposed 3-D object-based wavelet video coding techniques consistently outperform MPEG-4 and other wavelet-based schemes for coding arbitrarily shaped video object, while providing full spatio-temporal-quality scalability with non-redundant 3-D subband decomposition.
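The subband decompositions underlying this work can be illustrated with a one-level 2-D Haar analysis, the simplest relative of the shape-adaptive and directional wavelet transforms discussed above (a minimal sketch with hypothetical names, not the thesis's transform):

```python
import numpy as np

def haar2d(img):
    """One level of a separable 2-D Haar decomposition into the four
    subbands LL (coarse), LH, HL (detail) and HH (diagonal detail)."""
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2   # horizontal average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2   # horizontal difference
    LL = (lo[0::2] + lo[1::2]) / 2
    HL = (lo[0::2] - lo[1::2]) / 2
    LH = (hi[0::2] + hi[1::2]) / 2
    HH = (hi[0::2] - hi[1::2]) / 2
    return LL, LH, HL, HH
```

A constant region produces energy only in LL, which is why smooth (highly correlated) objects compress well once the detail subbands are quantised away.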
Liu, Yu.
Adviser: King Ngi Ngan.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3693.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 166-173).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
Style APA, Harvard, Vancouver, ISO itp.
39

"Error-resilient coding tools in MPEG-4". 1998. http://library.cuhk.edu.hk/record=b5889563.

Pełny tekst źródła
Streszczenie:
by Cheng Shu Ling.
Thesis submitted in: July 1997.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 70-71).
Abstract also in Chinese.
Chapter Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Image Coding Standard: JPEG --- p.1
Chapter 1.2 --- Video Coding Standard: MPEG --- p.6
Chapter 1.2.1 --- MPEG history --- p.6
Chapter 1.2.2 --- MPEG video compression algorithm overview --- p.8
Chapter 1.2.3 --- More MPEG features --- p.10
Chapter 1.3 --- Summary --- p.17
Chapter Chapter 2 --- Error Resiliency --- p.18
Chapter 2.1 --- Introduction --- p.18
Chapter 2.2 --- Traditional approaches --- p.19
Chapter 2.2.1 --- Channel coding --- p.19
Chapter 2.2.2 --- ARQ --- p.20
Chapter 2.2.3 --- Multi-layer coding --- p.20
Chapter 2.2.4 --- Error Concealment --- p.20
Chapter 2.3 --- MPEG-4 work on error resilience --- p.21
Chapter 2.3.1 --- Resynchronization --- p.21
Chapter 2.3.2 --- Data Recovery --- p.25
Chapter 2.3.3 --- Error Concealment --- p.28
Chapter 2.4 --- Summary --- p.29
Chapter Chapter 3 --- Fixed length codes --- p.30
Chapter 3.1 --- Introduction --- p.30
Chapter 3.2 --- Tunstall code --- p.31
Chapter 3.3 --- Lempel-Ziv code --- p.34
Chapter 3.3.1 --- LZ-77 --- p.35
Chapter 3.3.2 --- LZ-78 --- p.36
Chapter 3.4 --- Simulation --- p.38
Chapter 3.4.1 --- Experiment Setup --- p.38
Chapter 3.4.2 --- Results --- p.39
Chapter 3.4.3 --- Concluding Remarks --- p.42
Chapter Chapter 4 --- Self-Synchronizable codes --- p.44
Chapter 4.1 --- Introduction --- p.44
Chapter 4.2 --- Scholtz synchronizable code --- p.45
Chapter 4.2.1 --- Definition --- p.45
Chapter 4.2.2 --- Construction procedure --- p.45
Chapter 4.2.3 --- Synchronizer --- p.48
Chapter 4.2.4 --- Effects of errors --- p.51
Chapter 4.3 --- Simulation --- p.52
Chapter 4.3.1 --- Experiment Setup --- p.52
Chapter 4.3.2 --- Results --- p.56
Chapter 4.4 --- Concluding Remarks --- p.68
Chapter Chapter 5 --- Conclusions --- p.69
References --- p.70
Style APA, Harvard, Vancouver, ISO itp.
40

"Model- and image-based scene representation". 1999. http://library.cuhk.edu.hk/record=b5889926.

Pełny tekst źródła
Streszczenie:
Lee Kam Sum.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1999.
Includes bibliographical references (leaves 97-101).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.2
Chapter 1.1 --- Video representation using panorama mosaic and 3D face model --- p.2
Chapter 1.2 --- Mosaic-based Video Representation --- p.3
Chapter 1.3 --- "3D Human Face modeling ," --- p.7
Chapter 2 --- Background --- p.13
Chapter 2.1 --- Video Representation using Mosaic Image --- p.13
Chapter 2.1.1 --- Traditional Video Compression --- p.17
Chapter 2.2 --- 3D Face model Reconstruction via Multiple Views --- p.19
Chapter 2.2.1 --- Shape from Silhouettes --- p.19
Chapter 2.2.2 --- Head and Face Model Reconstruction --- p.22
Chapter 2.2.3 --- Reconstruction using Generic Model --- p.24
Chapter 3 --- System Overview --- p.27
Chapter 3.1 --- Panoramic Video Coding Process --- p.27
Chapter 3.2 --- 3D Face model Reconstruction Process --- p.28
Chapter 4 --- Panoramic Video Representation --- p.32
Chapter 4.1 --- Mosaic Construction --- p.32
Chapter 4.1.1 --- Cylindrical Panorama Mosaic --- p.32
Chapter 4.1.2 --- Cylindrical Projection of Mosaic Image --- p.34
Chapter 4.2 --- Foreground Segmentation and Registration --- p.37
Chapter 4.2.1 --- Segmentation Using Panorama Mosaic --- p.37
Chapter 4.2.2 --- Determination of Background by Local Processing --- p.38
Chapter 4.2.3 --- Segmentation from Frame-Mosaic Comparison --- p.40
Chapter 4.3 --- Compression of the Foreground Regions --- p.44
Chapter 4.3.1 --- MPEG-1 Compression --- p.44
Chapter 4.3.2 --- MPEG Coding Method: I/P/B Frames --- p.45
Chapter 4.4 --- Video Stream Reconstruction --- p.48
Chapter 5 --- Three Dimensional Human Face modeling --- p.52
Chapter 5.1 --- Capturing Images for 3D Face modeling --- p.53
Chapter 5.2 --- Shape Estimation and Model Deformation --- p.55
Chapter 5.2.1 --- Head Shape Estimation and Model deformation --- p.55
Chapter 5.2.2 --- Face organs shaping and positioning --- p.58
Chapter 5.2.3 --- Reconstruction with both intrinsic and extrinsic parameters --- p.59
Chapter 5.2.4 --- Reconstruction with only Intrinsic Parameter --- p.63
Chapter 5.2.5 --- Essential Matrix --- p.65
Chapter 5.2.6 --- Estimation of Essential Matrix --- p.66
Chapter 5.2.7 --- Recovery of 3D Coordinates from Essential Matrix --- p.67
Chapter 5.3 --- Integration of Head Shape and Face Organs --- p.70
Chapter 5.4 --- Texture-Mapping --- p.71
Chapter 6 --- Experimental Result & Discussion --- p.74
Chapter 6.1 --- Panoramic Video Representation --- p.74
Chapter 6.1.1 --- Compression Improvement from Foreground Extraction --- p.76
Chapter 6.1.2 --- Video Compression Performance --- p.78
Chapter 6.1.3 --- Quality of Reconstructed Video Sequence --- p.80
Chapter 6.2 --- 3D Face model Reconstruction --- p.91
Chapter 7 --- Conclusion and Future Direction --- p.94
Bibliography --- p.101
Style APA, Harvard, Vancouver, ISO itp.
41

"The effects of evaluation and rotation on descriptors and similarity measures for a single class of image objects". Thesis, 2008. http://hdl.handle.net/10210/564.

Pełny tekst źródła
Streszczenie:
“A picture is worth a thousand words.” If this proverb were taken literally, we all know that every person interprets the content of an image or photo differently. This is due to the semantics contained in these images. Content-based image retrieval has become a vast area of research aimed at successfully describing and retrieving images according to their content. In military applications, intelligence images such as those obtained by the defence intelligence group are taken (mostly on film), developed and then manually annotated. These photos are then stored in a filing system according to certain attributes such as location, content, etc. Retrieving these images at a later stage might take days or even weeks. Thus, the need for a digital annotation system has arisen. The military images contain various military vehicles and buildings that need to be detected, described and stored in a database. For our research we want to look at the effects that the rotation and elevation angle of an object in an image have on retrieval performance. We chose model cars in order to be able to control the environment in which the photos were taken, such as the background, lighting, distance between the objects and the camera, etc. There is also a wide variety of shapes and colours of these models to obtain and work with. We look at the MPEG-7 descriptor schemes that are recommended by the MPEG group for video and image retrieval and implement three of them. For the military it could be required that, when the defence intelligence group is in the field, the images be transmitted directly via satellite to the headquarters. We have therefore included the JPEG2000 standard, which gives a compression performance increase of 20% over the original JPEG standard and is also capable of transmitting images wirelessly as well as securely.
In addition to the MPEG-7 descriptors, we have also implemented the fuzzy histogram and colour correlogram descriptors. For our experimentation we ran a series of experiments in order to determine the effects that rotation and elevation have on our model vehicle images. Observations are made when each vehicle is considered separately and when the vehicles are described and combined into a single database. After the experiments we look at the descriptors and determine which adjustments could be made in order to improve their retrieval performance.
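A global colour histogram, the simplest member of the descriptor families compared above, also illustrates why purely global descriptors are insensitive to in-plane rotation (hypothetical helper names, not the thesis code):

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Per-channel colour histogram, L1-normalised into one vector."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(img.shape[-1])]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def l1_distance(h1, h2):
    """City-block distance between two descriptors."""
    return float(np.abs(h1 - h2).sum())

img = np.arange(36, dtype=np.uint8).reshape(3, 4, 3)  # tiny RGB image
rotated = np.rot90(img)                               # in-plane rotation
```

Because a histogram discards spatial layout, a rotated copy of the same image yields an identical descriptor; structure-aware descriptors such as the correlogram behave differently, which is exactly what the rotation/elevation experiments probe.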
Dr. W.A. Clarke
Style APA, Harvard, Vancouver, ISO itp.
42

Amiri, Delaram. "Bilateral and adaptive loop filter implementations in 3D-high efficiency video coding standard". Thesis, 2015. http://hdl.handle.net/1805/7983.

Pełny tekst źródła
Streszczenie:
Indiana University-Purdue University Indianapolis (IUPUI)
In this thesis, we describe a different implementation of the in-loop filtering method for 3D-HEVC. First, we propose the use of the adaptive loop filtering (ALF) technique for 3D-HEVC standard in-loop filtering. This filter uses a Wiener-based method to minimize the mean squared error between the filtered and original pixels. The performance of the adaptive loop filter at the picture level is evaluated. Results show up to 0.2 dB PSNR improvement in the luminance component for the texture and 2.1 dB for the depth. In addition, we obtain up to 0.1 dB improvement in the chrominance component for the texture view after applying this filter at the picture level. Moreover, a design of an in-loop filter based on the fast bilateral filter is proposed for the 3D-HEVC standard. The bilateral filter smooths an image while preserving strong edges, and it can remove artifacts in an image. The performance of the bilateral filter at the picture level for 3D-HEVC is evaluated. Test model HTM-6.2 is used to demonstrate the results. Results show up to a 20 percent reduction in the processing time of 3D-HEVC, with little effect on the PSNR of the encoded 3D video, using the fast bilateral filter.
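The edge-preserving behaviour of the bilateral filter described above can be sketched in brute-force form (illustrative only, with assumed parameter names; the thesis uses a fast bilateral variant inside the 3D-HEVC loop):

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Brute-force bilateral filter: weights fall off with both spatial
    distance (sigma_s) and intensity difference (sigma_r), so flat
    areas are smoothed while strong edges survive."""
    img = img.astype(np.float64)
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

Across a 200-level step edge the range kernel suppresses the far side almost entirely, so the edge stays sharp, while a small isolated bump in a flat area is averaged away.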
Style APA, Harvard, Vancouver, ISO itp.
43

"Novel error resilient techniques for the robust transport of MPEG-4 video over error-prone networks". 2004. http://library.cuhk.edu.hk/record=b6073624.

Pełny tekst źródła
Streszczenie:
Bo Yan.
"May 2004."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (p. 117-131).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
Style APA, Harvard, Vancouver, ISO itp.
44

"Multiplexing video traffic using frame-skipping aggregation technique". 1998. http://library.cuhk.edu.hk/record=b5889564.

Pełny tekst źródła
Streszczenie:
by Alan Yeung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 53-[56]).
Abstract also in Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- MPEG Overview --- p.5
Chapter 3 --- Framework of Frame-Skipping Lossy Aggregation --- p.10
Chapter 3.1 --- Video Frames Delivery using Round-Robin Scheduling --- p.10
Chapter 3.2 --- Underflow Safety Margin on Receiver Buffers --- p.12
Chapter 3.3 --- Algorithm in Frame-Skipping Aggregation Controller --- p.13
Chapter 4 --- Replacement of Skipped Frames in MPEG Sequence --- p.17
Chapter 5 --- Subjective Assessment Test on Frame-Skipped Video --- p.21
Chapter 5.1 --- Test Settings and Material --- p.22
Chapter 5.2 --- Choice of Test Methods --- p.23
Chapter 5.3 --- Test Procedures --- p.25
Chapter 5.4 --- Test Results --- p.26
Chapter 6 --- Performance Study --- p.29
Chapter 6.1 --- Experiment 1: Number of Supportable Streams --- p.31
Chapter 6.2 --- Experiment 2: Frame-Skipping Rate When Multiplexing on a Leased T3 Link --- p.33
Chapter 6.3 --- Experiment 3: Bandwidth Usage --- p.35
Chapter 6.4 --- Experiment 4: Optimal USMT --- p.38
Chapter 7 --- Implementation Considerations --- p.41
Chapter 8 --- Conclusions --- p.45
Chapter A --- The Construction of Stuffed Artificial B Frame --- p.48
Bibliography --- p.53
Style APA, Harvard, Vancouver, ISO itp.
45

Sevcenco, Ana-Maria. "Adaptive strategies and optimization techniques for JPEG-based low bit-rate image coding". Thesis, 2007. http://hdl.handle.net/1828/2282.

Pełny tekst źródła
Streszczenie:
The field of digital image compression has been intensively explored to obtain ever-improved performance for a given bit budget. The DCT-based JPEG standard remains one of the most popular image compression standards due to its reasonable coding performance, fast implementations, friendly low-cost architecture, and flexibility and adaptivity at the block level. In this thesis, we consider the problem of low bit-rate image coding and present new approaches using adaptive strategies and optimization techniques for performance enhancement, while employing the DCT block-based JPEG standard as the main framework with several pre- and post-processing steps. We propose an adaptive coding approach which involves a variable quality factor in the quantization step of JPEG compression to make the compression more flexible with respect to bit budget requirements. We also propose an adaptive sampling approach based on a variable down-/up-scaling rate and local image characteristics. In addition, we study an adaptive filtering approach in which the optimal filter coefficients are determined by making use of optimization methods and symmetric extension techniques. Simulation results are presented to demonstrate the effectiveness of the proposed techniques relative to recent works in the field of low bit-rate image coding.
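The variable quality factor builds on the standard way a single quality knob rescales the JPEG quantization table. A sketch using the common IJG scaling rule and the baseline luminance table from Annex K of the JPEG standard follows; the thesis's adaptive scheme varies this factor to meet a bit budget, which the fixed rule below only hints at.

```python
import numpy as np

# Baseline JPEG luminance quantisation table (Annex K of the standard).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=np.int64)

def scaled_table(quality):
    """IJG scaling rule: map quality 1..100 to a rescaled table.
    Lower quality -> larger quantisation steps -> fewer bits."""
    quality = max(1, min(100, quality))
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    t = (Q50 * s + 50) // 100
    return np.clip(t, 1, 255)
```

Quality 50 reproduces the baseline table; lower values inflate every step, so varying the factor per image (or per block) directly trades rate for distortion.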
Style APA, Harvard, Vancouver, ISO itp.
46

Natu, Ambarish Shrikrishna. "Error resilience in JPEG2000 /". 2003. http://www.library.unsw.edu.au/~thesis/adt-NUN/public/adt-NUN20030519.163058/index.html.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Kesireddy, Akitha. "A new adaptive trilateral filter for in-loop filtering". Thesis, 2014. http://hdl.handle.net/1805/5927.

Pełny tekst źródła
Streszczenie:
Indiana University-Purdue University Indianapolis (IUPUI)
HEVC has achieved significant coding efficiency improvement beyond existing video coding standards by employing many new coding tools. The Deblocking Filter, Sample Adaptive Offset and Adaptive Loop Filter are currently introduced for in-loop filtering in the HEVC standardization. However, these filters are implemented in the spatial domain despite the temporal correlation within video sequences. To reduce the artifacts and better align object boundaries in video, a new in-loop filtering algorithm is proposed. The proposed algorithm is implemented in the HM-11.0 software. It allows an average bitrate reduction of about 0.7% and improves the PSNR of the decoded frame by 0.05%, 0.30% and 0.35% in the luminance and the two chrominance components, respectively.
Style APA, Harvard, Vancouver, ISO itp.
48

"Arbitrary block-size transform video coding". Thesis, 2011. http://library.cuhk.edu.hk/record=b6075117.

Pełny tekst źródła
Streszczenie:
Besides ABT with higher-order transforms, a transform-based template matching method is also investigated. A fast template matching method, called Fast Walsh Search, is developed. This search method has similar accuracy to exhaustive search but a significantly lower computation requirement.
In this thesis, the development of simple but efficient order-16 transforms is shown. Analysis and comparison with existing order-16 transforms have been carried out. The proposed order-16 transforms were individually integrated into the existing coding standard reference software so as to achieve a new ABT system. In the proposed ABT system, order-4, order-8 and order-16 transforms coexist. The selection of the most appropriate transform is based on the rate-distortion performance of these transforms. A remarkable improvement in coding performance is shown in the experimental results. A significant bit rate reduction can be achieved with our proposed ABT system while both subjective and objective qualities remain unchanged.
Prior knowledge of the coefficient distribution is key to achieving better coding performance. This is very useful in many areas of coding, such as rate control, rate-distortion optimization, etc. It is also shown that the coefficient distribution of the predicted residue is closer to a Cauchy distribution than to the traditionally expected Laplace distribution. This can effectively improve the existing processing techniques.
Three kinds of order-16 orthogonal DCT-like integer transforms are proposed in this thesis. The first one is the simple integer transform, which is expanded from the existing order-8 ICT. The second one is the hybrid integer transform derived from the Dyadic Weighted Walsh Transform (DWWT). It is shown to perform better than the simple integer transform. The last one is a recursive transform: an order-2N transform can be derived from the order-N one. It is very close to the DCT. This recursive transform can be implemented in two different ways, denoted as LLMICT and CSFICT. They have excellent coding performance. These proposed transforms are investigated and implemented into the reference software of H.264 and AVS. They are also compared with other order-16 orthogonal integer transforms. Experimental results show that the proposed transforms give excellent coding performance and are easy to compute.
Transform is a very important coding tool in video coding. It decorrelates the pixel data and removes the redundancy among pixels so as to achieve compression. Traditionally, the order-8 transform is used in video and image coding. The latest video coding standards, such as H.264/AVC, adopt both order-4 and order-8 transforms. The adaptive use of more than one transform of different sizes is known as Arbitrary Block-size Transform (ABT). Transforms other than order-4 and order-8 can also be used in ABT. It is expected that larger transform sizes, such as order-16, will benefit more in video sequences with higher resolutions, such as 720p and 1080p sequences. As a result, the order-16 transform is introduced into the ABT system.
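The recursive order-2N-from-order-N construction can be illustrated with the Walsh-Hadamard matrix, which shares the integer-entry and orthogonality properties of the proposed transforms (a stand-in sketch; the actual LLMICT/CSFICT matrices differ):

```python
import numpy as np

def hadamard(n):
    """Order-n Walsh-Hadamard matrix built recursively: the order-2N
    matrix is assembled from two copies of the order-N one, the same
    construction principle as the recursive transform above."""
    if n == 1:
        return np.array([[1]], dtype=np.int64)
    h = hadamard(n // 2)
    return np.block([[h, h], [h, -h]])

H16 = hadamard(16)  # integer entries, orthogonal up to a factor of 16
```

Orthogonality up to a scale factor is what lets an integer transform be inverted exactly inside a codec: the scaling can be folded into quantisation.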
Fong, Chi Keung.
Adviser: Wai Kuen Cham.
Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references.
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Style APA, Harvard, Vancouver, ISO itp.
49

Ya-Wen Chao i 趙雅雯. "Improved Image Resampling Filters for Spatial Scalable Video Coding Standards". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/86307557163892127402.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (MS/PhD Program)
98
This thesis proposes a downsampling filter and an upsampling filter for spatial scalable video coding. The bilateral filter and adaptive filter length concepts are used in the downsampling filter to reduce the loss of edge information in images. By smoothing the homogeneous areas and preserving the details in the non-homogeneous areas of images, the coding bits are reduced in the base layer coding. At the same time, the edge-preserving property in the base layer also provides a better prediction to save coding bits in the enhancement layer. For the upsampling filter, the direction information of an image is used. The local gradient determines the edges of an image. The missing pixels on the edges are obtained by performing directional interpolation. Experimental results show that, for the proposed downsampling filter, a 1.5% bit-rate reduction is achieved in the enhancement layer while decreasing bit-rates by about 20% on average in the base layer. For the proposed directional upsampling filter, the PSNR improvement and bit-rate reduction are 0.01dB~0.26dB and 0.2%~16.3%, respectively.
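The directional interpolation idea can be sketched for one missing row: each pixel is filled along whichever of three directions (vertical, 45-degree, 135-degree) shows the smallest mismatch between the known rows above and below (a hypothetical helper, far simpler than the proposed gradient-based filter):

```python
import numpy as np

def directional_upsample_row(above, below):
    """Interpolate a missing row between two known rows, choosing per
    pixel the direction whose endpoint values agree best."""
    a, b = above.astype(np.float64), below.astype(np.float64)
    n = a.size
    out = np.empty(n)
    for j in range(n):
        # candidate (mismatch, interpolated value) pairs per direction
        cands = [(abs(a[j] - b[j]), (a[j] + b[j]) / 2.0)]                  # vertical
        if 0 < j < n - 1:
            cands.append((abs(a[j - 1] - b[j + 1]), (a[j - 1] + b[j + 1]) / 2.0))  # 135 deg
            cands.append((abs(a[j + 1] - b[j - 1]), (a[j + 1] + b[j - 1]) / 2.0))  # 45 deg
        out[j] = min(cands)[1]  # follow the direction with the best match
    return out

# A diagonal edge: plain vertical averaging would blur it to 50s,
# while the directional choice keeps it sharp.
above = np.array([0, 0, 0, 100.0])
below = np.array([0, 100, 100, 100.0])
row = directional_upsample_row(above, below)
```
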
Style APA, Harvard, Vancouver, ISO itp.
50

"Analysis, coding, and processing for high-definition videos". Thesis, 2010. http://library.cuhk.edu.hk/record=b6074847.

Pełny tekst źródła
Streszczenie:
Firstly, the characteristics of HD videos are studied quantitatively. The results show that HD videos are distinguished from lower-resolution videos by higher spatial correlation and a special power spectral density (PSD), mainly distributed along the vertical and horizontal directions.
Secondly, two techniques for HD video coding are developed based on the aforementioned analysis results. To exploit the spatial property, 2D order-16 transforms are proposed to code the more highly correlated signals more efficiently. Specifically, two series of 2D order-16 integer transforms, named the modified integer cosine transform (MICT) and the non-orthogonal integer cosine transform (NICT), are studied and developed to provide different trade-offs between performance and complexity. Based on the special PSD property, a parametric interpolation filter (PIF) is proposed for motion-compensated prediction (MCP). Not only can PIF track the non-stationary statistics of video signals, as the related work shows, but it also represents interpolation filters by parameters instead of individual coefficients, thus resolving the conflict between the accuracy of the coefficients and the size of the side information. The experimental results show that the two proposed coding techniques significantly outperform their equivalents in the state-of-the-art international video coding standards.
Thirdly, interlaced HD videos are studied and, to satisfy different delay constraints, two real-time de-interlacing algorithms are proposed specifically for H.264-coded videos. They adapt to local activities according to the syntax element (SE) values. Accuracy analysis is also introduced to deal with the disparity between the SE values and the real motions and textures. The de-interlacers provide better visual quality than the commonly used ones and can de-interlace 1080i sequences in real time on PCs.
Today, High-Definition (HD) videos are becoming more and more popular, with many applications. This thesis analyzes the characteristics of HD videos and develops appropriate coding and processing techniques for hybrid video coding accordingly.
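The higher spatial correlation mentioned above is straightforward to measure; a sketch of the horizontal lag-1 correlation coefficient follows (an illustrative helper, not the thesis's analysis code):

```python
import numpy as np

def lag1_spatial_correlation(frame):
    """Correlation coefficient between horizontally adjacent pixels."""
    a = frame.astype(np.float64)
    x = a[:, :-1].ravel()
    y = a[:, 1:].ravel()
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

smooth = np.tile(np.arange(16.0), (8, 1))           # slowly varying rows
checker = np.tile(np.array([0.0, 255.0]), (8, 8))   # alternating pattern
```

Smooth content scores near +1 while busy alternating content scores negative; the higher this statistic at HD resolutions, the more a larger (order-16) transform can decorrelate per block.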
Dong, Jie.
Adviser: King Ngi Ngan.
Source: Dissertation Abstracts International, Volume: 72-01, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 153-158).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Style APA, Harvard, Vancouver, ISO itp.
Oferujemy zniżki na wszystkie plany premium dla autorów, których prace zostały uwzględnione w tematycznych zestawieniach literatury. Skontaktuj się z nami, aby uzyskać unikalny kod promocyjny!

Do bibliografii