Theses on the topic "JPEG (Image coding standard)"

Below are the top 50 dissertations (graduate and doctoral theses) on the research topic "JPEG (Image coding standard)".


1

Yeung, Yick Ming. "Fast rate control for JPEG2000 image coding /". View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20YEUNG.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 63-65). Also available in electronic version. Access restricted to campus users.
2

Giakoumakis, Michail D. "Refinements in a DCT based non-uniform embedding watermarking scheme". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FGiakoumakis.pdf.

Full text
Abstract:
Thesis (M.S. in Applied Math and M.S. in Systems Engineering)--Naval Postgraduate School, March 2003.
Thesis advisor(s): Roberto Cristi, Ron Pieper, Craig Rasmussen. Includes bibliographical references (p. 119-121). Also available online.
3

Kamaras, Konstantinos. "JPEG2000 image compression and error resilience for transmission over wireless channels". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://sirsi.nps.navy.mil/uhtbin/hyperion-image/02Mar%5FKamaras.pdf.

Full text
4

Thorpe, Christopher. "Compression aided feature based steganalysis of perturbed quantization steganography in JPEG images". Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 114 p, 2008. http://proquest.umi.com/pqdweb?did=1459914021&sid=6&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
5

Frandina, Peter. "VHDL modeling and synthesis of the JPEG-XR inverse transform /". Online version of thesis, 2009. http://hdl.handle.net/1850/10755.

Full text
6

Gupta, Amit Kumar, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Hardware optimization of JPEG2000". Awarded by: University of New South Wales, School of Electrical Engineering and Telecommunications, 2006. http://handle.unsw.edu.au/1959.4/30581.

Full text
Abstract:
The key algorithms of JPEG2000, the new image compression standard, have high computational complexity and thus present challenges for efficient implementation. This has led to research on hardware optimization of JPEG2000 for its efficient realization. Fortunately, the growth in microelectronics allows us to realize dedicated ASIC solutions as well as hardware/software FPGA-based solutions for complex algorithms such as JPEG2000. But an efficient implementation within hard constraints of area and throughput demands investigation of the key dependencies within the JPEG2000 system. This work presents algorithms and VLSI architectures to realize a high-performance JPEG2000 compression system. The embedded block coding algorithm, which lies at the heart of a JPEG2000 compression system, is a main contributor to its complexity. This work first concentrates on algorithms to realize a low-cost, high-throughput Block Coder (BC) system. For this purpose, a Bit Plane Coder architecture capable of concurrent symbol processing is presented. Further, an optimal two-sub-bank memory and an efficient buffer architecture are designed to keep the hardware cost low. The proposed overall BC system presents the highest figure of merit (FOM) in terms of throughput versus hardware cost in comparison to existing BC solutions. This work also investigates the challenges involved in the efficient integration of the BC with the overall JPEG2000 system. A novel low-cost distortion estimation approach with near-optimal performance is proposed, which is necessary for accurate rate-control performance of JPEG2000. Additionally, low-bandwidth data storage and transfer techniques are proposed for efficient transfer of subband samples to the BC. Simulation results show that the proposed techniques require approximately four times less bandwidth than existing architectures. In addition, an efficient high-throughput block decoder architecture based on the proposed selective sample-skipping algorithm is presented. The proposed architectures are designed and analyzed on both ASIC and FPGA platforms. Thus, the proposed algorithms, architectures and BC integration strategies are useful for realizing a dedicated ASIC JPEG2000 system as well as a hardware/software FPGA-based JPEG2000 solution. Overall, this work presents algorithms and architectures to realize a high-performance JPEG2000 system without imposing any restrictions in terms of coding modes or block size for the BC system.
7

Dyer, Michael Ian, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Hardware Implementation Techniques for JPEG2000". Awarded by: University of New South Wales, Electrical Engineering and Telecommunications, 2007. http://handle.unsw.edu.au/1959.4/30510.

Full text
Abstract:
JPEG2000 is a recently standardized image compression system that provides substantial improvements over the existing JPEG compression scheme. This improvement in performance comes with an associated cost in increased implementation complexity, such that a purely software implementation is inefficient. This work identifies the arithmetic coder as a bottleneck in efficient hardware implementations, and explores various design options to improve arithmetic coder speed and size. The designs produced improve the critical path of existing arithmetic coder designs, and then extend the coder throughput to two or more symbols per clock cycle. Subsequent work examines system-level implementation issues. It examines the communication between hardware blocks and utilizes certain modes of operation to add flexibility to buffering solutions. It becomes possible to significantly reduce the amount of intermediate buffering between blocks, whilst maintaining loose synchronization. Full hardware implementations of the standard are necessarily limited in the number of features they can offer, in order to constrain complexity and cost. To circumvent this, a hardware/software codesign is produced using the Altera NIOS II softcore processor. By keeping the majority of the standard implemented in software and using hardware to accelerate the time-consuming functions, generality of implementation is retained whilst implementation speed is improved. In addition, there is the opportunity to exploit parallelism by providing multiple identical hardware blocks to code multiple data units simultaneously.
8

Choi, Kai-san. "Automatic source camera identification by lens aberration and JPEG compression statistics". Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B38902345.

Full text
9

Choi, Kai-san, and 蔡啟新. "Automatic source camera identification by lens aberration and JPEG compression statistics". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B38902345.

Full text
10

Uehara, Takeyuki. "Contributions to image encryption and authentication". Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20040920.124409/index.html.

Full text
11

Nolte, Ernst Hendrik. "Image compression quality measurement : a comparison of the performance of JPEG and fractal compression on satellite images". Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51796.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2000.
The purpose of this thesis is to investigate the nature of digital image compression and the calculation of the quality of the compressed images. The work is focused on greyscale images in the domain of satellite images and aerial photographs. Two compression techniques are studied in detail, namely the JPEG and fractal compression methods. Implementations of both these techniques are then applied to a set of test images. The rest of this thesis is dedicated to investigating the measurement of the loss of quality introduced by the compression. A general method for quality measurement (Signal-to-Noise Ratio) is discussed, as well as a technique presented in the literature quite recently (Grey Block Distance). Hereafter, a new measure is presented, followed by a means of comparing the performance of these measures. It was found that the new measure for image quality estimation performed marginally better than the SNR algorithm. Lastly, some possible improvements to this technique are mentioned and the validity of the method used for comparing the quality measures is discussed.
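For orientation (standard background, not material from the thesis itself), the signal-to-noise ratio against which the new measure is compared is commonly defined, for an M x N greyscale image f and its decompressed version \hat{f}, as

\mathrm{SNR} = 10 \log_{10} \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} f(x,y)^{2}}{\sum_{x=1}^{M} \sum_{y=1}^{N} \left( f(x,y) - \hat{f}(x,y) \right)^{2}} \ \mathrm{dB},

where larger values indicate less compression-induced distortion.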
12

Samuel, Sindhu. "Digital rights management (DRM) : watermark encoding scheme for JPEG images". Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-09122008-182920/.

Full text
13

Muller, Rikus. "A study of image compression techniques, with specific focus on weighted finite automata". Thesis, Link to the online version, 2005. http://hdl.handle.net/10019/1128.

Full text
14

Natu, Ambarish Shrikrishna, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Error resilience in JPEG2000". Awarded by: University of New South Wales, Electrical Engineering and Telecommunications, 2003. http://handle.unsw.edu.au/1959.4/18835.

Full text
Abstract:
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss, in such a way as to maximize the received image quality. This thesis addresses this problem and provides an investigation of a forward error correction (FEC) technique that has been evaluated in the context of the emerging JPEG2000 standard. Not much effort has been made in the JPEG2000 project regarding error resilience. The only standardized techniques are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy to the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve the image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
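To illustrate the unequal protection idea in the abstract above, the following sketch assigns stronger Reed-Solomon codes to the more important JPEG2000 quality layers. It is a minimal illustration under assumed parameters; the layer names and RS(n, k) choices are invented, not values from the thesis.

RS_CODES = {
    "layer0_base":        (255, 191),  # strongest protection: most parity symbols
    "layer1_enhancement": (255, 223),
    "layer2_enhancement": (255, 239),  # weakest protection: least parity
}

def overhead(n: int, k: int) -> float:
    """Fraction of transmitted symbols spent on RS parity."""
    return (n - k) / n

for layer, (n, k) in RS_CODES.items():
    print(f"{layer}: RS({n},{k}), code rate {k / n:.3f}, redundancy {overhead(n, k):.1%}")

In a real system the parity symbols would be produced by an actual RS encoder; the point here is only that the base layer, on which every reconstruction depends, receives the largest share of redundancy.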
15

Muller, Rikus. "Applying the MDCT to image compression". Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/1197.

Full text
Abstract:
Thesis (DSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2009.
The replacement of the standard discrete cosine transform (DCT) of JPEG with the windowed modified DCT (MDCT) is investigated to determine whether improvements in numerical quality can be achieved. To this end, we employ an existing algorithm for optimal quantisation, for which we also propose improvements. This involves the modelling and prediction of quantisation tables to initialise the algorithm, a strategy that is also thoroughly tested. Furthermore, the effects of various window functions on the coding results are investigated, and we find that improved quality can indeed be achieved by modifying JPEG in this fashion.
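A minimal NumPy sketch of the windowed MDCT at the centre of this investigation (the sine window shown satisfies the Princen-Bradley condition; the thesis's actual window functions and quantisation strategy are not reproduced here):

import numpy as np

def mdct(x):
    """Windowed MDCT: 2N input samples -> N coefficients."""
    two_n = x.size
    N = two_n // 2
    n = np.arange(two_n)
    w = np.sin(np.pi / two_n * (n + 0.5))              # sine window
    k = np.arange(N).reshape(-1, 1)
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return basis @ (w * x)

print(mdct(np.linspace(0.0, 1.0, 16)))                 # 8 coefficients from 16 samples

Because 50%-overlapped windows satisfying this condition give time-domain alias cancellation, adjacent inverse-transformed blocks add back to the original samples, which is what makes the MDCT a plausible drop-in replacement for JPEG's block DCT.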
16

Lucero, Aldo. "Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework". To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
17

Zitzmann, Cathel. "Détection statistique d'information cachée dans des images naturelles". Thesis, Troyes, 2013. http://www.theses.fr/2013TROY0012/document.

Full text
Abstract:
The need for secure communication is not new: methods to conceal communication have existed since antiquity. Cryptography renders a message unintelligible by encrypting it, while steganography hides the very fact that a message is being exchanged. This thesis is part of the "Hidden Information Research" project funded by the French National Research Agency; the Troyes University of Technology worked on the mathematical modelling of natural images and on building detectors of information hidden in digital pictures. The thesis studies steganalysis in natural images from the standpoint of parametric statistical decision theory. For JPEG images, a detector based on modelling the quantized DCT coefficients is proposed, and the detector's error probabilities are established theoretically. In addition, a study of the average number of shrinkages occurring during embedding with the F3 and F4 algorithms is presented. Finally, for uncompressed images, the proposed tests are optimal under certain constraints, one of the difficulties overcome being the quantized nature of the data.
18

Chien, Chi-Hao. "A comparison study of the implementation of digital camera's RAW and JPEG and scanner's TIFF file formats, and color management procedures for inkjet textile printing applications /". Online version of thesis, 2009. http://hdl.handle.net/1850/10886.

Full text
19

Kang, James M. "A query engine of novelty in video streams /". Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/977.

Full text
20

Pevný, Tomáš. "Kernel methods in steganalysis". Diss., Online access via UMI, 2008.

Search for full text
21

Kailasanathan, Chandrapal. "Securing digital images". Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041026.150935/index.html.

Full text
22

Nguyen, Anthony Ngoc. "Importance Prioritised Image Coding in JPEG 2000". Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16005/1/Anthony_Nguyen_Thesis.pdf.

Full text
Abstract:
Importance prioritised coding is a principle aimed at improving the interpretability (or image content recognition) versus bit-rate performance of image coding systems. This can be achieved by (1) detecting and tracking image content or regions of interest (ROIs) that are crucial to the interpretation of an image, and (2) compressing them in such a manner that enables ROIs to be encoded with higher fidelity and prioritised for dissemination or transmission. Traditional image coding systems prioritise image data according to an objective measure of distortion, and this measure does not correlate well with image quality or interpretability. Importance prioritised coding, on the other hand, aims to prioritise image contents according to an 'importance map', which provides a means for modelling and quantifying the relative importance of parts of an image. In such a coding scheme, the importance of parts of an image containing ROIs would be higher than that of other parts of the image. The encoding and prioritisation of ROIs means that the interpretability in these regions would be improved at low bit-rates. An importance prioritised image coder incorporated within the JPEG 2000 international standard for image coding, called IMP-J2K, is proposed to encode and prioritise ROIs according to an 'importance map'. The map can be automatically generated using image processing algorithms that result in a limited number of ROIs, or manually constructed by hand-marking ROIs using a priori knowledge. The proposed importance prioritised coder provides a user of the encoder with great flexibility in defining single or multiple ROIs with arbitrary degrees of importance and prioritising them using IMP-J2K. Furthermore, IMP-J2K codestreams can be reconstructed by generic JPEG 2000 decoders, which is important for interoperability between imaging systems and processes. The interpretability performance of IMP-J2K was quantitatively assessed using the subjective National Imagery Interpretability Rating Scale (NIIRS). The effect of importance prioritisation on image interpretability was investigated, and a methodology to relate the NIIRS ratings, ROI importance scores and bit-rates was proposed to facilitate NIIRS specifications for importance prioritised coding. In addition, a technique is proposed to construct an importance map by allowing a user of the encoder to use gaze patterns to automatically determine and assign importance to fixated regions (or ROIs) in an image. The importance map can be used by IMP-J2K to bias the encoding of the image to these ROIs, and subsequently to allow a user at the receiver to reconstruct the image as desired by the user of the encoder. Ultimately, with the advancement of automated importance mapping techniques that can reliably predict regions of visual attention, IMP-J2K may play a significant role in matching an image coding scheme to the human visual system.
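The prioritisation step can be pictured with a small sketch (an assumption-based illustration, not IMP-J2K itself): coding passes are ranked by an importance-weighted distortion-rate slope instead of the plain slope used in ordinary rate-distortion ordering.

import numpy as np

def prioritise(dist_drop, bits, importance):
    """Order coding passes by importance-weighted R-D slope, largest first."""
    slopes = importance * dist_drop / bits    # plain RDO would omit `importance`
    return np.argsort(slopes)[::-1]

# Three hypothetical coding passes: the last one wins once the importance
# of its region (e.g. an ROI from the importance map) is high enough.
print(prioritise(np.array([9.0, 6.0, 5.0]),
                 np.array([3.0, 2.0, 1.0]),
                 np.array([1.0, 1.0, 4.0])))      # -> [2 1 0]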
23

Nguyen, Anthony Ngoc. "Importance Prioritised Image Coding in JPEG 2000". Queensland University of Technology, 2005. http://eprints.qut.edu.au/16005/.

Full text
Abstract:
Importance prioritised coding is a principle aimed at improving the interpretability (or image content recognition) versus bit-rate performance of image coding systems. This can be achieved by (1) detecting and tracking image content or regions of interest (ROIs) that are crucial to the interpretation of an image, and (2) compressing them in such a manner that enables ROIs to be encoded with higher fidelity and prioritised for dissemination or transmission. Traditional image coding systems prioritise image data according to an objective measure of distortion, and this measure does not correlate well with image quality or interpretability. Importance prioritised coding, on the other hand, aims to prioritise image contents according to an 'importance map', which provides a means for modelling and quantifying the relative importance of parts of an image. In such a coding scheme, the importance of parts of an image containing ROIs would be higher than that of other parts of the image. The encoding and prioritisation of ROIs means that the interpretability in these regions would be improved at low bit-rates. An importance prioritised image coder incorporated within the JPEG 2000 international standard for image coding, called IMP-J2K, is proposed to encode and prioritise ROIs according to an 'importance map'. The map can be automatically generated using image processing algorithms that result in a limited number of ROIs, or manually constructed by hand-marking ROIs using a priori knowledge. The proposed importance prioritised coder provides a user of the encoder with great flexibility in defining single or multiple ROIs with arbitrary degrees of importance and prioritising them using IMP-J2K. Furthermore, IMP-J2K codestreams can be reconstructed by generic JPEG 2000 decoders, which is important for interoperability between imaging systems and processes. The interpretability performance of IMP-J2K was quantitatively assessed using the subjective National Imagery Interpretability Rating Scale (NIIRS). The effect of importance prioritisation on image interpretability was investigated, and a methodology to relate the NIIRS ratings, ROI importance scores and bit-rates was proposed to facilitate NIIRS specifications for importance prioritised coding. In addition, a technique is proposed to construct an importance map by allowing a user of the encoder to use gaze patterns to automatically determine and assign importance to fixated regions (or ROIs) in an image. The importance map can be used by IMP-J2K to bias the encoding of the image to these ROIs, and subsequently to allow a user at the receiver to reconstruct the image as desired by the user of the encoder. Ultimately, with the advancement of automated importance mapping techniques that can reliably predict regions of visual attention, IMP-J2K may play a significant role in matching an image coding scheme to the human visual system.
24

Chandrasekaran, Balaji. "COMPARISON OF SPARSE CODING AND JPEG CODING SCHEMES FOR BLURRED RETINAL IMAGES". Master's thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2732.

Full text
Abstract:
Overcomplete representations are currently one of the most highly researched areas, especially in the field of signal processing, due to their strong potential to generate sparse representations of signals. Sparse representation implies that a given signal can be represented with components that are only rarely significantly active. It has been strongly argued that the mammalian visual system relies on sparse and overcomplete representations: the primary visual cortex has overcomplete responses in representing an input signal, which leads to the use of sparse neuronal activity for further processing. This work investigates sparse coding with an overcomplete basis set representation, which is believed to be the strategy employed by the mammalian visual system for efficient coding of natural images. This work analyzes the Sparse Code Learning algorithm, in which the given image is represented by means of a linear superposition of sparse, statistically independent events on a set of overcomplete basis functions. This algorithm trains and adapts the overcomplete basis functions so as to represent any given image in terms of sparse structures. The second part of the work analyzes an inhibition-based sparse coding model in which Gabor-based overcomplete representations are used to represent the image. It then applies an iterative inhibition algorithm, based on competition between neighboring transform coefficients, to select a subset of Gabor functions that represents the given image with a sparse set of coefficients. This work applies the developed models to image compression applications and tests the achievable levels of compression. Research in these areas so far shows that sparse coding algorithms are inefficient at representing sharp, high-frequency image features, so this work analyzes the performance of these algorithms only on natural images without sharp features, and compares the compression results with current industrial-standard coding schemes such as JPEG and JPEG 2000. It also models the characteristics of an image falling on the retina after the distortion effects of the eye, applies the developed algorithms to these images, and tests the compression results.
M.S.E.E.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering MSEE
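The greedy atom selection underlying such sparse coders can be sketched in a few lines; this is a generic matching-pursuit toy over a random overcomplete dictionary, standing in for the learned and Gabor dictionaries the thesis actually uses.

import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=8):
    """Greedily pick atoms from an overcomplete, unit-norm dictionary."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual          # match atoms to residual
        best = int(np.argmax(np.abs(correlations)))
        coeffs[best] += correlations[best]
        residual -= correlations[best] * dictionary[:, best]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))                          # 4x overcomplete dictionary
D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms
patch = rng.normal(size=64)                             # stand-in for an 8x8 patch
c, r = matching_pursuit(patch, D)
print(np.count_nonzero(c), np.linalg.norm(r))           # at most 8 active atoms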
25

Oh, Han, Ali Bilgin e Michael Marcellin. "Visually Lossless JPEG 2000 for Remote Image Browsing". MDPI AG, 2016. http://hdl.handle.net/10150/621987.

Full text
Abstract:
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG 2000 codestream. This codestream is JPEG 2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG 2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results.
26

Pao, I.-Ming. "Improved standard-conforming video coding techniques /". Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/5936.

Full text
27

Erlid, Frøy Brede Tureson. "MCTF and JPEG 2000 Based Wavelet Video Coding Compared to the Future HEVC Standard". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18822.

Full text
Abstract:
Video and multimedia content has over the years become an important part of our everyday life. At the same time, the technology available to consumers has become more and more advanced. These technologies, such as streaming services and advanced displays, have enabled us to watch video content on a large variety of devices, from small, battery-powered mobile phones to large TV sets. Streaming of video over the Internet is a technology that is getting increasingly popular. As bandwidth is a limited resource, efficient compression techniques are clearly needed. The wide variety of devices capable of streaming and displaying video suggests a need for scalable video coders, as different devices might support different sets of resolutions and frame rates. As a response to the demand for efficient coding standards, VCEG and MPEG are jointly developing an emerging video compression standard called High Efficiency Video Coding (HEVC). The goal for this standard is to improve coding efficiency compared to H.264 without affecting image quality. A scalable video coding extension to HEVC is also planned. HEVC is based on the classic hybrid coding approach. This, however, is not the only way to compress video, and attention is given to wavelet coders in the literature. JPEG 2000 is a wavelet image coder that offers spatial and quality scalability. Combining JPEG 2000 with Motion Compensated Temporal Filtering (MCTF) gives a wavelet video coder that offers temporal, spatial and quality scalability, without the need for complex extensions. In this thesis, a wavelet video coder based on the combination of MCTF and JPEG 2000 was implemented. This coder was compared to HEVC through objective and subjective assessments, with the use case being streaming of video over a typical consumer broadband connection. The objective assessment showed that HEVC was the superior system in terms of both PSNR and SSIM. The subjective assessment revealed that observers preferred the distortion produced by HEVC over that of the proposed system. However, the results also indicated that improvements to the proposed system can be made that could enhance its objective and subjective quality. In addition, there were indications that a use case operating at higher bit rates is more suitable for the proposed system.
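The temporal half of such a coder can be illustrated with Haar lifting steps (a motion-free toy; actual MCTF motion-compensates the reference frame before the predict step, which is what the "MC" stands for):

import numpy as np

def mctf_haar(frame_a, frame_b):
    """One level of Haar temporal lifting: detail and average subbands."""
    high = frame_b - frame_a          # predict step: temporal detail
    low = frame_a + high / 2.0        # update step: temporal average
    return low, high

def inverse(low, high):
    frame_a = low - high / 2.0
    frame_b = high + frame_a
    return frame_a, frame_b           # lifting guarantees perfect reconstruction

a = np.random.default_rng(1).random((4, 4))
b = a + 0.1
lo, hi = mctf_haar(a, b)
ra, rb = inverse(lo, hi)
print(np.allclose(ra, a) and np.allclose(rb, b))    # True

The low-pass frames are what JPEG 2000 then codes spatially, and discarding high-pass subbands yields the temporal scalability mentioned above.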
28

Grecos, Christos. "Low cost algorithms for image/video coding and rate control". Thesis, University of South Wales, 2001. https://pure.southwales.ac.uk/en/studentthesis/low-cost-algorithms-for-imagevideo-coding-and-rate-control(40ae7449-3372-4f21-aaec-91ad339907e9).html.

Full text
29

Chan, Syin. "The use of the JPEG image compression standard and the problems of recompression". Thesis, University of Kent, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316248.

Full text
30

Eklund, Anders. "Image coding with H.264 I-frames". Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8920.

Full text
Abstract:
In this thesis, part of the H.264 video coding standard has been implemented. The part of the video coder used to code the I-frames has been implemented to see how well suited it is for regular image coding. The big difference from other image coding standards, such as JPEG and JPEG2000, is that this video coder uses both a predictor and a transform to compress the I-frames, while JPEG and JPEG2000 only use a transform. Since the prediction error is sent instead of the actual pixel values, many of the values are zero or close to zero before the transformation and quantization. The method thus works much like a video encoder, with the difference that blocks of an image are predicted instead of frames in a video sequence.
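A toy version of this predict-then-transform idea (a sketch under simplifying assumptions: a floating-point DCT stands in for H.264's integer transform, DC prediction stands in for the full set of intra modes, and the block's own edges stand in for neighbouring decoded pixels):

import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

block = np.array([[52, 55, 61, 66],
                  [63, 59, 55, 90],
                  [62, 59, 68, 113],
                  [63, 58, 71, 122]], dtype=float)
top, left = block[0], block[:, 0]                              # stand-in neighbours
pred = np.full((4, 4), round((top.sum() + left.sum()) / 8.0))  # DC intra mode
residual = block - pred                                        # what gets coded
D = dct_matrix(4)
coeffs = D @ residual @ D.T                                    # transform the residual
print(np.round(coeffs, 1))                                     # mostly near-zero values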
31

Xin, Jun. "Improved standard-conforming video transcoding techniques /". Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/5871.

Full text
32

Abraham, Arun S. "Bandwidth-aware video transmission with adaptive image scaling". [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0001221.

Full text
33

Gatica, Perez Daniel. "Extensive operators in lattices of partitions for digital video analysis /". Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/5874.

Full text
34

Kadri, Imen. "Controlled estimation algorithms of disparity map using a compensation compression scheme for stereoscopic image coding". Thesis, Paris 13, 2020. http://www.theses.fr/2020PA131002.

Full text
Abstract:
Nowadays, 3D technology is in ever-growing demand because stereoscopic imaging creates a sensation of immersion. However, the price of this realistic representation is a doubling of the information needed for storage or transmission compared to a 2D image, because a stereoscopic pair results from the generation of two views of the same scene. This thesis focuses on stereoscopic image coding and in particular on improving disparity map estimation when using the Disparity Compensated Compression (DCC) scheme. Classically, when using a block matching algorithm with DCC, a disparity map is estimated between the left image and the right one, and a predicted image is then computed. The difference between the original right view and its prediction is called the residual error. The latter, after encoding and decoding, is injected to reconstruct the right view by compensation (i.e. refinement). Our first algorithm takes this refinement into account when estimating the disparity map. This gives a proof of concept showing that selecting the disparity according to the compensated image instead of the predicted one is more efficient, but at the expense of increased numerical complexity. To deal with this shortcoming, a simplified model is proposed of how the JPEG coder, through the quantization of the DCT components of the residual error, affects the compensation. In the last part, a method is proposed to select the disparity map minimizing a joint bitrate-distortion metric, based on the bitrate needed for encoding the disparity map and the distortion of the predicted view, by combining two existing stereoscopic image coding algorithms.
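The core idea, choosing each block's disparity from the compensated reconstruction rather than from the bare prediction, can be sketched as follows (a toy under stated assumptions: uniform scalar quantisation stands in for JPEG coding of the residual, and the helper names are invented):

import numpy as np

def coded_residual(residual, step=16.0):
    """Toy stand-in for JPEG encoding/decoding of the residual error."""
    return np.round(residual / step) * step

def best_disparity(right_blk, left_rows, x, max_d=32, step=16.0):
    """Pick the disparity minimising error AFTER compensation,
    i.e. on prediction + decoded residual, not on the prediction alone."""
    h, w = right_blk.shape
    best_d, best_err = 0, np.inf
    for d in range(min(max_d, x) + 1):
        pred = left_rows[:, x - d : x - d + w]          # shifted left-view block
        recon = pred + coded_residual(right_blk - pred, step)
        err = float(np.sum((right_blk - recon) ** 2))
        if err < best_err:
            best_d, best_err = d, err
    return best_d, best_err

Conventional DCC would compare right_blk against pred directly; moving the comparison after the quantise/dequantise round trip is exactly the change the abstract credits with better reconstruction quality.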
35

Flordal, Oskar. "A study of CABAC hardware acceleration with configurability in multi-standard media processing". Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4477.

Full text
Abstract:
To achieve greater compression ratios, new video and image CODECs like H.264 and JPEG 2000 take advantage of context-adaptive binary arithmetic coding (CABAC). As it contains computationally heavy algorithms, fast implementations have to be made when it is performed on large amounts of data, such as when compressing high-resolution formats like HDTV. This document describes how entropy coding works in general, with a focus on arithmetic coding and CABAC. Furthermore, the document discusses the demands of the different CABACs and proposes different options for hardware and instruction-level optimisation. Testing and benchmarking of these implementations are done to ease evaluation. The main contribution of the thesis is parallelising and unifying the CABACs, which is discussed and partly implemented. The result of the ILA is improved program flow through specialised branching operations. The result of the DHA is a two-bit parallel accelerator with hardware sharing between the JPEG 2000 and H.264 encoders, with limited decoding support.
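For orientation, the interval narrowing at the heart of any binary arithmetic coder looks like the toy below (floating-point and context-free for brevity; CABAC itself is integer-based, renormalised and driven by context models, which is precisely what makes fast hardware hard):

def encode(bits):
    """Minimal adaptive binary arithmetic coder (illustrative only)."""
    low, high = 0.0, 1.0
    c0, c1 = 1, 1                        # adaptive counts, one implicit context
    for b in bits:
        p0 = c0 / (c0 + c1)              # current estimate of P(bit = 0)
        split = low + p0 * (high - low)
        if b == 0:
            high, c0 = split, c0 + 1
        else:
            low, c1 = split, c1 + 1
    return (low + high) / 2              # any value in [low, high) identifies the bits

print(encode([0, 0, 1, 0, 1, 1, 0, 0]))

The serial dependency is visible: each bit's interval update depends on the previous one, which is why multi-symbol-per-cycle designs typically have to evaluate both outcomes speculatively.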
36

Meng, Bojun. "Efficient intra prediction algorithm in H.264 /". View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20MENG.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 66-68). Also available in electronic version. Access restricted to campus users.
37

Wu, David, and dwu8@optusnet.com.au. "Perceptually Lossless Coding of Medical Images - From Abstraction to Reality". RMIT University, Electrical & Computer Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080617.160025.

Full text
Abstract:
This work explores a novel vision-model-based coding approach to encode medical images at a perceptually lossless quality, within the framework of the JPEG 2000 coding engine. Perceptually lossless encoding offers the best of both worlds, delivering images free of visual distortions while providing significantly greater compression ratio gains over its information lossless counterparts. This is achieved through a visual pruning function, embedded with an advanced model of the human visual system, to accurately identify and efficiently remove visually irrelevant/insignificant information. In addition, it maintains bit-stream compliance with the JPEG 2000 coding framework and subsequently with the Digital Imaging and Communications in Medicine (DICOM) standard. Equally, the pruning function is applicable to other Discrete Wavelet Transform based image coders, e.g., Set Partitioning in Hierarchical Trees. Further significant coding gains are exploited through an artificial edge segmentation algorithm and a novel arithmetic pruning algorithm. The coding effectiveness and qualitative consistency of the algorithm are evaluated through a double-blind subjective assessment with 31 medical experts, performed using a novel two-staged forced-choice assessment devised for medical experts, offering the benefits of greater robustness and accuracy in measuring subjective responses. The assessment showed that no differences of statistical significance were perceivable between the original images and the images encoded by the proposed coder.
38

Lin, Li-Yang. "VLSI implementation for MPEG-1/Audio Layer III chip : bitstream processor - low power design /". [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18396.pdf.

Full text
39

Shao, Wenbin. "Automatic annotation of digital photos". Access electronically, 2007. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20080403.120857/index.html.

Full text
40

Jamrozik, Michele Lynn. "Spatio-temporal segmentation in the compressed domain". Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/15681.

Full text
41

Shah, Syed Irtiza Ali. "Single camera based vision systems for ground and; aerial robots". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37143.

Full text
Abstract:
Efficient and effective vision systems are proposed in this work for object detection for ground and aerial robots venturing into unknown environments with minimal vision aids, i.e. a single camera. The first problem attempted is that of object search and identification in a situation similar to a disaster site. Based on image analysis, typical pixel-based characteristics of a visual marker have been established to search for, using a block-based search algorithm along with a noise and interference filter. The proposed algorithm was successfully used in the International Aerial Robotics Competition 2009. The second problem deals with object detection for collision avoidance in 3D environments. It is shown that a 3D model of the scene can be generated from the 2D image information of a single camera flying through a very small arc of lateral flight around the object, without the need to capture images from all sides. The forward-flight simulations show that the depth extracted from forward motion is usable for a large part of the image. After analyzing various constraints associated with this and other existing approaches, motion estimation is proposed. Implementation of motion estimation on videos from onboard cameras resulted in various undesirable and noisy vectors. An in-depth analysis of such vectors is presented, and solutions are proposed and implemented, demonstrating motion estimation suitable for the collision avoidance task.
42

Beltrão, Gabriel Tedgue. "Rápida predição da direção do bloco para aplicação com transformadas direcionais". [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260075.

Full text
Advisors: Yuzo Iano, Rangel Arthur
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: DCT-based transforms are widely adopted for video compression. Recently, many authors have highlighted that prediction residuals usually have directional structures that cannot be efficiently represented by the conventional DCT. In this context, many directional transforms have been proposed as a way to overcome the DCT's deficiency in dealing with such structures. Although directional transforms have superior performance over the conventional DCT, their application to video compression requires evaluating the increase in coding time and implementation complexity. This work proposes a fast algorithm for estimating block directions before applying directional transforms. The encoder identifies the predominant direction in each block and applies only the transform corresponding to that direction. The algorithm can be used in conjunction with any directional-transform proposal that uses rate-distortion optimization (RDO) to select the direction to be explored, reducing implementation complexity to levels similar to when only the conventional DCT is used.
Master's degree
Telecommunications and Telematics
Master in Electrical Engineering
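A cheap direction estimate of the kind described above might look like the following sketch (gradient-histogram voting is an assumed stand-in; the dissertation's exact predictor is not reproduced here):

import numpy as np

def dominant_direction(block, n_dirs=8):
    """Estimate a block's dominant orientation from finite differences,
    instead of trying every directional transform under RDO."""
    gy, gx = np.gradient(block.astype(float))
    theta = np.arctan2(gy, gx) % np.pi          # orientation, sign ignored
    mag = np.hypot(gx, gy)                      # gradient magnitude as vote weight
    bins = (theta / np.pi * n_dirs).astype(int) % n_dirs
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_dirs)
    return int(np.argmax(hist))                 # index of the transform to apply

block = np.tile(np.arange(8.0), (8, 1))         # horizontal ramp test block
print(dominant_direction(block))                # 0: the horizontal-gradient bin

The encoder would then run only the transform associated with the winning bin, rather than exhaustively rate-distortion-testing all candidate directions.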
43

Araujo, André Filgueiras de. "Uma proposta de estimação de movimento para o codificador de vídeo Dirac". [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261689.

Full text
Advisor: Yuzo Iano
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: The main purpose of this work is to design a new algorithm that makes motion estimation in the Dirac video codec more efficient. Motion estimation is a critical stage in video coding, in which most of the processing lies. The recently released Dirac codec is based on techniques different from those usually employed (as in MPEG-based codecs), and aims at achieving efficiency comparable to the best current codecs (such as H.264/AVC). This work initially presents comparative studies of state-of-the-art motion estimation techniques and of the Dirac codec, which provide the knowledge base for the algorithm proposed in the sequel. The proposal is the Modified Hierarchical Enhanced Adaptive Rood Pattern Search (MHEARPS) algorithm. It presents superior performance compared to other relevant algorithms in every case analysed, requiring on average 79% fewer computations while maintaining similar video reconstruction quality.
Master's degree
Telecommunications and Telematics
Master in Electrical Engineering
44

Almeida Junior, Jurandy Gomes de, 1983-. "Recuperação de vídeos comprimidos por conteúdo". [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275706.

Full text
Advisor: Ricardo da Silva Torres
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Recent advances in technology have increased the availability of video data, creating large digital video collections. This has spurred great interest in systems that are able to manage those data in an efficient way. Making efficient use of video information requires the development of powerful tools to extract high-level semantics from low-level features of the video content. Due to the complexity of the video material, there are five main challenges in designing such systems: (1) dividing the video stream into manageable segments according to its organization structure; (2) implementing algorithms for encoding the low-level features of each video segment into feature vectors; (3) developing similarity measures for comparing these segments by using their feature vectors; (4) quickly answering similarity queries over a huge number of video sequences; and (5) presenting the list of results in a user-friendly way. Numerous techniques have been proposed to support such requirements. Most existing works involve algorithms and methods that are computationally expensive, in terms of both time and space, limiting their application to the academic world and/or big companies. Contrary to this trend, the market has shown a growing demand for mobile and embedded devices. In this scenario, developing techniques that are as effective as they are efficient is imperative in order to allow more people access to modern technologies. In this context, this work presents five novel approaches for the analysis, indexing and retrieval of digital videos. All these contributions are combined to create a computationally fast system for content-based video management that achieves a quality level similar, or even superior, to current solutions.
Doctorate
Computer Science
Doctor of Computer Science
45

Thanh, V. T. Kieu (Vien Tat Kieu). "Post-processing of JPEG decompressed images". Thesis, 2002. https://eprints.utas.edu.au/22088/1/whole_ThanhVienTatKieu2002_thesis.pdf.

Full text
46

Sevcenco, Ana-Maria. "Adaptive strategies and optimization techniques for JPEG-based low bit-rate image coding". Thesis, 2007. http://hdl.handle.net/1828/2282.

Full text
Abstract:
The field of digital image compression has been intensively explored to obtain ever-improved performance for a given bit budget. The DCT-based JPEG standard remains one of the most popular image compression standards due to its reasonable coding performance, fast implementations, friendly low-cost architecture, and flexibility and adaptivity at the block level. In this thesis, we consider the problem of low bit-rate image coding and present new approaches using adaptive strategies and optimization techniques for performance enhancement, while employing the DCT block-based JPEG standard as the main framework with several pre- and post-processing steps. We propose an adaptive coding approach that involves a variable quality factor in the quantization step of JPEG compression, to make the compression more flexible with respect to bit budget requirements. We also propose an adaptive sampling approach based on a variable down-/up-scaling rate and local image characteristics. In addition, we study an adaptive filtering approach in which the optimal filter coefficients are determined by making use of optimization methods and symmetric extension techniques. Simulation results are presented to demonstrate the effectiveness of the proposed techniques relative to recent works in the field of low bit-rate image coding.
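The variable quality factor mentioned above plugs into JPEG quantisation in the usual way; the sketch below uses the standard luminance table from Annex K of the JPEG specification and the common IJG scaling rule (treating this as the thesis's exact mechanism is an assumption):

import numpy as np

# Standard JPEG luminance quantisation table (Annex K).
BASE_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def scaled_table(quality):
    """IJG-style scaling: quality in [1, 100], higher = finer quantisation."""
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    return np.clip((BASE_Q * s + 50) // 100, 1, 255)

print(scaled_table(75)[0])    # first row of the table at quality 75

Letting `quality` vary adaptively is what makes the bit budget negotiable, which is the flexibility the abstract claims.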
Gli stili APA, Harvard, Vancouver, ISO e altri
47

"The effects of evaluation and rotation on descriptors and similarity measures for a single class of image objects". Thesis, 2008. http://hdl.handle.net/10210/564.

Testo completo
Abstract (sommario):
“A picture is worth a thousand words.” Taken literally, this proverb reminds us that every person interprets images or photos differently in terms of their content, owing to the semantics contained in these images. Content-based image retrieval has become a vast area of research aimed at successfully describing and retrieving images according to their content. In military applications, intelligence images such as those obtained by the defence intelligence group are taken (mostly on film), developed, and then manually annotated. These photos are then stored in a filing system according to certain attributes such as location, content, etc. Retrieving these images at a later stage might take days or even weeks. Thus, the need for a digital annotation system has arisen. Military images contain various vehicles and buildings that need to be detected, described, and stored in a database. In our research we examine the effects that the rotation and elevation angle of an object in an image have on retrieval performance. We chose model cars in order to control the environment in which the photos were taken, such as the background, the lighting, and the distance between the objects and the camera. There is also a wide variety of shapes and colours of these models to obtain and work with. We review the MPEG-7 descriptor schemes recommended by the MPEG group for video and image retrieval and implement three of them. The military may require that, when the defence intelligence group is in the field, images be transmitted directly via satellite to headquarters. We have therefore included the JPEG2000 standard, which gives a compression performance increase of about 20% over the original JPEG standard and supports wireless as well as secure transmission of images. In addition to the MPEG-7 descriptors, we have also implemented the fuzzy histogram and colour correlogram descriptors. We then carried out a series of experiments to determine the effects that rotation and elevation have on our model vehicle images. Observations are made when each vehicle is considered separately and when the vehicles are described and combined into a single database. After the experiments, we examine the descriptors and determine which adjustments could be made to improve their retrieval performance.
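As an illustration of the kind of descriptor involved, here is a simplified colour auto-correlogram sketch in Python (hypothetical parameter choices; not the author's implementation): for each quantized colour and each distance d, it estimates how often a pixel at distance d from a pixel of that colour has the same colour.

import numpy as np

def auto_correlogram(img, n_colors=4, distances=(1, 3, 5)):
    # img: H x W x 3 uint8 array; quantize each channel to n_colors levels.
    q = img.astype(np.int32) // (256 // n_colors)
    labels = q[..., 0] * n_colors**2 + q[..., 1] * n_colors + q[..., 2]
    feat = []
    for d in distances:
        same = np.zeros(n_colors**3)
        total = np.zeros(n_colors**3)
        # Compare each pixel with its d-shifted horizontal/vertical neighbour.
        for a, b in ((labels[:-d, :], labels[d:, :]),
                     (labels[:, :-d], labels[:, d:])):
            a, b = a.ravel(), b.ravel()
            np.add.at(total, a, 1)
            np.add.at(same, a[a == b], 1)
        feat.append(same / np.maximum(total, 1))
    return np.concatenate(feat)  # length: len(distances) * n_colors**3

Because the counts are gathered over all pixel pairs at a given distance, the descriptor is largely insensitive to in-plane rotation, one reason correlogram-style features suit this study.
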
Dr. W.A. Clarke
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Natu, Ambarish Shrikrishna. "Error resilience in JPEG2000 /". 2003. http://www.library.unsw.edu.au/~thesis/adt-NUN/public/adt-NUN20030519.163058/index.html.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Darbyshire, Paul. "An investigation into the parallel implementation of JPEG for image compression". Thesis, 1998. https://vuir.vu.edu.au/17911/.

Testo completo
Abstract (sommario):
This research develops a parallel algorithm that implements the JPEG standard for continuous-tone still picture compression, to be run on a group of transputers. The processor-farm paradigm is adopted; this research shows it to be the best paradigm for use with the JPEG baseline algorithm, based on the measured component times within the algorithm. The speedup of the parallel algorithm is investigated and measured against a single-processor version. An optimal distribution of JPEG components on the processors within the processor farm is established. The research focuses on the investigation of the optimal number of processors that can be used effectively for a JPEG implementation adopting the processor-farm paradigm. This optimal number is termed the saturation point. Once the saturation point has been reached, it is shown that the parallel algorithm's speedup cannot be improved without the redistribution of tasks in the farm, regardless of how many extra processors are used. Further distributions of processing tasks are investigated with the aim of extending the saturation point. It is shown that the saturation point can be extended, and the distributions of tasks that achieve this are demonstrated. It is also shown that, while the saturation point can be increased, the gains are minimal and may not be worth the cost of the extra processor; in fact, the algorithm's speedup diminishes after the addition of the third processor, up to the saturation point. A simulation algorithm is devised using Java, which takes advantage of the multithreaded nature of the language. A technique is developed for simulating the processor-farm paradigm: a Java thread group serves as a simulated processor, and a Java thread allocated to that group serves as a process belonging to that processor. A process-scheduling scheme is refined which allows the simulated parallel system to be monitored over simulated scheduling rounds. A scheme is also shown that simulates the message passing of the transputer. This simulated system allows the investigation of the saturation point regardless of the number of processors physically available. Further data on the saturation point support the hypothesis that the saturation point is around seven processors; the hypothesis is based on extrapolation of the results obtained using a limited number of processors. Using the simulation, the behaviour of a parallel system can be observed with an arbitrary number of processors. Since the simulation is written in Java, it is also platform independent, and it defines an algorithm suitable for a distributed system.
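As a rough sketch of the processor-farm paradigm (in Python rather than the transputer/Java environments used in the thesis), a farmer hands independent work units, here 8x8 blocks to be DCT-transformed as in the JPEG baseline, to a pool of workers and collects the results; timing the run for increasing worker counts is the simplest way to observe a saturation point.

import time
import numpy as np
from scipy.fft import dctn
from concurrent.futures import ProcessPoolExecutor

def encode_block(block):
    # Stand-in for one JPEG pipeline component: the 8x8 forward DCT.
    return dctn(block, norm="ortho")

def farm(blocks, n_workers):
    # The "farmer" distributes blocks to workers and gathers results in order.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(encode_block, blocks, chunksize=64))

if __name__ == "__main__":
    blocks = [np.random.rand(8, 8) for _ in range(20000)]
    for n in range(1, 9):  # timing each run exposes the saturation point
        t0 = time.perf_counter()
        farm(blocks, n)
        print(n, "workers:", round(time.perf_counter() - t0, 3), "s")
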
Gli stili APA, Harvard, Vancouver, ISO e altri
50

In, Jaehan. "RD optimized progressive image coding using JPEG". Thesis, 1998. http://hdl.handle.net/2429/7921.

Testo completo
Abstract (sommario):
The JPEG standard allows four modes of operation: the hierarchical (HJPEG), progressive (PJPEG), sequential (SJPEG), and lossless modes. (Lossless JPEG algorithms are rarely used, since their performance levels are significantly lower than those of other lossless image compression algorithms.) The HJPEG and PJPEG modes inherently support progressive image coding. In HJPEG, an image is decomposed into subimages of different resolutions, each of which is then coded using one of the other three JPEG modes. Progressiveness within a resolution in HJPEG can be achieved when each subimage is coded using PJPEG. An image coded using PJPEG consists of scans, each of which contributes a portion of the reconstructed image quality. While SJPEG yields essentially the same level of compression performance for most encoder implementations, the performance of PJPEG depends highly upon the designed encoder structure, due to the flexibility the standard leaves open in designing PJPEG encoders. In this thesis, an efficient progressive image coding algorithm is developed that is compliant with the JPEG still image compression standard. The JPEG-compliant progressive image encoder is an HJPEG encoder that employs a rate-distortion optimized PJPEG encoding algorithm for each image resolution. Our encoder outperforms an optimized SJPEG encoder in terms of compression efficiency, substantially so at low and high bit rates. Moreover, unlike existing JPEG-compliant encoders, our encoder can achieve precise rate control for each fixed resolution. Such good compression performance at low bit rates and precise rate control are two highly desired features currently sought for the emerging JPEG-2000 standard.
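The rate-distortion flavour of PJPEG scan design can be conveyed with a toy calculation (hypothetical numbers; not the thesis's algorithm): order candidate scans by distortion reduction per bit and keep adding scans while the bit budget allows.

def order_scans(scans, budget_bits):
    # scans: list of (name, rate_bits, distortion_drop); greedy by RD slope.
    ordered = sorted(scans, key=lambda s: s[2] / s[1], reverse=True)
    chosen, spent = [], 0
    for name, rate, drop in ordered:
        if spent + rate <= budget_bits:
            chosen.append(name)
            spent += rate
    return chosen, spent

# Hypothetical candidate scans for one resolution level.
scans = [("DC coefficients", 8000, 400.0),
         ("AC 1-5, most significant bits", 20000, 250.0),
         ("AC 6-63, most significant bits", 35000, 120.0),
         ("refinement bits", 30000, 60.0)]
print(order_scans(scans, budget_bits=60000))  # steepest-slope scans first
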
Gli stili APA, Harvard, Vancouver, ISO e altri