Dissertations / Theses on the topic 'JPEG 2000'

1

Nguyen, Anthony Ngoc. "Importance Prioritised Image Coding in JPEG 2000." Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16005/1/Anthony_Nguyen_Thesis.pdf.

Abstract:
Importance prioritised coding is a principle aimed at improving the interpretability (or image content recognition) versus bit-rate performance of image coding systems. This can be achieved by (1) detecting and tracking image content or regions of interest (ROI) that are crucial to the interpretation of an image, and (2) compressing them in such a manner that enables ROIs to be encoded with higher fidelity and prioritised for dissemination or transmission. Traditional image coding systems prioritise image data according to an objective measure of distortion, and this measure does not correlate well with image quality or interpretability. Importance prioritised coding, on the other hand, aims to prioritise image contents according to an 'importance map', which provides a means for modelling and quantifying the relative importance of parts of an image. In such a coding scheme the importance in parts of an image containing ROIs would be higher than in other parts of the image. The encoding and prioritisation of ROIs means that the interpretability in these regions would be improved at low bit-rates. An importance prioritised image coder incorporated within the JPEG 2000 international standard for image coding, called IMP-J2K, is proposed to encode and prioritise ROIs according to an 'importance map'. The map can be automatically generated using image processing algorithms that result in a limited number of ROIs, or manually constructed by hand-marking ROIs using a priori knowledge. The proposed importance prioritised coder provides a user of the encoder with great flexibility in defining single or multiple ROIs with arbitrary degrees of importance and prioritising them using IMP-J2K. Furthermore, IMP-J2K codestreams can be reconstructed by generic JPEG 2000 decoders, which is important for interoperability between imaging systems and processes. The interpretability performance of IMP-J2K was quantitatively assessed using the subjective National Imagery Interpretability Rating Scale (NIIRS). The effect of importance prioritisation on image interpretability was investigated, and a methodology to relate the NIIRS ratings, ROI importance scores and bit-rates was proposed to facilitate NIIRS specifications for importance prioritised coding. In addition, a technique is proposed to construct an importance map by allowing a user of the encoder to use gaze patterns to automatically determine and assign importance to fixated regions (or ROIs) in an image. The importance map can be used by IMP-J2K to bias the encoding of the image to these ROIs, and subsequently to allow a user at the receiver to reconstruct the image as desired by the user of the encoder. Ultimately, with the advancement of automated importance mapping techniques that can reliably predict regions of visual attention, IMP-J2K may play a significant role in matching an image coding scheme to the human visual system.
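The prioritisation idea lends itself to a compact illustration. The sketch below is not the IMP-J2K implementation; it is a minimal, assumed model in which each code-block's distortion reduction is weighted by the mean value of the importance map over the pixels it covers, so coding passes covering ROIs are emitted first in the codestream.

```python
import numpy as np

def importance_weighted_priority(dist_reduction, block_coords, importance_map):
    """Order code-blocks by importance-weighted distortion reduction.

    dist_reduction : dict mapping block id -> distortion decrease of its
                     next coding pass (a plain MSE-based measure)
    block_coords   : dict mapping block id -> (y0, y1, x0, x1) pixel extent
    importance_map : 2-D array in [0, 1], higher = more important (ROI)
    """
    weights = {}
    for b, (y0, y1, x0, x1) in block_coords.items():
        # Mean importance of the image region this block covers.
        weights[b] = importance_map[y0:y1, x0:x1].mean()
    # Blocks whose passes remove the most importance-weighted distortion
    # come first, so ROIs reach high fidelity at low bit-rates.
    return sorted(dist_reduction, key=lambda b: dist_reduction[b] * weights[b],
                  reverse=True)

# Toy usage: one ROI block and one background block.
imp = np.zeros((64, 64)); imp[:32, :32] = 1.0          # hand-marked ROI
order = importance_weighted_priority(
    {"b0": 10.0, "b1": 12.0},
    {"b0": (0, 32, 0, 32), "b1": (32, 64, 32, 64)},
    imp)
print(order)  # ['b0', 'b1']: the ROI block wins despite lower raw reduction
```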
2

Oh, Han, Ali Bilgin, and Michael Marcellin. "Visually Lossless JPEG 2000 for Remote Image Browsing." MDPI AG, 2016. http://hdl.handle.net/10150/621987.

Abstract:
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG 2000 codestream. This codestream is JPEG 2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG 2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results.
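The mechanism can be sketched with placeholder numbers (the step sizes below are assumptions, not the thresholds measured in the paper): each reduced display resolution tolerates a larger quantization step, which translates into finest bit-planes that a JPIP client simply does not need to fetch.

```python
import math

# Hypothetical visually lossless quantization steps for one subband as a
# function of display resolution (0 = full, 1 = half, 2 = quarter).
vl_step = {0: 1.0, 1: 2.2, 2: 4.8}   # placeholder values only
base_step = vl_step[0]               # step the encoder actually uses

def bitplanes_to_skip(resolution):
    """Finest bit-planes that need not be transmitted when the client
    views the image at the given reduced resolution."""
    return int(math.floor(math.log2(vl_step[resolution] / base_step)))

for r in sorted(vl_step):
    print(f"display at 1/{2 ** r} resolution: skip {bitplanes_to_skip(r)} bit-plane(s)")
```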
3

Tovslid, Magnus Jeffs. "JPEG 2000 Quality Scalability in an IP Networking Scenario." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18465.

Abstract:
In this thesis, the JPEG 2000 quality scalability feature was investigated in the context of transporting video over IP networks. The goals of the investigation were two-fold. First, it was desired to find a way of choosing the number of quality layers to embed in a JPEG 2000 codestream. In previous work, this choice has been more or less arbitrary. Second, it was desired to find how low the video bitrate could be dropped before it became perceptible to a viewer. This information can be used in an IP networking scenario to e.g. adapt the video bitrate blindly according to the measured channel capacity as long as the drop in bitrate is expected to be imperceptible. When the drop in bitrate is expected to be perceptible, a switch could be made to a smoother bitrate adaptation. A way of choosing the total number of quality layers to embed in a codestream was found by minimizing the difference in predicted quality between direct and scaled compression. Scaled compression is the compression which is achieved by extracting quality layers. The minimization procedure was bound by the speed of the encoder, as it takes longer for an encoder to embed more quality layers. It was found that the procedure was highly dependent on the desired bitrate range. A subjective test was run in order to measure how large a drop in video bitrate had to be for it to become perceptible. A newly developed JPEG 2000 quality layer scaler was used to produce the different bitrates in the test. The number of quality layers to embed in the codestream was found by using the minimization procedure mentioned above. It was found that, for the bitrate range used in the test, 2-30 Mbit/s for a resolution of 1280x720 at 25 frames per second, the magnitude of the drop in bitrate had to be at least 10 Mbit/s before the participants in the test noticed it. A comparison with objective quality metrics, SSIM and PSNR, revealed that it was very difficult to predict the visibility of the drops in bitrate by using these metrics. Designing the type of rate control mentioned in the first paragraph will therefore have to wait until a parameter with good predictive properties can be found.
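The layer-count selection can be paraphrased as a small search. The quality functions below are toy stand-ins (assumptions, not the thesis's predictor), and the encoder-speed bound appears as a simple cap on the number of layers.

```python
def choose_layer_count(rates, quality_direct, quality_scaled, max_layers):
    """Pick the number of quality layers minimizing the mean gap between
    direct compression and layer-extraction ('scaled') compression.

    rates          : bitrates of interest (the result depends strongly on
                     this range, as the thesis found)
    quality_direct : r -> quality when encoding directly at rate r
    quality_scaled : (r, n) -> quality when extracting from an n-layer stream
    max_layers     : cap imposed by encoder speed (more layers = slower)
    """
    best, best_gap = None, float("inf")
    for n in range(1, max_layers + 1):
        gap = sum(quality_direct(r) - quality_scaled(r, n) for r in rates) / len(rates)
        if gap < best_gap:
            best, best_gap = n, gap
    return best

# Toy model: scaled quality approaches direct quality as layers increase.
q_direct = lambda r: 20 + 10 * r
q_scaled = lambda r, n: q_direct(r) - 3.0 / n
print(choose_layer_count([0.25, 0.5, 1.0, 2.0], q_direct, q_scaled, max_layers=12))
# -> 12 here; a real predictor trades this gap against encoder speed
```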
4

Aouadi, Imed. "Optimisation de JPEG 2000 sur système sur puce programmable." Paris 11, 2005. https://pastel.archives-ouvertes.fr/pastel-00001658.

Abstract:
Recently the field of video, image and audio processing has experienced several significant advances at both the algorithm and architecture levels. One of these evolutions is the emergence of the new ISO/IEC JPEG2000 image compression standard, which succeeds JPEG. This new standard presents many functionalities and features which allow it to be adapted to a large spectrum of applications. However, these features bring algorithmic complexity of a much higher degree than that of JPEG, which in turn makes it very difficult to optimize for certain implementations under very hard constraints. Those constraints could be area, timing or power constraints, or more likely all of them. One of the key steps in JPEG2000 processing is entropy coding, which takes about 70% of the total execution time when compressing an image. It is therefore essential to analyze the potential for optimizing JPEG2000 implementations. FPGA devices are currently the main reconfigurable circuits available on the market. Although they were long used only for ASIC prototyping, they are able today to provide an effective solution for the hardware implementation of applications in many fields. Considering the progress made by the FPGA industry in integration capacity and operating frequency, reconfigurable architectures are now an effective and competitive solution to meet the needs of both prototyping and final hardware implementations. In this work we propose a methodology for studying the possible implementations of JPEG2000, starting with the evaluation of software implementations on commercial platforms.
5

Taylor, James Cary, Jacklynn Hall, and Tony Yuan. "Dean's Innovation Challenge: Researching the JPEG 2000 Image Decoder." Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/244833.

Abstract:
The goal of this thesis is to analyze the current commercialization process of the University of Arizona and its Office of Technology Transfer (OTT), and potential opportunities for strengthening that process. This is done through an initial review of a patented technology, the JPEG 2000 Corrupt Codestream Decoder, as well as its parent technology, the JPEG 2000 image standard. The JPEG 2000 decoder is used to decode corrupt images transferred in real time, in order to utilize the "usable" information as efficiently as possible. The technology itself is analyzed, including its strengths, weaknesses, and areas of opportunity. Next, the commercialization history of the technology is reviewed, including patent dates, related licensees, and the direction of the technology. Emphasis is placed on processes and environments that helped the technology, as well as those that have hindered it. More specifically, since the technology was never implemented in a commercialized setting, there is a look at why the technology was not successfully licensed and commercialized. Finally, the commercialization process of OTT is examined in a broader context that applies to all technologies OTT deals with, covering the tasks of OTT, shortfalls of the Office, and the commercialization process itself. Once all items are addressed, areas of recommendation are described with the aim of improving the efficiency and resourcefulness of OTT.
6

Park, Min Jee, Jae Taeg Yu, Myung Han Hyun, and Sung Woong Ra. "A Development of Real Time Video Compression Module Based on Embedded Motion JPEG 2000." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596452.

Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
In this paper, we develop a miniaturized real-time video compression module (VCM) based on embedded Motion JPEG 2000 using the ADV212 and an FPGA. We consider the layout of components, the values of damping resistors, and the lengths of the pattern lines for optimal hardware design. For software design, we consider compression steps to monitor the status of the system and make the system robust. The developed VCM is approximately 4 times lighter than the previous development. Furthermore, experimental results show that the PSNR is increased by about 3 dB and the compression processing time is approximately 2 times faster than in the previous development.
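For reference, the PSNR figure quoted above is the usual logarithmic measure over the mean squared error; a minimal computation for 8-bit frames looks like this.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images/frames."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example: psnr(original_frame, decoded_frame) -> e.g. 38.2 (dB);
# a 3 dB gain halves the mean squared error.
```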
7

Erlid, Frøy Brede Tureson. "MCTF and JPEG 2000 Based Wavelet Video Coding Compared to the Future HEVC Standard." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18822.

Abstract:
Video and multimedia content has over the years become an important part of our everyday life. At the same time, the technology available to consumers has become more and more advanced. These technologies, such as streaming services and advanced displays, have enabled us to watch video content on a large variety of devices, from small, battery-powered mobile phones to large TV sets. Streaming of video over the Internet is a technology that is getting increasingly popular. As bandwidth is a limited resource, efficient compression techniques are clearly needed. The wide variety of devices capable of streaming and displaying video suggests a need for scalable video coders, as different devices might support different sets of resolutions and frame rates. As a response to the demands for efficient coding standards, VCEG and MPEG are jointly developing an emerging video compression standard called High Efficiency Video Coding (HEVC). The goal for this standard is to improve the coding efficiency as compared to H.264, without affecting image quality. A scalable video coding extension to HEVC is also planned. HEVC is based on the classic hybrid coding approach. This, however, is not the only way to compress video, and attention is given to wavelet coders in the literature. JPEG 2000 is a wavelet image coder that offers spatial and quality scalability. Combining JPEG 2000 with Motion Compensated Temporal Filtering (MCTF) gives a wavelet video coder which offers temporal, spatial and quality scalability, without the need for complex extensions. In this thesis, a wavelet video coder based on the combination of MCTF and JPEG 2000 was implemented. This coder was compared to HEVC by performing objective and subjective assessments, with the use case being streaming of video over a typical consumer broadband connection. The objective assessment showed that HEVC was the superior system in terms of both PSNR and SSIM. The subjective assessment revealed that observers preferred the distortion produced by HEVC over that of the proposed system. However, the results also indicated that improvements can be made to the proposed system that could enhance its objective and subjective quality. In addition, there were indications that a use case operating at higher bit rates is more suitable for the proposed system.
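The temporal half of such a coder is simple to sketch. Below is a minimal one-level Haar MCTF stage in lifting form, with the motion compensation omitted (a real MCTF coder subtracts and adds along motion-compensated trajectories before the bands are fed to JPEG 2000).

```python
import numpy as np

def haar_mctf_stage(frames):
    """One temporal decomposition level without motion compensation
    (a real MCTF coder aligns pixels along motion trajectories first)."""
    lows, highs = [], []
    for a, b in zip(frames[0::2], frames[1::2]):
        a = a.astype(np.float32); b = b.astype(np.float32)
        highs.append(b - a)              # temporal detail band
        lows.append(a + highs[-1] / 2)   # temporal average, lifting form
    return lows, highs                   # each band is then JPEG 2000 coded

# Dropping 'highs' halves the frame rate: temporal scalability for free.
# The step is invertible (a = low - high/2, b = high + a), so nothing is lost.
```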
8

Ye, Wei. "Development of a Remote Medical Image Browsing and Interaction System." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1278676228.

9

Lucero, Aldo. "Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

10

Silva, Sandreane Poliana. "Comparação entre os métodos de compressão fractal e JPEG 2000 em um sistema de reconhecimento de íris." Universidade Federal de Uberlândia, 2008. https://repositorio.ufu.br/handle/123456789/14385.

Abstract:
We currently live in the digital age, so data and images are manipulated every day. Due to the problems of storage space for images and transmission time, many compression techniques have been developed, and a great challenge is to make these techniques bring good results in terms of compression rate, image quality and processing time. The fractal compression technique developed by Fisher was described, implemented and tested in this work; it brought very good results and a considerable improvement in execution time, which was rather low. Another area that has been gaining prominence is the use of biometric techniques for people recognition. A widely used technique is iris recognition, which has shown considerable reliability. Thus, combining the two technologies brings great benefits. In this work, iris images were compressed by the method implemented here, and simulations of the iris recognition technique developed by Libor Maseck were carried out. The results show that it is possible to compress the images fractally without harming the recognition system. Comparisons were made, and it was possible to see that even with changes in the pixels of the images, the system remains very reliable, bringing advantages in storage space.
11

Agueh, Djidjoho Max. "Protection inégale pour la transmission d'images et vidéo codées en Jpeg 2000 sur les canaux sans fil." Nantes, 2008. http://www.theses.fr/2008NANT2069.

Abstract:
Nowadays, data protection against transmission errors in wireless multimedia systems is a crucial issue. JPEG 2000, the new image representation system, addresses this issue by defining in its 11th part (Wireless JPEG 2000 - JPWL) tools such as Forward Error Correction (FEC) with Reed-Solomon (RS) codes in order to enhance the robustness of JPEG 2000 based images and videos against transmission errors. Although JPWL defines a set of RS codes for data protection, it does not specify how to select those codes in order to handle the error rate in wireless multimedia systems. In our work, based on the analysis of 802.11 based ad-hoc network traces, we first derive application-level channel models (Gilbert model). Then, we propose a methodology for JPEG 2000 image and video protection with a priori, empirical channel code rate selection. We highlight the interest of the a priori FEC allocation methodology by comparing it to non-protected data transmission. We show that the effectiveness of the proposed scheme can be drastically reduced when the channel state changes, because the FEC rate allocation is not adaptive. In this context, we propose a dynamic FEC rate allocation methodology which is a layer-based unequal error protection scheme. We demonstrate its effectiveness with a wireless client/server JPEG 2000 based image and video streaming application. The dynamic FEC rate allocation scheme outperforms by around 10% other existing schemes, such as the layer-based unequal error protection scheme proposed by Zhaohui Guo et al., both in terms of Peak Signal to Noise Ratio (PSNR) and successful image decoding rate. However, despite their effectiveness, layer-based schemes are sub-optimal because they do not take into account the importance of data packets, which limits their performance, particularly in highly varying environments. We then propose an optimal FEC rate allocation methodology which is a packet-based unequal error protection scheme. The proposed scheme is only slightly more complex than layer-based schemes as long as the number of packets constituting the JPEG 2000 image is low (under 1000 packets), and it offers superior performance in terms of PSNR and successful decoding rate. However, if the number of JPEG 2000 packets is significantly larger (more than 1000 packets), the optimal methodology can be inefficient for real-time streaming applications due to its complexity. Beyond the proposed FEC rate allocation methodologies, our work can be viewed as a step toward guaranteeing Quality of Service (QoS) in wireless multimedia applications.
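Both ingredients can be sketched compactly. The parameters and the layer-protection rule below are assumptions for illustration, not the thesis's allocation algorithm, though the RS(n, 32) codes listed are among those defined by JPWL.

```python
import random

def gilbert_losses(n, p_gb, p_bg, loss_bad=0.5, loss_good=0.01):
    """Simulate per-packet losses with a two-state Gilbert channel model."""
    bad, out = False, []
    for _ in range(n):
        # Good -> bad with prob p_gb; bad -> good with prob p_bg.
        bad = (random.random() < p_gb) if not bad else (random.random() >= p_bg)
        out.append(random.random() < (loss_bad if bad else loss_good))
    return out

# A few of the RS(n, 32) codes defined by JPWL, weakest to strongest.
RS_CODES = [(37, 32), (43, 32), (51, 32), (64, 32)]

def uep_code_for_layer(layer, n_layers, loss_rate):
    """Layer-based UEP heuristic (an assumption, not the thesis algorithm):
    lower layers (more important) and worse channels get stronger codes."""
    importance = 1.0 - layer / n_layers          # layer 0 = base quality
    strength = min(int(importance * loss_rate * 20), len(RS_CODES) - 1)
    return RS_CODES[strength]

losses = gilbert_losses(10_000, p_gb=0.05, p_bg=0.4)
rate = sum(losses) / len(losses)
print([uep_code_for_layer(l, 4, rate) for l in range(4)])
```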
12

Preethy, Byju Akshara. "Advanced Methods for Content Based Image Retrieval and Scene Classification in JPEG 2000 Compressed Remote Sensing Image Archives." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/281771.

Abstract:
Recent advances in satellite imaging technologies have paved the way to the RS big data era. Efficient storage, management and utilization of massive amounts of data is one of the major challenges faced by the remote sensing (RS) community. To minimize storage requirements and speed up transmission, RS images are compressed before archiving. Accordingly, developing efficient Content Based Image Retrieval (CBIR) and scene classification techniques to effectively utilize these huge volumes of data is one of the most researched areas in RS. With the continual growth in the volume of compressed RS data, a dominant factor in the development of these techniques is the decompression time these images require. Existing CBIR and scene classification methods in RS require fully decompressed RS images as input, which is a computationally complex and time consuming task. Among the several compression algorithms introduced to RS, JPEG 2000 is the most widely used in operational satellites due to its multiresolution paradigm, scalability and high compression ratio. In light of this, the goal of this thesis is to develop novel methods for image retrieval and scene classification in JPEG 2000 compressed RS image archives. The first contribution of the thesis addresses the possibility of performing CBIR directly on compressed RS images. The aim of the proposed method is to achieve efficient image characterization and retrieval within the JPEG 2000 compressed domain. The proposed progressive image retrieval approach achieves a coarse to fine image description and retrieval in the partially decoded JPEG 2000 compressed domain, and aims to reduce the computational time required by the CBIR system for compressed RS image archives. The second contribution of the thesis concerns the possibility of achieving scene classification for JPEG 2000 compressed RS image archives. Recently, deep learning methods have demonstrated cutting edge improvements in scene classification performance on large-scale RS image archives. In view of this, the proposed method is based on deep learning and aims to achieve maximum scene classification accuracy with minimal decoding. The proposed approximation approach learns the high-level hierarchical image description in a partially decoded domain, thereby avoiding the requirement to fully decode the images from the archive before any scene classification is performed. Quantitative as well as qualitative experimental results demonstrate the efficiency of the proposed methods, which show significant improvements over state-of-the-art methods.
13

Benderli, Oguz. "A Real-time, Low-latency, Fpga Implementation Of The Two Dimensional Discrete Wavelet Transform." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1056282/index.pdf.

Abstract:
This thesis presents an architecture and an FPGA implementation of the two dimensional discrete wavelet transformation (DWT) for applications where row-based raw image data is streamed in at high bandwidths and local buffering of the entire image is not feasible. The architecture is especially suited for multi-spectral imager systems, such as on board an imaging satellite, however can be used in any application where time to next image constraints require real-time processing of multiple images. The latency that is introduced as the images stream through the DWT module, and the amount of locally stored image data, is a function of the image and tile size. For an n1 × n2 size image processed using (n1/k1) × (n2/k2) sized tiles, the latency is equal to the time elapsed to accumulate a (1/k1) portion of one image. In addition, a (2/k1) portion of each image is buffered locally. The proposed hardware has been implemented on an FPGA and is part of a JPEG 2000 compression system designed as a payload for a low earth orbit (LEO) micro-satellite to be launched in September 2003. The architecture can achieve a throughput of up to 160 Mbit/s. The latency introduced is 0.105 s (6.25% of total transmission time) for tile sizes of 256×256. The local storage size required for the tiling operation is 2 MB. The internal storage requirement is 1536 pixels. Equivalent gate count for the design is 292,447.
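Plugging the quoted figures back into the latency relation makes it concrete; the image height n1 = 4096 below is inferred from the stated 6.25% = 1/16 fraction and the 256-row tiles, not stated explicitly in the abstract.

```latex
\text{latency} = \frac{T_{\text{image}}}{k_1}, \qquad
\frac{1}{k_1} = \frac{256}{n_1} = \frac{256}{4096} = 6.25\%, \qquad
0.105\,\text{s} = \frac{T_{\text{image}}}{16} \;\Rightarrow\; T_{\text{image}} \approx 1.68\,\text{s}.
```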
14

Bouchoux, Sophie. "Apport de la reconfiguration dynamique au traitement d'images embarqué : étude de cas : implantation du décodeur entropique de JPEG 2000." Dijon, 2005. http://www.theses.fr/2005DIJOS027.

Abstract:
The appearance on the market of partially and quickly reprogrammable FPGAs led to the development of new techniques, like dynamic reconfiguration. In order to study the improvements of dynamic reconfiguration in comparison with static configuration, an electronic board was developed: the ARDOISE board. This thesis concerns the implementation of the JPEG 2000 algorithm, and particularly of the entropy decoder, on this architecture, and the study of the performance obtained. To carry out a comparison of the results between the two methods, evaluation criteria relating to cost, performance and efficiency were defined. The implementations carried out are: partial dynamic reconfiguration of the arithmetic decoder on ARDOISE, static configuration of the entropy decoder on a Xilinx FPGA, and dynamic reconfiguration of the entropy decoder on ARDOISE.
15

Zeybek, Emre. "Compression multimodale du signal et de l’image en utilisant un seul codeur." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1060/document.

Abstract:
The objective of this thesis is to study and analyze a new compression strategy, whose principle is to compress data from multiple modalities together using a single encoder. This approach is called "Multimodal Compression": an image and an audio signal are compressed together by a single image encoder (e.g. a standard), without the need to integrate an audio codec. The basic idea developed in this thesis is to insert the samples of a signal in place of some pixels of a "carrier" image while preserving the quality of the information after the encoding and decoding process. This technique should not be confused with watermarking or steganography, since Multimodal Compression does not aim to conceal one piece of information within another. The two main objectives of Multimodal Compression are to improve compression performance in terms of rate-distortion and to optimize the use of the hardware resources of a given embedded system (e.g. acceleration of encoding/decoding time). In this report we study and analyze variants of Multimodal Compression whose core consists in designing mixing and separation functions applied before coding and after decoding. The approach is validated on common images and signals as well as on specific data such as biomedical images and signals. This work concludes with an extension of the Multimodal Compression strategy to video.
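A minimal sketch of the mix/separate pair, under simple assumptions (one 8-bit signal sample replaces every stride-th pixel; the thesis studies more elaborate mixing functions and their rate-distortion behaviour):

```python
import numpy as np

def mix(image, signal, stride=16):
    """Replace every stride-th pixel (row-major) with an 8-bit signal sample."""
    carrier = image.astype(np.uint8).ravel()        # flat copy of the image
    slots = carrier[::stride]
    assert signal.size <= slots.size, "carrier image too small for the signal"
    samples = np.clip(np.round(signal * 127 + 128), 0, 255).astype(np.uint8)
    carrier[:samples.size * stride:stride] = samples   # signal in [-1, 1]
    return carrier.reshape(image.shape)             # encode with any image codec

def separate(mixed, n_samples, stride=16):
    """Recover the signal samples; degrades gracefully under lossy coding."""
    flat = mixed.ravel()
    samples = flat[:n_samples * stride:stride].astype(np.float32)
    return (samples - 128) / 127

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
sig = np.sin(np.linspace(0, 8 * np.pi, 200)).astype(np.float32)
rec = separate(mix(img, sig), sig.size)
print(np.abs(rec - sig).max() < 0.01)   # only 8-bit quantization error here
```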
16

Miller, Jessica Barbara [Verfasser]. "Evaluation des Einflusses von Dosis und Schichtdicke auf die verlustbehaftete JPEG- 2000-Kompression in der digitalen Mammographie unter Verwendung von 600 Aufnahmen des CDMAM-Phantoms / Jessica Barbara Miller." Berlin : Medizinische Fakultät Charité - Universitätsmedizin Berlin, 2011. http://d-nb.info/1025239148/34.

17

Yang, Hsueh-szu, and Benjamin Kupferschmidt. "Time Stamp Synchronization in Video Systems." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605988.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Synchronized video is crucial for data acquisition and telecommunication applications. For real-time applications, out-of-sync video may cause jitter, choppiness and latency. For data analysis, it is important to synchronize multiple video channels and data that are acquired from PCM, MIL-STD-1553 and other sources. Nowadays, video codecs can be easily obtained to play most types of video. However, a great deal of effort is still required to develop the synchronization methods that are used in a data acquisition system. This paper will describe several methods that TTC has adopted in our system to improve the synchronization of multiple data sources.
18

Flordal, Oskar. "A study of CABAC hardware acceleration with configurability in multi-standard media processing." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4477.

Abstract:
To achieve greater compression ratios, new video and image CODECs like H.264 and JPEG 2000 take advantage of context adaptive binary arithmetic coding. As it contains computationally heavy algorithms, fast implementations have to be made when they are performed on large amounts of data, such as when compressing high resolution formats like HDTV. This document describes how entropy coding works in general, with a focus on arithmetic coding and CABAC. Furthermore, the document discusses the demands of the different CABACs and proposes different options for hardware and instruction level optimisation. Testing and benchmarking of these implementations are done to ease evaluation. The main contribution of the thesis is parallelising and unifying the CABACs, which is discussed and partly implemented. The result of the ILA is improved program flow through specialised branching operations. The result of the DHA is a two-bit parallel accelerator with hardware sharing between the JPEG 2000 and H.264 encoders, with limited decoding support.
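The "context adaptive" part can be illustrated without building a full coder: each context keeps its own adaptive probability estimate, and the ideal arithmetic-code length shows why well-predicted binary symbols cost far less than one bit each. The count-based estimator below is a simplification, not the state machines used by H.264 or JPEG 2000.

```python
import math
from collections import defaultdict

class ContextModel:
    """Per-context adaptive probability with Laplace (add-one) counts."""
    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])   # context -> [n0, n1]

    def encode_cost(self, ctx, bit):
        n0, n1 = self.counts[ctx]
        p = (n1 if bit else n0) / (n0 + n1)         # estimated P(bit | ctx)
        self.counts[ctx][bit] += 1                  # adapt after each symbol
        return -math.log2(p)                        # ideal arithmetic-code bits

m = ContextModel()
bins = [0] * 95 + [1] * 5                           # a highly skewed bin stream
total = sum(m.encode_cost("sig", b) for b in bins)
print(f"{total:.1f} bits for {len(bins)} bins")     # well under 100 bits
```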

19

Kivci, Erdem Turker. "Development Of A Methodology For Geospatial Image Streaming." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612570/index.pdf.

Abstract:
Serving geospatial data collected from remote sensing methods (satellite images, aerial photos, etc.) has become crucial in many geographic information system (GIS) applications such as disaster management, municipality applications, climatology, environmental observations, military applications, etc. Even in today's highly developed information systems, geospatial image data requires huge amounts of physical storage space, and such characteristics of geospatial image data limit its usage in the above mentioned applications. For this reason, web-based GIS applications can benefit from geospatial image streaming through web-based architectures. Progressive transmission of geospatial image and map data on web-based architectures is implemented with the developed image streaming methodology. The software developed allows user interaction in such a way that users visualize images according to their level of detail. In this way geospatial data is served to users efficiently. The main methods used to transmit geospatial images are serving tiled image pyramids and serving wavelet based compressed bitstreams. Generally, in GIS applications, tiled image pyramids that contain copies of raster datasets at different resolutions are used rather than differences between resolutions. Thus, redundant data is transmitted from the GIS server with different resolutions of a region while using tiled image pyramids. Wavelet based methods decrease redundancy. On the other hand, methods that use wavelet compressed bitstreams require transforming the whole dataset before transmission. A hybrid streaming methodology is developed to decrease the redundancy of tiled image pyramids integrated with wavelets which does not require transforming and encoding the whole dataset. Tile parts' coefficients produced with the methodology are encoded with JPEG 2000, which is an efficient technology to compress images in the wavelet domain.
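The redundancy being removed is easy to quantify: an uncompressed dyadic tile pyramid stores roughly one third extra data over the base image, whereas the wavelet representation is critically sampled. A quick check under the assumption of uncompressed dyadic levels:

```python
def pyramid_overhead(levels):
    """Extra storage of a dyadic image pyramid relative to the base image
    (each level holds a copy at one quarter the previous resolution)."""
    return sum(0.25 ** k for k in range(1, levels + 1))

print(f"{pyramid_overhead(6):.1%}")   # ~33.3%: the redundancy pyramids pay
# A wavelet transform is critically sampled: every resolution is reachable
# from the same N coefficients, which is what the hybrid method exploits.
```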
20

Abot, Julien. "Stratégie de codage conjoint pour la transmission d'images dans un système MIMO." Thesis, Poitiers, 2012. http://www.theses.fr/2012POIT2296/document.

Abstract:
This thesis presents a transmission strategy exploiting spatial diversity for image transmission over a wireless channel. We propose an original approach based on matching the source hierarchy with the hierarchy of the SISO sub-channels resulting from the MIMO channel decomposition. We evaluate the performance of common precoders in the context of this strategy via a realistic physical layer respecting the IEEE 802.11n standard and associated with a transmission channel based on a 3D ray-tracing propagation model. It is shown that common precoders are not well suited for the transmission of hierarchical content. We then propose a precoding algorithm which successively allocates power over the SISO sub-channels in order to maximize the received image quality. The proposed precoder achieves a target BER according to the channel coding, the modulation and the SNR of the SISO sub-channels. Building on this precoding algorithm, we propose a link adaptation scheme to dynamically adjust the system parameters depending on variations of the transmission channel. This solution determines the optimal coding/transmission configuration maximizing the image quality at reception. Finally, we present a study on taking psychovisual constraints into account in the assessment of received image quality. We propose the insertion of a reduced-reference metric based on psychovisual constraints to assist the decoder in determining the decoding configuration providing the highest quality of experience. Subjective tests confirm the interest of the proposed approach.
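The decomposition that produces those SISO sub-channels is the SVD of the channel matrix; the sketch below shows only the decomposition and the hierarchy matching, not the thesis's successive power allocation or target-BER computation.

```python
import numpy as np

rng = np.random.default_rng(0)
# A random 4x4 Rayleigh-fading MIMO channel.
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)        # H = U @ diag(s) @ Vh
# Precoding with V and combining with U^H gives y = diag(s) x + U^H n,
# i.e. independent SISO sub-channels ordered from strongest to weakest.
print("sub-channel gains:", np.round(s, 2))

# Hierarchy matching: JPEG 2000 layer 0 (most important) -> largest gain.
layers = ["layer0", "layer1", "layer2", "layer3"]
for layer, gain in zip(layers, s):
    print(layer, "->", f"gain {gain:.2f}")
```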
21

Mhamdi, Maroua. "Méthodes de transmission d'images optimisées utilisant des techniques de communication numériques avancées pour les systèmes multi-antennes." Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2281/document.

Abstract:
This work is devoted to improving the coding/decoding performance of still image transmission schemes over noisy, realistic channels. For this purpose, we propose the development of optimized image transmission methods focusing on both the application and physical layers of wireless networks. In order to ensure a good quality of service, efficient compression algorithms (JPEG2000 and JPWL) are used at the application layer, enabling the receiver to reconstruct the images with maximum fidelity. Furthermore, to ensure transmission over wireless channels with a minimum BER at reception, advanced transmission, coding and modulation techniques are used at the physical layer (MIMO-OFDM system, adaptive modulation, FEC, etc.). First, we propose a robust transmission system for JPWL-encoded images integrating a joint source-channel decoding scheme based on soft-input decoding techniques. Next, the optimization of an image transmission scheme over a realistic MIMO-OFDM channel is considered. The optimized image transmission strategy is based on soft-input decoding techniques and a link adaptation approach. The proposed transmission scheme offers the possibility of jointly implementing UEP, UPA, adaptive modulation, adaptive source coding and joint decoding strategies, in order to improve the visual quality of the image at reception. Then, we propose a robust transmission system for embedded bit streams based on a concatenated block coding mechanism with iterative turbo decoding, offering an unequal error protection strategy. Thus, the novelty of this study consists in proposing efficient solutions for the global optimization of the wireless communication chain to improve transmission quality.
22

Kaše, David. "Komprese obrazu pomocí vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234996.

Abstract:
This thesis deals with image compression using the wavelet, contourlet and shearlet transforms. It starts with a quick look at the image compression problem and quality measurement. Next, the basic concepts of wavelets, multiresolution analysis and the scaling function are presented, followed by a detailed look at each transform. Representative coefficient-coding algorithms are EZW, SPIHT and, marginally, EBCOT. The second part describes the design and implementation of the constructed library. The last part compares the results of the transforms with the JPEG 2000 format. The comparison determined the types of images for which the implemented contourlet and shearlet transforms were more effective than the wavelet transform; the JPEG 2000 format was not surpassed.
23

Wu, David. "Perceptually Lossless Coding of Medical Images - From Abstraction to Reality." RMIT University. Electrical & Computer Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080617.160025.

Abstract:
This work explores a novel vision model based coding approach to encode medical images at a perceptually lossless quality, within the framework of the JPEG 2000 coding engine. Perceptually lossless encoding offers the best of both worlds, delivering images free of visual distortions and at the same time providing significantly greater compression ratio gains over its information lossless counterparts. This is achieved through a visual pruning function, embedded with an advanced model of the human visual system, to accurately identify and efficiently remove visually irrelevant/insignificant information. In addition, it maintains bit-stream compliance with the JPEG 2000 coding framework and is consequently compliant with the Digital Imaging and Communications in Medicine (DICOM) standard. Equally, the pruning function is applicable to other Discrete Wavelet Transform based image coders, e.g., Set Partitioning in Hierarchical Trees. Further significant coding gains are exploited through an artificial edge segmentation algorithm and a novel arithmetic pruning algorithm. The coding effectiveness and qualitative consistency of the algorithm are evaluated through a double-blind subjective assessment with 31 medical experts, performed using a novel two-staged forced choice assessment devised for medical experts, offering the benefits of greater robustness and accuracy in measuring subjective responses. The assessment showed that no differences of statistical significance were perceivable between the original images and the images encoded by the proposed coder.
24

Urbánek, Pavel. "Komprese obrazu pomocí vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236385.

Abstract:
This thesis is focused on the subject of image compression using the wavelet transform. The first part of this document provides the reader with information about image compression, presents well-known contemporary algorithms and looks into the details of wavelet compression and the subsequent encoding schemes. Both the JPEG and JPEG 2000 standards are introduced. The second part of this document analyzes and describes the implementation of an image compression tool, including innovations and optimizations. The third part is dedicated to comparison and evaluation of the achieved results.
APA, Harvard, Vancouver, ISO, and other styles
27

Bařina, David. "Jádra schématu lifting pro vlnkovou transformaci." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-261233.

Full text
Abstract:
This work focuses on the efficient computation of the two-dimensional discrete wavelet transform. Existing methods are extended in several directions so that the transform is computed in a single pass, possibly over multiple decomposition levels, using a compact kernel. This kernel can further be reorganized to minimize the use of certain resources. The presented approach maps nicely onto common SIMD extensions, exploits the cache hierarchy of modern processors, and is suitable for parallel computation. Finally, the approach is integrated into the JPEG 2000 compression chain, where it proved to be substantially faster than widely used implementations.
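The lifting kernels in question build on the standard lifting factorization. As a reference point, here is a minimal single-level sketch of the reversible CDF 5/3 lifting step used by JPEG 2000 (even-length input and simple symmetric boundary extension assumed); a single-pass multi-level variant, as the thesis pursues, would fuse such steps so each input line is touched only once:

    import numpy as np

    def dwt53_forward(x):
        # One level of the reversible 5/3 lifting transform (JPEG 2000 Part 1).
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2], x[1::2]
        # Predict: d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2)
        even_next = np.append(even[1:], even[-1])   # symmetric extension, right edge
        d = odd - ((even + even_next) >> 1)
        # Update: s[n] = x[2n] + floor((d[n-1] + d[n] + 2) / 4)
        d_prev = np.insert(d[:-1], 0, d[0])         # symmetric extension, left edge
        s = even + ((d_prev + d + 2) >> 2)
        return s, d                                  # low-pass and high-pass halves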
APA, Harvard, Vancouver, ISO, and other styles
28

Savaton, Guillaume. "Méthodologie de conception de composants virtuels comportementaux pour une chaîne de traitement du signal embarquée." Phd thesis, Université de Bretagne Sud, 2002. http://tel.archives-ouvertes.fr/tel-00003048.

Full text
Abstract:
Future generations of Earth-observation satellites must reconcile growing demands for image resolution, precision, and quality with the high cost of on-board data storage and the limited bandwidth of transmission channels. These constraints call for new image compression techniques, among which the JPEG2000 standard is a promising candidate. Faced with the growing complexity of applications and technologies, and with strong integration constraints (small footprint, low power consumption, radiation tolerance, real-time processing), classical design and verification tools and methodologies appear ill-suited to building embedded systems within reasonable time frames. The new approaches under consideration rely on raising the abstraction level of the system specification and on reusing pre-defined, pre-verified hardware components (virtual components, or IP blocks, for Intellectual Property). In this thesis, we are interested in the design of reusable hardware components for applications involving signal and image processing functions. Our work consisted in defining a design methodology for highly flexible virtual components described at the behavioural level and oriented towards high-level synthesis tools. We experimented with our methodology by implementing a two-dimensional wavelet transform algorithm for JPEG2000 image compression as a behavioural virtual component.
APA, Harvard, Vancouver, ISO, and other styles
29

Lipinskas, Saulius. "Vienlusčių sistemų programų specializavimo metodų tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20090304_094817-80068.

Full text
Abstract:
Technologies change quickly, and scientific and technical innovations appear daily, digitization being among the most influential, especially in the field of communications. This work investigates the operating principles of the Digital Visual Interface (DVI) and designs a modification of the interface based on the transmission of compressed images. It deals with the mechanisms for forming, transmitting, and displaying video data, and also examines various non-standard uses of these mechanisms that may not be commercially attractive today but could prove indispensable in specific applications. Block decomposition is used to separate the functions performed, and the hardware and software tools for building the intended system are reviewed. Since a computer and a digital monitor connected by DVI are normally close to each other, the work explores extending this distance arbitrarily far; in other words, transmitting the video over an ordinary Ethernet cable by converting and compressing the DVI video data into IP packets, and it identifies the main problems encountered in doing so. The possibility of moving this function, or part of it, onto a chip is also examined.
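The transport idea, carrying a compressed frame as a sequence of IP-sized packets, can be sketched as follows; the header layout, field sizes, and payload limit are illustrative assumptions, not the thesis's actual design:

    import struct

    MTU_PAYLOAD = 1400  # bytes left for video data after lower-layer headers (assumed)

    def packetize_frame(frame_bytes, frame_id):
        # Split one compressed frame into sequence-numbered packets so the
        # receiver can reorder them and detect loss.
        chunks = [frame_bytes[i:i + MTU_PAYLOAD]
                  for i in range(0, len(frame_bytes), MTU_PAYLOAD)]
        packets = []
        for seq, chunk in enumerate(chunks):
            header = struct.pack("!HHH", frame_id, seq, len(chunks))
            packets.append(header + chunk)
        return packets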
APA, Harvard, Vancouver, ISO, and other styles
30

Brunet, Dominique. "Métriques perceptuelles pour la compression d'images : étude et comparaison des algorithmes JPEG et JPEG2000." Master's thesis, Université Laval, 2007. http://hdl.handle.net/20.500.11794/19752.

Full text
Abstract:
The JPEG and JPEG2000 image compression algorithms are presented and then compared using a perceptual metric. The JPEG algorithm decomposes an image with the discrete cosine transform, approximates the transformed coefficients by uniform quantization, and encodes the result with Huffman coding. JPEG2000 instead uses a wavelet transform that decomposes an image into multiple resolutions. We describe and justify the construction of orthogonal and biorthogonal wavelets having as many as possible of the following properties: real values, compact support, several vanishing moments, regularity, and symmetry. We then briefly explain how JPEG2000 works and show that the RMSE metric is a poor measure of perceptual error. We therefore present some ideas for building a perceptual metric based on a model of the human visual system, describing in particular the SSIM index, and suggest it as a tool to evaluate image quality. Finally, using the SSIM metric, we conclude that JPEG2000 outperforms JPEG.
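Since the comparison hinges on the SSIM index, a simplified single-window version of its formula is sketched below; the standard index averages this quantity over small sliding windows rather than computing it globally, and the constants follow the usual choice for 8-bit images:

    import numpy as np

    def ssim_global(x, y, peak=255.0):
        # Structural similarity computed over the whole image at once; the
        # practical index averages the same formula over local windows.
        x = np.asarray(x, dtype=np.float64)
        y = np.asarray(y, dtype=np.float64)
        c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return (((2 * mx * my + c1) * (2 * cov + c2))
                / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))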
APA, Harvard, Vancouver, ISO, and other styles
31

Brunet, Dominique. "Métriques perceptuelles pour la compression d'images. Étude et comparaison des algorithmes JPEG et JPEG2000." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/25159/25159.pdf.

Full text
Abstract:
The JPEG and JPEG2000 image compression algorithms are presented and then compared using a perceptual metric. The JPEG algorithm decomposes an image with the discrete cosine transform, approximates the transformed coefficients by uniform quantization, and encodes the result with Huffman coding. JPEG2000 instead uses a wavelet transform that decomposes an image into multiple resolutions. We describe and justify the construction of orthogonal and biorthogonal wavelets having as many as possible of the following properties: real values, compact support, several vanishing moments, regularity, and symmetry. We then briefly explain how JPEG2000 works and show that the RMSE metric is a poor measure of perceptual error. We therefore present some ideas for building a perceptual metric based on a model of the human visual system, describing in particular the SSIM index, and suggest it as a tool to evaluate image quality. Finally, using the SSIM metric, we conclude that JPEG2000 outperforms JPEG.
APA, Harvard, Vancouver, ISO, and other styles
32

Geijer, Mia. "Makten över monumenten : restaurering av vasaslott 1850-2000 /." Stockholm : Nordiska museets förlag, 2007. http://www.nordiskamuseet.se/Upload/images/4709.jpg.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lin, Zih-Chen, and 林子辰. "Fast JPEG 2000 Encryptor." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/03990532894756652275.

Full text
Abstract:
Master's degree
Chung Cheng Institute of Technology, National Defense University
Institute of Electronic Engineering
ROC year 96 (2007)
With the progress of information science, networks have become ubiquitous, and image security has grown correspondingly important. Image compression technologies change with each passing day; they not only improve compression efficiency but also provide different characteristics for a variety of applications, so an efficient image encryption method should be developed according to the characteristics of the compression technique itself. JPEG 2000 is an emerging standard for still image compression. It provides various functionalities to solve the problems of different image applications and may well become a most popular image format; JPEG 2000 image encryption has therefore become a hot topic in image security research. One important property of a JPEG 2000 codestream is that any two consecutive bytes in a packet body must lie in the interval [0x0000, 0xFF8F] so that a standard JPEG 2000 decoder can decode the compressed codestream exactly. This is the so-called compatibility requirement of JPEG 2000 and must be respected by any effective JPEG 2000 encryption method. This thesis proposes a fast JPEG 2000 encryptor that uses cryptographic techniques to encrypt most of the JPEG 2000 compressed data and uses hardware to overcome the slow performance of software encryption methods. Experimental results show that the proposed encryptor can encrypt most of the JPEG 2000 compressed data; moreover, the encrypted JPEG 2000 images can still be decoded by standard JPEG 2000 decoders and can be exactly recovered by the proposed decryptor.
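The [0x0000, 0xFF8F] constraint mentioned above is easy to state in code: a packet body is non-compliant exactly when a 0xFF byte is followed by a byte greater than 0x8F, since that pair would look like a marker code to a standard decoder. A minimal check (function name hypothetical):

    def is_codestream_compatible(packet_body):
        # True when no two consecutive bytes form a 16-bit value above 0xFF8F,
        # i.e. no 0xFF byte is followed by a byte greater than 0x8F.
        for a, b in zip(packet_body, packet_body[1:]):
            if a == 0xFF and b > 0x8F:
                return False
        return True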
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Yung-Chen, and 陳詠哲. "Wavelet-Based JPEG 2000 Image Compression." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/66542893741935624689.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

LIN, NEIN-HSIEN, and 林念賢. "A Jpeg-2000-Based Deforming Mesh Streaming." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/25369764370956605840.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Networking and Multimedia
ROC year 95 (2006)
For PCs and even mobile devices, video and image streaming technologies such as H.264 and JPEG/JPEG 2000 are already mature. However, streaming technology for 3D models, so-called mesh data, is still far from practical use. Therefore, in this work we propose a mesh streaming method based on the JPEG 2000 standard and integrate it into an existing multimedia streaming server, so that our method can directly benefit from current image and video streaming technologies. In this method, the mesh data of a 3D model is first converted into a JPEG 2000 image, and then, based on the JPEG 2000 streaming technique, the mesh data can be transmitted over the Internet as a mesh stream. Furthermore, we extend this mesh streaming method to deforming meshes, analogous to the extension from a JPEG 2000 image to a Motion JPEG 2000 video, so that our method can transmit not only a static 3D model but also a 3D animation. To increase usability, the mesh stream can also be inserted into an X3D scene as an extension node of X3D. Moreover, since the method is based on the JPEG 2000 standard, our system is well suited to integration into any existing client-server or peer-to-peer multimedia streaming system.
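The conversion step, turning mesh data into an image a JPEG 2000 codec can carry, is the heart of the scheme. The abstract does not spell out the exact layout, so the sketch below shows only one plausible, geometry-image-style packing of normalized vertex coordinates into a 16-bit three-channel image; every detail of it is an assumption:

    import numpy as np

    def vertices_to_image(vertices, side):
        # Pack normalized (x, y, z) vertex coordinates into an RGB-like image
        # so an image codec can compress and stream the geometry.
        # Assumes len(vertices) <= side * side.
        v = np.asarray(vertices, dtype=np.float64)
        lo, hi = v.min(axis=0), v.max(axis=0)
        norm = (v - lo) / np.where(hi > lo, hi - lo, 1.0)     # map each axis to [0, 1]
        img = np.zeros((side, side, 3), dtype=np.uint16)
        img.reshape(-1, 3)[:len(v)] = np.round(norm * 65535).astype(np.uint16)
        return img, (lo, hi)                                  # range needed to decode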
APA, Harvard, Vancouver, ISO, and other styles
36

Hu, Guang-nan, and 胡光南. "Fast downsizing transcoder for JPEG 2000 images." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/64887184539077129199.

Full text
Abstract:
Master's degree
Nanhua University
Graduate Institute of Information Management
ROC year 96 (2007)
This paper presents a fast downsizing method for JPEG 2000 images. The proposed method downsizes JPEG 2000 images in the frequency domain. Compared with the traditional method, which downsizes JPEG 2000 images in the spatial domain, the proposed method can effectively reduce both the required memory space and the execution time. Experimental results reveal that the proposed frequency-domain downsizing method reduces the average execution time of the spatial-domain method by 6% to 73% when images are downscaled to 1/2^n × 1/2^n of their original size.
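The frequency-domain idea can be sketched with a general wavelet library: to shrink by 1/2^n, keep only the n-th-level approximation (LL) subband and never reconstruct the detail subbands. The sketch below uses PyWavelets purely for illustration; the gain correction assumes its usual sqrt(2)-normalized filters, and none of this is the paper's actual implementation:

    import numpy as np
    import pywt  # PyWavelets

    def downsize_by_subbands(img, n):
        # Downscale to 1/2^n x 1/2^n by keeping only the level-n approximation
        # subband; the detail subbands are simply never touched, which is what
        # saves memory and time compared with spatial-domain resampling.
        coeffs = pywt.wavedec2(np.asarray(img, dtype=np.float64), "bior2.2", level=n)
        ll = coeffs[0]
        return ll / (2.0 ** n)  # undo the analysis gain so the pixel range is kept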
APA, Harvard, Vancouver, ISO, and other styles
37

Lian, Chung-Jr, and 連崇志. "Design and Implementation of Image Coding Systems: JPEG, JPEG 2000 and MPEG-4 VTC." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/77902516298772727288.

Full text
Abstract:
Doctoral degree
National Taiwan University
Graduate Institute of Electrical Engineering
ROC year 91 (2002)
In this dissertation, the hardware architecture design and implementation of image coding systems are presented. The research focuses on three image coding standards: JPEG, JPEG2000, and MPEG-4 Visual Texture Coding (VTC). JPEG is a well-known and mature standard that has been widely used for natural image compression and is especially popular in digital still camera applications. In the first part of this dissertation, we propose a fully pipelined JPEG encoder and decoder for the high-speed image processing requirements of post-PC electronic appliances. The processing power is 50 million samples per second at a 50 MHz working frequency, so the proposed architectures can encode and decode million-pixel digital images at very high speed. Another feature is that both the encoder and decoder are stand-alone, full-function solutions: they can encode or decode a JPEG-compliant file without any aid from an extra processor. JPEG2000 is the latest image coding standard, defined to be a more powerful successor to JPEG. It provides better compression performance, especially at low bit rates, along with various features such as quality and resolution progressiveness, region-of-interest coding, and lossy and lossless coding in a unified framework. The performance of JPEG2000 comes at the cost of higher computational complexity. In the second part of the dissertation, we discuss the challenges and issues in designing a JPEG2000 coding system. Cycle-efficient block encoding and decoding engines, and computation reduction techniques driven by Tier-2 feedback, are proposed for the most critical module, Embedded Block Coding with Optimized Truncation (EBCOT). With the proposed parallel-checking and skipping-based coding schemes, the scanning cycles can be reduced to 40% of a direct bit-by-bit implementation. With the Tier-2 feedback control in lossy coding mode, the execution cycles, and therefore the power consumption, can be lowered to 50% at a compression ratio of about 10. The MPEG-4 Visual Texture Coding (VTC) tool is another compression scheme that adopts a wavelet-based algorithm; in VTC, a zero-tree coding algorithm generates the context symbols for the arithmetic coder. In the third part, the design of the zero-tree coding module is discussed: tree-depth scan with multiple quantization modes is realized, and a dedicated data access scheme is designed for a smooth coding flow. Each chapter first provides a detailed analysis of the algorithms and then proposes efficient hardware architectures that exploit their special characteristics. The proposed dedicated architectures greatly improve processing performance compared with a general processor-based solution; for non-PC consumer applications, they are more competitive solutions for cost-efficient, high-performance requirements.
APA, Harvard, Vancouver, ISO, and other styles
38

Chuang, Ming-Chuan, and 莊銘權. "Suitable for JPEG 2000 Image Encryption/Decryption Scheme." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/85047454584863342890.

Full text
Abstract:
Master's degree
Chung Cheng Institute of Technology, National Defense University
Institute of Electronic Engineering
ROC year 96 (2007)
Image encryption is a very important part of image security. Image compression technologies change with each passing day; they not only improve compression efficiency but also provide a rich set of features for a variety of applications, so an efficient image encryption method should be developed according to the characteristics of the compression technique itself. JPEG 2000 is an emerging standard for still image compression. It provides various functionalities to solve the problems of different image applications and may well become a most popular image format; JPEG 2000 image encryption has therefore become a hot topic in image security. One important property of a JPEG 2000 codestream is that any two consecutive bytes in a packet body must lie in the interval [0x0000, 0xFF8F] so that a standard JPEG 2000 decoder can decode the compressed codestream exactly. This is the so-called compatibility requirement of JPEG 2000 and must be respected by any effective JPEG 2000 encryption method. This thesis proposes a cryptography-based JPEG 2000 image encryption technique that uses a stream cipher to encrypt the JPEG 2000 codestream. To remain compatible with the JPEG 2000 syntax, the proposed technique replaces syntax-non-compliant bytes with syntax-compliant bytes and records the positions of these bytes as deciphering information. The deciphering information is then embedded in the header of the JPEG 2000 codestream, making use of the characteristics of the JPEG 2000 syntax, to facilitate decryption at the decoding side. Experimental results show that the proposed encryption scheme is not only compliant with the JPEG 2000 syntax but can also encrypt all the packets of the JPEG 2000 codestream; that is, it has good compatibility and security. Moreover, because the extra deciphering information generated in the encryption process is very small, the technique leaves the compressed size essentially unchanged. Given these properties, the proposed JPEG 2000 image encryption technique can effectively protect JPEG 2000 images in various applications.
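A toy sketch of the encrypt-then-repair idea follows: after stream-ciphering, any byte pair a decoder would misread as a marker is patched, and its position and original value are kept as deciphering information. The patching rule and data layout here are illustrative assumptions, not the thesis's actual scheme:

    def encrypt_packet_body(body, keystream):
        # XOR stream cipher, then repair marker-like pairs (0xFF followed by a
        # byte > 0x8F) so the result stays decodable by a standard decoder.
        enc = bytearray(b ^ k for b, k in zip(body, keystream))
        fixes = []                      # (position, original ciphertext byte)
        for i in range(1, len(enc)):
            if enc[i - 1] == 0xFF and enc[i] > 0x8F:
                fixes.append((i, enc[i]))
                enc[i] &= 0x7F          # force the pair back below 0xFF90
        return bytes(enc), fixes        # 'fixes' is the deciphering information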
APA, Harvard, Vancouver, ISO, and other styles
39

Lin, Chung-Fu, and 林重甫. "Chip Design and System Integration of JPEG 2000." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/84965757346056105810.

Full text
Abstract:
Doctoral degree
National Chiao Tung University
Department of Electrical and Control Engineering
ROC year 94 (2005)
JPEG 2000 is a new image coding system that delivers superior compression performance and provides many advanced features in scalability, flexibility, and system functionality. Its two key technologies are the Discrete Wavelet Transform (DWT) and Embedded Block Coding with Optimized Truncation (EBCOT). The DWT decomposes the signal into sub-bands carrying both time and frequency information, which facilitates high compression ratios. EBCOT codes each code-block independently and achieves rate-distortion optimization and scalable coding. These attractive features enable many imaging applications, such as the Internet, wireless, security, and digital cinema. In this thesis, we focus on several design challenges of JPEG 2000 implementation: the memory issue, the processing speed and throughput of the DWT design, and the JPEG 2000 coprocessor. First, an implementation of the two-dimensional DWT operating on a whole image uses internal memory to store the intermediate column-processed data, whose size is proportional to the image dimension; in addition, the pipeline stages of the DWT data path prolong the data dependency and increase the memory size. Thus, for a high-speed and memory-efficient design, we explore the trade-off between the critical path and the internal memory size for the lossless 5/3 and lossy 9/7 filters of JPEG 2000. To ease the trade-off between the pipeline stages of the 1-D architecture and the memory requirement of the 2-D implementation, a modified algorithm is proposed for the 1-D and 2-D pipeline architectures. Based on the modified data path of the lifting-based DWT, the proposed architecture achieves a high processing frequency by inserting more pipeline stages without increasing the internal memory size (detailed in Chapter 3). As for integrating the DWT, bit-plane coder (BPC), and arithmetic coder (AC) components into a JPEG 2000 coprocessor, the overall encoding system may suffer performance degradation and need more hardware resources, since different components require different I/O bandwidths and buffers. To decrease the internal memory size and increase the overall throughput, we propose a quad-code-block (QCB) based DWT engine to ease this integration penalty. By changing the output timing of the DWT process, three code-blocks are generated iteratively in every fixed execution time slice, so the DWT and BPC processes reach higher parallelism than with the traditional DWT method; the overall system preserves the performance of the individual components while the internal memory size is reduced (detailed in Chapter 4). To verify the proposed architectures, we implement the DWT design and a JPEG 2000-based image system on an ARM-based platform (the Integrator system). To make the JPEG 2000 coprocessor more applicable, we wrap the design with an AHB (Advanced High-performance Bus) interface; following the bus protocol, the host processor can communicate with the JPEG 2000 coprocessor and complete the JPEG 2000 compression process. The overall system is realized by integrating the ARM processor and the JPEG 2000 coprocessor (detailed in Chapter 5). Finally, a brief conclusion and future work are given in Chapter 6.
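To give a feel for why internal memory scales with the image dimension rather than its area, the back-of-the-envelope sketch below estimates the line-buffer size of a multi-level line-based 2-D DWT; the number of buffered lines per level depends on the lifting kernel depth and is an assumed value here:

    def line_buffer_words(width, levels, lines_per_level=4):
        # Each decomposition level buffers a few image lines at that level's
        # halved width; the total is proportional to the image width, not area.
        total = 0
        for lv in range(levels):
            total += (width >> lv) * lines_per_level
        return total

    # Example: a 5-level transform of a 2048-pixel-wide image needs
    # line_buffer_words(2048, 5) == 15872 words, versus 2048*2048 == 4194304
    # words for a whole-frame buffer.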
APA, Harvard, Vancouver, ISO, and other styles
40

Chuang, Yan-Tse, and 莊彥澤. "Embedded Edge Image within JPEG-2000 Compression System." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/71510167702242108119.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Electrical Engineering
ROC year 89 (2000)
JPEG-2000 is the newest standard for still image compression. In this thesis, we discuss the basic architecture of the JPEG-2000 system, which can be viewed as an evolution of the image compression techniques of recent years. Over the past decades, a different class of image coding schemes, generally referred to as second-generation image coding, has been proposed, built on the observation that edges carry much of the information used to recognize or perceive an image. Starting from this idea, we propose two methods that combine the JPEG-2000 system with second-generation image coding techniques so as to display the edge image first, followed by the parts of the image that do not belong to the edges. These coding schemes allow an image to be perceived from its contours first and could be applied to several conventional application-specific image systems where edges carry critical information, such as pattern recognition and motion estimation.
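A minimal sketch of the first step, deriving an edge region to prioritize, is given below using a plain gradient-magnitude threshold; the thesis's actual edge extraction and the threshold value are not specified here, so treat both as assumptions:

    import numpy as np

    def edge_mask(gray, threshold=64.0):
        # Central-difference gradient magnitude, thresholded into a binary map
        # marking the edge region that would be coded and transmitted first.
        g = np.asarray(gray, dtype=np.float64)
        gx = np.roll(g, -1, axis=1) - np.roll(g, 1, axis=1)
        gy = np.roll(g, -1, axis=0) - np.roll(g, 1, axis=0)
        return np.hypot(gx, gy) > threshold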
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Kuan-Fu, and 陳冠夫. "EFFICIENT ARCHITECTURE DESIGN OF EBCOT FOR JPEG-2000." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/76214593485370109133.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Electrical Engineering
ROC year 89 (2000)
With the rapid growth of the Internet and the use of digital still cameras (DSC), still images are broadly used as a storage and transmission medium. JPEG-2000 is a new still image compression standard; it has better compression performance than the conventional JPEG standard and provides many useful features, so its hardware implementation has become an essential technique for digital still cameras. In this thesis, a high-performance hardware architecture for the EBCOT block coder of JPEG-2000 is proposed. Speedup methods and pipelining are adopted according to the characteristics of the EBCOT block coding algorithm; with this architecture, processing time can be reduced to about 40% of previous work. There are two major parts in this block coder: context formation (CF) and the arithmetic encoder (AE). The main idea for improving the speed of context formation is to skip no-operation samples, achieved with a column-based coding architecture and two speedup methods. In the arithmetic encoder, a four-stage pipeline is used to reduce the clock cycle time; in addition, a look-ahead technique in the probability table lookup allows two identical consecutive contexts to be processed back to back. A prototype chip was implemented to verify the proposed architecture in a CMOS 0.35 µm 1P4M technology. The chip area is 3.67×3.67 mm² and the clock frequency is 50 MHz. It can encode a 4.6-million-pixel image within one second, corresponding to a 2400×1800 image, or a 452×340 video sequence at 30 frames per second; the chip is therefore compliant with the upcoming Motion JPEG-2000 standard.
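The skipping idea exploits the fact that, in a given coding pass, most samples generate no context at all. A simplified sketch of how stripe columns could be tested for skippability from the significance state map follows; the real context-formation rules are considerably more involved, so this is an illustration only:

    import numpy as np

    def skippable_columns(significance, stripe_height=4):
        # A stripe column whose samples and their eight-neighbours are all
        # insignificant produces no context in the significance-propagation
        # pass and can be skipped in a single cycle.
        sig = np.asarray(significance, dtype=bool)
        padded = np.pad(sig, 1, mode="constant")
        active = np.zeros_like(sig)
        for dy in (-1, 0, 1):            # OR the state map with its 8 neighbours
            for dx in (-1, 0, 1):
                active |= padded[1 + dy:1 + dy + sig.shape[0],
                                 1 + dx:1 + dx + sig.shape[1]]
        h, w = sig.shape
        return [(y, x) for y in range(0, h, stripe_height) for x in range(w)
                if not active[y:y + stripe_height, x].any()]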
APA, Harvard, Vancouver, ISO, and other styles
42

Соколова, В. К. "Характеристика стандарту jpeg 2000 для кодування растрових зображень." Thesis, 2020. http://openarchive.nure.ua/handle/document/13953.

Full text
Abstract:
The JPEG 2000 standard allows a choice among the values of numerous parameters for coding raster images, and these substantially affect the size of the codestream and the quality of the reconstructed image. During decoding, the parameter values are read from the codestream headers, and the decoder must ensure correct reconstruction of the original image.
APA, Harvard, Vancouver, ISO, and other styles
43

Hsieh, Hsing-Chin, and 謝幸芝. "Apply JPEG 2000 to Multiple Description Transmission System." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/01596421170934545844.

Full text
Abstract:
Master's degree
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
ROC year 90 (2001)
A novel JPEG 2000-based multiple description algorithm is presented in this thesis. JPEG 2000 outperforms many existing image coding algorithms, but when images are transmitted over a noisy channel and the markers are corrupted, the image cannot be reconstructed. The algorithm therefore partitions the wavelet coefficients into several groups and uses JPEG 2000 to compress each group. Each group is delivered over the noisy channel separately to spread the risk. In this way the algorithm improves the probability that the decoder can reconstruct the image, and it saves bandwidth because the bit-streams do not need to be retransmitted. Since JPEG 2000 is effective for image coding and multiple description coding is useful for image delivery over networks, the algorithm developed in this thesis effectively combines the two techniques, attaining systems with high rate-distortion performance, low bandwidth requirements, and excellent error resilience. Keywords: JPEG 2000, multiple description transmission system, error resilience.
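One common way to form such coefficient groups is a polyphase split, shown in the sketch below; whether the thesis uses exactly this grouping is not stated, so it stands as an illustrative assumption:

    import numpy as np

    def polyphase_descriptions(coeffs):
        # Split a 2-D coefficient array into four polyphase groups; each group
        # is compressed and sent independently, so losing one description still
        # leaves three from which an approximation can be reconstructed.
        c = np.asarray(coeffs)
        return [c[0::2, 0::2], c[0::2, 1::2], c[1::2, 0::2], c[1::2, 1::2]]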
APA, Harvard, Vancouver, ISO, and other styles
44

Tseng, Chien-Ming, and 曾建銘. "Implementation of Wavelet Transform in JPEG 2000 Using FPGA." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/90934348816649845447.

Full text
Abstract:
Master's degree
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
ROC year 90 (2001)
JPEG 2000 is the emerging standard for coding and compressing still images. It is based on the discrete wavelet transform (DWT), in contrast to the existing JPEG standard, which uses the discrete cosine transform (DCT). JPEG 2000 provides numerous advantages over the existing JPEG standard: performance gains include improved compression efficiency at low bit rates and for large images, while new functionalities include progressive transmission, lossy or lossless compression, and region-of-interest (ROI) coding. The standard is designed to serve a number of different markets and applications, such as low-bandwidth transmission of images over the Web, medical imaging, digital photography, e-commerce, and scanner applications. Because it allows a faster implementation of the discrete wavelet transform, fully in-place calculation, and an inverse transform obtained simply by undoing the operations of the forward transform, the lifting scheme was chosen for the JPEG 2000 standard. First, we perform a symmetric extension of the input signals. Then, based on adder-based distributed arithmetic (DA), we use the CSD form to reduce the number of non-zero bits in the 9/7 filter coefficients and find shared terms among the multipliers to reduce hardware. Finally, the proposed architecture is described in Verilog HDL, synthesized, and verified on a Xilinx FPGA.
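For reference, the irreversible 9/7 transform implemented here factors into four lifting steps plus a scaling, following the standard Daubechies-Sweldens factorization. A minimal floating-point sketch, before any CSD or DA hardware mapping, with even-length input and simple edge mirroring assumed:

    import numpy as np

    A, B, G, D = -1.586134342, -0.05298011854, 0.8829110762, 0.4435068522
    K = 1.149604398   # scaling constant of the lifting factorization

    def dwt97_forward(x):
        # One level of the irreversible 9/7 transform as lifting steps.
        x = np.asarray(x, dtype=np.float64)
        s, d = x[0::2].copy(), x[1::2].copy()
        d += A * (s + np.append(s[1:], s[-1]))        # predict 1
        s += B * (np.insert(d[:-1], 0, d[0]) + d)     # update 1
        d += G * (s + np.append(s[1:], s[-1]))        # predict 2
        s += D * (np.insert(d[:-1], 0, d[0]) + d)     # update 2
        return K * s, d / K                            # low-pass, high-pass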
APA, Harvard, Vancouver, ISO, and other styles
45

Hung-Chjeh, Hsin, and 辛鴻杰. "The Design and Implementation of A JPEG 2000 Parser." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/07776127057855019852.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
ROC year 89 (2000)
With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard, JPEG 2000, is currently being developed. It is intended not only to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide features and functionalities that current standards address inefficiently or, in many cases, cannot address at all. In this thesis, we study and discuss the JPEG 2000 image coding standard. We also implement a software parser that contains a JPEG 2000 Part I encoder and can record information from the encoding process; with it, users can observe intermediate results during the embedded coding process. We hope that this system helps people understand JPEG 2000 and, moreover, supports the development of better algorithms for it.
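The parsing side of such a tool starts by walking the codestream's marker segments. A minimal sketch of a main-header scanner is given below; the marker codes are from the standard, while the function shape and the subset of markers handled are illustrative choices:

    import struct

    MARKERS = {0xFF4F: "SOC", 0xFF51: "SIZ", 0xFF52: "COD", 0xFF5C: "QCD",
               0xFF90: "SOT", 0xFF93: "SOD", 0xFFD9: "EOC"}

    def scan_main_header(stream):
        # Walk the marker segments; every marker except SOC/SOD/EOC is followed
        # by a 16-bit segment length that includes the length field itself.
        pos, found = 0, []
        while pos + 2 <= len(stream):
            marker, = struct.unpack_from(">H", stream, pos)
            found.append((pos, MARKERS.get(marker, hex(marker))))
            pos += 2
            if marker in (0xFF4F, 0xFFD9):
                continue            # delimiting markers with no segment body
            if marker == 0xFF93:    # SOD: bit-stream data follows, stop here
                break
            seg_len, = struct.unpack_from(">H", stream, pos)
            pos += seg_len
        return found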
APA, Harvard, Vancouver, ISO, and other styles
46

"JPEG 2000 and parity bit replenishment for remote video browsing." Université catholique de Louvain, 2008. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-09162008-160601/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hung-Chi, Fang. "Design and Implementation of High Performance JPEG 2000 Encoding System." 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-0706200509583700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Shih-Wei, Lin, and 林詩緯. "A Progressive Image Authentication Scheme Base on JPEG 2000 Codec." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/23977291748077565877.

Full text
Abstract:
Master's degree
Chung Cheng Institute of Technology, National Defense University
Institute of Electronic Engineering
ROC year 94 (2005)
Image authentication is a very important part of image security. Image compression technologies change with each passing day; they not only improve compression efficiency but also provide a rich set of features for a variety of applications, so an efficient image authentication method should be developed according to the characteristics of the compression technique itself. JPEG 2000 is an emerging standard for still image compression and is becoming the solution of choice in many digital imaging fields and applications. One important aspect of JPEG 2000 is its scalability: a decoder can extract a sub-image from a single compressed codestream containing the essential image data. This thesis therefore proposes a progressive image authentication scheme based on the JPEG 2000 codec. Recognizing that security is an important concern in many applications, the Joint Photographic Experts Group (JPEG) is conducting an ongoing activity known as Secure JPEG 2000, or JPSEC, whose goal is to extend the JPEG 2000 standard with a standardized framework for secure imaging. Following this development trend, the thesis also proposes a second progressive image authentication scheme integrated into the JPSEC syntax for backward compatibility with the scalable JPEG 2000 codestream. The proposed schemes use packets as the authentication units; because packets are also the basic units of the JPEG 2000 codestream, the proposed techniques fully support the different progression orders of JPEG 2000 image coding. Experimental results show that the proposed schemes can not only authenticate extracted sub-codestreams progressively but also localize tampered regions of the image. The proposed schemes are therefore practical for various applications of progressive JPEG 2000 image authentication.
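Using packets as authentication units suggests one tag per packet, so that any prefix of a progressive codestream remains verifiable and a failing tag localizes tampering. A minimal sketch with HMAC-SHA-256; the thesis's actual primitives and tag placement are not specified here:

    import hmac, hashlib

    def packet_tags(packets, key):
        # One authentication tag per JPEG 2000 packet.
        return [hmac.new(key, p, hashlib.sha256).digest() for p in packets]

    def verify_packets(packets, tags, key):
        # Per-packet verification: a False entry localizes the tampered packet.
        return [hmac.compare_digest(t, hmac.new(key, p, hashlib.sha256).digest())
                for p, t in zip(packets, tags)]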
APA, Harvard, Vancouver, ISO, and other styles
49

Wu, Ming-Chu, and 吳名珠. "Detection and Concealment of Transmission Errors in JPEG-2000 images." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/88870651632598196056.

Full text
Abstract:
Master's degree
National Chung Cheng University
Institute of Computer Science and Information Engineering
ROC year 88 (1999)
In this study, an approach to the detection and concealment of transmission errors in JPEG-2000 images is proposed. For entropy-coded JPEG-2000 images, a transmission error in a codeword (subband sample) may cause the underlying codeword (subband sample) and its subsequent codewords (subband samples) to be misinterpreted, resulting in great degradation of the received image. Here a transmission error may be a single-bit error or a burst error containing N successive error bits. The objective of the proposed approach is to detect and conceal transmission errors in JPEG-2000 images, i.e., to recover high-quality JPEG-2000 images from their corrupted counterparts. The approach assumes that the synchronization capability of JPEG-2000 is enabled, i.e., the 256 unique Resync markers are inserted into the bitstream periodically so that the synchronization problem can be solved by the proposed Resync marker regulation procedure. The difference between the variance of a received block and that of the corresponding original block is used to determine whether the received block is corrupted. In the proposed error concealment approach, linear prediction and crossband information are employed. Subband samples in corrupted blocks on levels 3-5 are simply replaced by zeros. Subband samples in corrupted blocks on levels 1-2 are concealed by the error concealment algorithms developed in [77], with the samples from LL0 set to the 16 mean values of the subband samples of the 16 subblocks of LL0. A corrupted LL0 subband is first concealed by replacing its samples with those 16 subblock means; adaptive linear prediction and crossband information from all correctly received subbands on levels 1 and 2 are then used to improve the result, and finally a 3×3 sliding window further refines it. Based on the simulation results obtained in this study, the proposed approach can recover JPEG-2000 images from their corrupted versions, which shows its feasibility.
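The detection rule, comparing a received block's variance against the original's, can be sketched in a few lines; the block size and tolerance below are assumed values, and how the reference variances reach the decoder is left aside:

    import numpy as np

    def corrupted_blocks(received, reference_vars, block=8, tol=0.5):
        # Flag a block as corrupted when its sample variance deviates from the
        # corresponding original block's variance by more than a tolerance.
        r = np.asarray(received, dtype=np.float64)
        flagged = []
        for by in range(0, r.shape[0], block):
            for bx in range(0, r.shape[1], block):
                v = r[by:by + block, bx:bx + block].var()
                ref = reference_vars[by // block][bx // block]
                if abs(v - ref) > tol * max(ref, 1e-9):
                    flagged.append((by, bx))
        return flagged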
APA, Harvard, Vancouver, ISO, and other styles
50

Fang, Hung-Chi, and 方弘吉. "Design and Implementation of High Performance JPEG 2000 Encoding System." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/45562301877274091679.

Full text
Abstract:
Doctoral degree
National Taiwan University
Graduate Institute of Electronics Engineering
ROC year 93 (2004)
JPEG 2000 is a new still image coding standard that offers not only better coding efficiency but also abundant useful features. However, its high computational complexity and memory requirements have obstructed its entry into the market. In this dissertation, we propose a high-performance JPEG 2000 encoding system to solve this problem. For the embedded block coding, we propose a parallel architecture that increases throughput by processing one coefficient at a time, eliminating the state-variable memory and the code-block memory. This greatly reduces the hardware cost, since these memories occupy more than 80% of the area of the embedded block coding engine in conventional architectures; moreover, the processing speed is increased by more than six times compared with the best result in the literature. The proposed parallel architecture thus delivers high speed at low cost. Rate-distortion optimization is an important function of JPEG 2000, but the post-compression rate-distortion optimization algorithm recommended in the reference software requires the original image to be losslessly coded regardless of the target bit rate, wasting computation and time on unnecessary data and requiring a large memory to buffer the lossless bit stream. To solve this problem, we propose a pre-compression rate-distortion optimization algorithm that performs the rate-distortion optimization before the embedded block coding, so the embedded block coding only processes the necessary data. This greatly reduces the processing time and computational power of the embedded block coding, and no bit-stream buffer is needed; the algorithm therefore offers low-power, high-speed, and low-cost operation. Based on these two techniques, a high-speed parallel JPEG 2000 encoder chip was implemented that can encode HDTV 720p video in real time. For the discrete wavelet transform, we adopt a multi-level line-based 2-D architecture, minimizing the memory bandwidth requirement of the chip: each pixel is read once and only once. The chip is fabricated in TSMC 0.25 µm CMOS technology with a core area of 5.5 mm², and the power consumption is 348 mW at 81 MHz. This encoder has the highest throughput on the smallest silicon area among all encoders in the literature. Finally, we propose a stripe pipeline scheme for large tile sizes. With this scheme, the on-chip memory requirement of a JPEG 2000 encoder is proportional to the square root of the tile size, whereas it is proportional to the tile size in previous works; for a tile size of 256×256, the tile memory requirement is reduced to only 8.5% of previous works. To realize the stripe pipeline scheme, a level-switch discrete wavelet transform and a code-block-switch embedded block coding are proposed: the level-switch DWT is a multi-level block-based scan architecture, and the code-block-switch embedded block coding can process 13 code-blocks in parallel. As a result, the hardware cost of this pipeline architecture is about 30% of the parallel encoder for a 256×256 tile, and the area saving increases with the tile size. With the algorithms and architectures proposed in this dissertation, the cost of a JPEG 2000 encoder can be reduced to only a few times that of a JPEG encoder, while all the features and functionality of JPEG 2000 are retained. Therefore, we believe that JPEG 2000 will start to take the place of JPEG as the core technology of still image coding systems in the near future.
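To make the rate-distortion machinery concrete, the sketch below shows the classic post-compression truncation-point selection in miniature: each code-block's candidate (rate, distortion) points are thresholded by a global distortion-rate slope, found by bisection against the byte budget. It is a simplification (real PCRD restricts candidates to the convex hull of each block's rate-distortion curve), and all names are illustrative:

    def select_truncation_points(blocks, target_bytes):
        # blocks: per code-block lists of cumulative (rate, distortion) points,
        # with rate increasing and distortion decreasing along each list.
        def allocate(lmbda):
            total, choice = 0, []
            for points in blocks:
                best = points[0]
                prev_r, prev_d = points[0]
                for r, d in points[1:]:
                    slope = (prev_d - d) / max(r - prev_r, 1e-9)
                    if slope >= lmbda:   # this increment pays off at threshold
                        best = (r, d)
                    prev_r, prev_d = r, d
                choice.append(best)
                total += best[0]
            return total, choice

        lo, hi = 0.0, 1e12               # bisect the slope threshold
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if allocate(mid)[0] > target_bytes:
                lo = mid                 # over budget: demand steeper slopes
            else:
                hi = mid
        return allocate(hi)[1]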
APA, Harvard, Vancouver, ISO, and other styles
