Dissertations / Theses on the topic 'JPEG 2000'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'JPEG 2000.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Nguyen, Anthony Ngoc. "Importance Prioritised Image Coding in JPEG 2000." Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16005/1/Anthony_Nguyen_Thesis.pdf.
Nguyen, Anthony Ngoc. "Importance Prioritised Image Coding in JPEG 2000." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16005/.
Oh, Han, Ali Bilgin, and Michael Marcellin. "Visually Lossless JPEG 2000 for Remote Image Browsing." MDPI AG, 2016. http://hdl.handle.net/10150/621987.
Tovslid, Magnus Jeffs. "JPEG 2000 Quality Scalability in an IP Networking Scenario." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18465.
Aouadi, Imed. "Optimisation de JPEG 2000 sur système sur puce programmable." Paris 11, 2005. https://pastel.archives-ouvertes.fr/pastel-00001658.
Recently the field of video, image and audio processing has seen significant progress at both the algorithm and architecture levels. One of these evolutions is the emergence of the new ISO/IEC JPEG2000 image compression standard, which succeeds JPEG. This new standard offers many functionalities and features that allow it to be adapted to a large spectrum of applications. However, these features bring new algorithmic complexity well beyond that of JPEG, which in turn makes it very difficult to optimize certain implementations under hard constraints. Those constraints may concern area, timing or power, or more likely all of them. One of the key steps of JPEG2000 processing is entropy coding, which takes about 70% of the total execution time when compressing an image. It is therefore essential to analyze the potential for optimizing JPEG2000 implementations. FPGA devices are currently the main reconfigurable circuits available on the market. Although for a long time they were used only for ASIC prototyping, today they can provide an effective solution for the hardware implementation of applications in many fields. Considering the progress made by the FPGA industry in integration capacity and operating frequency, reconfigurable architectures are now an effective and competitive option both for prototyping and for final hardware implementations. In this work we propose a methodology for studying the implementation possibilities of JPEG2000. This study starts with the evaluation of software implementations on commercial platforms
Taylor, James Cary, Jacklynn Hall, and Tony Yuan. "Dean's Innovation Challenge: Researching the JPEG 2000 Image Decoder." Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/244833.
Park, Min Jee, Jae Taeg Yu, Myung Han Hyun, and Sung Woong Ra. "A Development of Real Time Video Compression Module Based on Embedded Motion JPEG 2000." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596452.
In this paper, we develop a miniaturized real-time video compression module (VCM) based on embedded Motion JPEG 2000, using the ADV212 and an FPGA. For an optimal hardware design, we consider the layout of components, the values of damping resistors, and the lengths of the pattern lines. For the software design, we consider compression steps to monitor the status of the system and make the system robust. The developed VCM weighs approximately 4 times less than the previous development. Furthermore, experimental results show that the PSNR is increased by about 3 dB and the compression processing time is approximately 2 times faster than in the previous development.
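The PSNR figure quoted above can be made concrete: PSNR is derived from the mean squared error against the peak sample value, and a 3 dB gain corresponds to roughly halving the MSE. A minimal sketch (the function name and the 8-bit peak value are our assumptions, not from the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 16 grey levels gives an MSE of 256:
a = np.zeros((4, 4))
b = np.full((4, 4), 16.0)
print(round(psnr(a, b), 2))  # → 24.05
```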
Erlid, Frøy Brede Tureson. "MCTF and JPEG 2000 Based Wavelet Video Coding Compared to the Future HEVC Standard." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18822.
Ye, Wei. "Development of a Remote Medical Image Browsing and Interaction System." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1278676228.
Lucero, Aldo. "Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.
Silva, Sandreane Poliana. "Comparação entre os métodos de compressão fractal e JPEG 2000 em um sistema de reconhecimento de íris." Universidade Federal de Uberlândia, 2008. https://repositorio.ufu.br/handle/123456789/14385.
We currently live in the digital era, and so data and images are manipulated every day. Because of the problems of storage space for these images and of transmission time, several compression techniques have been developed, and a great challenge is to make these techniques deliver good results in terms of compression ratio, image quality and processing time. The fractal compression technique developed by Fisher was described, implemented and tested in this work; it brought very good results and a considerable improvement in execution time, which was greatly reduced. Another area that has been gaining prominence is the use of biometric techniques for person recognition. A widely used technique is iris recognition, which has shown considerable reliability. Thus, combining the two technologies brings great benefits. In the present work, iris images were compressed by the method implemented here, and simulations of the iris recognition technique developed by Maseck were carried out. The results show that it is possible to compress the images fractally without harming the recognition system. Comparisons were made, and it was possible to see that even with changes in the pixels of the images, the system remains quite reliable, bringing advantages in storage space.
Master of Science
Agueh, Djidjoho Max. "Protection inégale pour la transmission d'images et vidéo codées en Jpeg 2000 sur les canaux sans fil." Nantes, 2008. http://www.theses.fr/2008NANT2069.
Nowadays, data protection against transmission errors in wireless multimedia systems is a crucial issue. JPEG 2000, the new image representation system, addresses this issue by defining in its 11th part (Wireless JPEG 2000 - JPWL) tools such as Forward Error Correction (FEC) with Reed-Solomon (RS) codes, in order to enhance the robustness of JPEG 2000 based images and videos against transmission errors. Although JPWL defines a set of RS codes for data protection, it does not specify how to select those codes in order to handle the error rate in wireless multimedia systems. In our work, based on the analysis of 802.11 based ad-hoc network traces, we first derive application-level channel models (Gilbert model). Then, we propose a methodology for protecting JPEG 2000 images and videos with a priori and empirical channel code rate selection. We highlight the interest of the a priori FEC allocation methodology by comparing it to non-protected data transmission. We show that the effectiveness of the proposed scheme can be drastically reduced when the channel state changes, because the FEC rate allocation is not adaptive. In this context, we propose a dynamic FEC rate allocation methodology, a layer-based unequal error protection scheme. We demonstrate the effectiveness of this scheme with a wireless client/server JPEG 2000 based image and video streaming application. The dynamic FEC rate allocation scheme outperforms other existing schemes, such as the layer-based unequal error protection scheme proposed by Zhaohoui Guo et al., by on the order of 10%, both in terms of Peak Signal to Noise Ratio (PSNR) and successful image decoding rate. However, despite their effectiveness, layer-based schemes are sub-optimal because they do not take into account the importance of individual data packets, which limits their performance, particularly in highly varying environments. We then propose an optimal FEC rate allocation methodology, a packet-based unequal error protection scheme.
The proposed scheme is only slightly more complex than layer-based schemes as long as the number of packets constituting the JPEG 2000 image is low (under 1000 packets), and it offers superior performance in terms of PSNR and successful decoding rate. However, if the number of JPEG 2000 packets is significantly larger (more than 1000 packets), the optimal methodology can be inefficient for real-time streaming applications due to its complexity. Beyond the proposed FEC rate allocation methodologies, our work can be viewed as a step toward guaranteeing Quality of Service (QoS) in wireless multimedia applications
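The Gilbert model mentioned above is a two-state Markov chain: a "good" state with few or no bit errors and a "bad" state producing error bursts. A toy simulation of such a channel (all parameter values and names here are illustrative, not taken from the thesis):

```python
import random

def gilbert_errors(n_bits, p_g2b, p_b2g, p_err_bad, p_err_good=0.0, seed=1):
    """Simulate a two-state Gilbert channel and return a 0/1 error mask:
    a 'good' state with error rate p_err_good and a 'bad' state with
    error rate p_err_bad, with the given transition probabilities."""
    rng = random.Random(seed)
    state_bad = False
    mask = []
    for _ in range(n_bits):
        # state transition first, then draw an error for this bit
        if state_bad:
            if rng.random() < p_b2g:
                state_bad = False
        elif rng.random() < p_g2b:
            state_bad = True
        p_err = p_err_bad if state_bad else p_err_good
        mask.append(1 if rng.random() < p_err else 0)
    return mask

mask = gilbert_errors(10_000, p_g2b=0.01, p_b2g=0.3, p_err_bad=0.5)
print(sum(mask) / len(mask))  # overall error rate; errors arrive in bursts
```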
Preethy, Byju Akshara. "Advanced Methods for Content Based Image Retrieval and Scene Classification in JPEG 2000 Compressed Remote Sensing Image Archives." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/281771.
Benderli, Oguz. "A Real-time, Low-latency, Fpga Implementation Of The Two Dimensional Discrete Wavelet Transform." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1056282/index.pdf.
For an n1 × n2 size image processed using (n1/k1) × (n2/k2) sized tiles, the latency is equal to the time elapsed to accumulate a (1/k1) portion of one image. In addition, a (2/k1) portion of each image is buffered locally. The proposed hardware has been implemented on an FPGA and is part of a JPEG 2000 compression system designed as a payload for a low earth orbit (LEO) micro-satellite to be launched in September 2003. The architecture can achieve a throughput of up to 160 Mbit/s. The latency introduced is 0.105 s (6.25% of total transmission time) for tile sizes of 256 × 256. The local storage size required for the tiling operation is 2 MB. The internal storage requirement is 1536 pixels. The equivalent gate count for the design is 292,447.
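The latency figures above pass a simple back-of-the-envelope check: with tiles spanning 1/k1 of the image rows, encoding can start once 1/k1 of the frame has been accumulated, and the quoted 6.25% is exactly 1/16 (reading k1 = 16 is our inference from the quoted numbers, not stated in the abstract):

```python
def tile_latency_fraction(k1: int) -> float:
    """Latency as a fraction of the frame time when encoding can start
    after 1/k1 of the image rows have been accumulated."""
    return 1.0 / k1

assert tile_latency_fraction(16) == 0.0625          # the quoted 6.25%
# A 0.105 s latency at 6.25% then implies a total frame transmission time of:
print(round(0.105 / tile_latency_fraction(16), 2))  # → 1.68
```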
Bouchoux, Sophie. "Apport de la reconfiguration dynamique au traitement d'images embarqué : étude de cas : implantation du décodeur entropique de JPEG 2000." Dijon, 2005. http://www.theses.fr/2005DIJOS027.
The appearance on the market of partially and quickly reprogrammable FPGAs has led to the development of new techniques, such as dynamic reconfiguration. In order to study the improvement brought by dynamic reconfiguration in comparison with static configuration, an electronic board was developed: the ARDOISE board. This thesis concerns the implementation of the JPEG 2000 algorithm, and particularly of the entropy decoder, on this architecture, and the study of the performance obtained. To compare the results of the two methods, evaluation criteria relating to cost, performance and efficiency were defined. The implementations carried out are: implementation in partial dynamic reconfiguration of the arithmetic decoder on ARDOISE, implementation in static configuration of the entropy decoder on a Xilinx FPGA, and implementation in dynamic reconfiguration of the entropy decoder on ARDOISE
Zeybek, Emre. "Compression multimodale du signal et de l’image en utilisant un seul codeur." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1060/document.
The objective of this thesis is to study and analyze a new compression strategy whose principle is to compress data from multiple modalities together using a single encoder. This approach is called "Multimodal Compression": an image and an audio signal are compressed together by a single image encoder (e.g. a standard one), without the need to integrate an audio codec. The basic idea developed in this thesis is to insert samples of a signal in place of some pixels of the "carrier" image, while preserving the quality of the information after encoding and decoding. This technique should not be confused with techniques like watermarking or steganography, since Multimodal Compression does not conceal one kind of information within another. The two main objectives of Multimodal Compression are to improve compression performance in terms of rate-distortion and to optimize the use of the hardware resources of a given embedded system (e.g. acceleration of encoding/decoding time). In this work we study and analyze variants of Multimodal Compression whose core functions are the mixing of the modalities prior to coding and their separation after decoding. The approach is validated on images and common signals as well as on specific data such as biomedical images and signals. The work concludes with a discussion of extending the Multimodal Compression strategy to video
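The mixing/separation bookkeeping described above can be sketched in a few lines. Note that this toy version deliberately leaves out the hard part, namely surviving a lossy JPEG 2000 encode/decode cycle; the insertion stride and all names are our own choices, not the thesis's scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # carrier image
audio = rng.integers(0, 256, size=16, dtype=np.uint8)       # 8-bit audio samples

# Mixing: replace every 4th pixel (in raster order) with an audio sample.
mixed = image.flatten().copy()
slots = np.arange(0, mixed.size, 4)[: audio.size]
mixed[slots] = audio
mixed = mixed.reshape(image.shape)

# Separation: the decoder, knowing the insertion pattern, extracts the samples.
recovered = mixed.flatten()[slots]
assert np.array_equal(recovered, audio)
```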
Miller, Jessica Barbara [Verfasser]. "Evaluation des Einflusses von Dosis und Schichtdicke auf die verlustbehaftete JPEG- 2000-Kompression in der digitalen Mammographie unter Verwendung von 600 Aufnahmen des CDMAM-Phantoms / Jessica Barbara Miller." Berlin : Medizinische Fakultät Charité - Universitätsmedizin Berlin, 2011. http://d-nb.info/1025239148/34.
Yang, Hsueh-szu, and Benjamin Kupferschmidt. "Time Stamp Synchronization in Video Systems." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605988.
Synchronized video is crucial for data acquisition and telecommunication applications. For real-time applications, out-of-sync video may cause jitter, choppiness and latency. For data analysis, it is important to synchronize multiple video channels and data that are acquired from PCM, MIL-STD-1553 and other sources. Nowadays, video codecs can be easily obtained to play most types of video. However, a great deal of effort is still required to develop the synchronization methods that are used in a data acquisition system. This paper will describe several methods that TTC has adopted in our system to improve the synchronization of multiple data sources.
Flordal, Oskar. "A study of CABAC hardware acceleration with configurability in multi-standard media processing." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4477.
To achieve greater compression ratios, new video and image codecs like H.264 and JPEG 2000 take advantage of context-adaptive binary arithmetic coding (CABAC). As this involves computationally heavy algorithms, fast implementations have to be made when they are applied to large amounts of data, such as when compressing high-resolution formats like HDTV. This document describes how entropy coding works in general, with a focus on arithmetic coding and CABAC. Furthermore, the document discusses the demands of the different CABACs and proposes different options for hardware and instruction-level optimisation. Testing and benchmarking of these implementations are done to ease evaluation. The main contribution of the thesis is parallelising and unifying the CABACs, which is discussed and partly implemented. The result of the ILA is improved program flow through specialised branching operations. The result of the DHA is a two-bit parallel accelerator with hardware sharing between the JPEG 2000 and H.264 encoders, with limited decoding support.
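At the heart of CABAC is a per-context adaptive estimate of the probability of the next bit, which drives the arithmetic coder. The real CABAC uses a finite-state machine of probability states; the sketch below substitutes a simple Laplace-smoothed counter to show only the adaptation idea (the class and context names are invented):

```python
from collections import defaultdict

class AdaptiveBinaryModel:
    """Per-context estimate of P(next bit = 1), updated after every coded
    bit. A Laplace-smoothed counter: far simpler than the CABAC state
    machine, but playing the same role of context-adaptive modelling."""

    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])  # context -> [zeros, ones]

    def p_one(self, ctx):
        zeros, ones = self.counts[ctx]
        return ones / (zeros + ones)

    def update(self, ctx, bit):
        self.counts[ctx][bit] += 1

model = AdaptiveBinaryModel()
for bit in [1, 1, 0, 1, 1, 1, 0, 1]:   # a biased source seen in one context
    model.update("ctx0", bit)
print(round(model.p_one("ctx0"), 2))   # → 0.7
```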
Kivci, Erdem Turker. "Development Of A Methodology For Geospatial Image Streaming." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612570/index.pdf.
In today's highly developed information systems, geospatial image data requires huge amounts of physical storage space, and such characteristics of geospatial image data limit its usage in the above-mentioned applications. For this reason, web-based GIS applications can benefit from geospatial image streaming through web-based architectures. Progressive transmission of geospatial image and map data on web-based architectures is implemented with the developed image streaming methodology. The software developed allows user interaction in such a way that users visualize the images according to their level of detail. In this way geospatial data is served to the users efficiently. The main methods used to transmit geospatial images are serving tiled image pyramids and serving wavelet-based compressed bitstreams. Generally, in GIS applications, tiled image pyramids that contain copies of raster datasets at different resolutions are used, rather than differences between resolutions. Thus, redundant data is transmitted from the GIS server with different resolutions of a region when using tiled image pyramids. Wavelet-based methods decrease this redundancy. On the other hand, methods that use wavelet-compressed bitstreams require transforming the whole dataset before transmission. A hybrid streaming methodology, integrating tiled image pyramids with wavelets, is developed to decrease the redundancy of tiled image pyramids without requiring the whole dataset to be transformed and encoded. Tile parts' coefficients produced with the methodology are encoded with JPEG 2000, which is an efficient technology for compressing images in the wavelet domain.
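The redundancy argument above is easy to quantify: a tiled pyramid stores full copies at successively halved resolutions (a geometric series approaching 4/3 of the original sample count), while a critically sampled wavelet decomposition stores exactly one coefficient per original pixel. A small sketch (function names are ours):

```python
def pyramid_samples(width, height, levels):
    """Total samples stored by a tiled image pyramid: a full copy of the
    raster at each successively halved resolution."""
    total = 0
    for _ in range(levels):
        total += width * height
        width, height = max(1, width // 2), max(1, height // 2)
    return total

def wavelet_samples(width, height, levels):
    """A critically sampled wavelet decomposition is non-redundant: the
    subbands of all levels add up to the original sample count."""
    return width * height

w = h = 1024
print(pyramid_samples(w, h, 4) / wavelet_samples(w, h, 4))  # → 1.328125
```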
Abot, Julien. "Stratégie de codage conjoint pour la transmission d'images dans un système MIMO." Thesis, Poitiers, 2012. http://www.theses.fr/2012POIT2296/document.
This thesis presents a transmission strategy for exploiting spatial diversity for image transmission over a wireless channel. We propose an original approach based on matching the source hierarchy to the hierarchy of the SISO sub-channels resulting from the MIMO channel decomposition. We evaluate the performance of common precoders in the context of this strategy via a realistic physical layer respecting the IEEE 802.11n standard, associated with a transmission channel based on a 3D ray-tracing propagation model. It is shown that common precoders are not adapted to the transmission of hierarchical content. We then propose a precoding algorithm which successively allocates power over the SISO subchannels in order to maximize the quality of the received images. The proposed precoder achieves a target BER according to the channel coding, the modulation and the SNR of the SISO subchannels. From this precoding algorithm, we derive a link adaptation scheme that dynamically adjusts the system parameters depending on the variations of the transmission channel. This solution determines the optimal coding/transmission configuration maximizing the image quality at reception. Finally, we present a study on taking psychovisual constraints into account when assessing the quality of received images. We propose inserting a reduced-reference metric based on psychovisual constraints to assist the decoder in determining the decoding configuration that provides the highest quality of experience. Subjective tests confirm the interest of the proposed approach
Mhamdi, Maroua. "Méthodes de transmission d'images optimisées utilisant des techniques de communication numériques avancées pour les systèmes multi-antennes." Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2281/document.
This work is devoted to improving the coding/decoding performance of a transmission scheme over noisy, realistic channels. For this purpose, we propose the development of optimized image transmission methods focusing on both the application and physical layers of wireless networks. In order to ensure a better quality of service, efficient compression algorithms (JPEG2000 and JPWL) are used at the application layer, enabling the receiver to reconstruct the images with maximum fidelity. Furthermore, to ensure transmission over wireless channels with a minimum BER at reception, transmission, coding and advanced modulation techniques are used at the physical layer (MIMO-OFDM system, adaptive modulation, FEC, etc.). First, we propose a robust transmission system for JPWL-encoded images integrating a joint source-channel decoding scheme based on soft-input decoding techniques. Next, the optimization of an image transmission scheme on a realistic MIMO-OFDM channel is considered. The optimized image transmission strategy is based on soft-input decoding techniques and a link adaptation approach. The proposed transmission scheme offers the possibility of jointly implementing UEP, UPA, adaptive modulation, adaptive source coding and joint decoding strategies, in order to improve the visual quality of the image at reception. Then, we propose a robust transmission system for embedded bit streams based on a concatenated block coding mechanism offering an unequal error protection strategy. The novelty of this study thus consists in proposing efficient solutions for the global optimization of the wireless communication system to improve transmission quality
Zeybek, Emre. "Compression multimodale du signal et de l'image en utilisant un seul codeur." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00665757.
Kaše, David. "Komprese obrazu pomocí vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234996.
Wu, David. "Perceptually Lossless Coding of Medical Images - From Abstraction to Reality." RMIT University. Electrical & Computer Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080617.160025.
Urbánek, Pavel. "Komprese obrazu pomocí vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236385.
Bařina, David. "Jádra schématu lifting pro vlnkovou transformaci." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-261233.
Savaton, Guillaume. "Méthodologie de conception de composants virtuels comportementaux pour une chaîne de traitement du signal embarquée." Phd thesis, Université de Bretagne Sud, 2002. http://tel.archives-ouvertes.fr/tel-00003048.
Lipinskas, Saulius. "Vienlusčių sistemų programų specializavimo metodų tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20090304_094817-80068.
The study examines how a digital video data transfer system works. Its scope covers the DVI interface and similar systems, as well as the design of compressed video data transfer mechanisms. The topics touched upon are quite varied and not especially widespread, but in some cases they can become a very useful solution for non-standard applications. The study mainly concerns the Digital Visual Interface (DVI). Because a computer and a monitor connected via DVI must be very close to each other, it was decided to extend this distance; in other words, to transfer the video data over an Ethernet cable by converting and compressing the DVI video data format into packets. The whole study is devoted to this goal and tries to clarify the main problems and difficulties in achieving it.
Brunet, Dominique. "Métriques perceptuelles pour la compression d'images : étude et comparaison des algorithmes JPEG et JPEG2000." Master's thesis, Université Laval, 2007. http://hdl.handle.net/20.500.11794/19752.
In the present work we describe the image compression algorithms JPEG and JPEG2000, and then compare them using a perceptual metric. The JPEG algorithm decomposes an image with the discrete cosine transform; the transformed map is then quantized and encoded with Huffman coding. The JPEG2000 algorithm instead uses the wavelet transform to decompose an image at multiple resolutions. We describe a few properties of wavelets and show their utility in image compression, for instance orthogonality or biorthogonality, real coefficients, compact support, number of vanishing moments, regularity and symmetry. We then briefly show how JPEG2000 works. Next we argue that the RMSE error is clearly not the best perceptual metric, and we suggest other metrics based on a model of the human visual system. We describe the SSIM index and suggest it as a tool to evaluate image quality. Finally, using the SSIM metric, we show that JPEG2000 surpasses JPEG.
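The SSIM index mentioned above combines luminance, contrast and structure comparisons. The standard index averages the expression below over local sliding windows; this sketch applies it globally for brevity (a simplification of ours, not the thesis's exact procedure):

```python
import numpy as np

def global_ssim(x, y, peak=255.0):
    """SSIM computed from global image statistics (the standard index
    averages this same expression over local sliding windows)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.tile(np.arange(64, dtype=np.float64), (64, 1))   # a simple ramp image
assert abs(global_ssim(a, a) - 1.0) < 1e-12             # identical images score 1
print(global_ssim(a, a + 8.0))                          # distortion lowers the score
```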
Brunet, Dominique. "Métriques perceptuelles pour la compression d'images. Étude et comparaison des algorithmes JPEG et JPEG2000." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/25159/25159.pdf.
Geijer, Mia. "Makten över monumenten : restaurering av vasaslott 1850-2000 /." Stockholm : Nordiska museets förlag, 2007. http://www.nordiskamuseet.se/Upload/images/4709.jpg.
Lin, Zih-Chen, and 林子辰. "Fast JPEG 2000 Encryptor." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/03990532894756652275.
National Defense University, Chung Cheng Institute of Technology
Institute of Electronic Engineering
Academic year 96 (ROC calendar, 2007)
With the progress of information science, networks have become important, and image security is becoming more important too. Image compression technologies change with each passing day: they not only improve compression efficiency, but also provide different characteristics for a variety of applications. Therefore, an efficient image encryption method should be developed according to the characteristics of the compression technique itself. JPEG 2000 is an emerging standard for still image compression. It provides various functionalities to solve the problems of different image applications and may well become one of the most popular image formats. Therefore, JPEG 2000 image encryption has become a hot topic in image security research. One of the important properties of a JPEG 2000 codestream is that every two consecutive bytes in a packet body should lie in the interval [0x0000, 0xFF8F], so that a standard JPEG 2000 decoder can exactly decode the compressed codestream. This is the so-called compatibility of JPEG 2000 and should be respected by an effective JPEG 2000 encryption method. This thesis proposes a fast JPEG 2000 encryptor which uses cryptographic techniques to encrypt most of the JPEG 2000 compressed data, and uses hardware to overcome the slow performance of software encryption methods. The experimental results show that the proposed JPEG 2000 encryptor can encrypt most of the JPEG 2000 compressed data. Moreover, the encrypted JPEG 2000 images can be decoded by standard JPEG 2000 decoders and can be exactly recovered by the proposed decryptor.
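The byte-pair constraint quoted above is easy to check mechanically: a packet body stays within [0x0000, 0xFF8F] exactly when no 0xFF byte is followed by a byte greater than 0x8F. A sketch (the function name is ours):

```python
def compliant_packet_body(body: bytes) -> bool:
    """True when every pair of consecutive bytes, read as a 16-bit value,
    lies in [0x0000, 0xFF8F]: i.e. 0xFF is never followed by > 0x8F."""
    return all(not (body[i] == 0xFF and body[i + 1] > 0x8F)
               for i in range(len(body) - 1))

assert compliant_packet_body(bytes([0x12, 0xFF, 0x8F, 0x34]))   # 0xFF8F is allowed
assert not compliant_packet_body(bytes([0x12, 0xFF, 0x90]))     # 0xFF90 is not
```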
Chen, Yung-Chen, and 陳詠哲. "Wavelet-Based JPEG 2000 Image Compression." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/66542893741935624689.
LIN, NEIN-HSIEN, and 林念賢. "A Jpeg-2000-Based Deforming Mesh Streaming." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/25369764370956605840.
National Taiwan University
Graduate Institute of Networking and Multimedia
Academic year 95 (ROC calendar, 2006)
For PCs and even mobile devices, video and image streaming technologies such as H.264 and JPEG/JPEG 2000 are already mature. However, streaming technology for 3D models, or so-called mesh data, is still far from practical use. Therefore, in this paper we propose a mesh streaming method based on the JPEG 2000 standard and integrate it into an existing multimedia streaming server, so that our mesh streaming method can directly benefit from current image and video streaming technologies. In this method, the mesh data of a 3D model is first converted into a JPEG 2000 image, and then, based on the JPEG 2000 streaming technique, the mesh data can be transmitted over the Internet as a mesh stream. Furthermore, we extend this mesh streaming method to deforming meshes, as the extension from a JPEG 2000 image to a Motion JPEG 2000 video, so that our method is suitable not only for transmitting a static 3D model but also a 3D animation model. To increase the usability of our method, the mesh stream can also be inserted into an X3D scene as an extension node of X3D. Moreover, since this method is based on the JPEG 2000 standard, our system is well suited for integration into any existing client-server or peer-to-peer multimedia streaming system.
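The core step described above, converting mesh data into an image, can be illustrated by quantizing vertex coordinates into the channels of a fixed-size image. The 16-bit quantization, image size and layout below are our assumptions for illustration, not the paper's actual mapping:

```python
import numpy as np

rng = np.random.default_rng(42)
vertices = rng.uniform(-1.0, 1.0, size=(4096, 3))   # x, y, z per vertex

# Quantize each coordinate to 16 bits and store the vertex list as a
# 64x64 three-channel image, one channel per coordinate axis.
lo, hi = vertices.min(), vertices.max()
q = np.round((vertices - lo) / (hi - lo) * 65535).astype(np.uint16)
image = q.reshape(64, 64, 3)

# "Decoding" the image recovers the mesh up to quantization error
# (at most half a quantization step per coordinate).
dq = image.reshape(-1, 3).astype(np.float64) / 65535 * (hi - lo) + lo
assert np.abs(dq - vertices).max() <= (hi - lo) / 65535
```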
Hu, Guang-nan, and 胡光南. "Fast downsizing transcoder for JPEG 2000 images." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/64887184539077129199.
Nanhua University
Graduate Institute of Information Management
Academic year 96 (ROC calendar, 2007)
This paper presents a fast downsizing method for JPEG 2000 images. The proposed method downsizes JPEG 2000 images in the frequency domain. Compared with the traditional method, which downsizes JPEG 2000 images in the spatial domain, our proposed method can effectively reduce both the required memory space and the execution time. Experimental results reveal that the proposed frequency-domain downsizing method reduces the average execution time of the spatial-domain downsizing method by 6% to 73% when images are downscaled to 1/2^n × 1/2^n of the original image size.
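Downsizing in the wavelet (frequency) domain amounts to reconstructing only the lower-resolution subbands and discarding the rest. With a Haar-style analysis, keeping only the LL band halves each dimension per level; iterating gives the 1/2^n × 1/2^n sizes quoted above. A minimal sketch of the principle (not the paper's transcoder):

```python
import numpy as np

def haar_ll(image):
    """One level of an averaging Haar analysis, keeping only the LL band:
    every 2x2 block is replaced by its mean, halving each dimension.
    Discarding the detail subbands this way is the frequency-domain
    counterpart of downsizing by 1/2 in each direction."""
    h, w = image.shape
    blocks = image.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

img = np.arange(64, dtype=np.float64).reshape(8, 8)
half = haar_ll(img)
assert half.shape == (4, 4)
quarter = haar_ll(half)          # iterate for 1/2^n x 1/2^n
assert quarter.shape == (2, 2)
```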
Lian, Chung-Jr, and 連崇志. "Design and Implementation of Image Coding Systems: JPEG, JPEG 2000 and MPEG-4 VTC." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/77902516298772727288.
National Taiwan University
Graduate Institute of Electrical Engineering
Academic year 91 (ROC calendar, 2002)
In this dissertation, the hardware architecture design and implementation of image coding systems are presented. The research focuses on three image coding standards: JPEG, JPEG2000, and MPEG-4 Visual Texture Coding (VTC). JPEG is a well-known and matured standard. It has been widely used for natural image compression, especially very popular for digital still camera applications. In the first part of this dissertation, we proposed fully pipelined JPEG encoder and decoder for high speed image processing requirements in post-PC electronic appliances. The processing power is 50 million samples per second at 50 MHz working frequency. The proposed architectures can handle million-pixel digital images'''''''' encoding and decoding in very high speed. Other feature is that both the encoder and decoder are stand-alone and full-function solutions. They can encode or decode the JPEG compliant file without any aids from extra processor. JPEG2000 is the latest image coding standard. It is defined to be a more powerful standard after JPEG. JPEG2000 provids better compression performance, especially at low bitrates. It also provides various features, such as quality and resolution progressive, region of interest coding, lossy and lossless coding in an unified framework, etc. The performance of JPEG2000 comes at the cost of higher computational complexity. In the second part of the dissertation, we discuss the challenges and issues of the design of a JPEG2000 coding system. Cycle efficient block encoding and decoding engines, and computation reduction techniques by Tier-2 feedback are proposed for the most critical module, Embedded Block Coding with Optimized Truncation (EBCOT). With the proposed parallel checking and skipping-based coding schemes, the scanning cycles can be reduced to 40% of the direct bit-by-bit implementation. 
With the Tier-2 feedback control in lossy coding mode, the execution cycles, and therefore the power consumption, can be lowered to 50% at a compression ratio of about 10:1. MPEG-4 Visual Texture Coding (VTC) is another compression tool that adopts a wavelet-based algorithm. In VTC, zero-tree coding is used to generate the context symbols for the arithmetic coder. In the third part, the design of the zero-tree coding module is discussed. Tree-depth scan with multiple quantization modes is realized, and a dedicated data access scheme is designed for a smooth coding flow. In each chapter, a detailed analysis of the algorithms is provided first; efficient hardware architectures are then proposed that exploit special characteristics of each algorithm. The proposed dedicated architectures greatly improve processing performance compared with a general processor-based solution. For non-PC consumer applications, these architectures are competitive solutions for meeting cost-efficiency and high-performance requirements.
Chuang, Ming-Chuan, and 莊銘權. "Suitable for JPEG 2000 Image Encryption/Decryption Scheme." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/85047454584863342890.
Full text國防大學中正理工學院
電子工程研究所
96
Image encryption is a very important part of image security. Image compression technologies change with each passing day: they not only improve compression efficiency but also provide a rich set of features for a variety of applications. Therefore, an efficient image encryption method should be developed according to the characteristics of the compression technique itself. JPEG 2000 is an emerging standard for still image compression. It provides various functionalities that solve the problems of different image applications, and it may well become one of the most popular image formats; JPEG 2000 image encryption has therefore become a hot topic in image security. One important property of a JPEG 2000 codestream is that any two consecutive bytes in a packet body must lie in the interval [0x0000, 0xFF8F], so that a standard JPEG 2000 decoder can correctly decode the compressed codestream. This is the so-called compatibility requirement of JPEG 2000, and an effective JPEG 2000 encryption method must respect it. This thesis proposes a cryptography-based JPEG 2000 image encryption technique that uses a stream cipher to encrypt the JPEG 2000 codestream. To be compatible with the JPEG 2000 syntax, the proposed technique replaces syntax-non-compliant bytes with syntax-compliant bytes and records the positions of these bytes as deciphering information. The deciphering information is then embedded in the header of the JPEG 2000 codestream, making use of the characteristics of the JPEG 2000 syntax, to facilitate decryption on the decoding side. Experimental results show that the proposed JPEG 2000 image encryption scheme is not only compliant with the JPEG 2000 syntax but can also encrypt the entire packets of the JPEG 2000 codestream. That is, the proposed technique has good compatibility and security.
Moreover, because the extra deciphering information generated in the encryption process is very small, the proposed technique is also compression-equivalent. Given these properties, the proposed JPEG 2000 image encryption technique can provide effective protection of JPEG 2000 images in various applications.
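The compatibility constraint described above reduces to a byte-pair scan: inside a packet body, a 0xFF byte must never be followed by a byte greater than 0x8F, since two-byte values above 0xFF8F are reserved as markers. A minimal sketch (the function name and interface are illustrative, not from the thesis):

```python
def is_syntax_compliant(body: bytes) -> bool:
    """Return True if no two consecutive bytes in a packet body exceed
    0xFF8F, i.e. no 0xFF byte is followed by a byte greater than 0x8F."""
    for i in range(len(body) - 1):
        if body[i] == 0xFF and body[i + 1] > 0x8F:
            return False  # this pair would be parsed as a marker
    return True
```

An encryption scheme that must stay decodable by a standard decoder has to keep this predicate true for every packet it rewrites.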
Lin, Chung-Fu, and 林重甫. "Chip Design and System Integration of JPEG 2000." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/84965757346056105810.
Full text國立交通大學
電機與控制工程系所
94
JPEG 2000 is a new image coding system that delivers superior compression performance and provides many advanced features in scalability, flexibility, and system functionality. The two key technologies of JPEG 2000 are the Discrete Wavelet Transform (DWT) and Embedded Block Coding with Optimized Truncation (EBCOT). The DWT decomposes the signal into sub-bands carrying both time and frequency information, which facilitates high compression ratios. EBCOT codes each code-block independently and achieves rate-distortion optimization and scalable coding. These attractive features enable many imaging applications, such as the Internet, wireless, security, and digital cinema. In this thesis, we focus on several design challenges of JPEG 2000 implementation: the memory requirements, processing speed, and throughput of the DWT design, and the JPEG 2000 coprocessor. First, an implementation of the two-dimensional DWT operating on a whole image uses internal memory to store the intermediate column-processed data, whose size is proportional to the image dimension. In addition, the pipeline stages of the DWT data path prolong the data dependency and increase the memory size. Thus, for a high-speed and memory-efficient design, we explore the trade-off between the critical path and the internal memory size for the lossless 5/3 and lossy 9/7 filters of JPEG 2000. To ease the trade-off between the pipeline stages of the 1-D architecture and the memory requirement of the 2-D implementation, a modified algorithm is proposed for the 1-D and 2-D pipeline architectures. Based on the modified data path of the lifting-based DWT, the proposed architecture can achieve a high processing frequency by inserting more pipeline stages without increasing the internal memory size (the details are given in Chapter 3).
As for the integration of a JPEG 2000 coprocessor from the DWT, BPC (bit-plane coder), and AC (arithmetic coder) components, the overall encoding system may suffer performance degradation and need more hardware resources, since different components require different I/O bandwidths and buffers. To decrease the internal memory size and increase the overall throughput, we propose a Quad Code-Block (QCB)-based DWT engine that eases this integration-induced performance degradation. Based on the changed output timing of the DWT process, three code-blocks are generated in every fixed execution time slice, so the DWT and BPC processes reach higher parallelism than with the traditional DWT method. Moreover, the overall performance preserves the high performance of the individual components, while the internal memory size is also reduced (the details are given in Chapter 4). To verify the proposed DWT and JPEG 2000 coprocessor architectures, we implement the DWT design and a JPEG 2000-based image system on an ARM-based platform (the Integrator system). To make the JPEG 2000 coprocessor more applicable, we wrap the design with an AHB (Advanced High-performance Bus) interface. Through this bus, the ARM processor can communicate with the JPEG 2000 coprocessor and complete the JPEG 2000 compression process. The overall system is realized by integrating the ARM processor and the JPEG 2000 coprocessor (the details are given in Chapter 5). Finally, we give brief conclusions and future work in Chapter 6.
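The reversible 5/3 filter discussed above is defined by two integer lifting steps (predict, then update), which is what makes lossless coding possible. A minimal 1-D sketch, assuming an even-length signal and the whole-sample symmetric boundary extension of JPEG 2000 Part 1:

```python
def dwt53_forward(x):
    """One level of the reversible 5/3 lifting DWT on an even-length list."""
    n = len(x)
    # Predict: detail (high-pass) coefficients from neighbouring even samples.
    d = [x[2*i + 1] - (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
         for i in range(n // 2)]
    # Update: approximation (low-pass) coefficients from the details.
    s = [x[2*i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
         for i in range(n // 2)]
    return s, d

def dwt53_inverse(s, d):
    """Undo the lifting steps in reverse order for exact reconstruction."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        x[2*i] = s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
    for i in range(len(d)):
        x[2*i + 1] = d[i] + (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
    return x
```

Because the inverse simply undoes the forward steps in reverse order, the pair is losslessly invertible in integer arithmetic; a hardware data path pipelines exactly these predict/update stages.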
Chuang, Yan-Tse, and 莊彥澤. "Embedded Edge Image within JPEG-2000 Compression System." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/71510167702242108119.
Full text國立臺灣大學
電機工程學研究所
89
The JPEG-2000 system is the newest standard for still image compression. In this thesis, we discuss the basic architecture of the JPEG-2000 system, which can be viewed as an evolution of the image compression techniques of recent years. Over the past decades, a different class of image coding schemes, generally referred to as second-generation image coding techniques, has been proposed; it teaches that edges carry features rich in the information used for recognizing or perceiving an image. Based on this concept, we propose two methods that combine the JPEG-2000 system with second-generation image coding techniques so as to display the edge image first, followed by the parts of the image that do not belong to the edges. These coding schemes allow an image to be perceived from its contours first and could be applied to several conventional application-specific image systems in which edges contain critical information, such as pattern recognition and motion estimation.
Chen, Kuan-Fu, and 陳冠夫. "EFFICIENT ARCHITECTURE DESIGN OF EBCOT FOR JPEG-2000." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/76214593485370109133.
Full text國立臺灣大學
電機工程學研究所
89
With the rapid growth of the Internet and the spread of digital still cameras (DSCs), still images are broadly used as a storage and transmission medium. JPEG-2000 is a new still image compression standard. It has better compression performance than the conventional JPEG standard and provides many useful features; a hardware implementation of JPEG-2000 therefore becomes an essential technique for digital still cameras. In this thesis, a high-performance hardware architecture for the EBCOT block coder of JPEG-2000 is proposed. Speedup methods and pipelining are adopted according to the characteristics of the EBCOT block coding algorithm. Using this architecture, processing time can be reduced to about 40% of previous work. The block coder has two major parts: context formation (CF) and the arithmetic encoder (AE). The main idea for improving the speed of context formation is to skip no-operation samples; we achieve this with a column-based coding architecture and two speedup methods. In the arithmetic encoder, a four-stage pipeline is used to reduce the clock cycle time, and a look-ahead technique is used in the probability table lookup to process two identical contexts input consecutively. A prototype chip was implemented to verify the proposed architecture in a 0.35 µm 1P4M CMOS technology. The chip area is 3.67 × 3.67 mm2 and the clock frequency is 50 MHz. It can encode a 4.6-million-pixel image within one second, corresponding to a 2400 × 1800 image, or a 452 × 340 video sequence at 30 frames per second. The chip is therefore compliant with the upcoming Motion JPEG-2000 standard.
Соколова, В. К. "Характеристика стандарту jpeg 2000 для кодування растрових зображень." Thesis, 2020. http://openarchive.nure.ua/handle/document/13953.
Full textHsieh, Hsing-Chin, and 謝幸芝. "Apply JPEG 2000 to Multiple Description Transmission System." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/01596421170934545844.
Full text中原大學
電機工程研究所
90
A novel JPEG 2000-based multiple description algorithm is presented in this thesis. JPEG 2000 outperforms many existing image coding algorithms; however, if images are transmitted over a noisy channel and the markers are corrupted, the images cannot be reconstructed. The algorithm therefore partitions the wavelet coefficients into several groups and uses JPEG 2000 to compress each group. Each group is delivered over the noisy channel separately to disperse the risk. This improves the probability that the decoder can reconstruct the image and saves bandwidth, because the bit-streams do not need to be retransmitted. Since JPEG 2000 is effective for image coding and multiple description is useful for image delivery over networks, the algorithm developed in this thesis combines the two techniques so that systems with high rate-distortion performance, low bandwidth requirements, and excellent error resilience can be attained. Keywords: JPEG 2000, multiple description transmission system, error resilience.
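The coefficient grouping step can be pictured as an interleaved split: each description samples the wavelet coefficients at a stride, so any single description still covers the whole image coarsely. A hedged sketch (the round-robin rule is an illustrative choice, not necessarily the partition used in the thesis):

```python
def partition_coefficients(coeffs, n_descriptions):
    """Split a 1-D list of wavelet coefficients into interleaved groups,
    one per description, so losing any one group degrades the whole image
    gracefully instead of destroying one region."""
    return [coeffs[k::n_descriptions] for k in range(n_descriptions)]
```

Each group would then be compressed by JPEG 2000 and sent as an independent description.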
Tseng, Chien-Ming, and 曾建銘. "Implementation of Wavelet Transform in JPEG 2000 Using FPGA." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/90934348816649845447.
Full text國立高雄第一科技大學
電腦與通訊工程所
90
JPEG 2000 is the emerging standard for coding and compressing still images. JPEG 2000 is based on the discrete wavelet transform (DWT), in contrast to the existing JPEG standard, which uses the discrete cosine transform (DCT). JPEG 2000 provides numerous advantages over the existing JPEG standard: performance gains include improved compression efficiency at low bit rates and for large images, while new functionalities include progressive transmission, lossy and lossless compression, and region-of-interest (ROI) coding. The JPEG 2000 standard is designed to serve a number of different markets and applications, such as low-bandwidth transmission of images over the Web and the Internet, medical imaging, digital photography, e-commerce, and scanner applications. Because the lifting scheme allows a faster implementation of the discrete wavelet transform, fully in-place calculation, and an inverse transform obtained simply by undoing the operations of the forward transform, it has been chosen in the JPEG 2000 standard. First, we perform a symmetric extension of the input signals. Then, based on adder-based distributed arithmetic (DA), we use the canonical signed-digit (CSD) form to reduce the number of nonzero bits in the 9/7 filter coefficients and identify shared terms among the multipliers to reduce hardware. Finally, the proposed architecture is described in Verilog HDL, synthesized, and verified on a Xilinx FPGA.
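The symmetric extension step mentioned above mirrors the signal about its first and last samples before filtering, so the filter never reads past the boundary. A minimal sketch of the whole-sample symmetric extension used by JPEG 2000 (the function name and interface are illustrative):

```python
def symmetric_extend(x, left, right):
    """Whole-sample symmetric extension of a 1-D signal: mirror about the
    first and last samples (the boundary samples are not duplicated)."""
    head = x[1:left + 1][::-1]      # samples reflected about x[0]
    tail = x[-2:-(right + 2):-1]    # samples reflected about x[-1]
    return head + x + tail
```

For example, extending [1, 2, 3, 4] by two samples on each side yields [3, 2, 1, 2, 3, 4, 3, 2].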
Hung-Chjeh, Hsin, and 辛鴻杰. "The Design and Implementation of A JPEG 2000 Parser." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/07776127057855019852.
Full text國立臺灣大學
資訊工程學研究所
89
With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard, JPEG 2000, is currently being developed. It is intended not only to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide features and functionalities that current standards either cannot address efficiently or, in many cases, cannot address at all. In this thesis, we study and discuss the JPEG 2000 image coding standard. We also implement a software parser that contains a JPEG 2000 Part I encoder and records information during the encoding process; with it, users can observe intermediate results during the embedded coding process. We hope that this system can help people understand JPEG 2000 and, moreover, develop better algorithms for it.
"JPEG 2000 and parity bit replenishment for remote video browsing." Université catholique de Louvain, 2008. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-09162008-160601/.
Full textHung-Chi, Fang. "Design and Implementation of High Performance JPEG 2000 Encoding System." 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-0706200509583700.
Full textShih-Wei, Lin, and 林詩緯. "A Progressive Image Authentication Scheme Base on JPEG 2000 Codec." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/23977291748077565877.
Full text國防大學中正理工學院
電子工程研究所
94
Image authentication is a very important part of image security. Image compression technologies change with each passing day: they not only improve compression efficiency but also provide a rich set of features for a variety of applications. An efficient image authentication method should be developed according to the characteristics of the compression technique itself. JPEG 2000 is an emerging standard for still image compression and is becoming the solution of choice in many digital imaging fields and applications. One important aspect of JPEG 2000 is its scalability: a decoder can extract a sub-image from a single compressed codestream using only the essential image data. This thesis therefore proposes a progressive image authentication scheme based on the JPEG 2000 codec. Recognizing that security is an important concern in many applications, the Joint Photographic Experts Group (JPEG) is conducting an ongoing activity known as Secure JPEG 2000, or JPSEC, whose goal is to extend the JPEG 2000 standard to provide a standardized framework for secure imaging. Looking toward future developments, the thesis also proposes a progressive image authentication scheme integrated into the JPSEC syntax that is backward compatible with the scalable JPEG 2000 codestream. The proposed schemes use packets as the authentication units. Because packets are also the basic units of the JPEG 2000 codestream, the proposed techniques fully support the different progressive strategies of JPEG 2000 image coding. Experimental results show that the proposed authentication schemes can not only authenticate an extracted sub-codestream progressively but can also locate tampered regions of the image. The proposed schemes are therefore practical for various applications of progressive JPEG 2000 image authentication.
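Using packets as authentication units means each packet can be paired with its own tag, so any prefix of a progressive codestream can be verified on its own. A hedged sketch using per-packet HMAC-SHA256 tags (the tagging construction is illustrative; the thesis's actual scheme may differ):

```python
import hashlib
import hmac

def tag_packets(packets, key):
    """Compute an HMAC-SHA256 tag per packet; a decoder that has received
    only the first k packets can still verify exactly those k."""
    return [hmac.new(key, p, hashlib.sha256).digest() for p in packets]

def verify_prefix(packets, tags, key):
    """Verify whatever prefix of the codestream has arrived so far."""
    return all(
        hmac.compare_digest(hmac.new(key, p, hashlib.sha256).digest(), t)
        for p, t in zip(packets, tags)
    )
```

A failed comparison also localizes tampering to the offending packet, which is the granularity the thesis's experiments report.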
Wu, Ming-Chu, and 吳名珠. "Detection and Concealment of Transmission Errors in JPEG-2000 images." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/88870651632598196056.
Full text國立中正大學
資訊工程研究所
88
In this study, an approach to the detection and concealment of transmission errors in JPEG-2000 images is proposed. For entropy-coded JPEG-2000 images, a transmission error in a codeword (subband sample) may cause the underlying codeword (subband sample) and its subsequent codewords (subband samples) to be misinterpreted, resulting in great degradation of the received images. Here a transmission error may be a single-bit error or a burst error containing N successive error bits. The objective of the proposed approach is to detect and conceal transmission errors in JPEG-2000 images, i.e., to recover high-quality JPEG-2000 images from their corrupted counterparts. The approach assumes that the synchronization capability of JPEG-2000 is enabled, i.e., the 256 unique Resync markers are inserted into the JPEG-2000 bitstream periodically, so that the synchronization problem can be solved by the proposed Resync marker regulation procedure. The difference between the variance of a received block and that of the corresponding original block is used to determine whether the received block is corrupted. In the proposed error concealment approach, linear prediction and crossband information are employed. The subband samples in any corrupted block on levels 3-5 are simply replaced by zeros. The subband samples in any corrupted block on levels 1-2 are concealed by the error concealment algorithms developed in [77], with the subband samples from LL0 set to the corresponding 16 mean values of the subband samples of the 16 subblocks in LL0. A corrupted subband LL0 is first concealed by replacing its subband samples with those same 16 mean values; adaptive linear prediction and crossband information from all correctly received subbands on levels 1 and 2 are then used to improve the concealed result.
Finally, a 3 × 3 sliding window is used to further improve the concealed result of LL0. Based on the simulation results obtained in this study, the proposed approach can recover JPEG-2000 images from their corrupted counterparts, which shows the feasibility of the approach.
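The corruption test described above compares the sample variance of a received block against the variance recorded for the corresponding original block. A minimal sketch (the threshold is a hypothetical tuning parameter, not a value from the thesis):

```python
def block_corrupted(received, original_variance, threshold):
    """Flag a subband block as corrupted when its sample variance deviates
    from the original block's variance by more than a threshold."""
    n = len(received)
    mean = sum(received) / n
    variance = sum((v - mean) ** 2 for v in received) / n
    return abs(variance - original_variance) > threshold
```

Blocks flagged by this test would then be handed to the concealment stage (zeroing, linear prediction, or crossband replacement depending on the decomposition level).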
Fang, Hung-Chi, and 方弘吉. "Design and Implementation of High Performance JPEG 2000 Encoding System." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/45562301877274091679.
Full text國立臺灣大學
電子工程學研究所
93
JPEG 2000 is a new still image coding standard that offers not only better coding efficiency but also abundant useful features. However, its high computational complexity and memory requirements have obstructed its entry into the market. In this dissertation, we propose a high-performance JPEG 2000 encoding system to solve this problem. For the embedded block coding, we propose a parallel architecture that increases throughput by processing one coefficient at a time, eliminating the state-variable memory and the code-block memory. This greatly reduces the hardware cost, since these memories occupy more than 80% of the area of the embedded block coding engine in conventional architectures. Moreover, the processing speed is increased by more than 6 times compared with the best result in the literature; the proposed parallel architecture thus combines high speed with low cost. Rate-distortion optimization is an important function of JPEG 2000. However, the post-compression rate-distortion optimization algorithm recommended in the reference software requires that the original image be losslessly coded regardless of the target bit rate. This wastes computation time and power on unnecessary data and requires a large memory to buffer the lossless bit stream. To solve this problem, we propose a pre-compression rate-distortion optimization algorithm, which performs the rate-distortion optimization before the embedded block coding. The embedded block coding then only needs to process the necessary data, which greatly reduces its processing time and power consumption, and no bit-stream buffer is needed. The proposed pre-compression rate-distortion optimization algorithm therefore offers low power, high speed, and low cost. Based on these two new techniques, a high-speed parallel JPEG 2000 encoder chip is implemented. It can encode HDTV 720p video in real time.
For the discrete wavelet transform, we adopt a multi-level line-based 2-D architecture, which minimizes the memory bandwidth requirement of the chip: each pixel is read once and only once. The chip is fabricated in TSMC 0.25 µm CMOS technology with a core area of 5.5 mm2, and it consumes 348 mW at 81 MHz. This encoder has the highest throughput and the smallest silicon area of all encoders in the literature. Finally, we propose a stripe pipeline scheme for large tile sizes. With this scheme, the on-chip memory requirement of a JPEG 2000 encoder is proportional to the square root of the tile size, whereas it is proportional to the tile size in previous works. For a tile size of 256×256, the tile memory requirement is reduced to only 8.5% of previous works. To realize the stripe pipeline scheme, a level-switch discrete wavelet transform and a code-block-switch embedded block coding have been proposed: the level-switch discrete wavelet transform is a multi-level block-based scan architecture, and the code-block-switch embedded block coding processes 13 code-blocks in parallel. As a result, the hardware cost of this pipeline architecture is about 30% of that of the parallel encoder for a 256×256 tile, and the area saving grows with the tile size. With the algorithms and architectures proposed in this dissertation, the cost of a JPEG 2000 encoder can be reduced to only a few times that of a JPEG encoder, while all the features and functionalities of JPEG 2000 are retained. We therefore believe that JPEG 2000 will begin to take the place of JPEG as the core technology of still image coding systems in the near future.
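The rate-distortion optimization step, whether run post- or pre-compression, ultimately picks one truncation point per code-block by a Lagrangian trade-off. A hedged sketch of that selection (the names and the cost form D + λR are illustrative; the dissertation's contribution lies in where the rate-distortion estimates come from, not in this final minimization):

```python
def select_truncation_point(rates, distortions, lam):
    """Choose the truncation point minimizing distortion + lam * rate
    over one code-block's candidate truncation points."""
    costs = [d + lam * r for r, d in zip(rates, distortions)]
    return costs.index(min(costs))
```

Sweeping the multiplier lam over all code-blocks until the total rate meets the target bit rate yields the rate-distortion-optimized truncation for the whole image.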