
Dissertations / Theses on the topic 'JPEG2000'


Consult the top 50 dissertations / theses for your research on the topic 'JPEG2000.'


1

Natu, Ambarish Shrikrishna, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Error resilience in JPEG2000." Awarded by: University of New South Wales. Electrical Engineering and Telecommunications, 2003. http://handle.unsw.edu.au/1959.4/18835.

Abstract:
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss in such a way as to maximize the received image quality. This thesis addresses this problem and investigates a forward error correction (FEC) technique evaluated in the context of the emerging JPEG2000 standard. Little effort has been made in the JPEG2000 project regarding error resilience. The only standardized techniques are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy to the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
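The unequal-protection idea described above can be sketched in a few lines: stronger Reed-Solomon codes go to the earlier, more important quality layers. This is an illustrative sketch only, not the thesis's exact scheme; the layer sizes and RS(n, k) choices below are hypothetical.

```python
# Illustrative sketch of unequal error protection for a layered
# code-stream: each quality layer gets its own RS(n, k) code, with
# stronger codes (lower k/n) assigned to more important layers.

def uep_overhead(layer_sizes, rs_codes):
    """Return total coded size and overall code rate.

    layer_sizes: bytes per quality layer, most important first.
    rs_codes: matching list of (n, k) RS parameters; every k data
              symbols are expanded to n coded symbols.
    """
    total_data = sum(layer_sizes)
    total_coded = 0
    for size, (n, k) in zip(layer_sizes, rs_codes):
        blocks = -(-size // k)          # ceiling division: RS blocks needed
        total_coded += blocks * n
    return total_coded, total_data / total_coded

# Hypothetical allocation: the base layer gets the strongest code.
layers = [4096, 8192, 16384]
codes = [(255, 191), (255, 223), (255, 239)]
coded, rate = uep_overhead(layers, codes)
```

Varying the `(n, k)` table against a channel's bit error rate is the kind of trade-off the thesis studies: more redundancy on the base layer preserves a usable image even when later layers are lost.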
2

Gupta, Amit Kumar, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Hardware optimization of JPEG2000." Awarded by: University of New South Wales. School of Electrical Engineering and Telecommunications, 2006. http://handle.unsw.edu.au/1959.4/30581.

Abstract:
The key algorithms of JPEG2000, the new image compression standard, have high computational complexity and thus present challenges for efficient implementation. This has led to research on the hardware optimization of JPEG2000 for its efficient realization. Fortunately, recent growth in microelectronics allows us to realize dedicated ASIC solutions as well as hardware/software FPGA-based solutions for complex algorithms such as JPEG2000. But an efficient implementation within hard constraints of area and throughput demands investigation of key dependencies within the JPEG2000 system. This work presents algorithms and VLSI architectures to realize a high-performance JPEG2000 compression system. The embedded block coding algorithm, which lies at the heart of a JPEG2000 compression system, is a main contributor to its complexity. This work first concentrates on algorithms to realize a low-cost, high-throughput Block Coder (BC) system. For this purpose, a Bit Plane Coder architecture capable of concurrent symbol processing is presented. Further, an optimal two-sub-bank memory and an efficient buffer architecture are designed to keep the hardware cost low. The proposed overall BC system presents the highest figure of merit (FOM) in terms of throughput versus hardware cost in comparison to existing BC solutions. This work also investigates the challenges involved in the efficient integration of the BC with the overall JPEG2000 system. A novel low-cost distortion estimation approach with near-optimal performance is proposed, which is necessary for accurate rate-control performance of JPEG2000. Additionally, low-bandwidth data storage and transfer techniques are proposed for efficient transfer of subband samples to the BC. Simulation results show that the proposed techniques require approximately 4 times less bandwidth than existing architectures.
In addition, an efficient high throughput block decoder architecture based on the proposed selective sample-skipping algorithm is presented. The proposed architectures are designed and analyzed on both ASIC and FPGA platforms. Thus, the proposed algorithms, architectures and efficient BC integration strategies are useful for realizing a dedicated ASIC JPEG2000 system as well as a hardware/software FPGA based JPEG2000 solution. Overall this work presents algorithms and architectures to realize a high performance JPEG2000 system without imposing any restrictions in terms of coding modes or block size for the BC system.
3

Dyer, Michael Ian, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Hardware Implementation Techniques for JPEG2000." Awarded by: University of New South Wales. Electrical Engineering and Telecommunications, 2007. http://handle.unsw.edu.au/1959.4/30510.

Abstract:
JPEG2000 is a recently standardized image compression system that provides substantial improvements over the existing JPEG compression scheme. This improvement in performance comes with an associated cost in increased implementation complexity, such that a purely software implementation is inefficient. This work identifies the arithmetic coder as a bottleneck in efficient hardware implementations, and explores various design options to improve arithmetic coder speed and size. The designs produced improve the critical path of existing arithmetic coder designs, and then extend the coder throughput to 2 or more symbols per clock cycle. Subsequent work examines system-level implementation issues. It examines the communication between hardware blocks and utilizes certain modes of operation to add flexibility to buffering solutions. It becomes possible to significantly reduce the amount of intermediate buffering between blocks, whilst maintaining loose synchronization. Full hardware implementations of the standard are necessarily limited in the number of features that they can offer, in order to constrain complexity and cost. To circumvent this, a hardware/software codesign is produced using the Altera NIOS II softcore processor. By keeping the majority of the standard implemented in software and using hardware to accelerate the time-consuming functions, generality of implementation is retained whilst implementation speed is improved. In addition, there is the opportunity to exploit parallelism by providing multiple identical hardware blocks to code multiple data units simultaneously.
4

Oh, Han. "Perceptual Image Compression using JPEG2000." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202996.

Abstract:
Image sizes have increased exponentially in recent years. The resulting high-resolution images are typically encoded in a lossy fashion to achieve high compression ratios. Lossy compression can be categorized into visually lossless and visually lossy compression depending on the visibility of compression artifacts. This dissertation proposes visually lossless coding methods as well as a visually lossy coding method with perceptual quality control. All resulting codestreams are JPEG2000 Part-I compliant.

Visually lossless coding is increasingly considered as an alternative to numerically lossless coding. In order to hide compression artifacts caused by quantization, visibility thresholds (VTs) are measured and used for quantization of subbands in JPEG2000. In this work, VTs are experimentally determined from statistically modeled quantization distortion, which is based on the distribution of wavelet coefficients and the dead-zone quantizer of JPEG2000. The resulting VTs are adjusted for locally changing backgrounds through a visual masking model, and then used to determine the minimum number of coding passes to be included in a codestream for visually lossless quality under desired viewing conditions. The proposed coding scheme successfully yields visually lossless images at competitive bitrates compared to those of numerically lossless coding and visually lossless algorithms in the literature.

This dissertation also investigates changes in VTs as a function of display resolution and proposes a method which effectively incorporates multiple VTs for various display resolutions into the JPEG2000 framework. The proposed coding method allows for visually lossless decoding at resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely, this method can significantly reduce bandwidth usage.

Contrary to images encoded in the visually lossless manner, highly compressed images inevitably have visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies, which is typically obtained at the near-threshold level where distortion is just noticeable. However, it is unclear that the same frequency sensitivity applies at the supra-threshold level where distortion is highly visible. In this dissertation, the sensitivity of the HVS for several supra-threshold distortion levels is measured based on the JPEG2000 quantization distortion model. Then, a low-complexity JPEG2000 encoder using the measured sensitivity is described. The proposed visually lossy encoder significantly reduces encoding time while maintaining superior visual quality compared with conventional JPEG2000 encoders.
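The core mechanism behind visually lossless quantization can be illustrated with a minimal sketch: for each subband, pick the coarsest quantizer step whose predicted distortion stays at or below that subband's visibility threshold. The linear distortion model and the VT values below are stand-ins, not the dissertation's measured model.

```python
# Minimal sketch of VT-driven quantizer selection, under an assumed
# (stand-in) distortion model: distortion grows linearly with the step.

def pick_step(vt, candidate_steps, distortion):
    """Largest candidate step whose modeled distortion does not exceed vt."""
    best = min(candidate_steps)
    for step in sorted(candidate_steps):
        if distortion(step) <= vt:
            best = step
    return best

# Stand-in model: peak error of a dead-zone quantizer scales with the step.
model = lambda step: 0.5 * step

vts = {"LL": 0.8, "HL": 1.6, "HH": 3.2}   # hypothetical per-subband VTs
steps = [0.5, 1.0, 2.0, 4.0, 8.0]
chosen = {band: pick_step(v, steps, model) for band, v in vts.items()}
```

Coarser steps where the eye is less sensitive (high-frequency subbands) is what keeps the bitrate competitive while artifacts stay below the visibility threshold.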
5

Aulí Llinàs, Francesc. "Model-Based JPEG2000 rate control methods." Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/5806.

Abstract:
This work is focused on the quality scalability of the JPEG2000 image compression standard. Quality scalability is an important feature that allows the truncation of the code-stream at different bit-rates without penalizing the coding performance. Quality scalability is also fundamental in interactive image transmissions to allow the delivery of Windows of Interest (WOI) at increasing qualities.
JPEG2000 achieves quality scalability through the rate control method used in the encoding process, which embeds quality layers into the code-stream. In some scenarios, this architecture might raise two drawbacks: on the one hand, when the coding process finishes, the number and bit-rates of quality layers are fixed, causing a lack of quality scalability in code-streams encoded with a single or few quality layers. On the other hand, the rate control method constructs quality layers considering the rate-distortion optimization of the complete image, and this might not allocate the quality layers adequately for the delivery of a WOI at increasing qualities.
This thesis introduces three rate control methods that supply quality scalability for WOIs, or for the complete image, even if the code-stream contains a single or few quality layers. The first method is based on a simple Coding Passes Interleaving (CPI) that models the rate-distortion through a classical approach. An accurate analysis of CPI motivates the second rate control method, which introduces simple modifications to CPI based on a Reverse subband scanning Order and coding passes Concatenation (ROC). The third method benefits from the rate-distortion models of CPI and ROC, developing an approach based on a novel Characterization of the Rate-Distortion slope (CoRD) that estimates the rate-distortion of the code-blocks within a subband.
Experimental results suggest that CPI and ROC are able to supply quality scalability to code-streams, even if they contain a single or few quality layers, achieving a coding performance almost equivalent to the one obtained with the use of quality layers. However, the results of CPI are unbalanced among bit-rates, and ROC presents an irregular coding performance for some corpus of images. CoRD outperforms CPI and ROC achieving well-balanced and regular results and, in addition, it obtains a slightly better coding performance than the one achieved with the use of quality layers. The computational complexity of CPI, ROC and CoRD is negligible in practice, making them suitable to control interactive image transmissions.
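The classical rate-distortion approach these methods model can be sketched as follows: rank candidate coding passes by their distortion-rate slope and include them, steepest first, until the bit budget is filled. This is a simplified greedy sketch; it ignores the within-code-block pass ordering constraint of real JPEG2000, and the pass list is hypothetical.

```python
# Sketch of slope-based code-stream truncation: each candidate pass is
# (bytes, distortion_reduction); passes are taken in order of decreasing
# distortion-rate slope until the byte budget is exhausted.

def truncate(passes, budget):
    """Return (sorted indices of chosen passes, bytes spent)."""
    slopes = [(d / r, i) for i, (r, d) in enumerate(passes)]
    slopes.sort(reverse=True)               # steepest slope first
    chosen, spent = [], 0
    for _, i in slopes:
        r, _ = passes[i]
        if spent + r <= budget:             # greedy: skip passes that overflow
            chosen.append(i)
            spent += r
    return sorted(chosen), spent

passes = [(100, 900.0), (80, 400.0), (120, 240.0), (60, 60.0)]
chosen, spent = truncate(passes, budget=260)
```

Embedding quality layers amounts to running this selection at several successive budgets; the CPI, ROC and CoRD methods instead estimate the slopes at decoding time so the selection works even when few layers were embedded.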
6

Nilsson, Per. "Hardware / Software co-design for JPEG2000." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5796.

Abstract:

For demanding applications, for example image or video processing, there may be computations that are not well suited to digital signal processors. While a DSP processor is appropriate for some tasks, its instruction set can be extended in order to achieve higher performance on the tasks that such a processor is not normally designed for. The platform used in this project is flexible in the sense that new hardware can be designed to speed up certain computations.

This thesis analyzes the computationally complex parts of JPEG2000. In order to achieve sufficient performance for JPEG2000, hardware acceleration may be needed.

First, a JPEG2000 decoder was implemented for a DSP processor in assembler. When the firmware had been written, the cycle consumption of its parts was measured and estimated. From this analysis, the bottlenecks of the system were identified. Furthermore, new processor instructions that could be implemented for this system are proposed. Finally, the performance improvements are estimated.

7

Pu, Lingling. "Joint Source/Channel Coding For JPEG2000." Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/194377.

Abstract:
In today's world, demand for digital multimedia services is growing tremendously, together with the development of new communication technologies and the investigation of new transmission media. Two common problems encountered in multimedia services are unreliable transmission channels and limited resources. This dissertation investigates advanced source coding and error control techniques, and is dedicated to designing joint source-channel coding schemes for robust image/video transmission. Error resilience properties of JPEG2000 codestreams are investigated first, and an LDPC-based joint iterative decoding scheme is proposed. Next, a progressive decoding method is presented for still and motion image transmission. The underlying channel codes are created using a Plotkin construction and offer the novel ability of using one long channel codeword to protect an entire image, yet still allowing progressive decoding. Progressive quality improvements occur in two ways: the first is the usual progressive refinement, where image quality is improved as more data are received; the second is that residual error rates of earlier received data are reduced as more data are received. Finally, multichannel systems are studied and an optimal rate allocation algorithm is proposed for parallel transmission of scalable images in multichannel systems. The proposed algorithm selects a subchannel as well as a channel code rate for each packet, based on the signal-to-noise ratios (SNR) of the subchannels. The resulting scheme provides unequal error protection of source bits, and significant gains are obtained over equal error protection (EEP) schemes. An application of the proposed algorithm to JPEG2000 transmission shows the advantages of exploiting differences in SNRs between subchannels. Multiplexing of multiple sources is also considered, and additional gains are achieved by exploiting information diversity among the sources.
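The SNR-driven allocation idea in this abstract can be illustrated with a small sketch: per subchannel, pick the highest-rate channel code whose SNR requirement the subchannel meets, so better subchannels carry less redundancy. The SNR thresholds and code rates below are hypothetical, not the dissertation's measured operating points.

```python
# Illustrative per-subchannel code-rate selection from a threshold table.
# Entries are (min_snr_db, code_rate), highest rate (least protection) first.
CODE_TABLE = [(12.0, 8 / 9), (8.0, 3 / 4), (4.0, 1 / 2), (0.0, 1 / 3)]

def pick_code_rate(snr_db):
    """Highest-rate code whose SNR requirement is satisfied."""
    for min_snr, rate in CODE_TABLE:
        if snr_db >= min_snr:
            return rate
    return CODE_TABLE[-1][1]        # worst case: strongest code

subchannel_snrs = [13.5, 6.2, 9.1]  # hypothetical per-subchannel SNRs (dB)
rates = [pick_code_rate(s) for s in subchannel_snrs]
```

An optimal allocator would additionally decide which packets (earlier, more important source bits) go on which subchannel; this sketch shows only the rate-selection half of that decision.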
8

Yeung, Yick Ming. "Fast rate control for JPEG2000 image coding /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20YEUNG.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 63-65). Also available in electronic version. Access restricted to campus users.
9

Narayanan, Barath Narayanan. "Multiframe Super Resolution with JPEG2000 Compressed Images." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1365597593.

10

Lucking, David Joseph. "FPGA Implementation of the JPEG2000 MQ Decoder." University of Dayton / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1272050082.

11

Muñoz Gómez, Juan. "Contributions to computed tomography image coding for JPEG2000." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/129099.

Abstract:
Nowadays, thanks to advances in medical science, there exist many different medical imaging techniques aimed at revealing, diagnosing, or examining a disease. Many of these techniques produce very large amounts of data, especially the Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) modalities. To manage these data, medical centers use PACS and the DICOM standard to store, retrieve, distribute, and display medical images. As a result of the high cost of storage and transmission of digital medical images, data compression plays a key role. JPEG2000 is the state of the art in image compression for the storage and transmission of medical images. It is the latest coding system included in DICOM and it provides some interesting capabilities for medical image coding. JPEG2000 enables the use of windows of interest, access to the image at different resolution sizes, and decoding of a specific region of the image. This thesis deals with three different problems detected in CT image coding. The first coding problem is the noise that CT images have. This noise is produced by the use of a low radiation dose during the scan; it yields low-quality images and penalizes coding performance. The use of different noise filters enhances the quality and also increases the coding performance. The second question addressed in this dissertation is the use of multi-component transforms in Computed Tomography image coding. Depending on the correlation among the slices of a Computed Tomography scan, the coding performance of these transforms can vary and even decrease with respect to JPEG2000. Finally, the last contribution deals with the diagnostically lossless coding paradigm, and a new segmentation method is proposed. Through the use of segmentation methods to detect the biological area and discard the non-biological area, JPEG2000 can achieve improvements of more than 2 bpp.
12

Oh, Han, and Yookyung Kim. "Low-Complexity Perceptual JPEG2000 Encoder for Aerial Images." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595684.

Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
A highly compressed image inevitably has visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies. However, this sensitivity has typically been measured at the near-threshold level where distortion is just noticeable. Thus, it is unclear that the same sensitivity applies at the supra-threshold level where distortion is highly visible. In this paper, we measure the sensitivity of the HVS for several supra-threshold distortion levels based on our JPEG2000 distortion model. Then, a low-complexity JPEG2000 encoder using the measured sensitivity is described. For aerial images, the proposed encoder significantly reduces encoding time while maintaining superior visual quality compared with a conventional JPEG2000 encoder.
13

Wu, Zhenyu, Ali Bilgin, and Michael W. Marcellin. "JOINT SOURCE/CHANNEL CODING FOR TRANSMISSION OF MULTIPLE SOURCES." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604932.

Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
A practical joint source/channel coding algorithm is proposed for the transmission of multiple images and videos to reduce the overall reconstructed source distortion at the receiver within a given total bit rate. It is demonstrated that by joint coding of multiple sources with such an objective, both improved distortion performance as well as reduced quality variation can be achieved at the same time. Experimental results based on multiple images and video sequences justify our conclusion.
14

Monteagudo Pereira, José Lino. "Preemptive Strategies for Data Transmission through JPEG2000 Interactive Protocol." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/125921.

Abstract:
Nowadays, with the advent of information technology and communications, images are widely used in many areas of our lives. When sharing or transmitting images, network bandwidth is a major concern, especially for high-resolution images. In a client-server scenario, bandwidth consumption increases with the number of images requested by a user and with the number of users. Efficient transmission strategies are therefore needed to reduce transmission costs and client response times. Efficiency can be achieved through compression and by using a suitable transmission protocol. JPEG2000 is a state-of-the-art image compression standard that excels for its coding performance, advanced features, and powerful interactive transmission capabilities. The JPEG2000 Interactive Protocol (JPIP) is key to achieving fluent image browsing and to minimizing the information exchanged in a client-server scenario. Furthermore, the efficiency of JPIP can be improved with: 1) appropriate coding parameters; 2) packet re-sequencing at the server; 3) prefetching at clients; and 4) proxy servers over the network. Prefetching strategies improve responsiveness, but when clients are in a local area network, redundancies among clients are commonly not exploited and the Internet connection may become saturated. This work proposes the deployment of prefetching mechanisms in JPIP proxy servers to enhance overall system performance. The proposed JPIP proxy server takes advantage of idle times in the Internet connection to prefetch data that anticipate potential future requests from clients. Since the prefetching is performed in the proxy, redundancies among all the clients are considered, minimizing the network load. Three strategies are put forward to reduce latency. The first strategy considers equal probability for next movements. The second strategy uses a user-navigation model. The third strategy predicts the regions of the images that are most likely to be requested, employing a semantic map. All these strategies are implemented in our open-source JPIP implementation named CADI, which is also a contribution of this thesis.
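The first prefetching strategy described above, with equiprobable next movements, can be pictured in a few lines. This is a minimal sketch: the window representation, step size, and all function names are hypothetical, not taken from the CADI implementation.

```python
from itertools import product

def candidate_windows(win, step):
    """Possible next viewing windows: pans of one step in the 8 compass
    directions, plus staying on the current window."""
    x, y, w, h = win
    return [(x + dx * step, y + dy * step, w, h)
            for dx, dy in product((-1, 0, 1), repeat=2)]

def prefetch_queue(win, cache, step=64):
    """Equiprobable model: every candidate window is equally likely, so
    during idle time the proxy prefetches any candidate not yet cached."""
    cands = candidate_windows(win, step)
    p = 1.0 / len(cands)
    return [(c, p) for c in cands if c not in cache]

# The current window itself is already cached, so only the 8 pans are queued.
queue = prefetch_queue((128, 128, 256, 256), cache={(128, 128, 256, 256)})
```

The second and third strategies would replace the uniform `p` with probabilities taken from a user-navigation model or from a semantic map of the image.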
APA, Harvard, Vancouver, ISO, and other styles
15

Jiménez, Rodríguez Leandro. "Interactive transmission and visually lossless strategies for JPEG2000 imagery." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/283654.

Full text
Abstract:
Every day, videos and images are transmitted over the Internet. Image compression reduces the total amount of data transmitted and accelerates its delivery. In video-on-demand scenarios, the video has to be transmitted as fast as possible employing the available channel capacity, so image compression is mandatory for fast transmission. Commonly, videos are coded allowing quality loss in every frame, which is referred to as lossy compression. Lossy coding schemes are the most used for Internet transmission due to their high compression ratios. Another key feature in video-on-demand scenarios is the channel capacity: depending on the capacity, a rate allocation method decides the amount of data that is transmitted for every frame. Most rate allocation methods aim at achieving the best quality for a given channel capacity. In practice, the channel bandwidth may suffer variations in its capacity due to traffic congestion or problems in its infrastructure. These variations may cause buffer under-/over-flows in the client, which cause pauses while playing a video. The first contribution of this thesis is a JPEG2000 rate allocation method for time-varying channels. Its main advantage is that it allows fast processing while achieving transmission quality close to optimal. Although lossy compression is the most used to transmit images and videos over the Internet, when image quality loss is not allowed, lossless compression schemes must be used. Lossless compression may not be suitable in some scenarios due to its lower compression ratios. To overcome this drawback, visually lossless coding regimes can be used. Visually lossless compression is a technique based on the human visual system that encodes only the visually relevant data of an image. It allows higher compression ratios than lossless compression, achieving losses that are not perceptible to the human eye. The second contribution of this thesis is a visually lossless coding scheme aimed at JPEG2000 imagery that is already coded. The proposed method permits the decoding and/or transmission of images in a visually lossless regime.
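As a rough illustration of rate allocation under a time-varying channel, the following greedy sketch picks, per frame, the largest quality-layer size that the channel budget and the client buffer can absorb without stalling playback. This is a toy stand-in under assumed inputs, not the thesis' actual allocator.

```python
def allocate_layers(layer_sizes, capacities, frame_time, buf_init=0.0):
    """Greedy per-frame allocation: bytes arrive at the measured channel
    capacity; each frame we play the biggest coded layer the buffer holds,
    so playback never pauses on a buffer under-flow."""
    buf = buf_init
    chosen = []
    for cap in capacities:          # capacity estimate per frame interval
        buf += cap * frame_time     # bytes received during this interval
        best = max((s for s in layer_sizes if s <= buf), default=0)
        buf -= best                 # bytes consumed by playing this frame
        chosen.append(best)
    return chosen

# A capacity drop from 4000 to 1000 bytes/s forces a smaller quality layer.
plan = allocate_layers([100, 200, 400], [4000, 1000], frame_time=0.1)
```

A real allocator would also look ahead at predicted capacity to keep quality smooth rather than reacting frame by frame.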
APA, Harvard, Vancouver, ISO, and other styles
16

Mendoza, Jose Antonio. "Hardware and Software Codesign of a JPEG2000 Watermarking Encoder." Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9752/.

Full text
Abstract:
Analog technology has been around for a long time, and its use is necessary since we live in an analog world. However, the transmission and storage of analog signals is more complicated, and in many cases less efficient, than with digital technology. Digital data, on the other hand, can be transmitted and stored quickly, and digital technology continues to grow and is more widely used than ever before. However, the advent of new technology that can reproduce digital documents or images with unprecedented accuracy poses a risk to the intellectual rights of many artists and also to personal security. One way to protect the intellectual rights of digital works is to embed watermarks in them. The watermarks can be visible or invisible depending on the application and the final objective of the intellectual work. This thesis deals with watermarking images in the discrete wavelet transform domain. The watermarking process was done using the JPEG2000 compression standard as a platform. The hardware implementation was achieved using the ALTERA DSP Builder and SIMULINK software to program the DE2 ALTERA FPGA board. The JPEG2000 color transform and the wavelet transformation blocks were implemented using the hardware-in-the-loop (HIL) configuration.
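A generic DWT-domain embedding rule, of the kind the abstract alludes to, might look as follows. The additive rule and the strength `alpha` are illustrative assumptions, not the thesis' actual scheme.

```python
def embed_watermark(coeffs, bits, alpha=0.1):
    """Scale-proportional additive embedding: each watermark bit nudges a
    wavelet coefficient up or down by a fraction of its own magnitude,
    keeping the change small relative to the local signal energy."""
    out = list(coeffs)
    for i, b in enumerate(bits):
        out[i] += alpha * (1.0 if b else -1.0) * abs(out[i])
    return out

marked = embed_watermark([10.0, -10.0, 5.0], [1, 0, 1])
```

Detection would correlate the received coefficients against the watermark pattern; embedding in mid-frequency subbands is the usual compromise between robustness and visibility.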
APA, Harvard, Vancouver, ISO, and other styles
17

Goudia, Dalila. "Tatouage conjoint a la compression d'images fixes dans JPEG2000." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20198.

Full text
Abstract:
Technological advances in the fields of telecommunications and multimedia during the last two decades have led to novel image processing services such as copyright protection, data enrichment, and information hiding applications. There is a strong need for low-complexity applications that perform several image processing services within a single system. In this context, the design of joint systems has attracted researchers in recent years. Data hiding techniques embed an invisible message within a multimedia content by modifying the media data, in such a way that the hidden data is not perceptible to an observer. Digital watermarking is one type of data hiding; the watermark should be resistant to a variety of manipulations called attacks. The purpose of image compression is to represent images with less data in order to save storage costs or transmission time. Compression is generally unavoidable for transmission or storage purposes and is considered one of the most destructive attacks on hidden data. JPEG2000 is the latest ISO/ITU-T standard for still image compression. In this thesis, joint compression and data hiding is investigated in the JPEG2000 framework. Instead of treating data hiding and compression separately, it is interesting and beneficial to look at the joint design of a data hiding and compression system. The joint approach has many advantages, the most important being that compression is no longer considered an attack by the data hiding. The main constraints that must be considered are trade-offs between payload, compression bitrate, distortion induced by the insertion of the hidden data or the watermark, and robustness of watermarked images in the watermarking context. We have proposed several joint JPEG2000 compression and data hiding schemes, two of which are watermarking systems. All the embedding strategies proposed in this work are based on Trellis Coded Quantization (TCQ). We exploit the channel coding properties of TCQ to reliably embed data during the quantization stage of the JPEG2000 Part 2 codec.
APA, Harvard, Vancouver, ISO, and other styles
18

Jagiello, Kristin, Mahmut Zafer Aydin, and Wei-Ren Ng. "Joint JPEG2000/LDPC Code System Design for Image Telemetry." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606217.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
This paper considers the joint selection of the source code rate and channel code rate in an image telemetry system. Specifically considered is the JPEG2000 image coder and an LDPC code family. The goal is to determine the optimum apportioning of bits between the source and channel codes for a given channel signal-to-noise ratio and total bit rate, R(total). Optimality is in the sense of maximum peak image SNR and the tradeoff is between the JPEG2000 bit rate R(source) and the LDPC code rate R(channel). For comparison, results are included for the industry standard rate-1/2, memory-6 convolutional code.
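The apportioning described above is a one-dimensional search: fixing R(total), each candidate channel rate r_c leaves r_s = r_c × R(total) bits for JPEG2000, and the expected quality trades source fidelity against decoding failures. A hedged sketch follows; both the quality and the failure-probability models are hypothetical placeholders, not the paper's measured curves.

```python
def best_split(r_total, channel_rates, psnr_of, fail_prob):
    """Exhaustive search over an LDPC code-rate family for the split that
    maximizes expected image PSNR at a fixed total bit rate."""
    def expected_psnr(r_c):
        r_s = r_c * r_total            # bits/pixel left for JPEG2000
        return (1.0 - fail_prob(r_c)) * psnr_of(r_s)
    return max(channel_rates, key=expected_psnr)

# Toy models: quality grows with source rate; codes above rate 1/2 start
# failing at this channel SNR (numbers invented for illustration).
psnr = lambda r_s: 30.0 + 10.0 * r_s
fail = lambda r_c: 0.0 if r_c <= 0.5 else 0.5
split = best_split(1.0, [0.25, 0.5, 0.75], psnr, fail)
```

In practice the search would be repeated per channel SNR, yielding the operational rate-allocation table the paper's curves summarize.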
APA, Harvard, Vancouver, ISO, and other styles
19

Омельченко, А. В., Р. В. Самарський, and С. А. Наталюк. "Аналіз підходів до контролю завантаженості мережі." Thesis, Scientific Publishing Center "Sci-conf.com.ua", 2021. https://openarchive.nure.ua/handle/document/16464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Halsteinli, Erlend. "Real-Time JPEG2000 Video Decoding on General-Purpose Computer Hardware." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8996.

Full text
Abstract:

There is widespread use of compression in multimedia content delivery, e.g. within video-on-demand services and transport links between live events and production sites. The content must undergo compression prior to transmission in order to deliver high quality video and audio over most networks; this is especially true for high definition video content. JPEG2000 is a recent image compression standard and a suitable compression algorithm for high definition, high rate video. With its highly flexible embedded lossless and lossy compression scheme, JPEG2000 has a number of advantages over existing video codecs. The only evident drawbacks with respect to real-time applications are that the computational complexity is quite high and that JPEG2000, being an image codec as opposed to a video codec, typically has higher bandwidth requirements. Special-purpose hardware can deliver high performance, but is expensive and not easily updated. A JPEG2000 decoder application running on general-purpose computer hardware can complement solutions that depend on special-purpose hardware, and its performance scales with the available processing power. In addition, once developed, production costs are non-existent. The application implemented in this project is a streaming media player. It receives a compressed video stream through an IP interface, decodes it frame by frame and presents the decoded frames in a window. The decoder is designed to take better advantage of the processing power available in today's desktop computers. Specifically, decoding is performed on both CPU and GPU in order to decode at least 50 frames per second of a 720p JPEG2000 video stream. The CPU-executed part of the decoder application is written in C++, based on the Kakadu SDK, and covers all decoding steps up to and including the reverse wavelet transform. The GPU-executed part of the decoder is enabled by the CUDA programming language, and includes luma upsampling and the irreversible color transform. Results indicate that general-purpose computer hardware today can easily decode JPEG2000 video at bit rates up to 45 Mbit/s. However, when the video stream is received at 50 fps through the IP interface, packet loss at the socket level limits the attained frame rate to about 45 fps at rates of 40 Mbit/s or lower. If this packet loss could be eliminated, real-time decoding would be obtained up to 40 Mbit/s. At rates above 40 Mbit/s, the attained frame rate is limited by the decoder performance rather than by packet loss. Higher codestream rates should be attainable if the reverse wavelet transform could be moved from the CPU to the GPU, since the current pipeline is highly unbalanced.

APA, Harvard, Vancouver, ISO, and other styles
21

Ouled, Zaid Azza. "Amélioration des performances des systèmes de compression JPEG et JPEG2000." Poitiers, 2002. http://www.theses.fr/2002POIT2294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Pu, Lingling, Zhenyu Wu, Ali Bilgin, Michael W. Marcellin, and Bane Vasic. "LDPC-BASED ITERATIVE JOINT SOURCE/CHANNEL DECODING SCHEME FOR JPEG2000." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605781.

Full text
Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
This paper presents a joint source-channel decoding scheme based on a JPEG2000 source coder and an LDPC channel coder. At the encoder, JPEG2000 is used to perform source coding with certain error resilience (ER) modes, and LDPC codes are used to perform channel coding. At the decoder, after one iteration of LDPC decoding, the output codestream is decoded by JPEG2000. With the error resilience mode switched on, the source decoder detects the position of the first error within each codeblock of the JPEG2000 codestream. This information is fed back to the channel decoder and incorporated into the calculation of the likelihood values of the variable nodes for the next iteration of LDPC decoding. Our results indicate that the proposed method has significant gains over conventional separate channel and source decoding.
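The feedback step can be pictured as pinning the LLRs of bits the source decoder has verified: everything before the first detected error in a codeblock is treated as nearly certain in the next LDPC iteration. A minimal sketch follows; the boost constant and the interface are assumptions, not the paper's exact formulation.

```python
def pin_verified_bits(llrs, first_error_pos, boost=20.0):
    """Bits preceding the first error found by JPEG2000's error-resilience
    tools keep their signs but get saturated log-likelihood magnitudes,
    effectively feeding hard information back to the LDPC decoder."""
    out = list(llrs)
    for i in range(min(first_error_pos, len(out))):
        out[i] = boost if out[i] >= 0 else -boost
    return out

updated = pin_verified_bits([0.3, -1.2, 0.1, -0.4], first_error_pos=2)
```

The outer loop then alternates LDPC belief propagation and JPEG2000 ER decoding until the codeblock decodes cleanly or an iteration limit is reached.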
APA, Harvard, Vancouver, ISO, and other styles
23

Nam, Ju-Hun, Byeong-Doo Choi, Sung-Jea Ko, Bok-Ki Kim, Woon-Moon Lee, Nam-Sik Lee, and Jea-Taeg Yu. "IMPLEMENTATION OF REAL-TIME AIRBORNE VIDEO TELEMETRY SYSTEM." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/605038.

Full text
Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
In this paper, we present an efficient real-time implementation technique for Motion JPEG2000 video compression and its reconstruction, used in a real-time airborne video telemetry system. Motion JPEG2000 and a 256-channel PCM encoder are used for source coding in the developed system. In particular, when multiplexing and demultiplexing the PCM-encoded data, we use a continuous bit-stream format so that any de-commutator can use it directly after demultiplexing. Experimental results show that the proposed technique is a practical and efficient DSP solution.
APA, Harvard, Vancouver, ISO, and other styles
24

Nguyen, Cung. "Fault tolerance analysis and design for JPEG-JPEG2000 image compression systems /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2004. http://uclibs.org/PID/11984.

Full text
Abstract:
Thesis (Ph. D.)--University of California, Davis, 2004.
Degree granted in Electrical and Computer Engineering. Library does not have original title page. Also available via the World Wide Web. (Restricted to UC campuses)
APA, Harvard, Vancouver, ISO, and other styles
25

Natu, Ambarish Shrikrishna. "Error resilience in JPEG2000 /." 2003. http://www.library.unsw.edu.au/~thesis/adt-NUN/public/adt-NUN20030519.163058/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Jeng-wei, and 陳政威. "JPEG2000 Adaptive Quantizer Design." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/44e5d4.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Computer Science and Information Engineering
94
In this paper, we propose an image compression system integrating the Discrete Wavelet Transform (DWT) with an adaptive quantizer design. The design considers the selection of quantization tables, the appropriate length of the quantization intervals, and the DWT sub-bands the quantization is applied to. The proposed system can be efficiently employed in a variety of applications such as digital cameras and computer vision. DWT has been applied to image compression for years, since it provides a considerable compression rate and supports multi-resolution representation. However, when the DWT coefficients are quantized by a fixed quantizer, the resulting image quality varies from case to case: images with different characteristics yield different system performance. With an adaptive quantizer, users can select lossy or lossless mode according to their own requirements for image quality, and the system performance becomes a stable parameter of the compression system.
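One way to realize such an adaptive quantizer is to pick the step size per sub-band from a mode-dependent table, with lossless mode bypassing quantization entirely. The sketch below is a hypothetical illustration of that idea; the table values and the energy-scaling rule are not from the thesis.

```python
def pick_steps(subband_energy, mode, base_steps):
    """Lossless mode keeps every coefficient (step 1); lossy modes start
    from a table value and quantize low-energy sub-bands more coarsely."""
    if mode == "lossless":
        return {sb: 1.0 for sb in subband_energy}
    base = base_steps[mode]
    peak = max(subband_energy.values())
    return {sb: base * (2.0 - e / peak)     # busiest sub-band gets step=base
            for sb, e in subband_energy.items()}

steps = pick_steps({"LL": 100.0, "HH": 25.0}, "lossy_med", {"lossy_med": 4.0})
```

Making the step depend on measured sub-band statistics is what stabilizes quality across images with different characteristics.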
APA, Harvard, Vancouver, ISO, and other styles
27

Chiu, Kuo-En, and 邱國恩. "VLSI Architecture of JPEG2000 Codec." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/73652927005268307145.

Full text
Abstract:
Master's thesis
I-Shou University
Master's Program, Department of Electrical Engineering
94
The hierarchical modular design and hardware implementation of a JPEG2000 codec are presented in this thesis. The codec includes three major modules: DWT, quantizer, and EBCOT Tier-1. Under the constraint of reducing hardware resource requirements, we elaborate a high-performance architecture and a discrete-event model for each functional module. Through a hardware synthesis methodology, the RTL hardware of each functional module is generated rapidly. The synthesized circuit possesses a distributed architecture, good extensibility, and ease of system integration. Experimental results show that the proposed JPEG2000 codec achieves a satisfactory coding/decoding speed of 20 frames/sec on 512x512 images with reduced resource requirements.
APA, Harvard, Vancouver, ISO, and other styles
28

Tsai, Ming-Wei, and 蔡明衛. "Watermarking of JPEG2000 Compressed Images." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/09097209423047377493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Lin, Chien-sheng, and 林建勝. "A Perceptually Optimized JPEG2000 Encoder." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/56289061073509369148.

Full text
Abstract:
Master's thesis
Tatung University
Department of Electrical Engineering
93
Driven by a growing demand for transmission of visual data over media with limited capacity, increasing efforts have been made to strengthen compression techniques while maintaining good visual quality of the compressed image by means of a human visual model. JPEG2000 is the new ISO/ITU standard for still image compression. The multi-resolution wavelet decomposition and the two-tier coding structure of JPEG2000 make it suitable for incorporating a human visual model into the coding algorithm, but the JPEG2000 coder is intrinsically a rate-based distortion minimization algorithm, by which different images coded at the same bit rate always result in different visual qualities. This research focuses on enhancing the performance of the JPEG2000 coder by effectively excluding perceptually redundant signals from the coding process, such that color images encoded at low bit rates have consistent visual quality. By considering the varying sensitivities of human visual perception to luminance and chrominance signals of different spatial frequencies, the full-band JND (just-noticeable-difference) profile of each color channel is decomposed into component JND profiles for the different wavelet subbands. Using the error visibility thresholds provided by the JND profile of each subband, the perceptually insignificant wavelet coefficients in the three color channels are first removed. Without altering the format of the compressed bit stream, the encoder is modified in such a way that the bit rate is inversely correlated with the perceptible distortion rather than the mean-square-error distortion. Compared to the JPEG2000 standard, the proposed algorithm removes more perceptual redundancy from the original image, and the visual quality of the reconstructed image is much more acceptable at low rates.
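The removal of perceptually insignificant coefficients amounts to thresholding each wavelet sub-band against its JND profile. A minimal sketch follows; per-sub-band scalar thresholds are a simplification of the full JND profiles the thesis derives.

```python
def apply_jnd(coeffs, jnd_threshold):
    """Coefficients whose magnitude stays below the sub-band's
    just-noticeable-difference threshold are zeroed: the induced error is
    invisible, so the bits they would cost are saved."""
    return {sb: [c if abs(c) >= jnd_threshold[sb] else 0.0 for c in cs]
            for sb, cs in coeffs.items()}

pruned = apply_jnd({"HL1": [0.5, -3.0, 1.0]}, {"HL1": 1.0})
```

Because only coefficient values change, the bit stream produced afterwards remains a standard JPEG2000 codestream, as the abstract notes.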
APA, Harvard, Vancouver, ISO, and other styles
30

FU, SHENG-ZONG, and 傅聖中. "Hardware Implementation of JPEG2000 Encoder." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/17910103004386463997.

Full text
Abstract:
Master's thesis
National United University
Master's Program, Department of Electronic Engineering
105
In 2000, the Joint Photographic Experts Group committee published the image compression standard JPEG2000, which is DWT-based and supports both lossy and lossless compression. Because JPEG2000 packetizes the compressed image data, it supports flexible image transmission modes such as progressive and scalable transmission. The core of JPEG2000 consists of three schemes: DWT, Embedded Block Coding with Optimal Truncation (EBCOT), and the MQ-coder. Most previous works on JPEG2000 architectures focused on architectural alterations and performance improvements of an individual scheme; a portion of the studies investigated the relationship between EBCOT and the MQ-coder, but no study has investigated the overall core architecture. Our work therefore investigates the overall JPEG2000 core architecture, with architectural alterations as well as performance improvements of the individual schemes. In hardware design, pipelining and parallel processing reduce the amount of memory required and increase execution speed, so we integrated both techniques in our 2-D DWT design. Our 2-D DWT can directly process an entire image tile of size N*N, and comparison with other works shows that our design greatly reduces the amount of memory and the logic component count. In EBCOT, we extended the pass-parallel method to develop our design; because our EBCOT architecture can process an entire code block directly, it reduces the number of registers and the computing time. Previous MQ-coder designs cost considerable running time because they used individual schemes to perform each MQ coding pass, whereas our fully pipelined and parallel design increases execution speed. Since the proposed JPEG2000 encoder integrates our 2-D DWT architecture with the novel EBCOT and MQ coders to process whole code blocks, it achieves better performance than other works.
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Tzu-Ya, and 王姿雅. "Prototype Verification of JPEG2000 Encoder." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/30897921800195638512.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
97
In this thesis, we target the Tier-2 part of the JPEG2000 image coding procedure, realize its hardware architecture, and reduce coding complexity by optimizing memory accesses. We also set up an environment that simulates AMBA bus behavior to carry out functional verification of the JPEG2000 encoder prototype.
APA, Harvard, Vancouver, ISO, and other styles
32

Lin, Chien-Sheng, and 林建勝. "A Perceptually Optimized JPEG2000 Encoder." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/55151734289221938158.

Full text
Abstract:
Master's thesis
Tatung University
Graduate Institute of Electrical Engineering
92
Driven by a growing demand for transmission of visual data over media with limited capacity, increasing efforts have been made to strengthen compression techniques while maintaining good visual quality of the compressed image by means of a human visual model. JPEG2000 is the new ISO/ITU standard for still image compression. The multi-resolution wavelet decomposition and the two-tier coding structure of JPEG2000 make it suitable for incorporating a human visual model into the coding algorithm, but the JPEG2000 coder is intrinsically a rate-based distortion minimization algorithm, by which different images coded at the same bit rate always result in different visual qualities. This research focuses on enhancing the performance of the JPEG2000 coder by effectively excluding perceptually redundant signals from the coding process, such that color images encoded at low bit rates have consistent visual quality. By considering the varying sensitivities of human visual perception to luminance and chrominance signals of different spatial frequencies, the full-band JND (just-noticeable-difference) profile of each color channel is decomposed into component JND profiles for the different wavelet subbands. Using the error visibility thresholds provided by the JND profile of each subband, the perceptually insignificant wavelet coefficients in the three color channels are first removed. Without altering the format of the compressed bit stream, the encoder is modified in such a way that the bit rate is inversely correlated with the perceptible distortion rather than the mean-square-error distortion. Compared to the JPEG2000 standard, the proposed algorithm removes more perceptual redundancy from the original image, and the visual quality of the reconstructed image is much more acceptable at low rates.
33

Chen, Shi-Jin, and 陳錫錦. "Implementation and Verification of JPEG2000." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/38550405805809161454.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
ROC year 91
Design verification at the system level has been one of the most important and challenging jobs in system design. This is because hardware and software have traditionally been developed separately, and this sequential development process increases time-to-market. In addition, a system-on-chip comprises many components such as processors, timers, buses, memories and embedded software, so designs are getting bigger in size and larger in complexity. Furthermore, hardware is often implemented at the RTL level, whose simulation takes much time. These factors demand that designers describe designs at higher levels of abstraction. The purpose of this thesis is to investigate and practice system-level integration and verification. We use JPEG2000, an image compression standard, as a design vehicle to explore system verification problems using SystemC. The major contributions of this thesis are summarized as follows. First, we implement and verify the EBCOT hardware design; to speed up the design, we use pixel skipping, column-based operation and pipelining techniques. Second, we build the entire JPEG2000 system using both SystemC and RTL Verilog; this system allows us to verify the correctness of software and hardware together, and we can use it to perform lossless compression. Finally, we have integrated and co-simulated the SystemC and Verilog models together using a tool called Cocentric.
34

Sokolova, V. K. (Соколова, В. К.). "Characteristics of the JPEG 2000 Standard for Coding Raster Images." Thesis, 2020. http://openarchive.nure.ua/handle/document/13953.

Full text
Abstract:
The JPEG 2000 standard allows the choice of values for numerous raster image coding parameters, which substantially affect the size of the code stream and the quality of the reconstructed image. During decoding, the parameter values are read from the code-stream headers, and the decoder must ensure correct reconstruction of the source image.
35

Huang, Yi-Wei, and 黃奕桅. "Image Coding System Based on JPEG2000." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/07198912776558498245.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
ROC year 90
JPEG2000 is the next-generation still image compression standard. Besides higher rate-distortion performance and better error resilience than JPEG, it provides many features, such as progressive decoding by resolution, progressive decoding by quality, random access, region-of-interest coding, compressed-domain processing, and rate-distortion optimization. In this paper, we survey several theories used in JPEG2000 and apply them to realize our image coding system, which provides progressive decoding by resolution, random access, and rate-distortion optimization. Finally, we compare distortion versus compression rate and compression/decompression time among our system, JPEG2000, and JPEG.
36

Sung, Chieh-Hsiu, and 宋杰修. "A study on JPEG2000 with Applications." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/94183391428845263485.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
ROC year 95
To satisfy the requirements of multimedia technologies, image compression requires higher performance, and how to enhance the compression ratio becomes more and more important. Therefore, a new still image compression standard, JPEG2000, has been developed. The JPEG2000 encoding system adopts the Discrete Wavelet Transform (DWT) and Embedded Block Coding with Optimized Truncation (EBCOT). Compared with JPEG, JPEG2000 provides several features such as lossy and lossless compression, ROI (region of interest) coding, random access, progressive transmission and good error resilience. It may replace JPEG as the most popular image compression format in the near future.
37

Chen, Guan-Fu, and 陳冠甫. "An Adaptive Quantization Scheme for JPEG2000." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/54759016045566717314.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Computer Science and Information Engineering
ROC year 95
JPEG2000 is a common image compression standard that has attracted much attention in recent years. Based on the DWT (Discrete Wavelet Transform), we propose an adaptive quantization approach that quantizes the coefficients according to their characteristics. Compared with JPEG2000 quantization, which uses the same quantization interval for all images, the proposed adaptive quantization leads to better image quality according to the experimental results.
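As context for the adaptive scheme this abstract proposes, the baseline it compares against is JPEG2000's dead-zone scalar quantizer, which keeps the sign and truncates the magnitude toward zero. Below is a minimal sketch of that baseline plus one illustrative adaptive rule (scaling the step by subband energy); the adaptive rule is an assumption for illustration, not the thesis's actual scheme:

```python
import numpy as np

def deadzone_quantize(coeffs, step):
    """JPEG2000-style dead-zone scalar quantizer: the sign is kept and
    the magnitude is divided by the step size and truncated toward zero."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def dequantize(indices, step):
    # Mid-point reconstruction for nonzero indices; zeros stay zero.
    return np.sign(indices) * (np.abs(indices) + 0.5) * step * (indices != 0)

def adaptive_step(subband, base=2.0):
    """Illustrative adaptive rule: scale the step with subband RMS energy."""
    rms = np.sqrt(np.mean(subband ** 2))
    return base * max(rms, 1e-6)

coeffs = np.array([-7.2, -0.4, 0.0, 1.9, 12.5])
q = deadzone_quantize(coeffs, step=2.0)
assert np.array_equal(q, np.array([-3.0, 0.0, 0.0, 0.0, 6.0]))
```

The "dead zone" is visible in the example: every coefficient with magnitude below one full step (here -0.4 and 1.9) maps to zero, which is exactly where a content-aware choice of step size changes the quality/rate trade-off.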
38

Wang, Li-Jhong, and 王立中. "A Fast Browsing System for JPEG2000." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/24169714255856434489.

Full text
Abstract:
Master's thesis
Kun Shan University
Graduate Institute of Digital Living Technology
ROC year 98
When users browse images in a large image database and may dynamically change ROIs, performance becomes more important than quality. Many techniques have been presented for both static and dynamic ROI (region of interest) coding in the JPEG2000 standard; however, none of them discusses the decoding performance when browsing an image database. In this paper, based on multiple progression orders and a mixed static and dynamic ROI coding style, we propose a method that shortens hard-disk access time via an appropriate arrangement of the coded file, and shortens decoding time via an incremental reformation of the file. Experimental results show that the performance is much better than with a single progression order.
39

Liu, Po-Wei, and 劉柏緯. "The Implementation of Image Compression JPEG2000." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/93627896519056774677.

Full text
Abstract:
Master's thesis
Da-Yeh University
Department of Electrical Engineering
ROC year 102
Image compression is an important foundation of image transmission, so image compression techniques play an important role in transmitting images. Images are compressed in two different ways: lossless and lossy. With lossless compression, decompressed images can be completely recovered, but the compression ratio is limited. With lossy compression, higher compression ratios are obtained, but the quality of the recovered images is lower than with lossless compression. JPEG2000 is currently one of the most popular image compression methods. In this work, we have used Matlab and the C language to program the JPEG2000 compression algorithm. Changing parameters in different parts of the compression yields different compression ratios, and we compare the ratios obtained under these parameter settings. Because human eyes are more sensitive to brightness than to chromaticity, the color space of the image is transformed before the discrete wavelet transform (DWT) is performed, and entropy coding then completes the image compression procedure. We vary parameters in different stages of the compression, discuss the corresponding quality and compression ratio, and determine which parameters dominate the result.
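The color-space change this abstract describes (exploiting the eye's greater sensitivity to brightness than chromaticity) can be sketched with JPEG2000's irreversible color transform (ICT), which maps RGB to one luminance and two chrominance channels before the DWT. The matrix is the standard ICT definition; the sample pixel is illustrative:

```python
import numpy as np

# JPEG2000's irreversible color transform (ICT), applied before the DWT
# so that the chrominance channels can be compressed more aggressively.
ICT = np.array([
    [ 0.299,     0.587,     0.114    ],   # Y  (luminance)
    [-0.168736, -0.331264,  0.5      ],   # Cb (blue chrominance)
    [ 0.5,      -0.418688, -0.081312 ],   # Cr (red chrominance)
])

def rgb_to_ycbcr(rgb):
    """rgb: (..., 3) array of samples; returns (..., 3) Y, Cb, Cr."""
    return rgb @ ICT.T

gray = np.array([128.0, 128.0, 128.0])
y, cb, cr = rgb_to_ycbcr(gray)
# A gray pixel carries no color information: zero chrominance.
assert abs(y - 128.0) < 1e-9
assert abs(cb) < 1e-9 and abs(cr) < 1e-9
```

After this transform, most of the perceptually important energy sits in the Y channel, so the Cb/Cr subbands can be quantized more coarsely with little visible loss.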
40

林建佑. "High Performance EBCOT Design of JPEG2000." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/55262093547634266659.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
ROC year 90
JPEG 2000 Part 1 is now an international standard. Although it provides features unavailable in other standards, many technical bottlenecks remain unsolved in its image processing algorithms, especially in the novel embedded block coding operations. In this thesis, we propose a new bit-level architecture for EBCOT. The architecture performs parallel coding to increase the throughput of context formation, and column skipping skips columns whose four bits all require no operation. In addition, in the memory structure, we separate the data and allocate it across nine memories. In the arithmetic encoder, a four-stage pipeline is used to reduce the clock cycle time, and a data-forwarding technique allows the pipeline to process two identical contexts input back-to-back. The proposed architecture achieves high throughput: we obtain an average 22% improvement in throughput compared with [2], and the design needs 0.385 seconds to encode a 2400x1800 image. This design can support further applications such as Motion-JPEG2000.
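The column-skipping idea in this abstract can be illustrated in a few lines: EBCOT's context formation scans code blocks in stripes four rows high, and a column whose four bits all require no operation can be bypassed entirely. A minimal sketch, assuming a simple bit array stands in for the per-pass significance data:

```python
import numpy as np

def columns_to_code(stripe_bits):
    """Return indices of stripe columns that contain at least one '1' bit.
    All-zero columns (four no-operation bits) are skipped, which is the
    idea behind column skipping in EBCOT context formation."""
    assert stripe_bits.shape[0] == 4  # a stripe is four rows high
    return [c for c in range(stripe_bits.shape[1])
            if stripe_bits[:, c].any()]

stripe = np.array([[0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 0, 0],
                   [0, 1, 0, 0]])
# Only columns 1 and 3 hold work; columns 0 and 2 are skipped.
assert columns_to_code(stripe) == [1, 3]
```

In hardware the same test is a 4-input OR per column, so skipped columns cost one cycle (or less) instead of four, which is where the throughput gain comes from.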
41

Lin, Hsin-Yi, and 林昕儀. "Design and Implementation of JPEG2000 Encoder." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/23353494450614424282.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering
ROC year 92
Scalability in resolution as well as image quality is the main attraction of JPEG2000. DWT (Discrete Wavelet Transform) and EBCOT (Embedded Block Coding with Optimized Truncation), the two major technologies that enable it, are however also the parts that demand huge storage and computation. To reduce the memory requirement, we combine five different computing orders of the DWT with level-by-level or mixed-level processing and find that the level-by-level optimal-z scan can reduce the temporal buffer in the DWT as well as the buffer between the DWT and EBCOT. We also adopt a new stripe-based computation order for EBCOT to further reduce the buffer between the DWT and EBCOT by 93.8%; the total buffer of the JPEG2000 encoder is thereby reduced to 66% of the original design. However, the stripe-based computing order increases computation time by 14%, so we propose a zero-stripe skipping technique that skips all-zero bit-planes; with this approach, we eliminate that overhead and reduce computation time by a further 0.22%. To reduce computational complexity, we share the multipliers and adders of the two directional DWT kernels, saving 1/3 of the area of the DWT module. For EBCOT, pass-level parallelism speeds up coding to three times the traditional processing rate and reduces memory accesses by 2/3; the gate count of the proposed context formation is 6.8% of that of other designs. Finally, we propose integrating our JPEG2000 encoding system using one DWT module with three embedded block coders, achieving a throughput of 55.6 Msamples/sec at a 100 MHz clock rate with lower cost and less memory.
42

Yen, Wen-Chi, and 顏文祺. "A Hardware/Software-Concurrent JPEG2000 Encoder." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/24263295495852495855.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
ROC year 92
We implement a JPEG2000 encoder based on an internally developed hardware/software co-design methodology, emphasizing the concurrent execution of hardware accelerator IPs and software running on the CPU. On a programmable SoC platform, hardware acceleration of the DWT and EBCOT Tier-1 executed sequentially gives a 70% reduction in total execution time; the proposed concurrent scheme achieves an additional 14% saving. We describe our experience in bringing up such a system.
43

Chen, Shih-Hau, and 陳世豪. "Background Information Hiding Method for JPEG2000." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/19648661070868168896.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Master's and PhD Program, Department of Electrical Engineering
ROC year 94
JPEG2000 is a new compression standard for still images that is gradually replacing JPEG. It offers better compression ratios and image quality than JPEG, and it provides special functions such as ROI coding and progressive transmission. Hence, research on information hiding in JPEG2000 with minimum degradation of the recovered image quality has become important. This paper proposes an approach that hides information in the background part of an image, because human eyes are not sensitive to background regions; the method preserves the quality of the object parts of the image and reduces the distortion of the recovered image. We apply this background information hiding method to image protection, to prevent images from being taken without the owner's consent. The method can be easily adapted to image protection applications, and experiments demonstrate that it is well suited to them.
44

Wu, Houn-Chien, and 吳鴻謙. "Using JPEG2000 in Aerial Photo Compression." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/42365468266315852130.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Civil Engineering
ROC year 91
In this study, the performance of JPEG2000 is evaluated for aerial photo compression. First, evaluation is performed on the full scene of the photos by visual analysis, and several indices are computed for different compression ratios with both JPEG2000 and JPEG, including SNR, PSNR, RMSE and entropy. Second, selected objects in the scene are studied. Finally, the effects of different JPEG2000 compression ratios on the DSM auto-matching procedure are evaluated using commercial software. We conclude from this study that when the compression ratio is greater than 30, JPEG2000 provides better image quality than JPEG. The image quality indices of JPEG2000 vary with the image content. Compared with JPEG, JPEG2000 has less influence on image quality and increases the matching ratio for DSM production.
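Two of the quality indices this study computes, RMSE and PSNR, are standard and can be sketched directly (a minimal NumPy version; the 4x4 sample image is synthetic, for illustration only):

```python
import numpy as np

def rmse(original, reconstructed):
    """Root-mean-square error between two images of equal shape."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for `peak`-valued samples."""
    e = rmse(original, reconstructed)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110  # a single 10-level error among 16 pixels
assert abs(rmse(a, b) - 2.5) < 1e-12
assert abs(psnr(a, b) - 20 * np.log10(255 / 2.5)) < 1e-9
```

Higher PSNR means less distortion, which is why comparisons like the one above (JPEG2000 vs. JPEG at ratios beyond 30) are typically reported as PSNR-versus-compression-ratio curves.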
45

Jia, Hwe-fen, and 賈惠芬. "A JPEG2000 Image Decoder Front-end." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/71009445521734164697.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
ROC year 89
We propose an architecture for the front end of a JPEG2000 image decoder, including the marker handler, arithmetic decoder and coefficient bit-modeling module. The whole JPEG2000 decoder system is made up of the proposed architecture, an IDWT block and an RGB-to-BMP format converter. We have completed the RTL design and verified its functional correctness with gate-level simulation. The circuit has been synthesized targeting a 0.35 μm CMOS cell library; its area is 24,700 gates and its speed is 40 MHz.
46

Huang, Chi-Wen, and 黃琪文. "AHB-based JPEG2000 Coprocessor System Design." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/s694zp.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
ROC year 92
Because JPEG2000 is the state-of-the-art image compression technology, our lab has been developing a high-performance JPEG2000 chip. We developed the QDWT (Quad Discrete Wavelet Transform), which is more efficient than the traditional DWT: the QDWT needs only a quarter of the computation time of the traditional DWT to generate the coefficients for EBCOT (Embedded Block Coding with Optimized Truncation). We also developed a high-performance AC (arithmetic entropy coder) with a pipeline architecture that uses only three pipeline stages to reach an input rate of one CX-D pair per clock cycle. We explain how to organize the best system architecture for small area and high throughput by arranging the system workflow properly and analyzing the timing of the individual modules. For an ASIC to be integrated into different systems, the IP issue must be addressed, so we wrapped the JPEG2000 encoder developed by our team in an AHB (Advanced High-performance Bus) slave interface. AMBA, drawn up by ARM, is an on-chip communication standard for designing high-performance embedded microcontrollers and is now widely used in the consumer electronics market, so the AHB-based JPEG2000 encoder we developed can be applied in an ARM-based embedded system. The contribution of this thesis is to integrate the QDWT, pass-parallel EBCOT Tier-1 and pipelined AC into a JPEG2000 coprocessor and show that this architecture really improves performance; in addition, we wrap the coprocessor in an AHB slave interface and make it cooperate with an ARM CPU to complete the JPEG2000 coding procedure.
47

Lin, Tsung-Ta, and 林宗達. "The VLSI Architecture Design of JPEG2000 Encoder." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/75112936611969139902.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Electrical Engineering
ROC year 96
The amount of memory required for code blocks is one of the most important issues in JPEG2000 encoder chip implementation. To overcome the drawbacks caused by the large code-block memory, this paper proposes a new JPEG2000 encoder architecture without code-block memory: we unify the output scanning order of the 2D DWT (discrete wavelet transform) with the processing scan order of EBCOT (embedded block coding with optimized truncation), so that the code-block memory can be completely eliminated. With the code-block memory removed, we propose a further approach to embedded block coding (EBC), code-block-switch adaptive embedded block coding (CS-AEBC), which skips insignificant bit-planes (IBP) to reduce computation time and save power. Besides, a new rate-distortion optimization (RDO) approach is proposed to reduce the computation time of lossy compression. The DWT used in this work is code-block-based and can process any tile size and any number of DWT decomposition levels. The proposed JPEG2000 encoder requires only 2.2 KB of internal memory, and the bandwidth required for the external memory is 2.1 B/cycle. Compared with other JPEG2000 architectures, our new approach has cost and performance advantages.
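The insignificant-bit-plane skipping mentioned in this abstract rests on a simple observation: EBCOT codes a block magnitude-bit-plane by magnitude-bit-plane from the MSB down, and any plane above the largest coefficient magnitude is all zero and need not be coded. A minimal sketch of detecting which planes actually carry data:

```python
import numpy as np

def significant_bitplanes(block):
    """Return the bit-plane indices (MSB first) that contain at least
    one '1' bit of coefficient magnitude; all-zero planes can be
    skipped, as in insignificant-bit-plane (IBP) skipping."""
    mags = np.abs(block).astype(np.int64)
    top = int(mags.max()).bit_length()
    return [p for p in range(top - 1, -1, -1)
            if np.any((mags >> p) & 1)]

block = np.array([[4, 0],
                  [8, 0]])   # magnitudes 4 (100b) and 8 (1000b)
# Planes 3 and 2 carry data; planes 1 and 0 are all zero and skippable.
assert significant_bitplanes(block) == [3, 2]
```

Since wavelet subbands of natural images are dominated by small coefficients, many code blocks have several empty top planes, so the skipped work (and the saved power) is substantial in practice.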
48

Tzeng, Chao-Feng, and 曾照峰. "Efficient Embedded Block Coding Architecture For JPEG2000." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/06153697579765648578.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Master's and PhD Program, Department of Electrical Engineering
ROC year 95
JPEG2000 is the new international standard for still image compression. It provides superior performance in terms of visual quality and PSNR compared with JPEG; however, its computational complexity is much higher. In this thesis, we present an efficient embedded block coding architecture for JPEG2000. For fractional bit-plane coding, the most complicated part of JPEG2000, the architecture can process a bit-plane within one scan, which greatly improves the processing rate. Moreover, the gate count and memory requirement of the hardware implementation are also reduced.
49

文亞南. "Implementation of pipelined arithmetic encoder in JPEG2000." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/39090163875121915466.

Full text
50

Chyan, Chun-An, and 簡崇安. "Design and Implementation of JPEG2000 EBCOT coder." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/44684953777007916715.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
ROC year 90
JPEG2000 is the newest standard for still image compression. In this thesis, we discuss the basic architecture of the JPEG2000 system, which can be viewed as an evolution of the image compression techniques of recent years. However, its key component, EBCOT, involves many bit-level computations and multiple scans, which makes JPEG2000 too slow for some applications when executed on a general-purpose CPU. We design and implement an ASIC to accelerate EBCOT; the cycles needed are reduced to about 45% of the original algorithm, and the clock rate reaches 133 MHz in our simulation.