Theses on the topic "JPEG"

Follow this link to see other types of publications on the topic: JPEG.

Cite a source in APA, MLA, Chicago, Harvard, and many other styles.


Consult the top 50 theses (master's and doctoral dissertations) on the research topic "JPEG".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read its abstract online, when one is available in the metadata.

Browse theses from many scientific fields and compile a correct bibliography.

1

Caplan, Paul Lomax. "JPEG : the quadruple object". Thesis, Birkbeck (University of London), 2013. http://bbktheses.da.ulcc.ac.uk/32/.

Full text
Abstract:
The thesis, together with its practice-research works, presents an object-oriented perspective on the JPEG standard. Using the object-oriented philosophy of Graham Harman as a theoretical and also practical starting point, the thesis looks to provide an account of the JPEG digital object and its enfolding within the governmental scopic regime. The thesis looks to move beyond accounts of digital objects and protocols within software studies that position the object in terms of issues of relationality, processuality and potentiality. From an object-oriented point of view, the digital object must be seen as exceeding its relations, as actual, present and holding nothing in reserve. The thesis presents an account of JPEG starting from that position as well as an object-oriented account of JPEG’s position within the distributed, governmental scopic regime via an analysis of Facebook’s Timeline, tagging and Haystack systems. As part of a practice-research project, the author looked to use that perspective within photographic and broader imaging practices as a spur to new work and also as a “laboratory” to explore Harman’s framework. The thesis presents the findings of those “experiments” in the form of a report alongside practice-research eBooks. These works were not designed to be illustrations of the theory, nor works to be “analysed”. Rather, following the lead of Ian Bogost and Mark Amerika, they were designed to be “philosophical works” in the sense of works that “did” philosophy.
2

Günther, Emanuel. "Entwicklung eines JPEG-Dateianalysators". Hochschule für Technik und Wirtschaft, 2021. https://htw-dresden.qucosa.de/id/qucosa%3A76129.

Full text
Abstract:
This work deals with the improvement and extension of an existing software project for streaming JPEG images via RTP. In the project, JPEG images are read from MJPEG files and transmitted from a server to a client. To guarantee error-free transmission, the suitability of the files for such transmission should be checked beforehand; the corresponding requirements can be found in RFC 2435, which standardizes the transmission of JPEG images via RTP. This check was automated in a dedicated program. Further improvements concern the executability of the project on different hardware: internal algorithms were improved to achieve smooth execution even on weaker hardware, and the project's RTSP implementation was made compatible with that of the VLC media player. Finally, the software project was extended with encryption of the transmission, based on an analysis of the requirements for encrypting media data. Two different approaches were examined and implemented: the widely used SRTP protocol, and a custom JPEG encryption. The complexity of developing one's own encryption scheme was then demonstrated by breaking the custom JPEG encryption with a replacement attack.
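A minimal sketch of the kind of pre-flight check RFC 2435 implies (not the thesis's actual analyzer): the RTP/JPEG payload format carries only baseline DCT JPEG and signals width and height in 8-pixel units in a single byte each, so each dimension must be a multiple of 8 and at most 2040 pixels. The file name is illustrative.

```python
import struct

def check_rtp_jpeg_suitability(path):
    """Rough pre-flight check of a JPEG frame against RFC 2435 limits.

    Assumed rules (a subset, for illustration): only baseline DCT (SOF0)
    maps onto the RTP/JPEG payload, and width/height must be multiples
    of 8 and no larger than 2040 pixels.
    """
    problems = []
    data = open(path, "rb").read()
    if data[:2] != b"\xff\xd8":
        return ["not a JPEG file (missing SOI marker)"]
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            problems.append("corrupt marker structure")
            break
        marker = data[i + 1]
        if marker == 0xDA:                      # SOS: entropy-coded data follows
            break
        if marker in (0xC1, 0xC2, 0xC3):        # non-baseline SOF variants
            problems.append("not baseline DCT (SOF0 required)")
        if marker == 0xC0:                      # SOF0: precision, height, width
            _, h, w = struct.unpack(">BHH", data[i + 4:i + 9])
            for name, v in (("width", w), ("height", h)):
                if v % 8:
                    problems.append(f"{name} {v} is not a multiple of 8")
                if v > 2040:
                    problems.append(f"{name} {v} exceeds 2040")
        i += 2 + struct.unpack(">H", data[i + 2:i + 4])[0]
    return problems

# problems = check_rtp_jpeg_suitability("frame0001.jpg")  # hypothetical file
```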
3

Jirák, Jakub. "Alternativní JPEG kodér/dekodér". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-317121.

Full text
Abstract:
JPEG is currently the most widely used image format. This work deals with the design and implementation of an alternative JPEG codec that uses proximal algorithms, combined with the fixation of points from the original image, to suppress the artifacts created by common JPEG coding. To solve the problem, the prox_TV and the Douglas-Rachford algorithms were used, for which special functions using the l_1-norm were derived for image reconstruction. The results of the proposed solution are very good: it can effectively suppress the created artifacts, and the result corresponds to an image compressed with a higher quality factor. The proposed method achieves very good results for both simple images and photographs, but for large images (1024 × 1024 px and above) it requires a large amount of computing time, so it is better suited to smaller images.
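A compact sketch of the general idea, under simplifying assumptions: alternate a total-variation proximal step with a projection onto the set of images whose blockwise DCT coefficients stay inside the quantization intervals implied by the decoded file. The alternation below is a plain iteration rather than the thesis's exact Douglas-Rachford splitting, an orthonormal DCT stands in for JPEG's scaled transform, and all parameters are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn
from skimage.restoration import denoise_tv_chambolle

def project_consistent(img, qcoef, Q):
    """Project an image estimate onto the set of images whose blockwise DCT
    coefficients fall in the quantization intervals of the decoded file.

    qcoef: (H//8, W//8, 8, 8) quantized coefficients, Q: 8x8 table.
    An orthonormal DCT is assumed in place of JPEG's scaled transform.
    """
    out = np.empty_like(img, dtype=float)
    for by in range(0, img.shape[0], 8):
        for bx in range(0, img.shape[1], 8):
            c = dctn(img[by:by+8, bx:bx+8] - 128.0, norm="ortho")
            q = qcoef[by // 8, bx // 8]
            c = np.clip(c, (q - 0.5) * Q, (q + 0.5) * Q)  # stay consistent
            out[by:by+8, bx:bx+8] = idctn(c, norm="ortho") + 128.0
    return out

def deblock(decoded, qcoef, Q, iters=30, tv_weight=2.0):
    """Alternate a TV proximal step with the data-consistency projection
    (a plain alternation, simplifying the Douglas-Rachford splitting)."""
    x = decoded.astype(float)
    for _ in range(iters):
        x = denoise_tv_chambolle(x, weight=tv_weight)  # prox of the TV term
        x = project_consistent(x, qcoef, Q)
    return np.clip(np.round(x), 0, 255).astype(np.uint8)
```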
4

Tong, Henry Hoi-Yu. "A perceptually adaptive JPEG coder". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ29417.pdf.

Full text
5

Tuladhar, Punnya. "Nonattribution Properties of JPEG Quantization Tables". ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1261.

Full text
Abstract:
In digital forensics, source camera identification for digital images has drawn attention in recent years. An image does contain information about its camera and/or editing software somewhere inside it. The interest of this research, however, is to identify the make and model of a camera using only header information from the JPEG encoding, such as the quantization tables and Huffman tables. After examining around 110,000 images, we reached the conclusion that, for all practical purposes, using quantization and Huffman tables alone to predict a camera make and model is not a viable approach. We found no correlation between the quantization and Huffman tables of images and camera makes. Rather, these tables are determined by quality factors of the image, such as resolution, RGB values, and intensity, together with the standard settings of the camera.
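The header data in question can be read without decoding the image. A small sketch using Pillow, whose JPEG loader exposes the quantization tables as a dict of 64-entry lists; hashing them gives a grouping key for the kind of large-scale comparison the thesis performs. The file name is illustrative.

```python
from PIL import Image
import hashlib

def qtable_fingerprint(path):
    """Return a short fingerprint of a JPEG's quantization tables.

    Pillow exposes the tables (in zig-zag order) as {table_id: [64 ints]}
    on JPEG images, without decoding any pixel data.
    """
    tables = Image.open(path).quantization      # e.g. {0: [...], 1: [...]}
    blob = b"".join(bytes(tables[k]) for k in sorted(tables))
    return hashlib.sha1(blob).hexdigest()[:12]

# Grouping images by fingerprint exposes the many-to-many relation the
# thesis reports: one camera emits many tables (quality settings), and
# many models share the standard tables.
# fp = qtable_fingerprint("IMG_0001.jpg")       # hypothetical file
```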
6

Goodenow, Daniel P. "A reference guide to JPEG compression /". Online version of thesis, 1993. http://hdl.handle.net/1850/11714.

Full text
7

Nguyen, Anthony Ngoc. "Importance Prioritised Image Coding in JPEG 2000". Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16005/1/Anthony_Nguyen_Thesis.pdf.

Full text
Abstract:
Importance prioritised coding is a principle aimed at improving the interpretability (or image content recognition) versus bit-rate performance of image coding systems. This can be achieved by (1) detecting and tracking image content or regions of interest (ROI) that are crucial to the interpretation of an image, and (2) compressing them in such a manner that enables ROIs to be encoded with higher fidelity and prioritised for dissemination or transmission. Traditional image coding systems prioritise image data according to an objective measure of distortion, and this measure does not correlate well with image quality or interpretability. Importance prioritised coding, on the other hand, aims to prioritise image contents according to an 'importance map', which provides a means for modelling and quantifying the relative importance of parts of an image. In such a coding scheme the importance in parts of an image containing ROIs would be higher than in other parts of the image. The encoding and prioritisation of ROIs means that the interpretability in these regions would be improved at low bit-rates. An importance prioritised image coder incorporated within the JPEG 2000 international standard for image coding, called IMP-J2K, is proposed to encode and prioritise ROIs according to an 'importance map'. The map can be automatically generated using image processing algorithms that result in a limited number of ROIs, or manually constructed by hand-marking ROIs using a priori knowledge. The proposed importance prioritised coder provides a user of the encoder with great flexibility in defining single or multiple ROIs with arbitrary degrees of importance and prioritising them using IMP-J2K. Furthermore, IMP-J2K codestreams can be reconstructed by generic JPEG 2000 decoders, which is important for interoperability between imaging systems and processes. The interpretability performance of IMP-J2K was quantitatively assessed using the subjective National Imagery Interpretability Rating Scale (NIIRS). The effect of importance prioritisation on image interpretability was investigated, and a methodology to relate the NIIRS ratings, ROI importance scores and bit-rates was proposed to facilitate NIIRS specifications for importance prioritised coding. In addition, a technique is proposed to construct an importance map by allowing a user of the encoder to use gaze patterns to automatically determine and assign importance to fixated regions (or ROIs) in an image. The importance map can be used by IMP-J2K to bias the encoding of the image to these ROIs, and subsequently to allow a user at the receiver to reconstruct the image as desired by the user of the encoder. Ultimately, with the advancement of automated importance mapping techniques that can reliably predict regions of visual attention, IMP-J2K may play a significant role in matching an image coding scheme to the human visual system.
8

Nguyen, Anthony Ngoc. "Importance Prioritised Image Coding in JPEG 2000". Queensland University of Technology, 2005. http://eprints.qut.edu.au/16005/.

Full text
9

Nguyen, Ricky D. (Ricky Do). "Rate control and bit allocations for JPEG transcoding". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41667.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (leaves 50-51).
An image transcoder that produces a baseline JPEG file from a baseline JPEG input is developed. The goal is to produce a high-quality image while accurately meeting a filesize target and keeping computational complexity (especially the memory usage and the number of passes over the input image) low. Building upon the work of He and Mitra, the JPEG transcoder exploits a linear relationship between the number of zero-valued quantized DCT coefficients and the bitrate. Using this relationship and a histogram of coefficients, it is possible to determine an effective way to scale the quantization tables of an image to approach a target filesize. As the image is being transcoded, an intra-image process makes minor corrections, saving more bits as needed throughout the transcoding of the image. This intra-image process decrements specific coefficients, minimizing the change in value (and hence image quality) while maximizing the savings in bitrate. The result is a fast JPEG transcoder that reliably achieves a target filesize and preserves as much image quality as possible. The proposed transcoder and several variations were tested on a set of twenty-nine images that gave a fair representation of typical JPEG photos. The evaluation metric consisted of three parts: first, the accuracy and precision of the output filesize with respect to the target filesize; second, the PSNR of the output image with respect to the original image; and third, the subjective visual image quality.
by Ricky D. Nguyen.
M.Eng.
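The He-Mitra observation can be illustrated in a few lines: treat the bitrate as roughly linear in the number of nonzero quantized coefficients, then search for the quantization-table scale whose predicted size meets the target. The model constant theta and the coarse search are illustrative assumptions, not the thesis's tuned procedure.

```python
import numpy as np

def choose_scale(coeffs, base_q, target_bits, theta,
                 scales=np.linspace(0.5, 8.0, 64)):
    """Pick a quantization-table scale factor for transcoding.

    coeffs:  dequantized DCT coefficients of the input, shape (N, 8, 8)
    base_q:  8x8 base quantization table
    theta:   assumed bits contributed per nonzero coefficient in the
             linear rho-domain model: bits ~ theta * nonzero_count
    Scales are tried from finest to coarsest; the first one whose
    predicted size meets the target keeps the most quality.
    """
    for s in scales:
        q = np.maximum(1, np.round(base_q * s))
        nonzero = np.count_nonzero(np.round(coeffs / q))
        if theta * nonzero <= target_bits:
            return s
    return scales[-1]
```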
10

Oh, Han, Ali Bilgin, and Michael Marcellin. "Visually Lossless JPEG 2000 for Remote Image Browsing". MDPI AG, 2016. http://hdl.handle.net/10150/621987.

Full text
Abstract:
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG 2000 codestream. This codestream is JPEG 2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG 2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results.
11

Gondlyala, Siddharth Rao. "Enhancing the JPEG Ghost Algorithm using Machine Learning". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20692.

Full text
Abstract:
Background: With the boom in the internet space and social media platforms, a large number of images are being shared. With this rise and advancements in technology, many image editing tools have become available, giving rise to digital image manipulation. Being able to recognize a forged image is vital to avoid misinformation or misrepresentation. This study focuses on splicing forgeries and on localizing the forged region in the tampered image. Objectives: The main purpose of the thesis is to extend the capability of the JPEG ghost model by localizing the tampering in the image. This is done by analyzing the difference curves formed by recompressions of the tampered image, and thereafter comparing the performance of the models. Methods: The study is carried out with two research methods: a literature review, whose main goal is gaining insight into existing studies in terms of the approaches and techniques followed, and an experiment, whose main goal is to improve the JPEG ghost algorithm by localizing the forged area in a tampered image and to compare three machine learning models on performance metrics. The machine learning models compared are Random Forest, XGBoost, and Support Vector Machine. Results: The performance of the above-mentioned models was compared on the same dataset. Results from the experiment showed that XGBoost had the best overall performance, with a Jaccard index of 79.8%. Conclusions: The research revolves around localization of the forged region in a tampered image using the concept of JPEG ghosts. We conclude that XGBoost performs best, followed by Random Forest and then Support Vector Machine.
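The JPEG ghost idea behind these features fits in a short sketch: re-save the image at a range of qualities and compute a block-averaged squared difference for each; around the quality at which a spliced region was previously compressed, its difference dips while the rest of the image does not. Block size, quality range, and the file name are illustrative.

```python
import io
import numpy as np
from PIL import Image

def jpeg_ghost_maps(img, qualities=range(30, 95, 5), block=16):
    """Return one block-averaged difference map per recompression quality.

    Regions previously compressed at quality q show a local minimum of
    the difference around q (the 'JPEG ghost'); maps like these are the
    kind of input the thesis feeds to ML models to localize the splice.
    """
    x = np.asarray(img.convert("L"), dtype=float)
    maps = {}
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        y = np.asarray(Image.open(buf).convert("L"), dtype=float)
        d = (x - y) ** 2
        h = (d.shape[0] // block) * block
        w = (d.shape[1] // block) * block
        d = d[:h, :w].reshape(h // block, block, w // block, block)
        maps[q] = d.mean(axis=(1, 3))     # average difference per block
    return maps

# maps = jpeg_ghost_maps(Image.open("suspect.jpg"))   # hypothetical file
```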
12

Zakaria, Ahmad. "Batch steganography and pooled steganalysis in JPEG images". Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS079.

Full text
Abstract:
Batch steganography consists of hiding a message by spreading it across a set of images, while pooled steganalysis consists of analyzing a set of images to decide whether or not a hidden message is present. There are many strategies for spreading a message, and it is reasonable to assume that the steganalyst does not know which one is being used, though it can be assumed that the steganographer uses the same embedding algorithm for all images. In this case, it can be shown that the most appropriate solution for pooled steganalysis is to use a single quantitative detector (i.e., one that predicts the size of the hidden message), to evaluate for each image the size of the hidden message (which may be zero if there is none), and to average the sizes (finally treated as scores) obtained over all the images. What would be the optimal solution if the steganalyst could instead discriminate the spreading strategy among a set of known strategies? Could the steganalyst use a pooled steganalysis algorithm better than averaging the scores? Could the steganalyst obtain results close to the so-called "clairvoyant" scenario, in which the steganalyst is assumed to know the spreading strategy exactly? In this thesis, we try to answer these questions by proposing a pooled steganalysis architecture based on a quantitative image detector and an optimized score-pooling function. The first contribution is a study of quantitative steganalysis algorithms, in order to decide which one is best suited to pooled steganalysis. For this purpose, we propose to extend the comparison to binary steganalysis algorithms, and we propose a methodology to convert binary steganalysis results into quantitative ones and vice versa. The core of the thesis lies in the second contribution. We study the scenario where the steganalyst does not know the spreading strategy, and propose an optimized pooling function over the results, based on a set of spreading strategies, which improves the accuracy of pooled steganalysis compared to a simple average. This pooling function is computed using supervised learning techniques. Experimental results obtained with six different spreading strategies and a state-of-the-art quantitative detector confirm our hypothesis: our pooling function gives results close to those of a clairvoyant steganalyst who is assumed to know the spreading strategy. Keywords: multimedia security, batch steganography, pooled steganalysis, machine learning.
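The two pooling rules being compared are easy to state in code: the baseline simply averages the per-image payload estimates, while the optimized rule feeds an order-invariant summary of the score set to a trained model. The feature choice and the classifier named in the comment are illustrative assumptions, not the thesis's exact design.

```python
import numpy as np

def average_pooling(scores):
    """Baseline pooled steganalysis: mean of per-image payload estimates."""
    return float(np.mean(scores))

def score_features(scores):
    """Order-invariant summary of a score set, as input to a supervised
    pooling function (an illustrative feature choice)."""
    s = np.sort(np.asarray(scores, dtype=float))
    return np.array([s.mean(), s.std(), s.max(),
                     np.median(s), np.quantile(s, 0.9)])

# A learned pooler g(features) -> cover/stego can then be any classifier
# trained on image sets produced by the candidate spreading strategies, e.g.:
# from sklearn.linear_model import LogisticRegression
# pooler = LogisticRegression().fit(F_train, y_train)
```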
13

Nilsson, Henrik. "Gaze-based JPEG compression with varying quality factors". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18237.

Full text
Abstract:
Background: With the rise of streaming services such as cloud gaming, a fast internet connection is required for a good overall experience; the average internet connection does not meet the requirements of cloud gaming, where high quality and frame rate are important. A solution to this problem would be to display the parts of an image the user is looking at in higher quality than the rest of the image. Objectives: The objective of this thesis is to create a gaze-based lossy image compression algorithm that reduces quality where the user is not looking. Using different radial functions to determine the quality decrease, the perceptual quality is compared to traditional JPEG compression, as is the difference in storage when using gaze-based lossy compression instead of the plain JPEG algorithm. Methods: A gaze-based image compression algorithm, based on the JPEG algorithm, is developed with DirectX 12. The algorithm uses a Tobii eye tracker to determine where the user is gazing at the screen; whenever the gaze position changes, the algorithm is run again to recompress the image. A user study is conducted to test the perceived quality of this algorithm compared to traditional lossy JPEG compression. Two different radial functions are tested with various parameters to determine which offers the best perceived quality, and the algorithm and radial functions are also measured for their storage difference against traditional JPEG compression. Results: With 11 participants, the results show that the gaze-based algorithm is perceptually equivalent on images with few objects that are close together. Images with many objects spread throughout the image performed worse with the gaze-based algorithm and were picked less often than traditional JPEG compression. Radial functions covering a large part of the screen were picked more often than radial functions covering a smaller area. The storage required by the gaze-based algorithm was 60% to 80% less than traditional JPEG compression, depending on the image. Conclusions: The thesis concludes that substantial storage savings can be made with gaze-based image compression compared to traditional JPEG compression, and that images with few, closely grouped objects are perceptually indistinguishable under the gaze-based algorithm.
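One plausible shape for the radial functions the study compares: JPEG quality decays from a foveal maximum at the gaze point toward a peripheral floor. The Gaussian profile and all parameter values below are illustrative assumptions, not the functions actually tested.

```python
import numpy as np

def quality_at(dist_px, q_max=90, q_min=20, sigma_px=300.0):
    """Gaussian radial falloff of JPEG quality with distance from gaze."""
    return q_min + (q_max - q_min) * np.exp(-dist_px ** 2 / (2 * sigma_px ** 2))

def quality_map(width, height, gaze_xy, tile=16):
    """Per-tile quality factors for a frame (one tile = one 16x16 region)."""
    gx, gy = gaze_xy
    xs = (np.arange(width // tile) + 0.5) * tile    # tile-center coordinates
    ys = (np.arange(height // tile) + 0.5) * tile
    dist = np.hypot(xs[None, :] - gx, ys[:, None] - gy)
    return quality_at(dist).round().astype(int)

# qmap = quality_map(1920, 1080, gaze_xy=(960, 540))
```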
14

Kinney, Albert C. "Analysis of M-JPEG video over an ATM network". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA392117.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2001.
Thesis advisor(s): McEachen, John C. Includes bibliographical references (p. 83-84). Also available in print.
15

Andersson, Mikael, and Per Karlström. "Parallel JPEG Processing with a Hardware Accelerated DSP Processor". Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2615.

Full text
Abstract:
This thesis describes the design of fast JPEG processing accelerators for a DSP processor. Certain computation tasks are moved from the DSP processor to hardware accelerators. The accelerators are slave co-processing machines controlled via a new instruction set. Clock-cycle count and power consumption are reduced by utilizing the custom-built hardware, which can perform the tasks in fewer clock cycles and run several tasks in parallel, reducing the total number of clock cycles needed. First, a decoder and an encoder were implemented in DSP assembler. The cycle consumption of the individual parts was measured, and the hardware/software partitioning was derived from it. Behavioral models of the accelerators were then written in C++, and the assembly code was modified to work with the new hardware. Finally, the accelerators were implemented in Verilog. The accelerator instruction set was extended following a custom design flow.
16

Yu, Lang. "Evaluating and Implementing JPEG XR Optimized for Video Surveillance". Thesis, Linköping University, Computer Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54307.

Full text
Abstract:
This report describes both the evaluation and the implementation of the forthcoming image compression standard JPEG XR. The intention is to determine whether JPEG XR is an appropriate standard for IP-based video surveillance. Video surveillance, especially IP-based video surveillance, currently plays an increasing role in the security market. To be a good standard for surveillance, the video stream generated by the camera must have a low bitrate and low network latency while maintaining a high dynamic range. The thesis starts with an in-depth study of the JPEG XR encoding standard. Since the standard allows different settings, optimized settings are applied to the JPEG XR encoder to fit the requirements of network video surveillance. A comparative evaluation of JPEG XR versus JPEG is then delivered, in both objective and subjective terms. Later, part of the JPEG XR encoder is implemented in hardware as an accelerator for further evaluation, coded in SystemVerilog; the TSMC 40 nm process library and the Synopsys ASIC tool chain are used for synthesis. The throughput, area, and power of the encoder are given and analyzed. Finally, the system integration of the JPEG XR hardware encoder into the Axis ARTPEC-X SoC platform is discussed.
17

Tovslid, Magnus Jeffs. "JPEG 2000 Quality Scalability in an IP Networking Scenario". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18465.

Full text
Abstract:
In this thesis, the JPEG 2000 quality scalability feature was investigated in the context of transporting video over IP networks. The goals of the investigation were two-fold. First, it was desired to find a way of choosing the number of quality layers to embed in a JPEG 2000 codestream; in previous work, this choice has been more or less arbitrary. Second, it was desired to find how far the video bitrate could be dropped before the drop became perceptible to a viewer. This information can be used in an IP networking scenario, e.g. to adapt the video bitrate blindly according to the measured channel capacity as long as the drop in bitrate is expected to be imperceptible; when the drop is expected to be perceptible, a switch could be made to a smoother bitrate adaptation. A way of choosing the total number of quality layers to embed in a codestream was found by minimizing the difference in predicted quality between direct and scaled compression, where scaled compression is the compression achieved by extracting quality layers. The minimization procedure was bounded by the speed of the encoder, as it takes longer for an encoder to embed more quality layers. It was found that the procedure was highly dependent on the desired bitrate range. A subjective test was run in order to measure how large a drop in video bitrate had to be for it to become perceptible. A newly developed JPEG 2000 quality layer scaler was used to produce the different bitrates in the test, and the number of quality layers to embed in the codestream was found using the minimization procedure mentioned above. It was found that, for the bitrate range used in the test (2-30 Mbit/s for a resolution of 1280x720 at 25 frames per second), the magnitude of the drop in bitrate had to be at least 10 Mbit/s before the participants in the test noticed it. A comparison with objective quality metrics, SSIM and PSNR, revealed that it was very difficult to predict the visibility of the drops in bitrate using these metrics. Designing the type of rate control mentioned in the first paragraph will therefore have to wait until a parameter with good predictive properties can be found.
18

Žičevičius, Linas. "Atvirojo kodo JPEG realizacijų, aprašytų aparatūros aprašymo kalbomis, tyrimas". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2012~D_20131104_110054-48993.

Full text
Abstract:
In this age of technological advances, more and more technologies interact with the environment: various devices, systems and robots capable of processing and/or interpreting visual information. Visual information consumes far more storage space than other kinds of information, so problems of storage and transfer arise, which data compression tries to solve. Compression solutions may be implemented in software or in hardware; hardware solutions are characterized by their high speed. As technologies will interact with the environment ever more in the future, tools capable of fast, precise and storage-efficient image processing will become especially important. Such devices are conveniently designed and studied using hardware description languages (HDLs). Another important aspect of these devices is their ever-growing complexity: reuse methodology, that is, building on already designed and verified components, is now a cornerstone of chip and SoC (System on Chip) design, since it allows complex chips to be designed at an affordable cost and with better quality, while saving human resources and time.
19

Aouadi, Imed. "Optimisation de JPEG 2000 sur système sur puce programmable". Paris 11, 2005. https://pastel.archives-ouvertes.fr/pastel-00001658.

Full text
Abstract:
Recently the field of video, image and audio processing has experienced significant progress at both the algorithm and the architecture level. One of these evolutions is the emergence of the new ISO/IEC JPEG2000 image compression standard, which succeeds JPEG. This new standard offers many functionalities and features that allow it to be adapted to a wide spectrum of applications. However, these features come with algorithmic complexity of a much higher degree than JPEG's, which makes the standard very difficult to optimize for implementations under hard constraints, whether on area, timing or power, or on all of them at once. One of the key steps in JPEG2000 processing is entropy coding, which takes about 70% of the total execution time when compressing an image. It is therefore essential to analyze the optimization potential of JPEG2000 implementations. FPGAs are currently the main reconfigurable circuits available on the market. Although they were long used only for ASIC prototyping, they can today provide an effective solution for the hardware implementation of applications in many fields. Considering the progress of the FPGA industry in integration capacity and operating frequency, reconfigurable architectures are now an effective and competitive solution that meets the needs of both prototyping and final hardware implementations. In this work we propose a methodology for studying the implementation possibilities of JPEG2000, starting with the evaluation of software implementations on commercial platforms.
20

Taylor, James Cary, Jacklynn Hall, and Tony Yuan. "Dean's Innovation Challenge: Researching the JPEG 2000 Image Decoder". Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/244833.

Full text
Abstract:
The goal of this thesis is to analyze the current commercialization process of the University of Arizona and of its Office of Technology Transfer (OTT), along with potential opportunities for strengthening that process. This is done through an initial review of a patented technology, the JPEG 2000 Corrupt Codestream Decoder, as well as its parent technology, the JPEG 2000 image standard. The JPEG 2000 decoder is used to decode corrupt images transferred in real time, in order to exploit the "usable" information as efficiently as possible. The technology itself is analyzed, including its strengths, weaknesses, and areas of opportunity. Next, the commercialization history of the technology is examined, including patent dates, related licensees, and the direction of the technology, with emphasis on processes and environments that helped the technology as well as those that hindered it. More specifically, since the technology was never implemented in a commercial setting, we consider why it was not successfully licensed and commercialized. Finally, the commercialization process of OTT is examined in a broader context that applies to all technologies OTT deals with, looking at the tasks of the Office, its shortfalls, and the process of commercialization itself. Once all items are addressed, recommendations are described with the aim of improving the efficiency and resourcefulness of OTT.
21

Fawcett, Roger James. "Efficient practical image compression". Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365711.

Full text
22

Agostini, Luciano Volcan. "Projeto de arquiteturas integradas para a compressão de imagens JPEG". Biblioteca Digital de Teses e Dissertações da UFRGS, 2002. http://hdl.handle.net/10183/11431.

Full text
Abstract:
This dissertation presents the design of architectures for JPEG image compression: a JPEG compressor for grayscale images, a JPEG compressor for color images, and an RGB-to-YCbCr color space converter. The designed architectures are presented in detail; they were fully described in VHDL, with synthesis directed at Altera's Flex10KE family of FPGAs. The integrated architecture of the grayscale JPEG compressor has a minimum latency of 237 clock cycles and processes a 640x480-pixel image in 18.5 ms, allowing a processing rate of 54 images per second; the estimated compression ratio is approximately 6.2 times, or 84% in bits. The integrated architecture of the color JPEG compressor was generated through incremental changes to the grayscale compressor architecture. It also has a minimum latency of 237 clock cycles and can process a 640x480-pixel color image in 54.4 ms, allowing a processing rate of 18.4 images per second; the estimated compression ratio is approximately 14.4 times, or 93% in bits. The RGB-to-YCbCr color space converter architecture has a latency of 6 clock cycles and can process a 640x480-pixel color image in 84.6 ms, allowing a processing rate of 11.8 images per second. This converter was ultimately not integrated with the color image compressor architecture, but some suggestions and estimates were made in that direction.
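The converter implements the standard JFIF color transform, whose arithmetic is fixed by the format; a reference version in Python for comparison (the architectures themselves are VHDL).

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """JFIF RGB -> YCbCr conversion (inputs and outputs in 0..255)."""
    m = np.array([[ 0.299,     0.587,     0.114   ],   # Y
                  [-0.168736, -0.331264,  0.5     ],   # Cb
                  [ 0.5,      -0.418688, -0.081312]])  # Cr
    ycc = rgb.astype(float) @ m.T
    ycc[..., 1:] += 128.0            # center the chroma channels
    return np.clip(np.round(ycc), 0, 255).astype(np.uint8)
```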
23

Haddad, Sahar. "Protection of encrypted and/or compressed medical images by means of watermarking". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0184.

Full text
Abstract:
The rapid growth of information and communication technologies has offered new possibilities to store, access and transfer medical images over networks. In this context, data leaks, theft and data manipulation represent a real danger, calling for new protection solutions in terms of data confidentiality, reliability control and traceability. The work conducted during this Ph.D. thesis aims at the combination, or even the fusion, of watermarking and encryption for medical image security. The deployment of these protection mechanisms in the healthcare domain must, however, take its specificities into account. In particular, as medical images constitute large volumes of data, they are usually encoded in lossy or lossless compressed form so as to minimize transmission and storage costs. The first part of this work focused on joint watermarking-compression of medical images, so that image reliability can be verified from the compressed bitstream without decompressing it, even partially. Continuing this work, we were interested in verifying the reliability of compressed and encrypted images while maintaining their confidentiality. The main principle is based on the insertion of two messages containing security attributes during image compression and encryption; each of them is accessible in only one domain, the compressed domain or the encrypted domain, without having to decompress or decrypt the image, even partially. These schemes were developed to be compliant with the DICOM standard. A second part of this research focused on the reversible watermarking of encrypted medical images, where the reversibility property guarantees recovery of the original image after the inserted watermark is removed. We developed an original watermarking scheme allowing reversible insertion of a message into an encrypted image, based on a new robust histogram-shifting reversible watermarking modulation; this message is accessible in both the encrypted and the clear domain. Finally, in order to validate the distortion introduced by watermark embedding in the proposed solutions, we implemented a "psychovisual" assessment protocol for medical image watermarking.
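For reference, the classical histogram-shifting modulation that such reversible schemes build on, in its simplest grayscale form; this is the textbook baseline, not the thesis's robust encrypted-domain variant. It assumes an empty histogram bin exists above the peak.

```python
import numpy as np

def hs_embed(img, bits):
    """Minimal reversible histogram-shifting embedding on a grayscale image.

    Picks the histogram peak p and the first empty bin z above it, shifts
    values in (p, z) up by one to free bin p+1, then encodes each bit on a
    peak-valued pixel: 0 -> p, 1 -> p+1. Returns the marked image and the
    side information needed for exact recovery.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    p = int(hist.argmax())
    z = p + 2 + int(np.flatnonzero(hist[p + 2:] == 0)[0])
    out = img.astype(np.int16)
    out[(out > p) & (out < z)] += 1           # open the slot next to the peak
    flat = out.ravel()
    carriers = np.flatnonzero(flat == p)[:len(bits)]
    flat[carriers] += np.asarray(bits, dtype=np.int16)
    return out.astype(np.uint8), (p, z, len(bits))

def hs_extract(marked, side):
    """Read the bits back and restore the original image exactly."""
    p, z, n = side
    flat = marked.astype(np.int16).ravel()
    bits = (flat[np.flatnonzero((flat == p) | (flat == p + 1))][:n] == p + 1)
    out = marked.astype(np.int16)
    out[(out > p) & (out <= z)] -= 1          # undo the shift and the marks
    return bits.astype(int), out.astype(np.uint8)
```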
24

CHEN, MAO-XIONG, and 陳茂雄. "The investigation of JPEG and JPEG-TEC system". Thesis, 1992. http://ndltd.ncl.edu.tw/handle/86000106695283386379.

Full text
25

Chou, Chin-tai, and 周清泰. "JPEG Study on Reducing Blocking Artifacts in JPEG Decoded Images". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/74381511550239526258.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
Academic year 93 (2004-05)
Block-based image coding systems, such as JPEG, MPEG, VQ coders, H.263 and H.264, readily degrade the reconstructed image at high compression ratios (low bitrates). Discontinuities then appear between adjacent blocks, which is called the blocking artifact. In this thesis, we propose two new methods to reduce blocking artifacts: one using the four-point discrete cosine transform (DCT) and one using a standard deviation equalizer. The DCT compacts the energy of real-world digital images into the low-frequency coefficients, while blocking artifacts mainly consist of high-frequency components; because the artifacts are produced between blocks, there are obvious changes at the block boundaries (high-frequency components), so we adjust the high-frequency coefficients in order to reduce the blocking effect while keeping the quality of the reconstructed image. The second method is conceived as an inverse enhancement of the image, reducing the difference values at the boundary to approximate one standard deviation of a uniform distribution. The standard deviation represents the degree of scatter of a discrete signal; in regions of larger contrast, attenuating the mutual differences reduces the contrast at the boundary and hence the blocking artifacts. The experimental results show that the two proposed methods can effectively reduce the blocking artifacts of reconstructed images in block-based coding systems.
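A toy version of the boundary criterion, for vertical boundaries only: compare the jump across each block edge with the activity just inside the neighbouring blocks, and pull the two border columns toward each other when the jump dominates. The thresholding rule and strength are illustrative stand-ins for the thesis's equalizer.

```python
import numpy as np

def smooth_vertical_boundaries(img, block=8, strength=0.5):
    """Toy deblocking pass over vertical block boundaries of a grayscale image.

    For each boundary, when the step across it exceeds the texture activity
    just inside the two blocks, the border columns are pulled toward each
    other. Horizontal boundaries would be treated symmetrically.
    """
    out = img.astype(float)
    for x in range(block, img.shape[1] - 1, block):
        step = out[:, x] - out[:, x - 1]                  # jump across the edge
        act = 0.5 * (np.abs(out[:, x - 1] - out[:, x - 2]) +
                     np.abs(out[:, x + 1] - out[:, x]))   # activity inside blocks
        mask = np.abs(step) > act + 1.0                   # edge dominates texture
        out[mask, x - 1] += strength * step[mask] / 2
        out[mask, x] -= strength * step[mask] / 2
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```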
26

Chang, Ih-Hwa, and 張益華. "Adaptive JPEG coding system". Thesis, 1995. http://ndltd.ncl.edu.tw/handle/30356233472343637102.

Full text
Abstract:
Master's thesis
National Central University
Institute of Electrical Engineering
Academic year 83 (1994-95)
JPEG has become the industry standard for still image compression. At medium bitrates JPEG performs well; at low bitrates, on the other hand, it suffers from serious blocking effects. In this thesis, we present a JPEG-based coding system: by adding pre- and post-processing around JPEG, a simple and easy-to-implement system is constructed. The pre-/post-processing schemes employ two techniques, classification and subsampling. An input image is segmented into high- and low-activity blocks by combining the L1-norm and the mean-difference method. Low-activity blocks are subsampled and encoded by JPEG, while high-activity blocks are encoded by JPEG directly. As a result, the system yields better quality than plain JPEG, in both smooth and complex areas of the image, at low bitrates. Simpler schemes applying only the classification technique or only the subsampling technique are discussed as well.
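The classification step can be sketched directly: a block counts as high-activity if either an L1-norm measure of its local differences or a mean-difference measure exceeds a threshold, and only low-activity blocks are downsampled before JPEG. The specific measures and thresholds below are illustrative assumptions.

```python
import numpy as np

def is_high_activity(block, t_l1=400.0, t_mean=10.0):
    """Classify an 8x8 block by combining an L1-norm measure of local
    differences with a mean-difference measure (illustrative thresholds)."""
    b = block.astype(float)
    l1 = np.abs(np.diff(b, axis=0)).sum() + np.abs(np.diff(b, axis=1)).sum()
    mean_diff = np.abs(b - b.mean()).mean()
    return l1 > t_l1 or mean_diff > t_mean

def subsample(block):
    """2:1 subsampling of a low-activity 8x8 block by 2x2 averaging."""
    return block.astype(float).reshape(4, 2, 4, 2).mean(axis=(1, 3))
```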
27

Lin, Zih-Chen, and 林子辰. "Fast JPEG 2000 Encryptor". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/03990532894756652275.

Full text
Abstract:
Master's thesis
Chung Cheng Institute of Technology, National Defense University
Institute of Electronic Engineering
Academic year 96 (2007-08)
With the progress of information science, networks, and with them image security, have become increasingly important. Image compression technologies evolve rapidly; they not only improve compression efficiency but also provide different characteristics for a variety of applications, so an efficient image encryption method should be designed around the characteristics of the compression technique itself. JPEG 2000 is an emerging standard for still image compression that provides a range of functionalities for different image applications and may become one of the most popular image formats; JPEG 2000 image encryption has therefore become a hot topic in image security research. One important property of a JPEG 2000 codestream is that any two consecutive bytes in a packet body must lie in the interval [0x0000, 0xFF8F], so that a standard JPEG 2000 decoder can decode the compressed codestream exactly. This is the so-called compatibility of JPEG 2000 and must be preserved by any effective JPEG 2000 encryption method. This thesis proposes a fast JPEG 2000 encryptor that uses cryptographic techniques to encrypt most of the JPEG 2000 compressed data, and uses hardware to overcome the slow performance of software encryption methods. The experimental results show that the proposed encryptor can encrypt most of the JPEG 2000 compressed data; moreover, the encrypted JPEG 2000 images can still be decoded by standard JPEG 2000 decoders and can be exactly recovered by the proposed decryptor.
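The compatibility constraint is mechanical to check: scanning the packet body, no 0xFF byte may be followed by a byte above 0x8F, because the two bytes together would then emulate a marker code in [0xFF90, 0xFFFF]. A sketch of the test an encryptor has to re-run on its own ciphertext; treating a trailing 0xFF as unsafe is a conservative assumption.

```python
def packet_body_compatible(body: bytes) -> bool:
    """True if no two consecutive bytes of a packet body exceed 0xFF8F.

    Byte pairs in [0xFF90, 0xFFFF] would be parsed as marker codes, so an
    encryptor must keep its ciphertext below that range to stay decodable
    by a standard JPEG 2000 decoder.
    """
    for i in range(len(body) - 1):
        if body[i] == 0xFF and body[i + 1] > 0x8F:
            return False
    return len(body) == 0 or body[-1] != 0xFF   # trailing 0xFF treated as unsafe
```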
28

Chen, IChien, and 陳怡蒨. "JPEG-Based Moment Computations". Thesis, 2001. http://ndltd.ncl.edu.tw/handle/84349415593594808215.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Information Management
Academic year 89 (2000-01)
Computing moments is important in the image processing field; low-order moments in particular are widely used in many practical applications. In this paper, we present an efficient algorithm for computing moments directly in the JPEG-compressed domain, without decompressing the JPEG image. Experimental results reveal an execution time improvement of about 98.8% compared to the indirect method of first decompressing the JPEG-encoded image and then applying the conventional moment-computation algorithm to the original image. In addition, the proposed algorithm is much faster than the conventional algorithm while preserving similar accuracy.
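The flavor of the computation, shown for the zeroth moment: with an orthonormal 8x8 DCT, a block's pixel sum is 8 times its DC coefficient plus the level-shift term, so m00 needs only the DC terms and no inverse transform. The sketch below rests on those assumptions; exact first-order moments would also involve a few low-frequency AC coefficients, so the centroid here is only a block-level approximation.

```python
import numpy as np

def m00_from_dc(dc, level_shift=128):
    """Zeroth moment (total mass) straight from per-block DC coefficients.

    dc: dequantized DC terms, one per 8x8 block. With an orthonormal 2-D
    DCT, block pixel sum = 8 * DC + 64 * level_shift.
    """
    return float(8.0 * dc.sum() + 64.0 * level_shift * dc.size)

def centroid_from_dc(dc, level_shift=128):
    """Approximate centroid, treating each block's mass as if it sat at
    the block center (exact m10/m01 would also use AC terms)."""
    H, W = dc.shape
    mass = 8.0 * dc + 64.0 * level_shift
    xs = (np.arange(W) * 8 + 3.5)[None, :]
    ys = (np.arange(H) * 8 + 3.5)[:, None]
    m = mass.sum()
    return float((mass * xs).sum() / m), float((mass * ys).sum() / m)
```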
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Ondruš, Jan. "Bezztrátová komprese JPEG grafiky". Master's thesis, 2009. http://www.nusl.cz/ntk/nusl-286256.

Testo completo
Abstract (sommario):
JPEG is a commonly used compression method for photographic images. It consists of a lossy part and a lossless part, with static Huffman coding as the last step. This step can be replaced by more advanced techniques and arithmetic coding. In this work we introduce a method for further compressing JPEG files (files in JFIF format) saved in baseline mode. The general approach is partial decompression: we invert only the final lossless steps of the JPEG compression algorithm, transforming the compressed file into an array of quantized DCT coefficients. We designed an algorithm for predicting the DCT coefficients: for each of the 64 coefficients in a block matrix, it returns a particular linear combination of previously coded coefficients in the current and neighbouring blocks. We show how this prediction improves the compression efficiency of JPEG files under the context-mixing algorithm implemented in Matt Mahoney's PAQ8. The specific implementation is described and its compression ratio is compared with existing methods and applications for lossless recompression of JPEG images.
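A toy illustration of the prediction idea, assuming uniform weights; the thesis derives a specific linear combination per coefficient position and also uses other coefficients of the current block, which this sketch omits:

```python
import numpy as np

def predict_coefficient(blocks: np.ndarray, by: int, bx: int, k: int,
                        w_left: float = 0.5, w_above: float = 0.5) -> float:
    """Predict coefficient k of block (by, bx) as a linear combination of the
    same coefficient in the left and above blocks.

    `blocks` has shape (rows, cols, 64), quantized DCT coefficients in zigzag
    order. The weights here are placeholders for the thesis' learned ones.
    """
    left = blocks[by, bx - 1, k] if bx > 0 else 0.0
    above = blocks[by - 1, bx, k] if by > 0 else 0.0
    return w_left * left + w_above * above

def residuals(blocks: np.ndarray, k: int) -> np.ndarray:
    """Prediction residuals for coefficient k; a context-mixing coder
    (PAQ8-style) would then model these instead of the raw coefficients."""
    rows, cols, _ = blocks.shape
    res = np.empty((rows, cols))
    for by in range(rows):
        for bx in range(cols):
            res[by, bx] = blocks[by, bx, k] - predict_coefficient(blocks, by, bx, k)
    return res
```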
Gli stili APA, Harvard, Vancouver, ISO e altri
30

劉姵菱. "JPEG-Based Steganographic Algorithms". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/59601611431232686953.

Testo completo
Abstract (sommario):
Master's
國立清華大學
資訊工程學系
100
Steganography is a method of data hiding, and it grows ever more important as Internet communication expands. Many steganographic algorithms and tools have already been developed, including hiding techniques for JPEG images; JPEG is the most common image format produced by digital cameras and transmitted over the Internet. This thesis proposes two methods to increase the capacity of the F5 embedding algorithm. The first is a pre-processing method: we randomly modify the qDCT coefficients before F5 embedding to obtain more nonzero qDCT coefficients in which to embed the message. In the second method, we slightly modify the end-of-block (EOB) information in a JPEG image to increase the embedding capacity after F5 embedding, changing one qDCT coefficient in each 8×8 qDCT block to embed 3 more bits of message. Although the PSNR value decreases slightly, our proposed algorithm significantly increases the embedding capacity compared with many classical steganographic algorithms.
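A rough capacity estimate under the usual assumption that F5's carriers are the nonzero quantized AC coefficients, plus the 3 extra bits per block from the EOB modification described above:

```python
import numpy as np

def embedding_capacity(qdct: np.ndarray, eob_trick: bool = True) -> int:
    """Rough capacity estimate, in bits, for an image of quantized DCT blocks.

    qdct: array of shape (n_blocks, 64), zigzag order, position 0 = DC.
    F5 uses nonzero AC coefficients as carriers (one potential bit each,
    before matrix encoding); the thesis' second method adds 3 bits per block
    via the EOB change. Both counts are upper-bound style estimates.
    """
    nonzero_ac = int(np.count_nonzero(qdct[:, 1:]))
    extra = 3 * qdct.shape[0] if eob_trick else 0
    return nonzero_ac + extra
```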
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Dwivedi, Harsh Vardhan. "Design of JPEG Compressor". Thesis, 2009. http://ethesis.nitrkl.ac.in/1090/1/Thesis.pdf.

Testo completo
Abstract (sommario):
Images are generated, edited and transmitted on a very regular basis in a vast number of systems today. The raw image data generated by a camera's sensors is too voluminous to store efficiently, and it becomes especially cumbersome to move around in bandwidth-constrained systems, or where bandwidth must be conserved for cost reasons, as on the World Wide Web. Such scenarios demand efficient image compression techniques such as the JPEG algorithm, which compresses the image to a high degree with little loss in perceived quality. Today the JPEG algorithm has become the de facto standard in image compression. MATLAB was used to write a program that outputs a quantized DCT version of the input image, and techniques for fast hardware implementation of the JPEG algorithm were investigated.
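A comparable sketch in Python of the quantized-DCT step, using the JPEG Annex K luminance table (the thesis' MATLAB program is not reproduced here, and quality scaling of the table is omitted):

```python
import numpy as np
from scipy.fftpack import dct

# JPEG Annex K luminance quantization table.
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantized_dct_block(block: np.ndarray) -> np.ndarray:
    """Level-shift an 8x8 pixel block, apply the 2-D DCT, and quantize."""
    shifted = block.astype(float) - 128.0                       # level shift
    coeffs = dct(dct(shifted, axis=0, norm='ortho'), axis=1, norm='ortho')
    return np.round(coeffs / Q_LUMA).astype(int)
```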
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Lian, Chung-Jr, e 連崇志. "Design and Implementation of Image Coding Systems: JPEG, JPEG 2000 and MPEG-4 VTC". Thesis, 2003. http://ndltd.ncl.edu.tw/handle/77902516298772727288.

Testo completo
Abstract (sommario):
Doctoral
國立臺灣大學
電機工程學研究所
91
In this dissertation, the hardware architecture design and implementation of image coding systems are presented. The research focuses on three image coding standards: JPEG, JPEG 2000, and MPEG-4 Visual Texture Coding (VTC). JPEG is a well-known and mature standard. It has been widely used for natural image compression and is especially popular in digital still camera applications. In the first part of this dissertation, we propose a fully pipelined JPEG encoder and decoder for the high-speed image processing requirements of post-PC electronic appliances. The processing power is 50 million samples per second at a 50 MHz working frequency, so the proposed architectures can encode and decode million-pixel digital images at very high speed. Another feature is that both the encoder and decoder are stand-alone, full-function solutions: they can encode or decode JPEG-compliant files without any aid from an external processor. JPEG 2000 is the latest image coding standard, defined as a more powerful successor to JPEG. It provides better compression performance, especially at low bitrates, as well as various features such as quality and resolution progressiveness, region-of-interest coding, and lossy and lossless coding in a unified framework. The performance of JPEG 2000 comes at the cost of higher computational complexity. In the second part of the dissertation, we discuss the challenges and issues in designing a JPEG 2000 coding system. Cycle-efficient block encoding and decoding engines, and computation reduction techniques based on Tier-2 feedback, are proposed for the most critical module, Embedded Block Coding with Optimized Truncation (EBCOT). With the proposed parallel checking and skipping-based coding schemes, the scanning cycles can be reduced to 40% of a direct bit-by-bit implementation. With Tier-2 feedback control in lossy coding mode, the execution cycles, and therefore the power consumption, can be lowered to 50% at a compression ratio of about 10. The MPEG-4 Visual Texture Coding (VTC) tool is another compression algorithm that adopts a wavelet-based approach. In VTC, a zero-tree coding algorithm generates the context symbols for the arithmetic coder. In the third part, the design of the zero-tree coding module is discussed: tree-depth scan with multiple quantization modes is realized, and a dedicated data access scheme is designed for a smooth coding flow. In each chapter, a detailed analysis of the algorithms is provided first; efficient hardware architectures are then proposed that exploit special characteristics of the algorithms. The proposed dedicated architectures greatly improve processing performance compared with general processor-based solutions. For non-PC consumer applications, these architectures are competitive solutions for cost-efficient, high-performance requirements.
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Chen, Yi-Lei, e 陳以雷. "Tampering Detection in JPEG Images". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/43195323821600534654.

Testo completo
Abstract (sommario):
Master's
國立清華大學
資訊工程學系
97
Since JPEG is a popular image compression standard, tampering detection in JPEG images plays an important role. Tampering with compressed images usually involves recompression, which tends to erase the tampering traces that would exist in uncompressed images. We can, however, look for new traces caused by recompression and use them to detect recompression tampering: the artifacts introduced by lossy JPEG compression serve as an inherent signature for recompressed images. In this thesis, we first propose a robust tampered-image detection approach based on periodicity analysis of the compression artifacts in both the spatial and DCT domains. To locate the forged regions, we then propose a localization method based on a quantization noise model and image restoration techniques. Finally, we conduct a series of experiments to demonstrate the validity of the proposed periodic features and quantization noise model, both of which outperform existing methods, and we show the effectiveness and feasibility of our forged-region localization method with the proposed image restoration techniques.
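As a simplified illustration of the periodicity idea (not the thesis' exact features): double quantization leaves periodic structure in the per-position DCT coefficient histograms, which shows up as strong non-DC peaks in the histogram's Fourier magnitude:

```python
import numpy as np

def histogram_periodicity(coeffs: np.ndarray, nbins: int = 101) -> np.ndarray:
    """Fourier magnitude of the histogram of one DCT coefficient position,
    gathered across all blocks of an image.

    Double JPEG compression tends to leave periodic peaks/gaps in such
    histograms, so pronounced spectral peaks away from DC hint at
    recompression. This is a simplified stand-in for the thesis' spatial-
    and DCT-domain periodicity features.
    """
    hist, _ = np.histogram(coeffs, bins=nbins, range=(-50, 50))
    spectrum = np.abs(np.fft.rfft(hist - hist.mean()))
    return spectrum  # inspect for strong non-DC peaks
```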
Gli stili APA, Harvard, Vancouver, ISO e altri
34

Lee, Jung-Chieh, e 李榮杰. "JPEG image compression chip design". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/96388169020307901202.

Testo completo
Abstract (sommario):
Master's
國立暨南國際大學
電機工程學系
93
Today we can access real-time video news on a cell phone, and photo-sticker booths on the street let friends and couples take fun pictures together. Digital imaging is ubiquitous, and more and more million-pixel digital cameras appear on the market. However, maintaining high-resolution, high-quality images takes a great deal of storage space and transmission bandwidth, so reducing the amount of data is a very important topic. JPEG is a widely used, mature still-image compression standard; we chose it for its high compression ratio and its controllable trade-off between compression and image quality. We first used the C language to simulate the JPEG encoder algorithm, and then implemented the core techniques, such as the Discrete Cosine Transform (DCT), the quantizer, and the zigzag scan, at register transfer level (RTL) in Verilog. The designed block was verified by Verilog simulations and synthesized with the TSMC 0.18 standard cell library. The core is practical and can be reused in multimedia systems, digital video applications, and even remote-sensing image processing on satellites.
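Of the listed core techniques, the zigzag scan is easy to show compactly. A reference model in Python (the thesis implements this stage in Verilog):

```python
import numpy as np

def zigzag_order(n: int = 8):
    """(row, col) visiting order of JPEG's zigzag scan for an n x n block.

    Coefficients on each anti-diagonal d = row + col are visited in
    alternating directions: even diagonals bottom-up, odd diagonals top-down.
    """
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_scan(block: np.ndarray) -> np.ndarray:
    """Serialize an 8x8 coefficient block into the 64-element zigzag sequence."""
    return np.array([block[r, c] for r, c in zigzag_order(block.shape[0])])

# First entries: (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), ...
print(zigzag_order()[:6])
```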
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Chen, Yung-Chen, e 陳詠哲. "Wavelet-Based JPEG 2000 Image Compression". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/66542893741935624689.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Lopata, Jan. "Odstraňování artefaktů JPEG komprese obrazových dat". Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-341236.

Testo completo
Abstract (sommario):
This thesis is concerned with the removal of artefacts typical of JPEG image compression. First, we describe the mathematical formulation of the JPEG format and the problem of artefact removal. We then formulate the problem as an optimization problem in which the minimized functional is obtained via Bayes' theorem and complex wavelets. We describe proximal operators and algorithms and apply them to the minimization of the given functional. The final algorithm is implemented in MATLAB and tested on several test problems.
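As a concrete example of the machinery involved: the proximal operator of the scaled l1 norm is soft-thresholding, a standard building block in such wavelet-domain schemes (shown here for intuition only; the thesis' functional is more elaborate):

```python
import numpy as np

def prox_l1(v: np.ndarray, lam: float) -> np.ndarray:
    """Proximal operator of lam * ||x||_1 (soft-thresholding):

        prox(v) = argmin_x  lam*||x||_1 + (1/2)*||x - v||^2
                = sign(v) * max(|v| - lam, 0)

    In wavelet-domain deblocking, such an operator is applied to the
    coefficients at each iteration of a proximal splitting algorithm.
    """
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
```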
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Thanh, V. T. Kieu (Vien Tat Kieu). "Post-processing of JPEG decompressed images". Thesis, 2002. https://eprints.utas.edu.au/22088/1/whole_ThanhVienTatKieu2002_thesis.pdf.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Tsai, Chi-Shien, e 蔡繼賢. "Image Quality Improvement for JPEG Images". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/07438556114344329220.

Testo completo
Abstract (sommario):
Master's
淡江大學
資訊工程學系資訊網路與通訊碩士班
101
In this paper we propose a method to enhance the quality of decompressed JPEG images, an improved version of the method proposed by Chang et al. A so-called Modified Coefficient Adjustment Block is used to record the round-off situation when the DCT coefficients of a block are quantized according to a given quantization table. In the decompression stage, the information recorded in the Modified Coefficient Adjustment Block is consulted to adjust the de-quantized DCT coefficients before the inverse DCT. We provide two choices of parameter setting, fixed and adaptive. Experimental results show that, whether fixed or adaptive parameters are used, our method provides significantly better results than the method of Chang et al.
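A sketch of the round-off-recording idea, with the one-bit-per-coefficient record and the q/4 correction as illustrative assumptions (the paper's actual Modified Coefficient Adjustment Block format and parameters may differ):

```python
import numpy as np

def quantize_with_roundoff(coeffs: np.ndarray, q: np.ndarray):
    """Quantize DCT coefficients and record the rounding direction.

    Returns (quantized, round_up), where round_up[i, j] is True when rounding
    increased the value. One bit per coefficient is an assumption here.
    """
    ratio = coeffs / q
    quantized = np.round(ratio)
    round_up = quantized > ratio
    return quantized.astype(int), round_up

def dequantize_adjusted(quantized, round_up, q, alpha: float = 0.25):
    """Dequantize, then pull each value back toward its pre-quantization side.

    If rounding went up, the true value was below quantized*q, so we subtract
    alpha*q; otherwise we add it. alpha = 0.25 is an illustrative choice.
    """
    return quantized * q - np.where(round_up, alpha * q, -alpha * q)
```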
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Jiun-Hau Jang, e 張駿豪. "Hardware Design for JPEG Image Decoder". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/15216442530660602971.

Testo completo
Abstract (sommario):
Master's
國立成功大學
電機工程學系碩博士班
101
In this paper, we propose a hardware design for a JPEG image decoder. The decoder first parses the JPEG header, then performs variable-length decoding and inverse quantization. Inverse zigzag scanning rearranges the data, and the two-dimensional inverse discrete cosine transform converts the values from the frequency domain back to the spatial domain. Finally, YCbCr coordinate values are transformed into RGB coordinate values. All of the foregoing is implemented in a hardware description language (Verilog). Reading the JPEG file, and adding a BMP header to the decoded RGB data to produce a BMP image, are done in a high-level language (C). According to the experimental data, the hardware JPEG decoder is on average about 2.9 to 4.3 times faster than the software JPEG decoder; the larger the input JPEG image, the better the performance, particularly in the YCbCr-to-RGB transformation.
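The final color-space step is fixed by the JFIF specification. A reference model of it (the thesis implements this stage in Verilog):

```python
import numpy as np

def ycbcr_to_rgb(y: np.ndarray, cb: np.ndarray, cr: np.ndarray) -> np.ndarray:
    """JFIF YCbCr -> RGB conversion, the step the abstract identifies as the
    main beneficiary of hardware acceleration."""
    y, cb, cr = (a.astype(float) for a in (y, cb, cr))
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```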
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Lo, Zhi-Yao, e 羅子堯. "Securing JPEG Architecture Based on Switching Chaos". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/77569658525601132313.

Testo completo
Abstract (sommario):
Master's
國立宜蘭大學
電子工程學系碩士班
104
Chaotic systems have high sensitivity to initial conditions and excellent random behavior, which makes them applicable to cryptosystems. Low-dimensional chaotic systems are fast to compute, but their simple system parameters raise security concerns; high-dimensional chaotic systems are more secure but have more complex parameters, leading to high computational complexity. A switching chaotic system, which alternates among multiple low-dimensional chaotic systems, is introduced to improve the security of the system and increase the difficulty for attackers. JPEG is the most popular digital image format: a common compressed format that greatly reduces storage space and transmission time, so the security of JPEG is an important topic for both theoretical research and practical application. For JPEG, we present an image encryption algorithm based on a switching chaotic system: driven by the switching chaotic sequence, permutation and diffusion operations are performed on JPEG images. In this thesis, we further apply the proposed switching image encryption algorithm inside the JPEG encoder and decoder themselves. The experimental results show that the proposed image encryption algorithm yields low correlation between adjacent pixels, a large key space, good key sensitivity, and better security against statistical analysis. In the secure JPEG structure, the experimental results show good PSNR (peak signal-to-noise ratio) and low MSE (mean square error).
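A minimal single-map sketch of permutation plus XOR diffusion driven by a logistic map; the actual scheme switches among several low-dimensional maps and operates inside the JPEG codec, and the parameters below merely stand in for the key:

```python
import numpy as np

def logistic_keystream(n: int, x0: float = 0.3141, r: float = 3.99) -> np.ndarray:
    """Keystream from the logistic map x <- r*x*(1-x). A switching system
    would alternate among several such maps; this sketch uses only one."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(pixels: np.ndarray, x0: float = 0.3141) -> np.ndarray:
    """Permutation then XOR diffusion driven by the chaotic keystream.

    `pixels` is a uint8 array; (x0, r) play the role of the secret key here.
    """
    flat = pixels.ravel()
    ks = logistic_keystream(flat.size, x0)
    perm = np.argsort(ks)                    # chaotic permutation of positions
    stream = (ks * 255).astype(np.uint8)     # diffusion bytes
    return (flat[perm] ^ stream).reshape(pixels.shape)
```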
Gli stili APA, Harvard, Vancouver, ISO e altri
41

LO, Shun-Teng, e 羅順騰. "Blocking Noise Suppression Schemes for JPEG Images". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/77855472324138344824.

Testo completo
Abstract (sommario):
Master's
國立交通大學
電機與控制工程系
90
Image compression by JPEG is very common in daily life, for example on the Internet or in digital cameras. JPEG is a standard for image compression that decreases the data rate; unfortunately, annoying blocking artifacts appear, and this blocking noise is often visually unpleasant, so removing it is necessary and important. This thesis introduces several schemes for removing JPEG blocking noise, specifically the blocking noise that results from the quantization of DCT coefficients. Traditionally, different filters are applied to monotone areas and edge areas respectively, aiming to smooth the former and enhance the latter. We propose two methods. The first uses a fuzzy rule-based (FRB) filter, whose output is a weighted average of the pixel being processed and its neighbours, with weights depending on the grey-level differences between pixels, the spatial distance and direction between pixels, and the variance within the local window; the LMS learning algorithm is used to determine the best membership functions for the FRB filter. The second method processes only pixels around the block boundaries, making them less blocky. The simulation results demonstrate the effectiveness and robustness of our proposed schemes in blocking noise removal.
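A sketch of the boundary-only idea behind the second method (vertical boundaries shown; apply the same routine to the transpose for horizontal ones). The averaging rule and strength parameter are illustrative, not the thesis' trained fuzzy filter:

```python
import numpy as np

def smooth_block_boundaries(img: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Soften vertical 8x8 block boundaries by blending the two pixels that
    straddle each boundary toward their average. `strength` in [0, 1] is a
    tuning assumption; only boundary pixels are touched."""
    out = img.astype(float).copy()
    for c in range(8, img.shape[1], 8):          # columns at block boundaries
        left, right = out[:, c - 1].copy(), out[:, c].copy()
        avg = 0.5 * (left + right)
        out[:, c - 1] = (1 - strength) * left + strength * avg
        out[:, c] = (1 - strength) * right + strength * avg
    return out
```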
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Hsu, Shou-Hua, e 許修華. "Error Isolation Progressive JPEG over Wireless Communication". Thesis, 1998. http://ndltd.ncl.edu.tw/handle/75498753504065489826.

Testo completo
Abstract (sommario):
Master's
國立中央大學
電機工程學系
86
With the development of mobile communication, data transmission over wireless channels has become more popular. Wireless communication faces two problems: limited bandwidth and high error rates. When progressive coding is used, the decoder can decide where to stop receiving data, saving bandwidth. In this thesis, we focus on the progressive mode of the JPEG image compression standard over channels with relatively high error rates. We use the synchronization technique of the JPEG standard to isolate erroneous blocks, and then apply a corresponding error concealment technique based on whether the error occurs in a spectral-selection or successive-approximation progressive scan. The concealment technique improves the subjective quality of the reconstructed image.
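JPEG's synchronization points in the entropy-coded data are the restart markers 0xFFD0 through 0xFFD7; locating them is the basis for isolating an error to the blocks between two markers. A minimal scan:

```python
def restart_marker_positions(scan: bytes) -> list:
    """Locate JPEG restart markers (0xFFD0-0xFFD7) in entropy-coded data.

    Restart markers are the synchronization points that let a decoder skip a
    corrupted segment and resume at the next marker, confining the damage to
    the blocks in between.
    """
    positions = []
    for i in range(len(scan) - 1):
        if scan[i] == 0xFF and 0xD0 <= scan[i + 1] <= 0xD7:
            positions.append(i)
    return positions
```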
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Xu, Xiu-Hua, e 許修華. "Error Isolation Progressive JPEG over Wireless Communication". Thesis, 1998. http://ndltd.ncl.edu.tw/handle/70434434214941235497.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Shen, Chih-yi, e 沈志益. "A Secure Steganographic Technique for JPEG Images". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/96089069159659958717.

Testo completo
Abstract (sommario):
Master's
逢甲大學
通訊工程所
94
Image steganography differs from cryptography, which directly encrypts the transmitted data: it embeds important messages into an ordinary-looking digital image before transmission to improve the concealment of communication. Its security depends on bypassing the awareness of others and resisting the detection of secret messages; steganalysis is the art of detecting acts of secret communication. The JPEG image format is currently the most widely used storage format on the Internet, and JPEG image steganography uses JPEG files as cover images, so developing an efficient, safe and practical JPEG steganography technique is a meaningful and urgent problem. Only a few JPEG steganography techniques have been proposed, and most rely on LSB flipping, which cannot resist detection attacks based on variations in block effects. Observing these flaws, we propose a method to remedy them. Our main concept is to embed as many message bits as possible while minimizing the number of modified bits. Our method is based on the data hiding method proposed by Chen, Pan and Tseng in 2002 and only needs to adjust one coefficient per block on average. In this way, the modified bits are sparsely distributed within each block, which significantly reduces the statistical distortion that may occur at block edges and drastically diminishes the characteristics of repeated replacement. Experimental results show that our concept is correct and has the following properties:
1. It applies to the most commonly used image storage format, JPEG.
2. It possesses high undetectability.
3. It resists the chi-square statistical detection attack.
4. It resists the calibration statistical detection attack.
5. It offers higher embedding capacity than several well-known JPEG steganographic tools such as J-Steg, F5 and OutGuess.
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Wu, Bing-Ze, e 吳秉澤. "A Tamper Detection Scheme for JPEG Images". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/24884799067982841065.

Testo completo
Abstract (sommario):
Master's
大同大學
資訊工程學系(所)
98
This paper presents a tamper detection scheme that can locate tampered areas in JPEG images. Unlike other schemes that deal with uncompressed images, the presented scheme accepts a JPEG image as input. It calculates difference values from the host JPEG image as the watermark and embeds the watermark into the DCT frequency domain of the image. When a JPEG image needs to be verified, the scheme compares the watermarks to detect the tampered areas. The experimental results show that the scheme can effectively detect tampered areas in images that have been compressed by JPEG.
Gli stili APA, Harvard, Vancouver, ISO e altri
46

LIN, NEIN-HSIEN, e 林念賢. "A Jpeg-2000-Based Deforming Mesh Streaming". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/25369764370956605840.

Testo completo
Abstract (sommario):
Master's
國立臺灣大學
資訊網路與多媒體研究所
95
For PCs and even mobile devices, video and image streaming technologies such as H.264 and JPEG/JPEG 2000 are already mature. However, streaming technology for 3D models, or so-called mesh data, is still far from practical use. In this paper, we therefore propose a mesh streaming method based on the JPEG 2000 standard and integrate it into an existing multimedia streaming server, so that our method can directly benefit from current image and video streaming technologies. In this method, the mesh data of a 3D model is first converted into a JPEG 2000 image; based on JPEG 2000 streaming techniques, the mesh data can then be transmitted over the Internet as a mesh stream. We also extend this method to deforming meshes, analogous to the extension from a JPEG 2000 image to a Motion JPEG 2000 video, so that it can transmit not only static 3D models but also animated ones. To increase usability, the mesh stream can be inserted into an X3D scene as an X3D extension node. Moreover, since the method is based on the JPEG 2000 standard, our system is well suited for integration into any existing client-server or peer-to-peer multimedia streaming system.
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Jiang, Wan-Shang, e 江萬晟. "Hardware/Software Codesign of a JPEG Decoder". Thesis, 2001. http://ndltd.ncl.edu.tw/handle/58679345676521432485.

Testo completo
Abstract (sommario):
Master's
國立海洋大學
電機工程學系
89
The development process for consumer-electronics products is often characterized by relatively short time-to-market because of intense competition. An efficient development process is therefore crucial to the successful introduction of such a product into the marketplace. Hardware/software codesign is a development methodology that involves proper partitioning and scheduling of the hardware and software parts of a system. Through well-defined communication interfaces, the hardware and software modules can be designed, simulated, and verified in parallel and in a coordinated fashion. The results are shorter development time, lower system cost, and more efficient management of manpower. The capability to handle multimedia is a common feature of many consumer-electronics products. Due to limited storage capacity, image and sound are normally compressed in these devices. For the storage and display of still images, the JPEG standard is widely supported due to its high compression ratio and relatively low distortion. This thesis describes our effort in building an embedded JPEG decoder using a hardware/software codesign approach. In partitioning the JPEG decoding process, the inverse discrete cosine transform (IDCT) is found to be the most time-consuming computation; it is therefore implemented in hardware, together with the normalization operation, using an FPGA. The entropy decoding and dequantization are realized in software, so that their execution overlaps the hardware operations in a pipelined fashion. The finished prototype of the embedded JPEG decoder successfully demonstrates the effectiveness of the underlying development methodology.
Gli stili APA, Harvard, Vancouver, ISO e altri
48

In, Jaehan. "Rd optimized progressive image coding using JPEG". Thesis, 1998. http://hdl.handle.net/2429/7921.

Testo completo
Abstract (sommario):
The JPEG standard allows four modes of operation: the hierarchical (HJPEG), progressive (PJPEG), sequential (SJPEG), and lossless modes. (Lossless JPEG algorithms are rarely used, since their performance is significantly lower than that of other lossless image compression algorithms.) The HJPEG and PJPEG modes inherently support progressive image coding. In HJPEG, an image is decomposed into subimages of different resolutions, each of which is then coded using one of the other three modes of JPEG; progressiveness within a resolution is achieved when each subimage is coded using PJPEG. An image coded with PJPEG consists of scans, each of which contributes a portion of the reconstructed image quality. While SJPEG yields essentially the same level of compression performance for most encoder implementations, the performance of PJPEG depends highly on the designed encoder structure, because of the flexibility the standard leaves open in designing PJPEG encoders. In this thesis, an efficient progressive image coding algorithm is developed that is compliant with the JPEG still image compression standard. The JPEG-compliant progressive image encoder is an HJPEG encoder that employs a rate-distortion-optimized PJPEG encoding algorithm at each image resolution. Our encoder outperforms an optimized SJPEG encoder in terms of compression efficiency, substantially so at low and high bit rates. Moreover, unlike existing JPEG-compliant encoders, our encoder can achieve precise rate control for each fixed resolution. Such good compression performance at low bit rates and precise rate control are two highly desired features sought for the emerging JPEG 2000 standard.
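The core of rate-distortion optimization is Lagrangian selection: among candidate configurations, pick the one minimizing D + λR. A toy illustration with hypothetical operating points (the actual encoder optimizes over PJPEG scan structures):

```python
def best_operating_point(points, lam: float):
    """Pick the (rate, distortion) point minimizing the Lagrangian cost
    D + lam * R. Sweeping lam traces out the convex hull of achievable
    rate-distortion pairs; `points` is an iterable of (rate, distortion)."""
    return min(points, key=lambda rd: rd[1] + lam * rd[0])

# Example: three hypothetical scan configurations as (bits/pixel, MSE) pairs.
configs = [(0.25, 80.0), (0.50, 35.0), (1.00, 20.0)]
print(best_operating_point(configs, lam=100.0))   # -> (0.5, 35.0)
```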
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Hu, Guang-nan, e 胡光南. "Fast downsizing transcoder for JPEG 2000 images". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/64887184539077129199.

Testo completo
Abstract (sommario):
Master's
南華大學
資訊管理學研究所
96
This paper presents a fast downsizing method for JPEG 2000 images. The proposed method downsizes JPEG 2000 images in the frequency domain. Compared with the traditional method, which downsizes JPEG 2000 images in the spatial domain, our method effectively reduces both the required memory space and the execution time. Experimental results reveal that the proposed frequency-domain method reduces the average execution time of the spatial-domain method by 6% to 73% for images downscaled to 1/2^n × 1/2^n of the original size.
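For intuition, here is the frequency-domain idea for a single 8×8 DCT block at a 2:1 scale, assuming orthonormal transforms (the thesis' full transcoder, which also handles the JPEG 2000 entropy-coding layers, is far larger):

```python
import numpy as np
from scipy.fftpack import idct

def downscale_block_dct(coeffs8: np.ndarray) -> np.ndarray:
    """Halve an 8x8 block directly in the transform domain: keep the 4x4
    low-frequency coefficients and apply a 4-point inverse DCT. With
    orthonormal transforms, the retained coefficients must be scaled by 1/2
    to preserve intensity (check: a constant block stays constant)."""
    low = 0.5 * coeffs8[:4, :4]
    return idct(idct(low, axis=0, norm='ortho'), axis=1, norm='ortho')
```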
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Chiang, Chih-chen, e 江至晨. "Optimum JPEG Watermarking Based on Genetic Algorithms". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/62374334771209270590.

Testo completo
Abstract (sommario):
Master's
長榮大學
經營管理研究所
93
With the rapid growth of the Internet, multimedia, and e-commerce, it has become more convenient and faster for users to exchange information. However, image files are extremely large, so compression technology is necessary to reduce their size. Digital watermarking is one of the commonly used techniques for hiding information in images to protect copyright. To widen the applicability of watermarking, this thesis integrates two techniques: digital watermarking and the JPEG image compression standard. In the embedding process, the quantization table is obtained from the quality factor Q, and the DCT coefficients are modified accordingly to embed the watermark. Embedding requires the middle-frequency band and the position information of the watermark, and a genetic algorithm (GA) is used to find the best positions for embedding. By adopting the GA, both the PSNR and NC values are improved.
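A generic GA skeleton of the kind described, searching for embedding positions. The fitness function (in the thesis, a weighted combination of the PSNR of the watermarked image and the NC of the extracted watermark, obtained by actually embedding and extracting) is left to the caller, and all parameters below are placeholders:

```python
import random

def genetic_search(fitness, n_positions: int, pool: list,
                   pop_size: int = 20, generations: int = 50,
                   p_mutate: float = 0.1):
    """Minimal GA searching for a good set of middle-frequency DCT positions
    to carry watermark bits. Requires len(pool) > n_positions >= 2.
    `fitness` maps a candidate position list to a score (higher = better);
    the whole routine is an illustrative skeleton, not the thesis' GA.
    """
    pop = [random.sample(pool, n_positions) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_positions)
            # single-point crossover, deduplicated to keep positions unique
            child = list(dict.fromkeys(a[:cut] + b))[:n_positions]
            if random.random() < p_mutate:        # mutate one position
                free = [p for p in pool if p not in child]
                child[random.randrange(n_positions)] = random.choice(free)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

For instance, `pool` could be the middle-frequency zigzag indices of each 8×8 block (say positions 6 through 27); that choice, like the GA parameters, is an assumption for illustration.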
Gli stili APA, Harvard, Vancouver, ISO e altri