Dissertations / Theses on the topic 'Image compression level'




Consult the top 15 dissertations / theses for your research on the topic 'Image compression level.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Bejile, Brian. "Bi-level lossless compression techniques." Diss., 2004. http://hdl.handle.net/10066/1481.

2

Guo, Jianghong. "Analysis and Design of Lossless Bi-level Image Coding Systems." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/845.

Abstract:
Lossless image coding deals with the problem of representing an image with a minimum number of binary bits from which the original image can be fully recovered without any loss of information. Most lossless image coding algorithms achieve efficient compression by exploiting the spatial correlations and statistical redundancy present in images, and context-based algorithms are the typical algorithms in lossless image coding. One key problem in context-based lossless bi-level image coding is the design of context templates: with carefully designed templates, the information provided by surrounding pixels can be used effectively. In almost all image processing applications, image data is accessed in a raster scanning manner and is treated as a 1-D integer sequence rather than as 2-D data. In this thesis, we present a quadrisection scanning method which is better than raster scanning in that more adjacent surrounding pixels are incorporated into the context templates. Based on quadrisection scanning, we develop several context templates and propose several image coding schemes for both sequential and progressive lossless bi-level image compression. Our results show that these algorithms perform better than raster-scanning-based algorithms such as JBIG1, which is used as a reference in this thesis. The application of 1-D grammar-based codes to lossless image coding is also discussed. 1-D grammar-based codes outperform LZ77/LZ78-based compression utilities for general data compression, and they are also effective in lossless image coding. Several coding schemes for bi-level image compression via 1-D grammar codes are provided in this thesis, in particular the parallel switching algorithm, which combines the power of 1-D grammar-based codes and context-based algorithms. Most of our results are comparable to or better than those afforded by JBIG1.
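The context-template idea described in this abstract (predicting each pixel from a template of already-coded neighbours and charging it the cost of the resulting conditional probability) can be illustrated with a minimal Python sketch. The raster-scan template, the toy image, and the function names below are illustrative assumptions, not the quadrisection templates or the actual coder of the thesis; the sketch only estimates the bit cost of a simple adaptive context model.

```python
import numpy as np

# Causal raster-scan context template: (row, col) offsets of neighbours
# that have already been coded when the current pixel is reached.
TEMPLATE = ((-1, -1), (-1, 0), (-1, 1), (0, -1))

def context_model_bits(img, template=TEMPLATE):
    """Estimate the coded size (in bits) of a bi-level image under a simple
    adaptive context model: each pixel is charged -log2 of its conditional
    probability given the neighbouring pixels selected by the template."""
    h, w = img.shape
    counts = np.ones((1 << len(template), 2))  # Laplace-smoothed counts per context
    bits = 0.0
    for r in range(h):
        for c in range(w):
            ctx = 0
            for dr, dc in template:
                rr, cc = r + dr, c + dc
                bit = int(img[rr, cc]) if 0 <= rr < h and 0 <= cc < w else 0
                ctx = (ctx << 1) | bit
            pixel = int(img[r, c])
            p = counts[ctx, pixel] / counts[ctx].sum()
            bits -= np.log2(p)
            counts[ctx, pixel] += 1  # adapt the model as coding proceeds
    return bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = (rng.random((64, 64)) < 0.1).astype(np.uint8)
    img = np.maximum(img, np.roll(img, 1, axis=1))  # add some spatial correlation
    print("estimated bits:", round(float(context_model_bits(img))), "raw bits:", img.size)
```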
3

Курлан, О. О., and С. В. Омельченко. "Analysis of JPEG-format image compression methods for increasing the compression level [Аналіз методів компресії зображень формату JPEG для підвищення рівня стиснення]." Thesis, ХНУРЕ, 2021. https://openarchive.nure.ua/handle/document/16486.

Abstract:
The fundamental theoretical techniques of image compression are considered and compared. Approaches for increasing the compression level of images in JPEG format are investigated. The practical part provides a software implementation of the investigated approaches and a comparative evaluation.
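The compression-level trade-off studied in this work can be reproduced with any JPEG encoder. As a small, hedged example (assuming the Pillow library and a placeholder input file name, neither of which is mentioned in the thesis), the sketch below re-encodes an image at several quality settings and reports the resulting sizes; it is not the software implementation referred to in the abstract.

```python
import io
from PIL import Image  # Pillow, assumed available

def jpeg_sizes(path, qualities=(95, 75, 50, 25, 10)):
    """Re-encode one image at several JPEG quality settings and return the
    encoded size in bytes for each; lower quality gives a higher compression level."""
    img = Image.open(path).convert("RGB")
    sizes = {}
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        sizes[q] = buf.tell()
    return sizes

if __name__ == "__main__":
    # "input.png" is a placeholder path for any test image
    for q, size in jpeg_sizes("input.png").items():
        print(f"quality={q:3d}  size={size} bytes")
```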
4

Trisiripisal, Phichet. "Image Approximation using Triangulation." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/33337.

Abstract:
An image is a set of quantized intensity values sampled at a finite set of points on a two-dimensional plane. Images are crucial to many application areas, such as computer graphics and pattern recognition, because they discretely represent the information that the human eye interprets. This thesis considers the use of triangular meshes for approximating intensity images. With the help of wavelet-based analysis, triangular meshes can be efficiently constructed to approximate the image data. The study focuses on local image enhancement and mesh simplification operations, which aim to minimize both the total error of the reconstructed image and the number of triangles used to represent it. It also presents an optimal procedure for selecting the triangle types used to represent the intensity image. Besides its applications to image and video compression, this triangular representation is potentially very useful for data storage and retrieval, and for processing tasks such as image segmentation and object recognition.
Master of Science
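To give a concrete feel for triangle-based image approximation, the sketch below samples a subset of pixels and reconstructs the image by linear interpolation over their Delaunay triangulation (scipy's LinearNDInterpolator builds that triangulation internally). This is a naive uniform-sampling stand-in, not the wavelet-guided mesh construction or the enhancement and simplification operations of the thesis; the synthetic test image and the sample count are assumptions.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator  # builds a Delaunay triangulation internally

def triangulated_approximation(img, n_samples=2000, seed=0):
    """Approximate a gray-level image by keeping only a subset of pixels and
    reconstructing the rest by linear interpolation over the Delaunay
    triangulation of the kept sample points."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # keep the four corners so the triangulation covers the whole image
    corners = np.array([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1]])
    idx = rng.choice(h * w, size=n_samples, replace=False)
    pts = np.unique(np.concatenate(
        [corners, np.column_stack(np.unravel_index(idx, (h, w)))]), axis=0)
    interp = LinearNDInterpolator(pts, img[pts[:, 0], pts[:, 1]].astype(float))
    rr, cc = np.mgrid[0:h, 0:w]
    return interp(np.column_stack([rr.ravel(), cc.ravel()])).reshape(h, w)

if __name__ == "__main__":
    img = np.fromfunction(lambda r, c: 127 * np.sin(r / 9.0) * np.cos(c / 13.0) + 128, (128, 128))
    approx = triangulated_approximation(img)
    print("RMSE of the triangulated approximation:", round(float(np.sqrt(np.mean((img - approx) ** 2))), 2))
```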
5

Husberg, Björn. "A Portable DARC Fax Service." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1373.

Abstract:

DARC is a technique for data broadcasting over the FM radio network. Sectra Wireless Technologies AB has developed a handheld DARC receiver known as the Sectra CitySurfer. The CitySurfer is equipped with a high-resolution display along with buttons and a joystick that allows the user to view and navigate through various types of information received over DARC.

Sectra Wireless Technologies AB has, among other services, also developed a paging system that enables personal message transmission over DARC. The background of this thesis is the wish to be able to send fax documents using the paging system and to view received fax documents on the CitySurfer.

The presented solution is a central PC-based fax server. The fax server is responsible for receiving standard fax transmissions and converting the fax documents before redirecting them to the right receiver in the DARC network. The topics discussed in this thesis are fax document routing, fax document conversion and fax server system design.

6

Grah, Joana Sarah. "Mathematical imaging tools in cancer research : from mitosis analysis to sparse regularisation." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273243.

Abstract:
This dissertation deals with customised image analysis tools in cancer research. In the field of biomedical sciences, mathematical imaging has become crucial in order to account for advancements in technical equipment and data storage by sound mathematical methods that can process and analyse imaging data in an automated way. This thesis contributes to the development of such mathematically sound imaging models in four ways: (i) automated cell segmentation and tracking. In cancer drug development, time-lapse light microscopy experiments are conducted for performance validation. The aim is to monitor behaviour of cells in cultures that have previously been treated with chemotherapy drugs, since atypical duration and outcome of mitosis, the process of cell division, can be an indicator of successfully working drugs. As an imaging modality we focus on phase contrast microscopy, hence avoiding phototoxicity and influence on cell behaviour. As a drawback, the common halo- and shade-off effect impede image analysis. We present a novel workflow uniting both automated mitotic cell detection with the Hough transform and subsequent cell tracking by a tailor-made level-set method in order to obtain statistics on length of mitosis and cell fates. The proposed image analysis pipeline is deployed in a MATLAB software package called MitosisAnalyser. For the detection of mitotic cells we use the circular Hough transform. This concept is investigated further in the framework of image regularisation in the general context of imaging inverse problems, in which circular objects should be enhanced, (ii) exploiting sparsity of first-order derivatives in combination with the linear circular Hough transform operation. Furthermore, (iii) we present a new unified higher-order derivative-type regularisation functional enforcing sparsity of a vector field related to an image to be reconstructed using curl, divergence and shear operators. The model is able to interpolate between well-known regularisers such as total generalised variation and infimal convolution total variation. Finally, (iv) we demonstrate how we can learn sparsity promoting parametrised regularisers via quotient minimisation, which can be motivated by generalised Eigenproblems. Learning approaches have recently become very popular in the field of inverse problems. However, the majority aims at fitting models to favourable training data, whereas we incorporate knowledge about both fit and misfit data. We present results resembling behaviour of well-established derivative-based sparse regularisers, introduce novel families of non-derivative-based regularisers and extend this framework to classification problems.
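Contribution (i) above detects round mitotic cells with the circular Hough transform. The following sketch, which assumes scikit-image and a synthetic disk as a stand-in for a cell in a phase contrast frame, shows only that basic detection step; the MitosisAnalyser pipeline, its level-set tracking, and its parameter choices are not reproduced here.

```python
import numpy as np
from skimage.draw import disk
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

if __name__ == "__main__":
    # Synthetic stand-in for a rounded mitotic cell in a microscopy frame.
    frame = np.zeros((128, 128), dtype=float)
    rr, cc = disk((40, 50), 18, shape=frame.shape)
    frame[rr, cc] = 1.0

    edges = canny(frame, sigma=2)      # edge map feeding the Hough transform
    radii = np.arange(10, 30, 2)       # candidate radii to test
    accumulators = hough_circle(edges, radii)
    _, cx, cy, found_r = hough_circle_peaks(accumulators, radii, total_num_peaks=1)
    print("detected centre (row, col):", (int(cy[0]), int(cx[0])), "radius:", int(found_r[0]))
```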
7

Tseng, Chi-Hung (曾吉宏). "A VQ-Based Image Compression for Grey-Level Image Sequences." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/29017028723748615469.

Abstract:
Master's thesis. Da-Yeh University, Department of Information Engineering (master's program), academic year 93.
A number of methods have been proposed for the compression of continuous image sequences. However, they deal only with binary images, which greatly limits their applicability. In this thesis, we propose a VQ-based method for compressing continuous grey-level images that exhibit great similarity between adjacent images. Four continuous image sequences, each consisting of 9 images of 256x256 pixels, were used to test the performance of the proposed method. Each image was first segmented into 3x3 or 4x4 blocks, and the LBG algorithm was then used to train a codebook of 512 codewords capable of delineating the features of the continuous image sequence. To further increase the compression performance, the JPEG-LS algorithm was applied to compress the codebook and the index images of the sequential images. The results show that the compression ratio achieved by the proposed method is significantly higher than that of AVI, while the quality of the reconstructed images is held at a satisfactory level. Future work will extend the method to lossless compression of medical image sequences. Keywords: vector quantization, continuous image, image compression, AVI.
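The core encoding step described here, block decomposition followed by LBG codebook training, can be sketched as follows. The code uses a plain k-means-style LBG loop on random stand-in frames and a small codebook so the demo runs quickly; the 512-codeword codebook, the JPEG-LS recompression stage, and the real image sequences of the thesis are not included, and all names are illustrative.

```python
import numpy as np

def to_blocks(img, bs=4):
    """Cut a grey-level image into non-overlapping bs x bs blocks, one row per block."""
    h, w = img.shape
    img = img[: h - h % bs, : w - w % bs]
    h, w = img.shape
    return img.reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2).reshape(-1, bs * bs).astype(float)

def lbg_codebook(blocks, size=64, iters=10, seed=0):
    """Train a VQ codebook with a plain LBG / k-means loop: assign each block to
    its nearest codeword, then move every codeword to the mean of its cell."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), size, replace=False)].copy()
    for _ in range(iters):
        d = ((blocks ** 2).sum(1)[:, None] + (codebook ** 2).sum(1)[None, :]
             - 2.0 * blocks @ codebook.T)   # squared distance to every codeword
        nearest = d.argmin(axis=1)
        for k in range(size):
            members = blocks[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, nearest

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = [rng.integers(0, 256, (256, 256)) for _ in range(3)]  # stand-in image sequence
    blocks = np.concatenate([to_blocks(f) for f in frames])
    codebook, indices = lbg_codebook(blocks, size=64)
    print("blocks:", len(blocks), "codebook:", codebook.shape, "bits per index:", int(np.log2(64)))
```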
8

Ho, Yu-An (何玉安). "A Study on Bi-Level Image Data Hiding and Image Compression." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/69085995316289719990.

Abstract:
Doctoral dissertation. National Chung Hsing University, Department of Computer Science and Engineering, academic year 97.
Data hiding, as the term itself suggests, means hiding secret data in a cover image; the result is a so-called stego-image. Reversible data hiding is a kind of data hiding technique in which not only can the secret data be extracted from the stego-image, but the cover image can also be completely rebuilt after the extraction of the secret data. Reversible data hiding is therefore the method of choice when recovery of the cover image is required. In this dissertation, we propose a high-capacity reversible data hiding scheme based on pattern substitution (PS). It gathers statistics about the occurrence frequencies of different patterns and quantifies how the frequency of occurrence differs from pattern to pattern. On top of this pattern occurrence frequency information, pattern exchange relationships can be established, and PS can thus be used to hide the data. In the extraction stage, these patterns are reversed to their original forms and an undistorted cover image is rebuilt. Binary images are a commonly used image format, for example in fax and document images. This dissertation proposes a binary image compression method, called the QLS compression method, which uses a BFT linear quadtree and logic-spectra techniques to losslessly compress a binary image. The method employs a breadth-first-traversal linear quadtree to divide the image into blocks, and then uses logic functions and spectral techniques to encode the blocks. The dissertation also presents a QLS hiding-compression method that encodes the cover image and embeds the secret data in the cover image during encoding. The stego-image created by the QLS hiding-compression method is quite similar to the cover image. Halftone images are commonly used by devices with little memory, such as printers, fax machines and cell phones. Finally, a novel reversible data hiding scheme for halftone images is presented. After rendering a multi-tone image into a halftone image by error diffusion, the proposed scheme classifies blocks according to pixel permutation in the halftone image and then generates two patterns to hide the secret data. The new scheme not only conceals secret information securely, but also fully recovers the original halftone image after the extraction of the secret information.
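The pattern-substitution scheme starts by collecting occurrence statistics for small bi-level patterns so that frequent and rare patterns can later be paired for data hiding. The sketch below shows only that statistics-gathering step on a random toy image (the block size, the toy image, and the function names are assumptions); the actual pairing, embedding and recovery logic of the dissertation is not reproduced.

```python
import numpy as np
from collections import Counter

def pattern_histogram(bi_img, bh=2, bw=2):
    """Count how often each bh x bw bit pattern occurs in a bi-level image;
    pattern-substitution hiding pairs frequent patterns with rare ones based
    on exactly this kind of occurrence statistic."""
    h, w = bi_img.shape
    counts = Counter()
    for r in range(0, h - h % bh, bh):
        for c in range(0, w - w % bw, bw):
            block = bi_img[r:r + bh, c:c + bw].ravel()
            counts["".join(str(int(b)) for b in block)] += 1
    return counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = (rng.random((64, 64)) < 0.2).astype(int)   # toy bi-level cover image
    print(pattern_histogram(toy).most_common(4))
```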
9

Wang, Her-Fa (王和發). "A Refined VQ Gray-Level Image Compression Method and A Low Lossy Color Image Compression Method." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/58533890694910759475.

Abstract:
Master's thesis. Chaoyang University of Technology, Department of Information Management (master's program), academic year 92.
This thesis proposes two image compression methods. The first, a refined VQ gray-level image compression method, modifies the traditional VQ-based image compression method for encoding a gray-level image: it further losslessly encodes the compressed data generated by the traditional VQ-based method. Although the PSNR of the image decompressed by the proposed method is the same as that of the traditional VQ-based method, the refined VQ gray-level image compression method is more efficient in memory storage. The second method integrates quadtree decomposition, the standard deviation, and quadratic regression equations to compress a color image. Let f be a YIQ-formatted color image. This method employs a set of quadratic regression equations to describe the relationships between the color components Y and I, as well as Y and Q, of the pixels in f. It then records only the coefficients of the quadratic regression equations and the Y component values of all the pixels in f, so as to reduce the memory space required to hold f. Generally speaking, when a high-quality decompressed image is required, the proposed method performs better than the JPEG method. Moreover, the proposed method usually also compresses images with slight color variation among adjacent pixels better. The blocking and Gibbs effects occurring in images decoded by the proposed method are much weaker than those appearing in images decompressed by the JPEG method.
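The key idea of the second method, fitting a quadratic relationship between the Y plane and each chrominance plane so that only three coefficients per equation plus the Y values need to be stored, can be sketched as follows. The synthetic data, np.polyfit, and the function names are assumptions standing in for the thesis's actual regression and quadtree machinery.

```python
import numpy as np

def fit_chroma_regression(Y, C):
    """Fit C ~ a*Y^2 + b*Y + c by least squares, so that a chrominance plane can
    be summarised by three coefficients plus the stored Y plane."""
    return np.polyfit(Y.ravel(), C.ravel(), deg=2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = rng.random((64, 64))
    # synthetic I plane that roughly follows a quadratic function of Y
    I = 0.3 * Y ** 2 - 0.5 * Y + 0.1 + rng.normal(0.0, 0.01, Y.shape)
    coeffs = fit_chroma_regression(Y, I)
    I_hat = np.polyval(coeffs, Y)
    print("coefficients:", np.round(coeffs, 3),
          "mean abs error:", round(float(np.abs(I - I_hat).mean()), 4))
```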
10

Xia, Wen Nan (夏文南). "Image compression with classified interpolative multi-level block truncation coding." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/25322635484554056715.

11

Tseng, Feng-Fu (曾豐富). "A Study on Bi-Level Image Compression Using Improved MMR Coding." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/31757504962627555274.

Abstract:
Master's thesis. National Kaohsiung First University of Science and Technology, Institute of Computer and Communication Engineering, academic year 90.
In this thesis, we review several lossless compression algorithms for bi-level image compression to enhance the understanding of bi-level image coding, and we propose an improved version of modified MMR coding. MH (Modified Huffman) coding is a one-dimensional run-length coding (1-D RLC) scheme. MR (Modified READ) coding is a mixture of one- and two-dimensional (1-D and 2-D) coding, and MMR coding is based only on the 2-D MR algorithm. However, MR and MMR coding are line-to-line reference models: the compression codewords are based on the distance between the next coding pixel and the reference pixel. Modified MMR coding inherits the codewords of MMR coding, except that it uses macroblock-based references instead of line-to-line references; it is an effective binary shape coding technique for digital video compression. The improved MMR coding proposed here modifies the procedure of the modified MMR algorithm to reduce coding complexity. MATLAB is used to simulate both improved MMR coding and modified MMR coding, and the experimental results show that both achieve lossless compression without any image distortion. Comparing the compression rate (CR) of improved MMR and modified MMR coding, we find that improved MMR coding achieves a higher compression rate and therefore requires less storage space than modified MMR coding. The improved MMR algorithm is also easier to implement in MATLAB.
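MH coding, mentioned above, is a one-dimensional run-length code: each scan line of a bi-level image is turned into alternating runs of white and black pixels before Huffman codewords are assigned. As a toy, hedged illustration of that first step only (without the MH/MR/MMR codeword tables or the macroblock references of modified MMR), here is a simple run-length encoder and decoder.

```python
def rle_encode(row):
    """Turn a binary scan line into (value, run_length) pairs; MH coding would
    then map the run lengths of white and black runs to Huffman codewords."""
    runs = []
    prev, length = row[0], 1
    for bit in row[1:]:
        if bit == prev:
            length += 1
        else:
            runs.append((prev, length))
            prev, length = bit, 1
    runs.append((prev, length))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original scan line."""
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

if __name__ == "__main__":
    row = [0] * 12 + [1] * 5 + [0] * 20 + [1] * 3
    runs = rle_encode(row)
    assert rle_decode(runs) == row   # the round trip is lossless
    print(runs)
```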
12

Augustine, Jacob. "Switching Theoretic Approach To Image Compression." Thesis, 1996. http://etd.iisc.ernet.in/handle/2005/1898.

13

Huang, Chen-Sheng (黃振生). "A Study and Software Implementation on International Standard for Bi-level Image Compression - JBIG." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/72501704640641236352.

14

Wang, Yi Jen (王怡人). "The Research of System-Level Design—The Case Study of Implementation of Fractal Image Compression Using SystemC." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/55934070356781292078.

Abstract:
Master's thesis. Chang Jung Christian University, Graduate Institute of Business and Operations Management, academic year 91.
The traditional chip design methodology employs register-transfer-level (RTL) design: engineers use hardware description languages such as Verilog or VHDL to design the chip. However, with advances in semiconductor processes and increasing on-chip density, and under the influence of the System-on-Chip (SoC) design methodology, the traditional methodology can no longer satisfy the new trend that emphasises user requirements and time-to-market. Furthermore, demands from IA products, such as intelligent mobile devices, for video/audio capabilities and high image quality are gradually increasing. The key to the core design methodology of IA products is to implement the algorithm rapidly and to create an optimised design within a closed, low-scalability system architecture. As system complexity increases and design time decreases, a fast executable specification and suitable design tools gradually become preconditions of chip design. For these reasons, the objective of this research is to examine the system-level design methodology; the implementation of a fractal image compression algorithm using SystemC, a system-level design language, is adopted as the case study. The emphasis of future chip design methodologies will be on SoC, based on the derivation of the algorithm and on system integration.
15

Lin, Sheng-Yen (林聖晏). "An Effective LDPCA based Lossless Compression Scheme for Encrypted Gray-level Images." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/uk46aw.

Abstract:
Master's thesis. National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 105.
Lossless compression of encrypted images can be achieved through Slepian-Wolf (SW) coding, and the compression performance depends strongly on how data dependency is exploited during decoding. In this thesis, to improve the compression performance, the statistics of the current decoded subimage are estimated from the previously decoded subimages at the same resolution level and are then further refined using the decoded bit planes. In addition, an efficient approach for losslessly compressing encrypted images based on low-density parity-check accumulate (LDPCA) codes is proposed and realized. Owing to its intricate procedures, LDPCA decoding is the most time-consuming task in our scheme. A parallelized sum-product algorithm for LDPCA decoding based on CUDA is therefore designed, and an early jump-out detection mechanism is proposed to avoid wasting computational resources on unnecessary operations. Experimental results show that the compression performance is improved by about 7% on average compared with the state-of-the-art lossless compression scheme using SW coding, and that decoding with the parallel LDPCA decoder is about 40 times faster than with the sequential LDPCA decoder.
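The scheme above works on the encrypted gray-level image plane by plane, refining its statistics as more bit planes are decoded. The sketch below shows only that bit-plane view of an 8-bit image, on a random toy array and with hypothetical function names; the SW/LDPCA coding, the CUDA decoder and the early jump-out test are not reproduced.

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit gray-level image into its 8 bit planes, MSB first."""
    return [((img >> b) & 1).astype(np.uint8) for b in range(7, -1, -1)]

def from_bit_planes(planes):
    """Reassemble the original 8-bit image from its bit planes."""
    img = np.zeros_like(planes[0], dtype=np.uint8)
    for b, plane in zip(range(7, -1, -1), planes):
        img |= (plane << b).astype(np.uint8)
    return img

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (8, 8), dtype=np.uint8)
    planes = bit_planes(img)
    assert np.array_equal(from_bit_planes(planes), img)   # exact reconstruction
    print("plane means (MSB to LSB):", [float(p.mean()) for p in planes])
```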
