Dissertations / Theses on the topic 'Compression scheme'

To see the other types of publications on this topic, follow the link: Compression scheme.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Compression scheme.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Lim, Seng. "Image compression scheme for network transmission." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA294959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Yun, Mårten Sjöström, Ulf Jennehag, Roger Olsson, and Sylvain Tourancheau. "Subjective Evaluation of an Edge-based Depth Image Compression Scheme." Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18539.

Full text
Abstract:
Multi-view three-dimensional television requires many views, which may be synthesized from two-dimensional images with accompanying pixel-wise depth information. This depth image, which typically consists of smooth areas and sharp transitions at object borders, must be consistent with the acquired scene in order for synthesized views to be of good quality. We have previously proposed a depth image coding scheme that preserves significant edges and encodes smooth areas between these. An objective evaluation considering the structural similarity (SSIM) index for synthesized views demonstrated an advantage of the proposed scheme over the High Efficiency Video Coding (HEVC) intra mode in certain cases. However, there were some discrepancies between the outcomes of the objective evaluation and of our visual inspection, which motivated this study of subjective tests. The test was conducted according to the ITU-R BT.500-13 recommendation with stimulus-comparison methods. The results from the subjective test showed that the proposed scheme performs slightly better than HEVC, with statistical significance at the majority of the tested bit rates for the given contents.
APA, Harvard, Vancouver, ISO, and other styles
3

Mensmann, Jörg, Timo Ropinski, and Klaus Hinrichs. "A GPU-Supported Lossless Compression Scheme for Rendering Time-Varying Volume Data." University of Münster, Germany, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92867.

Full text
Abstract:
Since the size of time-varying volumetric data sets typically exceeds the amount of available GPU and main memory, out-of-core streaming techniques are required to support interactive rendering. To deal with the performance bottlenecks of hard-disk transfer rate and graphics bus bandwidth, we present a hybrid CPU/GPU scheme for lossless compression and data streaming that combines a temporal prediction model, which makes it possible to exploit coherence between time steps, and variable-length coding with a fast block compression algorithm. This combination becomes possible by exploiting the CUDA computing architecture for unpacking and assembling data packets on the GPU. The system allows near-interactive performance even for rendering large real-world data sets with a low signal-to-noise ratio, while not degrading image quality. It uses standard volume raycasting and can be easily combined with existing acceleration methods and advanced visualization techniques.
APA, Harvard, Vancouver, ISO, and other styles
4

Bernat, Andrew. "Which partition scheme for what image?, partitioned iterated function systems for fractal image compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ65602.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kadri, Imen. "Controlled estimation algorithms of disparity map using a compensation compression scheme for stereoscopic image coding." Thesis, Paris 13, 2020. http://www.theses.fr/2020PA131002.

Full text
Abstract:
Nowadays, 3D technology is in ever-growing demand because stereoscopic imaging creates a sensation of immersion. However, the price of this realistic representation is a doubling of the information needed for storage or transmission compared to a 2D image, because a stereoscopic pair results from the generation of two views of the same scene. This thesis focuses on stereoscopic image coding and in particular on improving the disparity map estimation when using the Disparity Compensated Compression (DCC) scheme. Classically, when using a block matching algorithm with DCC, a disparity map is estimated between the left image and the right one, and a predicted image is then computed. The difference between the original right view and its prediction is called the residual error; the latter, after encoding and decoding, is injected to reconstruct the right view by compensation (i.e. refinement). Our first algorithm takes this refinement into account when estimating the disparity map. This gives a proof of concept showing that selecting the disparity according to the compensated image instead of the predicted one is more efficient, but at the expense of increased numerical complexity. To deal with this shortcoming, a simplified model of how the JPEG coder applied to the residual error (namely the quantization of the DCT components) interacts with the compensation is proposed; this modelling not only reduces the computational complexity but also improves the quality of the decoded stereoscopic image in the DCC context. In the last part, a disparity map estimation minimizing a joint bitrate-distortion metric is proposed, based on the bitrate needed for encoding the disparity map and the distortion of the predicted view, by combining two existing stereoscopic image coding algorithms in a DCC scheme.
APA, Harvard, Vancouver, ISO, and other styles
6

Philibert, Manon. "Cubes partiels : complétion, compression, plongement." Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0403.

Full text
Abstract:
Partial cubes (aka isometric subgraphs of hypercubes) are a fundamental class in metric graph theory. They comprise many important graph classes (trees, median graphs, tope graphs of complexes of oriented matroids, etc.) arising from different areas of research such as discrete geometry, combinatorics, and geometric group theory. First, we investigate the structure of partial cubes of VC-dimension 2. We show that those graphs can be obtained via amalgams from even cycles and full subdivisions of complete graphs. This decomposition allows us to obtain various characterizations. In particular, any partial cube of VC-dimension 2 can be completed to an ample partial cube of VC-dimension 2. Then, we show that the tope graphs of oriented matroids and of complexes of uniform oriented matroids can also be completed to ample partial cubes of the same VC-dimension. Using a result of Moran and Warmuth, we establish that those classes satisfy the conjecture of Floyd and Warmuth, one of the oldest open problems in computational machine learning; in particular, they admit (improper labeled) compression schemes of size equal to their VC-dimension. Next, we describe a proper labeled compression scheme of size d for complexes of oriented matroids of VC-dimension d, generalizing the result of Moran and Warmuth for ample sets. Finally, we give a characterization, via excluded pc-minors and via forbidden isometric subgraphs, of the partial cubes isometrically embeddable into the grid \mathbb{Z}^2 and into the cylinder P_n \square C_{2k} for some n and k > 4.
APA, Harvard, Vancouver, ISO, and other styles
7

Samuel, Sindhu. "Digital rights management (DRM) : watermark encoding scheme for JPEG images." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-09122008-182920/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bekkouche, Hocine. "Synthèse de bancs de filtres adaptés, application à la compression des images." Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00345288.

Full text
Abstract:
The work developed in this thesis concerns multiresolution decompositions in a lifting-scheme framework, applied to image compression. First, a decomposition structure consisting of a "generalized lifting scheme" is proposed. This scheme exploits all the information available at the decoder in the prediction step, made possible by adding an extra prediction filter with respect to the classical lifting structure. The proposed scheme is then combined with two adaptation methods. The first, called GAE, assumes global stationarity of the image, while the second, LAE, assumes only local stationarity; in the latter case the prediction filters are adaptive. Three applications of these methods to image compression are then proposed. First, a lossless-compression performance comparison is carried out on synthetic Gaussian texture images with local and global stationarity (satisfying the hypotheses above). On these signals, first-order entropy measurements show that the proposed methods offer an average coding gain of 0.5 bpp (GAE) and 0.8 bpp (LAE) over the (9,7) wavelet decomposition, of 0.8 bpp (GAE) and 1.11 bpp (LAE) over the (5,3), and of 0.41 bpp (GAE) and 0.65 bpp (LAE) over the method of Gerek and Çetin. The second application concerns the lossless coding of real images of various kinds; the gains over the state of the art turn out to be smaller than those obtained for synthetic images. Finally, the last application addresses progressive coding and lossy coding. For lossy compression, we modified the LAE method to overcome divergence problems caused by the decoder's inability to reconstruct the prediction filters from quantized samples; it proves more effective than using the usual fixed-length (9,7) and (5,3) filters.
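
For readers unfamiliar with the classical structure being generalized here, the following minimal Python sketch shows one predict/update step of a LeGall (5,3)-style lifting transform on a 1-D signal. It illustrates only the standard scheme (with periodic boundary handling assumed for brevity), not the generalized lifting or the GAE/LAE adaptation proposed in the thesis.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of a (5,3)-style lifting transform on a 1-D signal of even
    length: split into even/odd samples, predict (detail), update (approximation).
    Periodic boundary handling is used for simplicity."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd sample minus the average of its even neighbors.
    d = odd - 0.5 * (even + np.roll(even, -1))
    # Update step: approximation = even sample plus a quarter of neighboring details.
    a = even + 0.25 * (d + np.roll(d, 1))
    return a, d

def lifting_53_inverse(a, d):
    """Invert the step exactly by undoing update then predict."""
    even = a - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([10, 12, 14, 13, 11, 9, 8, 10], dtype=float)
approx, detail = lifting_53_forward(signal)
assert np.allclose(lifting_53_inverse(approx, detail), signal)
```

The generalized scheme described above adds a further prediction filter that also uses information available at the decoder, which this baseline sketch does not model.
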
APA, Harvard, Vancouver, ISO, and other styles
9

Ali, Azad [Verfasser], Neeraj [Akademischer Betreuer] Suri, Christian [Akademischer Betreuer] Becker, Stefan [Akademischer Betreuer] Katzenbeisser, Andy [Akademischer Betreuer] Schürr, and Marc [Akademischer Betreuer] Fischlin. "Fault-Tolerant Spatio-Temporal Compression Scheme for Wireless Sensor Networks / Azad Ali ; Neeraj Suri, Christian Becker, Stefan Katzenbeisser, Andy Schürr, Marc Fischlin." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2017. http://d-nb.info/1127225405/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Průša, Zdeněk. "Efektivní nástroj pro kompresi obrazu v jazyce Java." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217433.

Full text
Abstract:
This diploma thesis deals with lossy compression of digital images. Lossy compression in general introduces some kind of distortion into the resulting image; the distortion should not be disturbing or, in the better case, even noticeable. Image analysis uses a process called transformation, and the selection of relevant coefficients a process called coding. Image quality can be evaluated by objective or subjective methods. An encoder is introduced and implemented in this work. The encoder uses a two-dimensional wavelet transform and the SPIHT algorithm for coefficient coding, and the wavelet transform computation is accelerated by the lifting scheme. The coder can process the color information of images using a modified version of the original SPIHT algorithm. The JAVA programming language was used for the implementation; object-oriented design principles were followed, so the program is easy to extend. Demonstration pictures show the effectiveness and the characteristic type of distortion of the proposed coder at high compression rates.
APA, Harvard, Vancouver, ISO, and other styles
11

Tohidypour, Hamid Reza. "Complexity reduction schemes for video compression." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/60250.

Full text
Abstract:
With consumers having access to a plethora of video-enabled devices, efficient transmission of video content with different quality levels and specifications has become essential. The primary way of achieving this task is the simulcast approach, where different versions of the same video sequence are encoded and transmitted separately. This approach, however, requires significantly large amounts of bandwidth. Another solution is to use Scalable Video Coding (SVC), where a single bitstream consists of a base layer (BL) and one or more enhancement layers (ELs). At the decoder side, based on bandwidth or type of application, the appropriate part of an SVC bitstream is used/decoded. While SVC enables delivery of different versions of the same video content within one bitstream at a reduced bitrate compared to the simulcast approach, it significantly increases coding complexity. However, the redundancies introduced between the different versions of the same stream allow for complexity reduction, which in turn results in simpler hardware and software implementations and facilitates the wide adoption of SVC. This thesis addresses complexity reduction for spatial scalability, SNR/quality/fidelity scalability, and multiview scalability for the High Efficiency Video Coding (HEVC) standard. First, we propose a fast method for motion estimation for spatial scalability, followed by a probabilistic method for predicting block partitioning for the same scalability. Next, we propose a content-adaptive complexity reduction method, a mode prediction approach based on statistical studies, and a Bayesian mode prediction method, all for quality scalability. An online-learning-based mode prediction method is also proposed for quality scalability. For the same bitrate and quality, our methods outperform the original SVC approach by 39% for spatial scalability and by 45% for quality scalability. Finally, we propose a content-adaptive complexity reduction scheme and a Bayesian mode prediction scheme; an online-learning-based complexity reduction scheme, which incorporates the two other schemes, is then proposed for 3D scalability. Results show that our methods reduce the complexity by approximately 23% compared to the original 3D approach for the same quality/bitrate. In summary, our methods can significantly reduce the complexity of SVC, enabling its market adoption.
APA, Harvard, Vancouver, ISO, and other styles
12

Fgee, El-Bahlul. "A comparison of voice compression using wavelets with other compression schemes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ39651.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Han, Bin. "Subdivision schemes, biorthogonal wavelets and image compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0013/NQ34774.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

方惠靑 and Wai-ching Fong. "Perceptual models and coding schemes for image compression." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31235785.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Fong, Wai-ching. "Perceptual models and coding schemes for image compression /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18716064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Dinakenyane, Otlhapile. "SIQXC : Schema Independent Queryable XML Compression for smartphones." Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/7184/.

Full text
Abstract:
The explosive growth of XML use over the last decade has led to a lot of research on how best to store and access it. This growth has resulted in XML being described as a de facto standard for storage and exchange of data over the web. However, XML has high redundancy because of its self-describing nature, making it verbose, and this verbosity poses a storage problem, which has led to much research devoted to XML compression. The problem has become of even more interest since the use of resource-constrained devices is also on the rise. These devices are limited in storage space and processing power and also have finite energy; therefore, they cannot cope with storing and processing large XML documents. Queryable XML compression methods could be a solution, but none of them has a query processor that runs on such devices. Currently, wireless connections are used to alleviate the problem, but they have adverse effects on battery life and are therefore not a sustainable solution. This thesis describes an attempt to address this problem by proposing a queryable compressor (SIQXC) with a query processor that runs in a resource-constrained environment, thereby lowering wireless connection dependency while alleviating the storage problem. It applies a novel, simple 2-tuple integer encoding system, clustering, and gzip. SIQXC achieves an average compression ratio of 70%, which is higher than most queryable XML compressors, and also supports a wide range of XPATH operators, making it a competitive approach. It was tested through a practical implementation evaluated against the real data that is usually used for XML benchmarking. The evaluation covered the compression ratio, compression time, query evaluation accuracy, and response time. SIQXC allows users to locally store and manipulate, to some extent, the otherwise verbose XML on their smartphones.
APA, Harvard, Vancouver, ISO, and other styles
17

Kalajdzievski, Damjan. "Measurability Aspects of the Compactness Theorem for Sample Compression Schemes." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23133.

Full text
Abstract:
In 1998, it was proved by Ben-David and Litman that a concept space has a sample compression scheme of size $d$ if and only if every finite subspace has a sample compression scheme of size $d$. In this compactness theorem, measurability of the hypotheses of the constructed sample compression scheme is not guaranteed; at the same time, measurability of the hypotheses is a necessary condition for learnability. In this thesis we discuss when a sample compression scheme, created from compression schemes on finite subspaces via the compactness theorem, has measurable hypotheses. We show that if $X$ is a standard Borel space with a $d$-maximum and universally separable concept class $\mathcal{C}$, then $(X, \mathcal{C})$ has a sample compression scheme of size $d$ with universally Borel measurable hypotheses. Additionally, we introduce a new variant of compression scheme called a copy sample compression scheme.
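
As background for readers, and not quoted from the thesis itself, a labeled sample compression scheme of size $d$ is commonly formalized along the following lines:

```latex
% Background sketch: a labeled sample compression scheme of size d for a
% concept class \mathcal{C} on a domain X is a pair of maps (\kappa, \rho)
% such that, for every finite \mathcal{C}-realizable labeled sample
% S \subset X \times \{0,1\},
\[
  \kappa(S) \subseteq S, \qquad |\kappa(S)| \le d, \qquad
  h = \rho(\kappa(S)) \ \text{satisfies} \ h(x) = y \ \text{for all } (x,y) \in S .
\]
% Measurability of the reconstructed hypotheses h is exactly the property the
% thesis investigates for schemes built via the compactness theorem.
```
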
APA, Harvard, Vancouver, ISO, and other styles
18

Kovvuri, Prem. "Investigation of Different Video Compression Schemes Using Neural Networks." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/320.

Full text
Abstract:
Image and video compression has great significance in the communication of motion pictures and still images. The need for compression has resulted in the development of various techniques, including transform coding, vector quantization, and neural networks. In this thesis, neural-network-based methods are investigated to achieve good compression ratios while maintaining image quality. Parts of this investigation include motion detection and weight retraining. An adaptive technique is employed to improve the video frame quality for a given compression ratio by frequently updating the weights obtained from training; more specifically, weight retraining is performed only when the error exceeds a given threshold value. Image quality is measured objectively using the peak signal-to-noise ratio performance measure. Results show the improved performance of the proposed architecture compared to existing approaches. The proposed method is implemented in MATLAB, and the results obtained, such as compression ratio versus signal-to-noise ratio, are presented.
APA, Harvard, Vancouver, ISO, and other styles
19

Masupe, Shedden. "Low power VLSI implementation schemes for DCT-based image compression." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/12604.

Full text
Abstract:
The Discrete Cosine Transform (DCT) is the basis for current image and video standards such as H.261, JPEG and MPEG. Since the DCT involves matrix multiplication, it is a very computationally intensive operation. Matrix multiplication entails repetitive sums of products which are carried out numerous times during the DCT computation; therefore, as a result of the multiplications, a significant amount of switching activity takes place during the DCT process. This thesis proposes a number of new implementation schemes that reduce the switching capacitance within a DCT processor for either a JPEG or an MPEG environment. A number of generic schemes for low-power VLSI implementation of the DCT are presented. The schemes target reducing the effective switched capacitance within the datapath section of a DCT processor. Switched capacitance is reduced through manipulation and exploitation of correlation in pixel and cosine coefficients during the computation of the DCT coefficients. The first scheme concurrently processes blocks of cosine coefficients and pixel values during the multiplication procedure, with the aim of reducing the total switched capacitance within the multiplier circuit. The coefficients are presented to the multiplier inputs as a sequence ordered according to the bit correlation between successive cosine coefficients; the ordering of the cosine coefficients is applied to the columns, hence the scheme is referred to as column-based processing. Column-based processing exhibits power reductions of up to 50% within the multiplier unit. Another scheme, termed order-based, orders the cosine coefficients based on row segments and also utilises bit correlation between successive cosine coefficients; its effectiveness is reflected in power savings of up to 245. The final scheme is based on manipulating the data representation of the cosine coefficients, through cosine word coding, in order to allow a shift-only computational process. This eliminates the need for the multiplier unit, which poses a significant overhead in terms of power consumption in the processing element. A maximum power saving of 41% was achieved with this implementation.
APA, Harvard, Vancouver, ISO, and other styles
20

Solé, Rojals Joel. "Optimization and generalization of lifting schemes: application to lossless image compression." Doctoral thesis, Universitat Politècnica de Catalunya, 2006. http://hdl.handle.net/10803/6897.

Full text
Abstract:
This Ph.D. dissertation addresses multi-resolution image decomposition, a key issue in signal processing that in recent years has contributed to the emergence of the JPEG2000 image compression standard. JPEG2000 incorporates many interesting features, mainly due to the discrete wavelet transform stage and to the EBCOT entropy coder.

Wavelet analysis performs multi-resolution decompositions that decorrelate the signal and separate information into useful frequency bands, allowing flexible post-coding. In JPEG2000, the decomposition is computed through the lifting scheme, the so-called second-generation wavelets. This fact has focused the community's interest on this tool. Many works have recently been proposed in which lifting is modified, improved, or included in a complete image coding algorithm.

This dissertation follows that research line. Lifting is analyzed, proposals are made within the scheme, and their possibilities are explored. Image compression is the main objective, and it is principally assessed by coding the transformed signal with the EBCOT and SPIHT coders. Starting from this context, the work diverges along two distinct paths, the linear and the nonlinear one.

The linear lifting filter construction is based on the idea of quadratic interpolation and the underlying linear restriction due to the wavelet transform coefficients. The result is a flexible framework that allows the creation of new transforms using different criteria and that may adapt to the image statistics.

The nonlinear part is founded on the adaptive lifting scheme, which is extensively analyzed; as a consequence, a generalization of the lifting is proposed. The discrete version of the generalized lifting is developed, leading to filters that achieve good compression results, especially for biomedical and remote sensing images.
APA, Harvard, Vancouver, ISO, and other styles
21

Aburas, Abdul Razag Ali. "Data compression schemes for pattern recognition in digital images using fractals." Thesis, De Montfort University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Mo, Ching-Yuan, and 莫清原. "Image Compression Technique Using Fast Divisive Scheme." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/07222792958381290250.

Full text
Abstract:
Master's thesis
National Pingtung University of Science and Technology
Department of Management Information Systems
99
In the field of image compression, LBG is a fast, easily understood method with a simple structure, and its post-training compression quality is acceptable, so it is considered an important technique in vector quantization (VQ). Another divisive algorithm is the cell divisive algorithm. Its structure is even simpler, because it removes the convergence check of LBG and instead adds or subtracts a value to each vector after splitting one codevector into two. The whole procedure is faster, but the resulting PSNR is not satisfactory. This thesis proposes a faster divisive vector quantization algorithm based on the cell divisive algorithm. To avoid the degraded compression quality caused by the lack of optimized training, the LBG method is also applied in the algorithm to improve the PSNR. The goal is an algorithm that is easy to construct, compresses quickly, and delivers well-trained quality.
APA, Harvard, Vancouver, ISO, and other styles
23

Minz, Manoranjan. "Efficient Image Compression Scheme for Still Images." Thesis, 2014. http://ethesis.nitrkl.ac.in/6306/1/110EC0172-2.pdf.

Full text
Abstract:
Raw image files take a large amount of disk space, which can be a huge disadvantage for storage or transmission; for this reason an efficient image compression technique is required. Image compression has been a widely addressed research area for many years and many compression standards have been proposed, but there is still scope for higher compression with better-quality reconstruction. The Discrete Cosine Transform (DCT) is used for compression in the very popular JPEG standard. The first section of this thesis analyses compression using the DCT transform method, and results obtained from experiments and the execution of programs in MATLAB are shown. There are many compression techniques, but a technique that is better, simple to implement, and efficient is still needed to suit user requirements. In the second section, a new lossless variable-length coding method, inspired by the popular Huffman coding technique, is proposed for image compression and decompression. This new variable-length coding technique is simple to implement, and the resulting compressed file uses comparatively less memory than the original image. Using the new variable-length coding technique, a software algorithm has been written and implemented for compression and decompression of an image on a MATLAB platform.
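
For orientation, the sketch below is a minimal Python implementation of classical Huffman coding over an image's pixel values, the technique that the proposed variable-length code is said to be inspired by; it is not the new coding method of the thesis.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix-code table {symbol: bitstring} from an iterable of
    symbols (e.g. the pixel values of an image)."""
    freq = Counter(symbols)
    # Each heap entry: (frequency, unique tie-breaker, {symbol: code_so_far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol image
        return {next(iter(freq)): "0"}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)     # two least frequent subtrees
        f2, i2, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]

pixels = [12, 12, 12, 40, 40, 200, 12, 40, 12]
table = huffman_code(pixels)
encoded = "".join(table[p] for p in pixels)
print(table, encoded)
```
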
APA, Harvard, Vancouver, ISO, and other styles
24

Shiu, Pei-Feng, and 徐培峯. "A DCT based watermarking scheme surviving JPEG compression." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/02411822415288436323.

Full text
Abstract:
Master's thesis
Providence University
Department of Computer Science and Information Engineering
99
In this thesis, a DCT-based watermarking technique is proposed. The scheme is designed to increase the robustness of the hidden watermark so that it can withstand JPEG compression attacks. To achieve this objective, the proposed scheme embeds the watermark into the DCT coefficients. To preserve the visual quality of watermarked images, only low-frequency DCT coefficients are selected to carry the hidden watermark, using the concept of the mathematical remainder. To enhance the robustness of the watermark, a voting mechanism is applied in the scheme. Experimental results confirm that the robustness of the hidden watermark against JPEG compression with the proposed scheme is better than that of Lin et al.'s scheme and Patra et al.'s scheme.
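
The remainder-based embedding idea can be illustrated with a generic quantization-index-modulation-style rule on a single low-frequency DCT coefficient; the step size q, the 1/4 and 3/4 remainder targets, and the omission of the voting mechanism are all assumptions of this sketch, not the exact rule used in the thesis.

```python
def embed_bit(coeff, bit, q=24.0):
    """Move a DCT coefficient onto the nearest point whose remainder mod q
    encodes the watermark bit: remainder q/4 for bit 0, 3q/4 for bit 1."""
    target = (q / 4.0) if bit == 0 else (3.0 * q / 4.0)
    k = round((coeff - target) / q)
    return k * q + target

def extract_bit(coeff, q=24.0):
    """Decide the bit from the coefficient's remainder modulo q."""
    r = coeff % q
    return 0 if abs(r - q / 4.0) < abs(r - 3.0 * q / 4.0) else 1

c = 37.3
for b in (0, 1):
    c_marked = embed_bit(c, b)
    assert extract_bit(c_marked) == b          # exact recovery without attack
    assert extract_bit(c_marked + 4.0) == b    # tolerates perturbations below q/4
```
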
APA, Harvard, Vancouver, ISO, and other styles
25

Tseng, Wei-Rong, and 曾緯榮. "Digital Image Watermarking Based on Fractal Compression Scheme." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/22085167290982840415.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Department of Computer and Communication Engineering
89
At the end of the 20th century, the digital revolution brought limitless convenience to people. The Internet has taken the place of conventional media and has become the most efficient medium, and creative works reach people in rich and colorful forms through the Internet and multimedia. The protection of intellectual property is therefore very important: creators and copyright owners do not want their work to be plagiarized. Digital watermarking technology has received attention as a way to protect every lawful owner's rights, and more and more researchers are devoted to the study of this issue. In this thesis, our algorithm is a new approach based on the fractal compression scheme. Previous techniques embed the watermark into the position parameters and rotation type of the fractal code during the search procedure of fractal compression. We instead adjust the gray-mapping transform parameters by means of a sub-optimal least-squares approach in order to embed the watermark into the fractal code. Compared with previous techniques, the algorithm resists JPEG compression better, and it can also withstand overlapping, smearing, noise, and various other attacks.
APA, Harvard, Vancouver, ISO, and other styles
26

Kuo, Ta-Jung, and 郭大榮. "A Hybrid Coding Scheme for Color Image Compression." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/18416180146372995425.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
93
Two hybrid image-coding schemes are proposed that combine the advantages of four coding schemes: BTC, VQ, DCT coding, and predictive coding. Experiments show that the hybrid coding schemes obtain a high compression ratio, reduced VQ codebook search time, and competitive image quality. BTC has low computational complexity and preserves edges; DCT provides high image quality and a high compression ratio; VQ provides a high compression ratio with moderate fidelity. The input image is first coded through BTC to generate a bit-map and both high-mean and low-mean sub-images. The high-mean and low-mean sub-images are encoded through the DCT coding scheme. The predictive scheme is used to reduce the required bit rate, because neighboring blocks in a highly correlated image tend to be similar. The bit map generated by BTC is encoded by VQ and a block predictive coding scheme. In particular, using block predictive coding on the bit map not only reduces the bit rate but also avoids the codebook search for about 25% of the blocks compared with using VQ alone, since a two-bit indicator is simply assigned to represent such a block. For coding color images, a common codebook is used to encode the three color-component planes in order to reduce the storage space for the codebook without any obvious degradation in image quality.
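
As a reminder of the BTC front end that the hybrid coder builds on, the sketch below computes the bit-map and the high-mean/low-mean values for one block in the absolute-moment BTC style; block handling details are assumptions, and the DCT, VQ, and predictive stages are not shown.

```python
import numpy as np

def btc_block(block):
    """Block truncation coding of one block: a binary bit-map plus two
    reconstruction levels (low mean and high mean)."""
    block = np.asarray(block, dtype=float)
    mean = block.mean()
    bitmap = block >= mean                  # 1 where the pixel is above the mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap.astype(np.uint8), low, high

def btc_reconstruct(bitmap, low, high):
    """Rebuild the block from the bit-map and the two levels."""
    return np.where(bitmap == 1, high, low)

block = np.array([[12, 15, 200, 210],
                  [14, 16, 205, 215],
                  [13, 15, 198, 207],
                  [12, 14, 202, 209]])
bitmap, low, high = btc_block(block)
print(bitmap, low, high)
print(btc_reconstruct(bitmap, low, high))
```
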
APA, Harvard, Vancouver, ISO, and other styles
27

Cheng, Chao-hsun, and 鄭昭勳. "Compression Scheme for waveform of Hardware Design Verification." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/27107029715915719378.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
89
In VLSI circuit design, functional verification has become an important task due to the rapid growth of circuit functionality in many consumer and industrial products. During circuit simulation, a very large trace file is written, containing value changes for every node in the design or in a subsystem within the design. Upon completion, the designer can inspect the entire simulation history through a waveform tool to carry out the verification tasks. Verilog's VCD file format has created a competitive market for waveform tools, which has greatly improved their quality. However, waveform data in the VCD file format is very large. Common compression algorithms may be used to decrease the file size of a waveform database, but they also consume a significant amount of computation power. Here we adopt a new idea that makes use of the properties of waveform data. By exploiting the HDL source code at compile time, we try to find hints to guide the compression: a signal dependency file can be created to contain the signal dependency rules found in the source code of the circuit design. "Time-value separation" is the key idea of our compression techniques. All signal transitions are separated into time sections and value sections in transition order, and our compression ideas are applied to each section separately. On decompression, the two parts are restored and merged back in the original transition order. In the time section, our approach is based on prediction strategies; in the value section, the main idea is to replace the signal value by the corresponding behavior function of that signal. In our experiments, we achieve compression ratios of roughly 50% down to 20% of the size of the target VCD-format waveform database.
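
The time-value separation idea can be pictured with a small, hypothetical list of value-change records: the sketch below splits them into a time stream and a value stream, compresses each with zlib, and merges them back in transition order. The VCD file format, prediction strategies, and behavior functions of the thesis are not modeled here.

```python
import zlib

# Hypothetical value-change records: (time, signal_id, new_value).
transitions = [(0, "clk", "0"), (5, "clk", "1"), (5, "data", "x"),
               (10, "clk", "0"), (12, "data", "1"), (15, "clk", "1")]

# Separate into a time section and a value section, kept in transition order.
time_section = ",".join(str(t) for t, _, _ in transitions).encode()
value_section = ",".join(f"{s}={v}" for _, s, v in transitions).encode()

packed = (zlib.compress(time_section), zlib.compress(value_section))

# Decompression restores both sections and merges them back in order.
times = zlib.decompress(packed[0]).decode().split(",")
values = zlib.decompress(packed[1]).decode().split(",")
restored = [(int(t), *v.split("=")) for t, v in zip(times, values)]
assert restored == transitions
```
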
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Sheng-Yu, and 楊勝裕. "A Constant Rate Block Based Image Compression Scheme Using Vector Quantization and Prediction Schemes." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/wrwyp4.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
107
This thesis proposes an embedded image compression system aimed at reducing the large amount of data transmission and storage along the display link path. Embedded compression focuses on low computing complexity and low hardware resource requirements while providing a guarantee on compression performance. The algorithm proposed in this thesis is a constant-rate, block-based image compression scheme with two scheme options; both schemes are examined at the same time and the better one is chosen. In order to support the "screen partial update" function of the Android system, a block-based compression system is adopted, meaning that all blocks are compressed independently and no information from the surrounding blocks is available. The block size is set to 2x4. The compression ratio is also fixed at three to ensure a constant bandwidth requirement. In addition, a Y-Co-Cg color space is used. The major techniques employed are shape-gain vector quantization (VQ) and prediction. A 2x4 block is first converted to a 1x8 vector and encoded using pre-trained vector codebooks. By taking advantage of the correlation between color components, all color components share the same shape index to save bit budget, while each color component has its own gain index. The shape-gain VQ residuals of the worst-performing color component are further refined using two techniques, DPCM and integer DCT. DPCM achieves prediction by recording the difference between successive pixels; the integer DCT approach converts the pixel residual values from the spatial domain to the frequency domain and records only the low-frequency components for the refinement. Experimental results, however, indicate that neither technique achieves satisfactory refinement. The final scheme therefore applies shape-gain VQ to the Cg and Co components only and employs a reference prediction scheme for the Y component. In this prediction scheme, the maximum of the pixel values in the block is first determined, and all other pixel values are predicted with reference to this maximum; the reference can be either the difference from or the ratio to the maximum, and both differences and ratios are quantized using codebooks to reduce the bit requirement. The evaluation criteria for compression performance are PSNR and the maximum pixel error of the reconstructed image. The test bench includes images in various categories such as natural, portrait, engineering, and text. The compared scheme is prior art reported in the thesis entitled "A Constant Rate Block Based Image Compression Scheme for Video Display Link Applications," with the same compression specifications employed in both schemes. The experimental results show that our algorithm performs better on natural and portrait images, with a PSNR advantage of about 1-2 dB, while it performs worse on engineering images. In terms of image size, our algorithm has better performance on low-resolution images. This is because the reference predictor and shape-gain vector quantization schemes are more efficient at handling blocks consisting of sharply changing pixels.
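
A minimal sketch of the shape-gain VQ step on one flattened 2x4 block is given below; the codebooks and gain values are untrained placeholders, and the shared-shape-index handling across color components is omitted, so this illustrates the general technique rather than the thesis's encoder.

```python
import numpy as np

# Toy, untrained codebooks (placeholders): unit-norm shape vectors and scalar gains.
shape_codebook = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
                           [1, 1, 1, 1, -1, -1, -1, -1],
                           [1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
shape_codebook /= np.linalg.norm(shape_codebook, axis=1, keepdims=True)
gain_codebook = np.array([8.0, 32.0, 112.0, 220.0])

def shape_gain_encode(vec):
    """Pick the shape with the largest correlation, then the nearest gain."""
    vec = np.asarray(vec, dtype=float)
    corr = shape_codebook @ vec
    s = int(np.argmax(np.abs(corr)))
    gain = abs(corr[s])                       # projection onto the chosen shape
    g = int(np.argmin(np.abs(gain_codebook - gain)))
    sign = 1.0 if corr[s] >= 0 else -1.0
    return s, g, sign

def shape_gain_decode(s, g, sign):
    """Rebuild the vector as (sign * gain) times the chosen unit shape."""
    return sign * gain_codebook[g] * shape_codebook[s]

block_2x4 = np.array([[60, 62, 58, 61], [20, 18, 22, 19]], dtype=float)
s, g, sign = shape_gain_encode(block_2x4.ravel())
print(shape_gain_decode(s, g, sign).reshape(2, 4))
```
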
APA, Harvard, Vancouver, ISO, and other styles
29

Liao, Hua-Cheng, and 廖華政. "A Novel Data Compression Scheme for Chinese Character Patterns." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/74259223131567530391.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Automatic Control Engineering
84
This thesis proposes an efficient lossless compression scheme for Chinese character patterns. The proposed scheme analyzes the characteristics of the line-segment structures of Chinese character patterns, and a novel matching algorithm is developed for the line-segment prediction used in the encoding and decoding processes. The bit rates achieved with the proposed lossless scheme are 0.2653, 0.2448 and 0.2583 for three Chinese fonts, respectively. Because the black and white points in Chinese character patterns are highly correlated, subsampling and interpolation schemes are considered to further increase the compression ratio, and with these schemes a lower bit rate is achieved. Two types of interpolation techniques are presented for the enlargement of Chinese character patterns: 2-D interpolation and spline interpolation. Compared with the lossless compression scheme, the 2-D subsampling scheme can further reduce the bit rate by as much as 43.19%, 41.83%, and 41.61% for three widely used Chinese fonts, respectively.
APA, Harvard, Vancouver, ISO, and other styles
30

Liu, Chia Liang, and 劉家良. "Hybrid Image Compression Scheme Based on PVQ and EVQ." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/91423278776285614954.

Full text
Abstract:
Master's thesis
Da-Yeh University
Department of Computer Science and Information Engineering (Master's Program)
93
Conventional image compression using vector quantization does not consider the relation between image blocks, so there is room to improve the bit rate. In this thesis, we propose a predictive coding scheme based on a VQ algorithm, a prediction algorithm, and an error VQ (EVQ) algorithm. The prediction algorithm is used to encode smooth blocks, while VQ and EVQ are used to encode edge blocks. The scheme not only improves the quality of the decompressed image but also achieves a lower bit rate than the VQ algorithm. The experimental results show that our scheme performs better than the VQ algorithm. For example, the test image "Lena" achieves 35.02 dB of reconstructed image quality at 0.87 bpp, and 31.07 dB at 0.31 bpp, which is 0.71 dB higher than the VQ algorithm at 0.625 bpp. It is clear that the proposed PVQ-EVQ scheme not only has a high compression rate but also good reconstructed image quality. Keywords: VQ, Prediction, Hybrid image coding, PSNR
APA, Harvard, Vancouver, ISO, and other styles
31

Lin, Ang-Sheng, and 林昂賢. "A High Performance Compression Scheme for General XML Data." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/98661171696464125372.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
88
In this thesis, we propose a high-performance compression scheme for general XML data. The scheme takes advantage of the characteristics of semistructured data, text mining methods, and existing compression algorithms. To compress heterogeneous XML data, we incorporate and combine existing compressors such as zlib, the library underlying gzip, as well as a collection of datatype-specific compressors. In our scheme, we do not need schema information (such as a DTD or an XML Schema), but we can exploit such hints to further improve the compression ratio. Based on the proposed approach, we implement a compressor/decompressor and use it to test and verify our compression scheme.
APA, Harvard, Vancouver, ISO, and other styles
32

Liu, Hung-Chun, and 劉鴻鈞. "An Improved Image Coding Scheme with Less Compression Artifacts." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/04791490836438453971.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
98
JPEG is one of the most popular image formats designed to reduce bandwidth and memory requirements. A lossy compression algorithm is used in the JPEG format, meaning that some information is lost and cannot be restored after compression. When a high compression ratio is used, certain artifacts are inevitable as a result of the degradation of image quality. In this thesis, an image enhancement algorithm is proposed to reduce artifacts caused by the JPEG compression standard. We found that severe degradation mostly occurs in areas containing edges; the degradation results from the quantization step, where high-frequency components are eliminated. In order to compensate for this kind of information loss, the proposed edge block detection method extracts edge blocks and categorizes them into several types of edge models in the DCT (Discrete Cosine Transform) domain. Then, according to the type of edge model, pre-defined DCT coefficients are added back to the edge block. The experimental results demonstrate that the proposed method provides better performance in terms of sharpness compared to JPEG.
APA, Harvard, Vancouver, ISO, and other styles
33

LIN, JIAN-FU, and 林建福. "A Chinese syllables compression scheme based upon perfect hashing." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/29156709083274114521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Kung-Han, and 陳功瀚. "An Efficient Test Data Compression Scheme Using Selection Expansion." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/61101774897607876166.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Electrical Engineering (Master's Program)
100
Because a MISR (multiple-input shift register) can reuse one ATE data word over many clock cycles, we use this characteristic to let a single data word run for many cycles in the MISR and generate a large number of test patterns. The MISR is the foundation of our decompressor, and Gaussian elimination is used to compute the ATE data. A selection network together with flip-flops spreads the MISR data: each flip-flop of the MISR is connected to two multiplexers of the selection network, the selection network is connected to the MISR, and the output flip-flops are connected to the selection network. The original ATE data are thus expanded by our decompressor architecture. The flip-flops retain the bits stored in them, and a flip-flop's bit is changed only when the corresponding data change; because the flip-flop bits do not change frequently, power is saved. Moreover, since one ATE data word runs for many cycles in the decompressor architecture, test time is saved as well.
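
The role of Gaussian elimination in such decompressors is to solve a linear system over GF(2) that maps the injected ATE bits to the care bits required in the scan cells; the sketch below solves a tiny GF(2) system and is only a generic illustration of that step, with the matrix and vector chosen arbitrarily.

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.
    A is a list of rows (lists of 0/1), b a list of 0/1.
    Returns one solution (free variables set to 0) or None if inconsistent."""
    rows, cols = len(A), len(A[0])
    M = [A[i][:] + [b[i]] for i in range(rows)]      # augmented matrix
    pivot_col_of_row = []
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c]), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):                        # eliminate column c elsewhere
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        pivot_col_of_row.append(c)
        r += 1
    if any(row[-1] for row in M[r:]):                # a "0 = 1" row -> no solution
        return None
    x = [0] * cols
    for i, c in enumerate(pivot_col_of_row):
        x[c] = M[i][-1]
    return x

# Tiny example: which injected bits produce the required care bits?
A = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]
b = [1, 0, 1]
print(solve_gf2(A, b))   # -> [1, 1, 0]
```
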
APA, Harvard, Vancouver, ISO, and other styles
35

Sukerkar, Amol Nishikant. "Study of joint embedding and compression on JPEG compression scheme using multiple codebook data hiding." Thesis, 2005. http://library1.njit.edu/etd/fromwebvoyage.cfm?id=njit-etd2005-015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Cheng, Sung-Wei, and 鄭松瑋. "Test Data Compression Using Scan Chain Compaction and Broadcasting Scheme." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/16319084306417623328.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
97
In this paper we propose a compression method for scan testing. Many bits in a test set can be assigned 1 or 0 without affecting the test result; these are called don't-care bits (X-bits). These don't-care bits can be used to increase the compatibility of test data in order to reduce the test data volume. In addition, we use a multiple-scan-chain structure in which the original single scan chain is partitioned into several sub-scan chains. Test data are then shifted in a broadcast manner to reduce both test application time and test data. The best case for broadcasting is shifting one sub-test pattern to all sub-scan chains, which requires the test data of these sub-scan chains to be compatible; therefore, the result of broadcasting is influenced by the number of care bits in the test set. Some test patterns cannot be produced this way, so some faults become hard to detect and the fault coverage drops. Hence, we propose a different broadcasting technique to increase efficiency: the starting sub-scan chain of the broadcast is chosen by adding one de-multiplexer. The difference from other methods is that no complicated decoder or large amount of hardware is needed, and a very good compression rate can be achieved. In addition, to further reduce test data we use a scan-tree structure and combine the two methods; the compression rate can reach up to 72%.
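
The compatibility condition that broadcasting relies on can be sketched directly; the cube symbols ('0', '1', 'X') and the merging rule below are generic illustrations, not the exact encoding or chain-selection hardware of the thesis.

```python
def compatible(cube_a, cube_b):
    """Two test cubes are compatible if no position has conflicting care bits."""
    return all(a == b or a == "X" or b == "X" for a, b in zip(cube_a, cube_b))

def merge(cube_a, cube_b):
    """Merge two compatible cubes, keeping a care bit wherever either has one."""
    return "".join(b if a == "X" else a for a, b in zip(cube_a, cube_b))

# Sub-scan-chain contents with don't-care bits (X); one broadcast word can
# serve every sub-chain whose cube is compatible with it.
sub_chains = ["1XX0", "1X10", "XXX0"]
broadcast = sub_chains[0]
for cube in sub_chains[1:]:
    if compatible(broadcast, cube):
        broadcast = merge(broadcast, cube)
print(broadcast)   # -> "1X10": a single broadcast word serves all three sub-chains
```
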
APA, Harvard, Vancouver, ISO, and other styles
37

張寶田. "An image compression technique based on segmented image coding scheme." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/66941259033552666833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Che-wei, and 林哲瑋. "A Power-aware Code-compression Scheme for RISC/VLIW Architecture." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/99333342572652783073.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
98
We studied the architecture of embedded computing systems from the memory power consumption point of view and used a selective code compression (SCC) approach to realize our design. Based on the LZW (Lempel-Ziv-Welch) compression algorithm, we propose a novel, cost-effective compression and decompression method. The goal of our study is to develop a new SCC approach with an extended decision policy based on the prediction of power consumption. Our decompression method has to be easily implemented in hardware and must work together with the processor. The decompression engine was implemented in the TSMC 0.18 μm 2P6M process with cell-based libraries. In order to calculate the power consumption of the decompression engine more accurately, we used a static analysis method to estimate the power overhead. We also used variable-sized branch blocks and considered several characteristics of VLIW processors for our compression, including the instruction-level parallelism (ILP) technique and instruction scheduling. Our code-compression methods are not limited to VLIW machines, but can be applied to other kinds of RISC architectures.
APA, Harvard, Vancouver, ISO, and other styles
39

Chang, Jin-Bang, and 張進邦. "DSP implementation of Lifting Scheme Wavelet Transform in Image Compression." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/56117301651056609954.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Engineering Science (Master's and Doctoral Program)
94
In this thesis, we discuss the theoretical background of the discrete wavelet transform (DWT) in image processing. The lifting-based discrete wavelet transform (LDWT) has been proposed to reduce the complexity of hardware implementation. An image processing board based on the TMS320C6713 DSK (Texas Instruments), which provides floating-point arithmetic and eight-way parallel processing capability, is used to implement the lifting-based DWT and its inverse and to apply them to image compression and encoding.
APA, Harvard, Vancouver, ISO, and other styles
40

Ali, Azad. "Fault-Tolerant Spatio-Temporal Compression Scheme for Wireless Sensor Networks." Phd thesis, 2017. https://tuprints.ulb.tu-darmstadt.de/5924/7/Azad-Final-Gray.pdf.

Full text
Abstract:
Wireless sensor networks are often deployed for environmental sampling and data gathering. A typical wireless sensor network consists of hundreds to thousands of battery-powered sensor nodes fitted with various sensors to sample environmental attributes, and one or more base stations, called sinks. Sensor nodes have limited computing power, memory and battery capacity. They are wirelessly interconnected and transmit the sampled data in a multi-hop fashion to the sink. The sheer number of sensor nodes and the amount of sampled data can generate an enormous volume of data to be transmitted to the sink, which can lead to network congestion, resulting in data losses and rapid battery drain. Hence, one of the main challenges is to reduce the number of transmissions, both to fit within the network bandwidth and to reduce energy consumption. One possibility for reducing the data volume would be to lower the sampling rates and shut down sensor nodes; however, this would affect the spatial and temporal data resolution. We therefore propose a compression scheme that minimizes transmissions instead of reducing the sampling. Sensor nodes are also vulnerable to external and environmental effects and, being relatively cheap, are susceptible to various hardware faults, e.g., sensor saturation or memory corruption. These factors can cause sensor nodes to malfunction or to sample erroneous data. Hence, the second major challenge in data gathering is to tolerate such faults. In this thesis we develop a spatio-temporal compression scheme that detects data redundancies both in space and time and applies data modeling techniques to compress the data, addressing the large data volume problem. The proposed scheme reduces not only the data volume but also the number of transmissions needed to transport the data to the sink, reducing the overall energy consumption. The proposed spatio-temporal compression scheme has the following major components. Temporal Data Modeling: Models are constructed from the sampled data of the sensor nodes and are transmitted to the sink instead of the raw samples. Low computing power, limited memory and battery constraints force us to avoid computationally expensive operations and to use simple models, which offer limited data compressibility (fewer samples are approximated). However, we are able to extend the compressibility in time through a model caching scheme while keeping the models simple. Hierarchical Clustering: The data sampled by the sensor nodes is often not only temporally but also spatially correlated. Hence, the sensor nodes are initially grouped into 1-hop clusters based on the sampled data. Only a single model is constructed per cluster, essentially reducing the sampled data of all the sensor nodes in the cluster to a single data model. We also observed through experiments that data correlations often extend beyond 1-hop clusters. We therefore devised a hierarchical clustering scheme that uses the model of one 1-hop cluster to also approximate the sampled data in neighboring clusters; all 1-hop clusters approximated by a given model are grouped into a larger cluster. The devised scheme determines which clusters construct the data models, the dissemination of each model to the neighboring clusters, and finally the transmission of the data model to the sink.
Data accuracy down to the level of individual sensor nodes is maintained through outliers recorded for each sensor node, which are kept by the cluster heads of the respective 1-hop clusters and transmitted cumulatively to the sink. The proposed spatio-temporal compression scheme reduces the total data volume, is computationally inexpensive, reduces the total network traffic and hence minimizes the overall energy consumption, while maintaining data accuracy according to user requirements. This thesis also addresses the second problem in data gathering for sensor networks: faults that result in data errors. We have developed a fault-tolerance scheme that detects anomalies in the sampled data, classifies them as errors, and can often correct the resulting data errors. The proposed scheme can detect data errors arising from a range of fault classes, including sporadic and permanent faults, and is able to distinguish data patterns caused by data errors from those caused by physical events. The scheme is lightweight because it exploits the underlying mechanisms already implemented by the spatio-temporal compression scheme: the data models constructed by the compression scheme are additionally used to detect data errors and subsequently correct the erroneous samples.
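The temporal data modeling component described in this abstract can be illustrated with a minimal sketch: a node caches a simple model of its readings and transmits a new model only when a sample deviates from the cached model by more than a user-defined error bound. The constant-value model, the function names and the (index, value) packet format below are illustrative assumptions, not the thesis's actual algorithm.

```python
def compress_temporal(samples, error_bound):
    """Replace a stream of sensor readings with (start_index, model_value)
    pairs; a new model is emitted only when a reading deviates from the
    cached model by more than error_bound."""
    models = []
    cached = None
    for i, value in enumerate(samples):
        if cached is None or abs(value - cached) > error_bound:
            cached = value                 # start a new constant model
            models.append((i, cached))     # transmit this instead of raw data
    return models

def decompress_temporal(models, length):
    """Reconstruct an approximation of the original stream from the models."""
    out = []
    for (start, value), nxt in zip(models, models[1:] + [(length, None)]):
        out.extend([value] * (nxt[0] - start))
    return out

if __name__ == "__main__":
    readings = [20.1, 20.2, 20.1, 23.5, 23.6, 23.4, 23.5, 20.0]
    packed = compress_temporal(readings, error_bound=0.5)
    print(packed)                                  # [(0, 20.1), (3, 23.5), (7, 20.0)]
    print(decompress_temporal(packed, len(readings)))
```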
APA, Harvard, Vancouver, ISO, and other styles
41

Lee, Vaughan Hubert. "A new HD video compression scheme using colourisation and entropy." Thesis, 2012. http://hdl.handle.net/10210/7882.

Full text
Abstract:
M.Ing.
There is a growing demand for HD video content. That demand requires significant bandwidth, even with sophisticated compression schemes, and as it grows in popularity the bandwidth requirement will become burdensome on networks. In an effort to satisfy the demand for HD, improved compression schemes need to be investigated together with increased transmission efficiency. The purpose of this literature study is to investigate existing video compression schemes and the techniques used in software implementations, and then to build on existing work within the mature field of video compression in order to propose a new scheme, which is then tested for viability. Two algorithms were proposed as a result of the literature study. The first is an improved way to add colour to luminance images of similar scene content. The second is an encoding scheme that is adaptive to the video content: it adaptively chooses whether to encode the next several frames using well-established techniques or using a technique proposed in this work. At the time of preparing this document, and from the available literature, this second algorithm is new. An interesting compression tool was also developed during this study. It can be used to obtain a visual expectation of the achievable compression before applying the compression: a quadrant plot of the difference in image entropy between successive frames against an estimate of the mean percentage motion content between these frames. Over the duration of a scene, the spread of results reveals the extent of the potential compression gain.
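As a rough illustration of the statistics such a quadrant plot could be built from, the sketch below computes the Shannon entropy of each frame and a simple thresholded frame-difference estimate of motion content. It assumes 8-bit grayscale frames stored as NumPy arrays; the threshold and function names are illustrative, not taken from the thesis.

```python
import numpy as np

def frame_entropy(frame):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale frame."""
    hist = np.bincount(frame.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def motion_percentage(prev, curr, threshold=10):
    """Rough motion estimate: percentage of pixels whose absolute
    difference between consecutive frames exceeds a threshold."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return 100.0 * float((diff > threshold).mean())

def quadrant_points(frames):
    """Yield (entropy difference, motion %) for successive frame pairs,
    i.e. the two axes of the compression-expectation plot described above."""
    for prev, curr in zip(frames, frames[1:]):
        yield frame_entropy(curr) - frame_entropy(prev), motion_percentage(prev, curr)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
    for de, motion in quadrant_points(frames):
        print(f"entropy delta = {de:+.3f} bits, motion = {motion:.1f}%")
```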
APA, Harvard, Vancouver, ISO, and other styles
42

郭建綱. "An Image Quadtree Coding Scheme for Compression and Progressive Transmission." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/37243270916169374016.

Full text
Abstract:
Master's thesis
National Defense Management College
Graduate Institute of Resource Management
1995–96 (ROC academic year 84)
Due to its huge memory requirements and long processing time, the spatial data of images and graphics is costly for computer systems; the situation is even worse when network transmission is involved. The compression of spatial data has therefore become an important research domain of information technology, since graphical user interfaces (GUI) and multimedia are now indispensable. The quadtree is a hierarchical data structure for representing spatial data and is widely applied in computer graphics, image processing and geographic information systems. A new scheme, the Separately Bitwise Condensed Quadtree (SBCQ), is proposed in this thesis; it is an error-free (lossless) compression scheme for gray-scale and color images. The method first translates the binary codes of all image pixels into Gray code, then separates every pixel byte into two sub-byte planes, and finally codes these sub-byte planes with the SBCQ in breadth-first traversal order. The scheme is verified by empirical experiments, which demonstrate that it improves the compression ratio for gray-scale and color images. The proposed scheme can be applied to alleviate network congestion; in addition, when image data are transmitted over a network, congestion overload can be mitigated by progressive transmission of images. Furthermore, the scheme can also be applied to edge detection, and the test results demonstrate its suitability for this purpose.
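The two preprocessing steps named in the abstract, binary-to-Gray-code conversion and splitting each pixel byte into two sub-byte planes, can be sketched as follows; the subsequent breadth-first quadtree coding of the planes is omitted, and the function names are illustrative rather than taken from the thesis.

```python
def to_gray(value):
    """Convert an 8-bit binary value to its Gray-code representation."""
    return value ^ (value >> 1)

def split_planes(pixels):
    """Split each Gray-coded byte into a high and a low 4-bit sub-byte plane,
    which would then be quadtree-coded separately (not shown here)."""
    gray = [to_gray(p) for p in pixels]
    high = [g >> 4 for g in gray]      # upper nibble plane
    low = [g & 0x0F for g in gray]     # lower nibble plane
    return high, low

if __name__ == "__main__":
    row = [0, 1, 2, 3, 127, 128, 255]
    high, low = split_planes(row)
    print(high, low)
```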
APA, Harvard, Vancouver, ISO, and other styles
43

GUO, XING-HONG, and 郭星宏. "A high compression rate lossless scheme for still color images." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/63742156242313109350.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Wei-Lin, and 李威霖. "A Novel Constructive Data Compression Scheme for Low-Power Testing." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/57004877392584365459.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Electrical Engineering (Master's Program)
2008–09 (ROC academic year 97)
As very large scale integration (VLSI) design trends evolve toward system-on-a-chip (SoC) design, each chip contains several reusable intellectual property (IP) cores. In order to test the chip completely, a test set must be generated in advance and the test patterns stored in the memory of the automatic test equipment (ATE). Test data volume increases as integrated circuits (ICs) become more complex, yet the bandwidth and memory capacity of the ATE are limited, so it is difficult to transfer the huge test data from ATE memory to the SoC. Test data compression is one of the most frequently used methods to deal with this problem: it not only reduces the volume of test data but also shortens test application time. In this thesis, we present two test data compression schemes for low-power testing. In Chapter 3, a low-power strategy for a test data compression scheme with a single scan chain is presented. We propose an efficient scan chain reordering algorithm to deal with power dissipation, together with a test slice difference (TSD) technique to improve test data compression. The TSD technique is efficient and needs only one scan cell, so its hardware overhead is much lower than that of the cyclical scan registers (CSR) technique. Experimental results show that our technique achieves high compression ratios on several large ISCAS'89 benchmark circuits, and its power consumption compares favorably with other well-known compression techniques. In Chapter 4, we present a novel constructive data compression scheme that reduces both test data volume and shift-in power for multiple scan chains. In this scheme, only the changed-point information is stored in the ATE, and a "Read Selector" filters unnecessary encoded data; the decompression architecture contains buffers to hold the preceding data. We also propose a new algorithm to assign multiple scan chains and a new linear dependency computation method to find the hidden dependency between test slices. Experimental results show that the proposed scheme outperforms a previous method (selective scan slice encoding) by 57% in test data volume and 77% in power consumption on the larger ISCAS'89 benchmark circuits.
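The abstract does not spell out the test slice difference (TSD) encoding, but the underlying idea of differencing consecutive test slices can be sketched as follows: identical or nearly identical slices collapse to sparse difference vectors that an encoder can represent compactly. The XOR formulation and function name are assumptions for illustration only.

```python
def slice_difference(slices):
    """XOR each test slice with its predecessor; slices that repeat the
    previous pattern collapse to all-zero difference vectors, which an
    encoder can represent very compactly (the encoding itself is not shown)."""
    prev = [0] * len(slices[0])
    diffs = []
    for s in slices:
        diffs.append([a ^ b for a, b in zip(s, prev)])
        prev = s
    return diffs

if __name__ == "__main__":
    test_slices = [
        [1, 0, 1, 1],
        [1, 0, 1, 1],   # identical to previous -> all-zero difference
        [1, 1, 1, 1],   # only one changed bit position
    ]
    for d in slice_difference(test_slices):
        print(d)
```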
APA, Harvard, Vancouver, ISO, and other styles
45

Jiang, Jianmin, and E. A. Edirisinghe. "A hybrid scheme for low-bit rate stereo image compression." 2009. http://hdl.handle.net/10454/2717.

Full text
Abstract:
We propose a hybrid scheme to implement an object-driven, block-based algorithm that achieves low bit-rate compression of stereo image pairs. The algorithm effectively combines the simplicity and adaptability of existing block-based stereo image compression techniques with an edge/contour-based object extraction technique to determine an appropriate compression strategy for different areas of the right image. Unlike existing object-based coding such as MPEG-4, developed in the video compression community, the proposed scheme does not require any additional shape coding. Instead, an arbitrary shape is reconstructed from the matching object inside the left frame, which has been encoded by the standard JPEG algorithm and is hence available at the decoding end for the shapes in the right frame. Shape reconstruction for right-image objects incurs no distortion, owing to the unique correlation between the left and right frames of a stereo pair and the nature of the proposed hybrid scheme. Extensive experiments show that the proposed algorithm achieves significant improvements of up to 20% in compression ratio compared with the existing block-based technique, while the reconstructed image quality is maintained at a competitive level in terms of both PSNR values and visual inspection.
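Block-based stereo coding of the kind this scheme builds on predicts each block of the right image from a disparity-shifted block of the (already JPEG-coded) left image. A minimal sum-of-absolute-differences block-matching sketch is given below; the block size, search range and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def best_disparity(left, right_block, row, col, max_disp=32):
    """Find the horizontal disparity whose left-image block best predicts
    the given right-image block (sum of absolute differences)."""
    h, w = right_block.shape
    best_d, best_sad = 0, float("inf")
    for d in range(max_disp + 1):
        if col + d + w > left.shape[1]:
            break
        candidate = left[row:row + h, col + d:col + d + w]
        sad = np.abs(candidate.astype(int) - right_block.astype(int)).sum()
        if sad < best_sad:
            best_d, best_sad = d, sad
    return best_d, best_sad

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    left = rng.integers(0, 256, (64, 96), dtype=np.uint8)
    right = np.roll(left, -5, axis=1)          # synthetic 5-pixel disparity
    block = right[16:24, 40:48]
    print(best_disparity(left, block, row=16, col=40))   # -> (5, 0)
```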
APA, Harvard, Vancouver, ISO, and other styles
46

McIntosh, Ian James. "Implementation of an application specific low bit rate video compression scheme." Thesis, 2001. http://hdl.handle.net/10413/5671.

Full text
Abstract:
The trend towards digital video has created huge demands on the link bandwidth required to carry the digital stream, giving rise to growing research into video compression schemes. General video compression standards, which focus on providing the best compression for any type of video scene, have been shown to perform badly at low bit rates and thus are not often used for such applications. A suitable low bit-rate scheme would be one that achieves a reasonable degree of quality over a range of compression ratios while perhaps being limited to a small set of specific applications. One such application-specific scheme, as presented in this thesis, is to provide differentiated image quality, allowing a user-defined region of interest to be reproduced at a higher quality than the rest of the image. The thesis begins by introducing some important concepts used in video compression, followed by a survey of relevant literature on the latest developments in video compression research. A video compression scheme based on the wavelet transform and using an application-specific idea is proposed and implemented on a digital signal processor (DSP), the Philips Trimedia TM-1300. The scheme captures and compresses the video stream and transmits the compressed data via a low bit-rate serial link, to be decompressed and displayed on a video monitor. A wide range of flexibility is supported, with the ability to change various compression parameters on the fly. The compression algorithm is controlled by a PC application that displays the decompressed and original video for comparison, while reporting useful rate metrics such as the Peak Signal to Noise Ratio (PSNR). Details of implementation and practicality are discussed. The thesis then presents examples and results from both implementation and testing before concluding with suggestions for further improvement.
Thesis (M.Sc.Eng.)-University of Natal, Durban, 2001.
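The PSNR metric reported by the PC application is the standard measure PSNR = 10·log10(255²/MSE) for 8-bit images; a generic routine (not taken from the thesis) is sketched below.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal to Noise Ratio in dB between two 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.integers(0, 256, (128, 128), dtype=np.uint8)
    noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    print(f"PSNR = {psnr(img, noisy):.2f} dB")
```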
APA, Harvard, Vancouver, ISO, and other styles
47

MENG, YI-HENG, and 蒙以亨. "A new data compression scheme based upon Lempel-Ziv universal algorithm." Thesis, 1988. http://ndltd.ncl.edu.tw/handle/33662737083984646399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Liu, Jin-Min, and 劉景民. "A Chinese Text Compression Scheme Based on Large-Alphabet BW-Transform." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/13718831705317109242.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
2003–04 (ROC academic year 92)
In this thesis, a Chinese text compression scheme based on a large-alphabet Burrows-Wheeler transform (BWT) is proposed. First, an input Chinese text file is parsed with a large alphabet consisting of characters from the BIG-5 and ASCII codes. The parsed token stream is then processed by the BWT, move-to-front (MTF) coding, and arithmetic coding. To improve the speed of the proposed scheme, we have also studied several practical implementations of the BWT, MTF, and arithmetic coding under the large-alphabet parsing condition. Based on the compression scheme, an executable program has been developed. Compared with other compression programs, namely WinZip, WinRAR, and BZIP2, our program is shown in Chinese text compression experiments to achieve better compression rates, with improvements of 12.9%, 4.7%, and 1.7%, respectively.
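A minimal sketch of the BWT-plus-MTF stages of such a pipeline over a token sequence is given below, using a naive rotation sort for the BWT and omitting the arithmetic coding stage; the thesis's large-alphabet implementation is necessarily more elaborate, and the sentinel handling here is an illustrative assumption.

```python
def bwt(tokens, sentinel="\0"):
    """Burrows-Wheeler transform over a list of tokens (naive rotation sort)."""
    seq = list(tokens) + [sentinel]                        # unique end marker
    rotations = sorted(seq[i:] + seq[:i] for i in range(len(seq)))
    return [rot[-1] for rot in rotations]

def move_to_front(tokens, alphabet):
    """Replace each token by its current index in a self-organising list, so
    runs of similar tokens (as produced by the BWT) become small integers."""
    table = list(alphabet)
    out = []
    for t in tokens:
        idx = table.index(t)
        out.append(idx)
        table.insert(0, table.pop(idx))
    return out

if __name__ == "__main__":
    text = list("banana")                  # stand-in for BIG-5/ASCII tokens
    transformed = bwt(text)
    alphabet = sorted(set(transformed))
    print(transformed)                     # ['a', 'n', 'n', 'b', '\0', 'a', 'a']
    print(move_to_front(transformed, alphabet))
```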
APA, Harvard, Vancouver, ISO, and other styles
49

Huang, Jian-Jhih, and 黃建智. "An efficient and effective image compression technique based on segmentation scheme." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/31808924591499938173.

Full text
Abstract:
Master's thesis
National Pingtung University of Science and Technology
Department of Management Information Systems
2012–13 (ROC academic year 101)
A large volume of multimedia data is now transmitted over networks, yet transmission speed and storage space are subject to several limitations, so an efficient and effective image compression technique for multimedia transmission over networks is required. Several image compression techniques have been presented in the past, but their quality and execution time still need to be improved. This thesis therefore proposes a new effective and efficient image compression method to overcome these limitations. The method involves several critical steps: selecting the colors in use, segmenting the colors into fixed-size blocks, computing the average color of each segmented block, dividing the average colors into eight blocks, assigning a codebook size to each of the eight blocks according to its weight, and allocating the codebook size to the block containing the maximum number of colors. In simulations and comparisons with different data clustering methods, two measures are used: time cost and PSNR. The results show that the proposed algorithm outperforms several well-known image compression approaches.
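Of the steps listed above, the weighted codebook allocation is concrete enough to sketch; the snippet below splits a total codebook budget across blocks in proportion to their weights, with the number of distinct colours per block used as the weight. Both the weighting choice and the function name are assumptions for illustration, not the thesis's exact procedure.

```python
def allocate_codebook(block_color_counts, total_codebook_size):
    """Split a total codebook budget across blocks in proportion to their
    weights (here: distinct colours per block), giving any rounding
    remainder to the block with the most colours."""
    total_weight = sum(block_color_counts)
    sizes = [total_codebook_size * w // total_weight for w in block_color_counts]
    remainder = total_codebook_size - sum(sizes)
    sizes[block_color_counts.index(max(block_color_counts))] += remainder
    return sizes

if __name__ == "__main__":
    distinct_colors = [120, 40, 300, 75, 60, 15, 220, 90]   # eight blocks
    print(allocate_codebook(distinct_colors, total_codebook_size=256))
```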
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Yu-Ming, and 黃昱銘. "A Constant Rate Block Based Image Compression Scheme and Its Applications." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/42049460879138072754.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
2015–16 (ROC academic year 104)
The display resolutions of 3C products are growing ever larger, resulting in a huge increase in the demand for display data transmission bandwidth. This not only requires more hardware resources but also increases power consumption significantly. As a result, approaches that alleviate the display transmission bandwidth without perceivable image quality loss are key to tackling the problem. Compared with compression schemes that are highly efficient but computationally complex, such as H.264, the compression schemes addressed here focus on low computational complexity and real-time processing. Since these compression schemes are often embedded functions tailored to specific systems, they are also termed embedded compression. In this thesis, we investigate embedded video compression with a constant compression rate to guarantee compliance with data bandwidth constraints. The proposed embedded compression scheme features an ensemble of compression techniques, each targeting a different subset of image blocks with a certain texture property, for efficient compression. All compression techniques are evaluated concurrently and the best one is selected. The proposed system also supports the partial update feature adopted in Android 5.0 and can largely reduce the data transmission bandwidth when only a small portion of the image is updated. To implement this feature, compression is performed on a per-block basis and all blocks are compressed independently, without using information from adjacent blocks. This, however, poses significant challenges to the prediction accuracy of pixel values and the flexibility of bit allocation in coding, both of which are crucial to compression efficiency. To lower the line buffer storage requirement, an image block of 2×4 pixels is chosen as the basic compression unit. An integer color space transform, from RGB to YCgCo, is first applied to de-correlate the color components. After this pre-processing step, each color component is processed independently. The compression techniques employed in the proposed system include common value extraction coding, distinct value coding, interpolation-based coding, modified block truncation coding, and vector quantization coding. Among them, common value extraction coding, distinct value coding, and interpolation-based coding were developed specifically for the proposed system. Modified block truncation coding is derived from an existing method but adds an integer-DCT-based refinement process. All these techniques aim at exploiting data correlation in the spatial domain to facilitate efficient prediction and coding. Vector quantization treats the 2×4 block as a vector of 8 tuples and codes the block as a whole by finding the best match in a pre-defined codebook. To enhance compression efficiency, refinement processes performed in the frequency domain are further applied to the results of interpolation-based coding, modified block truncation coding and vector quantization. The compression ratio of the proposed system is fixed, and the best PSNR values are then sought: the compression ratio of the luminance component is 2 and those of the two chrominance components are 4, leading to an overall constant compression rate of 3. The compression efficiency of the proposed system is evaluated on a set of test images.
These images are captured from various scenarios such as user interfaces, engineering patterns, text, gaming, video playback and nature scenes, and they feature different resolutions, texture complexities and contrasts. The PSNR values of the respective color components, as well as of the entire image, are calculated and compared with the results achieved by the JPEG standard, which uses an 8×8 block as its basic coding unit. The results show that the proposed scheme outperforms JPEG mostly on artificial images containing text or engineering patterns, whereas JPEG achieves better results on nature scenes and more complex images, mainly owing to the inherent advantage of its larger coding block. However, JPEG, due to its complexity, is not considered an embedded compression scheme, nor can it support constant-rate compression. A subjective test based on visual inspection was also conducted, and the distortions introduced by the proposed scheme are barely noticeable.
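The integer RGB-to-YCgCo de-correlation step mentioned in the abstract is commonly realised with the reversible YCoCg-R lifting transform sketched below; the thesis's exact variant may differ, so this is an assumption for illustration.

```python
def rgb_to_ycocg_r(r, g, b):
    """Lossless (reversible) integer RGB -> YCoCg-R lifting transform."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Exact inverse of the lifting transform above."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

if __name__ == "__main__":
    for rgb in [(255, 0, 0), (12, 200, 34), (128, 128, 128)]:
        ycocg = rgb_to_ycocg_r(*rgb)
        assert ycocg_r_to_rgb(*ycocg) == rgb      # round trip is lossless
    print("round-trip OK")
```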
APA, Harvard, Vancouver, ISO, and other styles