Dissertations / Theses on the topic 'Compression scheme'
Consult the top 50 dissertations / theses for your research on the topic 'Compression scheme.'
Lim, Seng. "Image compression scheme for network transmission." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA294959.
Li, Yun, Mårten Sjöström, Ulf Jennehag, Roger Olsson, and Sylvain Tourancheau. "Subjective Evaluation of an Edge-based Depth Image Compression Scheme." Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18539.
Mensmann, Jörg, Timo Ropinski, and Klaus Hinrichs. "A GPU-Supported Lossless Compression Scheme for Rendering Time-Varying Volume Data." University of Münster, Germany, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92867.
Bernat, Andrew. "Which partition scheme for what image?, partitioned iterated function systems for fractal image compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ65602.pdf.
Kadri, Imen. "Controlled estimation algorithms of disparity map using a compensation compression scheme for stereoscopic image coding." Thesis, Paris 13, 2020. http://www.theses.fr/2020PA131002.
Full textNowadays, 3D technology is of ever growing demand because stereoscopic imagingcreate an immersion sensation. However, the price of this realistic representation is thedoubling of information needed for storage or transmission purpose compared to 2Dimage because a stereoscopic pair results from the generation of two views of the samescene. This thesis focused on stereoscopic image coding and in particular improving thedisparity map estimation when using the Disparity Compensated Compression (DCC)scheme.Classically, when using Block Matching algorithm with the DCC, a disparity mapis estimated between the left image and the right one. A predicted image is thencomputed.The difference between the original right view and its prediction is called theresidual error. This latter, after encoding and decoding, is injected to reconstruct theright view by compensation (i.e. refinement) . Our first developed algorithm takes intoaccount this refinement to estimate the disparity map. This gives a proof of conceptshowing that selecting disparity according to the compensated image instead of thepredicted one is more efficient. But this done at the expense of an increased numericalcomplexity. To deal with this shortcoming, a simplified modelling of how the JPEGcoder, exploiting the quantization of the DCT components, used for the residual erroryields with the compensation is proposed. In the last part, to select the disparity mapminimizing a joint bitrate-distortion metric is proposed. It is based on the bitrateneeded for encoding the disparity map and the distortion of the predicted view.This isby combining two existing stereoscopic image coding algorithms
Philibert, Manon. "Cubes partiels : complétion, compression, plongement." Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0403.
Partial cubes (aka isometric subgraphs of hypercubes) are a fundamental class in metric graph theory. They comprise many important graph classes (trees, median graphs, tope graphs of complexes of oriented matroids, etc.) arising from different areas of research such as discrete geometry, combinatorics, and geometric group theory. First, we investigate the structure of partial cubes of VC-dimension 2. We show that these graphs can be obtained via amalgams from even cycles and full subdivisions of complete graphs. This decomposition allows us to obtain various characterizations. In particular, any partial cube of VC-dimension 2 can be completed to an ample partial cube of VC-dimension 2. Then, we show that the tope graphs of oriented matroids and complexes of uniform oriented matroids can also be completed to ample partial cubes of the same VC-dimension. Using a result of Moran and Warmuth, we establish that these classes satisfy the conjecture of Floyd and Warmuth, one of the oldest open problems in computational machine learning. In particular, they admit (improper labeled) compression schemes of size equal to their VC-dimension. Next, we describe a proper labeled compression scheme of size d for complexes of oriented matroids of VC-dimension d, generalizing the result of Moran and Warmuth for ample sets. Finally, we give a characterization, via excluded pc-minors and via forbidden isometric subgraphs, of partial cubes isometrically embedded into the grid \mathbb{Z}^2 and the cylinder P_n \square C_{2k} for some n and k > 4.
Samuel, Sindhu. "Digital rights management (DRM) : watermark encoding scheme for JPEG images." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-09122008-182920/.
Bekkouche, Hocine. "Synthèse de bancs de filtres adaptés, application à la compression des images." Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00345288.
Ali, Azad [Verfasser], Neeraj [Akademischer Betreuer] Suri, Christian [Akademischer Betreuer] Becker, Stefan [Akademischer Betreuer] Katzenbeisser, Andy [Akademischer Betreuer] Schürr, and Marc [Akademischer Betreuer] Fischlin. "Fault-Tolerant Spatio-Temporal Compression Scheme for Wireless Sensor Networks / Azad Ali ; Neeraj Suri, Christian Becker, Stefan Katzenbeisser, Andy Schürr, Marc Fischlin." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2017. http://d-nb.info/1127225405/34.
Průša, Zdeněk. "Efektivní nástroj pro kompresi obrazu v jazyce Java." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217433.
Tohidypour, Hamid Reza. "Complexity reduction schemes for video compression." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/60250.
Full textApplied Science, Faculty of
Graduate
Fgee, El-Bahlul. "A comparison of voice compression using wavelets with other compression schemes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ39651.pdf.
Han, Bin. "Subdivision schemes, biorthogonal wavelets and image compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0013/NQ34774.pdf.
Fong, Wai-ching (方惠靑). "Perceptual models and coding schemes for image compression." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31235785.
Fong, Wai-ching. "Perceptual models and coding schemes for image compression." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18716064.
Dinakenyane, Otlhapile. "SIQXC : Schema Independent Queryable XML Compression for smartphones." Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/7184/.
Kalajdzievski, Damjan. "Measurability Aspects of the Compactness Theorem for Sample Compression Schemes." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23133.
Kovvuri, Prem. "Investigation of Different Video Compression Schemes Using Neural Networks." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/320.
Masupe, Shedden. "Low power VLSI implementation schemes for DCT-based image compression." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/12604.
Solé Rojals, Joel. "Optimization and generalization of lifting schemes: application to lossless image compression." Doctoral thesis, Universitat Politècnica de Catalunya, 2006. http://hdl.handle.net/10803/6897.
Full textWavelet analysis perform multi-resolution decompositions that decorrelate signal and separate information in useful frequency-bands, allowing flexible post-coding. In JPEG2000, decomposition is computed through the lifting scheme, the so-called second generation wavelets. This fact has focused the community interest on this tool. Many works have been recently proposed in which lifting is modified, improved, or included in a complete image coding algorithm.
This Ph.D. dissertation follows that line of research. Lifting is analyzed, proposals are made within the scheme, and their possibilities are explored. Image compression is the main objective, and it is principally assessed by coding the transformed signal with the EBCOT and SPIHT coders. Starting from this context, the work diverges into two distinct paths, the linear one and the nonlinear one.
The linear lifting filter construction is based on the idea of quadratic interpolation and the underlying linear restriction due to the wavelet transform coefficients. The result is a flexible framework that allows the creation of new transforms using different criteria and that may adapt to the image statistics.
The nonlinear part is founded on the adaptive lifting scheme, which is extensively analyzed; as a consequence, a generalization of lifting is proposed. The discrete version of the generalized lifting is developed, leading to filters that achieve good compression results, especially for biomedical and remote sensing images.
This thesis addresses the problem of multi-resolution decomposition, a key topic in signal processing that in recent years has led to the creation of the outstanding JPEG2000 image compression standard. JPEG2000 incorporates a number of very interesting functionalities thanks, essentially, to the discrete wavelet transform and the EBCOT entropy coder.
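Since lifting is the central tool of this entry, a small worked example may help. Below is a minimal sketch of the LeGall 5/3 integer wavelet via lifting, the transform used in JPEG2000 lossless coding (a generic illustration, not code from the thesis; it assumes an even-length signal and uses periodic extension for brevity, where JPEG2000 uses symmetric extension):

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the LeGall 5/3 integer wavelet transform via lifting.

    Splits the signal into even/odd samples, then applies a predict step
    (odd samples predicted from even neighbours) and an update step.
    Integer arithmetic makes the transform exactly reversible.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd - floor((left + right) / 2)
    odd -= (even + np.roll(even, -1)) >> 1
    # Update: approx = even + floor((left_detail + right_detail + 2) / 4)
    even += (odd + np.roll(odd, 1) + 2) >> 2
    return even, odd  # approximation and detail coefficients

def lifting_53_inverse(even, odd):
    """Exactly invert the forward transform by undoing the steps in reverse."""
    even = even - ((odd + np.roll(odd, 1) + 2) >> 2)
    odd = odd + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```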
Aburas, Abdul Razag Ali. "Data compression schemes for pattern recognition in digital images using fractals." Thesis, De Montfort University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391231.
Mo, Ching-Yuan, and 莫清原. "Image Compression Technique Using Fast Divisive Scheme." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/07222792958381290250.
Full text國立屏東科技大學
資訊管理系所
99
In the field of image compression, LBG is considered an important technique in vector quantization (VQ), as it is a fast and easily understood method with a simple construction whose compression quality after training is acceptable. Another divisive algorithm is the Cell Divisive Algorithm. Its construction is even simpler, since it removes LBG's convergence check and simply adds or subtracts a vector's value after dividing one cell into two; the whole procedure is faster, but the resulting PSNR is not satisfactory. This thesis proposes a faster divisive vector algorithm based on the cell divisive algorithm. To avoid the poor compression quality that results from the lack of optimized training, the LBG method is also applied in the algorithm to improve the PSNR. The goal is an easily constructed, fast compression algorithm with well-trained quality.
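For context, a minimal LBG-style training loop, splitting codewords and refining them with Lloyd iterations (a generic sketch, not the thesis implementation; it assumes the codebook size is a power of two):

```python
import numpy as np

def lbg_train(vectors, codebook_size, iters=20, eps=1e-3):
    """Train a VQ codebook by repeated splitting and Lloyd iterations (LBG).

    vectors: (N, d) array of training vectors (e.g. flattened image blocks).
    """
    codebook = vectors.mean(axis=0, keepdims=True)  # start from the centroid
    while len(codebook) < codebook_size:
        # Split every codeword into a perturbed pair, then refine.
        codebook = np.concatenate([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # Nearest-codeword assignment.
            d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d2.argmin(axis=1)
            # Recompute each codeword as the centroid of its cell.
            for k in range(len(codebook)):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```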
Minz, Manoranjan. "Efficient Image Compression Scheme for Still Images." Thesis, 2014. http://ethesis.nitrkl.ac.in/6306/1/110EC0172-2.pdf.
Shiu, Pei-Feng, and 徐培峯. "A DCT based watermarking scheme surviving JPEG compression." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/02411822415288436323.
Full text靜宜大學
資訊工程學系
99
In this thesis, a DCT-based watermarking technique is proposed. The scheme is designed to increase the robustness of the hidden watermark so that it can withstand JPEG compression attacks. To achieve this objective, the proposed scheme embeds the watermark into the DCT coefficients. To preserve the visual quality of watermarked images, only low-frequency DCT coefficients are selected to carry the hidden watermark, using the concept of the mathematical remainder. To enhance the robustness of the watermark, a voting mechanism is applied in the scheme. Experimental results confirm that the robustness of the hidden watermark against JPEG compression is better with the proposed scheme than with Lin et al.'s scheme and Patra et al.'s scheme.
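Embedding by remainder is essentially quantization index modulation. A minimal sketch of hiding one bit in a low-frequency DCT coefficient this way (an illustration of the general idea; the step size and the exact rule are assumptions, not the authors' scheme):

```python
def embed_bit(coeff, bit, step=16.0):
    """Force the coefficient's remainder modulo `step` to encode `bit`.

    Quantize to the nearest lattice point whose offset within a step is
    step/4 for bit 0 and 3*step/4 for bit 1.
    """
    offset = step / 4 if bit == 0 else 3 * step / 4
    k = round((coeff - offset) / step)
    return k * step + offset

def extract_bit(coeff, step=16.0):
    """Decide the bit from the remainder, tolerating small perturbations."""
    r = coeff % step
    return 0 if abs(r - step / 4) < abs(r - 3 * step / 4) else 1
```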
Tseng, Wei-Rong, and 曾緯榮. "Digital Image Watermarking Based on Fractal Compression Scheme." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/22085167290982840415.
Full text國立高雄第一科技大學
電腦與通訊工程系
89
At the end of the 20th century, the digital revolution brought boundless convenience to people. The Internet has taken the place of conventional media and has become the most efficient medium, and creations reach people in rich and colorful form through the Internet and multimedia. The protection of intellectual property is therefore very important: creators and copyright owners do not want their work to be copied. Digital watermarking technology has received much attention as a way to protect the rights of every lawful owner, and more and more investigators are devoted to the study of this issue. In this thesis, our algorithm is a new approach based on the fractal compression scheme. Previous techniques embed the watermark into the position parameters and rotation type of the fractal code during the search procedure of fractal compression. We instead adjust the gray-mapping transform parameters by way of a sub-optimal least-squares approach in order to embed the watermark into the fractal code. Compared with previous techniques, the algorithm gives better results in resisting JPEG compression, and it can also resist overlapping, daubing, noise, and various other attacks.
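In fractal coding, each range block R is approximated from a decimated domain block D by a gray-level map s·D + o, with s and o obtained by least squares. A small sketch of that fit (standard fractal-coding math; how the watermark perturbs s is left as an assumption):

```python
import numpy as np

def gray_mapping(domain, range_block):
    """Least-squares contrast s and brightness o with s*domain + o ≈ range."""
    d = domain.ravel().astype(float)
    r = range_block.ravel().astype(float)
    n = d.size
    denom = n * (d * d).sum() - d.sum() ** 2
    s = (n * (d * r).sum() - d.sum() * r.sum()) / denom if denom else 0.0
    o = (r.sum() - s * d.sum()) / n
    return s, o
```

A watermark bit could then, for instance, snap s into an even- or odd-indexed quantization bin; that embedding rule is hypothetical, given here only to show where the adjustment would enter.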
Kuo, Ta-Jung, and 郭大榮. "A Hybrid Coding Scheme for Color Image Compression." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/18416180146372995425.
Full text國立高雄第一科技大學
電腦與通訊工程所
93
Two hybrid image-coding schemes are proposed that combine the advantages of four coding schemes: BTC, VQ, DCT coding, and predictive coding. Experiments show that the hybrid schemes obtain a high compression ratio, reduced VQ codebook search time, and competitive image quality. BTC has low computational complexity and preserves edges; DCT gives high image quality and a high compression ratio; VQ gives a high compression ratio with moderate fidelity. The input image is first coded with BTC to generate a bit map and both high-mean and low-mean sub-images. The high-mean and low-mean sub-images are encoded with the DCT coding scheme. Predictive coding is used to reduce the required bit rate, because neighboring blocks in a highly correlated image tend to be similar. The bit map generated by BTC is encoded by VQ together with a block predictive coding scheme. Using block predictive coding on the bit map not only reduces the bit rate but also skips the codebook search for about 25% of the blocks compared with using VQ alone, since those blocks are represented by a simple two-bit indicator. For color images, a common codebook is used to encode the three color-component planes, reducing codebook storage without noticeable degradation in image quality.
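A minimal sketch of the basic BTC step that the hybrid schemes build on, producing the bit map plus the high and low means (generic BTC, not the thesis code):

```python
import numpy as np

def btc_block(block):
    """Classic block truncation coding of one image block.

    Returns a binary bit map plus the means of the pixels above and
    below the block mean; the decoder rebuilds the block from these.
    """
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, high, low

def btc_decode(bitmap, high, low):
    """Reconstruct the block from the bit map and the two means."""
    return np.where(bitmap, high, low)
```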
Cheng, Chao-hsun, and 鄭昭勳. "Compression Scheme for waveform of Hardware Design Verification." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/27107029715915719378.
Full text國立臺灣大學
資訊工程學研究所
89
In VLSI circuit design, functional verification has become an important part of the flow due to the rapid growth of circuit functionality in many consumer and industrial products. During circuit simulation, a very large trace file is written, containing value changes for every node in the design or in a subsystem within it. Upon completion, the designer can review the entire simulation history through a waveform tool to carry out the verification tasks. Verilog's VCD file format has created a competitive market for waveform tools, which has greatly improved their quality. However, waveform data in the VCD format is very large. Common compression algorithms can decrease the file size of a waveform database, but they also consume a significant amount of computation power. Here we adopt a new idea that makes use of the properties of waveform data: by exploiting the HDL source code at compile time, we find hints to guide the compression. A file, named the signal dependency file, is created to hold signal dependency rules derived from the source code of the circuit design. "Time-value separation" is the key idea of our compression technique: all signal transitions are separated, in transition order, into a time section and a value section, and compression is applied to each section separately. On decompression, the two parts are restored and merged in the original transition order. In the time section, our approach is based on prediction strategies; in the value section, the main idea is to replace a signal's value by the corresponding behavior function of that signal. In our experiments, the compressed database is roughly 20% to 50% of the size of the original VCD waveform database.
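A minimal sketch of the time-value separation idea on a list of VCD-style transitions (a simplified illustration, not the thesis toolchain; delta-encoded times stand in for the prediction strategies):

```python
from typing import List, Tuple

def split_transitions(transitions: List[Tuple[int, str, str]]):
    """Split (time, signal, value) transitions into separate streams.

    Times are delta-encoded (small, predictable integers compress well);
    signals and values form their own streams to be coded independently.
    """
    times, signals, values = [], [], []
    prev_t = 0
    for t, sig, val in transitions:
        times.append(t - prev_t)  # delta time, usually small
        prev_t = t
        signals.append(sig)
        values.append(val)
    return times, signals, values

def merge_transitions(times, signals, values):
    """Invert split_transitions: rebuild absolute times and re-zip."""
    out, t = [], 0
    for dt, sig, val in zip(times, signals, values):
        t += dt
        out.append((t, sig, val))
    return out
```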
Yang, Sheng-Yu, and 楊勝裕. "A Constant Rate Block Based Image Compression Scheme Using Vector Quantization and Prediction Schemes." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/wrwyp4.
Full text國立中興大學
電機工程學系所
107
This thesis proposes an embedded image compression system aimed at reducing the large amount of data transmission and storage along the display link path. Embedded compression focuses on low computing complexity and low hardware resource requirements while providing guaranteed compression performance. The algorithm proposed in this thesis is a constant-rate, block-based image compression scheme with two scheme options; both schemes are examined at the same time and the better one is chosen. In order to support the "screen partial update" function of the Android system, a block-based compression system is adopted, meaning that all blocks are compressed independently with no information from surrounding blocks available. The block size is set to 2x4, and the compression ratio is fixed at three to ensure a constant bandwidth requirement. In addition, a Y-Co-Cg color space is used. The major techniques employed are shape-gain vector quantization (VQ) and prediction. A 2x4 block is first converted to a 1x8 vector and encoded using pre-trained vector codebooks. By taking advantage of the correlation between color components, all color components share the same index in shape coding to save bit budget, while each color component has its own gain index. The shape-gain VQ residuals of the worst-performing color component are further refined using two techniques, DPCM and integer DCT. DPCM achieves prediction by recording the difference between successive pixels; the integer DCT approach converts the pixel residuals from the spatial domain to the frequency domain and records only the low-frequency components for the refinement. Experimental results, however, indicate that neither technique achieves satisfactory refinement. The final scheme applies shape-gain VQ to the Cg and Co components only and employs a reference prediction scheme for the Y component. In this prediction scheme, the maximum of the pixel values in the block is first determined, and all other pixel values are predicted with reference to the maximum; the reference can be either the difference from or the ratio to the maximum, and both differences and ratios are quantized using codebooks to reduce the bit requirement. The evaluation criteria for compression performance are PSNR and the maximum pixel error of the reconstructed image. The test bench includes images in various categories such as natural, portrait, engineering, and text. The compared scheme is a prior art reported in the thesis entitled "A Constant Rate Block Based Image Compression Scheme for Video Display Link Applications," with the same compression specifications employed in both schemes. The experimental results show that our algorithm performs better on natural and portrait images, with a PSNR advantage of about 1–2 dB, but worse on engineering images. In terms of image size, our algorithm performs better on low-resolution images, because the reference predictor and shape-gain vector quantization are more efficient in handling blocks consisting of sharply changing pixels.
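A minimal sketch of shape-gain VQ encoding of one block vector (generic; both codebooks are assumed given, and the shared shape index across color components described above is not modeled):

```python
import numpy as np

def shape_gain_encode(vec, shape_codebook, gain_codebook):
    """Encode vec as (shape index, gain index), so vec ≈ gain * shape.

    shape_codebook: (K, d) array of unit-norm vectors.
    gain_codebook: (G,) array of scalar magnitudes.
    """
    norm = np.linalg.norm(vec)
    shape = vec / norm if norm else np.zeros_like(vec)
    s_idx = int(np.argmax(shape_codebook @ shape))        # max correlation
    g_idx = int(np.argmin(np.abs(gain_codebook - norm)))  # nearest gain
    return s_idx, g_idx

def shape_gain_decode(s_idx, g_idx, shape_codebook, gain_codebook):
    """Reconstruct the vector from its two indices."""
    return gain_codebook[g_idx] * shape_codebook[s_idx]
```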
Liao, Hua-Cheng, and 廖華政. "A Novel Data Compression Scheme for Chinese Character Patterns." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/74259223131567530391.
Full text逢甲大學
自動控制工程研究所
84
This thesis proposes an efficient lossless compression scheme for Chinese character patterns. The proposed scheme analyzes the characteristic line-segment structures of Chinese character patterns, and a novel matching algorithm is developed for the line-segment prediction used in the encoding and decoding processes. The bit rates achieved with the proposed lossless scheme are 0.2653, 0.2448, and 0.2583 for three Chinese fonts, respectively. Because the black and white points in Chinese character patterns are highly correlated, subsampling and interpolation schemes are considered to further increase the compression ratio; with these schemes, a low bit rate is achieved. Two types of interpolation techniques are presented for the enlargement of Chinese character patterns: 2-D interpolation and spline interpolation. Compared with the lossless compression scheme, the 2-D subsampling scheme further reduces the bit rate by as much as 43.19%, 41.83%, and 41.61% for three widely used Chinese fonts, respectively.
Liu, Chia Liang, and 劉家良. "Hybrid Image Compression Scheme Based on PVQ and EVQ." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/91423278776285614954.
Full text大葉大學
資訊工程學系碩士班
93
Image compression with vector quantization does not consider the relation between image blocks, so the bit rate can be improved. In this thesis, we propose a predictive coding scheme based on a VQ algorithm, a prediction algorithm, and an error-VQ algorithm. The prediction algorithm is used to encode smooth blocks, while VQ and EVQ are used to encode edge blocks. The scheme not only improves the quality of the decompressed image but also achieves a lower bit rate than the VQ algorithm. Experimental results show that our scheme performs better than the VQ algorithm: the test image "Lena" achieves 35.02 dB of reconstructed image quality at 0.87 bpp, and 31.07 dB at 0.31 bpp, which is 0.71 dB higher than the VQ algorithm at 0.625 bpp. It is evident that the proposed PVQ-EVQ scheme offers not only a high compression rate but also good reconstructed image quality. Keywords: VQ, prediction, hybrid image coding, PSNR
Lin, Ang-Sheng, and 林昂賢. "A High Performance Compression Scheme for General XML Data." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/98661171696464125372.
Full text國立臺灣大學
資訊工程學研究所
88
In this thesis, we propose a high-performance compression scheme for general XML data. The scheme takes advantage of the characteristics of semistructured data, text mining methods, and existing compression algorithms. To compress heterogeneous XML data, we incorporate and combine existing compressors such as zlib, the library behind gzip, as well as a collection of datatype-specific compressors. Our scheme does not need schema information (such as a DTD or an XML Schema), but it can exploit such hints to further improve the compression ratio. Based on the proposed approach, we implement a compressor/decompressor and use it to test and verify our compression scheme.
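A minimal sketch of the stream-splitting idea: separate markup from character data and hand each stream to zlib (a generic illustration in the spirit of such XML compressors, not the thesis code; it assumes well-formed markup with no '>' inside attribute values and no NUL bytes in text):

```python
import re
import zlib

def compress_xml(xml_text: str) -> bytes:
    """Split markup and text content into two streams, compress each.

    Grouping similar data together usually helps the general-purpose
    back end find longer repetitions than it would in the mixed file.
    """
    tags = "".join(re.findall(r"<[^>]*>", xml_text))
    text = re.sub(r"<[^>]*>", "\x00", xml_text)  # placeholders keep positions
    blob_tags = zlib.compress(tags.encode("utf-8"), 9)
    blob_text = zlib.compress(text.encode("utf-8"), 9)
    header = len(blob_tags).to_bytes(4, "big")
    return header + blob_tags + blob_text

def decompress_xml(blob: bytes) -> str:
    """Re-interleave the two streams: each placeholder takes the next tag."""
    n = int.from_bytes(blob[:4], "big")
    tags = zlib.decompress(blob[4:4 + n]).decode("utf-8")
    text = zlib.decompress(blob[4 + n:]).decode("utf-8")
    out, it = [], iter(re.findall(r"<[^>]*>", tags))
    for ch in text:
        out.append(next(it) if ch == "\x00" else ch)
    return "".join(out)
```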
Liu, Hung-Chun, and 劉鴻鈞. "An Improved Image Coding Scheme with Less Compression Artifacts." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/04791490836438453971.
Full text臺灣大學
資訊工程學研究所
98
JPEG is one of the most popular formats designed to reduce bandwidth and memory space. JPEG uses a lossy compression algorithm, meaning that some information is lost and cannot be restored after compression. At high compression ratios, certain artifacts are inevitable as the image quality degrades. In this thesis, an image enhancement algorithm is proposed to reduce the artifacts caused by the JPEG compression standard. We found that severe degradation mostly occurs in areas containing edges; the degradation results from the quantization step, where high-frequency components are eliminated. To compensate for this kind of information loss, the proposed edge-block detection method extracts edge blocks and categorizes them into several types of edge models in the DCT (Discrete Cosine Transform) domain. Then, according to the type of edge model, pre-defined DCT coefficients are added back to the edge block. Experimental results demonstrate that the proposed method provides better sharpness than standard JPEG decoding.
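A minimal sketch of flagging edge blocks by their high-frequency DCT energy (a simplified stand-in for the thesis' edge models; the 4x4 low-frequency corner and the threshold are arbitrary assumptions):

```python
import numpy as np
from scipy.fftpack import dct

def is_edge_block(block, threshold=500.0):
    """Classify an 8x8 block as an edge block in the DCT domain.

    Computes the 2-D DCT and compares the energy outside the low-frequency
    4x4 corner against a threshold.
    """
    c = dct(dct(block.astype(float), axis=0, norm="ortho"),
            axis=1, norm="ortho")
    total = (c ** 2).sum() - c[0, 0] ** 2   # ignore the DC term
    low = (c[:4, :4] ** 2).sum() - c[0, 0] ** 2
    return (total - low) > threshold
```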
LIN, JIAN-FU, and 林建福. "A Chinese syllables compression scheme based upon perfect hashing." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/29156709083274114521.
Chen, Kung-Han, and 陳功瀚. "An Efficient Test Data Compression Scheme Using Selection Expansion." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/61101774897607876166.
Full text淡江大學
電機工程學系碩士班
100
A MISR (multiple-input shift register) can reuse one ATE data word over many clock cycles. We use this characteristic to let one data word run for many cycles in the MISR to generate many test patterns; the MISR is the foundation of our decompressor, and Gaussian elimination is used to compute the required ATE data. A selection network with flip-flops can spread the MISR data: each flip-flop of the MISR is connected to two multiplexers of the selection network, the selection network is connected to the MISR, and additional flip-flops are connected to the selection network. The original ATE data is thus expanded by our decompressor architecture, while the flip-flops hold the bits and change only when the data changes. Because the flip-flop bits do not change frequently, power is saved; and because one ATE data word runs for many cycles in the decompressor architecture, test time is saved as well.
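Computing the ATE word that drives a linear machine such as a MISR to desired values amounts to solving a linear system over GF(2). A minimal Gaussian-elimination sketch (generic, not the thesis tool):

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.

    A: list of rows, each a list of 0/1; b: list of 0/1.
    Returns one solution x, or None if the system is inconsistent.
    """
    rows, cols = len(A), len(A[0])
    M = [A[i][:] + [b[i]] for i in range(rows)]  # augmented matrix
    pivots = []
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c]), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    if any(row[-1] and not any(row[:-1]) for row in M[r:]):
        return None  # 0 = 1 row: no solution exists
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = M[i][-1]
    return x
```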
Sukerkar, Amol Nishikant. "Study of joint embedding and compression on JPEG compression scheme using multiple codebook data hiding." Thesis, 2005. http://library1.njit.edu/etd/fromwebvoyage.cfm?id=njit-etd2005-015.
Cheng, Sung-Wei, and 鄭松瑋. "Test Data Compression Using Scan Chain Compaction and Broadcasting Scheme." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/16319084306417623328.
Full text元智大學
資訊工程學系
97
In this thesis we propose a compression method for scan testing. A test set contains many bits that can be assigned 1 or 0 without affecting the test result; these are called don't-care bits (X bits). The don't-care bits can be used to increase the compatibility of test data in order to reduce the test data volume. In addition, we use a multiple-scan-chain structure in which the original single scan chain is partitioned into several sub-scan chains, and test data are shifted in by broadcasting, reducing both test application time and test data. The best case for broadcasting is shifting one sub-test pattern into all sub-scan chains, which requires the test data of these sub-scan chains to be compatible; the result of broadcasting is therefore influenced by the number of care bits in the test set. Some test patterns cannot be produced this way, which causes some faults to be hard to detect and lowers the fault coverage. Hence, we propose a different broadcasting technique that increases efficiency: the starting sub-scan chain of the broadcast is chosen by adding one de-multiplexer. Unlike other methods, no complicated decoder or large amount of hardware is needed, and a very good compression rate is achieved. In addition, we use a scan-tree structure and combine the two methods to further reduce test data; the compression rate reaches up to 72%.
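A minimal sketch of the don't-care compatibility test at the heart of such broadcasting schemes (generic; 'X' marks a don't-care bit):

```python
def compatible(a: str, b: str) -> bool:
    """Two slices are compatible if they agree wherever both have care bits."""
    return all(x == y or 'X' in (x, y) for x, y in zip(a, b))

def merge(a: str, b: str) -> str:
    """Merge two compatible slices, resolving X against care bits."""
    return "".join(y if x == 'X' else x for x, y in zip(a, b))

# Example: both slices can be served by one broadcast word.
assert compatible("1X0X", "110X")
assert merge("1X0X", "110X") == "110X"
```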
張寶田. "An image compression technique based on segmented image coding scheme." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/66941259033552666833.
Lin, Che-wei, and 林哲瑋. "A Power-aware Code-compression Scheme for RISC/VLIW Architecture." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/99333342572652783073.
Full text國立臺灣科技大學
電子工程系
98
We studied the architecture of embedded computing systems from the standpoint of memory power consumption and used a selective code compression (SCC) approach to realize our design. Based on the LZW (Lempel-Ziv-Welch) compression algorithm, we proposed a novel, cost-effective compression and decompression method. The goal of our study is to develop a new SCC approach with an extended decision policy based on the prediction of power consumption. Our decompression method must be easy to implement in hardware and must work together with the processor. The decompression engine was implemented in the TSMC 0.18 μm 2P6M process using cell-based libraries, and we used static analysis to estimate the power overhead of the decompression engine more accurately. We also used variable-sized branch blocks and considered several characteristics of VLIW processors in our compression, including the instruction-level parallelism (ILP) technique and instruction scheduling. Our code compression methods are not limited to VLIW machines and can be applied to other kinds of RISC architectures.
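For reference, a compact LZW compressor, the textbook algorithm the scheme builds on rather than the thesis' hardware variant:

```python
def lzw_compress(data: bytes) -> list:
    """Return the LZW code stream for `data`, growing a phrase dictionary
    seeded with all single bytes."""
    table = {bytes([i]): i for i in range(256)}
    out, w = [], b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)  # new phrase gets the next free code
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out
```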
Chang, Jin-Bang, and 張進邦. "DSP implementation of Lifting Scheme Wavelet Transform in Image Compression." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/56117301651056609954.
Full text國立成功大學
工程科學系碩博士班
94
In this thesis, we discuss the theoretical background of the discrete wavelet transform (DWT) in image processing. The lifting-based discrete wavelet transform (LDWT) has been proposed to reduce the complexity of hardware implementation. An image processing board based on the TMS320C6713 DSK (Texas Instruments), which provides floating-point arithmetic and eight parallel processing units, is designed to implement the lifting-based forward and inverse discrete wavelet transforms and to apply them to image compression and encoding.
Ali, Azad. "Fault-Tolerant Spatio-Temporal Compression Scheme for Wireless Sensor Networks." PhD thesis, 2017. https://tuprints.ulb.tu-darmstadt.de/5924/7/Azad-Final-Gray.pdf.
Lee, Vaughan Hubert. "A new HD video compression scheme using colourisation and entropy." Thesis, 2012. http://hdl.handle.net/10210/7882.
Full textThere is a growing demand for HD video content. That demand requires significant bandwidth, even with sophisticated compression schemes. As the demand increases in popularity, the bandwidth requirement will become burdensome on networks. In an effort to satisfy the demand for HD, improved compression schemes need to be investigated together with increasing efficiency of transmission. The purpose of this literature study is to investigate existing video compression schemes and techniques used in software implementations. Then to build on existing work within the mature field of video compression, in order to propose a new scheme which would then be tested for viability. Two algorithms were proposed as a result of the literature study. The first algorithm is an improved way to add colour to luminance images of similar scene content. The second algorithm is an encoding scheme that is adaptive to the video content. The proposed encoding scheme adaptively selects to encode the next several frames using well established techniques or to use a technique proposed in this work. At the time of preparing this document, and from the available literature this second proposed algorithm is new. An interesting compression tool has been developed during this study. This tool can be used to obtain a visual expectation of the achievable compression before applying the compression. The tool is a quadrant plot of the difference in image entropy between successive frames and an estimation of the mean percentage motion content between these frames. Over the duration of a scene, the spread of results reveals the extent of the potential compression gain.
郭建綱. "An Image Quadtree Coding Scheme for Compression and Progressive Transmission." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/37243270916169374016.
Full text國防管理學院
資源管理研究所
84
Because of its huge memory requirements and long processing time, the spatial data of images and graphics is costly for computer systems, and the situation is even worse when network transmission is involved. Compression of spatial data has therefore become an important research area in information technology, since graphical user interfaces (GUIs) and multimedia are now indispensable. The quadtree is a hierarchical data structure for representing spatial data, widely applied in computer graphics, image processing, and geographic information systems. A new scheme, the Separately Bitwise Condensed Quadtree (SBCQ), is proposed in this thesis; it is an error-free (lossless) compression scheme for grayscale and color images. The method first translates the binary codes of all image pixels into Gray code, then separates every byte of a pixel into two sub-byte planes; finally, these sub-byte planes are coded by the SBCQ in breadth-first traversal order. Empirical experiments demonstrate that the proposed scheme improves the compression ratio for grayscale and color images. The scheme can be applied to relieve network congestion: when image data are transmitted over a network, the load can be reduced by progressive transmission of images. Furthermore, the proposed scheme can also be applied to edge detection, and tests show its suitability for that task.
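A minimal sketch of the pre-processing steps the abstract describes, binary-to-Gray conversion and splitting each byte into two sub-byte planes (the quadtree coding itself is omitted):

```python
import numpy as np

def to_gray_code(img):
    """Convert 8-bit pixel values to their Gray-code representation,
    so neighbouring intensities differ in fewer bits."""
    img = img.astype(np.uint8)
    return img ^ (img >> 1)

def split_subbytes(gray_img):
    """Split each Gray-coded byte into high and low 4-bit planes."""
    return gray_img >> 4, gray_img & 0x0F
```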
GUO, XING-HONG, and 郭星宏. "A high compression rate lossless scheme for still color images." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/63742156242313109350.
Li, Wei-Lin, and 李威霖. "A Novel Constructive Data Compression Scheme for Low-Power Testing." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/57004877392584365459.
Full text淡江大學
電機工程學系碩士班
97
As the design trend of very-large-scale integration (VLSI) circuits evolves into system-on-a-chip (SoC) design, each chip contains several reusable intellectual property (IP) cores. In order to test a chip completely, a test set must be generated in advance and stored in the memory of the automatic test equipment (ATE). Test data volume increases as integrated circuits (ICs) become more complex, yet the bandwidth and memory capacity of the ATE are limited, so it is difficult to transmit huge test data from the ATE memory to the SoC. Test data compression is one of the most often used methods to deal with this problem; the technique not only reduces the volume of test data but also shortens test application time. In this thesis, we present two test data compression schemes for low-power testing. In Chapter 3, a low-power strategy for a test data compression scheme with a single scan chain is presented. We propose an efficient scan chain reordering algorithm to deal with the power dissipation problem, along with a test slice difference (TSD) technique to improve compression. The TSD technique is efficient and needs only one scan cell, so its hardware overhead is much lower than that of the cyclical scan chains (CSR) technique. Experimental results show that our technique achieves a high compression ratio on several large ISCAS'89 benchmark circuits, and its power consumption also compares favorably with other well-known compression techniques. In Chapter 4, we present a novel constructive data compression scheme that reduces both test data volume and shift-in power for multiple scan chains. In this scheme, we store only the changed-point information in the ATE and use a "Read Selector" to filter unnecessary encoded data; the decompression architecture contains buffers to hold the preceding data. We also propose a new algorithm to assign multiple scan chains and a new linear-dependency computation method to find the hidden dependency between test slices. Experimental results show that the proposed scheme outperforms a previous method (selective scan slice encoding) by 57% in test data volume and 77% in power consumption on the larger ISCAS'89 benchmark circuits.
Jiang, Jianmin, and E. A. Edirisinghe. "A hybrid scheme for low-bit rate stereo image compression." 2009. http://hdl.handle.net/10454/2717.
We propose a hybrid scheme implementing an object-driven, block-based algorithm to achieve low bit-rate compression of stereo image pairs. The algorithm effectively combines the simplicity and adaptability of existing block-based stereo image compression techniques with an edge/contour-based object extraction technique to determine the appropriate compression strategy for various areas of the right image. Unlike existing object-based coding such as MPEG-4, developed in the video compression community, the proposed scheme does not require any additional shape coding. Instead, each arbitrary shape is reconstructed from the matching object inside the left frame, which has been encoded by the standard JPEG algorithm and hence is available at the decoding end for the shapes in right frames. The shape reconstruction for right objects incurs no distortion, due to the unique correlation between the left and right frames of a stereo pair and the nature of the proposed hybrid scheme. Extensive experiments show that the proposed algorithm achieves significant improvements of up to 20% in compression ratio compared with the existing block-based technique, while the reconstructed image quality is maintained at a competitive level in terms of both PSNR values and visual inspection.
McIntosh, Ian James. "Implementation of an application specific low bit rate video compression scheme." Thesis, 2001. http://hdl.handle.net/10413/5671.
Thesis (M.Sc.Eng.), University of Natal, Durban, 2001.
MENG, YI-HENG, and 蒙以亨. "A new data compression scheme based upon Lempel-Ziv universal algorithm." Thesis, 1988. http://ndltd.ncl.edu.tw/handle/33662737083984646399.
Liu, Jin-Min, and 劉景民. "A Chinese Text Compression Scheme Based on Large-Alphabet BW-Transform." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/13718831705317109242.
Full text國立臺灣科技大學
電機工程系
92
In this thesis, a Chinese text compression scheme based on a large-alphabet Burrows-Wheeler transform (BWT) is proposed. First, an input Chinese text file is parsed with a large alphabet consisting of characters from the BIG-5 and ASCII codes. The parsed token stream is then processed by BWT, MTF (move-to-front), and arithmetic coding. To improve the speed of the proposed scheme, we also study several practical implementations of BWT, MTF, and arithmetic coding under the large-alphabet parsing condition. Based on this compression scheme, an executable program was developed. In Chinese text compression experiments, it achieves better compression rates than other compression programs, namely WinZip, WinRAR, and BZIP2, with rate improvements of 12.9%, 4.7%, and 1.7%, respectively.
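A compact sketch of the BWT + MTF front end over an arbitrary token alphabet (a naive rotation sort for illustration; practical implementations use suffix arrays, and the sentinel choice is an assumption):

```python
def bwt(tokens, sentinel="\x00"):
    """Burrows-Wheeler transform of a token sequence (naive O(n^2 log n)).

    The sentinel must be unique and sort before all real tokens; it marks
    where the sequence ends so the transform can be inverted.
    """
    s = list(tokens) + [sentinel]
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return [rot[-1] for rot in rotations]  # last column of sorted rotations

def move_to_front(stream, alphabet):
    """Replace each token by its index in a self-organizing list, so
    recently seen tokens map to small integers for the entropy coder."""
    table = list(alphabet)
    out = []
    for tok in stream:
        i = table.index(tok)
        out.append(i)
        table.insert(0, table.pop(i))
    return out
```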
Huang, Jian-Jhih, and 黃建智. "An efficient and effective image compression technique based on segmentation scheme." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/31808924591499938173.
Full text國立屏東科技大學
資訊管理系所
101
Recently, large amounts of multimedia have been transmitted over networks, yet transmission speed and storage space face several limitations, so an efficient and effective image compression technique for multimedia transmission over networks is required. Several image compression techniques have been presented in the past, but their quality and execution time still need improvement. This thesis therefore proposes a new effective and efficient image compression method to overcome these limitations. The work involves several critical steps in performing the compression task: selecting the used colors, segmenting the colors with a fixed size, computing the average colors of the segmented blocks, dividing the average colors into eight blocks, assigning codebook sizes to the eight blocks according to their weights, and allocating codebook size to the block with the maximum number of colors. Simulations and comparisons with different data clustering methods use two measures, time cost and PSNR, and show that the proposed algorithm outperforms several well-known image compression approaches.
Huang, Yu-Ming, and 黃昱銘. "A Constant Rate Block Based Image Compression Scheme and Its Applications." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/42049460879138072754.
Full text國立中興大學
電機工程學系所
104
The display resolutions of 3C products are growing larger and larger nowadays, resulting in a huge increase in the demand on display data transmission bandwidth. This not only requires more hardware resources but also increases power consumption significantly. As a result, approaches that alleviate the display transmission bandwidth without perceivable loss of image quality are the key to tackling these problems. Compared with existing compression schemes of high compression efficiency yet high computational complexity, such as H.264, the compression schemes addressed here focus on low computational complexity and real-time processing. Since these compression schemes are often considered embedded functions tailored to specific systems, they are also termed embedded compression. In this thesis, we investigate embedded video compression with a constant compression rate to assure compliance with data bandwidth constraints. The proposed embedded compression scheme features an ensemble of compression techniques, each targeting image blocks with a certain texture property; all techniques are evaluated concurrently and the best of all is selected. The proposed system also supports the partial update feature adopted in Android 5.0 and can largely reduce the data transmission bandwidth if only a small portion of the image is updated. To implement this feature, compression is performed on a per-block basis, and all blocks are compressed independently without using information from adjacent blocks. This, however, poses significant challenges to the prediction accuracy of pixel values and the flexibility of bit allocation in coding, both of which are crucial to compression efficiency. To lower the line-buffer storage requirement, an image block of size 2×4 pixels is chosen as the basic compression unit. An integer color space transform, from RGB to YCgCo, is first applied to de-correlate the color components; after this pre-processing step, each color component is processed independently. Compression techniques employed in the proposed system include common-value extraction coding, distinct-value coding, interpolation-based coding, modified block truncation coding, and vector quantization coding. Among them, common-value extraction coding, distinct-value coding, and interpolation-based coding were uniquely developed for the proposed system; modified block truncation coding is derived from an existing one but adds an integer-DCT-based refinement process. All these techniques aim at exploiting data correlation in the spatial domain to facilitate efficient prediction and coding. Vector quantization treats the 2×4 block as a vector consisting of 8 tuples and codes the block in its entirety by finding the best match in a pre-defined codebook. To enhance compression efficiency, refinement processes performed in the frequency domain are further applied to the coding results of interpolation-based coding, modified block truncation coding, and vector quantization. The compression ratio of the proposed system is fixed while the best PSNR values are sought: the compression ratio of the luminance is 2 and those of the two color components are 4, leading to an overall constant-rate compression of 3. The compression efficiency of the proposed system is evaluated on a set of test images captured from various scenarios such as user interface, engineering patterns, text, gaming, video playback, and natural scenes, featuring different resolutions, texture complexities, and contrasts. The PSNR values of the respective color components as well as the entire image are calculated and compared to the results achieved by the JPEG standard, which uses an 8×8 block as the basic coding unit. The results show that the proposed scheme outperforms JPEG mostly on artificial images containing text or engineering patterns, while JPEG achieves better results on natural scenes and more complicated images, mainly owing to the inherent advantage of its larger coding block. However, JPEG, due to its complexity, is not considered an embedded compression scheme, nor can it support constant-rate compression. A subjective test based on visual inspection was also conducted, and the distortions caused by the proposed scheme are visually barely noticeable.
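The RGB-to-YCgCo step can be made losslessly reversible with integer lifting. A minimal sketch of the standard YCgCo-R construction (a common choice for such integer de-correlation; whether the thesis uses exactly this variant is an assumption):

```python
def rgb_to_ycgco_r(r, g, b):
    """Reversible integer RGB -> YCgCo-R using lifting steps."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, cg, co

def ycgco_r_to_rgb(y, cg, co):
    """Exact inverse of rgb_to_ycgco_r, undoing the steps in reverse."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```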