Dissertations / Theses on the topic 'Predictive quantization'
Consult the top 32 dissertations / theses for your research on the topic 'Predictive quantization.'
Soong, Michael. "Predictive split vector quantization for speech coding." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=68054.
Full text
Summation Product Codes (SPCs) are a family of structured vector quantizers that circumvent the complexity obstacle of unconstrained VQ. The performance of SPC vector quantizers can be traded off against their storage and encoding complexity. Beyond these complexity factors, the design algorithm also affects quantizer performance: the conventional generalized Lloyd algorithm (GLA) generates sub-optimal codebooks. For a particular SPC such as multistage VQ, the GLA is applied to design the stage codebooks one stage at a time; joint design algorithms, on the other hand, update all the stage codebooks simultaneously.
In this thesis, a general formulation of and an algorithmic solution to the joint codebook design problem is provided for the SPCs. The key to this algorithm is that every SPC has a reference product codebook which minimizes the overall distortion. This joint design algorithm is tested with a novel SPC, namely "Predictive Split VQ (PSVQ)".
VQ of speech Line Spectral Frequencies (LSFs) using PSVQ is also presented. A key result of this work is that PSVQ, designed using the joint codebook design algorithm, requires only 20 bits/frame (20 ms frames) for transparent coding of 10th-order LSF parameters.
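The generalized Lloyd algorithm that the abstract above contrasts with joint design can be sketched as follows. This is a generic two-condition GLA (nearest-neighbour assignment plus centroid update), not the thesis's joint SPC design; the deterministic initialization is an illustrative choice.

```python
import numpy as np

def generalized_lloyd(data, k, iters=20):
    """Generic GLA sketch: alternate nearest-codeword assignment and
    centroid update until the codebook settles."""
    # Deterministic spread-out initialization (illustrative choice).
    codebook = data[np.linspace(0, len(data) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Nearest-neighbour condition: assign each vector to its closest codeword.
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Centroid condition: each codeword becomes the mean of its cell.
        for j in range(k):
            cell = data[labels == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook

# Two well-separated clusters should yield one codeword per cluster.
train = np.vstack([np.zeros((50, 2)), np.full((50, 2), 10.0)])
cb = generalized_lloyd(train, k=2)
```

A stage-by-stage multistage design would run this once per stage on the previous stage's residuals; a joint design re-optimizes all stages against the overall distortion instead.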
Abousleman, Glen Patrick. "Entropy-constrained predictive trellis coded quantization and compression of hyperspectral imagery." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186748.
Full text
Wang, Yan. "Predictive boundary point adaptation and vector quantization compression algorithms for CMOS image sensors." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20WANGY.
Full text
Rivera Hernández, Sergio. "Tensorial spacetime geometries carrying predictive, interpretable and quantizable matter dynamics." PhD thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2012/6186/.
Full text
Which tensor fields G on a smooth manifold M can describe a spacetime geometry? In the first part of this dissertation it is shown that only severely restricted classes of tensor fields can represent a spacetime geometry, namely those tensor fields that render the dynamics of matter fields predictive, interpretable and quantizable. The apparent dependence of this characterization of admissible tensorial spacetimes on a specific matter dynamics is not a weakness of the theory, but ultimately precisely the principle that distinguishes the usually considered Lorentzian manifolds: these constitute the metric geometry that makes Maxwell electrodynamics predictive, interpretable and quantizable. Matter dynamics that do not respect the causal structure of Maxwell electrodynamics force us to select another geometry, on which the matter dynamics must nevertheless still be predictive, interpretable and quantizable. These three requirements on the matter correspond to three algebraic requirements on the totally symmetric contravariant tensor field P that determines the principal polynomial of the matter field equations (expressed in terms of the fundamental tensor field G): the tensor field P must be hyperbolic, time-orientable and energy-distinguishing. These three necessary conditions on the geometry suffice to realize all kinematical constructions familiar from Lorentzian geometry. This is shown in the first part of the present work using a partly rather subtle interplay between convex analysis, the theory of partial differential equations and real algebraic geometry. In the second part of this dissertation, we explore general properties of all such hyperbolic, time-orientable and energy-distinguishing geometries.
Physically important results include the construction of freely falling, non-rotating laboratories, the appearance of modified energy-momentum relations, and the identification of a mechanism explaining why massive particles moving faster than some massless particles can radiate energy, but only until they move slower than all massless particles. In the third part of the dissertation, we investigate the quantization of particles and fields on tensorial spacetime geometries satisfying the above physical conditions. An important motivation of this investigation is to develop techniques for computing the decay rate of particles that move faster than slow massless particles. We find that it is again the three requirements identified earlier in the classical context (hyperbolicity, time-orientability and energy-distinguishability) that permit the quantization of general linear electrodynamics on an area-metric spacetime and the quantization of massive particles respecting a physical energy-momentum relation. We also systematically explore how to generate field equations of all derivative orders, and prove a theorem that determines generalized Dirac algebras and thereby enables the reduction of the derivative order of a physical matter field equation. The last part of this work outlines a remarkable result obtained with the techniques presented in this dissertation.
In particular, owing to the dual maps identified here between particle momenta and velocities on general tensorial spacetimes, it was possible to show that the gravitational dynamics for hyperbolic, time-orientable and energy-distinguishing geometries need not be postulated: the problem of their construction reduces to a purely mathematical task, the solution of a system of homogeneous linear differential equations. This far-reaching result on modified theories of gravity is a direct (though hard to derive) consequence of the research results of this dissertation. The abstract theory of this thesis is illustrated by several instructive examples.
Horvath, Matthew Steven. "Performance Prediction of Quantization Based Automatic Target Recognition Algorithms." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1452086412.
Full textHuang, Bihong. "Second-order prediction and residue vector quantization for video compression." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S026/document.
Full text
Video compression has become a mandatory step in a wide range of digital video applications. Since the development of the block-based hybrid coding approach in the H.261/MPEG-2 standards, a new coding standard has been ratified roughly every ten years, each achieving approximately 50% bit-rate reduction over its predecessor without sacrificing picture quality. However, given the ever-increasing bit rates required to transmit HD and beyond-HD formats within limited bandwidth, there is a continuing need for video compression technologies with higher coding efficiency than the current HEVC standard. In this thesis, we propose three approaches to improve the intra coding efficiency of HEVC by exploiting the correlation of the intra prediction residue. A first approach, based on the reuse of previously decoded residue, shows that even though gains are theoretically possible, the extra signaling cost can negate the benefit of residual prediction. A second approach, based on Mode Dependent Vector Quantization (MDVQ) applied prior to the conventional transform and scalar quantization, provides significant coding gains; we show that this approach is realistic because the dictionaries are independent of QP and of reasonable size. Finally, a third approach gradually adapts the dictionaries to the intra prediction residue. The adaptivity yields a substantial gain, especially on atypical video content, without increasing decoding complexity. The result is a complexity/gain trade-off suitable for a standardization proposal.
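The mode-dependent residue quantization described above can be sketched as follows. This is a minimal illustration of the MDVQ step only; the dictionary contents and the mode indexing are hypothetical, not taken from the thesis or from HEVC.

```python
import numpy as np

def mdvq_quantize(residual, dictionaries, intra_mode):
    """Sketch of mode-dependent vector quantization (MDVQ) of an intra
    prediction residue: look up the dictionary trained for this intra mode,
    subtract the closest codeword, and hand the remainder to the usual
    transform/scalar-quantization stage."""
    cb = dictionaries[intra_mode]
    dist = ((cb - residual) ** 2).sum(axis=1)
    idx = int(dist.argmin())
    return idx, residual - cb[idx]

# Toy dictionary for a hypothetical intra mode 0 (values are illustrative).
dicts = {0: np.array([[1.0, 1.0], [5.0, 5.0]])}
idx, remainder = mdvq_quantize(np.array([4.6, 5.2]), dicts, intra_mode=0)
```

The point of mode dependence is that residues produced by different intra prediction directions have different statistics, so each mode gets its own small dictionary.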
Vasconcelos, Nuno Miguel Borges de Pinho Cruz de. "Library-based image coding using vector quantization of the prediction space." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/62918.
Full text
Includes bibliographical references (leaves 122-126).
by Nuno Miguel Borges de Pinho Cruz de Vasconcelos.
M.S.
Boland, Simon Daniel. "High quality audio coding using the wavelet transform." Thesis, Queensland University of Technology, 1998.
Find full text
Clayton, Arnshea. "The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/19.
Full text
Kamath, Vidya P. "Enhancing Gene Expression Signatures in Cancer Prediction Models: Understanding and Managing Classification Complexity." Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3653.
Full text
Namburu, Visala. "Speech Coder using Line Spectral Frequencies of Cascaded Second Order Predictors." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/35670.
Full text
Master of Science
Tino, Peter, Christian Schittenkopf, and Georg Dorffner. "Temporal pattern recognition in noisy non-stationary time series based on quantization into symbolic streams. Lessons learned from financial volatility trading." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/1680/1/document.pdf.
Full text
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
Wandeto, John Mwangi. "Self-organizing map quantization error approach for detecting temporal variations in image sets." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD025/document.
Full text
A new approach for image processing, dubbed SOM-QE, that exploits the quantization error (QE) from self-organizing maps (SOM) is proposed in this thesis. SOMs produce low-dimensional discrete representations of high-dimensional input data. QE is determined from the results of the unsupervised learning process of the SOM and the input data. The SOM-QE of a time series of images can be used as an indicator of changes in the time series. To set up the SOM, a map size, neighbourhood distance, learning rate and number of learning iterations are determined; the combination of these parameters that gives the lowest QE is taken as the optimal parameter set and used to transform the dataset. This has been the traditional use of QE. The novelty of the SOM-QE technique is fourfold. First, in usage: SOM-QE employs one SOM to determine the QE of different images - typically in a time-series dataset - unlike the traditional usage, where different SOMs are applied to one dataset. Secondly, the SOM-QE value is introduced as a measure of uniformity within an image. Thirdly, the SOM-QE value becomes a unique label for the image within the dataset and, fourthly, this label is used to track changes that occur in subsequent images of the same scene. Thus, SOM-QE provides a measure of variation within an image at an instant in time and, when compared with the values from subsequent images of the same scene, reveals a visualization of transient changes in the scene under study. In this research the approach was applied to artificial, medical and geographic imagery to demonstrate its performance. Changes that occur in geographic scenes of interest, such as new buildings being put up in a city, or lesions receding in medical images, are of interest to scientists and engineers.
The SOM-QE technique provides a new way to automatically detect growth in urban spaces or the progression of disease, giving timely information for appropriate planning or treatment. In this work, it is demonstrated that SOM-QE can capture very small changes in images. Results also confirm it to be fast and computationally inexpensive in discriminating between changed and unchanged content in large image datasets. Pearson's correlation confirmed statistically significant correlations between SOM-QE values and the ground-truth data. On evaluation, the technique performed better than other existing approaches. This work is important because it introduces a new way of looking at fast, automatic change detection, even for small local changes within images. It also introduces a new method of determining QE, and the data it generates can be used to predict changes in a time-series dataset.
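The core SOM-QE measurement described above can be sketched in a few lines. Training the SOM itself is omitted; the sketch assumes an already-trained weight matrix and illustrative feature vectors.

```python
import numpy as np

def som_qe(weights, features):
    """Sketch of the SOM-QE measure: the mean distance between an image's
    feature vectors and their best-matching units in an already-trained
    SOM. `weights` is the (n_units, dim) weight matrix."""
    d = np.linalg.norm(features[:, None, :] - weights[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# A toy trained SOM and two "images": one matching the training data,
# one with a change in the scene (all values illustrative).
som = np.array([[0.0, 0.0], [1.0, 1.0]])
unchanged = np.array([[0.0, 0.0], [1.0, 1.0]])
changed = np.array([[0.5, 0.5], [2.0, 2.0]])
```

Comparing `som_qe(som, ...)` across images of the same scene gives the change indicator: an image that deviates from what the SOM learned yields a higher QE value.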
Yu, Lang. "Evaluating and Implementing JPEG XR Optimized for Video Surveillance." Thesis, Linköping University, Computer Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54307.
Full text
This report describes both the evaluation and the implementation of the upcoming image compression standard JPEG XR. The intention is to determine whether JPEG XR is an appropriate standard for IP-based video surveillance. Video surveillance, especially IP-based video surveillance, currently plays an increasing role in the security market. To suit surveillance, the video stream generated by the camera must have a low bit rate and low network latency while maintaining a high dynamic display range. The thesis starts with an in-depth study of the JPEG XR encoding standard. Since the standard supports different settings, optimized settings are applied to the JPEG XR encoder to fit the requirements of network video surveillance. A comparative evaluation of JPEG XR versus JPEG is then delivered in both objective and subjective terms. Later, part of the JPEG XR encoder is implemented in hardware as an accelerator for further evaluation, with SystemVerilog as the coding language. A TSMC 40nm process library and the Synopsys ASIC tool chain are used for synthesis. The throughput, area and power of the encoder are given and analyzed. Finally, the system integration of the JPEG XR hardware encoder into the Axis ARTPEC-X SoC platform is discussed.
Dvořák, Martin. "Výukový video kodek." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219882.
Full text
Lundberg, Emil. "Adding temporal plasticity to a self-organizing incremental neural network using temporal activity diffusion." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180346.
Full text
Vector quantization (VQ) is a classic problem and a simple method for pattern recognition. Applications include lossy data compression, clustering, and speech and speaker recognition. Although VQ has largely been replaced by time-aware techniques such as hidden Markov models (HMM) and dynamic time warping (DTW) in some applications, such as speech and speaker recognition, VQ retains some relevance thanks to its much lower computational cost, particularly for embedded systems. A recent study also demonstrates a multi-section VQ system that achieves performance on par with DTW in a handwritten-signature recognition application, but at a much lower computational cost. Exploiting temporal patterns in a VQ algorithm could help improve such results further. SOTPAR2 is one such extension of Neural Gas, an artificial neural network algorithm for VQ. SOTPAR2 uses a conceptually simple idea, based on adding lateral connections between network nodes and creating "temporal activity" that diffuses through connected nodes. The activity then makes the nearest-neighbour classifier prefer nodes with high activity, and the authors of SOTPAR2 report improved results compared to Neural Gas in a time-series prediction application. This report investigates how the same extension affects the quantization and prediction performance of the self-organizing incremental neural network (SOINN) algorithm, a VQ algorithm that automatically selects an appropriate codebook size and can also be used for clustering with arbitrary cluster shapes. Experimental results show that this extension does not improve the performance of SOINN; instead, performance deteriorated in all experiments conducted.
This result is discussed, along with the impact of parameter values on performance, and possible future work to improve the results is suggested.
Hanzálek, Pavel. "Praktické ukázky zpracování signálů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400849.
Full text
Wang, Chun-Wei (王俊偉). "Adaptive Entropy-Constrained Predictive Motion Vector Quantization." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/28127845362933232261.
Full text
Chung Yuan Christian University
Institute of Electrical Engineering
89
Motion vector quantization (MVQ) is an effective algorithm for video coding, with low computational complexity for block matching and a low average rate for motion vector delivery. However, the algorithm has two major shortcomings: MVQ needs high computational complexity for codebook training, and it cannot control the index rate. To overcome these two defects, this thesis presents a novel algorithm named "Adaptive Entropy-Constrained Predictive Motion Vector Quantization" (AECPMVQ), which consists of three parts. The first is "Adaptive": the algorithm can update the codebook online, so AECPMVQ is suitable for real-time coding. The second, "Entropy-Constrained", allows the index rate and codebook size to be pre-specified independently; because arithmetic coding is used in this part, AECPMVQ saves more index rate than MVQ. The final part, "Predictive", concentrates the utilization of indices, so the efficiency of the arithmetic coding is further enhanced. Simulation results show that AECPMVQ has better rate-distortion performance than other algorithms, especially in low-rate coding, where its performance lead is more pronounced.
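The entropy-constrained part described above follows the standard ECVQ selection rule, which can be sketched as follows. The codebook, index probabilities, and Lagrange multiplier are illustrative, not the thesis's values.

```python
import numpy as np

def ec_select(mv, codebook, probs, lam):
    """Sketch of the entropy-constrained selection rule used in ECVQ-style
    schemes: minimize distortion + lambda * rate, where the rate of an
    index is its ideal arithmetic-coding length -log2 p(index)."""
    dist = ((codebook - mv) ** 2).sum(axis=1)
    rate = -np.log2(probs)
    return int((dist + lam * rate).argmin())

cb = np.array([[0.0, 0.0], [1.0, 1.0]])
p = np.array([0.9, 0.1])          # illustrative index probabilities
mv = np.array([0.6, 0.6])
```

With `lam=0` this is plain nearest-neighbour search (index 1 here); with a larger `lam` the frequently used, cheap-to-code index 0 wins, which is how the index rate is traded against distortion.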
Kuo, I.-Sheng, and 郭萓聖. "A Predictive Classifier for Image Vector Quantization." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/51311548448930956892.
Full text
National Cheng Kung University
Department of Electrical Engineering
87
In this thesis, a new still-image compression scheme using vector quantization is proposed. A new classification method for edge blocks and a new prediction method for both classification types and VQ indices are proposed for the encoder. To achieve better performance, the encoder decomposes images into smooth and edge areas and encodes them separately using different algorithms. MRVQ with block sizes of 8×8 and 16×16 pixels is applied to smooth areas to achieve a higher compression ratio, while a total of 32 predicted-CVQ types are applied to edge areas to achieve good quality. The proposed prediction method has an accuracy of about 50% when applied to edge areas only. Applying the proposed encoding scheme to the still image 'Lena' achieves a bit rate of 0.219 bpp at a PSNR of 30.59 dB.
Yu, Jr-Ruei, and 喻至瑞. "Predictive Split Matrix Quantization of Speech LSP Parameters." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/b7tabp.
Full text
National Taipei University of Technology
Graduate Institute of Electrical Engineering
97
Due to the rapid growth of digital mobile communication and voice over the Internet, speech compression has played an increasingly important role in recent years. It is a huge challenge to maintain the quality of coded speech while reducing its bit rate by eliminating redundancy effectively. Line spectrum pair (LSP) coefficients are widely used to represent the short-term spectrum in most low bit-rate speech coders. Although a short-time frame captures frequency-domain features, it cannot exhibit the coherent structure of speech over longer spans. To capture these intrinsic sequential structures and characteristics, a predictive split matrix quantization (PSMQ) is proposed in this thesis; it can be seen as an important tool for improving the performance of split vector quantization. Combining 4 frames into one super-frame makes it possible to capture the structure of speech and remove redundancy through prediction. Experimental results show that KLT-PSMQ is more efficient than memoryless SMQ, saving about 2 bits per frame. Based on subjective and objective evaluation scores, a MELP coder using KLT-PSMQ performs better than the original.
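The predictive step described above can be sketched generically: quantize the residual between the current (super-)frame and a prediction from the previously decoded one. The scalar predictor coefficient and tiny codebook are illustrative, not the thesis's KLT-based design.

```python
import numpy as np

def predictive_quantize(frame, prev_decoded, pred_coef, codebook):
    """Sketch of predictive quantization of an LSF (super-)frame:
    predict from the previously decoded frame, quantize the residual
    with a VQ codebook, and reconstruct."""
    pred = pred_coef * prev_decoded
    resid = frame - pred
    idx = int(((codebook - resid) ** 2).sum(axis=1).argmin())
    return idx, pred + codebook[idx]

cb = np.array([[0.0, 0.0], [1.0, 1.0]])
idx, recon = predictive_quantize(np.array([1.5, 1.4]),
                                 np.array([1.0, 1.0]), 0.5, cb)
```

Because consecutive LSF frames are strongly correlated, the residual has a much smaller spread than the raw parameters, which is where the bit savings come from.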
Xu, Dao-Cheng, and 徐德成. "Predictive Dynamic Finite-State Vector Quantization for Image and Video Compression." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/72519656959021362803.
Full text
Lin, Yig-Shyang (林奕祥). "Bits Rate Control of MPEG With Predictive And Adaptive Perceptual Quantization." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/72812246072033261983.
Full text
Lin, Yi Xiang (林奕祥). "The rate control of MPEG with predictive and adaptive perceptual quantization." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/24289258074498848263.
Full text
Fan, Kuo-Lun (范國倫). "Binary Search & Mean Value Predictive Hybrid Fast Vector Quantization Algorithm." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/54042830044642569713.
Full text
Leader University
Institute of Applied Informatics
91
Vector quantization (VQ) is an effective technology for signal compression. In traditional VQ, most of the computation is spent searching the codebook for the nearest codeword for each input vector. We propose a fast VQ algorithm to reduce encoding time, consisting of two main parts: a pre-processing stage and the actual encoding stage. In pre-processing, we generate tables that are later used during encoding. In the encoding stage, we use the previously generated tables together with additional techniques to further speed up encoding. This thesis provides an effective algorithm to accelerate encoding, demonstrating outstanding performance in terms of time saving and arithmetic operations. Compared to the full-search algorithm, it saves more than 95% of the search time.
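One widely used principle behind fast VQ search of the kind described above can be sketched as follows. This shows only the generic early-exit (partial distortion elimination) idea; the thesis's binary search on block means and precomputed tables are further refinements not reproduced here.

```python
def pde_search(x, codebook):
    """Sketch of partial-distortion elimination for fast VQ search:
    abandon a codeword as soon as its running partial distance exceeds
    the best distance found so far, avoiding most of the arithmetic of
    a full search while returning the same nearest codeword."""
    best_idx, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:        # partial distortion already too large
                break
        else:
            best_idx, best_d = i, d
    return best_idx
```

Ordering the codewords by a cheap feature such as the block mean (as the thesis's binary search does) makes the early exit trigger sooner, compounding the savings.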
Shie, Shih-Chieh, and 謝仕杰. "Low Bit Rate Side-Match Vector Quantization with Predictive Block Classification for Images Coding." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/03488515470237628459.
Full text
National Dong Hwa University
Department of Computer Science and Information Engineering
88
In this thesis, two efficient vector quantization schemes, called SMVQ with intuitive edge extraction (SMVQ-IEE) and SMVQ with adaptive block classification (SMVQ-ABC), are proposed for image compression. SMVQ-IEE and SMVQ-ABC make use of the edge information contained in an image in addition to the average values of the blocks forming the image. To achieve low bit-rate coding while preserving good image quality, neighboring blocks are used to predict the class of the current encoding block. Image blocks are mainly classified as edge blocks and non-edge blocks in the SMVQ-IEE and SMVQ-ABC coding schemes; to improve coding efficiency, edge blocks and non-edge blocks are each further reclassified into different classes. Moreover, the number of bits for encoding an image is greatly reduced by predicting the class of the input block and applying a small state codebook for the corresponding class. The improvements of the proposed coding schemes are attractive compared with other VQ techniques. Keywords: image compression, classified vector quantization, side-match finite-state vector quantization, adaptive variable-rate coding.
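The side-match principle underlying both schemes above can be sketched as follows. This is the generic criterion only (score master codewords by boundary continuity with the neighbours and keep the best few as the state codebook); block size and codeword values are illustrative.

```python
import numpy as np

def side_match_state(top_row, left_col, master_codebook, state_size):
    """Sketch of side-match state selection: score each master codeword by
    how well its first row and first column continue the adjacent blocks'
    boundary pixels, and keep the best `state_size` words as the small
    state codebook actually indexed in the bitstream."""
    tops = master_codebook[:, 0, :]      # first row of each codeword block
    lefts = master_codebook[:, :, 0]     # first column of each codeword block
    d = ((tops - top_row) ** 2).sum(axis=1) + ((lefts - left_col) ** 2).sum(axis=1)
    return np.argsort(d)[:state_size]

# A flat grey neighbourhood should rank the flat codeword first.
master = np.stack([np.full((4, 4), 100.0), np.zeros((4, 4))])
state = side_match_state(np.full(4, 100.0), np.full(4, 100.0), master, 1)
```

Because the decoder can derive the same state codebook from already-decoded neighbours, only an index into the small state codebook needs to be transmitted, which is where the bit-rate reduction comes from.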
Chou, Tzu-Hsuan, and 周子軒. "Feedback for Time-correlated MIMO-OFDM System using Predictive Quantization of Bit Allocation and Subcarrier Clustering." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/ghy2u6.
Full text
National Chiao Tung University
Institute of Electrical and Control Engineering
101
In this thesis, we consider a MIMO-OFDM system over a time-correlated multipath fading channel with limited feedback, where the transmission rate is adapted according to the channel condition. Assuming the taps of the multipath fading channel are i.i.d. across taps and correlated in time, we can model each tap as a first-order Gauss-Markov process, and we show that the frequency-domain channel on each subcarrier is then also a first-order Gauss-Markov process. We apply predictive quantization to the bit-loading vectors to take advantage of this time correlation. Furthermore, we consider subcarrier clustering, in which the subcarriers are grouped into clusters and only one bit-loading vector is fed back per cluster; clustering exploits the frequency correlation of the MIMO channels. Compared with previous works that utilize only the frequency correlation of the MIMO-OFDM system, the proposed method performs better when the channel varies slowly.
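The first-order Gauss-Markov tap model underpinning the abstract above can be sketched directly from its defining recursion; the correlation coefficient used here is an illustrative value.

```python
import numpy as np

def gauss_markov_step(h, alpha, rng):
    """Sketch of the first-order Gauss-Markov tap model,
    h[t] = alpha * h[t-1] + sqrt(1 - alpha**2) * w[t],
    with w[t] unit-variance circularly symmetric complex Gaussian, so
    that E|h|^2 stays at 1 and the one-step correlation equals alpha."""
    w = (rng.standard_normal(h.shape) + 1j * rng.standard_normal(h.shape)) / np.sqrt(2)
    return alpha * h + np.sqrt(1 - alpha ** 2) * w

rng = np.random.default_rng(0)
h0 = (rng.standard_normal(100000) + 1j * rng.standard_normal(100000)) / np.sqrt(2)
h1 = gauss_markov_step(h0, alpha=0.95, rng=rng)
```

The larger `alpha` is (slowly varying channel), the more predictable the next bit-loading vector is from the previous one, which is exactly what the predictive quantizer exploits.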
Uddin, S. M. Muslem. "Advanced Model Predictive Control for AC drives with common mode voltage mitigation." Thesis, 2021. http://hdl.handle.net/1959.13/1430649.
Full text
Model Predictive Control (MPC) is a popular control strategy studied in many research publications; nevertheless, its acceptance by industry has been slow. This thesis identifies a class of AC applications that can significantly benefit from the MPC paradigm: high-performance AC drives operating in industrial environments where common mode voltage (CMV) is a critical aspect. After a critical analysis of existing MPC-based approaches, the thesis proposes a new and advanced MPC scheme called Feedback Quantization Model Predictive Control (FBQ-MPC). The proposed scheme has a number of important improvements, including integral action, advanced disturbance rejection, improved modulation performance and control over the harmonic spectrum, as well as CMV minimization. Application of the proposed FBQ-MPC method is demonstrated with the two power converter options found most appropriate for CMV-sensitive environments. On this basis, full models of an industrial AC drive have been developed and studied by simulation and experiment. The studies show that AC drives based on FBQ-MPC overcome the common MPC drawback and offer prominent advantages in CMV-sensitive, as well as more general, AC drive applications.
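The "feedback quantization" ingredient named above can be illustrated in its generic form as a first-order error-feedback (noise-shaping) loop; this is only the general principle, not the thesis's FBQ-MPC controller, and the reference signal and levels are illustrative.

```python
def feedback_quantize(reference, levels):
    """Sketch of the generic feedback-quantization idea: add the previous
    quantization error back onto the next reference sample before
    rounding to the nearest available level, so the error is shaped out
    of the low-frequency band and the average output tracks the input."""
    out, err = [], 0.0
    for r in reference:
        v = r + err                      # feed the previous error forward
        q = min(levels, key=lambda level: abs(level - v))
        err = v - q                      # new quantization error
        out.append(q)
    return out

# A constant reference of 0.4 with binary levels dithers between 0 and 1
# so that the running average of the output tracks the reference.
seq = feedback_quantize([0.4] * 10, levels=[0.0, 1.0])
```

In a converter context the "levels" are the discrete voltage vectors the switches can realize, which is why a quantization viewpoint fits modulation so naturally.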
ZHENG, DAO-HAN, and 鄭道涵. "Regressive Model Representation and Quantization Factor Prediction of ECG Data Compression." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/bnxf8z.
Full text
National Kaohsiung First University of Science and Technology
Master's Program, Department of Computer and Communication Engineering
105
In addition to Taiwan, Japan and South Korea have entered the aging society, and increasing attention is being paid to national health and medical care; disease prevention and monitoring will be a future trend. Electrocardiography (ECG) is one of the key items, as the signal requires long recordings and a large amount of data. With the popularity and development of wearable devices, ECG compression has become an important issue, the three major challenges being low power, device volume and data compression. The original architecture requires a large number of divisions, and the cost of a divider is too high, so the ECG compression algorithm cannot be directly converted into a hardware chip. This thesis optimizes the quantization-factor prediction part and successfully converts it into a hardware architecture. The system is divided into three blocks: wavelet transform, quantization and coding. In the compression process, after the wavelet transform is completed, the signal enters the quantization step. Because ECG serves medical and monitoring purposes, quality control is very important in the quantization system, so converting the software algorithm into a hardware architecture must also preserve the preset quality management. Predicting the quantization factor requires dividing the quantization factor by SPRD2. To avoid this division, we first define the quantization factor as the X axis and SPRD2 as the Y axis, then use the coordinate axes to record all quantization factors and SPRD2 values at PRD (percentage rms difference) = 2%. The SPRD2 range is cut into classification segments, and statistical regression analysis is applied in each segment to produce a corresponding equation.
This equation contains only multiplication and addition and can directly compute the two parameters of the division. This method avoids the cost of a divider and the storage of large amounts of data. The improved method of this study was tested with 48 ECG database records on the Matlab platform, and all signals meet the preset PRD = 2% quality control. Finally, we also propose the basic hardware architecture and synthesis data for predicting the quantization factor. Keywords: ECG, data compression, PRD, regression analysis, quantization factor
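The divider-free lookup described above can be sketched as a piecewise linear evaluation; the segment boundaries and coefficients below are illustrative placeholders, not the thesis's fitted regression values.

```python
def predict_qfactor(sprd2, segments):
    """Sketch of the divider-free prediction: the quantization factor is
    read from a piecewise linear regression (multiply-add only) instead
    of being computed with a division. `segments` holds
    (upper_bound, slope, intercept) triples fitted offline."""
    for upper, slope, intercept in segments:
        if sprd2 <= upper:
            return slope * sprd2 + intercept
    # Fall back to the last segment for out-of-range inputs.
    upper, slope, intercept = segments[-1]
    return slope * sprd2 + intercept

# Illustrative two-segment table (not the thesis's coefficients).
segs = [(10.0, 2.0, 1.0), (100.0, 0.5, 16.0)]
```

Each evaluation is one comparison chain plus one multiply-accumulate, which maps directly onto a small hardware datapath with no divider.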
Chin, Shang-Chiang, and 金上強. "Adaptive Quantization Parameter Based Prediction Architecture Design for Rate Distortion Optimization." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/83262529629576520832.
Full text
National Dong Hwa University
Department of Electrical Engineering
97
With the development of networks and the constant improvement of multimedia technology, efficient video compression has become an important research area for the multimedia communication community. Under the dual considerations of quality and speed, improving video compression technology is paramount, and reducing the data volume of video in image processing has become a major research topic. As video systems advance, compressed video achieves higher peak signal-to-noise ratio (PSNR) and lower bit rate. Nevertheless, encoding time increases with the complexity of the video system, and there is no method to control the distortion introduced by quantization, so it is difficult to encode video in real time while improving visual quality. Rate-distortion optimization is a technique of modern video systems that offers multiple coding modes in pursuit of PSNR and bit-rate performance. An "Adaptive Quantization Parameter Based Prediction Algorithm for Rate Distortion Optimization" is proposed to efficiently estimate the optimal mode with an adaptively varied quantization parameter based on quantization error analysis. The proposed algorithm significantly improves PSNR and bit-rate reduction with only a small encoding-time penalty.
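The rate-distortion optimized mode decision that the abstract builds on can be sketched as follows. The lambda formula is the one commonly used in H.264/HEVC reference software (the constant 0.85 varies by configuration), and the candidate mode costs are illustrative.

```python
def rd_lambda(qp):
    """Lagrange multiplier commonly used in H.264/HEVC reference software;
    the exact constant depends on configuration, 0.85 is illustrative."""
    return 0.85 * 2.0 ** ((qp - 12) / 3.0)

def best_mode(candidates, qp):
    """Sketch of rate-distortion optimized mode decision: each candidate
    is (name, distortion, rate_bits); pick the minimizer of
    J = D + lambda * R."""
    lam = rd_lambda(qp)
    return min(candidates, key=lambda m: m[1] + lam * m[2])[0]

# Illustrative candidates: an accurate but expensive mode vs a cheap one.
modes = [("intra", 100.0, 10.0), ("skip", 160.0, 1.0)]
```

Because lambda grows with QP, the same candidate set can resolve differently at different quality points, which is why adapting QP to the quantization error changes the chosen mode.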
張晶禾. "The Study on Prediction Schemes for Three-Sided Side Match Vector Quantization." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/46063999928291457941.
Full text
National Tsing Hua University
Department of Computer Science
89
This research on side-match vector quantization adapts three or even four sides to obtain better edge-block prediction; the most notable improvement is the recent work on TSMVQ. However, its finite-state coding strategy, like side match, uses equal weights for the surrounding block sides. When a clear line goes through the blocks, the choice of which two sides to match can make the side-match procedure select a totally different state codebook. This characteristic gives neighboring blocks traversed by a line high correlation, and we use this trait to propose an advanced prediction that exploits the correlation to predict such edge blocks in fewer bits. We also study the limitations of TSMVQ, which leads us to find that certain blocks are hard to predict with the three-sided side-match method when the correlation between the sides is not strong. Hence, selecting a certain number of such blocks and sending their positions by quadtree coding, along with their vector quantization indices, obtains better image quality at the cost of some extra bits as a trade-off.
Yang, Sheng-Yu, and 楊勝裕. "A Constant Rate Block Based Image Compression Scheme Using Vector Quantization and Prediction Schemes." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/wrwyp4.
Full text National Chung Hsing University
Department of Electrical Engineering
107
This thesis proposes an embedded image compression system aimed at reducing the large amount of data transmission and storage along the display link path. Embedded compression focuses on low computing complexity and low hardware resource requirements, while providing a guarantee of compression performance. The algorithm proposed in this thesis is a constant-rate block-based image compression scheme with two scheme options. Both schemes are evaluated at the same time and the better one is chosen. In order to support the "screen partial update" function of the Android system, a block-based compression system is adopted. This means that all blocks are compressed independently; no information from the surrounding blocks is available. The block size is set to 2x4. The compression ratio is also fixed at three to ensure a constant bandwidth requirement. In addition, the YCoCg color space is used. The major techniques employed are shape-gain vector quantization (VQ) and prediction. A 2x4 block is first converted to a 1x8 vector and encoded using pre-trained vector codebooks. By taking advantage of the correlation between color components, all color components share the same index in shape coding to save the bit budget, while each color component has its own gain index. The shape-gain VQ residuals of the worst-performing color component are further refined using two techniques, i.e., DPCM and integer DCT. DPCM achieves prediction by recording the difference between successive pixels. The integer DCT approach converts the pixel residual values from the spatial domain to the frequency domain and records only the low-frequency components for the refinement. Experimental results, however, indicate that neither technique achieves satisfactory refinement. The final scheme therefore applies shape-gain VQ to the Cg and Co components only and employs a reference prediction scheme for the Y component.
In this prediction scheme, the maximum of the pixel values in the block is first determined and all other pixel values are predicted with reference to that maximum. The reference can be either the difference or the ratio with respect to the maximum. Both differences and ratios are quantized using codebooks to reduce the bit requirement. The evaluation criteria for compression performance are PSNR and the maximum pixel error of the reconstructed image. The test bench includes images in various categories such as natural, portrait, engineering, and text. The baseline for comparison is a prior art reported in the thesis entitled "A Constant Rate Block Based Image Compression Scheme for Video Display Link Applications." The same compression specifications are employed in both schemes. The experimental results show that our algorithm performs better on natural and portrait images, with a PSNR advantage of about 1 to 2 dB. The proposed algorithm performs worse on engineering images. In terms of image size, our algorithm performs better on low-resolution images. This is because the reference predictor and shape-gain vector quantization schemes are more efficient in handling blocks consisting of sharply changing pixels.
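The difference-from-maximum variant of the reference prediction described above can be sketched as follows. This is only an illustration: a fixed uniform quantizer step stands in for the thesis's trained codebooks, and the block values are made up.

```python
import numpy as np

def encode_block(block, step=8):
    """Reference prediction: store the block maximum, then the
    quantized difference of every pixel from that maximum.
    A fixed uniform step is an assumed stand-in for trained codebooks."""
    ref = int(block.max())
    diff = ref - block                       # non-negative differences
    idx = np.round(diff / step).astype(int)  # quantizer indices to transmit
    return ref, idx

def decode_block(ref, idx, step=8):
    """Reconstruct pixels from the reference and quantized differences."""
    return np.clip(ref - idx * step, 0, 255)

blk = np.array([[200, 196, 190, 187],
                [198, 193, 188, 185]])
ref, idx = encode_block(blk)
recon = decode_block(ref, idx)
print(np.abs(recon - blk).max())  # per-pixel error is bounded by step / 2
```

The bounded per-pixel error is what makes the scheme attractive for display links, where the evaluation criterion is the maximum pixel error as well as PSNR.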
TSAI, YU-TING, and 蔡瑀庭. "Grayscale Image Coding Technique Based on Block Prediction Technique and Classified Side Match Vector Quantization." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cvkbzc.
Full text Providence University
Department of Information Management
107
The goals of the study are to solve the error propagation problem found in SMVQ-based methods and to reduce the bit rate required for grayscale image coding. The proposed method combines a block prediction technique with the CSMVQ method. In the block prediction technique, the previously encoded neighboring blocks serve as candidates for encoding the current image block. If a similar block is found among the candidates, only the position code of that candidate block is stored. Otherwise, the current image block is encoded by the CSMVQ method. The experimental results show that the proposed method can greatly reduce the bit rate while preserving good reconstructed image quality.
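The copy-or-encode decision described above can be sketched as follows. This is a minimal illustration under stated assumptions: the similarity threshold is invented, and a plain full-search VQ stands in for the CSMVQ fallback.

```python
import numpy as np

def encode(block, neighbors, codebook, tol=10.0):
    """Try to reuse an already-encoded neighboring block; otherwise VQ-encode.
    Returns ('copy', position) or ('vq', codeword_index).
    tol and the plain-VQ fallback (in place of CSMVQ) are assumptions."""
    for pos, nb in enumerate(neighbors):
        if np.sqrt(np.mean((nb - block) ** 2)) <= tol:
            return ('copy', pos)  # position code only: very few bits
    dists = [np.sum((cw - block) ** 2) for cw in codebook]
    return ('vq', int(np.argmin(dists)))

# Toy 2x2 data: a two-entry codebook and two encoded neighbor blocks.
cb = np.array([[[10., 10.], [10., 10.]], [[200., 200.], [200., 200.]]])
nbs = [np.full((2, 2), 50.), np.full((2, 2), 198.)]
print(encode(np.full((2, 2), 199.), nbs, cb))  # ('copy', 1): neighbor reused
print(encode(np.full((2, 2), 120.), nbs, cb))  # ('vq', ...): fallback coding
```

Storing a short position code for well-predicted blocks is where the bit-rate saving comes from; the VQ path handles the blocks the neighbors cannot predict.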