Dissertations on the topic "Predictive quantization"


Consult the top 32 dissertations for your research on the topic "Predictive quantization".

Next to each work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the online abstract of the work, if these are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Soong, Michael. "Predictive split vector quantization for speech coding." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=68054.

Full text of source
Abstract:
The purpose of this thesis is to examine techniques for efficiently coding speech Linear Predictive Coding (LPC) coefficients. Vector Quantization (VQ) is an efficient approach to encoding speech at low bit rates. However, its exponentially growing complexity poses a formidable barrier, so a structured vector quantizer is normally used instead.
Summation Product Codes (SPCs) are a family of structured vector quantizers that circumvent the complexity obstacle. The performance of SPC vector quantizers can be traded off against their storage and encoding complexity. Besides the complexity factors, the design algorithm also affects the performance of the quantizer. The conventional generalized Lloyd algorithm (GLA) generates sub-optimal codebooks. For particular SPCs, such as multistage VQ, the GLA is applied to design the stage codebooks stage by stage; joint design algorithms, on the other hand, update all the stage codebooks simultaneously.
In this thesis, a general formulation of, and an algorithmic solution to, the joint codebook design problem is provided for the SPCs. The key to this algorithm is that every SPC has a reference product codebook which minimizes the overall distortion. This joint design algorithm is tested with a novel SPC, namely "Predictive Split VQ" (PSVQ).
VQ of speech Line Spectral Frequencies (LSFs) using PSVQ is also presented. A result of this work is that PSVQ, designed using the joint codebook design algorithm, requires only 20 bits/frame (20 ms) for transparent coding of 10th-order LSF parameters.
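As background for the codebook-design discussion above, the conventional GLA the abstract refers to can be sketched as follows (a minimal NumPy sketch of the baseline algorithm, not the thesis's joint design method; function names are my own):

```python
import numpy as np

def gla_codebook(training, codebook_size, iters=20, seed=0):
    """Generalized Lloyd algorithm (GLA): alternate a nearest-neighbour
    partition step and a centroid update step until the codebook settles."""
    training = np.asarray(training, dtype=float)
    rng = np.random.default_rng(seed)
    # initialise the codebook with distinct random training vectors
    codebook = training[rng.choice(len(training), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # nearest-neighbour condition: assign each vector to its closest codeword
        dist = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)
        # centroid condition: move each codeword to the mean of its cell
        for k in range(codebook_size):
            cell = training[labels == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
    return codebook
```

In a multistage design, this routine would be run once per stage on the residuals of the previous stage, which is exactly the stage-by-stage procedure the joint design algorithm improves on.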
2

Abousleman, Glen Patrick. "Entropy-constrained predictive trellis coded quantization and compression of hyperspectral imagery." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186748.

Full text of source
Abstract:
A training-sequence-based entropy-constrained predictive trellis coded quantization (ECPTCQ) scheme is presented for encoding autoregressive sources. For encoding a first-order Gauss-Markov source, the MSE performance of an 8-state ECPTCQ system exceeds that of entropy-constrained DPCM by up to 1.0 dB. In addition, three systems utilizing trellis coded quantization (TCQ) are presented for compression of hyperspectral imagery: an ECPTCQ system, a 3-D Discrete Cosine Transform (DCT) system, and a hybrid system. Specifically, the first system utilizes a 2-D DCT and ECPTCQ. The 2-D DCT is used to transform all nonoverlapping 8 x 8 blocks of each band; thereafter, ECPTCQ is used to encode the transform coefficients in the spectral dimension. The 3-D DCT system uses TCQ to encode transform coefficients resulting from the application of an 8 x 8 x 8 DCT. The hybrid system uses DPCM to spectrally decorrelate the data, while a 2-D DCT coding scheme is used for spatial decorrelation. Side information and rate allocation strategies for all systems are discussed. Entropy-constrained codebooks are optimized for various generalized Gaussian distributions using a modified version of the generalized Lloyd algorithm. The first system can compress a hyperspectral image sequence at 0.125 bits/pixel/band while retaining an average peak signal-to-noise ratio of greater than 43 dB over the spectral bands. The 3-D DCT and hybrid systems achieve compression ratios of 77:1 and 69:1 while maintaining average peak signal-to-noise ratios of 40.75 dB and 40.29 dB, respectively, over the coded bands.
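The blockwise 2-D DCT stage described above can be illustrated with a small NumPy sketch (an orthonormal DCT-II built from first principles; this illustrates only the transform step, not ECPTCQ itself):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = 1.0 / np.sqrt(n)  # DC row gets the 1/sqrt(n) scaling
    return c

def blockwise_dct2(image, n=8):
    """2-D DCT of all nonoverlapping n x n blocks, as in the first system."""
    h, w = image.shape
    c = dct_matrix(n)
    out = np.empty_like(image, dtype=float)
    for i in range(0, h, n):
        for j in range(0, w, n):
            blk = image[i:i + n, j:j + n]
            out[i:i + n, j:j + n] = c @ blk @ c.T  # separable 2-D transform
    return out
```

Because the basis is orthonormal, a constant 8 x 8 block compacts into a single DC coefficient, which is the energy-compaction property the coder exploits.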
3

Wang, Yan. "Predictive boundary point adaptation and vector quantization compression algorithms for CMOS image sensors /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20WANGY.

Full text of source
4

Rivera, Hernández Sergio. "Tensorial spacetime geometries carrying predictive, interpretable and quantizable matter dynamics." Phd thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2012/6186/.

Full text of source
Abstract:
Which tensor fields G on a smooth manifold M can serve as a spacetime structure? In the first part of this thesis, it is found that only a severely restricted class of tensor fields can provide classical spacetime geometries, namely those that can carry predictive, interpretable and quantizable matter dynamics. The obvious dependence of this characterization of admissible tensorial spacetime geometries on specific matter is not a weakness, but rather presents an insight: it was Maxwell theory that led Einstein to promote Lorentzian manifolds to the status of a spacetime geometry. Any matter that does not mimic the structure of Maxwell theory will force us to choose another geometry on which the matter dynamics of interest are predictive, interpretable and quantizable. These three physical conditions on matter impose three corresponding algebraic conditions on the totally symmetric contravariant coefficient tensor field P that determines the principal symbol of the matter field equations in terms of the geometric tensor G: the tensor field P must be hyperbolic, time-orientable and energy-distinguishing. Remarkably, these physically necessary conditions on the geometry are mathematically already sufficient to realize all kinematical constructions familiar from Lorentzian geometry, for precisely the same structural reasons. We were able to show this by employing a subtle interplay of convex analysis, the theory of partial differential equations and real algebraic geometry. In the second part of this thesis, we then explore general properties of any hyperbolic, time-orientable and energy-distinguishing tensorial geometry. 
Physically most important are the construction of freely falling non-rotating laboratories, the appearance of admissible modified dispersion relations to particular observers, and the identification of a mechanism that explains why massive particles that are faster than some massless particles can radiate off energy until they are slower than all massless particles in any hyperbolic, time-orientable and energy-distinguishing geometry. In the third part of the thesis, we explore how tensorial spacetime geometries fare when one wants to quantize particles and fields on them. This study is motivated, in part, by the need to provide the tools to calculate the rate at which superluminal particles radiate off energy to become infraluminal, as explained above. Remarkably, it is again the three geometric conditions of hyperbolicity, time-orientability and energy-distinguishability that allow the quantization of general linear electrodynamics on an area metric spacetime and the quantization of massive point particles obeying any admissible dispersion relation. We explore the issue of field equations of all possible derivative orders in a rather systematic fashion, and prove a theorem of considerable practical use that determines Dirac algebras allowing the reduction of derivative orders. The final part of the thesis presents the sketch of a truly remarkable result that was obtained building on the work of the present thesis. Particularly based on the subtle duality maps between momenta and velocities in general tensorial spacetimes, it could be shown that gravitational dynamics for hyperbolic, time-orientable and energy-distinguishing geometries need not be postulated, but the formidable physical problem of their construction can be reduced to a mere mathematical task: the solution of a system of homogeneous linear partial differential equations. 
This far-reaching physical result on modified gravity theories is a direct, but difficult to derive, outcome of the findings in the present thesis. Throughout the thesis, the abstract theory is illustrated through instructive examples.
5

Horvath, Matthew Steven. "Performance Prediction of Quantization Based Automatic Target Recognition Algorithms." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1452086412.

Full text of source
6

Huang, Bihong. "Second-order prediction and residue vector quantization for video compression." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S026/document.

Full text of source
Abstract:
Video compression has become a mandatory step in a wide range of digital video applications. Since the development of the block-based hybrid coding approach in the H.261/MPEG-2 standard, a new coding standard has been ratified roughly every ten years, each achieving approximately 50% bit-rate reduction compared to its predecessor without sacrificing picture quality. However, due to the ever-increasing bit rate required for the transmission of HD and Beyond-HD formats within a limited bandwidth, there is a continuing need for video compression technologies that provide higher coding efficiency than the current HEVC video coding standard. In this thesis, we propose three approaches to improve the intra coding efficiency of the HEVC standard by exploiting the correlation of the intra prediction residue. A first approach, based on the use of previously decoded residue, shows that even though gains are theoretically possible, the extra cost of signaling can negate the benefit of residual prediction. A second approach, based on Mode Dependent Vector Quantization (MDVQ) prior to the conventional transform and scalar quantization steps, provides significant coding gains. We show that this approach is realistic because the dictionaries are independent of QP and of a reasonable size. Finally, a third approach modifies the dictionaries gradually to adapt them to the intra prediction residue. A substantial gain is provided by the adaptivity, especially when the video content is atypical, without increasing the decoding complexity. The result is a gain-complexity trade-off compatible with a submission to standardization.
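The MDVQ idea, selecting a residual codebook by intra prediction mode and transmitting only a codeword index, can be sketched as follows (a toy illustration with a hypothetical codebook layout, not the HEVC-integrated scheme from the thesis):

```python
import numpy as np

def mdvq_encode(residual, mode, codebooks):
    """Mode Dependent VQ: quantize an intra-prediction residual with the
    codebook attached to its prediction mode; only the index is sent.
    `codebooks` maps mode -> (K, D) array of codewords (toy layout)."""
    cb = codebooks[mode]
    dist = ((cb - residual[None, :]) ** 2).sum(axis=1)
    return int(dist.argmin())

def mdvq_decode(index, mode, codebooks):
    """The decoder knows the mode, so it can look up the same codebook."""
    return codebooks[mode][index]
```

The point of mode dependence is that residual statistics differ per intra mode, so each mode gets a codebook trained on its own residuals.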
7

Vasconcelos, Nuno Miguel Borges de Pinho Cruz de. "Library-based image coding using vector quantization of the prediction space." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/62918.

Full text of source
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1993.
Includes bibliographical references (leaves 122-126).
by Nuno Miguel Borges de Pinho Cruz de Vasconcelos.
M.S.
8

Boland, Simon Daniel. "High quality audio coding using the wavelet transform." Thesis, Queensland University of Technology, 1998.

Find full text of source
9

Clayton, Arnshea. "The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/19.

Full text of source
Abstract:
In this thesis the relative importance of input encoding and learning algorithm on protein secondary structure prediction is explored. A novel input encoding, based on multidimensional scaling applied to a recently published amino acid substitution matrix, is developed and shown to be superior to an arbitrary input encoding. Both decimal valued and binary input encodings are compared. Two neural network learning algorithms, Resilient Propagation and Learning Vector Quantization, which have not previously been applied to the problem of protein secondary structure prediction, are examined. Input encoding is shown to have a greater impact on prediction accuracy than learning methodology with a binary input encoding providing the highest training and test set prediction accuracy.
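The Learning Vector Quantization method mentioned above follows a simple attract/repel update rule (LVQ1); a minimal sketch with toy data, where function names and the prototype layout are my own:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=10):
    """LVQ1: the winning prototype moves toward same-class samples
    and away from other-class samples."""
    P = np.asarray(prototypes, dtype=float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            d = ((P - x) ** 2).sum(axis=1)
            w = int(d.argmin())              # winning prototype
            step = lr * (x - P[w])
            # attract on class match, repel on mismatch
            P[w] += step if proto_labels[w] == label else -step
    return P

def lvq1_predict(X, prototypes, proto_labels):
    """Classify by the label of the nearest prototype."""
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return np.asarray(proto_labels)[d.argmin(axis=1)]
```

In the thesis's setting, X would be the encoded sliding window of amino acids and y the secondary-structure class of the central residue.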
10

Kamath, Vidya P. "Enhancing Gene Expression Signatures in Cancer Prediction Models: Understanding and Managing Classification Complexity." Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3653.

Full text of source
Abstract:
Cancer can develop through a series of genetic events in combination with external influential factors that alter the progression of the disease. Gene expression studies are designed to provide an enhanced understanding of the progression of cancer and to develop clinically relevant biomarkers of disease, prognosis and response to treatment. One of the main aims of microarray gene expression analyses is to develop signatures that are highly predictive of specific biological states, such as the molecular stage of cancer. This dissertation analyzes the classification complexity inherent in gene expression studies, proposing both techniques for measuring complexity and algorithms for reducing this complexity. Classifier algorithms that generate predictive signatures of cancer models must generalize to independent datasets for successful translation to clinical practice. The predictive performance of classifier models is shown to be dependent on the inherent complexity of the gene expression data. Three specific quantitative measures of classification complexity are proposed and one measure (f) is shown to correlate highly (R^2 = 0.82) with classifier accuracy in experimental data. Three quantization methods are proposed to enhance contrast in gene expression data and reduce classification complexity. The accuracy for cancer prognosis prediction is shown to improve using quantization in two datasets studied: from 67% to 90% in lung cancer and from 56% to 68% in colorectal cancer. A corresponding reduction in classification complexity is also observed. A random subspace based multivariable feature selection approach using cost-sensitive analysis is proposed to model the underlying heterogeneous cancer biology and address complexity due to multiple molecular pathways and unbalanced distribution of samples into classes. The technique is shown to be more accurate than the univariate t-test method. 
The classifier accuracy improves from 56% to 68% for colorectal cancer prognosis prediction.  A published gene expression signature to predict radiosensitivity of tumor cells is augmented with clinical indicators to enhance modeling of the data and represent the underlying biology more closely. Statistical tests and experiments indicate that the improvement in the model fit is a result of modeling the underlying biology rather than statistical over-fitting of the data, thereby accommodating classification complexity through the use of additional variables.
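The contrast-enhancing quantization step described above can be illustrated in miniature (an assumed three-level quantile scheme, for illustration only; the dissertation proposes three specific methods not reproduced here):

```python
import numpy as np

def quantize_expression(X, low_q=0.33, high_q=0.67):
    """Toy contrast-enhancing quantization: map each gene's expression
    to {-1, 0, +1} (low/medium/high) using per-gene quantile thresholds.
    X is (samples, genes); thresholds are computed per column."""
    lo = np.quantile(X, low_q, axis=0)
    hi = np.quantile(X, high_q, axis=0)
    return np.where(X < lo, -1, np.where(X > hi, 1, 0))
```

Collapsing continuous expression values into a few levels discards within-level noise, which is one intuition for why classification complexity can drop after quantization.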
11

Namburu, Visala. "Speech Coder using Line Spectral Frequencies of Cascaded Second Order Predictors." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/35670.

Full text of source
Abstract:
A major objective in speech coding is to represent speech with as few bits as possible. Usual transmission parameters include autoregressive parameters, pitch parameters, excitation signals and excitation gains. The pitch predictor makes these coders sensitive to channel errors. Aiming for robustness to channel errors, we do not use pitch prediction and compensate for its lack with a better representation of the excitation signal. We propose a new speech coding approach, Vector Sum Excited Cascaded Linear Prediction (VSECLP), based on code excited linear prediction. We implement forward linear prediction using five cascaded second-order sections, parameterized in terms of line spectral frequencies, in place of the conventional tenth-order filter. The line spectral frequency parameters estimated by the Direct Line Spectral Frequency (DLSF) adaptation algorithm are closer to the true values than those estimated by the Cascaded Recursive Least Squares - Subsection algorithm. A simplified version of DLSF is proposed to further reduce computational complexity. Split vector quantization is used to quantize the line spectral frequency parameters, and vector sum codebooks are used to quantize the excitation signals. The effect of an increased number of bits and of different split combinations on reconstructed speech quality and transmission rate is analyzed by testing VSECLP on the TIMIT database. Quantization of the excitation vectors using the discrete cosine transform resulted in a segmental signal-to-noise ratio of 4 dB at 20.95 kbps, whereas the same quality was obtained at 9.6 kbps using vector sum codebooks.
Master of Science
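The cascade structure described above, second-order sections whose product forms a tenth-order prediction-error filter, can be sketched as a polynomial product (a toy stand-in for the LSF parameterization in the thesis; the radius `r` and one-frequency-per-section mapping are assumptions of this sketch):

```python
import numpy as np

def cascade_predictor_poly(freqs, r=0.98):
    """Build a prediction-error filter as a cascade of second-order
    sections, each set by one frequency w. Cascading sections is
    polynomial multiplication, so five sections give a 10th-order filter."""
    a = np.array([1.0])
    for w in freqs:
        section = np.array([1.0, -2.0 * r * np.cos(w), r * r])
        a = np.convolve(a, section)  # multiply the section polynomials
    return a
```

Each section contributes one conjugate root pair near the unit circle, which is what makes a per-section frequency parameter a natural quantization target.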
12

Tino, Peter, Christian Schittenkopf, and Georg Dorffner. "Temporal pattern recognition in noisy non-stationary time series based on quantization into symbolic streams. Lessons learned from financial volatility trading." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/1680/1/document.pdf.

Full text of source
Abstract:
In this paper we investigate the potential of analyzing noisy non-stationary time series by quantizing them into streams of discrete symbols and applying finite-memory symbolic predictors. The main argument is that careful quantization can reduce the noise in the time series, making model estimation more amenable given the limited number of samples that can be drawn due to the non-stationarity of the time series. As a main application area we study the use of such an analysis in a realistic setting involving financial forecasting and trading. In particular, using historical data, we simulate the trading of straddles on the financial indexes DAX and FTSE 100 on a daily basis, based on predictions of the daily volatility differences in the underlying indexes. We propose a parametric, data-driven quantization scheme which transforms temporal patterns in the series of daily volatility changes into grammatical and statistical patterns in the corresponding symbolic streams. As symbolic predictors operating on the quantized streams we use classical fixed-order Markov models, variable memory length Markov models and a novel variation of fractal-based predictors introduced in its original form in (Tino, 2000b). The fractal-based predictors are designed to efficiently use deep memory. We compare the symbolic models with continuous techniques such as time-delay neural networks with continuous and categorical outputs, and GARCH models. Our experiments strongly suggest that the robust information reduction achieved by quantizing the real-valued time series is highly beneficial. To deal with non-stationarity in financial daily time series, we propose two techniques that combine "sophisticated" models fitted on the training data with a fixed set of simple-minded symbolic predictors that do not use older (and potentially misleading) data in the training set. 
Experimental results show that by quantizing the volatility differences and then using symbolic predictive models, market makers can generate a statistically significant excess profit. However, with respect to our prediction and trading techniques, the option market on the DAX does seem to be efficient for traders and non-members of the stock exchange. There is a potential for traders to make an excess profit on the FTSE 100. We also mention some interesting observations regarding the memory structure in the studied series of daily volatility differences. (author's abstract)
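The quantize-then-predict pipeline described above can be sketched in miniature (quantile-based quantization plus a fixed-order Markov predictor; the paper's parametric quantization scheme and fractal predictors are more elaborate, and the function names here are my own):

```python
import numpy as np
from collections import Counter, defaultdict

def quantize_series(x, n_symbols=4):
    """Data-driven quantization: cut points at empirical quantiles,
    so each symbol is roughly equally likely."""
    cuts = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.searchsorted(cuts, x)  # symbols in 0..n_symbols-1

def markov_predict(symbols, order=1):
    """Fixed-order Markov predictor: for each length-`order` context,
    predict the symbol that most often followed it in training."""
    counts = defaultdict(Counter)
    for t in range(order, len(symbols)):
        counts[tuple(symbols[t - order:t])][symbols[t]] += 1
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}
```

Applied to daily volatility differences, the resulting context-to-symbol table would drive the up/down volatility forecast behind the simulated straddle trades.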
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
13

Wandeto, John Mwangi. "Self-organizing map quantization error approach for detecting temporal variations in image sets." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD025/document.

Full text of source
Abstract:
A new approach for image processing, dubbed SOM-QE, that exploits the quantization error (QE) from self-organizing maps (SOM) is proposed in this thesis. SOMs produce low-dimensional discrete representations of high-dimensional input data. QE is determined from the results of the unsupervised learning process of the SOM and the input data. The SOM-QE from a time series of images can be used as an indicator of changes in the time series. To set up the SOM, the map size, the neighbourhood distance, the learning rate and the number of iterations in the learning process are determined. The combination of these parameters that gives the lowest value of QE is taken to be the optimal parameter set and is used to transform the dataset. This is the conventional use of QE. The novelty of the SOM-QE technique is fourfold: first, in the usage: SOM-QE employs one SOM to determine the QE of different images, typically in a time-series dataset, unlike the traditional usage where different SOMs are applied to one dataset. Secondly, the SOM-QE value is introduced as a measure of uniformity within the image. Thirdly, the SOM-QE value becomes a special, unique label for the image within the dataset, and fourthly, this label is used to track changes that occur in subsequent images of the same scene. Thus, SOM-QE provides a measure of variations within the image at an instance in time, and when compared with the values from subsequent images of the same scene, it reveals a transient visualization of changes in the scene of study. In this research the approach was applied to artificial, medical and geographic imagery to demonstrate its performance. Changes that occur in geographic scenes of interest, such as new buildings being put up in a city or lesions receding in medical images, are of interest to scientists and engineers. The SOM-QE technique provides a new way for automatic detection of growth in urban spaces or the progression of diseases, giving timely information for appropriate planning or treatment. 
In this work, it is demonstrated that SOM-QE can capture very small changes in images. The results also confirm it to be fast and less computationally expensive in discriminating between changed and unchanged contents in large image datasets. Pearson's correlation confirmed that there were statistically significant correlations between SOM-QE values and the actual ground truth data. On evaluation, this technique performed better than other existing approaches. This work is important as it introduces a new way of looking at fast, automatic change detection, even when dealing with small local changes within images. It also introduces a new method of determining QE, and the data it generates can be used to predict changes in a time-series dataset.
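The core SOM-QE measurement, the mean distance from each input vector of an image to its best-matching unit on an already-trained map, can be sketched as follows (a minimal sketch assuming the map's codebook is given as a flat `(n_units, dim)` array; training the SOM itself is omitted):

```python
import numpy as np

def som_qe(data, weights):
    """SOM quantization error of one image: mean Euclidean distance
    between each input vector (e.g. a pixel feature vector) and its
    best-matching unit in the trained codebook `weights`."""
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())
```

Scoring every image of a time series against the same trained map yields one scalar per image, and jumps in that scalar flag the frames where the scene changed.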
Стилі APA, Harvard, Vancouver, ISO та ін.
14

Yu, Lang. "Evaluating and Implementing JPEG XR Optimized for Video Surveillance." Thesis, Linköping University, Computer Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54307.

Повний текст джерела
Анотація:

This report describes both the evaluation and the implementation of the upcoming image compression standard JPEG XR. The intention is to determine whether JPEG XR is an appropriate standard for IP-based video surveillance. Video surveillance, especially IP-based video surveillance, currently has a growing role in the security market. To suit surveillance, the video stream generated by the camera must have a low bit rate and low network latency while keeping a high dynamic range. The thesis starts with an in-depth study of the JPEG XR encoding standard. Since the standard admits different settings, optimized settings are applied to the JPEG XR encoder to fit the requirements of network video surveillance. A comparative evaluation of JPEG XR versus JPEG is then delivered in both objective and subjective terms. Later, part of the JPEG XR encoder is implemented in hardware as an accelerator for further evaluation, with SystemVerilog as the coding language; a TSMC 40 nm process library and the Synopsys ASIC tool chain are used for synthesis. The throughput, area and power of the encoder are reported and analyzed. Finally, the integration of the JPEG XR hardware encoder into the Axis ARTPEC-X SoC platform is discussed.

Стилі APA, Harvard, Vancouver, ISO та ін.
15

Dvořák, Martin. "Výukový video kodek." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219882.

Повний текст джерела
Анотація:
The first goal of this diploma thesis is to study the basic principles of video signal compression and introduce the techniques used to reduce irrelevancy and redundancy in the video signal. The second goal is, building on this knowledge of compression tools, to implement the individual tools in the Matlab programming environment and assemble a simple model of a video codec. The thesis describes the three basic blocks, namely interframe coding, intraframe coding and variable-length coding, according to the MPEG-2 standard.
Стилі APA, Harvard, Vancouver, ISO та ін.
16

Lundberg, Emil. "Adding temporal plasticity to a self-organizing incremental neural network using temporal activity diffusion." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180346.

Повний текст джерела
Анотація:
Vector Quantization (VQ) is a classic optimization problem and a simple approach to pattern recognition. Applications include lossy data compression, clustering, and speech and speaker recognition. Although VQ has largely been replaced by time-aware techniques like Hidden Markov Models (HMMs) and Dynamic Time Warping (DTW) in some applications, such as speech and speaker recognition, it retains some significance due to its much lower computational cost, especially for embedded systems. A recent study also demonstrates a multi-section VQ system which achieves performance rivaling that of DTW in an application to handwritten signature recognition, at a much lower computational cost. Adding sensitivity to temporal patterns to a VQ algorithm could help improve such results further. SOTPAR2 is such an extension of Neural Gas, an artificial neural network algorithm for VQ. SOTPAR2 uses a conceptually simple approach, based on adding lateral connections between network nodes and creating "temporal activity" that diffuses through adjacent nodes. The activity in turn biases the nearest-neighbor classifier toward network nodes with high activity, and the SOTPAR2 authors report improvements over Neural Gas in an application to time-series prediction. This report investigates how the same extension affects the quantization and prediction performance of the self-organizing incremental neural network (SOINN) algorithm. SOINN is a VQ algorithm which automatically chooses a suitable codebook size and can also be used for clustering with arbitrary cluster shapes. The extension is found not to improve the performance of SOINN; in fact, it worsens performance in all experiments attempted. A discussion of this result is provided, along with a discussion of the impact of the algorithm parameters, and possible future work to improve the results is suggested.
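The "temporal activity" bias described above can be sketched as follows. This is a simplified illustration of the idea, not the exact SOTPAR2 or SOINN update rules, and the decay and diffusion constants are made up.

```python
import numpy as np

def biased_bmu(x, nodes, activity, beta=0.5):
    """Nearest node, with distances discounted for nodes holding high
    temporal activity (a simplified SOTPAR2-style bias)."""
    d = np.linalg.norm(nodes - x, axis=1) - beta * activity
    return int(np.argmin(d))

def step(x, nodes, activity, adj, decay=0.9, diff=0.2):
    """One presentation: pick the biased winner, then diffuse activity
    to the winner's neighbours along the lateral connections `adj`."""
    w = biased_bmu(x, nodes, activity)
    activity *= decay
    activity[adj[w]] += diff      # neighbours of the winner receive activity
    activity[w] += 1.0
    return w

# Toy network: three nodes on a line, chain adjacency.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
activity = np.zeros(3)
adj = {0: [1], 1: [0, 2], 2: [1]}
w = step(np.array([0.1, 0.0]), nodes, activity, adj)
```

After this step the winner's neighbour carries activity, so a subsequent input near the boundary between two nodes is pulled toward the temporally expected one.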
Стилі APA, Harvard, Vancouver, ISO та ін.
17

Hanzálek, Pavel. "Praktické ukázky zpracování signálů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400849.

Повний текст джерела
Анотація:
The thesis focuses on the issue of signal processing and uses practical examples to show how individual signal processing operations are applied in practice. For each of the selected operations, an application is created in MATLAB, including a graphical interface for easier use. Each chapter first treats its topic theoretically and then shows, through a practical demonstration, how the operation is used in practice. The individual applications are described, mainly in terms of how they are operated and their possible results. The results of the practical part are presented in the attachment of the thesis.
Стилі APA, Harvard, Vancouver, ISO та ін.
18

Wang, Chun-Wei, and 王俊偉. "Adaptive Entropy-Constrained Predictive Motion Vector Quantization." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/28127845362933232261.

Повний текст джерела
Анотація:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
89
Motion vector quantization (MVQ) is an effective algorithm for video coding. It has low computational complexity for block matching and a low average rate for motion vector delivery. However, the algorithm has two major shortcomings: MVQ needs high computational complexity for codebook training, and MVQ cannot control the index rate. To overcome these two defects, this thesis presents a novel algorithm named "Adaptive Entropy-Constrained Predictive Motion Vector Quantization" (AECPMVQ). The AECPMVQ algorithm has three parts. The first is "Adaptive": the algorithm can update the codebook online, so AECPMVQ is suitable for real-time coding. The second, "Entropy-Constrained", allows the index rate and codebook size to be pre-specified independently; because arithmetic coding is used in this part, AECPMVQ saves more index rate than MVQ. The final part is "Predictive", which concentrates the utilization of indices, further enhancing the efficiency of arithmetic coding. The simulation results show that AECPMVQ has better rate-distortion performance than other algorithms, and in low-rate coding its performance advantage is even more pronounced.
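The "entropy-constrained" part rests on a standard Lagrangian trade-off: each codeword is charged its distortion plus lambda times its code length, so a large codebook can still yield a low index rate. A minimal sketch of that selection rule, with squared-error distortion and fixed code lengths standing in for the arithmetic-coder rate estimates (both are assumptions for illustration, not the thesis' design):

```python
import numpy as np

def ec_encode(x, codebook, code_lengths, lam):
    """Entropy-constrained VQ: choose the codeword minimising
    distortion + lambda * rate."""
    d = ((codebook - x) ** 2).sum(axis=1)   # squared-error distortion
    j = d + lam * code_lengths              # Lagrangian cost
    return int(np.argmin(j))

codebook = np.array([[0.0], [1.0]])
code_lengths = np.array([5.0, 1.0])        # bits per index (illustrative)
```

With lambda = 0 the rule degenerates to nearest-neighbour search; raising lambda shifts the choice toward cheaply coded indices.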
Стилі APA, Harvard, Vancouver, ISO та ін.
19

Kuo, I.-Sheng, and 郭萓聖. "A Predictive Classifier for Image Vector Quantization." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/51311548448930956892.

Повний текст джерела
Анотація:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
87
In this thesis, a new still-image compression scheme using vector quantization is proposed, with a new classification method for edge blocks and a new prediction method for both classification types and VQ indices. To achieve better performance, the encoder decomposes images into smooth and edge areas and encodes them separately using different algorithms. MRVQ with block sizes of 8×8 and 16×16 pixels is applied to smooth areas to achieve a higher compression ratio, while a total of 32 predicted-CVQ types are applied to the edge areas to achieve good quality. The proposed prediction method has an accuracy of about 50% when applied to edge areas only. Applying the proposed encoding scheme to the still image 'Lena' achieves a bit rate of 0.219 bpp with a PSNR of 30.59 dB.
Стилі APA, Harvard, Vancouver, ISO та ін.
20

Yu, Jr-Ruei, and 喻至瑞. "Predictive Split Matrix Quantization of Speech LSP Parameters." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/b7tabp.

Повний текст джерела
Анотація:
Master's thesis
National Taipei University of Technology
Graduate Institute of Electrical Engineering
97
Due to the rapid growth of digital mobile communication and voice over the Internet, speech compression has played an increasingly important role in recent years. It is a huge challenge to maintain the quality of coded speech while reducing its bit rate by eliminating redundancy effectively. Line spectrum pair coefficients are widely used to represent the short-term spectrum in most low bit-rate speech coders. Although a short-time frame captures frequency-domain features, it cannot represent the coherent structure of speech over longer spans. To capture these intrinsic sequential structures and characteristics, a predictive split matrix quantization is proposed in this thesis; it can be seen as an important tool for improving the performance of split vector quantization. Combining 4 frames into one super-frame makes it possible to capture the structure of speech and remove redundancy through prediction. The experimental results show that KLT-PSMQ is more efficient than memoryless SMQ, saving about 2 bits per frame. Based on subjective and objective test scores, a MELP coder using KLT-PSMQ performs better than the original.
Стилі APA, Harvard, Vancouver, ISO та ін.
21

Xu, Dao-Cheng, and 徐德成. "Predictive Dynamic Finite-State Vector Quantization for Image and Video Compression." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/72519656959021362803.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
22

Lin, Yig-Shyang, and 林奕祥. "Bits Rate Control of MPEG With Predictive And Adaptive Perceptual Quantization." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/72812246072033261983.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
23

Lin, Yi Xiang, and 林奕祥. "The rate control of MPEG with predictive and adaptive perceptual quantization." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/24289258074498848263.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Fan, Kuo-Lun, and 范國倫. "Binary Search & Mean Value Predictive Hybrid Fast Vector Quantization Algorithm." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/54042830044642569713.

Повний текст джерела
Анотація:
Master's thesis
Leader University
Graduate Institute of Applied Informatics
91
Vector quantization (VQ) is an effective technology for signal compression. In traditional VQ, most of the computation is spent searching the codebook for the nearest codeword to each input vector. We propose a fast VQ algorithm to reduce encoding time, with two main parts: a pre-processing stage and the actual encoding stage. In pre-processing, we generate the tables needed by the encoder; during encoding, these tables, together with additional techniques, are used to speed up the search. The proposed algorithm demonstrates outstanding performance in terms of time saving and arithmetic operations: compared to the full-search algorithm, it saves more than 95% of the search time.
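One standard trick behind such mean-value-based fast search is pruning with the bound ||x - c||^2 >= k (mean(x) - mean(c))^2 for k-dimensional vectors: codewords sorted by mean can be skipped, and the scan terminated early, without computing full distances. The thesis' exact tables are not reproduced here; the sketch below shows only the pruning idea, and it returns the same codeword as a full search.

```python
import numpy as np

def fast_vq_search(x, codebook):
    """Full-search-equivalent nearest-codeword search with
    mean-distance pruning and early termination."""
    k = len(x)
    means = codebook.mean(axis=1)
    order = np.argsort(np.abs(means - x.mean()))    # most promising first
    best = order[0]
    best_d = ((codebook[best] - x) ** 2).sum()
    for i in order[1:]:
        bound = k * (means[i] - x.mean()) ** 2       # lower bound on distance
        if bound >= best_d:
            break                                    # all remaining are worse
        d = ((codebook[i] - x) ** 2).sum()
        if d < best_d:
            best, best_d = i, d
    return int(best)
```

Because the codewords are visited in order of increasing mean difference, the first time the bound exceeds the best distance found, every remaining codeword can be rejected at once.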
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Shie, Shih-Chieh, and 謝仕杰. "Low Bit Rate Side-Match Vector Quantization with Predictive Block Classification for Images Coding." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/03488515470237628459.

Повний текст джерела
Анотація:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
88
In this thesis, two efficient vector quantization schemes, called SMVQ with intuitive edge extraction (SMVQ-IEE) and SMVQ with adaptive block classification (SMVQ-ABC), are proposed for image compression. SMVQ-IEE and SMVQ-ABC make use of the edge information contained in an image in addition to the average values of the blocks forming the image. To achieve low bit-rate coding while preserving good image quality, neighboring blocks are used to predict the class of the current encoding block. Image blocks are mainly classified as edge blocks and non-edge blocks in both coding schemes; to improve coding efficiency, edge blocks and non-edge blocks are further reclassified into different subclasses. Moreover, the number of bits needed to encode an image is greatly reduced by predicting the class of the input block and applying a small state codebook for the corresponding class. The improvements of the proposed coding schemes are attractive compared with other VQ techniques. Keywords: image compression, classified vector quantization, side-match finite-state vector quantization, adaptive variable-rate coding.
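The side-match step at the heart of both schemes builds a small state codebook by ranking the master codebook against the borders of the already-decoded upper and left neighbours; the current block is then coded with only log2(m) bits. A basic SMVQ sketch (the classification layers of SMVQ-IEE/SMVQ-ABC are omitted):

```python
import numpy as np

def side_match_state_codebook(master, upper_row, left_col, m=8):
    """Rank master codewords by how well their top row and left column
    match the adjacent borders of the decoded neighbours; keep the
    best m as the state codebook."""
    # master: (N, b, b) codewords; upper_row, left_col: (b,) border pixels
    err = ((master[:, 0, :] - upper_row) ** 2).sum(axis=1) \
        + ((master[:, :, 0] - left_col) ** 2).sum(axis=1)
    return np.argsort(err)[:m]        # indices into the master codebook

# Toy master codebook with one codeword matching the borders exactly.
rng = np.random.default_rng(4)
master = rng.random((32, 4, 4))
upper = np.full(4, 0.5)
left = np.full(4, 0.5)
master[7, 0, :] = 0.5
master[7, :, 0] = 0.5
```

The decoder can rebuild the same state codebook from its own decoded neighbours, so only the short index into the state codebook needs to be transmitted.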
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Chou, Tzu-Hsuan, and 周子軒. "Feedback for Time-correlated MIMO-OFDM System using Predictive Quantization of Bit Allocation and Subcarrier Clustering." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/ghy2u6.

Повний текст джерела
Анотація:
Master's thesis
National Chiao Tung University
Institute of Electrical and Control Engineering
101
In this thesis, we consider a MIMO-OFDM system over a time-correlated multipath fading channel with limited feedback, where the transmission rate is adapted to the channel condition. Assuming the taps of the multipath fading channel are i.i.d. across taps and correlated in time, each tap can be modeled as a first-order Gauss-Markov process, and we show that the frequency-domain channel on each subcarrier is then also a first-order Gauss-Markov process. We apply predictive quantization of the bit-loading vector to take advantage of this time correlation. Furthermore, we consider subcarrier clustering, in which the subcarriers are grouped into clusters and only one bit-loading vector is fed back per cluster; this exploits the frequency correlation of the MIMO channels. Compared with previous works that utilize only the frequency correlation in MIMO-OFDM systems, the proposed method performs better when the channel varies slowly.
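The two ingredients, a first-order Gauss-Markov process and predictive (DPCM-style) quantization of it, can be sketched as follows. The correlation coefficient and step size are illustrative, and the actual scheme quantizes bit-loading vectors rather than scalar samples, so this only shows why time correlation makes the innovation cheap to feed back.

```python
import numpy as np

def gauss_markov(n, alpha, rng):
    """First-order Gauss-Markov sequence: h[t] = a h[t-1] + sqrt(1-a^2) w[t]."""
    h = np.empty(n)
    h[0] = rng.standard_normal()
    for t in range(1, n):
        h[t] = alpha * h[t - 1] + np.sqrt(1 - alpha ** 2) * rng.standard_normal()
    return h

def predictive_quantize(h, step=0.1):
    """Feedback quantizer: quantize the prediction error against the
    previously *reconstructed* value, so errors do not accumulate."""
    recon = np.zeros_like(h)
    prev = 0.0
    for t, v in enumerate(h):
        e = np.round((v - prev) / step) * step    # quantized innovation
        prev = prev + e
        recon[t] = prev
    return recon
```

When the channel varies slowly (alpha near 1) the innovation is small, so a coarse quantizer on the difference already tracks the channel closely.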
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Uddin, S. M. Muslem. "Advanced Model Predictive Control for AC drives with common mode voltage mitigation." Thesis, 2021. http://hdl.handle.net/1959.13/1430649.

Повний текст джерела
Анотація:
Research Doctorate - Doctor of Philosophy (PhD)
Model Predictive Control (MPC) is a popular control strategy studied in many research publications; however, its acceptance by industry has been slow. This thesis identifies a class of AC applications that can significantly benefit from the MPC paradigm: high-performance AC drives operating in industrial environments where common mode voltage (CMV) is a critical aspect. After a critical analysis of the existing MPC-based approaches, the thesis proposes a new and advanced MPC scheme called Feedback Quantization Model Predictive Control (FBQ-MPC). The proposed scheme offers a number of important improvements, including integral action, advanced disturbance rejection, improved modulation performance, control over the harmonic spectrum, and CMV minimization. The application of the proposed FBQ-MPC method is demonstrated with two selected power converter options found to be most appropriate in CMV-sensitive environments. On this basis, full models of an industrial AC drive have been developed and studied by simulation and experiment. The studies show that AC drives based on FBQ-MPC overcome the common MPC drawbacks and offer prominent advantages in CMV-sensitive, as well as more general, AC drive applications.
Стилі APA, Harvard, Vancouver, ISO та ін.
28

ZHENG, DAO-HAN, and 鄭道涵. "Regressive Model Representation and Quantization Factor Prediction of ECG Data Compression." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/bnxf8z.

Повний текст джерела
Анотація:
Master's thesis
National Kaohsiung First University of Science and Technology
Master's Program, Department of Computer and Communication Engineering
105
Taiwan, Japan and South Korea have all entered the aging society, and national health and medical issues receive ever more attention; disease prevention and monitoring will be a future trend. Electrocardiography (ECG) is one of the key applications, as the signal requires long recordings and a large amount of data. With the popularity and development of wearable devices, ECG compression has become an important issue, with low power, device volume and data compression as the three major challenges. The original architecture requires a large number of divisions, and the cost of a divider is too high, so the ECG compression algorithm cannot be converted directly into a hardware chip. This thesis optimizes the quantization-factor prediction step and successfully converts it into a hardware architecture. The system is divided into three blocks: wavelet transform, quantization and coding. In the compression process, after the wavelet transform is completed, the signal enters the quantization step. Because ECG serves medical and monitoring purposes, quality control is very important in the quantization system; converting the software algorithm into a hardware architecture must therefore still achieve the preset quality management. Predicting the quantization factor requires dividing the quantization factor by SPRD2. To avoid the division, we define the quantization factor as the X axis and SPRD2 as the Y axis, record all quantization factors and SPRD2 values at PRD (percentage RMS difference) = 2%, partition the SPRD2 range into classes, and apply statistical regression analysis in each class to produce a corresponding equation.
This equation contains only multiplications and additions and can directly compute what the division would have produced. The method avoids the cost of a divider and the storage of large amounts of data. The improved method of this study was tested on 48 ECG records on the Matlab platform, and all signals met the preset PRD = 2% quality control. Finally, we also propose a basic hardware architecture and synthesis data for predicting the quantization factor. Keywords: ECG, data compression, PRD, regression analysis, quantization factor
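The divider-free idea reduces to evaluating a piecewise regression with multiply-add only, one fitted line per SPRD2 class. The class boundaries and coefficients below are made-up placeholders, not the values fitted in the thesis:

```python
def predict_qfactor(sprd2, coeffs):
    """Piecewise-linear regression q = a * SPRD2 + b, replacing the
    divider in the quality-control loop with multiply-add only."""
    for lo, hi, a, b in coeffs:
        if lo <= sprd2 < hi:
            return a * sprd2 + b
    raise ValueError("SPRD2 outside the calibrated range")

# Illustrative calibration table: (range_lo, range_hi, slope, intercept).
coeffs = [(0.0, 1.0, 2.0, 0.1),
          (1.0, 4.0, 1.5, 0.6)]
```

In hardware, each class becomes one multiply-accumulate with coefficients read from a small lookup table, which is far cheaper than a divider.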
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Chin, Shang-Chiang, and 金上強. "Adaptive Quantization Parameter Based Prediction Architecture Design for Rate Distortion Optimization." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/83262529629576520832.

Повний текст джерела
Анотація:
Master's thesis
National Dong Hwa University
Department of Electrical Engineering
97
With the development of networks and the constant improvement of multimedia technology, efficient video compression has become an important research area for the multimedia communication community. Under the dual considerations of quality and speed, improving video compression technology is essential, and how to compress video within the image-processing pipeline has become a topic of active research. As video systems advance, compressed video achieves a higher peak signal-to-noise ratio (PSNR) and lower bit rate; nevertheless, encoding time increases because of the complexity of the video system, and there is no method to control the quantization distortion, making it difficult to encode video in real time while improving visual quality. Rate-distortion optimization is a modern video-coding technique that offers multiple coding modes in pursuit of better PSNR and bit-rate performance. An "Adaptive Quantization Parameter Based Prediction Algorithm for Rate Distortion Optimization" is proposed to efficiently estimate the optimized mode with an adaptively varied quantization parameter based on quantization-error analysis. The proposed algorithm significantly improves PSNR and bit-rate reduction with little encoding-time penalty.
Стилі APA, Harvard, Vancouver, ISO та ін.
30

張晶禾. "The Study on Prediction Schemes for Three-Sided Side Match Vector Quantization." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/46063999928291457941.

Повний текст джерела
Анотація:
Master's thesis
National Tsing Hua University
Department of Computer Science
89
Research on side-match vector quantization has considered using three or even four sides to obtain better prediction of edge blocks; the most notable improvement is the recent work on TSMVQ. However, its finite-state coding strategy, like side match, weights the surrounding block sides equally. When a clear line passes through the blocks, the choice of which two sides to match can make the procedure select a completely different state codebook. This characteristic means that neighboring blocks crossed by a line are highly correlated, and we exploit this trait to propose an advanced prediction that uses the correlation to predict such edge blocks with fewer bits. We also study the limitations of TSMVQ, finding that certain blocks are hard to predict by the three-sided side-match method when the correlation between the sides is not strong. Hence, we select a certain number of blocks and send their positions by quadtree coding together with their vector quantization indices, obtaining better image quality at the cost of some extra bits as a tradeoff.
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Yang, Sheng-Yu, and 楊勝裕. "A Constant Rate Block Based Image Compression Scheme Using Vector Quantization and Prediction Schemes." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/wrwyp4.

Повний текст джерела
Анотація:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
107
This thesis proposes an embedded image compression system aimed at reducing the large amount of data transmitted and stored along the display link path. Embedded compression focuses on low computational complexity and low hardware resource requirements while guaranteeing compression performance. The algorithm proposed in this thesis is a constant-rate block-based image compression scheme with two scheme options; both schemes are examined at the same time and the better one is chosen. To support the "screen partial update" function of the Android system, a block-based compression system is adopted: all blocks are compressed independently, with no information from surrounding blocks available. The block size is set to 2×4, and the compression ratio is fixed at three to ensure a constant bandwidth requirement. In addition, the Y-Co-Cg color space is used. The major techniques employed are shape-gain vector quantization (VQ) and prediction. A 2×4 block is first converted to a 1×8 vector and encoded using pre-trained vector codebooks. By taking advantage of the correlation between color components, all color components share the same shape index to save bit budget, while each color component has its own gain index. The shape-gain VQ residual of the worst-performing color component is further refined using two techniques, DPCM and integer DCT. DPCM achieves prediction by recording the difference between successive pixels; the integer DCT approach converts the pixel residual values from the spatial domain to the frequency domain and records only the low-frequency components for refinement. Experimental results, however, indicate that neither technique achieves satisfactory refinement. The final scheme therefore applies shape-gain VQ to the Cg and Co components only and employs a reference prediction scheme for the Y component.
In this prediction scheme, the maximum of the pixel values in the block is first determined and all other pixel values are predicted with the maximum as reference; the reference can be either the difference from or the ratio to the maximum. Both differences and ratios are quantized using codebooks to reduce the bit requirement. The evaluation criteria for compression performance are PSNR and the maximum pixel error of the reconstructed image. The test bench includes images in various categories, such as natural, portrait, engineering and text. The compared scheme is prior art reported in the thesis entitled "A Constant Rate Block Based Image Compression Scheme for Video Display Link Applications", with the same compression specifications employed in both schemes. The experimental results show that our algorithm performs better on natural and portrait images, with a PSNR advantage of about 1 to 2 dB, but worse on engineering images. In terms of image size, our algorithm performs better on low-resolution images, because the reference predictor and shape-gain vector quantization are more efficient in handling blocks of sharply changing pixels.
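Shape-gain VQ in its standard form codes a vector as a gain scalar times a unit-norm shape codeword: the best shape maximizes the inner product with the input, and the best gain is the gain-codebook entry nearest that inner product. A sketch under those standard definitions, with toy codebooks rather than the pre-trained ones of the thesis:

```python
import numpy as np

def shape_gain_encode(x, shapes, gains):
    """Encode x as (shape index, gain index). `shapes` rows are unit-norm."""
    corr = shapes @ x                      # inner products with each shape
    s = int(np.argmax(corr))               # best-matching shape
    g = int(np.argmin(np.abs(gains - corr[s])))   # nearest gain entry
    return s, g

def shape_gain_decode(s, g, shapes, gains):
    return gains[g] * shapes[s]

# Toy codebooks: the 8 unit basis vectors as shapes, three gain levels.
shapes = np.eye(8)
gains = np.array([1.0, 2.0, 3.0])
```

Separating shape and gain keeps both codebooks small while still covering vectors of widely varying energy, which is why the thesis can share one shape index across color components.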
Стилі APA, Harvard, Vancouver, ISO та ін.
32

TSAI, YU-TING, and 蔡瑀庭. "Grayscale Image Coding Technique Based on Block Prediction Technique and Classified Side Match Vector Quantization." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cvkbzc.

Повний текст джерела
Анотація:
Master's thesis
Providence University
Department of Information Management
107
The goals of this study are to solve the error-propagation problem found in SMVQ-based methods and to reduce the bit rate required for grayscale image coding. The proposed method combines a block prediction technique with the CSMVQ method. In the block prediction technique, the encoded neighboring blocks serve as candidates for encoding the current image block: if a block similar to the current block is found, the position code of the candidate block is stored; otherwise, the current block is encoded by the CSMVQ method. The experimental results show that the proposed method greatly reduces the bit rate while preserving good reconstructed image quality.
Стилі APA, Harvard, Vancouver, ISO та ін.