Dissertations / Theses on the topic 'Interpolation Filters'

Consult the top dissertations and theses for your research on the topic 'Interpolation Filters.'


1

Curry, William. "Interpolation with prediction-error filters and training data." PhD diss., 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

2

Turek, Daniel B. "Design of efficient digital interpolation filters for integer upsampling." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/18010.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (leaf 44).
Digital signal interpolation systems can be implemented in a variety of ways. The most basic interpolation system for integer upsampling cascades an expander unit with an interpolation low-pass filter. More complex implementations can cascade multiple expander and low-pass filter pairs. There is also flexibility in the design of interpolation filters. This thesis explores how digital interpolation systems for integer upsampling can be efficiently implemented. Efficiency is measured in terms of the number of multiplications required for each output sample point. The following factors are studied for their effect on system efficiency: the decomposition of an interpolation system into multiple cascaded stages, the use of recursive and non-recursive interpolation filters, and the use of linear-phase and minimum-phase interpolation filters. In this thesis interpolation systems are designed to test these factors, and their computational costs are calculated. From this data, conclusions are drawn about efficient designs of interpolation systems for integer upsampling.
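
As a rough, hypothetical illustration of the cost metric used above (multiplications per output sample), the Python sketch below compares a single-stage interpolate-by-4 polyphase FIR against two cascaded interpolate-by-2 stages. The tap counts are invented for illustration and are not taken from the thesis.

```python
def polyphase_mults_per_output(num_taps: int, up: int) -> float:
    """Multiplications per output of a polyphase FIR interpolator:
    each output sample is produced by one phase of about num_taps/up
    coefficients."""
    return num_taps / up

# Hypothetical tap counts meeting the same overall specification.
single = polyphase_mults_per_output(num_taps=64, up=4)

# Two cascaded x2 stages: stage 1 runs at half the final rate, so its
# cost is halved when expressed per *final* output sample.
stage1 = polyphase_mults_per_output(num_taps=32, up=2) / 2
stage2 = polyphase_mults_per_output(num_taps=8, up=2)

print(f"single stage: {single:.1f} mults/output")            # 16.0
print(f"two stages:   {stage1 + stage2:.1f} mults/output")   # 12.0
```

The second stage can use a much shorter filter because the first stage has already pushed the spectral images far from the band of interest, which is the kind of effect the thesis quantifies.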
3

Chan, Tai Wai. "Applications of spatial varying filter on image interpolation, demosaicing and video denoising." Thesis, Hong Kong University of Science and Technology, 2006. http://library.ust.hk/cgi/db/thesis.pl?ECED%202006%20CHANT.

4

Tang, Weiran. "Frequency merging for demosaicking." Thesis, Hong Kong University of Science and Technology, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECED%202009%20TANGW.

5

Fatine, Steven. "Design and VLSI implementation of CMOS decimation and interpolation half-band FIR digital filters." Thesis, Carleton University, Ottawa, 1996.

6

Cheng, Hai-Ling Margaret. "3D spatio-temporal interpolation of digital image sequences using low-order 3D IIR filters." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq20866.pdf.

7

Piskin, Hatice. "Design and Implementation of FIR Digital Filters with Variable Frequency Characteristics." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606853/index.pdf.

Abstract:
Variable digital filters (VDFs) find many application areas in communication, audio, speech and image processing. This thesis analyzes the design and implementation of FIR digital filters with variable frequency characteristics and introduces two design methods. The design and implementation of the proposed methods are realized in the Matlab software environment. Various filter design examples and comparisons are also outlined. One of the major application areas of VDFs is software-defined radio (SDR). The interpolation problem in the sample rate converter (SRC) unit of the SDR is solved by using these filters. Realizations of VDFs in the SRC are outlined and described. Simulations in Simulink and on specific hardware are examined.
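
As a minimal sketch of what "variable frequency characteristics" can mean in the FIR setting, the following generic windowed-sinc design (not one of the two methods proposed in the thesis) lets the cutoff be retuned at run time; the tap count and the Hamming window are arbitrary choices.

```python
import numpy as np

def variable_lowpass(num_taps: int, cutoff: float) -> np.ndarray:
    """Windowed-sinc lowpass whose cutoff (0 < cutoff < 1, in units of
    the Nyquist frequency) can be changed at run time."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = cutoff * np.sinc(cutoff * n)    # truncated ideal lowpass
    h *= np.hamming(num_taps)           # fixed window
    return h / h.sum()                  # unity gain at DC

h = variable_lowpass(41, cutoff=0.3)    # retune by recomputing with a new cutoff
```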
8

Hunter, Matthew. "Design of Polynomial-Based Filters for Continuously Variable Sample Rate Conversion with Applications in Synthetic Instrumentation." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2120.

Abstract:
In this work, the design and application of Polynomial-Based Filters (PBF) for continuously variable Sample Rate Conversion (SRC) is studied. The major contributions of this work are summarized as follows. First, an explicit formula for the Fourier Transform of both a symmetrical and nonsymmetrical PBF impulse response with variable basis function coefficients is derived. In the literature only one explicit formula is given, and that for a symmetrical even length filter with fixed basis function coefficients. The frequency domain optimization of PBFs via linear programming has been proposed in the literature, however, the algorithm was not detailed nor were explicit formulas derived. In this contribution, a minimax optimization procedure is derived for the frequency domain optimization of a PBF with time-domain constraints. Explicit formulas are given for direct input to a linear programming routine. Additionally, accompanying Matlab code implementing this optimization in terms of the derived formulas is given in the appendix. In the literature, it has been pointed out that the frequency response of the Continuous-Time (CT) filter decays as frequency goes to infinity. It has also been observed that when implemented in SRC, the CT filter is sampled resulting in CT frequency response aliasing. Thus, for example, the stopband sidelobes of the Discrete-Time (DT) implementation rise above the CT designed level. Building on these observations, it is shown how the rolloff rate of the frequency response of a PBF can be adjusted by adding continuous derivatives to the impulse response. This is of great advantage, especially when the PBF is used for decimation as the aliasing band attenuation can be made to increase with frequency. It is shown how this technique can be used to dramatically reduce the effect of alias build up in the passband. In addition, it is shown that as the number of continuous derivatives of the PBF increases the resulting DT implementation more closely matches the Continuous-Time (CT) design. When implemented for SRC, samples from a PBF impulse response are computed by evaluating the polynomials using a so-called fractional interval, µ. In the literature, the effect of quantizing µ on the frequency response of the PBF has been studied. Formulas have been derived to determine the number of bits required to keep frequency response distortion below prescribed bounds. Elsewhere, a formula has been given to compute the number of bits required to represent µ to obtain a given SRC accuracy for rational factor SRC. In this contribution, it is shown how these two apparently competing requirements are quite independent. In fact, it is shown that the wordlength required for SRC accuracy need only be kept in the µ generator which is a single accumulator. The output of the µ generator may then be truncated prior to polynomial evaluation. This results in significant computational savings, as polynomial evaluation can require several multiplications and additions. Under the heading of applications, a new Wideband Digital Downconverter (WDDC) for Synthetic Instruments (SI) is introduced. DDCs first tune to a signal's center frequency using a numerically controlled oscillator and mixer, and then zoom-in to the bandwidth of interest using SRC. The SRC is required to produce continuously variable output sample rates from a fixed input sample rate over a large range. Current implementations accomplish this using a pre-filter, an arbitrary factor resampler, and integer decimation filters. 
In this contribution, the SRC of the WDDC is simplified, reducing the computational requirements by a factor of three or more. In addition to this, it is shown how this system can be used to develop a novel computationally efficient FFT-based spectrum analyzer with continuously variable frequency spans. Finally, after giving the theoretical foundation, a real Field Programmable Gate Array (FPGA) implementation of a novel Arbitrary Waveform Generator (AWG) is presented. The new approach uses a fixed Digital-to-Analog Converter (DAC) sample clock in combination with an arbitrary factor interpolator. Waveforms created at any sample rate are interpolated to the fixed DAC sample rate in real-time. As a result, the additional lower-performance analog hardware required in current approaches, namely, multiple reconstruction filters and/or additional sample clocks, is avoided. Measured results are given confirming the performance of the system predicted by the theoretical design and simulation.
Ph.D., School of Electrical Engineering and Computer Science.
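
The "fractional interval µ" machinery described above is commonly implemented with a Farrow structure. The sketch below is a generic cubic-Lagrange Farrow interpolator, not the optimized PBFs designed in the thesis; the branch matrix is the standard cubic-Lagrange one.

```python
import numpy as np

# Farrow form of a cubic Lagrange interpolator: four fixed FIR branches
# whose outputs are combined as a polynomial in the fractional interval mu.
C = np.array([[ 0.0,  1.0,  0.0,  0.0],    # mu^0 branch
              [-1/3, -1/2,  1.0, -1/6],    # mu^1 branch
              [ 1/2, -1.0,  1/2,  0.0],    # mu^2 branch
              [-1/6,  1/2, -1/2,  1/6]])   # mu^3 branch

def farrow_sample(x: np.ndarray, n: int, mu: float) -> float:
    """Evaluate the underlying signal between x[n] and x[n+1] at
    fractional offset mu in [0, 1); assumes 1 <= n <= len(x) - 3."""
    taps = x[n - 1:n + 3]                  # 4-sample neighbourhood
    branches = C @ taps                    # one value per polynomial order
    return np.polyval(branches[::-1], mu)  # Horner evaluation in mu
```

Note that mu only enters the final polynomial evaluation, which is consistent with the thesis's observation that mu can be truncated before polynomial evaluation while the full wordlength is kept only in the accumulator that generates it.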
9

Lenez, Thierry. "Synchronisation et égalisation en communication numérique : application au modem VDSL." Grenoble INPG, 2001. http://www.theses.fr/2001INPG0058.

10

Ahmad, Waqar. "High-Level Synthesis Based VLSI Architectures for Video Coding." Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2665803.

Abstract:
High Efficiency Video Coding (HEVC) is the state-of-the-art video coding standard. Emerging applications like free-viewpoint video, 360-degree video, augmented reality and 3D movies require standardized extensions of HEVC, which include HEVC Scalable Video Coding (SHVC), Multiview Video Coding (MV-HEVC), MV-HEVC plus Depth (3D-HEVC) and HEVC Screen Content Coding. 3D-HEVC is used for applications like view-synthesis generation and free-viewpoint video. The coding and transmission of depth maps in 3D-HEVC support virtual view synthesis by algorithms such as Depth Image Based Rendering (DIBR). As a first step, we profiled the 3D-HEVC standard and identified its computationally intensive parts for efficient hardware implementation. One such part of 3D-HEVC, HEVC and H.264/AVC is the interpolation filtering used for Fractional Motion Estimation (FME). The hardware implementation of the interpolation filtering is carried out using High-Level Synthesis (HLS) tools; the Xilinx Vivado Design Suite is used for the HLS implementation of the interpolation filters of HEVC and H.264/AVC. As the complexity of digital systems has greatly increased, HLS offers great benefits: late architectural or functional changes without time-consuming rewriting of RTL code, algorithms that can be tested and evaluated early in the design cycle, and the development of accurate models against which the final hardware can be verified.
11

Lettsome, Clyde Alphonso. "Fixed-analysis adaptive-synthesis filter banks." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28143.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Smith, Mark J. T.; Committee Co-Chair: Mersereau, Russell M.; Committee Member: Anderson, David; Committee Member: Lanterman, Aaron; Committee Member: Rosen, Gail; Committee Member: Wardi, Yorai.
12

Bose, Gibin. "Approximation H infini, interpolation analytique et optimisation convexe : application à l’adaptation d’impédance large bande." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4007.

Abstract:
The thesis makes an in-depth study of one of the classical problems in RF circuit design, the problem of impedance matching. The matching problem addresses the issue of transmitting the maximum available power from a source to a load within a frequency band. Antennas are one of the classical devices in which impedance matching plays an important role. The design of a matching circuit for a given load primarily amounts to finding a lossless scattering matrix which, when chained to the load, minimizes the reflection of power in the total system. In this work, both the theoretical aspects of the broadband matching problem and the practical applicability of the developed approaches are given due importance. Part I of the thesis covers two different yet closely related approaches to the matching problem, based on the classical approaches developed by Helton and by Fano and Youla to study broadband matching problems. The framework established in the first approach entails finding the best H-infinity approximation to an L-infinity function Φ via Nehari's theory. This reduces the problem to a generalized eigenvalue problem based on an operator defined on H2, the Hankel operator HΦ. The realizability of a given gain is provided by the constraint that the operator norm of HΦ be less than or equal to one. The second approach formulates the matching problem as a convex optimisation problem in which further flexibility is given to the gain profiles compared to the previous approach. It is based on two rich theories, namely the Fano-Youla matching theory and analytic interpolation. The realizability of a given gain rests on the Fano-Youla de-embedding conditions, which reduce to the positivity of a classical matrix in analytic interpolation theory, the Pick matrix. The concavity of the concerned Pick matrix allows the solution of the problem to be found by means of a non-linear semi-definite programming problem. Most importantly, we estimate sharp lower bounds on the matching criterion for finite-degree matching circuits and furnish circuits attaining those bounds. Part II of the thesis aims at realizing the matching circuits as ladder networks consisting of inductors and capacitors, and discusses some important realizability constraints as well. Matching circuits are designed for several mismatched antennas, testing the robustness of the developed approach. The theory developed in the first part of the thesis provides an efficient way of comparing the matching criterion obtained to the theoretical limits.
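
For reference, the Pick matrix condition invoked above is the classical Nevanlinna-Pick criterion; in the unit-disk setting it reads:

```latex
% A Schur function f (analytic with |f| <= 1 on the unit disk) satisfying
% f(z_k) = w_k for k = 1, ..., n exists if and only if the Pick matrix
P \;=\; \left[\frac{1 - w_i\,\overline{w_j}}{1 - z_i\,\overline{z_j}}\right]_{i,j=1}^{n} \;\succeq\; 0
% is positive semidefinite.
```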
13

Kokaram, Anil Christopher. "Motion picture restoration." Thesis, University of Cambridge, 1993. https://www.repository.cam.ac.uk/handle/1810/256798.

Abstract:
This dissertation presents algorithms for restoring some of the major corruptions observed in archived film or video material. The two principal problems of impulsive distortion (Dirt and Sparkle, or Blotches) and noise degradation are considered. There is also an algorithm for suppressing the inter-line jitter common in images decoded from noisy video signals. In the case of noise reduction and Blotch removal, the thesis considers image sequences to be three-dimensional signals involving evolution of features in time and space. This is necessary if any process presented is to show an improvement over standard two-dimensional techniques. It is important to recognize that consideration of image sequences must involve an appreciation of the problems incurred by the motion of objects in the scene. The most obvious implication is that, due to motion, useful three-dimensional processing does not necessarily proceed in a direction 'orthogonal' to the image frames. Therefore, attention is given to discussing motion estimation as it is used for image sequence processing. Some discussion is given to image sequence models, and the 3D autoregressive (AR) model is investigated. A multiresolution block-matching (BM) scheme is used for motion estimation throughout the major part of the thesis. Impulsive noise removal in image processing has traditionally been achieved by the use of median filter structures. A new three-dimensional multilevel median structure is presented in this work, with the additional use of a detector which limits the distortion caused by the filters. This technique is found to be extremely effective in practice and is an alternative to the traditional global median operation. The new median filter is shown to be superior to those previously presented with respect to the ability to reject the kind of distortion found in practice. A model-based technique using the 3D AR model is also developed for detecting and removing Blotches. This technique achieves better fidelity at the expense of heavier computational load. Motion-compensated 3D IIR and FIR Wiener filters are investigated with respect to their ability to reject noise in an image sequence. They are compared to several algorithms previously presented which are purely temporal in nature. The filters presented are found to be effective and compare favourably to the other algorithms. The 3D filtering process is superior to the purely temporal process, as expected. The algorithm that is presented for suppressing inter-line jitter uses a 2D AR model to estimate and correct the relative displacements between the lines. The output image is much more satisfactory to the observer, although in a severe case some drift of image features is to be expected. A suggestion for removing this drift is presented in the conclusions. There are several remaining problems in moving video, in particular line scratches and picture shake/roll. Line scratches cannot be detected successfully by the detectors presented and so cannot be removed efficiently. Suppressing shake and roll involves compensating the entire frame for motion, and there is a need to separate global from local motion. These difficulties provide ample opportunity for further research.
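
As a point of reference for the median structures discussed above, here is a plain 3x3x3 spatio-temporal median in Python; it has neither the motion compensation, the multilevel structure, nor the distortion-limiting detector that the thesis adds, but it shows the baseline operation.

```python
import numpy as np

def st_median(frames: np.ndarray) -> np.ndarray:
    """Brute-force 3x3x3 spatio-temporal median over a (T, H, W)
    sequence; border samples are left untouched."""
    f = frames.astype(float)
    out = f.copy()
    T, H, W = f.shape
    for t in range(1, T - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                out[t, y, x] = np.median(f[t-1:t+2, y-1:y+2, x-1:x+2])
    return out
```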
14

U, Seng-Pan. "Tecnicas de interpolacao em filtros multiritmo com condensadores comutados para Interfaces Analogicas com filtragem de alta-frequencia = Multirate Switched-Capacitor interpolation techniques for very high-frequency Analog Front-End filtering." Thesis, University of Macau, 2002. http://umaclib3.umac.mo/record=b1873496.

15

Gilman, Andrew. "Least-squares optimal interpolation for direct image super-resolution : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Palmerston North, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/893.

Abstract:
Image super-resolution aims to produce a higher resolution representation of a scene from an ensemble of low-resolution images that may be warped, aliased, blurred and degraded by noise. There are a variety of methods for performing super-resolution described in the literature, and in general they consist of three major steps: image registration, fusion and deblurring. This thesis proposes a novel method of performing the first two of these steps. The ultimate aim of image super-resolution is to produce a higher-quality image that is visually clearer, sharper and contains more detail than the individual input images. Machine algorithms cannot assess images qualitatively and typically use a quantitative error criterion, often least-squares. This thesis aims to optimise least-squares directly using a fast method, in particular one that can be implemented using linear filters; hence, a closed-form solution is required. The concepts of optimal interpolation and resampling are derived and demonstrated in practice. Optimal filters optimised on one image are shown to perform near-optimally on other images, suggesting that common image features, such as step-edges, can be used to optimise a near-optimal filter without requiring knowledge of the ground-truth output. This leads to the construction of a pulse model, which is used to derive filters for resampling non-uniformly sampled images that result from the fusion of registered input images. An experimental comparison shows that a 10th-order pulse model-based filter outperforms a number of methods common in the literature. The use of optimal interpolation for image registration linearises an otherwise nonlinear problem, resulting in a direct solution. Experimental analysis is used to show that optimal interpolation-based registration outperforms a number of existing methods, both iterative and direct, at a range of noise levels and for both heavily aliased images and images with a limited degree of aliasing. The proposed method offers flexibility in terms of the size of the region of support, offering a good trade-off in terms of computational complexity and accuracy of registration. Together, optimal interpolation-based registration and fusion are shown to perform fast, direct and effective super-resolution.
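
The core idea of optimising least-squares directly can be shown in one dimension: given training data with known ground truth, the interpolation filter is the solution of the normal equations. This toy sketch (my construction, not the thesis's pulse-model filters) learns a 4-tap filter that predicts the missing midpoint samples of a 2x-downsampled signal.

```python
import numpy as np

def lsq_interp_filter(hi: np.ndarray, support: int = 4) -> np.ndarray:
    """Least-squares-optimal interpolation filter learned from training
    data: the low-res signal is hi[::2], and each missing sample
    hi[2*i + 1] is predicted from `support` low-res neighbours."""
    lo = hi[::2]
    half = support // 2
    A, b = [], []
    for i in range(half - 1, len(lo) - half):
        A.append(lo[i - half + 1: i + half + 1])  # neighbours of the gap
        b.append(hi[2 * i + 1])                   # ground-truth midpoint
    w, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return w  # a symmetric 4-tap kernel for typical smooth training signals
```

A filter fitted on one image can then be applied to others, which is exactly the near-optimality-across-images observation the abstract reports.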
16

Käll, Viktor, and Erik Piscator. "Particle Filter Bridge Interpolation in GANs." Thesis, KTH, Matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301733.

Abstract:
Generative adversarial networks (GANs), a type of generative modeling framework, have received much attention in the past few years for their capacity to recover complex high-dimensional data distributions. They provide a compressed representation of the data in which only the essential features of a sample are retained, subsequently inducing a similarity measure on the space of data. This similarity measure gives rise to the possibility of interpolating in the data, which has been done successfully in the past. Herein we propose a new stochastic interpolation method for GANs where the interpolation is forced to adhere to the data distribution by implementing a sequential Monte Carlo algorithm for data sampling. The results show that the new method outperforms previously known interpolation methods for the data set LINES; compared to the results of other interpolation methods there was a significant improvement measured through quantitative and qualitative evaluations. The developed interpolation method has met its expectations and shown promise; however, it needs to be tested on a more complex data set in order to verify that it also scales well.
17

Nasrallah, Anthony. "Novel compression techniques for next-generation video coding." Thesis, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT043.

Abstract:
Video content now occupies about 82% of global internet traffic. This large percentage is due to the revolution in video content consumption. At the same time, the market increasingly demands videos with higher resolutions and qualities, which significantly increases the amount of data to be transmitted. Hence the need to develop video coding algorithms even more efficient than existing ones, to limit the increase in the data rate and ensure a better quality of service. In addition, the impressive consumption of multimedia content in electronic products has an ecological impact; finding a compromise between the complexity of algorithms and the efficiency of implementations is therefore a new challenge. To this end, a collaborative team was created with the aim of developing a new video coding standard, Versatile Video Coding (VVC/H.266). Although VVC achieves a more than 40% reduction in bit rate compared to HEVC, this does not mean that there is no longer a need to improve coding efficiency further; moreover, VVC adds remarkable complexity compared to HEVC. This thesis responds to these problems by proposing three new encoding methods. The contributions of this research are divided into two main axes. The first axis proposes and implements new compression tools in the new standard, capable of generating additional coding gains. Two methods are proposed, both relying on the derivation of prediction information at the decoder side. Increasing the encoder's choices can improve the accuracy of predictions and yield residuals with less energy, leading to a reduction in bit rate. Nevertheless, more prediction modes imply more signaling in the bitstream to inform the decoder of the choices made at the encoder, so the gains mentioned above are largely offset by the added signaling. If the prediction information is derived at the decoder, the latter is no longer passive but becomes active; this is the concept of the intelligent decoder, and signaling the information becomes unnecessary, hence a saving in signaling. Each of the two methods offers a different technique for predicting information at the decoder. The first constructs a histogram of gradients to deduce different intra-prediction modes that can then be combined by means of prediction fusion, to obtain the final intra prediction for a given block. This fusion property makes it possible to predict more accurately areas with complex textures, which, in conventional coding schemes, would rather require partitioning and/or finer transmission of high-energy residuals. The second technique gives VVC the ability to switch between different interpolation filters for inter prediction; the optimal filter selected by the encoder is deduced using convolutional neural networks. The second axis, unlike the first, does not seek to add a contribution to the VVC algorithm; it aims rather at an optimized use of the existing algorithm. The ultimate goal is to find the best possible compromise between the compression efficiency delivered and the complexity imposed by VVC tools. Thus, an optimization system is designed to determine an effective technique for adapting the activation of the coding tools to the content. The determination of these tools can be performed either with artificial neural networks or without any artificial-intelligence technique.
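
The first decoder-side technique mentioned above rests on a histogram of gradients computed from already-reconstructed neighbouring pixels. Below is a minimal sketch of that analysis step; the mapping from orientations to actual VVC intra modes, and the prediction fusion, are omitted.

```python
import numpy as np

def dominant_orientations(template: np.ndarray, n_modes: int = 2,
                          bins: int = 36) -> np.ndarray:
    """Magnitude-weighted histogram of gradient orientations over the
    reconstructed pixels around a block; returns the strongest bin-centre
    angles, from which intra modes could be derived without signalling."""
    gy, gx = np.gradient(template.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # orientation in [0, pi)
    hist, edges = np.histogram(ang, bins=bins, range=(0.0, np.pi),
                               weights=mag)
    top = np.argsort(hist)[::-1][:n_modes]
    return (edges[top] + edges[top + 1]) / 2
```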
18

Rabut, Christophe. "B-splines polyharmoniques cardinales : interpolation, quasi-interpolation, filtrage." Toulouse 3, 1990. http://www.theses.fr/1990TOU30046.

Abstract:
Polynomial B-splines are commonly used to define, in a simple way, a spline function that passes "close" to given points. When the data are regularly spaced, a preliminary processing of the data (convolution with certain compactly supported vectors) brings more flexibility to this operation: one can obtain a function that passes very close to the points (quasi-interpolation) or, on the contrary, one that filters out the noise inherent in the data (filtering). We show how to use the cross-validation method to choose the strength of a filtering operation optimally, which can be adaptive, and we propose a data-reduction method in which the reduction rate is linked to the bandwidth of the filter. These notions are then generalized to arbitrary dimension through the use of polyharmonic B-splines: after defining polyharmonic splines for data that may be infinite in number, we give an expression for them that is numerically more stable than the one generally used, and we show a link between polyharmonic splines of different orders or dimensions. We then define polyharmonic B-splines and present their essential properties, which are very close to those of polynomial B-splines. We propose the use of these B-splines, on the one hand, to quasi-interpolate or filter regularly spaced data and, on the other hand, to determine quickly, by a subdivision method, the interpolating spline of these data. Finally, we consider the generalization of this notion of B-spline to arbitrary knots and to any family of functions satisfying certain differential equations.
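
The quasi-interpolation idea, convolving regularly spaced data with a short, compactly supported mask and using the result as B-spline coefficients, can be sketched in the cubic case with the classical mask (-1, 8, -1)/6. This is a standard construction, not necessarily the exact masks of the thesis.

```python
import numpy as np
from scipy.interpolate import BSpline

f = np.sin(np.linspace(0, 3, 30))             # data on an integer grid
c = np.convolve(f, np.array([-1, 8, -1]) / 6, mode='same')
knots = np.arange(len(c) + 4) - 2             # cardinal (integer) knots,
s = BSpline(knots, c, k=3)                    # so basis i is centred at x = i

x = np.arange(2, len(f) - 2)                  # stay away from the borders
print(np.max(np.abs(s(x) - f[x])))            # small: near-interpolation
```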
19

Merrell, Jacob Porter. "Generalized Constrained Interpolation." Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2360.pdf.

20

Hammarqvist, Ulf. "Audio editing in the time-frequency domain using the Gabor Wavelet Transform." Thesis, Uppsala universitet, Centrum för bildanalys, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-153634.

Abstract:
Visualization, processing and editing of audio, directly on a time-frequency surface, is the scope of this thesis. More precisely, the scalogram produced by a Gabor wavelet transform is used, which is a powerful alternative to traditional techniques where the waveform is the main visual aid and editing is performed by parametric filters. Reconstruction properties, scalogram design and enhancements, as well as audio manipulation algorithms, are investigated for this audio representation. The scalogram is designed to allow a flexible choice of time-frequency ratio while maintaining high-quality reconstruction. To this end, the Loglet is used, which is observed to be the most suitable filter choice. Reassignment is tested, and a novel weighting function using partial derivatives of phase is proposed. An audio interpolation procedure is developed and shown to perform well in listening tests. The feasibility of using the transform coefficients directly for various purposes is investigated. It is concluded that pitch shifts are hard to describe in the framework, while noise thresholding works well. A downsampling scheme is suggested that saves on operations and memory consumption, and speeds up real-world implementations significantly. Finally, a scalogram 'compression' procedure is developed, allowing the caching of an approximate scalogram.
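
A Gabor-type scalogram of the kind edited in this thesis can be computed with a bank of Gaussian bandpass filters applied in the frequency domain; the constant-Q parameter q below plays the role of the time-frequency-ratio choice discussed above. This is a generic sketch; the thesis's preferred filters are Loglets, not Gaussians.

```python
import numpy as np

def gabor_scalogram(x: np.ndarray, fs: float, freqs, q: float = 8.0):
    """Magnitude scalogram from a bank of Gaussian (Gabor/Morlet-type)
    bandpass filters applied by FFT filtering; one row per centre freq."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x), 1.0 / fs)
    rows = []
    for fc in freqs:
        H = np.exp(-0.5 * ((f - fc) * q / fc) ** 2)  # Gaussian bandpass
        rows.append(np.abs(np.fft.ifft(X * H)))
    return np.array(rows)

S = gabor_scalogram(np.random.randn(1024), fs=8000.0,
                    freqs=np.geomspace(100.0, 3000.0, 48))
```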
21

Vatis, Yuri. "Non-symmetric adaptive interpolation filter for motion compensated prediction." Düsseldorf: VDI-Verlag, 2009. http://d-nb.info/998470724/04.

22

Glotzbach, John William. "A Color Filter Array Interpolation Method Based on Sampling Theory." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4785.

Abstract:
Digital cameras use a single image sensor array with a color filter array (CFA) to measure a color image. Instead of measuring a red, green, and blue value at every pixel, these cameras have a filter built onto each pixel so that only one portion of the visible spectrum is measured. To generate a full-color image, the camera must estimate the missing two values at every pixel. This process is known as color filter array interpolation. The Bayer CFA pattern samples the green image on half of the pixels of the imaging sensor on a quincunx grid. The other half of the pixels measure the red and blue images equally on interleaved rectangular sampling grids. This thesis analyzes this problem with sampling theory. The red and blue images are sampled at half the rate of the green image and therefore have a higher probability of aliasing in the output image. This is apparent when simple interpolation algorithms like bilinear interpolation are used for CFA interpolation. Two reference algorithms, a projections onto convex sets (POCS) algorithm and an edge-directed algorithm by Adams and Hamilton (AH), are studied. Both algorithms address aliasing in the green image. Because of the high correlation among the red, green, and blue images, information from the red and blue images can be used to better interpolate the green image. The reference algorithms are studied to learn how this information is used. This leads to two new interpolation algorithms for the green image. The red and blue interpolation algorithm of AH is also studied to determine how the inter-image correlation is used when interpolating these images. This study shows that because the green image is sampled at a higher rate, it retains much of the high-frequency information in the original image. This information is used to estimate aliasing in the red and blue images. We present a general algorithm based on the AH algorithm to interpolate the red and blue images. This algorithm is able to provide results that are, on average, better than both reference algorithms, POCS and AH.
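
The bilinear baseline whose aliasing the thesis analyzes can be written compactly for the green channel; the Bayer phase assumed below (green at even-even and odd-odd sites) is one of the four possible conventions.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_green(cfa: np.ndarray) -> np.ndarray:
    """Bilinear interpolation of the green plane of a Bayer CFA image
    (green sampled on a quincunx grid)."""
    gmask = np.zeros(cfa.shape, dtype=bool)
    gmask[0::2, 0::2] = True          # assumed Bayer phase
    gmask[1::2, 1::2] = True
    g = np.where(gmask, cfa.astype(float), 0.0)
    kernel = np.array([[0, 1, 0],
                       [1, 4, 1],
                       [0, 1, 0]]) / 4.0
    return convolve(g, kernel, mode='mirror')
```

At green sites the kernel's centre weight passes the measured value through; at red/blue sites it averages the four green neighbours, which is exactly where the aliasing studied above creeps in.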
23

Martin, Vincent. "Contribution des filtres LPTV et des techniques d'interpolation au tatouage numérique." Toulouse: INP Toulouse, 2007. http://ethesis.inp-toulouse.fr/archive/00000326.

24

Appia, Vikram V. "A color filter array interpolation method for digital cameras using alias cancellation." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22542.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Russell Mersereau; Committee Member: Anthony J. Yezzi; Committee Member: Yucel Altunbasak.
25

Lahti, Jimmie. "Tree-Structured Linear-Phase Nyquist FIR Filter Interpolators and Decimators." Thesis, Linköpings universitet, Elektroniksystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79047.

Abstract:
This master's thesis is based upon a new type of linear-phase Nyquist finite impulse response interpolator and decimator implemented using a tree structure. The tree structure decreases the complexity considerably compared to the ordinary single-stage interpolator structure. The computational complexity is comparable to that of a multi-stage Nyquist interpolator structure, but the proposed tree structure has slightly higher delay. The tree structure should still be considered, since it can interpolate by an arbitrary factor and all subfilters operate at the base rate, which is not the case for multi-stage Nyquist interpolators.
26

Martin, Vincent. "Contribution des filtres LPTV et des techniques d'interpolation au tatouage numérique." Phd thesis, Toulouse, INPT, 2006. http://oatao.univ-toulouse.fr/7478/1/martinvincent.pdf.

Abstract:
Periodic Clock Changes (PCC) and Linear Periodically Time-Varying (LPTV) filters are used in the field of multi-user telecommunications. In this thesis, we show that, within the family of spread-spectrum watermarking techniques, they can substitute for modulation by a pseudo-random code. The modules for optimal decoding, resynchronization, interference pre-cancellation and quantization of the spreading transform apply equally to PCCs and LPTV filters. For the model of white Gaussian stationary signals, these techniques exhibit performance identical to classical Direct-Sequence (DS) spreading. However, we show that, in the case of a locally correlated signal, notably the luminance of a natural image, the periodicity of PCCs and LPTV filters, combined with a Peano-Hilbert scan of the image, leads to better performance. LPTV filters are, moreover, a more powerful tool than a simple DS modulation. We use them to carry out spectral masking simultaneously with spreading, as well as rejection of the image's interference in the spectral domain; this latter technique achieves very good decoding performance. The second axis of this thesis is the study of the links between interpolation and digital watermarking. We first underline the role of interpolation in attacks on watermark robustness. We then construct watermarking techniques that benefit from the perceptual properties of interpolation. The first consists of perceptual masks using the interpolation noise. In the second, an informed watermarking scheme is built around interpolation. This algorithm, which can be related to randomized categorization techniques, uses original embedding and decoding rules, including an intrinsic perceptual masking. Besides these good perceptual properties, it exhibits rejection of host interference and robustness to various attacks such as valumetric transformations. Its security level is evaluated using practical attack algorithms.
27

Bajramovic, Jasko. "FPGA Implementation of an Interpolator for PWM applications." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10406.

Abstract:

In this thesis, a multirate realization of an interpolation operation is explored. As one of the requirements for proper functionality of the digital pulse-width modulator, a 16-bit digital input signal is to be upsampled 32 times. To obtain the required oversampling ratio, five separate interpolator stages were designed and implemented. Each interpolator stage performed upsampling by a factor of two followed by an image-rejection lowpass FIR filter. Since each individual interpolator stage upsamples the input signal by a factor of two, the interpolation filters were realized as half-band FIR filters. This kind of linear-phase FIR filter has the nice property that every other filter coefficient equals zero except for the middle one, which equals 0.5. By utilizing half-band FIR filters for the realization of the interpolation filters, the overall computational complexity was substantially reduced. In addition, several multirate techniques were utilized to derive more efficient interpolator structures. Hence, the impulse response of the individual interpolator filters was rewritten into its corresponding polyphase form, which further simplifies the interpolator realization. To eliminate the multiplication by 0.5 in one of the two polyphase subfilters, the filter gain was deliberately increased by a factor of two; thus, one polyphase path contained only delay elements. In addition, for the realization of the filter multipliers, a multiple constant multiplication (MCM) algorithm was utilized. The idea behind the MCM algorithm was to perform multiplication operations as a number of addition operations and appropriate input signal shifts. As a result, less hardware was needed for the actual interpolation chain implementation. For the correct functionality of the interpolator chain, scaling coefficients were introduced into each interpolation stage in order to reduce the possibility of overflow. For the scaling process, a safe scaling method was used. The quantization noise generated by the interpolator chain was also estimated and appropriate system adjustments were performed.
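
The half-band/polyphase structure described above can be sketched as follows; firwin with the cutoff at half Nyquist yields the half-band zero pattern, and the gain of two turns the centre tap into (approximately) a pure-delay branch. This is a sketch of the structure, not the thesis's fixed-point MCM implementation.

```python
import numpy as np
from scipy.signal import firwin

def halfband_interp2(x: np.ndarray, num_taps: int = 23) -> np.ndarray:
    """Interpolation by 2 with a half-band FIR in polyphase form."""
    h = 2.0 * firwin(num_taps, 0.5)   # half-band: alternate taps are zero
    e0, e1 = h[0::2], h[1::2]         # polyphase branches
    # For odd num_taps, e1 is ~(0, ..., 1, ..., 0): essentially a delay,
    # so that branch needs no real multiplications.
    y = np.empty(2 * len(x))
    y[0::2] = np.convolve(x, e1, mode='same')   # delay-only branch
    y[1::2] = np.convolve(x, e0, mode='same')   # arithmetic branch
    return y
```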

28

Pristach, Marián. "Návrh optimalizovaných architektur digitálních filtrů pro nízkopříkonové integrované obvody." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234534.

Abstract:
The doctoral thesis deals with the development and design of novel architectures of digital filters for low-power integrated circuits. The main goal was to achieve optimum parameters of digital filters with respect to chip area, power consumption and operating frequency. The target group of the proposed architectures is application-specific integrated circuits designed for processing signals from sensors using delta-sigma modulators. Three novel architectures of digital filters optimized for low-power integrated circuits are presented in the thesis. The thesis provides an analysis and comparison of the parameters of the new filter architectures with the parameters of architectures generated by a Matlab tool. A software tool has been designed and developed for the practical application of the proposed architectures of digital filters; it generates hardware descriptions of the filters with respect to defined parameters.
29

Yang, Yanqin. "Contribution à l'évaluation objective de la qualité d'images couleur estimées par dématriçage." Thesis, Lille 1, 2009. http://www.theses.fr/2009LIL10089/document.

Abstract:
Our work deals with the quality of colour images provided by a mono-CCD colour camera, which acquires only one colour component at each pixel by means of the CFA (Colour Filter Array) that covers the CCD sensor. In such cameras, a procedure called demosaicing is necessary to estimate the two missing colour components at each pixel, so as to obtain a colour image. We aim to determine which demosaicing methods provide the results that are best adapted to automatic colour image analysis. First, we present the principles by which mono-CCD cameras acquire digital colour images, as well as the different CFA arrangements used in such cameras. Once the influence of the CFA arrangement on the performance of demosaicing has been presented, we focus our studies on demosaicing methods based on the Bayer CFA. A mathematical formalization of demosaicing is proposed before we present the numerous demosaicing methods in the literature, as well as the post-processing algorithms that correct the demosaiced images. We then investigate evaluation criteria for the quality of the colour images estimated by demosaicing. We first describe the different artefacts that demosaicing can generate and how they arise, which allows us to point out the limits of the classical measures used to evaluate the estimated images. We then propose two original measures that quantify the presence of the two main artefacts, namely false colour and the zipper effect. Finally, we present new criteria based on the analysis of features extracted from colour images, measuring the quality of edge detection in the estimated images.
30

Pekkucuksen, Ibrahim Ethem. "Edge directed resolution enhancement and demosaicing." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42734.

Abstract:
The objective of the proposed research is to develop high-performance, low-computational-complexity resolution enhancement and demosaicing algorithms. Our approach to both problems is to find creative ways to incorporate edge information into the algorithm design. However, in contrast with the usual edge-directed approaches, we do not try to detect edge presence and orientation explicitly. For the image interpolation problem, we study the relationship between low-resolution and high-resolution pixels, and derive a general interpolation formula to be used on all pixels. This simple interpolation algorithm is able to generate sharp edges in any orientation. We also propose a simple 3-by-3 filter that quantifies local luminance transition and apply it to the demosaicing problem. Additionally, we propose a gradient-based directional demosaicing method that does not require setting any thresholds. We show that the performance of this algorithm can be improved by using multiscale gradients. Finally, we address the low spectral correlation demosaicing problem by proposing a new family of hybrid color filter array (CFA) patterns and a local algorithm that is two orders of magnitude faster than a comparable non-local solution while offering the same level of performance.
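
The threshold-free gradient idea described above can be illustrated at a single Bayer site: blend the horizontal and vertical neighbour averages with gradient-inverse weights. This is an illustrative single-scale reduction; the thesis additionally uses multiscale gradients and inter-channel information.

```python
import numpy as np

def green_at(cfa: np.ndarray, y: int, x: int, eps: float = 1e-6) -> float:
    """Gradient-weighted green estimate at a non-green Bayer site (y, x):
    no threshold; directions are blended by inverse gradient strength."""
    gh = abs(float(cfa[y, x - 1]) - cfa[y, x + 1])   # horizontal transition
    gv = abs(float(cfa[y - 1, x]) - cfa[y + 1, x])   # vertical transition
    wh, wv = 1.0 / (eps + gh), 1.0 / (eps + gv)
    est_h = (cfa[y, x - 1] + cfa[y, x + 1]) / 2.0
    est_v = (cfa[y - 1, x] + cfa[y + 1, x]) / 2.0
    return (wh * est_h + wv * est_v) / (wh + wv)
```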
31

Martinez, Martinez David. "Méthodologies et outils de synthèse pour des fonctions de filtrage chargées par des impédances complexes." Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0025/document.

Abstract:
The problem of impedance matching in electronics, and particularly in RF engineering, consists in minimising the reflection of the power that is to be transmitted by a generator to a given load within a frequency band. The matching and filtering requirements in classical communication systems are usually satisfied by using a matching circuit followed by a filter. We propose here to design matching filters that integrate both matching and filtering requirements in a single device and thereby increase the overall efficiency and compactness of the system. In this work, the matching problem is formulated by introducing convex optimisation into the framework established by the matching theory of Fano and Youla. As a result, by means of modern non-linear semi-definite programming techniques, a convex problem, and therefore one with guaranteed optimality, is obtained. Finally, to demonstrate the advantages provided by the developed theory beyond the synthesis of filters with frequency-varying loads, we consider two practical applications that are recurrent in the design of communication devices: on the one hand, the matching of an array of antennas with the objective of maximizing the radiation efficiency, and on the other hand, the synthesis of multiplexers where each of the channel filters is matched to the rest of the device, including the filters corresponding to the other channels.
APA, Harvard, Vancouver, ISO, and other styles
32

Kozáček, Peter. "Řízení BLDC motoru v oblasti nízkých otáček." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221289.

Full text
Abstract:
The diploma thesis concerns the acquisition of speed and electrical angle from Hall-sensor information with the resolution necessary to control a BLDC motor, focusing specifically on the low-speed region. Most such motors use Hall sensors for detecting the speed and position of the rotor. At low speed, however, the rotor position can no longer be determined with sufficient resolution between sensor transitions, which produces a "wince" in the control loop (torque ripple). The task is to design and evaluate algorithms for control and for acquiring the speed and rotor position with the required accuracy.
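A common remedy, sketched below under our own assumptions (60-degree Hall sectors, speed taken from the previous sector), is to interpolate the electrical angle linearly between Hall edges instead of holding it constant:

```python
def interpolated_angle(sector_start_deg, last_sector_time_s, t_since_edge_s):
    """Linearly interpolate the electrical angle inside the current
    60-degree Hall sector, assuming the speed measured over the previous
    sector still holds; clamp at the sector end so a decelerating rotor
    does not run the estimate past the next expected Hall edge."""
    speed_deg_s = 60.0 / last_sector_time_s
    angle = sector_start_deg + speed_deg_s * t_since_edge_s
    return min(angle, sector_start_deg + 60.0)

# 12 ms since the last edge, previous sector took 20 ms:
print(interpolated_angle(120.0, 0.020, 0.012))   # -> 156.0 degrees
```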
APA, Harvard, Vancouver, ISO, and other styles
33

Kucheruk, V. Yu, K. V. Ovchynnykov, M. D. Molchaniuk, В. Ю. Кучерук, and К. В. Овчинников. "The usage of the linear interpolating filter for an accurate fluctation fading time measuring activated in LC-circuit." Thesis, ВНТУ, 2013. http://ir.lib.vntu.edu.ua/handle/123456789/8111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Garla, Venkatakrishnaiah Sharath Chandra, and Harivinay Varadaraju. "Validation of Black-and-White Topology Optimization Designs." Thesis, Linköpings universitet, Mekanik och hållfasthetslära, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-174807.

Full text
Abstract:
Topology optimization has seen rapid developments in its field, with algorithms getting better and faster all the time. These new algorithms help reduce the lead time from concept development to a finished product. Simulation and post-processing of geometry are among the major development costs. Post-processing also takes considerable time and depends on the quality of the geometry output from the solver to make the product ready for rapid prototyping or final production. The work done in this thesis deals with the post-processing of results obtained from topology optimization algorithms that output the result as a 2D image. A suitable methodology is discussed in which this image is processed and converted into a CAD geometry while minimizing deviations in geometry, compliance and volume fraction. Further on, a validation of the designs is performed to measure the extracted geometry's deviation from the post-processed result. The workflow is coded in MATLAB and uses an image-based post-processing approach. The proposed workflow is tested on several numerical examples to assess its performance, limitations and numerical instabilities. The code written for the entire workflow is included as an appendix and can be downloaded from https://github.com/M87K452b/postprocessing-topopt.
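The first step of such an image-based pipeline - thresholding the grey-scale density field and extracting boundary polylines that can later be smoothed into CAD curves - might look like the sketch below (scikit-image is our substitute here; the thesis itself works in MATLAB):

```python
import numpy as np
from skimage import measure

def extract_boundaries(rho, threshold=0.5):
    """Turn a 2D topology-optimization density field rho (values in
    [0, 1]) into boundary polylines via marching squares, and report the
    volume fraction of the thresholded black-and-white design."""
    contours = measure.find_contours(rho, threshold)
    volume_fraction = float((rho > threshold).mean())
    return contours, volume_fraction
```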
APA, Harvard, Vancouver, ISO, and other styles
35

Qendro, Fatos. "Automatisation du contrôle non destructif par courants de Foucault, amélioration du rapport signal sur bruit." Compiègne, 1989. http://www.theses.fr/1989COMPD183.

Full text
Abstract:
This work presents a contribution to the automatic classification of defects during eddy-current testing of copper wire. After studying the feasibility of such a classification, it is concluded that improving the signal-to-noise ratio is an indispensable step for the classification to succeed. Several adaptive filtering methods were tested, and an adaptive interpolation method was developed. This method can be successfully applied to improving the signal-to-noise ratio in the case of impulsive signals buried in stationary narrow-band noise that can be described by a finite-order AR model. The interpolation method is successfully applied to signals obtained in eddy-current non-destructive testing of steam generator tubes.
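The AR-model assumption invites a simple sketch (our construction, not the thesis's algorithm): fit an AR model to a noise-only record, then reconstruct samples flagged as corrupted by forward prediction from their past.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) fit: x[n] ~ sum_k a[k] * x[n-1-k]."""
    X = np.column_stack([x[p - 1 - k : len(x) - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def ar_fill(x, start, length, a):
    """Replace x[start:start+length] by AR forward predictions - a crude
    stand-in for full AR-based interpolation. Requires start > len(a)."""
    y = x.astype(float).copy()
    p = len(a)
    assert start > p
    for n in range(start, start + length):
        y[n] = np.dot(a, y[n - 1 : n - 1 - p : -1])  # y[n-1], ..., y[n-p]
    return y
```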
APA, Harvard, Vancouver, ISO, and other styles
36

Cantarello, Luca. "Use of a Kalman filtering technique for near-surface temperature analysis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13455/.

Full text
Abstract:
A statistical post-processing of the hourly 2-meter temperature fields from the Nordic convective-scale operational numerical weather prediction model Arome MetCoOp 2.5 km has been developed and tested at the Norwegian Meteorological Institute (MET Norway). The objective of the work is to improve the representation of the temperature close to the surface by combining model data and in-situ observations for climatological and hydrological applications. In particular, a statistical scheme based on a bias-aware Local Ensemble Transform Kalman Filter has been adapted to the spatial interpolation of surface temperature. This scheme starts from an ensemble of 2-meter temperature fields derived from Arome MetCoOp 2.5 km and, taking into account the observations provided by the MET Norway network, produces an ensemble of analysis fields with a grid spacing of 1 km. The model best estimate employed in the interpolation procedure is the latest available forecast, subsequently corrected for the model bias. The scheme has been applied off-line, and the final analysis is performed independently at each grid point. The resulting analysis ensemble has been evaluated: its mean value significantly improves on the best estimate of Arome MetCoOp 2.5 km in representing the 2-meter temperature fields, in terms of both accuracy and precision, with a reduction in the root-mean-square error as well as in the bias and an improvement in reproducing cold extremes during wintertime. More generally, the analysis ensemble displays better forecast verification scores, with an overall reduction in the Brier score and its reliability component and an increase in the resolution term for the zero-degree threshold. However, the final ensemble spread remains too narrow, though not as narrow as the model output.
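For intuition, a toy single-gridpoint ensemble update with one observation (a stochastic, perturbed-observation variant, which is a drastic simplification of the bias-aware LETKF described above; all names and numbers are ours) can be written as:

```python
import numpy as np

def point_analysis(xb, hofx, yo, r_var, bias=0.0):
    """Toy ensemble Kalman update at one grid point.
    xb    : background ensemble of 2-meter temperature at the point
    hofx  : ensemble of model equivalents of the observation
    yo    : observed value
    r_var : observation-error variance
    bias  : model-bias estimate, removed before the update"""
    xb = xb - bias
    cov_xy = np.cov(xb, hofx)[0, 1]           # point/obs-space covariance
    K = cov_xy / (hofx.var(ddof=1) + r_var)   # scalar Kalman gain
    rng = np.random.default_rng(0)
    perturbed = yo + rng.normal(0.0, np.sqrt(r_var), xb.size)
    return xb + K * (perturbed - hofx)        # analysis ensemble

xb = np.array([271.2, 272.0, 271.6, 272.4])   # made-up Kelvin values
ya = point_analysis(xb, hofx=xb + 0.3, yo=271.0, r_var=0.25)
```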
APA, Harvard, Vancouver, ISO, and other styles
37

Waku, Kouomou Jules. "Ondelettes et applications en imagerie et en calcul de surfaces." Phd thesis, Grenoble 1, 1993. http://tel.archives-ouvertes.fr/tel-00005140.

Full text
Abstract:
This thesis presents work on the theoretical aspects of the wavelet transform and some applications in imaging and surface computation. We present three approaches for constructing a wavelet basis, namely the group-theory approach, the multiresolution-analysis approach and the filter-bank approach. The applications of the wavelet transform concern image compression, the representation of discrete curves and the computation of surface approximations by radial basis functions. We begin with a survey of different compression techniques, showing graphically and numerically that wavelet transforms, compared with other pyramidal methods (Burt and Adelson), allow very good compression results. Among hierarchical representations, the wavelet representation is shown to be the one that best preserves visual cues, for example in the construction of a digital terrain model. Concerning the representation of discrete curves, we have developed a multiscale analysis and synthesis algorithm. This new algorithm applies to elementary directions corresponding to a Freeman chain code representing a discrete contour or curve. We show that the Haar wavelet yields a good multiscale representation of a discrete curve with low memory requirements and minimal computational cost. Finally, after stating the general interpolation problem with radial basis functions and analysing the conditions for the existence of a solution, we propose a new approach for solving the linear system that defines the parameters of the problem. Our approach is based on the wavelet transform and makes the system matrix sparse; we demonstrate its performance especially when the number of data points is large. Results of surface interpolation with thin-plate and multiquadric splines are presented; in particular, we have tested spline wavelets, compactly supported wavelets and biorthogonal wavelets. The graphical results are accompanied by numerical error estimates, allowing a better appreciation of the proposed approaches.
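The Haar analysis of a Freeman-coded curve reduces to repeated pairwise averaging and differencing of the direction sequence; a minimal sketch (assuming the length is divisible by 2**levels) is:

```python
import numpy as np

def haar_multiscale(directions, levels):
    """Unnormalised Haar analysis of a sequence of Freeman directions:
    at each level keep pairwise means as the coarser curve and pairwise
    half-differences as the detail."""
    coarse = np.asarray(directions, float)
    details = []
    for _ in range(levels):
        details.append((coarse[0::2] - coarse[1::2]) / 2)
        coarse = (coarse[0::2] + coarse[1::2]) / 2
    return coarse, details

coarse, det = haar_multiscale([0, 1, 1, 2, 2, 2, 3, 3], levels=2)
```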
APA, Harvard, Vancouver, ISO, and other styles
38

Waku, Kouomou Jules. "Ondelettes et applications en imagerie et en calcul de surfaces." Phd thesis, Université Joseph Fourier (Grenoble ; 1971-2015), 1993. http://tel.archives-ouvertes.fr/tel-00005140.

Full text
Abstract:
This thesis presents work on the theoretical aspects of the wavelet transform and some applications in imaging and surface computation. We present three approaches for constructing a wavelet basis, namely the group-theory approach, the multiresolution-analysis approach and the filter-bank approach. The applications of the wavelet transform concern image compression, the representation of discrete curves and the computation of surface approximations by radial basis functions. We begin with a survey of different compression techniques, showing graphically and numerically that wavelet transforms, compared with other pyramidal methods (Burt and Adelson), allow very good compression results. Among hierarchical representations, the wavelet representation is shown to be the one that best preserves visual cues, for example in the construction of a digital terrain model. Concerning the representation of discrete curves, we have developed a multiscale analysis and synthesis algorithm. This new algorithm applies to elementary directions corresponding to a Freeman chain code representing a discrete contour or curve. We show that the Haar wavelet yields a good multiscale representation of a discrete curve with low memory requirements and minimal computational cost. Finally, after stating the general interpolation problem with radial basis functions and analysing the conditions for the existence of a solution, we propose a new approach for solving the linear system that defines the parameters of the problem. Our approach is based on the wavelet transform and makes the system matrix sparse; we demonstrate its performance especially when the number of data points is large. Results of surface interpolation with thin-plate and multiquadric splines are presented; in particular, we have tested spline wavelets, compactly supported wavelets and biorthogonal wavelets. The graphical results are accompanied by numerical error estimates, allowing a better appreciation of the proposed approaches.
APA, Harvard, Vancouver, ISO, and other styles
39

Mezera, Lukáš. "Zlepšení rozlišení pro vícečetné snímky stejné scény." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237196.

Full text
Abstract:
The task of this master's thesis is to design a method for increasing the resolution of an image of a scene when multiple frames of that scene are available. In the theoretical part, methods based on signal-processing principles are identified as the best choice for resolution enhancement. The basic requirements on multi-frame super-resolution methods and their typical structure are then described, followed by a brief overview of such methods and their mutual comparison against optimality criteria. The practical part of the thesis deals with the design of the method itself. The first proposed method is implemented and tested; testing, however, reveals that it performs poorly for low-resolution frames that are mutually rotated. For this reason an improved super-resolution method is proposed, which employs robust techniques in its computation and is therefore independent of rotation between the low-resolution frames. This method is likewise thoroughly tested and its results are compared with those of the first method. The first method wins on computation time, but its results for images containing rotation are of poor quality, whereas for images that differ only by translation its results are very good. The improved method is thus useful mainly for images containing rotation. The thesis concludes by proposing one further enhancement that could improve the results of the second method.
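The simplest multi-frame idea - registering translated low-resolution frames on a finer grid and averaging ("shift-and-add") - can be sketched as follows; integer shifts, already estimated and expressed in high-resolution pixels, are an assumption of the sketch:

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Toy shift-and-add super-resolution: each low-resolution frame is
    placed on a grid 'scale' times finer according to its integer shift
    (dy, dx in HR pixels, 0 <= dy, dx < scale), and co-located samples
    are averaged. Rotation between frames is exactly what this simple
    scheme cannot handle."""
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        acc[dy::scale, dx::scale] += f
        cnt[dy::scale, dx::scale] += 1
    return acc / np.maximum(cnt, 1)
```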
APA, Harvard, Vancouver, ISO, and other styles
40

Niang, Oumar. "Décomposition Modale Empirique : Contribution à la Modélisation Mathématique et Application en Traitement du Signal et de l'Image." Phd thesis, Université Paris XII Val de Marne, 2007. http://tel.archives-ouvertes.fr/tel-00878354.

Full text
Abstract:
Empirical Mode Decomposition (EMD) is a multi-resolution method that decomposes signals into Intrinsic Mode Functions (IMFs) in a self-adaptive way. Coupled with the Hilbert transform, it becomes a time-frequency analysis method, the Hilbert-Huang transform, suitable for studying a large class of signals. Despite these many applications, one of the most important limitations of EMD is its lack of a mathematical formalism. In place of the cubic-spline interpolation used in classical EMD, we estimate the mean envelope as the solution of a system of PDEs. Using a variational method, we establish a theoretical framework in which we prove convergence results, the existence of modes and the near-orthogonality property of EMD. A comparison with iterated filter banks and wavelets demonstrates the multi-resolution character of EMD. Two new applications in signal and image processing are presented: the extraction of intermittency and mode mixing, and restoration by EMD-based shrinkage. Finally, the model can serve as a basis for studying the uniqueness of the decomposition.
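For reference, one sifting step of classical EMD - the cubic-spline envelope step that the thesis replaces with a PDE-based mean envelope - looks like this (SciPy interpolation; the stopping criterion is omitted):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(t, x):
    """One sifting iteration of classical EMD: spline envelopes through
    the maxima and minima, then subtract their mean."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if len(imax) < 2 or len(imin) < 2:
        return x                          # too few extrema for envelopes
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return x - (upper + lower) / 2.0

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
h = sift_once(t, x)                       # first candidate IMF
```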
APA, Harvard, Vancouver, ISO, and other styles
41

Bokhari, Syed. "Design and Discrete Optimization of BIBO Stable FRM Digital Filters Incorporating IIR Digital Interpolation Subfilters." Master's thesis, 2010. http://hdl.handle.net/10048/1014.

Full text
Abstract:
Digital filters having a sharp transition band play a vital role in modern digital signal processing (DSP) applications, and emerging technologies require such filters to be computationally efficient in both software and hardware realizations. This thesis is concerned with the design and structural-level optimization of sharp-transition-band digital filters employing the well-known frequency response masking (FRM) approach. Unlike the conventional finite impulse response (FIR) based FRM approach, the FRM technique used in this thesis incorporates infinite impulse response (IIR) digital interpolation subfilters, thereby reducing the overall filter order and, in turn, the hardware complexity. Two realization methods are discussed: the bilinear-lossless-discrete-integrator (bilinear-LDI) digital filter design technique and the lattice wave digital filter (lattice WDF) design technique. A diversity-controlled (DC) genetic algorithm (GA) is employed to optimize both types of IIR-based FRM digital filters over the efficient canonical signed digit (CSD) multiplier coefficient space. DCGAs represent FRM digital filters by a binary chromosome and proceed from a population pool of candidate chromosomes to future generations in order to arrive at the desired FRM digital filter satisfying the design specifications. A novel cost function allows the DCGA to optimize the amplitude-frequency and group-delay frequency responses simultaneously, and fast convergence has been observed.
Communications
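The FRM idea itself is compact: the overall response is assembled from a band-edge-shaping model filter Ha, stretched by the interpolation factor M, and two FIR masking filters applied to it and to its delay complement. Below is a sketch for evaluating the composite response (assuming, for simplicity, that the possibly-IIR Ha has an approximately linear phase with d samples of group delay, which is precisely what the bilinear-LDI and lattice WDF designs must approximate):

```python
import numpy as np
from scipy.signal import freqz

def frm_response(ba, aa, g1, g2, M, d, n=2048):
    """Composite FRM response
        H(w) = Ha(Mw) * G1(w) + [exp(-jMwd) - Ha(Mw)] * G2(w),
    with model filter Ha = ba/aa interpolated by M, FIR masking filters
    g1 and g2, and an assumed model-filter group delay of d samples."""
    w = np.linspace(0, np.pi, n)
    Ha = freqz(ba, aa, worN=(M * w) % (2 * np.pi))[1]   # Ha(e^{jMw})
    G1 = freqz(g1, 1, worN=w)[1]
    G2 = freqz(g2, 1, worN=w)[1]
    Hc = np.exp(-1j * M * w * d) - Ha                   # complementary branch
    return w, Ha * G1 + Hc * G2
```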
APA, Harvard, Vancouver, ISO, and other styles
42

Κολοβός, Παύλος. "Υλοποίηση VLSI αρχιτεκτονικής με ψηφιακά φίλτρα για ασύρματο OFDM Modem." Thesis, 2006. http://nemertes.lis.upatras.gr/jspui/handle/10889/541.

Full text
Abstract:
Transmission of data through wireless networks is one of the most important subjects in the study of telecommunication systems. The large volume of data that must be transmitted within a very small time interval requires coding and modulation that are as efficient as possible, while simultaneously targeting low power consumption and high throughput. This master's thesis deals with the study, design and implementation of an OFDM system architecture based on the IEEE 802.11 standard, which was chosen because of its wide acceptance for data transmission in wireless networks. The thesis presents simulations and measurements for determining the suitable characteristics and the types of filters required in an OFDM system. Additionally, two new architectures of the OFDM symbol-construction block are introduced, aimed at reducing the overall power consumption and the complexity of the symbol-construction computations, together with their synthesis results. Particular emphasis is given to designing and implementing these architectures for incorporation in FPGA devices.
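For orientation, constructing one 802.11a-style OFDM symbol amounts to mapping 52 complex data/pilot values onto the IFFT bins and prepending a cyclic prefix; a minimal sketch (the subcarrier ordering convention is our assumption) follows:

```python
import numpy as np

def ofdm_symbol(values52, nfft=64, ncp=16):
    """One 802.11a-like OFDM symbol: 52 complex data/pilot values go on
    subcarriers -26..-1 and +1..+26 of a 64-point IFFT (DC and the outer
    guard bins stay empty); a 16-sample cyclic prefix is prepended."""
    assert len(values52) == 52
    X = np.zeros(nfft, complex)
    X[-26:] = values52[:26]               # subcarriers -26 .. -1
    X[1:27] = values52[26:]               # subcarriers +1 .. +26
    x = np.fft.ifft(X)
    return np.concatenate([x[-ncp:], x])  # 80 samples per symbol

sym = ofdm_symbol(np.exp(1j * np.pi / 4) * np.ones(52))   # all-QPSK example
```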
APA, Harvard, Vancouver, ISO, and other styles
43

LIAO, YUN-JIAN, and 廖運健. "Interpolation filter architecture for subsampled image." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/44541634614272301679.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Wei-Chiang, and 陳偉堅. "Scalable Image Coding with Lagrange Interpolation Filter." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/59420335904575643111.

Full text
Abstract:
Master's degree
National Chiao Tung University
Institute of Communications Engineering
84 (ROC calendar year)
A scalable image coding method based on prediction is investigated. The scalability is achieved with a hierarchical pyramid signal representation structure. Interpolation, as prediction from a lower resolution level to a higher resolution level, is done on a block-by-block basis. The block-based operation and the effect of quantization error on the subsequent interpolation are investigated.
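The 4-point Lagrange interpolation used for such level-to-level prediction has the familiar halfband kernel (-1/16, 9/16, 9/16, -1/16); a sketch of one 1-D prediction step (edge handling by sample replication is our choice) is:

```python
import numpy as np

def lagrange_predict(low):
    """Predict the odd (missing) samples of a twice-finer signal from
    its even samples with 4-point Lagrange interpolation, then
    interleave; the kernel is (-1, 9, 9, -1)/16."""
    lo = np.asarray(low, float)
    p = np.pad(lo, 2, mode='edge')        # replicate edge samples
    n = len(lo)
    mid = (-p[1:n] + 9 * p[2:n+1] + 9 * p[3:n+2] - p[4:n+3]) / 16
    out = np.empty(2 * n - 1)
    out[0::2] = lo
    out[1::2] = mid
    return out
```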
APA, Harvard, Vancouver, ISO, and other styles
45

Chiang, Tsu-Rong, and 江祖榮. "Color Filter Array Interpolation Algorithmwith Artifact Removal." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/61096111982348356999.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
103 (ROC calendar year)
Recently, digital still cameras (DSCs) have become popular with consumers as devices that capture digital images easily. For cost and size reasons, most DSCs and industrial cameras use a single Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) sensor chip to capture the varied colors in a scene. However, a single CCD or CMOS sensor does not provide sufficient color resolution, so most DSC designers cover the sensor's surface with a mosaic of colored filters: a Color Filter Array (CFA). Since there is only one color element at each pixel, the other two missing color elements must be estimated from adjacent pixels. This reconstruction of a full color image from a color mosaic is commonly called color interpolation or demosaicking. In this thesis, a demosaicking technique is proposed that interpolates the three color channels (red, green and blue) using residual interpolation with an embedded artifact-removal method based on a median filter. We have compared this new approach with five different methods; our method not only preserves edge details and handles fine textures outstandingly well, but also removes most color aliasing, Moiré patterns and maze patterns.
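Median-based artifact removal commonly operates on the color-difference planes, where false colors appear as isolated outliers; a minimal sketch of that post-processing step (the filter size and the R-G/B-G formulation are generic choices, not necessarily the thesis's exact design) is:

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_false_color(rgb, size=5):
    """Median-filter the R-G and B-G difference planes of a demosaicked
    image and rebuild R and B: isolated false-color pixels are outliers
    in the difference planes and are removed, while G is left intact."""
    r, g, b = (rgb[..., k].astype(float) for k in range(3))
    r_clean = g + median_filter(r - g, size=size)
    b_clean = g + median_filter(b - g, size=size)
    return np.stack([r_clean, g, b_clean], axis=-1)
```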
APA, Harvard, Vancouver, ISO, and other styles
46

You, Sin-Jia, and 游信嘉. "Implementation of digital interpolation filter with FPGA." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/99392717134368340139.

Full text
Abstract:
Master's degree
Lunghwa University of Science and Technology
Master's Program, Department of Electronic Engineering
100 (ROC calendar year)
In this thesis, we design and implement a digital interpolation filter in the Verilog hardware description language on an Altera FPGA device. We study and test the performance of a four-point digital interpolation filter under various delay coefficients, with linear input signals and SRRC pulse signals. For the interpolator architecture, we have designed several pipelined and combined-register versions and find that the fastest version increases the clock rate (and hence throughput) by up to 3.5 times while the latency grows by 4 clock periods. In all these architectures, the multiplier coefficients are rescaled by a factor of 6 to reduce the complexity of the circuit.
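A four-point interpolator with a tunable delay coefficient is typically a cubic Lagrange structure evaluated in Farrow form; a reference model one might test such hardware against (our sketch, not the thesis's RTL) is:

```python
def lagrange4(x0, x1, x2, x3, mu):
    """Cubic Lagrange interpolation between x1 and x2: the value a
    fraction mu (0 <= mu < 1) past x1, given four consecutive samples.
    The Horner evaluation mirrors a Farrow hardware structure, with mu
    acting as the delay coefficient."""
    c0 = x1
    c1 = -x0/3 - x1/2 + x2 - x3/6
    c2 = (x0 + x2)/2 - x1
    c3 = (x3 - x0)/6 + (x1 - x2)/2
    return ((c3*mu + c2)*mu + c1)*mu + c0

# On a linear ramp the interpolator is exact:
print(lagrange4(0.0, 1.0, 2.0, 3.0, 0.25))   # -> 1.25
```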
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Chun-Yi, and 王淳毅. "Color Interpolation of RGBW Color Filter Array." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/77574596316362790967.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
101 (ROC calendar year)
In this thesis, we propose a novel interpolation method for the RGBW color filter array that reduces color aliasing while requiring little computation. We first introduce the background and review some traditional interpolation schemes, then discuss problems in color interpolation such as edge blurring and color aliasing. Chapter 3 explains the proposed method and Chapter 4 presents the results, comparing different interpolation methods in terms of Peak Signal-to-Noise Ratio (PSNR) and processing time. Conclusions and future work are given in Chapter 5.
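PSNR, the quality metric used for the comparison, is worth pinning down once (an 8-bit peak value is assumed):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```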
APA, Harvard, Vancouver, ISO, and other styles
48

Ho, Wen-Jen, and 何文楨. "Interpolative DPCM Pyramid Based on Halfband Filters." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/04560246657291124024.

Full text
Abstract:
Doctoral degree
National Chiao Tung University
Department of Electronics Engineering
86 (ROC calendar year)
In this thesis, a multiresolution representation called the interpolative DPCM pyramid is proposed, and its applications to image compression and fast surface interpolation are investigated. The interpolative DPCM pyramid is similar to the Laplacian pyramid except in the way the low-resolution signal is generated: instead of filtering the signal before downsampling, the low-resolution signal is obtained by directly downsampling the high-resolution signal without prefilters. The high-resolution signal is then estimated by interpolating the low-resolution signal with halfband filters. With this approach, an integer-to-integer multiresolution transform can be constructed. The pyramid can be treated as a biorthogonal perfect-reconstruction filter bank, or equivalently as the discrete-time interpolative wavelet transform. Based on the framework of critically sampled pyramids, the relations between the interpolative DPCM pyramid, the interpolative filter bank and the interpolative wavelet transform are clarified. The role of halfband filters in the construction of wavelet bases is discussed, and the time-domain Strang-Fix condition is used to make the relation between regularity and polynomial interpolation clearer. Non-maximally flat filters are shown to provide much more flexibility than maximally flat filters in adaptive interpolation schemes. With this adaptive interpolation algorithm, the adaptive interpolative DPCM pyramid is described and its application to lossless image compression is investigated; experimental results show that it is comparable to state-of-the-art lossless image compression methods. Based on the multiresolution scheme of the interpolative wavelet transform, a fast computation algorithm is developed for surface interpolation: the interpolative wavelet transform is first applied to construct a multiresolution representation of the interpolation problem, and a multigrid algorithm then solves it. The wavelet transform can be treated as a preconditioner of the problem, and the resulting signal to be solved has an octave-band decomposition structure. This structure possesses the representation form needed for multigrid computation and naturally induces the combined multiresolution/multigrid computation structure proposed in this thesis. Experimental results show that the proposed computation structure yields a significant improvement in computation speed.
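One analysis/synthesis level of such an integer-to-integer interpolative pyramid can be sketched in a few lines; for brevity the halfband predictor here is the 2-tap average (the thesis's longer and adaptive halfband predictors substitute directly), and an odd-length signal is assumed so the sample counts match:

```python
import numpy as np

def idpcm_analyze(x):
    """One level of an interpolative DPCM pyramid: even samples form the
    coarse signal (plain downsampling, no prefilter); odd samples are
    predicted from their coarse neighbours and only the integer residual
    is kept. Assumes len(x) is odd."""
    x = np.asarray(x, np.int64)
    coarse = x[0::2]
    pred = (coarse[:-1] + coarse[1:]) // 2    # 2-tap halfband prediction
    detail = x[1::2] - pred
    return coarse, detail

def idpcm_synthesize(coarse, detail):
    """Exact integer reconstruction from coarse + detail."""
    pred = (coarse[:-1] + coarse[1:]) // 2
    out = np.empty(len(coarse) + len(detail), np.int64)
    out[0::2] = coarse
    out[1::2] = detail + pred
    return out

x = np.array([3, 5, 9, 8, 6, 2, 1, 0, 4])
c, d = idpcm_analyze(x)
assert np.array_equal(idpcm_synthesize(c, d), x)   # lossless round trip
```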
APA, Harvard, Vancouver, ISO, and other styles
49

WANG, DAO-YI, and 汪道一. "Image interpolation by spatial-temporal weighted median filter." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/86134671130702057361.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lee, Yu-Nien, and 李祐年. "A High-Performance Color Filter Array Interpolation Design." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/29327712685281259965.

Full text
Abstract:
Master's degree
National Yunlin University of Science and Technology
Master's Program, Department of Electrical Engineering
99 (ROC calendar year)
With the advent of the digital age, digital products exist in every corner of life, and image-related products can be found everywhere, the most popular being the digital camera. Much research on digital cameras focuses on image-processing technology, and one of the most important techniques is color filter array interpolation, also called demosaicking. Generally, color filter array interpolation methods can be classified into two categories: high-complexity techniques and low-cost techniques. The former yield visually pleasing images through more sophisticated demosaicking but require more computational resources, while the latter are more suitable for real-time applications owing to their simplicity and easy implementation in a VLSI chip. In this thesis, a high-performance algorithm for color filter array interpolation is proposed that exploits both edge information and inter-channel correlations to improve the quality of the color image. In addition, a pipelined hardware architecture of the proposed algorithm is designed for real-time applications, yielding a high-performance hardware implementation.
APA, Harvard, Vancouver, ISO, and other styles