Dissertations / Theses on the topic 'Aliasing Analysis'


Consult the top 18 dissertations / theses for your research on the topic 'Aliasing Analysis.'

1

Alsop, Stephen A. "Defeating signal analysis aliasing problems." Thesis, University of Strathclyde, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248868.

2

Dahl, Jason F. "Time Aliasing Methods of Spectrum Estimation." Diss., Brigham Young University, 2003. http://contentdm.lib.byu.edu/ETD/image/etd157.pdf.

3

Barus, Jasa. "An analysis of aliasing in built-in self test procedure." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/27945.

4

Xavier, Dhirendran P. "An experimental analysis of aliasing in BIST under different error models /." Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=55651.

5

Ivanov, André. "BIST signature analysis : analytical techniques for computing the probability of aliasing." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75923.

Abstract:
Testing VLSI circuits is a complex task that requires enormous amounts of resources. To decrease testing costs, testing issues are considered earlier in the design process. This is known as "design for testability" (DFT). Built-in Self Test (BIST) is one proposed DFT approach. BIST generally consists of incorporating additional circuitry on the chip to generate test patterns and compact the response of the circuit under test (CUT) into a reference signature. Compaction implies an information loss, introducing the possibility that a faulty circuit is declared good. Such errors are known as aliasing errors. Several BIST schemes have been proposed, and each has a particular performance in regard to aliasing. However, the schemes are often evaluated and compared with ill-defined measures whose underlying assumptions are either not stated or not clearly understood. Here, a novel classification of the measures of aliasing is proposed. By providing clear definitions of the different possible measures, the proposed classification improves the understanding of the aliasing problem.
This dissertation focuses on the popular BIST scheme that consists of applying pseudorandom test patterns to a CUT and compacting the latter's response by a signature analysis register (LFSR). Assessing the quality of such a scheme in regard to fault coverage is crucial. Fault coverage can be established by full fault simulation. However, high costs may preclude this approach. Other techniques, probabilistic in nature, have been proposed, but a lack of computationally feasible techniques for analyzing the aliasing problem under a reasonable model has left them elusive. Here, new and computationally feasible techniques are developed. More specifically, closed-form expressions for the probability of aliasing are derived for a certain type of LFSRs. Upper bounds are derived for LFSRs characterized by primitive polynomials. An iterative technique is developed for computing the exact probability of aliasing for LFSRs characterized by any feedback polynomial, and for any test sequence length. These new techniques enable better assessments of the quality of BIST schemes that use signature analysis for response compaction. In turn, they are useful for making important design decisions, e.g., determining the number of test patterns that should be applied to a CUT to achieve a certain test confidence; alternatively, deciding how long the signature analyzer should be, and what type of feedback it should possess to achieve a certain desired test confidence.
The techniques developed for computing the probability of aliasing in BIST are also useful in the context of coding theory. The iterative technique developed for computing the probability of aliasing may be used as an efficient technique for computing the probability of an undetected error for shortened versions of cyclic codes.
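The aliasing mechanism analysed in this thesis is easy to reproduce empirically. The following sketch is only an illustration, not the analytical technique of the thesis: it compacts a response bit stream in an 8-bit serial signature register (the register width, the feedback polynomial 0x1D and the test length are arbitrary choices) and estimates by Monte Carlo how often a random error pattern leaves the signature unchanged. For a k-bit register and random error streams, the expected aliasing probability is roughly 2^-k.

```python
import random

WIDTH = 8      # register length k; aliasing probability ~ 2^-k for random errors
POLY = 0x1D    # example feedback polynomial x^8 + x^4 + x^3 + x^2 + 1 (illustrative choice)

def signature(bits, poly=POLY, width=WIDTH):
    """Serial signature analysis: shift the response bit stream through a Galois-style
    LFSR; the final state is the signature (a remainder of polynomial division over GF(2))."""
    state, mask = 0, (1 << width) - 1
    for b in bits:
        msb = (state >> (width - 1)) & 1
        state = (state << 1) & mask
        if msb ^ b:              # feedback tap: XOR the polynomial into the register
            state ^= poly
    return state

random.seed(0)
m = 512                                              # length of the compacted response
good = [random.getrandbits(1) for _ in range(m)]     # fault-free response of the CUT
good_sig = signature(good)

trials, aliased = 20_000, 0
for _ in range(trials):
    err = [random.getrandbits(1) for _ in range(m)]  # random error pattern
    if not any(err):
        continue                                     # a fault must change at least one bit
    faulty = [g ^ e for g, e in zip(good, err)]
    if signature(faulty) == good_sig:                # compaction hides the fault: aliasing
        aliased += 1

print(f"estimated aliasing probability: {aliased / trials:.5f} "
      f"(about 2^-{WIDTH} = {2 ** -WIDTH:.5f})")
```

Because the register is a linear map over GF(2), aliasing occurs exactly when the error polynomial is divisible by the feedback polynomial, which is the fact exploited to derive closed-form expressions and bounds.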
6

Namroud, Iman. "An Analysis of Aliasing and Image Restoration Performance for Digital Imaging Systems." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1399046084.

7

Khedekar, Neha N. "Exploratory Study of the Impact of Value and Reference Semantics on Programming." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/34871.

Abstract:
In this thesis, we measure the impact of reference semantics on programming and reasoning. We designed a survey to compare how well programmers perform under three different programming paradigms. Two of the paradigms, object-copying and swapping, use value semantics, while the third, reference-copying, uses reference semantics. We gave the survey to over 25 people. We recorded the number of questions answered correctly in each paradigm and the amount of time it took to answer each question. We looked at the overall results as well as the results within various levels of Java experience. Based on anecdotal evidence from the literature, we expected questions based on value semantics to be easier than questions based on reference semantics. The results did not yield differences that were statistically significant, but they did conform to our general expectations. While further investigation is clearly needed, we believe that this work represents an important first step in the empirical analysis of a topic that has previously only been discussed informally.
Master of Science
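The three paradigms contrasted in this survey can be illustrated outside Java. The snippet below is a small Python sketch, not material from the thesis: plain assignment plays the role of reference-copying, deepcopy plays the role of object-copying, and tuple assignment plays the role of swapping; the Account class is invented for the example.

```python
import copy

class Account:
    def __init__(self, balance):
        self.balance = balance

a = Account(100)

# Reference-copying: b and a become aliases for the same object.
b = a
b.balance -= 40
print(a.balance)              # 60 -- the update made through b is visible through a

# Object-copying (value semantics): c gets an independent copy, no aliasing.
c = copy.deepcopy(a)
c.balance -= 40
print(a.balance)              # still 60 -- a is unaffected by the change to c

# Swapping (value semantics without the cost of copying): the two names exchange
# their values, and afterwards each object is still referred to by exactly one name.
a, c = c, a
print(a.balance, c.balance)   # 20 60
```

The reasoning difficulty studied in the thesis comes from the first case: once two names are aliased, an update through either one silently affects the other.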
8

Hicks, William T. "An Analysis of Various Digital Filter Types for Use as Matched Pre-Sample Filters in Data Encoders." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/611585.

Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
The need for precise gain and phase matching in multi-channel data sampling systems can result in very strict design requirements for presample or anti-aliasing filters. The traditional use of active RC-type filters is expensive, especially when performance requirements are tight and when operation over a wide environmental temperature range is required. New Digital Signal Processing (DSP) techniques have provided an opportunity for cost reduction and/or performance improvements in these types of applications. This paper summarizes the results of an evaluation of various digital filter types used as matched presample filters in data sampling systems.
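The matching argument above is easy to see in a short sketch: a digital linear-phase FIR presample filter has exactly the same gain and exactly the same (linear) phase in every channel that is loaded with the same coefficients, unlike analog RC filters limited by component tolerances. The fragment below is illustrative only; the sampling rates and filter order are invented, and scipy's firwin window-design routine stands in for whichever filter types the paper actually evaluates.

```python
import numpy as np
from scipy.signal import firwin, freqz

fs_in = 80_000.0          # channel input rate before decimation (Hz) -- assumed value
fs_out = 8_000.0          # per-channel output rate after decimation (Hz) -- assumed value
cutoff = 0.35 * fs_out    # keep the passband safely below the new Nyquist (4 kHz)

# Linear-phase FIR anti-aliasing lowpass. Channels that share these coefficients are
# matched in gain and phase by construction, over any temperature range.
taps = firwin(numtaps=255, cutoff=cutoff, fs=fs_in)

w, h = freqz(taps, worN=4096, fs=fs_in)
stopband = np.abs(h[w >= fs_out / 2])   # everything that would fold back after decimation
print(f"worst-case gain above the output Nyquist: {20 * np.log10(stopband.max()):.1f} dB")
```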
9

Ranganath, Venkatesh Prasad. "Scalable and accurate approaches for program dependence analysis, slicing, and verification of concurrent object oriented programs." Diss., Manhattan, Kan. : Kansas State University, 2006. http://hdl.handle.net/2097/248.

10

Noury, Nicolas. "Mise en correspondance A Contrario de points d'intérêt sous contraintes géométrique et photométrique." Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00640168.

Abstract:
Structure-from-motion analysis estimates the shape of 3D objects and the position of the camera from photographs or videos. It is most often carried out in the following steps: 1) extraction of interest points; 2) matching of the interest points between images using photometric descriptors of the point neighbourhoods; 3) filtering of the matches produced in the previous step so as to keep only those compatible with a chosen geometric constraint, whose parameters can then be computed. However, the photometric resemblance used alone in the second step is not sufficient when several points have the same appearance. Moreover, the last step is performed by a robust filtering algorithm, RANSAC, which requires thresholds to be set, and this turns out to be a delicate operation. The starting point of this work is the A Contrario RANSAC approach of Moisan and Stival, which removes the need for such thresholds. Our first contribution is an a contrario model that performs the matching with both photometric and geometric criteria, together with the robust filtering, in a single step. This method makes it possible to match scenes containing repeated patterns, which is not possible with the usual approach. Our second contribution extends this result to strong viewpoint changes by improving the ASift method of Morel and Yu. It yields correspondences that are more numerous and more densely distributed in difficult scenes containing repeated patterns observed from very different angles.
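For readers unfamiliar with the three-step pipeline this thesis revisits, the sketch below shows a standard OpenCV version of it, including the hand-tuned RANSAC reprojection threshold that the a contrario formulation is designed to eliminate. It is not the method of the thesis, and the image file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)   # placeholder image pair
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# 1) interest-point extraction
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2) purely photometric matching on descriptors -- ambiguous when several points look alike
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

# 3) robust geometric filtering with RANSAC; the 3-pixel threshold is exactly the kind of
#    parameter the a contrario approach removes
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print(f"{int(inliers.sum())} matches survive the geometric filter")
```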
11

GHOSH, SWAROOP. "SCAN CHAIN FAULT IDENTIFICATION USING WEIGHT-BASED CODES FOR SoC CIRCUITS." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1085765670.

12

Hanzálek, Pavel. "Praktické ukázky zpracování signálů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400849.

Abstract:
The thesis focuses on signal processing. Using practical examples, it shows how individual signal processing operations are used in practice. For each of the selected operations, an application is created in MATLAB, including a graphical interface for easier use. Each chapter first treats its operation from a theoretical point of view and then demonstrates, through a practical example, how the operation is used in practice. The individual applications are described mainly in terms of how they are operated and the results they can produce. The results of the practical part are presented in the appendix of the thesis.
13

Almansa, Andrés. "Echantillonnage, interpolation et détection : applications en imagerie satellitaire." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2002. http://tel.archives-ouvertes.fr/tel-00665725.

Abstract:
This thesis addresses some of the problems that arise in the design of a complete computer vision system, from sampling to the detection of structures and their interpretation. The main motivation for addressing these problems came from CNES and the design of Earth-observation satellites, as well as from photogrammetry and video-surveillance applications at Cognitech, Inc. during the final stages of this work, but the techniques developed are general enough to be of interest for other computer vision systems. In a first part we carry out a comparative study of image sampling systems on a regular lattice, either square or hexagonal, using a measure of effective resolution that determines the amount of useful information provided by each pixel of the lattice once the effects of noise and aliasing have been separated. This resolution measure is then used to improve zoom and restoration techniques based on total-variation minimization. The comparative study continues by analysing to what extent each system allows the removal of perturbations of the sampling lattice caused by micro-vibrations of the satellite during acquisition. After presenting the theoretical limits of the problem, we compare the performance of existing reconstruction methods with that of a new algorithm better suited to the CNES sampling conditions. In a second part we consider the interpolation of digital elevation models in two particular cases: the interpolation of level lines, and the study of areas in which a correlation method based on stereo pairs does not provide reliable information. We study the links between classical methods used in the earth sciences, such as kriging and geodesic distances, and the AMLE method, and we propose an extension of the axiomatic theory of interpolation that leads to the latter. An experimental evaluation concludes that a new combination of kriging with AMLE provides the best interpolations for terrain models. Finally, we consider the detection of alignments and of their vanishing points in an image, since they can be used both to build urban elevation models and to solve photogrammetry and camera calibration problems. Our approach is based on Gestalt theory and on its effective implementation recently proposed by Desolneux, Moisan and Morel using the Helmholtz principle. The result is a parameter-free vanishing-point detector that uses no a priori information about the image or the camera.
14

NIKOLIC, Durica. "A General Framework for Constraint-Based Static Analyses of Java Bytecode Programs." Doctoral thesis, 2013. http://hdl.handle.net/11562/546351.

Abstract:
The present thesis introduces a generic parameterized framework for static analysis of Java bytecode programs, based on constraint generation and solving. The framework is able to deal with the exceptional flows inside a program and with the side-effects induced by calls to non-pure methods. It is generic in the sense that different instantiations of its parameters give rise to different static analyses which might capture complex memory-related properties at each program point. The properties of interest are represented as abstract domains, and therefore the static analyses defined inside the framework are based on abstract interpretation. The framework can be used to generate possible ("may") approximations of the property of interest, as well as definite ("must") approximations of that property. In the former case, the result of the static analysis is an over-approximation of what might be true at a given program point; in the latter, it is an under-approximation. This thesis provides a set of conditions that instantiations of the framework's parameters must satisfy in order to yield a sound static analysis. When these conditions are satisfied by an instantiation, the framework guarantees that the corresponding static analysis is sound. This means that the designer of a novel static analysis only has to show that the parameters he or she instantiated satisfy the conditions provided by the framework. In this way the framework simplifies the soundness proof: instead of showing that the overall analysis is sound, it is enough to show that the instantiation describing the actual static analysis satisfies the conditions mentioned above. This is a very important feature of the present approach. The thesis then introduces two novel static analyses dealing with memory-related properties: the Possible Reachability Analysis Between Program Variables and the Definite Expression Aliasing Analysis. The former is an example of a possible analysis: it determines, for each program point p, the ordered pairs (v, w) of variables available at p such that v might reach w at p, i.e., such that starting from v it is possible to follow a path of memory locations that leads to the object bound to w. The latter is an example of a definite analysis: it determines, for each program point p and each variable v available at that point, a set of expressions which are always aliased to v at p. Both analyses have been formalized and proved sound by using the theoretical results of the framework. They have also been implemented inside the Julia tool (www.juliasoft.com), a static analyzer for Java and Android. Experimental evaluation on real-life benchmarks shows that the precision of Julia's principal checkers (the nullness and termination checkers) increased compared to the previous version of Julia, in which these two analyses were not present. Moreover, the evaluation showed that the presence of the reachability analysis actually decreased the total run time of Julia. The aliasing analysis, on the other hand, takes more time, but the number of possible warnings produced by the principal checkers decreased drastically.
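Although the framework targets Java bytecode, the two properties computed by these instantiations can be illustrated with a few lines of Python (an informal illustration, not the analysis itself): possible reachability asks which variables can be reached from which others through chains of fields, while definite expression aliasing asks which expressions certainly denote the same object as a given variable at a given program point.

```python
class Node:
    def __init__(self, nxt=None, value=0):
        self.next = nxt
        self.value = value

def example():
    w = Node(value=1)
    v = Node(nxt=w, value=2)    # from v the field path v.next leads to the object bound
                                # to w, so the pair (v, w) is in the possible-reachability
                                # relation at this program point
    u = v.next                  # immediately after this statement the expression v.next
                                # is definitely aliased to u on every execution path
    v.next = Node(value=3)      # the alias between u and v.next is broken, and w is no
                                # longer reachable from v; u itself still aliases w
    return u.value + w.value    # 2 -- both reads see the node created first

print(example())
```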
15

Wang, Bye-Cherng, and 王佰誠. "Analysis of Designed Experiments with Complex Aliasing." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/86537042655573275702.

16

Moghtaderi, AZADEH. "Multitaper Methods for Time-Frequency Spectrum Estimation and Unaliasing of Harmonic Frequencies." Thesis, 2009. http://hdl.handle.net/1974/1700.

Abstract:
This thesis is concerned with various aspects of stationary and nonstationary time series analysis. In the nonstationary case, we study estimation of the Wold-Cramér evolutionary spectrum, which is a time-dependent analogue of the spectrum of a stationary process. Existing estimators of the Wold-Cramér evolutionary spectrum suffer from several problems, including bias in boundary regions of the time-frequency plane, poor frequency resolution, and an inability to handle the presence of purely harmonic frequencies. We propose techniques to handle all three of these problems. We propose a new estimator of the Wold-Cramér evolutionary spectrum (the BCMTFSE) which mitigates the first problem. Our estimator is based on an extrapolation of the Wold-Cramér evolutionary spectrum in time, using an estimate of its time derivative. We apply our estimator to a set of simulated nonstationary processes with known Wold-Cramér evolutionary spectra to demonstrate its performance. We also propose an estimator of the Wold-Cramér evolutionary spectrum, valid for uniformly modulated processes (UMPs). This estimator mitigates the second problem, by exploiting the structure of UMPs to improve the frequency resolution of the BCMTFSE. We apply this estimator to a simulated UMP with known Wold-Cramér evolutionary spectrum. To deal with the third problem, one can detect and remove purely harmonic frequencies before applying the BCMTFSE. Doing so requires a consideration of the aliasing problem. We propose a frequency-domain technique to detect and unalias aliased frequencies in bivariate time series, based on the observation that aliasing manifests as nonlinearity in the phase of the complex coherency between a stationary process and a time-delayed version of itself. To illustrate this "unaliasing" technique, we apply it to simulated data and a real-world example of solar noon flux data.
Thesis (Ph.D., Mathematics & Statistics) -- Queen's University, 2009.
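The observation behind the unaliasing technique, namely that a pure time delay makes the cross-spectral phase between a process and its delayed copy linear in frequency while an aliased harmonic breaks that linearity, can be checked numerically. The sketch below is illustrative only: the sampling rate, tone frequency and delay are invented, and a plain FFT stands in for the multitaper estimators used in the thesis. It samples a tone lying above the Nyquist rate together with a copy delayed before sampling, and compares the measured phase at the folded peak with the unaliased and aliased predictions.

```python
import numpy as np

fs, f0, tau = 100.0, 80.0, 0.013   # sample rate (Hz), true tone above Nyquist, delay (s)
N = 4000                           # chosen so the aliased tone falls on an exact FFT bin
t = np.arange(N) / fs

x = np.cos(2 * np.pi * f0 * t)            # tone sampled directly
y = np.cos(2 * np.pi * f0 * (t - tau))    # the same tone, delayed before sampling

X, Y = np.fft.rfft(x), np.fft.rfft(y)
freqs = np.fft.rfftfreq(N, d=1 / fs)
k = int(np.argmax(np.abs(X)))             # the spectral peak sits at the folded frequency
f_alias = freqs[k]                        # 20 Hz, the alias of the 80 Hz tone

wrap = lambda p: np.angle(np.exp(1j * p))            # wrap a phase into (-pi, pi]
measured = np.angle(Y[k] * np.conj(X[k]))            # cross-spectral phase at the peak
if_unaliased = wrap(-2 * np.pi * f_alias * tau)      # what a genuine 20 Hz tone would give
if_aliased = wrap(2 * np.pi * (fs - f_alias) * tau)  # what a tone folded down from 80 Hz gives

print(f"measured {measured:+.3f}, unaliased prediction {if_unaliased:+.3f}, "
      f"aliased prediction {if_aliased:+.3f}")
# The measured phase departs from the -2*pi*f*tau trend and matches the aliased
# prediction, revealing that the 20 Hz peak is really an 80 Hz harmonic.
```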
17

Wang, Hong-Bin, and 王泓斌. "Analysis and Verification of Class-D Amplifier with Zero-Phase-Shift PWM-Aliasing-Distortion Reduction." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4g9z56.

18

ElGarewi, Ahmed. "Analysis of algorithms for filter bank design optimization." Thesis, 2019. http://hdl.handle.net/1828/11131.

Abstract:
This thesis deals with design algorithms for filter banks based on optimization. The design specifications consist of perfect reconstruction and frequency response requirements for finite impulse response (FIR) analysis and synthesis filters. The perfect reconstruction conditions are formulated as a set of linear equations with respect to the coefficients of the analysis and synthesis filters. Five design algorithms are presented. The first three are based on unconstrained optimization of performance indices, which include the perfect reconstruction error and the error in the frequency specifications. The last two algorithms are formulated as constrained optimization problems with the perfect reconstruction error as the performance index and the frequency specifications as constraints. The performance of the five algorithms is evaluated and compared using six examples; these examples include uniform filter bank, compatible non-uniform filter bank and incompatible non-uniform filter bank designs. The evaluation criteria are based on distortion and aliasing errors, the magnitude response characteristics of the analysis and synthesis filters, the computation time required for the optimization, and the convergence of the performance index with respect to the number of iterations. The results show that the five algorithms can achieve almost perfect reconstruction and can meet the frequency response specifications at an acceptable level. In the case of incompatible non-uniform filter banks, the algorithms have difficulty achieving almost perfect reconstruction.
Graduate
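As background to the design problem, the perfect reconstruction and aliasing cancellation conditions can be verified directly on the simplest two-channel example. The sketch below is not one of the five algorithms of the thesis: it hard-codes Haar analysis and synthesis filters, which satisfy F0(z)H0(z) + F1(z)H1(z) = 2z^-1 (a pure delay, no distortion) and F0(z)H0(-z) + F1(z)H1(-z) = 0 (aliasing cancelled), and checks numerically that the bank reconstructs its input up to a one-sample delay.

```python
import numpy as np

a = 1.0 / np.sqrt(2.0)
h0, h1 = np.array([a, a]), np.array([a, -a])    # Haar analysis filters
f0, f1 = np.array([a, a]), np.array([-a, a])    # synthesis filters chosen so that aliasing
                                                # cancels and the distortion is a pure delay

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# Analysis: filter each channel, then keep every second sample.
v0 = np.convolve(x, h0)[::2]
v1 = np.convolve(x, h1)[::2]

# Synthesis: upsample by inserting zeros, filter, and sum the two channels.
u0 = np.zeros(2 * len(v0)); u0[::2] = v0
u1 = np.zeros(2 * len(v1)); u1[::2] = v1
y = np.convolve(u0, f0) + np.convolve(u1, f1)

# Perfect reconstruction up to a one-sample delay: y[n] = x[n-1].
err = np.max(np.abs(y[1:1 + len(x)] - x))
print(f"max reconstruction error: {err:.2e}")   # on the order of machine precision
```

FIR banks designed by the optimization algorithms in the thesis aim to satisfy the same two conditions, exactly or approximately, while also meeting the frequency response specifications.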