Dissertations / Theses on the topic 'Digital Signal Analyzer'

To see the other types of publications on this topic, follow the link: Digital Signal Analyzer.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 28 dissertations / theses for your research on the topic 'Digital Signal Analyzer.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Shi, Xiaodong. "Upgrading liquid metal cleanliness analyzer (LiMCA) with digital signal processing (DSP) technology." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22677.

Full text
Abstract:
The development of advanced metal products requires "clean" liquid metals as their basic materials. In a growing number of applications, the cleanliness of the liquid metal must be qualified: the number and size of inclusions must be kept below acceptable limits. Such demands for quality have resulted in the development of measuring systems that can count the number and size distribution of inclusions. One such device, the so-called LiMCA (Liquid Metal Cleanliness Analyzer), developed at McGill University, measures inclusions in liquid metals and has been used successfully in the aluminum industry for years.
Digital Signal Processing (DSP) technology has been successfully applied to upgrade the LiMCA system. With this technology, the DSP-based LiMCA system is able to describe each LiMCA transient by a group of seven parameters and, with their help, classify it into a category. Moreover, it simultaneously counts the classified peaks based on their height and their time of occurrence. (Abstract shortened by UMI.)
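The transient-counting idea described in this abstract can be illustrated with a minimal sketch. This is not the LiMCA implementation itself: the threshold, the parameter set (a subset of the seven mentioned), and the height bins below are invented for illustration.

```python
import numpy as np

def detect_transients(signal, threshold):
    """Find contiguous regions where the signal exceeds a threshold and
    describe each by a few parameters (peak height, width, time of peak)."""
    above = signal > threshold
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    transients = []
    for s, e in zip(starts, ends):
        seg = signal[s:e]
        transients.append({
            "time": s + int(np.argmax(seg)),  # sample index of the peak
            "height": float(seg.max()),
            "width": e - s,
        })
    return transients

def classify_by_height(transients, bins):
    """Count transients per height bin (a stand-in for size categories)."""
    counts = [0] * (len(bins) + 1)
    for t in transients:
        counts[int(np.searchsorted(bins, t["height"]))] += 1
    return counts

# Synthetic trace: baseline noise plus two pulses of different heights.
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(1000)
x[200:210] += 1.0   # small pulse
x[600:615] += 3.0   # large pulse
peaks = detect_transients(x, threshold=0.5)
counts = classify_by_height(peaks, bins=[2.0])
print(len(peaks), counts)  # 2 [1, 1]
```

Each detected pulse carries its own parameter group, so peaks can be counted simultaneously by height category and by time of occurrence, as the abstract describes.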
APA, Harvard, Vancouver, ISO, and other styles
2

Зубков, О. В., І. В. Свид, О. С. Мальцев, and Л. Ф. Сайківська. "In-circuit Signal Analysis in the Development of Digital Devices in Vivado 2018." Thesis, Theoretical and Applied Aspects of Device Development on Microcontrollers and FPGAs, MC&FPGA-2019, 2019. https://doi.org/10.35598/mcfpga.2019.003.

Full text
Abstract:
The implementation of in-circuit analysis of logic signals in digital devices synthesized on a Xilinx field-programmable gate array is considered. A digital device controlling a streaming analog-to-digital converter was designed. The results of the analog-to-digital conversion were analyzed, and measures were taken to smooth out false conversion results.
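The "smoothing of false conversion results" mentioned here is commonly done with a short median filter. The sketch below is illustrative only, not the authors' design; the sample values are invented.

```python
import numpy as np

def median_smooth(samples, k=3):
    """Median filter of odd length k: a common way to suppress isolated
    false ADC readings while preserving step edges."""
    pad = k // 2
    padded = np.pad(samples, pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(len(samples))])

adc = np.array([512, 513, 511, 900, 512, 514, 513])  # one spurious sample
clean = median_smooth(adc)
print(clean)  # the isolated 900 is replaced by a neighboring value
```

Unlike a moving average, the median does not smear the spurious sample into its neighbors, which is why it is a common choice for this kind of glitch removal.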
APA, Harvard, Vancouver, ISO, and other styles
3

Зубков, О. В., І. В. Свид, О. С. Мальцев, and Л. Ф. Сайківська. "In-circuit Signal Analysis in the Development of Digital Devices in Vivado 2018." Thesis, NURE, MC&FPGA, 2019. https://mcfpga.nure.ua/conf/2019-mcfpga/10-35598-mcfpga-2019-003.

Full text
Abstract:
The implementation of in-circuit analysis of logic signals in digital devices synthesized on a Xilinx field-programmable gate array is considered. A digital device controlling a streaming analog-to-digital converter was designed. The results of the analog-to-digital conversion were analyzed, and measures were taken to smooth out false conversion results.
APA, Harvard, Vancouver, ISO, and other styles
4

Lau, Anthony Kwok. "A digital oscilloscope and spectrum analyzer for analysis of primate vocalizations : master's research project report." Scholarly Commons, 1989. https://scholarlycommons.pacific.edu/uop_etds/2177.

Full text
Abstract:
The major objective of this report is to present information regarding the design, construction, and testing of the Digital Oscilloscope Peripheral, which allows the IBM Personal Computer (IBM PC) to be used as both a digital oscilloscope and a spectrum analyzer. The design and development of both hardware and software are described briefly; however, the test results are analyzed and discussed in great detail. All documents, including the circuit diagrams, program flowcharts and listings, and user manual, are provided in the appendices for reference. Several different products are referred to in this report; the following lists each one and its respective company: IBM, XT, AT, and PS/2 are registered trademarks of International Business Machines Corporation; MS-DOS is a registered trademark of Microsoft Corporation; and Turbo Basic is a registered trademark of Borland International, Inc.
APA, Harvard, Vancouver, ISO, and other styles
5

Whittaker, Philip. "On board signal analysis using novel analogue/digital signal processing techniques on low earth orbit mini/microsatellites." Thesis, University of Surrey, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ghent, Jeremy E. "A digital signal processing approach to analyze the effects of multiple reflections between highway noise barriers." Ohio : Ohio University, 2003. http://www.ohiolink.edu/etd/view.cgi?ohiou1175090494.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pous, Nicolas. "Analyse de signaux analogiques/radiofréquences à l'aide de ressources digitales en vue du test." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2011. http://tel.archives-ouvertes.fr/tel-00667202.

Full text
Abstract:
The work presented in this thesis addresses the reduction of production costs for RF circuits. The commoditization of these devices is pushing manufacturers to look for new solutions to produce RF circuits at low cost. Since testing accounts for a large share of the cost of these products, the objective of this thesis was to propose an original strategy for analyzing modulated signals using low-cost digital test equipment. The manuscript first gives a general overview of industrial testing and surveys the solutions proposed in the literature to reduce the cost of testing RF circuits, along with examples of the use of the "level-crossing" concept, the method chosen here to capture and then reconstruct analog and RF signals. The basic principles used to reconstruct analog signals from timing information are then presented. The key element of the reconstruction algorithms is determining the instants at which the signal crosses a predetermined voltage threshold; from this information, the phase, frequency, and amplitude of the observed signal can be derived. The remainder is devoted to the analysis of modulated signals, first for elementary modulations and then for more complex modulation schemes based on concrete case studies. The work concludes by taking the non-idealities of the acquisition chain into account, in particular studying their impact in order to develop algorithms that compensate for the resulting errors.
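The level-crossing principle described here, recovering signal parameters from interpolated threshold-crossing instants, can be sketched in its simplest form: estimating the frequency of a tone from its upward zero crossings. The signal, sampling rate, and threshold below are invented for illustration.

```python
import numpy as np

def crossing_times(signal, t, level=0.0):
    """Interpolated times at which the signal crosses `level` upward."""
    s = signal - level
    idx = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]
    # linear interpolation between the two samples around each crossing
    frac = -s[idx] / (s[idx + 1] - s[idx])
    return t[idx] + frac * (t[1] - t[0])

fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
x = 1.5 * np.sin(2 * np.pi * 50.0 * t + 0.3)

tc = crossing_times(x, t)
freq = 1.0 / np.mean(np.diff(tc))   # one upward crossing per period
print(round(freq, 1))  # 50.0
```

With crossings at two different levels one can go further and estimate amplitude and phase as well, which is the basis of the reconstruction strategy the abstract outlines.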
APA, Harvard, Vancouver, ISO, and other styles
8

Lê, Nguyên Khoa 1975. "Time-frequency analyses of the hyperbolic kernel and hyperbolic wavelet." Monash University, Dept. of Electrical and Computer Systems Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/8299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Larguier, Laurent. "Analyse de l'impact du bruit de commutation sur les blocs digitaux des circuits intégrés CMOS." Montpellier 2, 2008. http://www.theses.fr/2008MON20191.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rodesten, Stephan. "Program för frekvensanalys." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-58157.

Full text
Abstract:
This report covers the work process behind creating a spectrum analyzer. The reader will learn about the chosen method as well as alternative methods. In addition, the theory behind each step is examined and compared with potential alternative solutions. The project was carried out on behalf of KA Automation. Its purpose was to create a base platform for analyzing sound frequencies, with the goal of identifying sound signatures, in the form of frequencies, of, for example, servo motors in water pumps. The idea is that a later development stage could identify if and when new frequencies have appeared in the sound profile, which may indicate that the motor needs service. The platform is built with C# and the audio-processing library NAudio. From the results, one can conclude that the program can analyze sound and display the magnitude of its frequency components, and is therefore a suitable base for further development.
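The core of such a spectrum analyzer, mapping a block of samples to per-frequency magnitudes and picking out dominant components, can be sketched in a few lines. This uses Python with NumPy rather than the thesis's C#/NAudio, and the tone frequencies are invented stand-ins for a motor's sound signature.

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# Two tones, as one might record from a motor: 120 Hz strong, 480 Hz weaker.
x = 1.0 * np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 480 * t)

spectrum = np.abs(np.fft.rfft(x)) / len(x)   # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)    # frequency of each FFT bin
dominant = freqs[np.argmax(spectrum)]
print(dominant)  # 120.0
```

Detecting a *new* frequency in the sound profile, the later development stage envisioned in the abstract, would then amount to comparing such spectra against a stored baseline.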
APA, Harvard, Vancouver, ISO, and other styles
11

Hattay, Jamel. "Wavelet-based lifting structures and blind source separation : applications to digital in-line holography." Rouen, 2016. http://www.theses.fr/2016ROUES016.

Full text
Abstract:
This thesis develops wavelet-domain processing methods for digital in-line holography, drawing on information theory and signal-processing tools such as blind source separation (BSS). These techniques are used to improve digital holography, notably for twin-image suppression, refractive-index estimation, and real-time coding and transmission of holograms. First, a brief introduction is given to the in-line configuration of digital holography in flow measurements as implemented at UMR 6614 CORIA: the recording step and the two reconstruction approaches used in this thesis. The two main obstacles in digital hologram reconstruction are then reviewed: determining the best focus plane and removing the twin image. Next, the deconvolution tool is described in detail: blind source separation enhanced by a multiscale, wavelet-based decomposition, which enables the separation of convolutively mixed images. The algorithm uses a second-generation, adaptive wavelet transform called the Adaptive Quincunx Lifting Scheme (AQLS), coupled with an appropriate unmixing algorithm, in three steps: the convolutively mixed input images are decomposed by AQLS into a wavelet tree; the separation algorithm is applied to the sparsest node, usually at the highest resolution; and the unmixed images are reconstructed with the inverse AQLS transform.
This tool is applied to several problems in digital in-line holography, for which two methods are proposed. The first is an entropy-based method that automatically retrieves the best focus plane of holographic images, a crucial issue in hologram reconstruction. The second removes the twin image that accompanies the reconstructed image; it combines the AQLS decomposition with a statistical unmixing algorithm based on Independent Component Analysis (ICA). Since hologram formation is modeled as a convolution, AQLS and ICA together perform the deconvolution task. Experimental results confirm that both proposed methods can estimate the best focus plane and discard the twin-image artifact from the reconstructed image. The thickness of a ring is then estimated from the reconstructed image of a hologram containing the diffraction pattern of a stable vapor bubble, created by thermal coupling between a laser pulse and nanoparticles, in a liquid droplet.
The last part introduces the Tele-Holography concept: an interactive flow of exchange between the chamber where holograms are recorded in situ and a distant laboratory where they are processed digitally. To this end, lossless wavelet-based compression of digital holograms is proposed, together with a quincunx embedded zero-tree wavelet coder (QEZW) for scalable, progressive transmission that, given the transmission channel capacity, drastically reduces the bit rate of the holography transmission flow. Experiments on real holograms recorded at the CORIA laboratory show significant improvements in compression ratio and total compressed size, and reveal the coder's capabilities in terms of real bit rate for progressive transmission.
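The lifting idea behind AQLS (split the samples, predict one half from the other, then update) can be illustrated with the simplest possible case: one level of 1-D Haar lifting. This is a didactic sketch, not the adaptive quincunx scheme of the thesis.

```python
import numpy as np

def haar_lift(x):
    """One level of the 1-D Haar wavelet via lifting:
    split into even/odd samples, predict the odd samples from the even
    ones, then update the even samples to preserve the local average."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even          # predict step: prediction residual
    approx = even + detail / 2   # update step: local averages
    return approx, detail

def haar_unlift(approx, detail):
    """Invert the lifting steps in reverse order: perfect reconstruction."""
    even = approx - detail / 2
    odd = detail + even
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 5.0, 1.0, 3.0])
a, d = haar_lift(x)
print(a, d)  # [3. 7. 5. 2.] [2. 2. 0. 2.]
print(np.allclose(haar_unlift(a, d), x))  # True
```

Lifting guarantees invertibility by construction, which is what makes adaptive variants such as AQLS attractive: the predict step can be tuned to maximize sparseness without ever losing perfect reconstruction.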
APA, Harvard, Vancouver, ISO, and other styles
12

Boire, Jean-Yves. "Recueil et analyse des electrogrammes visuels." Clermont-Ferrand 2, 1987. http://www.theses.fr/1987CLF2E382.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Matthews, Brett Alexander. "Probabilistic modeling of neural data for analysis and synthesis of speech." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/50116.

Full text
Abstract:
This research consists of probabilistic modeling of speech audio signals and deep-brain neurological signals in brain-computer interfaces. A significant portion of this research consists of a collaborative effort with Neural Signals Inc., Duluth, GA, and Boston University to develop an intracortical neural prosthetic system for speech restoration in a human subject living with Locked-In Syndrome, i.e., he is paralyzed and unable to speak. The work is carried out in three major phases. We first use kernel-based classifiers to detect evidence of articulation gestures and phonological attributes in speech audio signals. We demonstrate that articulatory information can be used to decode speech content in speech audio signals. In the second phase of the research, we use neurological signals collected from a human subject with Locked-In Syndrome to predict intended speech content. The neural data were collected with a microwire electrode surgically implanted in the speech motor cortex of the subject's brain, with the implant location chosen to capture extracellular electric potentials related to speech motor activity. The data include extracellular traces and firing occurrence times for neural clusters in the vicinity of the electrode identified by an expert. We compute continuous firing rate estimates for the ensemble of neural clusters using several rate estimation methods and apply statistical classifiers to the rate estimates to predict intended speech content. We use Gaussian mixture models to classify short frames of data into 5 vowel classes and to discriminate intended speech activity in the data from non-speech. We then perform a series of data collection experiments with the subject designed to test explicitly for several speech articulation gestures, and decode the data offline.
Finally, in the third phase of the research we develop an original probabilistic method for the task of spike-sorting in intracortical brain-computer interfaces, i.e., identifying and distinguishing action potential waveforms in extracellular traces. Our method uses both action potential waveforms and their occurrence times to cluster the data. We apply the method to semi-artificial data and partially labeled real data. We then classify neural spike waveforms, modeled with single multivariate Gaussians, using the method of minimum classification error for parameter estimation. Finally, we apply our joint waveforms and occurrence times spike-sorting method to neurological data in the context of a neural prosthesis for speech.
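One common rate-estimation method of the kind mentioned above, binning spike counts and smoothing with a Gaussian kernel, can be sketched as follows. The spike train and all parameters are hypothetical, not the study's data.

```python
import numpy as np

def firing_rate(spike_times, t_end, bin_s=0.01, sigma_s=0.05):
    """Continuous firing-rate estimate: bin spike counts, then smooth
    with a Gaussian kernel (one of several rate-estimation methods)."""
    edges = np.arange(0.0, t_end + bin_s, bin_s)
    counts, _ = np.histogram(spike_times, bins=edges)
    # Gaussian kernel, normalized so smoothing preserves total spike count
    k = np.arange(-4 * sigma_s, 4 * sigma_s + bin_s, bin_s)
    kernel = np.exp(-0.5 * (k / sigma_s) ** 2)
    kernel /= kernel.sum()
    rate = np.convolve(counts, kernel, mode="same") / bin_s  # spikes/s
    return edges[:-1], rate

# Hypothetical cluster firing regularly at ~20 Hz for 2 s
# (offset keeps spike times away from bin edges).
spikes = np.arange(0.0, 2.0, 0.05) + 0.003
t, rate = firing_rate(spikes, t_end=2.0)
print(round(rate[len(rate) // 2]))  # ~20 spikes/s mid-trial
```

A classifier such as a Gaussian mixture model would then operate on vectors of such rate estimates, one per neural cluster, rather than on the raw spike times.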
APA, Harvard, Vancouver, ISO, and other styles
14

Bourdier, Renaud. "Analyse temps/frequence, filtrage et synthese numeriques de signaux de parole : application au filtrage, a la reduction de bruit et a la restauration d'enregistrements anciens." Le Mans, 1988. http://www.theses.fr/1988LEMA1001.

Full text
Abstract:
A study of the temporal and spectral phenomena that appear during synthesis based on modifying the spectra obtained from short-time Fourier transform (STFT) analysis. The performance of STFT implementations of analysis-synthesis, of time-invariant and time-varying filtering, and of noise reduction was characterized.
APA, Harvard, Vancouver, ISO, and other styles
15

Гриненко, Віталій Вікторович, Виталий Викторович Гриненко, Vitalii Viktorovych Hrynenko, and А. В. Любко. "Багатофункціональний осцилографа-аналізатор з генератором сигналів довільної форми." Thesis, Сумський державний університет, 2015. http://essuir.sumdu.edu.ua/handle/123456789/41239.

Full text
Abstract:
When testing and debugging devices, oscilloscopes and analyzers are used to display signal waveforms and to track changes in a digital signal over time; they make it possible to observe changes in several digital signals simultaneously.
APA, Harvard, Vancouver, ISO, and other styles
16

Liu, Ming. "Analyse et optimisation du système asiatique de diffusion terrestre et mobile de la télévision numérique." Phd thesis, INSA de Rennes, 2011. http://tel.archives-ouvertes.fr/tel-00662247.

Full text
Abstract:
This thesis analyzes the Chinese digital television system (DTMB) and optimizes its channel-estimation function. First, an in-depth analysis of the system is carried out in comparison with the DVB-T system in terms of specifications, spectral efficiency, and performance. Then, the channel-estimation function based on the system's pseudo-random sequence is studied in the time and frequency domains, and several improvements are made to the typical methods, in particular to handle highly time-dispersive channels. Finally, new low-complexity, data-aided iterative processes are proposed to refine the channel estimates. The channel-decoding and interleaving functions are excluded from the loop, and time/frequency filtering functions are studied to make the estimates more reliable. These new algorithms demonstrate their effectiveness compared with the usual methods in the literature.
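Pseudo-random-sequence channel estimation of the kind discussed reduces, in its simplest form, to correlating the received training block against shifted copies of the known sequence. The sketch below uses an invented random ±1 sequence and channel, not the actual DTMB PN sequence or frame structure.

```python
import numpy as np

N = 1023
rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=N)       # stand-in for the PN training sequence
h = np.array([1.0, 0.0, 0.5, 0.0, 0.2])    # hypothetical multipath channel taps

# Received training block: circular convolution of the PN sequence
# with the channel (done here via the FFT).
rx = np.real(np.fft.ifft(np.fft.fft(pn) * np.fft.fft(h, N)))

# Correlating with shifted copies of the PN sequence recovers each tap,
# plus small self-noise (a random +/-1 sequence is only approximately white).
est = np.array([np.dot(rx, np.roll(pn, k)) for k in range(8)]) / N
print(np.round(est, 2))  # close to [1.0, 0.0, 0.5, 0.0, 0.2, 0.0, ...]
```

The improvements the thesis describes start from estimates of this kind and refine them iteratively with data-aided processing and time/frequency filtering.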
APA, Harvard, Vancouver, ISO, and other styles
17

Thomazella, Rogério. "Técnicas de processamento digital de sinais de sensor piezelétrico na detecção de vibrações auto-excitadas (chatter) no processo de retificação /." Bauru, 2019. http://hdl.handle.net/11449/182150.

Full text
Abstract:
Advisor: Paulo Roberto de Aguiar
Chatter corresponds to unstable, chaotic motion in the machining system, resulting in fluctuating cutting forces and waviness imprinted on the machined surface. It is a serious and undesirable physical phenomenon in machining, especially in the grinding process: its intense occurrence can produce a finished part outside dimensional and geometric tolerances, or even cause irreversible damage such as changes in hardness, high surface roughness, and thermal damage (burn) to the ground part. Few studies in the literature address the analysis and monitoring of chatter with digital signal-processing techniques, especially for acceleration signals. The objective of this work is therefore to propose and validate a new digital processing technique for acceleration signals, based on the short-time Fourier transform (STFT) and the ratio-of-power (ROP) statistic, for detecting chatter during tangential surface grinding with superabrasive cubic boron nitride (CBN) and aluminum-oxide grinding wheels. Grinding tests were conducted on ABNT 1045 steel workpieces, with a piezoelectric accelerometer attached to the workpiece holder, and the acceleration signals were recorded at a 2 MHz sampling rate. Among the output variables, Vickers hardness (HV), average roughness (Ra), and microstructural analysis of the ground workpieces were obtained. The STFT- and ROP-based technique was then applied to the vibration signals to extract the chatter characteristics. The results show that the technique can characterize, over time, the spectral patterns of a frequency band related to chatter, and the observed patterns have a strong relationship with th... (Complete abstract: click electronic access below)
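The STFT-plus-ROP statistic can be illustrated with a toy version: the per-frame ratio of power in a monitored band to total power, which jumps when a chatter-like tone appears. The frequencies, frame length, and band below are invented, not the thesis's values.

```python
import numpy as np

def band_power_ratio(x, fs, frame, band):
    """Per-frame ratio of power in `band` (Hz) to total power, computed
    from a short-time Fourier transform: a simple ROP-style statistic."""
    n = len(x) // frame
    freqs = np.fft.rfftfreq(frame, d=1 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    rops = []
    for i in range(n):
        spec = np.abs(np.fft.rfft(x[i * frame:(i + 1) * frame])) ** 2
        rops.append(spec[in_band].sum() / spec.sum())
    return np.array(rops)

fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
x = 0.05 * np.random.default_rng(2).standard_normal(len(t))
x[len(t) // 2:] += np.sin(2 * np.pi * 900 * t[len(t) // 2:])  # chatter-like tone

rop = band_power_ratio(x, fs, frame=2000, band=(850, 950))
print(rop.round(2))  # low in the first half, near 1.0 once the tone appears
```

Thresholding such a statistic over time is one simple way to flag the onset of chatter in a monitored frequency band.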
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
18

Hadri, Salah Eddine. "Contribution à la synthèse de structures optimales pour la réalisation des filtres et de régulateurs en précision finie." Vandoeuvre-les-Nancy, INPL, 1996. http://www.theses.fr/1996INPL129N.

Full text
Abstract:
One of the main problems in digital signal processing is the finite precision of computations. This study addresses minimizing the harmful effects of numerical errors on the performance of digital filters and controllers. First, analytical methods are presented that yield a quantitative expression for the error due to quantization in digital filters and controllers. The parameters that influence the performance of digital control under finite precision, and their interactions, are then studied and analyzed. The next step is devoted to synthesizing filter and controller structures with the best numerical properties in terms of certain optimality criteria. Existing methods and results are presented. Our contribution is to establish, under less restrictive hypotheses, more general optimality conditions that yield a larger set of optimal realizations, including the optimal realization using a minimum number of coefficients. We show that the optimality conditions given in previous work are sufficient but not necessary. The methods used solve the optimization problem by arriving at particular solutions. However, the quantities taken as measures of round-off noise and of transfer-function sensitivity to coefficient quantization do not allow the performance of different realizations to be compared. Our methodology unifies several objectives and concepts that had until now been treated independently; among other things, it allows the ideas developed for filters to be applied directly to controllers (taking the loop into account).
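The coefficient-quantization effect discussed above can be demonstrated on a second-order IIR section with poles near the unit circle, exactly the situation where finite-precision realizations are most sensitive. The word length and pole location below are illustrative, not taken from the thesis.

```python
import numpy as np

def quantize(c, bits):
    """Round coefficients to `bits` fractional bits (fixed-point model)."""
    q = 2.0 ** bits
    return np.round(np.asarray(c) * q) / q

def pole_radius(a):
    """Largest pole magnitude of a filter with denominator coefficients a."""
    return np.abs(np.roots(a)).max()

# A narrow-band second-order section: poles at radius 0.995, where
# coefficient quantization displaces the poles most noticeably.
r, w0 = 0.995, 0.03 * np.pi
a = [1.0, -2 * r * np.cos(w0), r * r]   # denominator in direct form

print(pole_radius(a))                # 0.995: the designed pole radius
print(pole_radius(quantize(a, 8)))   # 8-bit coefficients: the poles move
```

The displacement depends on the realization structure, which is precisely why synthesizing structures with good numerical properties (rather than direct form) is worthwhile.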
APA, Harvard, Vancouver, ISO, and other styles
19

Almansa, Andrés. "Sur quelques problèmes mathématiques en analyse d'images et vision stéréoscopique." Habilitation à diriger des recherches, Université René Descartes - Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00011765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

ESPINOLA, Sérgio de Brito. "Análise acústica para classificação de patologias da voz empregando análise de Componentes Principais, Redes Neurais Artificiais e Máquina de vetores de Suporte." Universidade Federal de Campina Grande, 2014. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/123.

Full text
Abstract:
An estimated one-third of the human workforce depends on the voice to do its work. Medical procedures assess an individual's vocal quality; the most widely used are based either on listening to the voice (subjective) or on inspecting the vocal folds through sophisticated exams (objective, but invasive and expensive). Acoustic voice analysis seeks to extract robust measures that describe phenomena associated with speech production, or intrinsic characteristics of the speaker such as fundamental frequency, timbre, etc. This study characterizes a digital voice-processing model to support diagnosis, in the context of building automated systems for identifying speech pathologies. To evaluate the proposed technique, a database (the KAY base) was used, structured by specialists into six groups of pathologies, to which a "Normal" voice group was added. In total, 182 voices were selected, each with an indexed catalogue of about 33 descriptors computed from the utterance of the sustained vowel \a\. By selecting combinations of these descriptors, such as frequency perturbation (jitter) and amplitude perturbation (shimmer), this study found statistical evidence and showed that it is possible to: a) separate normal voices from pathological ones (as expected); b) separate specific pathologies (paralysis, Reinke's edema, nodules) with 100% accuracy for the great majority of these combinations, and about 92% for nodules versus Reinke's edema; c) discriminate them with classifiers (artificial neural networks and support vector machines), reducing dimensionality and complexity (amount of data) via principal component analysis (PCA) on these descriptors for intra-pathology separation; and d) confirm, through statistical tests on the local groups, abnormality thresholds reported in the literature.
Using a smaller number of descriptors, obtained after PCA (compression), also proved efficient, with the same accuracy rates.
It is estimated that one-third of the workforce relies on the voice in their jobs. Clinical diagnosis may be performed by a specialist listening to the voice (a subjective perspective) or through invasive, often expensive exams that inspect the vocal structures. Acoustic voice analysis aims to extract robust measurements describing phenomena associated with voice production, or individual human characteristics such as fundamental frequency, timbre, etc. This study characterizes a digital voice-processing model to support the construction of automatic systems for identifying speech disorders (to aid the diagnosis of pathologies). To support this investigation and the proposed model, a commercial voice database (the KAY base) was used with the endorsement of medical specialists. Acoustic analyses derived from these speech samples were presented to professionals for classification, and six case-study "severity groups" were built. A Normal group was then added, and in the end 182 voices were selected. The refined audio database contains, among other things, an indexed list of vocal descriptors computed from utterances of the sustained vowel \a\. Statistical evidence was found that: a) the vocal descriptors of the pathological groups differ from those of the normal group (as expected); b) 100% true-positive rates were achieved in most cases when separating paralysis, Reinke's edema, and nodules; c) in a few pairwise comparisons among disordered groups (paralysis, Reinke's edema, nodules, and edema), only minor distinctions were detected; and d) combining machine learning algorithms (artificial neural networks and support vector machines) with principal component analysis (PCA) and basic statistics yielded results that can help structure automated recognition systems. These supervised learning methods showed that it is possible to predict the presence of a disorder for new data; internal tests also confirmed reference thresholds established in the literature. Hence suitable combinations of descriptors with two machine-learning classifiers, as shown, are sufficient and worthwhile.
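Jitter and shimmer, the perturbation descriptors highlighted above, can be sketched from a pre-extracted sequence of pitch periods and cycle peak amplitudes. The definitions below follow the common "local" variants (mean absolute difference of consecutive cycles over the mean); the sample values are illustrative, not taken from the KAY base.

```python
def jitter_local(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, divided by the mean period (a dimensionless fraction)."""
    diffs = [abs(a - b) for a, b in zip(periods[1:], periods[:-1])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_local(amps):
    """Local shimmer: the same measure applied to cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amps[1:], amps[:-1])]
    return (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

# A perfectly periodic voice has zero jitter; perturbation raises it.
steady = [0.010] * 5                               # 10 ms periods -> 100 Hz
perturbed = [0.010, 0.011, 0.010, 0.011, 0.010]    # alternating perturbation
```

In a pathology classifier, such per-recording scalars would be stacked into the descriptor vector fed to PCA and then to the neural network or SVM.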
APA, Harvard, Vancouver, ISO, and other styles
21

Sargent, Gabriel. "Estimation de la structure de morceaux de musique par analyse multi-critères et contrainte de régularité." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00853737.

Full text
Abstract:
Recent developments in information and communication technologies have made it easy to browse large catalogues of music pieces. New representations and algorithms must therefore be developed to obtain a representative view of these catalogues and to navigate their contents with agility. This requires an efficient characterization of music pieces through relevant macroscopic descriptions. This thesis focuses on estimating the structure of music pieces: producing, for each piece, a description of its organization as a sequence of a few dozen structural segments, each defined by its boundaries (a start time and an end time) and by a label representing its sound content. The notion of musical structure can take multiple meanings depending on the musical properties chosen and the time scale considered. We introduce the concept of "semiotic" structure, which allows the definition of an annotation methodology covering a wide range of musical styles. Structural segments are determined from the analysis of similarities between segments within the piece, from the consistency of their internal organization (the "system-contrast" model), and from the contextual relations they maintain with one another. A corpus of 383 pieces was annotated according to this methodology and made available to the scientific community. In terms of algorithmic contributions, this thesis focuses first on the estimation of structural boundaries, formulating the segmentation process as the optimization of a cost composed of two terms: the first characterizes structural segments through audio criteria, and the second reflects the regularity of the resulting structure with respect to a "structural pulse".
Within this formulation, we compare several regularity constraints and study the combination of audio criteria by fusion. The estimation of structural labels is addressed as a finite-state automaton selection process: we propose a self-adaptive criterion for selecting probabilistic models, which we apply to a description of tonal content. We also present a segment labeling method derived from the system-contrast model. We evaluate several automatic music structure estimation systems based on these approaches in national and international evaluation campaigns (Quaero, MIREX), and complete the study with additional diagnostic elements.
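The boundary-estimation idea described above, an audio criterion combined with a regularity penalty around a "structural pulse", can be sketched as a small dynamic program. This is a toy reading of the formulation, not the thesis's algorithm: `novelty` is an assumed per-frame boundary score, and the quadratic penalty favors segment lengths close to the pulse.

```python
def segment(novelty, pulse, lam=1.0):
    """Choose boundaries minimizing, over segments, a reward for placing
    boundaries at high-novelty frames plus lam * (length - pulse)^2,
    via dynamic programming over frame indices."""
    n = len(novelty)
    INF = float("inf")
    best = [INF] * (n + 1)        # best[i]: optimal cost of segmenting frames [0, i)
    prev = [0] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - 4 * pulse), i):   # candidate previous boundary
            c = best[j] - novelty[i - 1] + lam * ((i - j) - pulse) ** 2
            if c < best[i]:
                best[i], prev[i] = c, j
    bounds, i = [], n
    while i > 0:                  # backtrack the chosen boundaries
        bounds.append(i)
        i = prev[i]
    return sorted(bounds)
```

With a regularity weight of zero, the program simply cuts at every rewarding frame; raising `lam` pulls boundaries toward multiples of the structural pulse.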
APA, Harvard, Vancouver, ISO, and other styles
22

Maurandi, Victor. "Algorithmes pour la diagonalisation conjointe de tenseurs sans contrainte unitaire. Application à la séparation MIMO de sources de télécommunications numériques." Thesis, Toulon, 2015. http://www.theses.fr/2015TOUL0009/document.

Full text
Abstract:
This thesis develops joint diagonalization methods for matrices and third-order tensors, applied to the MIMO separation of digital telecommunication sources. After a state of the art, the motivations and objectives of the thesis are presented. The joint diagonalization and source separation problems are defined, and a link between the two fields is established. Several Jacobi-like iterative algorithms based on an LU parameterization are then developed. For each algorithm, we propose to compute the diagonalizing matrices by optimizing an inverse criterion. Two approaches to minimizing the criterion are considered: a direct one, and one assuming that the elements of the considered set are nearly diagonal. Regarding the estimation of the problem parameters, two strategies are implemented: estimating all parameters independently, or independently estimating specifically chosen pairs of parameters. We thus propose three algorithms for the joint diagonalization of symmetric or Hermitian complex matrices, and two algorithms for the joint diagonalization of sets of tensors that are symmetric, non-symmetric, or admit an INDSCAL decomposition. We also show the link between the joint diagonalization of third-order tensors and the canonical polyadic decomposition of a fourth-order tensor, and compare the developed algorithms with several methods from the literature. The good behavior of the proposed algorithms is illustrated through numerical simulations, and they are then validated on the separation of digital telecommunication sources.
This thesis develops joint diagonalization methods for matrices and third-order tensors, for MIMO source separation in the field of digital telecommunications. After a state of the art, the motivations and objectives are presented. The joint diagonalization and blind source separation problems are then defined, and a link between the two fields is established. Thereafter, five Jacobi-like iterative algorithms based on an LU parameterization are developed. For each of them, we propose to derive the diagonalization matrix by optimizing an inverse criterion. Two ways are investigated: minimizing the criterion directly, or assuming that the elements of the considered set are almost diagonal. Regarding the derivation of the parameters, two strategies are implemented: one estimates each parameter independently, the other derives well-chosen couples of parameters independently. Hence, we propose three algorithms for the joint diagonalization of symmetric or Hermitian complex matrices: the first relies on finding the roots of the criterion's derivative, the second on searching for a minor eigenvector, and the last on a gradient descent enhanced by computation of the optimal adaptation step. For the joint diagonalization of symmetric, INDSCAL, or non-symmetric third-order tensors, we developed two algorithms; for each, the parameters are derived by computing the roots of the derivative of the considered criterion. We also show the link between the joint diagonalization of a set of third-order tensors and the canonical polyadic decomposition of a fourth-order tensor, and confront both methods through numerical simulations. The good behavior of the proposed algorithms is illustrated by means of computer simulations. Finally, they are applied to the source separation of digital telecommunication signals.
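A joint diagonalizer drives an inverse-style criterion, the off-diagonal energy of the transformed set, toward zero. The sketch below only evaluates that criterion on a toy set built from a known mixing matrix; the LU-parameterized Jacobi iterations developed in the thesis are not reproduced here.

```python
import numpy as np

def off_criterion(matrices, B):
    """Sum, over the set, of the squared off-diagonal entries of B M B^T:
    the quantity a joint diagonalization algorithm drives toward zero."""
    total = 0.0
    for M in matrices:
        T = B @ M @ B.T
        total += np.sum(T ** 2) - np.sum(np.diag(T) ** 2)
    return total

# Toy set sharing one diagonalizer: M_k = A D_k A^T with diagonal D_k.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [0.2, 1.0]])
mats = [A @ np.diag(rng.uniform(1, 2, 2)) @ A.T for _ in range(3)]
B = np.linalg.inv(A)   # exact joint diagonalizer for this toy set
```

The criterion vanishes at the true diagonalizer and stays clearly positive at, say, the identity, which is what an iterative minimizer exploits.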
APA, Harvard, Vancouver, ISO, and other styles
23

Postolski, Michal. "Discrete topology and geometry algorithms for quantitative human airway trees analysis based on computed tomography images." Phd thesis, Université Paris-Est, 2013. http://pastel.archives-ouvertes.fr/pastel-00977514.

Full text
Abstract:
Computed tomography is a very useful technique that allows non-invasive diagnosis in many applications; for example, it is used with success in industry and medicine. However, manual analysis of the structures of interest can be tedious and extremely time-consuming, or even impossible due to their complexity. In this thesis we therefore study and develop discrete geometry and topology algorithms suitable for many practical applications, especially the automatic quantitative analysis of human airway trees based on computed tomography images. In the first part, we define the basic notions of discrete topology and geometry, then show that several classes of discrete methods, such as skeletonisation algorithms, medial axes, tunnel-closing algorithms, and tangent estimators, are widely used in different practical applications. The second part proposes and develops the theory of new methods for solving particular problems. We introduce two new medial axis filtering methods: the hierarchical scale medial axis, which is based on the previously proposed scale axis transform but is free of its drawbacks, and the discrete adaptive medial axis, in which the filtering parameter is dynamically adapted to the local size of the object. In this part we also introduce an efficient, parameter-free tangent estimator along three-dimensional discrete curves, called the 3D maximal segment tangent direction. Finally, we show that discrete geometry and topology algorithms are useful for the quantitative analysis of human airway trees based on computed tomography images. Following the system design proposed in the literature, we apply discrete topology and geometry algorithms to particular problems at each step of the quantitative analysis process. First, we propose a robust method for segmenting the airway tree from CT datasets.
The method is based on the tunnel-closing algorithm and is used as a tool to repair CT images damaged by acquisition errors. We also propose an algorithm to create an artificial model of the bronchial tree, and use this model to validate the algorithms presented in this work. Then we compare the quality of different algorithms through a set of experiments conducted on computer phantoms and a real CT dataset. We show that recently proposed methods working in the cubical complex framework, together with the methods introduced in this work, can overcome problems reported in the literature and provide a good basis for a future implementation of a system for automatic quantification of bronchial tree properties.
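Medial axis filtering prunes skeleton points whose inscribed-ball radius is small. The sketch below shows only that thresholding idea, using SciPy's Euclidean distance transform on a binary image; it is not the hierarchical scale or adaptive medial axis proposed in the thesis, whose filtering parameter varies with local object size.

```python
import numpy as np
from scipy import ndimage

def filtered_medial_axis(mask, lam):
    """Keep object pixels whose Euclidean distance to the background is at
    least lam, so thin protrusions (radius < lam) drop out while the core
    of the object survives — a fixed-threshold stand-in for medial axis
    filtering."""
    dist = ndimage.distance_transform_edt(mask)
    return dist >= lam

square = np.zeros((9, 9))
square[1:8, 1:8] = 1                      # a 7x7 square of object pixels
core = filtered_medial_axis(square, 3)    # only the 3x3 deep core remains
```

Raising `lam` shrinks the retained set toward the deepest pixels, which is exactly the stability/detail trade-off that medial axis filtering negotiates.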
APA, Harvard, Vancouver, ISO, and other styles
24

Martins, Carlos Henrique Nascimento. "Plataforma de processamento de sinais para aplicações em sistemas de potência." Universidade Federal de Juiz de Fora (UFJF), 2011. https://repositorio.ufjf.br/jspui/handle/ufjf/4100.

Full text
Abstract:
The main objective of this work is to present the development of high-performance signal-processing electronic platforms for monitoring electric power systems. Hardware architectures are discussed for three power-system applications: a phasor measurement unit (PMU), a power quality (PQ) analyzer, and a time-varying harmonic analyzer (TVHA). All the concepts of digital and analog electronics involved in the design are also covered, taking as a basis commercial equipment, the relevant literature, and the regulatory standards for devices that analyze electrical parameters. The project focuses mainly on the hardware implementation, which involves analog-to-digital conversion structures, anti-aliasing filtering, signal conditioning, data processing and management, and finally communication interfaces. The hardware was tested with basic signal-processing algorithms; real cases of monitoring electrical signal parameters are presented, along with an initial version of the TVHA.
This work presents the development of electronic signal-processing platforms for high-performance monitoring of electric power systems. Hardware architectures are discussed for three power-system applications: a phasor measurement unit (PMU), a power quality (PQ) analyzer, and a time-varying harmonic analyzer (TVHA). All the analog and digital electronics involved in the design are explained, based on commercial devices, the literature, and regulatory standards for electrical parameter devices. The project addresses principally the hardware implementation, which involves structures such as analog-to-digital conversion, the anti-aliasing filter, signal conditioning, processing, data management, and communication. The hardware is tested using basic digital signal-processing algorithms, real cases of parameter monitoring are presented, and a prototype version of the TVHA is shown.
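A PMU's core operation, estimating the fundamental phasor of the power signal, can be sketched with a one-cycle DFT. The samples-per-cycle count and the test waveform below are illustrative assumptions, not the platform's actual acquisition parameters.

```python
import cmath
import math

def phasor(samples, n_per_cycle):
    """One-cycle DFT estimate of the fundamental phasor (peak magnitude
    and phase) from n_per_cycle samples of a power-system waveform."""
    n = n_per_cycle
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    return 2.0 * acc / n     # scale so |result| is the cosine's peak amplitude

# Test tone: amplitude 10, phase +30 degrees, 16 samples per cycle.
N = 16
wave = [10 * math.cos(2 * math.pi * k / N + math.radians(30)) for k in range(N)]
est = phasor(wave, N)
```

On this clean tone the estimate is exact; in a real PMU the same computation runs behind the anti-aliasing filter and A/D converter described above, typically with windowing against off-nominal frequency.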
APA, Harvard, Vancouver, ISO, and other styles
25

Le, Borgne Yann-Aël. "Learning in wireless sensor networks for energy-efficient environmental monitoring." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210334.

Full text
Abstract:
Wireless sensor networks form an emerging class of computing devices capable of observing the world with unprecedented resolution, and promise to provide a revolutionary instrument for environmental monitoring. Such a network is composed of a collection of battery-operated wireless sensors, or sensor nodes, each equipped with sensing, processing, and wireless communication capabilities. Thanks to advances in microelectronics and wireless technologies, wireless sensors are small in size and can be deployed at low cost in different kinds of environments, in order to monitor over both space and time the variations of physical quantities such as temperature, humidity, light, or sound.

In environmental monitoring studies, many applications are expected to run unattended for months or years. Sensor nodes are, however, constrained by limited resources, particularly in terms of energy. Since communication is an order of magnitude more energy-consuming than processing, the design of data collection schemes that limit the amount of transmitted data is recognized as a central issue for wireless sensor networks.

An efficient way to address this challenge is to approximate, by means of mathematical models, the evolution of the measurements taken by sensors over space and/or time. Indeed, whenever a mathematical model may be used in place of the true measurements, significant gains in communication may be obtained by transmitting only the parameters of the model instead of the set of real measurements. Since in most cases there is little or no a priori information about the variations of the sensor measurements, the models must be identified in an automated manner. This calls for the use of machine learning techniques, which make it possible to model future measurements on the basis of past ones.

This thesis brings two main contributions to the use of learning techniques in a sensor network. First, we propose an approach which combines time series prediction and model selection to reduce the amount of communication. The rationale of this approach, called adaptive model selection, is to let the sensors determine in an automated manner a prediction model that not only fits their measurements, but also reduces the amount of transmitted data.

The second main contribution is the design of a distributed approach for modeling sensed data, based on principal component analysis (PCA). The proposed method transforms the measurements along a routing tree in such a way that (i) most of the variability in the measurements is retained, and (ii) the network load sustained by sensor nodes is reduced and more evenly distributed, which in turn extends the overall network lifetime. The framework can be seen as a truly distributed principal component analysis, and finds applications not only in approximate data collection tasks, but also in event detection and recognition tasks.
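The adaptive-model-selection idea, letting a sensor transmit only when its prediction model fails, can be sketched with the simplest possible model: the last transmitted value, mirrored at the sink. The thesis selects among richer time-series models, but the radio-saving mechanism is the same; the readings and threshold below are illustrative.

```python
def transmit_filter(readings, threshold):
    """Dual-prediction sketch: sensor and sink both hold the last
    transmitted value as the model; the sensor sends a new sample only
    when the model's error exceeds threshold, cutting radio traffic."""
    sent = []
    model = None
    for t, x in enumerate(readings):
        if model is None or abs(x - model) > threshold:
            model = x              # update the shared model on both ends
            sent.append((t, x))    # one radio message
    return sent

temps = [20.0, 20.1, 20.05, 22.0, 22.1, 22.0]   # a step change at t = 3
msgs = transmit_filter(temps, 0.5)
```

Only two of the six samples need the radio here; the sink reconstructs the rest from the model, within the chosen error bound.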

/

Wireless sensor networks form a new family of computing systems that make it possible to observe the world with unprecedented resolution. In particular, these systems promise to revolutionize the field of environmental monitoring. Such a network is composed of a set of wireless sensors, or sensor units, able to collect, process, and transmit information. Thanks to advances in microelectronics and wireless technologies, these systems are both small and inexpensive. This allows their deployment in different kinds of environments, in order to observe the evolution over time and space of physical quantities such as temperature, humidity, light, or sound.

In environmental monitoring, measurement systems must often operate autonomously for months or years. Wireless sensors, however, have limited resources, particularly in terms of energy. Since radio communications are an order of magnitude more energy-consuming than the processor, designing data collection methods that limit data transmission has become one of the main challenges raised by this technology.

This challenge can be addressed efficiently through mathematical models of the spatiotemporal evolution of the sensor measurements. Indeed, if such a model can be used in place of the measurements, significant communication gains can be obtained by using the model parameters as a substitute for the measurements. In most cases, however, little or no information about the nature of the measurements is available, so no model can be defined a priori. In these cases, techniques from the field of machine learning are particularly appropriate: they aim to create these models autonomously, by anticipating future measurements on the basis of past ones.

This thesis makes two main contributions to the application of machine learning techniques in wireless sensor networks. First, we propose an approach that combines time series prediction with model selection in order to reduce communication. The rationale of this approach, called adaptive model selection, is to let sensor units autonomously determine a prediction model that correctly anticipates their measurements while reducing the use of their radio.

Second, we designed a method for modeling the collected measurements in a distributed way, based on principal component analysis (PCA). The method transforms the measurements along a routing tree so that (i) most of the variations in the sensor measurements are preserved, and (ii) the network load is reduced and better distributed, which also increases the network lifetime. The proposed approach truly distributes the PCA, and can be used for data collection applications as well as for event detection or classification.


Doctorate in Sciences

APA, Harvard, Vancouver, ISO, and other styles
26

Yang, Tai-Yi, and 楊泰宜. "Realization of Exponential Signal Analyzer by the Digital Signal Processor." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/38607258884751688928.

Full text
Abstract:
Master's thesis
I-Shou University
Master's Program, Department of Electrical Engineering
96
This thesis offers a complete method to find the exact parameters of an exponential signal: frequencies, dampings, amplitudes, and phases. The method combines time-domain and frequency-domain techniques and comprises three major steps: a frequency-domain interpolated algorithm, the gradient method, and quadratic interpolation. In step one, the time-domain signal is transformed into the frequency domain, and approximate parameters are found by the frequency-domain interpolated algorithm; a re-established signal close to the real one is also produced in this step. In step two, the gradient method refines the re-established signal three times in the time domain, which efficiently overcomes the interference of non-linear factors and noise. Step three iterates with quadratic interpolation, which improves search efficiency and reduces the number of iterations. After a few iterations, the method obtains the exact parameters. This thesis implements the above theory in an analyzer based on the TMS320LF2407 DSP. The analyzer is composed of a sensor, a voltage-conversion circuit, an A/D converter, a keyboard, an LCM (LCD module), and the DSP, and features real-time analysis, processing, and display.
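The quadratic-interpolation step mentioned above can be sketched in isolation: fit a parabola through three magnitude samples around a spectral peak and return the vertex offset as a fractional-bin refinement. The test function below is an exact parabola, an illustrative assumption under which the estimate is exact; on real spectra it is only an approximation, which is why the thesis iterates.

```python
def quad_interp_peak(m_left, m_peak, m_right):
    """Parabolic interpolation of three samples around a peak; returns the
    vertex offset from the center sample, in (-0.5, 0.5) bins."""
    denom = m_left - 2 * m_peak + m_right
    return 0.5 * (m_left - m_right) / denom

# Parabola whose true vertex sits 0.3 bins right of the center sample.
f = lambda x: 5 - (x - 0.3) ** 2
delta = quad_interp_peak(f(-1), f(0), f(1))
```

Adding `delta` to the integer peak-bin index gives the refined frequency estimate that the subsequent gradient iterations would then polish.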
APA, Harvard, Vancouver, ISO, and other styles
27

Yu, Zhuizhuan. "Digitally-Assisted Mixed-Signal Wideband Compressive Sensing." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9328.

Full text
Abstract:
Digitizing wideband signals requires very demanding analog-to-digital conversion (ADC) speed and resolution specifications. In this dissertation, a mixed-signal parallel compressive sensing system is proposed to realize the sensing of wideband sparse signals at sub-Nyquist rate by exploiting the signal sparsity. The mixed-signal compressive sensing is realized with a parallel segmented compressive sensing (PSCS) front-end, which not only can filter out the harmonic spurs that leak from the local random generator, but also provides a tradeoff between the sampling rate and the system complexity, such that a practical hardware implementation is possible. Moreover, the signal randomization in the system is able to spread the spurious energy due to ADC nonlinearity along the signal bandwidth rather than concentrating it on a few frequencies, as is the case for a conventional ADC. This important new property relaxes the ADC SFDR requirement when sensing frequency-domain sparse signals. The performance of the mixed-signal compressive sensing system is greatly impacted by the accuracy of the analog circuit components, especially with the scaling of CMOS technology. In this dissertation, the effects of circuit imperfections in the mixed-signal compressive sensing system based on the PSCS front-end, such as finite settling time and timing uncertainty, are investigated in detail. An iterative background calibration algorithm based on LMS (Least Mean Squares) is proposed, which is shown to effectively calibrate the errors due to these non-ideal circuit factors. A low-speed prototype built with off-the-shelf components is presented. The prototype is able to sense sparse analog signals with up to 4 percent sparsity at 32 percent of the Nyquist rate. Many practical constraints that arose while building the prototype, such as circuit nonidealities, are addressed in detail, which provides good insights for a future high-frequency integrated circuit implementation.
Based on that, a high-frequency sub-Nyquist-rate receiver exploiting parallel compressive sensing was designed and fabricated in IBM 90 nm CMOS technology, and measurement results are presented to show the capability of wideband compressive sensing at sub-Nyquist rate. To the best of our knowledge, this prototype is the first reported integrated chip for wideband mixed-signal compressive sensing. In simulation, the prototype achieves 7 bits ENOB and a 3 GS/s equivalent sampling rate assuming a state-of-the-art 0.5 ps jitter variance, and its figure of merit beats that of state-of-the-art high-speed Nyquist ADCs by 2-3 times. The proposed mixed-signal compressive sensing system can be applied in various fields; in particular, its applications to wideband spectrum sensing for cognitive radios and to spectrum analysis in RF tests are discussed in this work.
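Sparse recovery from sub-Nyquist measurements can be sketched with Orthogonal Matching Pursuit, one standard reconstruction algorithm for compressive sensing (the dissertation's PSCS front-end and LMS calibration are not modeled here). The random measurement matrix and 2-sparse signal below are illustrative; one extra greedy iteration is allowed as slack.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column of Phi most
    correlated with the residual, re-fit all chosen atoms by least
    squares, and repeat for k iterations."""
    residual, support = y.astype(float), []
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0                  # never reselect an atom
        j = int(np.argmax(corr))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 50)) / np.sqrt(20)     # 20 measurements of a length-50 signal
x_true = np.zeros(50)
x_true[[7, 31]] = [1.5, -2.0]                         # 2-sparse spectrum
x_hat = omp(Phi, Phi @ x_true, 3)                     # k = 3: one spare iteration
```

With far fewer measurements than signal samples, the sparse support and amplitudes are still recovered, which is the property the sub-Nyquist receiver relies on.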
APA, Harvard, Vancouver, ISO, and other styles
28

Ribeiro, Diogo Carlos Alcobia. "Instrumentation for measurement and characterization of mixed-signal devices." Doctoral thesis, 2018. http://hdl.handle.net/10773/24802.

Full text
Abstract:
This PhD thesis concerns the development of radio-frequency-oriented measurement and characterization approaches for mixed-signal devices. Mixed-signal devices are an important building block for newer, higher data-rate, smart radios; however, intuitive and simple characterization approaches have not yet been developed for them. The most basic mixed-signal devices are the ADC and the DAC. ADCs and DACs are considered in this work, as well as more complex mixed-signal devices and even entire (integrated) radio front-ends. A microwave network analysis approach, in an S-parameters-like fashion, is used to augment the modeling techniques for mixed-signal devices. This type of behavioral modeling is extensively supported by the tools used during RF design, and will allow RF engineers to account for the non-ideal effects of these devices in a simpler way. The final outcome may be used to establish an instrument capable of characterizing both basic and more complex mixed-signal devices, in the same way a traditional VNA does for fully analog devices.
This doctoral work addresses the development of measurement and characterization techniques for mixed-signal (analog-digital) devices from a radio-frequency perspective. Mixed-signal devices are important building blocks in the design of new radios with higher data rates, or even smart radios. ADCs and DACs can be considered the simplest mixed-signal devices. This work considers ADCs and DACs, as well as more complex mixed-signal devices, and even complete radio chains. An approach based on microwave circuit analysis, similar to that used by S-parameters, is used to extend the characterization techniques for mixed-signal devices. This type of behavioral characterization is widely supported by the tools used in radio equipment development. The technique will allow RF engineers to account for the non-ideal effects of this kind of device in a simpler way. The final result of this work may be used to establish an instrument capable of characterizing simple or more complex mixed-signal devices, in the same way a VNA is used for purely analog devices.
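The S-parameters-like view of a mixed-signal device reduces, for a single test tone, to a complex output/input ratio at the tone's DFT bin. The sketch below assumes a hypothetical memoryless device with gain 0.5 and a 45-degree phase lag; a VNA-style instrument would sweep such tones across frequency.

```python
import cmath
import math

def tone_response(x, y, bin_k):
    """Complex gain of a device at one test tone: the ratio of output to
    input DFT coefficients at bin_k, from records of equal length."""
    n = len(x)
    def coef(s):
        return sum(s[t] * cmath.exp(-2j * math.pi * bin_k * t / n)
                   for t in range(n))
    return coef(y) / coef(x)

N = 64
x = [math.cos(2 * math.pi * 5 * t / N) for t in range(N)]          # stimulus at bin 5
y = [0.5 * math.cos(2 * math.pi * 5 * t / N - math.pi / 4)         # hypothetical device output
     for t in range(N)]
g = tone_response(x, y, 5)
```

The recovered magnitude and phase play the role that S21 plays for a fully analog two-port, which is the analogy the thesis builds on.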
Doctoral Programme in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
