
Dissertations / Theses on the topic 'Optimum Sampling Design'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 14 dissertations / theses for your research on the topic 'Optimum Sampling Design.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Ringer, William P. "Design, construction and analysis of a 14-bit direct digital antenna utilizing optical sampling and optimum SNS encoding." Monterey, California: Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8215.

Full text
Abstract:
Approved for public release; distribution is unlimited.
Direct digital direction finding (DF) antennas allow an incoming signal to be digitally encoded at the antenna with high dynamic range (14 bits, approximately 86 dB) without the down-conversion that is typically necessary. As a shipboard DF device, this also allows the encoding of wideband, high-power signals (e.g., ±43 volts) that can appear on shipboard antennas due to the presence of in-band transmitters located nearby. The design utilizes three pulsed-laser-driven Mach-Zehnder optical interferometers to sample the RF signal. Each channel requires only 6-bit accuracy (64 comparators) to produce an Optimum Symmetrical Number System (OSNS) residue representation of the input signal. These residues are then sent to a locally programmed Field Programmable Gate Array (FPGA) for decoding into a 14-bit digital representation of the input RF voltage. Modern FPGA devices are rapidly becoming the state of the art in programmable logic, and the inclusion of on-chip flip-flops allows a fast and efficient pipelined approach to OSNS decoding. This thesis documents the first 14-bit digital antenna that utilizes an FPGA algorithm for OSNS decoding; the design uses FPGA processors for both OSNS decoding and parity processing.
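As a rough illustration of the number-system idea behind this design, the sketch below folds an input level through symmetrical triangle waveforms, one per modulus, and recovers it from the residue vector with a lookup table standing in for the FPGA decoding pipeline. The moduli and folding function are invented for demonstration and are not the thesis's actual OSNS parameters.

```python
# Illustrative sketch of symmetrical-number-system style encoding/decoding.
# The moduli below are hypothetical; the thesis's OSNS channel design differs.

MODULI = (5, 7, 8)

def sns_residue(x: int, m: int) -> int:
    """Fold the input through a triangle wave of period 2m, mimicking what
    one folding-interferometer channel presents to its comparator ladder."""
    r = x % (2 * m)
    return r if r < m else 2 * m - r

def encode(x: int) -> tuple:
    return tuple(sns_residue(x, m) for m in MODULI)

# Build a decode table (a stand-in for the FPGA logic) and find the largest
# run of input levels whose residue vectors remain unambiguous.
decode = {}
x = 0
while encode(x) not in decode:
    decode[encode(x)] = x
    x += 1
print(f"unambiguous dynamic range: {x} levels")
if 42 < x:
    print(decode[encode(42)])  # -> 42, since 42 lies inside that range
```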
APA, Harvard, Vancouver, ISO, and other styles
2

De, Schaetzen Werner. "Optimal calibration and sampling design for hydraulic network models." Thesis, University of Exeter, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Afrifa-Yamoah, Ebenezer. "Imputation, modelling and optimal sampling design for digital camera data in recreational fisheries monitoring." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2387.

Full text
Abstract:
Digital camera monitoring has evolved as an active application-oriented scheme to help address questions in areas such as fisheries, ecology, computer vision, artificial intelligence, and criminology. In recreational fisheries research, digital camera monitoring has become a viable option for probability-based survey methods, and is also used for corroborative and validation purposes. In comparison to onsite surveys (e.g. boat ramp surveys), digital cameras provide a cost-effective method of monitoring boating activity and fishing effort, including night-time fishing activities. However, there are challenges in the use of digital camera monitoring that need to be resolved; notably, missing-data problems and the cost of data interpretation are among the most pertinent. This study provides relevant statistical support to address these challenges of digital camera monitoring of boating effort, to improve its utility for recreational fisheries management in Western Australia and elsewhere, with capacity to extend to other areas of application. Digital cameras can provide continuous recordings of boating and other recreational fishing activities; however, interruptions of camera operations can lead to significant gaps within the data. To fill these gaps, climatic and other temporal classification variables were considered as predictors of boating effort (defined as the number of powerboat launches and retrievals). A generalized linear mixed-effects model built on a fully conditional specification multiple-imputation framework was considered to fill in the gaps in the camera dataset. Specifically, the zero-inflated Poisson model was found to satisfactorily impute plausible values for missing observations over varied durations of outages in the digital camera monitoring data of recreational boating effort. Additional modelling options were explored to guide both short- and long-term forecasting of boating activity and to support management decisions in monitoring recreational fisheries. Autoregressive conditional Poisson (ACP) and integer-valued autoregressive (INAR) models were identified as useful time series models for predicting the short-term behaviour of such data. In Western Australia, digital camera monitoring data that coincide with 12-month state-wide boat-based surveys (now conducted on a triennial basis) have been read, but the periods between the surveys have not. A Bayesian regression framework was applied to describe the temporal distribution of recreational boating effort using climatic and temporally classified variables to help construct data for such missing periods. This can potentially provide a useful cost-saving alternative for obtaining continuous time series data on boating effort. Finally, data from digital camera monitoring are often manually interpreted, and the associated cost can be substantial, especially if multiple sites are involved. Empirical support is provided for low-level monitoring schemes for digital cameras: manual interpretation of camera footage for 40% of the days within a year was found to be an adequate level of sampling effort to obtain unbiased, precise and accurate estimates that meet broad management objectives. A well-balanced low-level monitoring scheme will ultimately reduce the cost of manual interpretation and produce unbiased estimates of recreational fishing indices from digital camera surveys.
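The zero-inflated Poisson (ZIP) structure behind the imputation can be made concrete with a short sketch: each day's count is zero with some probability (no boating activity) and otherwise Poisson. The parameter values, outage window and number of imputations below are invented; in the thesis the parameters are driven by climatic and temporal covariates inside a mixed-effects, fully conditional specification framework.

```python
# Minimal sketch of ZIP-based multiple imputation for a camera outage.
import numpy as np

rng = np.random.default_rng(0)
pi_zero, lam = 0.3, 4.0        # hypothetical zero-inflation prob. and mean rate

def draw_zip(n):
    """Draw n boating-effort counts from the ZIP model."""
    structural_zero = rng.random(n) < pi_zero
    return np.where(structural_zero, 0, rng.poisson(lam, n))

effort = draw_zip(365).astype(float)
effort[100:110] = np.nan       # a 10-day camera outage

# Multiple imputation: fill each gap with independent model draws, producing
# several completed datasets whose analyses would later be pooled.
imputations = [np.where(np.isnan(effort), draw_zip(effort.size), effort)
               for _ in range(5)]
print(imputations[0][98:112])
```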
APA, Harvard, Vancouver, ISO, and other styles
4

Cole, James Jacob. "Assessing Nonlinear Relationships through Rich Stimulus Sampling in Repeated-Measures Designs." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1587.

Full text
Abstract:
Explaining a phenomenon often requires identification of an underlying relationship between two variables. However, it is common practice in psychological research to sample only a few values of an independent variable. Young, Cole, and Sutherland (2012) showed that this practice can impair model selection in between-subjects designs. The current study expands that line of research to within-subjects designs. In two Monte Carlo simulations, model discrimination under systematic sampling of 2, 3, or 4 levels of the IV was compared with that under random uniform sampling and sampling from a Halton sequence. The number of subjects, number of observations per subject, effect size, and between-subject parameter variance in the simulated experiments were also manipulated. Random sampling outperformed the other methods in model discrimination, with only small, function-specific costs to parameter estimation. Halton sampling also produced good results but was less consistent. The systematic sampling methods were generally rank-ordered by the number of levels they sampled.
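A minimal sketch of the three stimulus-sampling schemes compared in the simulations follows: fixed systematic levels, uniform random draws, and a low-discrepancy Halton sequence. The trial count, IV range and Halton base are arbitrary choices for illustration.

```python
# Three ways to sample levels of an independent variable on [0, 1).
import numpy as np

def halton(n, base=2):
    """First n points of the Halton (van der Corput) sequence in [0, 1)."""
    seq = np.empty(n)
    for i in range(1, n + 1):
        f, x, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i - 1] = x
    return seq

n_trials = 48
systematic = np.tile(np.linspace(0, 1, 3), n_trials // 3)  # 3 fixed IV levels
uniform = np.random.default_rng(1).random(n_trials)        # random sampling
quasi = halton(n_trials)                                   # low-discrepancy
print(systematic[:6], uniform[:3], quasi[:3])
```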
APA, Harvard, Vancouver, ISO, and other styles
5

Ryan, Elizabeth G. "Contributions to Bayesian experimental design." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/79628/1/Elizabeth_Ryan_Thesis.pdf.

Full text
Abstract:
This thesis advances Bayesian experimental design by developing novel methodologies and extensions to existing algorithms. Through these advancements, this thesis provides solutions to several important and complex experimental design problems, many of which have applications in biology and medicine. This thesis consists of a series of published and submitted papers. In the first paper, we provide a comprehensive literature review on Bayesian design. In the second paper, we discuss methods which may be used to solve design problems in which one is interested in finding a large number of (near) optimal design points. The third paper presents methods for finding fully Bayesian experimental designs for nonlinear mixed effects models, and the fourth paper investigates methods to rapidly approximate the posterior distribution for use in Bayesian utility functions.
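As a rough illustration of the Monte Carlo machinery common to Bayesian design (not the specific algorithms of this thesis), the sketch below scores a candidate set of observation times by simulating data from prior draws and averaging the negative posterior variance. The exponential-decay model, uniform prior and grid posterior are toy stand-ins.

```python
# Toy expected-utility evaluation for a candidate design (observation times).
import numpy as np

rng = np.random.default_rng(0)
GRID = np.linspace(0.01, 3.0, 300)  # grid over the decay-rate parameter theta

def expected_utility(times, n_sim=300, sigma=0.1):
    """Average negative posterior variance of theta under a candidate design."""
    total = 0.0
    for _ in range(n_sim):
        theta = rng.uniform(0.01, 3.0)                    # prior draw
        y = np.exp(-theta * times) + rng.normal(0, sigma, times.size)
        resid = y[None, :] - np.exp(-GRID[:, None] * times)
        logpost = -0.5 * (resid**2).sum(axis=1) / sigma**2
        post = np.exp(logpost - logpost.max())            # grid posterior
        post /= post.sum()
        mean = (GRID * post).sum()
        total -= ((GRID - mean)**2 * post).sum()
    return total / n_sim

print(expected_utility(np.array([0.1, 0.2, 0.3])))  # clustered early times
print(expected_utility(np.array([0.3, 1.0, 2.5])))  # spread-out times
```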
APA, Harvard, Vancouver, ISO, and other styles
6

Basudhar, Anirban. "Computational Optimal Design and Uncertainty Quantification of Complex Systems Using Explicit Decision Boundaries." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/201491.

Full text
Abstract:
This dissertation presents a sampling-based method that can be used for uncertainty quantification and deterministic or probabilistic optimization. The objective is to simultaneously address several difficulties faced by classical techniques based on response values and their gradients; in particular, this research addresses issues with discontinuous and binary (pass or fail) responses, and with multiple failure modes. All methods in this research are developed with the aim of addressing problems that have limited data due to the high cost of computation or experiment, e.g., vehicle crashworthiness and fluid-structure interaction. The core idea of this research is to construct an explicit boundary separating allowable and unallowable behaviors, based on classification information of responses instead of their actual values. As a result, the proposed method is naturally suited to handle discontinuities and binary states. A machine learning technique referred to as support vector machines (SVMs) is used to construct the explicit boundaries. SVM boundaries can be highly nonlinear, which allows one to use a single SVM to represent multiple failure modes. One of the major concerns in the design and uncertainty quantification communities is to reduce computational costs. To address this issue, several adaptive sampling methods have been developed as part of this dissertation, with specific sampling methods for reliability assessment, deterministic optimization, and reliability-based design optimization. Adaptive sampling allows the construction of accurate SVMs with limited samples. However, like any approximation method, the construction of an SVM is subject to errors. A new method to quantify the prediction error of SVMs, based on probabilistic support vector machines (PSVMs), is also developed; it is used to provide a relatively conservative probability of failure to mitigate some of the adverse effects of an inaccurate SVM. In the context of reliability assessment, the proposed method is presented for uncertainties represented by random variables as well as spatially varying random fields. In order to validate the developed methods, analytical problems with known solutions are used. In addition, the approach is applied to application problems, such as structural impact and tolerance optimization, to demonstrate its strengths in the context of discontinuous responses and multiple failure modes.
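The core idea, classify pass/fail outcomes and replace the expensive simulation with the SVM boundary when estimating a probability of failure, can be sketched as follows. The limit-state function g, sample sizes and input distribution are invented stand-ins for an expensive simulation.

```python
# Explicit decision boundary from pass/fail labels, then Monte Carlo on it.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical limit state: the system fails where g < 0."""
    return 2.5 - x[:, 0]**2 - 0.5 * np.sin(3 * x[:, 1])

X_train = rng.uniform(-2, 2, size=(60, 2))   # small DOE: 60 "simulations"
y_train = (g(X_train) < 0).astype(int)       # only pass/fail labels are used

svm = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X_train, y_train)

X_mc = rng.normal(0.0, 1.0, size=(100_000, 2))  # random-variable inputs
pf = svm.predict(X_mc).mean()                   # P(failure) from the surrogate
print(f"estimated probability of failure: {pf:.4f}")
```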
APA, Harvard, Vancouver, ISO, and other styles
7

Belouni, Mohamad. "Plans d'expérience optimaux en régression appliquée à la pharmacocinétique." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM056/document.

Full text
Abstract:
The problem of interest is to estimate the concentration curve and the area under the curve (AUC) by estimating the parameters of a linear regression model with an autocorrelated error process. We construct a simple linear unbiased estimator of the concentration curve and the AUC. We show that this estimator, constructed from a sampling design generated by an appropriate density, is asymptotically optimal in the sense that it has exactly the same asymptotic performance as the best linear unbiased estimator (BLUE). Moreover, we prove that the optimal design is robust with respect to misspecification of the autocovariance function according to a minimax criterion. When repeated observations are available, this estimator is consistent and has an asymptotic normal distribution. These results are extended to Hölder error processes with index between 0 and 2. Finally, for small sample sizes, a simulated annealing algorithm is applied to a pharmacokinetic model with correlated errors.
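For reference, the benchmark in this setting, the best linear unbiased (generalized least squares) estimator under a known autocovariance, together with the AUC as a linear functional of the fitted coefficients, can be sketched as follows. The basis functions, covariance kernel, sampling times and coefficients are all invented illustrations.

```python
# GLS / BLUE for a linear concentration model with autocorrelated errors.
import numpy as np

t = np.linspace(0.25, 12.0, 10)                     # hypothetical sample times
X = np.column_stack([np.exp(-0.3 * t), np.exp(-1.5 * t)])  # regression basis
V = np.exp(-np.abs(np.subtract.outer(t, t)))        # OU-type autocovariance

rng = np.random.default_rng(0)
y = X @ np.array([8.0, -8.0]) + rng.multivariate_normal(np.zeros(t.size), 0.1 * V)

Vi = np.linalg.inv(V)
beta_blue = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)   # the BLUE of beta

# AUC of the fitted curve is a linear functional of beta: each basis function
# integrates to 1/rate on [0, inf).
auc = beta_blue @ np.array([1 / 0.3, 1 / 1.5])
print(beta_blue, auc)
```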
APA, Harvard, Vancouver, ISO, and other styles
8

Benamara, Tariq. "Full-field multi-fidelity surrogate models for optimal design of turbomachines." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2368.

Full text
Abstract:
Optimizing turbomachinery components remains a real challenge despite recent advances in theoretical, experimental and High-Performance Computing (HPC) domains. This thesis introduces and validates optimization techniques assisted by full-field Multi-Fidelity Surrogate Models (MFSMs) based on Proper Orthogonal Decomposition (POD). The combination of POD and Multi-Fidelity Modeling (MFM) techniques makes it possible to capture the evolution of dominant flow features under geometry modifications. Two POD-based multi-fidelity optimization methods are proposed. The first consists of an enrichment strategy dedicated to Gappy-POD (GPOD) models. It is mainly suited to problems whose low-fidelity simulations have negligible cost, which makes it hardly tractable for the aerodynamic design of turbomachines; this method is demonstrated on the flight-domain study of a 2D airfoil from the literature. The second methodology is based on a multi-fidelity extension of Non-Intrusive POD (NIPOD) models. This extension starts from a re-interpretation of the Constrained POD (CPOD) concept and enriches the reduced-space definition with abundant, albeit inaccurate, low-fidelity information. In the second part of the thesis, a benchmark test case is introduced to exercise full-field multi-fidelity optimization methodologies on an example presenting features representative of turbomachinery problems. The predictive capability of the proposed Multi-Fidelity NIPOD (MFNIPOD) surrogate models is compared to classical surrogates from the literature on both analytical and industrial-scale applications. Finally, we apply the proposed tool to the shape optimization of a 1.5-stage booster and compare the obtained results with state-of-the-art approaches.
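The POD building block underlying these surrogates can be sketched with a snapshot SVD: dominant modes are extracted from a matrix of solution snapshots, and new fields are projected onto the reduced space. The synthetic 1-D "snapshots" and the 99% energy cutoff below are illustrative assumptions; real use stacks CFD solutions of varying fidelity.

```python
# Snapshot POD via SVD, plus projection of a new field onto the modes.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
# 30 hypothetical snapshots built from two latent structures plus noise.
S = np.array([a * np.sin(2 * np.pi * x) + b * x**2
              + 0.01 * rng.standard_normal(x.size)
              for a, b in rng.uniform(-1, 1, size=(30, 2))]).T   # (200, 30)

mean = S.mean(axis=1)
U, s, _ = np.linalg.svd(S - mean[:, None], full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% energy
basis = U[:, :r]                             # the reduced space

new_field = np.sin(2 * np.pi * x) + 0.3 * x**2
coeffs = basis.T @ (new_field - mean)        # project onto the POD modes
reconstruction = mean + basis @ coeffs
print(r, np.abs(new_field - reconstruction).max())
```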
APA, Harvard, Vancouver, ISO, and other styles
9

Yngman, Gunnar. "Individualization of fixed-dose combination regimens : Methodology and application to pediatric tuberculosis." Thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-242059.

Full text
Abstract:
Introduction: No Fixed-Dose Combination (FDC) formulations currently exist for pediatric tuberculosis (TB) treatment. Earlier work implemented, in the software NONMEM, a rational method for optimizing the design and individualization of pediatric anti-TB FDC formulations based on patient body weight, but issues with parameter estimation, dosage-strata heterogeneity and representative pharmacokinetics remained. Aim: To further develop the rational model-based methodology aiding the selection of appropriate FDC formulation designs and dosage regimens in pediatric TB treatment. Materials and Methods: The method was optimized with respect to the estimation of body-weight breakpoints, and improvements in the homogeneity of dosage groups with respect to treatment efficiency were sought. Recently published pediatric pharmacokinetic parameters were implemented and the model was translated to MATLAB, where its performance was evaluated by stochastic estimation and graphical visualization. Results: A logistic function was found better suited as an approximation of the breakpoints. None of the estimation methods implemented in NONMEM was more suitable than the originally used FO method. The heterogeneity of dosage-group treatment efficiency could not be resolved. The MATLAB translation was successful but required stochastic estimation and highlighted high densities of local minima. Representative pharmacokinetics were successfully implemented. Conclusions: NONMEM was found suboptimal for the task due to problems with discontinuities and heterogeneity, but a stepwise method with representative pharmacokinetics was successfully implemented. MATLAB showed more promise in the search for a method that also addresses the heterogeneity issue.
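The logistic breakpoint approximation mentioned in the results can be sketched in a few lines: a hard dose step at a body-weight cutoff is replaced by a steep logistic curve so the objective stays differentiable. The weights, doses and steepness below are invented illustrations, not the thesis's estimates.

```python
# Smooth (logistic) approximation of a dose-band breakpoint on body weight.
import numpy as np

def dose(weight, breakpoint, low, high, steepness=25.0):
    """Interpolate smoothly between the low- and high-band doses."""
    s = 1.0 / (1.0 + np.exp(-steepness * (weight - breakpoint)))
    return low + (high - low) * s

w = np.linspace(5, 30, 6)   # pediatric body weights (kg), hypothetical
print(dose(w, breakpoint=15.0, low=100.0, high=200.0))  # doses in mg
```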
APA, Harvard, Vancouver, ISO, and other styles
10

"Optimal Sampling Designs for Functional Data Analysis." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.57156.

Full text
Abstract:
Functional regression models are widely used in practice. To precisely understand an underlying functional mechanism, a good sampling schedule for collecting informative functional data is necessary, especially when data collection is limited. However, little research has so far addressed optimal sampling schedule design for functional regression models. To address this design issue, efficient approaches are proposed for generating the best sampling plan in the functional regression setting. First, three optimal experimental designs are considered under a function-on-function linear model: the schedule that maximizes the relative efficiency for recovering the predictor function, the schedule that maximizes the relative efficiency for predicting the response function, and the schedule that maximizes a mixture of the two. The obtained sampling plan allows precise recovery of the predictor function and precise prediction of the response function. The proposed approach can also be reduced to identify the optimal sampling plan for a scalar-on-function linear regression model. In addition, the optimality criterion for predicting a scalar response from a functional predictor is derived when a quadratic relationship between the two variables is present, and proofs of important properties of the derived criterion are provided. To find such designs, a comparably fast algorithm that generates nearly optimal designs is proposed. As the optimality criterion includes quantities that must be estimated from prior knowledge (e.g., a pilot study), the effectiveness of the suggested optimal design depends strongly on the quality of the estimates. However, in many situations the estimates are unreliable; thus, a bootstrap aggregating (bagging) approach is employed to enhance the quality of the estimates and to find sampling schedules stable to their misspecification. Through case studies, it is demonstrated that the proposed designs outperform other designs in terms of accurately predicting the response and recovering the predictor, and it is also shown that the bagging-enhanced design yields a more robust sampling design under misspecification of the estimated quantities.
Doctoral Dissertation, Statistics, 2020
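The flavor of the search problem in this dissertation can be illustrated with a greedy, D-optimality-style selection of sampling times for a basis expansion. The candidate grid, basis functions and budget are invented, and the dissertation's actual criteria target relative efficiencies for recovering the predictor and predicting the response rather than this generic determinant criterion.

```python
# Greedy selection of sampling times maximizing a log-det design criterion.
import numpy as np

grid = np.linspace(0, 1, 101)                        # candidate sampling times
basis = np.column_stack([np.ones_like(grid), grid, grid**2,
                         np.sin(2 * np.pi * grid)])  # hypothetical basis
budget = 8                                           # points we may sample

chosen = []
for _ in range(budget):
    best, best_val = None, -np.inf
    for j in range(grid.size):
        if j in chosen:
            continue
        B = basis[chosen + [j], :]
        # log-determinant of the information matrix (ridge term keeps it PD)
        val = np.linalg.slogdet(B.T @ B + 1e-9 * np.eye(basis.shape[1]))[1]
        if val > best_val:
            best, best_val = j, val
    chosen.append(best)
print("selected times:", np.sort(grid[chosen]))
```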
APA, Harvard, Vancouver, ISO, and other styles
11

Zhu, Zhengyuan. "Optimal sampling design and parameter estimation of Gaussian random fields /." 2002. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3060286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Huang, Syuan-Rong, and 黃炫融. "Optimal Design and Acceptance Sampling Plan under Progressive Type-I Interval Censoring." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/28167502019544617560.

Full text
Abstract:
Master's thesis. Tamkang University, Department of Statistics (Master's Program). Academic year 95 (ROC calendar).
In traditional censoring schemes, surviving units can only be removed at the end of the life test. In some practical situations, however, one has to remove surviving units at points other than the final termination point; a life test of this type is called progressive censoring. Moreover, in some life tests we can only record whether a test unit fails within an interval instead of measuring its failure time exactly, so the test units are inspected intermittently; this type of inspection is called interval censoring. In this thesis, we combine the two schemes into a progressive type-I interval-censoring scheme and focus on two design problems for such life tests under an exponential failure-time distribution. The first problem is how to design a life test that yields the optimal estimate of the mean life. Simply put, more test units, longer test time, and more inspections generate more information, which improves the precision of the estimates; however, a practical constraint on any life test is the restricted experimental budget, which affects the choice of the number of test units, the number of inspections, and the length of the inspection intervals, and hence the precision of estimation. We use nonlinear mixed-integer programming to obtain the optimal settings of a progressive type-I interval-censored life test by minimizing the asymptotic variance of the mean life under the constraint that the total experimental cost does not exceed a pre-determined budget. An example illustrates the proposed method, and a sensitivity analysis is also presented. The second problem is to establish acceptance sampling plans under cost considerations: we construct plans that minimize the experimental cost under given consumer's and producer's risks. Numerical examples and studies illustrate the proposed approach.
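The likelihood at the heart of both design problems can be sketched directly: under progressive type-I interval censoring with an exponential lifetime, d_i failures are observed within each inspection interval and r_i surviving units are withdrawn at each inspection time. The inspection times, counts and removal pattern below are invented for illustration.

```python
# MLE of the exponential mean life under progressive type-I interval censoring.
import numpy as np
from scipy.optimize import minimize_scalar

t = np.array([2.0, 4.0, 6.0, 8.0])   # inspection times
d = np.array([5, 7, 4, 3])           # failures seen in each interval (t_{i-1}, t_i]
r = np.array([2, 2, 2, 10])          # surviving units removed at each t_i

def neg_loglik(theta):
    """Negative log-likelihood; theta is the mean life of Exp(theta)."""
    S = np.exp(-t / theta)                       # survival at inspection times
    S_prev = np.concatenate(([1.0], S[:-1]))
    return -(np.sum(d * np.log(S_prev - S)) + np.sum(r * np.log(S)))

fit = minimize_scalar(neg_loglik, bounds=(0.1, 100.0), method="bounded")
print(f"MLE of mean life: {fit.x:.3f}")
```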
APA, Harvard, Vancouver, ISO, and other styles
13

Lin, Pin-Yi, and 林彬儀. "Optimal Sampling Augmentation and Resource Allocation for Design with Inadequate Uncertainty Data." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/67882107257094449437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Satyanarayana, J. V. "Efficient Design of Embedded Data Acquisition Systems Based on Smart Sampling." Thesis, 2014. http://etd.iisc.ac.in/handle/2005/3518.

Full text
Abstract:
Data acquisition from multiple analog channels is an important function in many embedded devices used in avionics, medical electronics, robotics and space applications. It is desirable to engineer these systems to reduce their size, power consumption, heat dissipation and cost. The goal of this research is to explore designs that exploit a priori knowledge of the input signals in order to achieve these objectives. Sparsity is a commonly observed property of signals that facilitates sub-Nyquist sampling and reconstruction through compressed sensing, thereby reducing the number of A-to-D conversions. New architectures are proposed for the real-time, compressed acquisition of streaming signals. It is demonstrated that by sampling a collection of signals in a multiplexed fashion, it is possible to efficiently utilize all the available sampling cycles of the analogue-to-digital converters (ADCs), facilitating the acquisition of multiple signals using fewer ADCs. The proposed method is modified to accommodate more general signals, for which spectral leakage, due to the occurrence of a non-integral number of cycles in the reconstruction window, violates the sparsity assumption. When the objective is only to detect the constituent frequencies in the signals, as against exact reconstruction, this can be achieved surprisingly well even in the presence of severe noise (SNR ~ 5 dB) and considerable undersampling; this has been applied to the detection of the carrier frequency in a noisy FM signal. Information redundancy due to inter-signal correlation gives scope for compressed acquisition of a set of signals that may not be individually sparse. A scheme is proposed in which the correlation structure in a set of signals is progressively learnt within a small fraction of the duration of acquisition, because of which only a few ADCs are adequate for capturing the signals. Signals from the different channels of EEG possess significant correlation. Employing signals taken from the Physionet database, the correlation structure of nearby EEG electrodes was captured. Subsequent to this training phase, the learnt KLT matrix was used to reconstruct the signals of all the electrodes with reasonably good accuracy from the recordings of a subset of electrodes. The average error is below 10% between the original and reconstructed signals with respect to the power in the delta, theta and alpha bands, and below 15% in the beta band. It was also possible to reconstruct all the channels in the 10-10 system of electrode placement with an average error of less than 8% using recordings on the sparser 10-20 system. In another design, a set of signals is collectively sampled on a finer sampling grid using ADCs driven by phase-shifted clocks, so that each signal is sampled at an effective rate that is a multiple of the ADC sampling rate. This permits a less steep transition between the pass band and the stop band, reducing the order of the anti-aliasing filter from 30 to 8. The scheme has been applied to the acquisition of voltages proportional to the deflection of the control surfaces in an aerospace vehicle. The idle sampling cycles of an ADC that performs compressive sub-sampling of a sparse signal can be used to acquire the residue left after a coarse low-resolution sample is taken in the preceding cycle, as in a pipelined ADC.
Using a general-purpose, low-resolution ADC, a DAC and a summer, one can acquire a sparse signal with double the resolution of the ADC, without having to use a dedicated pipelined ADC. It has also been demonstrated how this idea can be applied to achieve a higher dynamic range in the acquisition of fetal electrocardiogram signals. Finally, it is possible to combine more than one of the proposed schemes to handle the acquisition of diverse signals with different kinds of sparsity. An integrated implementation of the proposed schemes can share common hardware components so as to achieve a compact design.
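The compressed-sensing step underlying several of these architectures can be sketched as recovering a frequency-sparse signal from far fewer samples than Nyquist requires. The DCT dictionary, sparsity level and sample counts below are invented illustrations, and orthogonal matching pursuit stands in for whatever reconstruction the hardware would pair with.

```python
# Sub-Nyquist acquisition of a sparse signal, recovered by matching pursuit.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

N, M = 512, 96                              # signal length vs. samples taken
D = idct(np.eye(N), axis=0, norm="ortho")   # orthonormal DCT synthesis basis

alpha = np.zeros(N)
alpha[[13, 40, 170]] = [1.0, -0.6, 0.3]     # a 3-sparse "spectrum"
x = D @ alpha                               # full-rate version of the signal

rng = np.random.default_rng(0)
picks = np.sort(rng.choice(N, M, replace=False))  # the few ADC conversions
y = x[picks]
A = D[picks, :]                             # effective sensing matrix

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False).fit(A, y)
x_hat = D @ omp.coef_
print("max reconstruction error:", np.abs(x - x_hat).max())
```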
APA, Harvard, Vancouver, ISO, and other styles