Dissertations / Theses on the topic 'Detection and estimation theory'

To see the other types of publications on this topic, follow the link: Detection and estimation theory.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Detection and estimation theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Feinstein, Jonathan S. "Detection controlled estimation : theory and applications." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wright, George Alfred Jr. "Nonparametric density estimation and its application in communication theory." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/14979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Warner, Carl Michael 1952. "ESTIMATION OF NONSTATIONARY SIGNALS IN NOISE (PROCESSING, ADAPTIVE, WIENER FILTERS, ESTIMATION, DIGITAL)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/291297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

McElwain, Thomas P. "L-estimators used in CFAR detection." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/29199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Leong, Alex Seak Chon. "Performance of estimation and detection algorithms in wireless networks." Connect to thesis, 2007. http://repository.unimelb.edu.au/10187/2229.

Full text
Abstract:
This thesis focuses on techniques for analyzing the performance of estimation and detection algorithms under conditions which could be encountered in wireless networks, with emphasis on wireless sensor networks. These include phenomena such as measurement losses, fading channels, measurement delays and power constraints.
We first look at the hidden Markov model (HMM) filter with random measurement losses. The loss process is governed by another Markov chain. In the two-state case we derive analytical expressions to compute the probability of error. In the multi-state case we derive approximations that are valid at high signal-to-noise ratio (SNR). Relationships between the error probability and parameters of the loss process are investigated.
We then consider the problem of detecting two-state Markov chains in noise, under the Neyman-Pearson formulation. Our measure of performance here is the error exponent, and we give methods for computing this, firstly when channels are time-invariant, and then for time-varying fading channels. We also characterize the behaviour of the error exponent at high SNR.
We then look at the fixed-lag Kalman smoother with random measurement losses. We investigate both the notion of estimator stability via the expectation of the error covariance, and a probabilistic constraint on the error covariance. A comparison is made with the Kalman filter where lost measurements are retransmitted.
Finally we consider the distributed estimation of scalar linear systems using multiple sensors under the analog forwarding scheme. We study the asymptotic behaviour of the steady state error covariance as the number of sensors increases. We formulate optimization problems to minimize the sum power subject to error covariance constraints, and to minimize the error covariance subject to sum power constraints. We compare between the performance of multi-access and orthogonal access schemes, and for fading channels the effects of various levels of channel state information (CSI).
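The Kalman filtering with random measurement losses studied in this thesis can be illustrated with a toy recursion. The sketch below is not the thesis's model (it uses a scalar system with i.i.d. Bernoulli losses rather than a Markovian loss process, with parameters chosen arbitrarily): it propagates the error variance, skipping the measurement update whenever an observation is lost, and shows that losses inflate the average error variance.

```python
import numpy as np

def kalman_variance(a, q, r, received, p0=1.0):
    """Error-variance recursion for a scalar system x' = a*x + w (Var w = q),
    y = x + v (Var v = r); the measurement update runs only when received[k]."""
    p = p0
    history = []
    for got in received:
        p = a * a * p + q            # time update (prediction)
        if got:
            p = p * r / (p + r)      # measurement update
        history.append(p)
    return np.array(history)

rng = np.random.default_rng(0)
T = 10_000
arrivals = rng.random(T) < 0.5       # i.i.d. losses with probability 0.5
p_lossy = kalman_variance(a=0.95, q=1.0, r=1.0, received=arrivals)
p_ideal = kalman_variance(a=0.95, q=1.0, r=1.0, received=np.ones(T, bool))
```

With no losses the recursion converges to the usual steady-state Riccati fixed point; with losses the variance fluctuates above it.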
APA, Harvard, Vancouver, ISO, and other styles
6

Yang, Zaiyue. "Fault detection, estimation and control of periodically excited nonlinear systems." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B40887984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Zaiyue, and 楊再躍. "Fault detection, estimation and control of periodically excited nonlinear systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40887984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Xu, Cuichun. "Statistical processing on radar, sonar, and optical signals /." View online; access limited to URI, 2008. http://0-digitalcommons.uri.edu.helin.uri.edu/dissertations/AAI3328735.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lu, Jingyang. "Resilient dynamic state estimation in the presence of false information injection attacks." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4644.

Full text
Abstract:
The impact of false information injection is investigated for linear dynamic systems with multiple sensors. First, it is assumed that the system is unaware of the existence of false information and that the adversary is trying to maximize the negative effect of the false information on the Kalman filter's estimation performance under a power constraint. The false information attack under different conditions is mathematically characterized. For the adversary, many closed-form results for the optimal attack strategies that maximize the Kalman filter's estimation error are theoretically derived. It is shown that by choosing the optimal correlation coefficients among the false information and allocating power optimally among sensors, the adversary can significantly increase the Kalman filter's estimation errors. In order to detect the false information injected by an adversary, we investigate strategies for the Bayesian estimator to detect the false information and defend itself from such attacks. We assume that the adversary attacks the system with a certain probability, and that he/she adopts the worst possible strategy, which maximizes the mean squared error (MSE) if the attack is undetected. An optimal Bayesian detector is designed which minimizes the average system estimation error, instead of minimizing the probability of detection error as a conventional Bayesian detector typically does. The case where the adversary attacks the system continuously is also studied. In this case, sparse attack strategies in multi-sensor dynamic systems are investigated from the adversary's point of view. It is assumed that the defender can perfectly detect and remove sensors once they are corrupted by false information injected by an adversary.
The adversary's goal is to maximize the covariance matrix of the system state estimate by the end of the attack period, under the constraint that the adversary can attack the system only a few times across sensors and over time, which leads to an integer programming problem. In order to overcome the prohibitive complexity of exhaustive search, polynomial-time algorithms, such as greedy search and dynamic programming, are proposed to find suboptimal attack strategies. Greedy search starts with an empty set, and at each iteration adds the sensor whose elimination leads to the maximum system estimation error; the process terminates when the cardinality of the active set reaches the sparsity constraint. Greedy-search-based approaches such as sequential forward selection (SFS), sequential backward selection (SBS), and simplex improved sequential forward selection (SFS-SS) are discussed and corresponding attack strategies are provided. Dynamic programming is also used to obtain a suboptimal attack strategy. The validity of dynamic programming rests on a straightforward but important property of dynamic state estimation systems: the credibility of the state estimate at the current step is in accordance with that at the previous step. The problem of false information attacks on, and the Kalman filter's defense of, state estimation in dynamic multi-sensor systems is also investigated from a game-theoretic perspective. The relationship between the Kalman filter and the adversary can be regarded as a two-person zero-sum game. The condition under which both sides of the game reach a Nash equilibrium is investigated.
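The greedy search described in this abstract can be sketched in a deliberately simplified static setting. The snippet below is a hypothetical illustration, not the dissertation's algorithm: it assumes independent unbiased sensors fused by inverse-variance weighting, and at each iteration greedily removes the sensor whose elimination most increases the fused estimation variance (which, in this toy model, is always the most accurate remaining sensor).

```python
def fused_variance(variances):
    # Inverse-variance fusion of independent, unbiased sensor estimates.
    return 1.0 / sum(1.0 / v for v in variances)

def greedy_attack(variances, budget):
    """Greedily pick `budget` sensors to corrupt (remove) so that the
    fused estimation variance of the surviving sensors is maximized."""
    active = dict(enumerate(variances))
    attacked = []
    for _ in range(budget):
        # Try removing each remaining sensor; keep the removal that hurts most.
        best = max(active, key=lambda i: fused_variance(
            [v for j, v in active.items() if j != i]))
        attacked.append(best)
        del active[best]
    return attacked, fused_variance(active.values())

attacked, residual = greedy_attack([0.5, 1.0, 2.0, 4.0], budget=2)
```

As expected, the greedy strategy corrupts the two most accurate sensors (variances 0.5 and 1.0), leaving the fused variance of the survivors at 1/(1/2 + 1/4) = 4/3.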
APA, Harvard, Vancouver, ISO, and other styles
10

Ling, Tao. "High resolution gamma detector for small-animal positron emission tomography /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/9751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Brunel, Victor-Emmanuel. "Non-parametric estimation of convex bodies and convex polytopes." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://tel.archives-ouvertes.fr/tel-01066977.

Full text
Abstract:
In this work, we study the estimation of convex sets in the Euclidean space R^d, under two models. In the first model, we observe a sample of n random points, independent and identically distributed, uniform on an unknown convex set. The second model is an additive regression model, with sub-Gaussian noise, whose regression function is the indicator function of a convex set, again unknown. In the first model, our goal is to build an estimator of the support of the density of the observations that is optimal in the minimax sense. In the second model, the goal is twofold: to build an estimator of the support of the regression function, and to decide whether that support is nonempty, i.e. whether the regression function is indeed nonzero or the observed signal is pure noise. In both models, we focus on the case where the unknown set is a convex polytope whose number of vertices is known. When this number is unknown, we show that an adaptive procedure yields an estimator achieving the same asymptotic rate as in the previous case. Finally, we show that this same estimator is robust to the model misspecification of wrongly assuming that the unknown convex set is a polytope. We prove a deviation inequality for the volume of the convex hull of the observations in the first model. We also show that this inequality implies optimal bounds on the moments of the missing volume of this convex hull, as well as on the moments of its number of vertices.
Finally, in the one-dimensional case of the second model, we give the minimal asymptotic size that the unknown set must have in order to be detectable, and we propose a decision rule providing a consistent test of whether this set is nonempty.
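The convex-hull estimator of the first model is easy to simulate. The sketch below is an illustration only (the unknown convex set is taken to be the unit disk, an assumption not from the thesis): it builds the convex hull of n uniform points with Andrew's monotone-chain algorithm and compares its area with the area pi of the true support; the missing volume shrinks as n grows.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone-chain convex hull; points is an (n, 2) array."""
    pts = sorted(map(tuple, points))
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and (
                (out[-1][0] - out[-2][0]) * (p[1] - out[-2][1])
              - (out[-1][1] - out[-2][1]) * (p[0] - out[-2][0])) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    return half(pts) + half(pts[::-1])

def polygon_area(vertices):
    # Shoelace formula for a simple polygon.
    x = np.array([v[0] for v in vertices])
    y = np.array([v[1] for v in vertices])
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

rng = np.random.default_rng(0)
# Uniform sample in the unit disk via rejection from the enclosing square.
p = rng.uniform(-1, 1, size=(60_000, 2))
p = p[(p ** 2).sum(axis=1) <= 1][:20_000]
hull_area = polygon_area(convex_hull(p))   # slightly below pi
```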
APA, Harvard, Vancouver, ISO, and other styles
12

Tran, Nguyen Duy. "Performance bounds in terms of estimation and resolution and applications in array processing." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00777503.

Full text
Abstract:
This manuscript concerns performance analysis in signal processing and consists of two parts. First, we study lower bounds for characterizing and predicting estimation performance in terms of mean square error (MSE). Lower bounds on the MSE give the minimum variance that an estimator can expect to achieve, and they can be divided into two categories depending on the parameter assumption: the so-called deterministic bounds, dealing with deterministic unknown parameters, and the so-called Bayesian bounds, dealing with random unknown parameters. In particular, we derive closed-form expressions of the lower bounds for two applications in two different fields: (i) target localization using multiple-input multiple-output (MIMO) radar, for which we derive the lower bounds in the contexts with and without modeling errors, respectively; and (ii) pulse phase estimation of X-ray pulsars, a potential solution for autonomous deep-space navigation. In this application, we show the potential universality of lower bounds for tackling problems whose parameterized probability density function (pdf) differs from the classical Gaussian pdf, since in X-ray pulse phase estimation the observations are modeled with a Poisson distribution. Second, we study the statistical resolution limit (SRL), which is the minimal distance, in terms of the parameter of interest, between two signals that still allows the parameters of interest to be correctly separated/estimated. More precisely, we derive the SRL in two contexts, array processing and MIMO radar, using two approaches based on estimation theory and information theory. We also present in this thesis the usefulness of the SRL in optimizing the array system.
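As a minimal illustration of a deterministic lower bound of the kind studied in this thesis: for N i.i.d. Gaussian samples with known variance sigma^2, the Cramér-Rao bound on the variance of any unbiased estimator of the mean is sigma^2/N, and the sample mean attains it. The toy Monte Carlo check below (not tied to the MIMO-radar or pulsar applications; all parameters are arbitrary) verifies this numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, trials = 50, 2.0, 20_000
# Cramér-Rao bound for the mean of N i.i.d. N(mu, sigma^2) samples.
crb = sigma ** 2 / N
# The sample mean is an efficient estimator: its variance attains the bound.
means = rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1)
empirical = means.var()   # should be close to crb = 0.08
```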
APA, Harvard, Vancouver, ISO, and other styles
13

Oreifej, Omar. "Robust Subspace Estimation Using Low-Rank Optimization. Theory and Applications in Scene Reconstruction, Video Denoising, and Activity Recognition." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5684.

Full text
Abstract:
In this dissertation, we discuss the problem of robust linear subspace estimation using low-rank optimization and propose three formulations of it. We demonstrate how these formulations can be used to solve fundamental computer vision problems, and provide superior performance in terms of accuracy and running time. Consider a set of observations extracted from images (such as pixel gray values, local features, trajectories, etc.). If the assumption that these observations are drawn from a linear subspace (or can be linearly approximated) is valid, then the goal is to represent each observation as a linear combination of a compact basis, while maintaining a minimal reconstruction error. One of the earliest, yet most popular, approaches to achieve that is Principal Component Analysis (PCA). However, PCA can only handle Gaussian noise, and thus suffers when the observations are contaminated with gross and sparse outliers. To this end, in this dissertation, we focus on estimating the subspace robustly using low-rank optimization, where the sparse outliers are detected and separated through the L1 norm. The robust estimation has a two-fold advantage: First, the obtained basis better represents the actual subspace because it does not include contributions from the outliers. Second, the detected outliers are often of a specific interest in many applications, as we will show throughout this thesis. We demonstrate four different formulations and applications for low-rank optimization. First, we consider the problem of reconstructing an underwater sequence by removing the turbulence caused by the water waves. The main drawback of most previous attempts to tackle this problem is that they heavily depend on modelling the waves, which in fact is ill-posed since the actual behavior of the waves along with the imaging process are complicated and include several noise components; therefore, their results are not satisfactory.
In contrast, we propose a novel approach which outperforms the state of the art. The intuition behind our method is that in a sequence where the water is static, the frames would be linearly correlated. Therefore, in the presence of water waves, we may consider the frames as noisy observations drawn from the subspace of linearly correlated frames. However, the noise introduced by the water waves is not sparse, and thus cannot directly be detected using low-rank optimization. Therefore, we propose a data-driven two-stage approach, where the first stage "sparsifies" the noise, and the second stage detects it. The first stage leverages the temporal mean of the sequence to overcome the structured turbulence of the waves through an iterative registration algorithm. The result of the first stage is a high-quality mean and a better-structured sequence; however, the sequence still contains unstructured sparse noise. Thus, we employ a second stage in which we extract the sparse errors from the sequence through rank minimization. Our method converges faster, and drastically outperforms the state of the art on all testing sequences. Secondly, we consider a closely related situation where an independently moving object is also present in the turbulent video. More precisely, we consider video sequences acquired in desert battlefields, where atmospheric turbulence is typically present, in addition to independently moving targets. Typical approaches for turbulence mitigation follow averaging or de-warping techniques. Although these methods can reduce the turbulence, they distort the independently moving objects, which can often be of great interest. Therefore, we address the problem of simultaneous turbulence mitigation and moving object detection. We propose a novel three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object.
We simplify this extremely difficult problem into a minimization of nuclear norm, Frobenius norm, and L1 norm. Our method is based on two observations: First, the turbulence causes dense and Gaussian noise, and therefore can be captured by the Frobenius norm, while the moving objects are sparse and thus can be captured by the L1 norm. Second, since the object's motion is linear and intrinsically different from the Gaussian-like turbulence, a Gaussian-based turbulence model can be employed to enforce an additional constraint on the search space of the minimization. We demonstrate the robustness of our approach on challenging sequences which are significantly distorted with atmospheric turbulence and include extremely tiny moving objects. In addition to robustly detecting the subspace of the frames of a sequence, we consider using trajectories as observations in the low-rank optimization framework. In particular, in videos acquired by moving cameras, we track all the pixels in the video and use that to estimate the camera motion subspace. This is particularly useful in activity recognition, which typically requires standard preprocessing steps such as motion compensation, moving object detection, and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in missed detections, which further complicates the tracking stage, resulting in cluttered and incorrect tracks. In contrast, we propose a novel approach which does not follow the standard steps, and accordingly avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories, which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene. This is done in frames of unaligned video, and no object detection is required. In order to handle the moving camera, we decompose the trajectories into their camera-induced and object-induced components.
Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features, which captures the characteristics of the trajectories. Consequently, an SVM is employed to learn and recognize the human actions using the computed motion features. We performed intensive experiments on multiple benchmark datasets, and obtained promising results. Finally, we consider a more challenging problem referred to as complex event recognition, where the activities of interest are complex and unconstrained. This problem typically poses significant challenges because it involves videos of highly variable content, noise, length, frame size, etc. In this extremely challenging task, high-level features have recently shown a promising direction as in [53, 129], where core low-level events referred to as concepts are annotated and modeled using a portion of the training data, and each event is then described using its content of these concepts. However, because of the complex nature of the videos, both the concept models and the corresponding high-level features are significantly noisy. In order to address this problem, we propose a novel low-rank formulation, which combines the precisely annotated videos used to train the concepts with the rich high-level features. Our approach finds a new representation for each event, which is not only low-rank, but also constrained to adhere to the concept annotation, thus suppressing the noise and maintaining a consistent occurrence of the concepts in each event. Extensive experiments on the large-scale, real-world TRECVID Multimedia Event Detection 2011 and 2012 datasets demonstrate that our approach consistently improves the discriminativity of the high-level features by a significant margin.
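The low-rank-plus-sparse decomposition at the heart of these formulations can be sketched with a generic Principal Component Pursuit solver. The following is a minimal ADMM sketch on synthetic data, not the dissertation's code; the defaults lam = 1/sqrt(max(m, n)) and mu = m*n/(4*||M||_1) follow common practice for this problem and are assumptions here.

```python
import numpy as np

def rpca(M, lam=None, mu=None, n_iter=1000):
    """Principal Component Pursuit via a basic ADMM scheme:
    minimize ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank step: singular-value thresholding.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt
        # Sparse step: elementwise soft thresholding.
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)   # dual ascent on the constraint L + S = M
    return L, S

rng = np.random.default_rng(0)
L0 = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 60))   # rank-2 component
S0 = np.zeros((60, 60))
idx = rng.random((60, 60)) < 0.05                          # 5% gross outliers
S0[idx] = rng.normal(scale=10.0, size=idx.sum())
L_hat, S_hat = rpca(L0 + S0)
```

On this synthetic mix, the recovered low-rank part should closely match the true rank-2 matrix despite the large sparse corruption.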
Ph.D.
Doctorate
Electrical Engineering and Computing
Engineering and Computer Science
Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
14

Matricardi, Elisabetta. "Performance Analysis of a Radar System Based on 5G Signals." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
As spectrum is becoming an increasingly scarce resource, the idea of reusing it for more than one application is attractive, since it can avoid the under-utilization of spectral resources that would otherwise be permanently allocated. In recent years, an alternative to radar signals has been introduced, consisting of digitally generated OFDM waveforms. This thesis aims to investigate the performance of a communication signal, 5G NR, when used for sensing purposes. A radar system and a processing algorithm based on frequency-domain OFDM radar processing were implemented and integrated with an interpolation method to account for the null subcarriers within the transmitted resource grid. Computer simulations were then performed to evaluate the range and velocity estimation performance; it is shown that a low estimation error for the target parameters can be achieved even at low SNR values. Moreover, 5G NR waveforms, thanks to their impressive bandwidth, scalable numerology, and robustness against multipath, have been shown to perform well in fading channels as well. Using ROC curves, the difference in radar performance is illustrated when two types of radio channels are used, namely the AWGN channel and the multipath fading channel.
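The frequency-domain OFDM radar processing mentioned in this abstract can be sketched on a noise-free toy resource grid. The snippet below is a simplified illustration, not the thesis's 5G NR implementation (the grid sizes and the on-grid point target are assumptions): element-wise division by the transmitted symbols removes the communication data, an IFFT across subcarriers recovers the delay (range) bin, and an FFT across OFDM symbols recovers the Doppler bin.

```python
import numpy as np

N, M = 64, 32                     # subcarriers, OFDM symbols
delay_bin, doppler_bin = 11, 5    # true target position on the grid
rng = np.random.default_rng(0)
# Random QPSK data symbols on the transmit resource grid.
tx = np.exp(1j * np.pi / 2 * rng.integers(0, 4, size=(N, M)) + 1j * np.pi / 4)
n = np.arange(N)[:, None]
m = np.arange(M)[None, :]
# A point target imprints a phase ramp across subcarriers (delay)
# and across symbols (Doppler).
rx = (tx * np.exp(-2j * np.pi * n * delay_bin / N)
         * np.exp(2j * np.pi * m * doppler_bin / M))
F = rx / tx                                   # division removes the data
range_profile = np.fft.ifft(F, axis=0)        # IFFT over subcarriers -> range
rd_map = np.fft.fft(range_profile, axis=1)    # FFT over symbols -> Doppler
est_delay, est_doppler = np.unravel_index(np.argmax(np.abs(rd_map)),
                                          rd_map.shape)
```

In this noise-free, on-grid case the range-Doppler map has a single exact peak at the true target bins.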
APA, Harvard, Vancouver, ISO, and other styles
15

Debbabi, Nehla. "Approche algébrique et théorie des valeurs extrêmes pour la détection de ruptures : Application aux signaux biomédicaux." Thesis, Reims, 2015. http://www.theses.fr/2015REIMS025/document.

Full text
Abstract:
This work develops unsupervised techniques for the online detection and location of change-points in signals recorded in a noisy environment. These techniques rest on the combination of an algebraic approach with Extreme Value Theory (EVT). The algebraic approach offers an easy identification of the change-points, characterizing them in terms of delayed Dirac distributions and their derivatives, which are easily handled via operational calculus. This algebraic characterization, which gives an explicit expression for the change-point locations, is completed with a probabilistic interpretation in terms of extremes: a change-point is a rare event whose associated amplitude is relatively large. Based on EVT, these events are modeled by a Generalized Pareto Distribution. Several hybrid multi-component models are proposed in this work, modeling at the same time the mean behavior (noise) and the extreme behavior (change-points) of the signal after an algebraic processing. Fully unsupervised algorithms are proposed to fit these hybrid models, avoiding the problems encountered with classical estimation methods, which are ad hoc, manual, and graphical. The change-point detection algorithms developed in this thesis are validated on generated data and then applied to real data stemming from different phenomena, where the information to be extracted appears as change-points.
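The EVT ingredient, modeling exceedances with a Generalized Pareto Distribution (GPD), can be sketched with a simple peaks-over-threshold fit. The snippet below uses a method-of-moments estimator (a simplification; the thesis develops unsupervised hybrid-model estimation, not this) on exceedances of exponential data, for which the true GPD shape is 0 and the scale is 1.

```python
import numpy as np

def gpd_fit_moments(exceedances):
    """Method-of-moments fit of a Generalized Pareto Distribution to
    threshold exceedances (valid for shape xi < 1/2). Uses
    mean = sigma/(1-xi) and var = sigma^2 / ((1-xi)^2 (1-2 xi))."""
    m, v = exceedances.mean(), exceedances.var()
    xi = 0.5 * (1.0 - m * m / v)           # shape estimate
    sigma = 0.5 * m * (m * m / v + 1.0)    # scale estimate
    return xi, sigma

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)
u = np.quantile(x, 0.95)      # high threshold
exc = x[x > u] - u            # peaks over threshold
xi_hat, sigma_hat = gpd_fit_moments(exc)   # expect xi ~ 0, sigma ~ 1
```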
APA, Harvard, Vancouver, ISO, and other styles
16

King, David R. "A bayesian solution for the law of categorical judgment with category boundary variability and examination of robustness to model violations." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/52960.

Full text
Abstract:
Previous solutions for the Law of Categorical Judgment with category boundary variability have either constrained the standard deviations of the category boundaries in some way or have violated the assumptions of the scaling model. In the current work, a fully Bayesian Markov chain Monte Carlo solution for the Law of Categorical Judgment is given that estimates all model parameters (i.e., scale values, category boundaries, and the associated standard deviations). The importance of measuring category boundary standard deviations is discussed in the context of previous research in signal detection theory, which gives evidence of interindividual variability in how respondents perceive category boundaries and even intraindividual variability in how a respondent perceives category boundaries across trials. Although the measurement of category boundary standard deviations appears to be important for describing the way respondents perceive category boundaries on the latent scale, the inclusion of category boundary standard deviations in the scaling model exposes an inconsistency between the model and the rating method. Namely, with category boundary variability, the scaling model suggests that a respondent could experience disordinal category boundaries on a given trial. However, the idea that a respondent actually experiences disordinal category boundaries seems unlikely. The discrepancy between the assumptions of the scaling model and the way responses are made at the individual level indicates that the assumptions of the model will likely not be met. Therefore, the current work examined how well model parameters could be estimated when the assumptions of the model were violated in various ways as a consequence of disordinal category boundary perceptions.
A parameter recovery study examined the effect of model violations on estimation accuracy by comparing estimates obtained from three response processes that violated the assumptions of the model with estimates obtained from a novel response process that did not violate the assumptions of the model. Results suggest all parameters in the Law of Categorical Judgment can be estimated reasonably well when these particular model violations occur, albeit to a lesser degree of accuracy than when the assumptions of the model are met.
APA, Harvard, Vancouver, ISO, and other styles
17

Peterson, Anders. "The Origin-Destination Matrix Estimation Problem : Analysis and Computations." Doctoral thesis, Norrköping : Dept. of Science and Technology, Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8859.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Almeida, Tiago Paggi de. "Decomposição de sinais eletromiográficos de superfície misturados linearmente utilizando análise de componentes independentes." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261976.

Full text
Abstract:
Advisor: Antônio Augusto Fasolo Quevedo
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Electromyography is a clinical practice that provides information about the physiological condition of the neuromuscular system, including the analysis of its contractile functional unit, the motor unit. The electromyographic signal is an electrical signal resulting from the ionic transients of motor unit action potentials, captured by invasive or non-invasive electrodes. Invasive electrodes can detect the action potentials of even a single motor unit, although the procedure is time consuming and uncomfortable. Surface electrodes detect action potentials non-invasively, but the detected signal is a mixture of action potentials from several motor units within the detection area of the electrode, resulting in a complex interference pattern that is difficult to interpret. Blind Source Separation techniques, such as Independent Component Analysis, have proven effective for decomposing surface electromyographic signals into their constituent motor unit action potentials. The objective of this project was to develop a system to capture surface myoelectric signals and to analyze the feasibility of decomposing linearly mixed intramuscular myoelectric signals using Independent Component Analysis. The system comprises an electrode matrix with up to seven channels, a preprocessing module, software for controlling the capture of the surface electromyographic signals, and the FastICA algorithm in MATLAB for decomposing the signals. The results show that the system was able to capture surface myoelectric signals and that the linearly mixed intramuscular myoelectric signals were reliably separated.
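The ICA decomposition of linearly mixed signals can be sketched with a minimal FastICA. The snippet below is a generic numpy illustration on synthetic sources, not the system described in this dissertation (which applies MATLAB's FastICA to myoelectric data): two sources are mixed by a hypothetical matrix A, whitened, and unmixed by the symmetric fixed-point iteration with a tanh nonlinearity.

```python
import numpy as np

def fastica(X, n_iter=300, seed=0):
    """Minimal symmetric FastICA with a tanh nonlinearity.
    X is (n_sources, n_samples); returns the unmixed signals."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening via eigendecomposition of the covariance matrix.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    n = Z.shape[0]
    W = np.linalg.qr(np.random.default_rng(seed).normal(size=(n, n)))[0]
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # Fixed-point update followed by symmetric decorrelation.
        W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt
    return W @ Z

# Two synthetic sources mixed linearly (hypothetical data, not EMG).
t = np.linspace(0, 1, 5000)
s1 = np.sign(np.sin(2 * np.pi * 13 * t))             # square wave
s2 = np.random.default_rng(1).laplace(size=t.size)   # impulsive noise
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])               # mixing matrix
recovered = fastica(A @ S)
```

Up to the usual sign and permutation ambiguity, each recovered component should correlate strongly with one of the original sources.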
Master's degree
Biomedical Engineering
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
19

Nolibé, Gilles. "Developpement d'une methodologie de determination d'operateurs de calcul specifiques dans des problemes d'identification et d'estimation en temps reel." Toulon, 1988. http://www.theses.fr/1988TOUL0001.

Full text
Abstract:
Development of an algebraic description of the data flows that represent algorithmic signal processing. To this end, a formulation of these flows is proposed that takes process scheduling into account in order to determine the internal parallelism of the algorithms to be implemented. The theoretical results obtained were applied to signal processing problems
APA, Harvard, Vancouver, ISO, and other styles
20

Iskander, D. R. "The Generalised Bessel function K distribution and its application to the detection of signals in the presence of non-Gaussian interference." Thesis, Queensland University of Technology, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
21

Whipps, Gene. "Coupled harmonics : estimation and detection." Connect to resource, 2003. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1261318405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Miolane, Léo. "Fundamental limits of inference : a statistical physics approach." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE043.

Full text
Abstract:
We study classical statistical problems, such as community detection in graphs, principal component analysis, Gaussian mixture models, and (generalized) linear models, in a Bayesian framework. For these problems we compute the "Bayes risk", the smallest error achievable by any statistical method, in the high dimensional limit. We then observe a surprising phenomenon: in many cases there exists a critical noise level beyond which it is no longer possible to extract information from the data. Below this threshold, we compare the performance of polynomial-time algorithms to the optimal one. In many situations we observe a gap: although it is theoretically possible to estimate the signal, no computationally efficient method achieves optimality. In this manuscript, we adopt a statistical physics approach that explains these phenomena in terms of phase transitions. The methods and tools we use therefore come from physics, more precisely from the mathematical study of spin glasses
We study classical statistical problems such as community detection on graphs, Principal Component Analysis (PCA), sparse PCA, Gaussian mixture clustering, and linear and generalized linear models, in a Bayesian framework. We compute the best estimation performance (often denoted as the "Bayes risk") achievable by any statistical method in the high dimensional regime. This allows us to observe surprising phenomena: for many problems, there exists a critical noise level above which it is impossible to estimate better than random guessing. Below this threshold, we compare the performance of existing polynomial-time algorithms to the optimal one and observe a gap in many situations: even when non-trivial estimation is theoretically possible, computationally efficient methods do not manage to achieve optimality. From the statistical physics point of view that we adopt throughout this manuscript, these phenomena can be explained by phase transitions. The tools and methods of this thesis are therefore mainly drawn from statistical physics, more precisely from the mathematical study of spin glasses
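The "critical noise level" phenomenon can be reproduced numerically in the simplest setting, the rank-one spiked Wigner model (a standard toy model used here to illustrate the thesis's theme, not code from it): below the critical signal strength lambda = 1, the top eigenvalue of the observed matrix sticks to the bulk edge 2 and carries no information about the spike, while above it it detaches and concentrates around lambda + 1/lambda.

```python
import numpy as np

def top_eigenvalue(lam, n=1500, seed=0):
    """Largest eigenvalue of Y = (lam/n) x x^T + W/sqrt(n) (spiked Wigner)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x *= np.sqrt(n) / np.linalg.norm(x)            # normalize so ||x||^2 = n
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2)                     # symmetric Gaussian noise
    Y = lam * np.outer(x, x) / n + W / np.sqrt(n)
    return float(np.linalg.eigvalsh(Y)[-1])        # eigvalsh sorts ascending

weak = top_eigenvalue(0.5)    # below threshold: stays at the bulk edge, ~2
strong = top_eigenvalue(3.0)  # above threshold: ~ lam + 1/lam = 3.33
```

The jump of the top eigenvalue at lambda = 1 is exactly the kind of phase transition the abstract describes, here for PCA-type detection.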
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Hao. "Noise enhanced signal detection and estimation." Related electronic resource:, 2007. http://proquest.umi.com/pqdweb?did=1342743841&sid=2&Fmt=2&clientId=3739&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Sun, Xu S. M. Massachusetts Institute of Technology. "Analogic for code estimation and detection." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33892.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.
Includes bibliographical references (p. 125-128).
Analogic is a class of analog statistical signal processing circuits that dynamically solve an associated inference problem by locally propagating probabilities in a message-passing algorithm [29] [15]. In this thesis, we study an exemplary embodiment of analogic called the Noise-Locked Loop (NLL), which is a pseudo-random code estimation system. Previous work shows that the NLL can perform direct-sequence spread-spectrum acquisition and tracking and promises orders-of-magnitude gains over digital implementations [29]. Most of the research [30] [2] [3] has focused on the simulation and implementation of the probability-representation NLL derived from exact-form message-passing algorithms. We propose an approximate message-passing algorithm for the NLL in log-likelihood ratio (LLR) representation and have constructed its analogic implementation. The new approximate NLL gives a shorter acquisition time compared to the exact-form NLL. The approximate message-passing algorithm makes it possible to construct analogic that is almost temperature-independent, which is very useful in the design of robust large-scale analogic networks. Generalized belief propagation (GBP) has been proposed to improve the computational accuracy of belief propagation [31] [32] [33]. The application of GBP to the NLL promises significant improvement in synchronization performance; however, no circuit implementation has been reported. In this thesis, we propose analogic circuits to implement the basic computations in GBP, which can be used to construct general GBP systems. Finally, we propose a novel current-mode signal restoration circuit that will be important in scaling analogic to large networks.
by Xu Sun.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
25

Richard, Michael D. (Michael David). "Estimation and detection with chaotic systems." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12230.

Full text
Abstract:
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 209-214).
by Michael D. Richard.
Sc.D.
APA, Harvard, Vancouver, ISO, and other styles
26

El, Korso Mohammed Nabil. "Analyse de performances en traitement d'antenne : bornes inférieures de l'erreur quadratique moyenne et seuil de résolution limite." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112074/document.

Full text
Abstract:
This manuscript is dedicated to performance analysis in array signal processing for the estimation of parameters of interest using a sensor array. It is divided into two parts: First, we present the study of certain lower bounds on the mean square error related to source localization in the near-field context. We use the Cramér-Rao bound to study the asymptotic region (in particular, in terms of signal-to-noise ratio with a finite number of observations). We then study other lower bounds on the mean square error that predict the threshold phenomenon of the estimators' mean square error (for example, the McAulay-Seidman bound, the Hammersley-Chapman-Robbins bound and the Fourier Cramér-Rao bound). Second, we focus on the concept of the statistical resolution limit, i.e., the minimum distance between two signals embedded in additive noise that allows correct parameter estimation. We present some well-known applications in array processing before extending the existing concepts to the case of multidimensional signals. We then assess the validity of our extension using a binary hypothesis test. Finally, we apply our extension to several multidimensional observation models
This manuscript concerns the performance analysis in array signal processing. It can be divided into two parts: First, we present the study of some lower bounds on the mean square error related to the source localization in the near-field context. Using the Cramér-Rao bound, we investigate the mean square error of the maximum likelihood estimator w.r.t. the direction of arrivals in the so-called asymptotic area (i.e., for a high signal to noise ratio with a finite number of observations). Then, using bounds other than the Cramér-Rao bound, we predict the threshold phenomena. Secondly, we focus on the concept of the statistical resolution limit (i.e., the minimum distance between two closely spaced signals embedded in an additive noise that allows a correct resolvability/parameter estimation). We define and derive the statistical resolution limit using the Cramér-Rao bound and the hypothesis test approaches for the mono-dimensional case. Then, we extend this concept to the multidimensional case. Finally, a generalized likelihood ratio test based framework for the multidimensional statistical resolution limit is given to assess the validity of the proposed extension
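As a minimal, self-contained illustration of the kind of bound studied here, consider the Cramér-Rao bound in its simplest scalar form rather than the array processing setting of the thesis: for N Gaussian samples of unknown mean, the bound is sigma^2/N, and the sample mean (the ML estimator) attains it. All numbers below are illustrative.

```python
import random
import statistics

def mse_of_sample_mean(mu=1.0, sigma=2.0, n=25, trials=20000, seed=0):
    """Monte Carlo mean square error of the sample mean of n N(mu, sigma^2) samples."""
    rng = random.Random(seed)
    sq_errs = []
    for _ in range(trials):
        est = statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
        sq_errs.append((est - mu) ** 2)
    return statistics.fmean(sq_errs)

crb = 2.0 ** 2 / 25          # Cramér-Rao bound sigma^2 / N = 0.16
mse = mse_of_sample_mean()   # empirical MSE of an efficient estimator: ~0.16
```

Lower bounds such as McAulay-Seidman or Hammersley-Chapman-Robbins refine this picture precisely where the simple CRB becomes loose, e.g. below the SNR threshold region.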
APA, Harvard, Vancouver, ISO, and other styles
27

Hua, Nan. "Space-efficient data sketching algorithms for network applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44899.

Full text
Abstract:
Sketching techniques are widely adopted in network applications. Sketching algorithms "encode" data into succinct data structures that can later be accessed and "decoded" for various purposes, such as network measurement, accounting and anomaly detection. Bloom filters and counter braids are two well-known representatives in this category. Such sketching algorithms usually need to strike a tradeoff between performance (how much information can be revealed and how fast) and cost (storage, transmission and computation). This dissertation is dedicated to the research and development of several sketching techniques, including improved forms of stateful Bloom filters, statistical counter arrays and error estimating codes. The Bloom filter is a space-efficient randomized data structure for approximately representing a set in order to support membership queries. The Bloom filter and its variants have found widespread use in many networking applications, where it is important to minimize the cost of storing and communicating network data. In this thesis, we propose a family of Bloom filter variants augmented by a rank-indexing method. We show that such augmentation brings a significant reduction in space and in the number of memory accesses, especially when deletions of set elements from the Bloom filter need to be supported. The exact active counter array is another important building block in many sketching algorithms, where the storage cost of the array is of paramount concern. Previous approaches reduce the storage costs while either losing accuracy or supporting only passive measurements. In this thesis, we propose an exact statistics counter array architecture that can support active measurements (real-time read and write). It also leverages the aforementioned rank-indexing method and exploits statistical multiplexing to minimize the storage costs of the counter array.
Error estimating coding (EEC) has recently been established as an important tool to estimate bit error rates in the transmission of packets over wireless links. In essence, the EEC problem is also a sketching problem, since the EEC codes can be viewed as a sketch of the packet sent, which is decoded by the receiver to estimate the bit error rate. In this thesis, we first investigate the asymptotic bound of error estimating coding by viewing the problem from a two-party computation perspective, and then investigate its coding/decoding efficiency using Fisher information analysis. Further, we develop several sketching techniques, including the enhanced tug-of-war (EToW) sketch and the generalized EEC (gEEC) sketch family, which can achieve around a 70% reduction in sketch size with similar estimation accuracy. For all the solutions proposed above, we use theoretical tools such as information theory and communication complexity to investigate how far they are from the theoretical optimum. We show that the proposed techniques are asymptotically or empirically very close to the theoretical bounds.
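As a concrete reminder of the baseline data structure the dissertation improves upon, here is a minimal, illustration-only Bloom filter (the rank-indexing augmentation described above is not shown, and the parameters m and k are arbitrary):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array."""

    def __init__(self, m=1 << 16, k=7):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _probes(self, item):
        # Derive k probe positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # May report false positives, but never false negatives.
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._probes(item))

bf = BloomFilter()
for i in range(100):
    bf.add(f"flow-{i}")          # hypothetical flow identifiers
```

With m = 65536 bits, k = 7 and only 100 inserted items, the false positive probability is negligible, which is what the membership queries below rely on.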
APA, Harvard, Vancouver, ISO, and other styles
28

Combernoux, Alice. "Détection et filtrage rang faible pour le traitement d'antenne utilisant la théorie des matrices aléatoires en grandes dimensions." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC016/document.

Full text
Abstract:
Starting from the observation that in more and more applications the size of the data to be processed is increasing, it seems relevant to use appropriate tools such as random matrix theory in the large-dimensional regime. More specifically, in the STAP and MIMO-STAP array processing and radar applications, we are interested in processing a signal of interest corrupted by an additive noise composed of a so-called low-rank part and white Gaussian noise. The purpose of this thesis is therefore to study, in the large-dimensional regime, low-rank detection and filtering (functions of projectors) for array processing using random matrix theory. The thesis proposes three main contributions in the context of the asymptotic analysis of projector functionals. First, the large-dimensional regime makes it possible to determine an approximation/prediction of the non-asymptotic theoretical performance that is more precise than what currently exists in the classical asymptotic regime (where the number of estimation samples tends to infinity at fixed data size). Second, two new filters and two new low-rank adaptive detectors are proposed and shown to perform better as a function of the system parameters in terms of SNR loss, false alarm probability and detection probability. Finally, the results are validated on a jamming application and then applied to STAP and sparse MIMO-STAP radar processing. The study highlights a notable difference from the jamming application, related to the covariance matrix models treated in this thesis
Nowadays, more and more applications deal with increasing dimensions. Thus, it seems relevant to exploit appropriate tools such as random matrix theory in the large dimensional regime. More particularly, in specific array processing applications such as STAP and MIMO-STAP radar, we were interested in the treatment of a signal of interest corrupted by an additive noise composed of a low-rank component and white Gaussian noise. Therefore, the aim of this thesis is to study low-rank filtering and detection (functions of projectors) in the large dimensional regime for array processing with random matrix theory tools. This thesis has three main contributions in the context of the asymptotic analysis of projector functionals. First, the large dimensional regime allows us to determine an approximation/prediction of the theoretical non-asymptotic performance that is much more precise than the literature in the classical asymptotic regime (when the number of estimation data tends to infinity at a fixed dimension). Secondly, two new low-rank adaptive filters and detectors have been proposed, and it has been shown that they have better performance as a function of the system parameters, in terms of SINR loss, false alarm probability and detection probability. Finally, the results have been validated on a jamming application and then applied to STAP and sparse MIMO-STAP processing. The study highlighted a noticeable difference from the jamming application, related to the covariance matrix models considered in this thesis
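The low-rank filter (a function of projectors) reduces, in its simplest sample-covariance form, to projecting the snapshots onto the orthogonal complement of the dominant interference subspace. The sketch below is the textbook version with invented array sizes and a rank-one jammer; it is not the RMT-corrected estimators developed in the thesis:

```python
import numpy as np

def low_rank_filter(X, r):
    """Project snapshots X (m sensors x N snapshots) off the top-r subspace
    of the sample covariance matrix."""
    R = X @ X.conj().T / X.shape[1]               # sample covariance matrix
    w, U = np.linalg.eigh(R)                       # eigenvalues in ascending order
    Ur = U[:, -r:]                                 # dominant (interference) subspace
    P = np.eye(X.shape[0]) - Ur @ Ur.conj().T      # orthogonal projector
    return P @ X

# Hypothetical scenario: 8-sensor array, strong rank-one jammer plus noise.
rng = np.random.default_rng(1)
m, N = 8, 500
a = np.ones(m)                                     # assumed jammer steering vector
X = 10.0 * np.outer(a, rng.standard_normal(N)) + rng.standard_normal((m, N))
Y = low_rank_filter(X, r=1)                        # jammer power largely removed
```

After projection, the residual power is close to the thermal-noise floor, since the estimated rank-one subspace nearly coincides with the true jammer subspace when the interference dominates.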
APA, Harvard, Vancouver, ISO, and other styles
29

Baur, Cordula. "Risk Estimation in Portfolio Theory." St. Gallen, 2007. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/05609706001/$FILE/05609706001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ru, Jifeng. "Adaptive estimation and detection techniques with applications." ScholarWorks@UNO, 2005. http://louisdl.louislibraries.org/u?/NOD,285.

Full text
Abstract:
Thesis (Ph. D.)--University of New Orleans, 2005.
Title from electronic submission form. "A dissertation ... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Engineering and Applied Science"--Dissertation t.p. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
31

Ricklin, Nathan D. "Time varying channels characterization, estimation, and detection /." Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3404890.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed June 23, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (leaves 98-102).
APA, Harvard, Vancouver, ISO, and other styles
32

Törnqvist, David. "Estimation and Detection with Applications to Navigation." Doctoral thesis, Linköpings universitet, Reglerteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-14956.

Full text
Abstract:
The ability to navigate in an unknown environment is an enabler for truly autonomous systems. Such a system must be aware of its relative position to the surroundings using sensor measurements. It is instrumental that these measurements are monitored for disturbances and faults. Having correct measurements, the challenging problem for a robot is to estimate its own position and simultaneously build a map of the environment. This problem is referred to as the Simultaneous Localization and Mapping (SLAM) problem. This thesis studies several topics related to SLAM: on-board sensor processing, exploration and disturbance detection. The particle filter (PF) solution to the SLAM problem is commonly referred to as FastSLAM and has been used extensively for ground robot applications. More complex vehicle models, for example for flying robots, extend the state dimension of the vehicle model and make the existing solution computationally infeasible. The factorization of the problem made in this thesis allows for a computationally tractable solution. Disturbance detection for magnetometers and detection of spurious features in image sensors must be done before these sensor measurements can be used for estimation. Disturbance detection based on comparing a batch of data with a model of the system using the generalized likelihood ratio test is considered. There are two approaches to this problem. One is based on the traditional parity space method, where the influence of the initial state is removed by projection, and the other on combining prior information with data in the batch. An efficient parameterization of incipient faults is given which is shown to improve the results considerably. Another common situation in robotics is to have different sampling rates of the sensors. More complex sensors such as cameras often have a slower update rate than accelerometers and gyroscopes.
An algorithm for this situation is derived for a class of models with linear Gaussian dynamic model and sensors with different sampling rates, one slow with a nonlinear and/or non-Gaussian measurement relation and one fast with a linear Gaussian measurement relation. For this case, the Kalman filter is used to process the information from the fast sensor and the information from the slow sensor is processed using the PF. The problem formulation covers the important special case of fast dynamics and one slow sensor, which appears in many navigation and tracking problems. Vision based target tracking is another important estimation problem in robotics. Distributed exploration with multi-aircraft flight experiments has demonstrated localization of a stationary target with estimate covariance on the order of meters. Grid-based estimation as well as the PF have been examined.
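The particle filter underlying the FastSLAM approach mentioned above can be sketched in its minimal scalar bootstrap form. This is illustration only, far from the Rao-Blackwellized SLAM filter of the thesis; the random-walk model and all noise levels are invented:

```python
import math
import random

def particle_filter(ys, n_p=500, q=0.5, r=1.0, seed=0):
    """Bootstrap PF for x_t = x_{t-1} + N(0, q^2), y_t = x_t + N(0, r^2)."""
    rng = random.Random(seed)
    parts = [rng.gauss(0, 1) for _ in range(n_p)]
    estimates = []
    for y in ys:
        parts = [p + rng.gauss(0, q) for p in parts]                 # propagate
        w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in parts]     # weight
        estimates.append(sum(wi * p for wi, p in zip(w, parts)) / sum(w))
        parts = rng.choices(parts, weights=w, k=n_p)                 # resample
    return estimates

# Simulate a random walk observed in noise, then filter it.
sim = random.Random(42)
xs, x = [], 0.0
for _ in range(100):
    x += sim.gauss(0, 0.5)
    xs.append(x)
ys = [xi + sim.gauss(0, 1.0) for xi in xs]
est = particle_filter(ys)
rmse = math.sqrt(sum((e - xi) ** 2 for e, xi in zip(est, xs)) / len(xs))
```

The filtered RMSE lands well below the raw observation noise (std 1.0), close to what the Kalman filter would achieve for this linear Gaussian toy model.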
The third article in this thesis is included with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Linköping University's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to pubs-permissions@ieee.org. By choosing to view this material, you agree to all provisions of the copyright laws protecting it.Please be advised that wherever a copyright notice from another organization is displayed beneath a figure, a photo, a videotape or a Powerpoint presentation, you must get permission from that organization, as IEEE would not be the copyright holder.
APA, Harvard, Vancouver, ISO, and other styles
33

Borah, Deva Kanta, and dborah@nmsu edu. "Detection and Estimation in Digital Wireless Communications." The Australian National University. Research School of Information Sciences and Engineering, 2000. http://thesis.anu.edu.au./public/adt-ANU20050506.015503.

Full text
Abstract:
This thesis investigates reliable data communication techniques for wireless channels. The problem of data detection at the receiver is considered and several novel detectors and parameter estimators are presented.

It is shown that by using a noise-limiting prefilter, with a spectral support at least equal to the signal part of the received signal, and sampling its output at the Nyquist rate, a set of sufficient statistics for maximum likelihood sequence detection (MLSD) is obtained.

Observing that the time-variations of the multipaths in a wireless channel are bandlimited, channel taps are closely approximated as polynomials in time. Using this representation, detection techniques for frequency-flat and frequency-selective channels are obtained. The proposed polynomial predictor based sequence detector (PPSD) for frequency-flat channels is similar in structure to the MLSD that employs channel prediction. However, the PPSD uses a priori known polynomial based predictor taps. It is observed that the PPSD, without any explicit knowledge of the channel autocovariance, performs close to the innovations based MLSD.

New techniques for frequency-selective channel estimation are presented. They are based on a rectangular windowed least squares algorithm, and they employ a polynomial model of the channel taps. A recursive form of the least squares algorithm with orthonormal polynomial basis vectors is developed. Given the appropriate window size and polynomial model order, the proposed method outperforms the conventional least mean squares (LMS) and the exponentially weighted recursive least squares (EW-RLS) algorithms. Novel algorithms are proposed to obtain near optimal window size and polynomial model order.

The improved channel estimation techniques developed for frequency-selective channels are incorporated into sliding window and fixed block channel estimators. The sliding window estimator uses received samples over a time window to calculate the channel taps. Every symbol period, the window is moved along another symbol period and a new estimate is calculated. A fixed block estimator uses all received samples to estimate the channel taps throughout a data packet, all at once. In fast fading and at a high signal-to-noise ratio (SNR), both techniques outperform the MLSD receivers which employ the LMS algorithm for channel estimation.

An adaptive multiuser detector, optimal in the weighted least squares (WLS) sense, is derived for direct sequence code division multiple access (DS-CDMA) systems. In a multicellular configuration, this detector jointly detects the users within the cell of interest, while suppressing the intercell interferers in a WLS sense. In the absence of intercell interferers, the detector reduces to the well-known multiuser MLSD structure that employs a bank of matched filters. The relationship between the proposed detector and a centralized decision feedback detector is derived. The effects of narrowband interference are investigated and compared with the multiuser MLSD.

Since in a fast time-varying channel the LMS or the EW-RLS algorithms cannot track the channel variations effectively, the receiver structures proposed for single user communications are extended to multiuser DS-CDMA systems. The fractionally-chip-spaced channel taps of the convolution of the chip waveform with the multipath channel are estimated. Linear equalizer, decision feedback equalizer and MLSDs are studied, and under fast fading, as the SNR increases, they are found to outperform the LMS based adaptive minimum mean squared error (MMSE) linear receivers.
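The rectangular windowed least squares idea with a polynomial tap model can be sketched as follows. This is a toy scalar-tap version with invented parameters, not the recursive orthonormal-basis form developed in the thesis:

```python
import numpy as np

def windowed_poly_estimate(obs, window=41, order=2):
    """Rectangular-window LS fit of a polynomial tap model; returns the
    estimated tap value at the center sample of each window."""
    half = window // 2
    t = np.arange(window) - half
    est = []
    for k in range(half, len(obs) - half):
        coeffs = np.polyfit(t, obs[k - half:k + half + 1], order)
        est.append(np.polyval(coeffs, 0))        # polynomial evaluated at center
    return np.array(est)

# Slowly fading real-valued tap observed in noise (all numbers illustrative).
rng = np.random.default_rng(0)
n = 400
tap = np.cos(2 * np.pi * 0.005 * np.arange(n))   # bandlimited time variation
obs = tap + 0.3 * rng.standard_normal(n)         # noisy tap observations
est = windowed_poly_estimate(obs)
truth = tap[20:-20]
mse_raw = np.mean((obs[20:-20] - truth) ** 2)    # ~0.09, the noise variance
mse_est = np.mean((est - truth) ** 2)            # much smaller after smoothing
```

Because the fading is bandlimited, a low-order polynomial tracks the tap over the window with little bias while averaging down most of the noise, which is the rationale the abstract gives for the polynomial model.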
APA, Harvard, Vancouver, ISO, and other styles
34

Abler, Craig Bennett 1975. "Spectral envelope estimation for transient event detection." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47690.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 133-134).
A Nonintrusive Load Monitor (NILM) is a device that determines the operating schedule of electric loads by properly locating and identifying transient events in the spectral envelopes of the current waveform measured at the utility service entry. The spectral envelopes of the current waveform are the coefficients of its time-varying Fourier series representation and as such can be estimated by low-pass filtering the current mixed with appropriate basis sinusoids. Spectral envelope estimators have been termed pre-processors. In this thesis, two pre-processors were designed. The first utilizes magic sine waves as the basis functions instead of sinusoids. The second is a digital pre-processor developed on a digital signal processor. The digital design was used in complete NILM platforms and its performance is analyzed to determine the quality of the envelopes produced. Finally, avenues for further work on the digital pre-processing unit are suggested.
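The estimation step described here (mix the current with basis sinusoids, then low-pass filter) can be sketched in a few lines. The 60 Hz mains frequency, sampling rate, synthetic current and the one-cycle moving-average low-pass are illustrative choices, not the thesis's pre-processor design:

```python
import numpy as np

def spectral_envelope(i_t, fs, f0, f_fund=60.0):
    """In-phase/quadrature envelopes of current i(t) at harmonic f0: mix with
    basis sinusoids, then low-pass (here: average over one fundamental cycle)."""
    t = np.arange(len(i_t)) / fs
    taps = int(round(fs / f_fund))
    lp = np.ones(taps) / taps                    # crude moving-average low-pass
    a = np.convolve(2 * i_t * np.sin(2 * np.pi * f0 * t), lp, mode="same")
    b = np.convolve(2 * i_t * np.cos(2 * np.pi * f0 * t), lp, mode="same")
    return a, b

fs = 6000.0
t = np.arange(6000) / fs                          # one second of "current"
i_t = 3.0 * np.sin(2 * np.pi * 60 * t) + 1.0 * np.sin(2 * np.pi * 180 * t)
a60, _ = spectral_envelope(i_t, fs, 60)           # fundamental envelope, ~3
a180, _ = spectral_envelope(i_t, fs, 180)         # third-harmonic envelope, ~1
```

Averaging over exactly one fundamental cycle cancels every cross-harmonic mixing product, so in the interior of the record the envelopes recover the Fourier coefficients almost exactly; a load transient would show up as a step in these envelopes.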
by Craig Bennett Abler.
S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
35

Qu, Yang. "Mixed Signal Detection, Estimation, and Modulation Classification." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1576615989584971.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Yang, Fan. "Object Detection for Contactless Vital Signs Estimation." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42297.

Full text
Abstract:
This thesis explores the contactless estimation of people's vital signs. We designed two camera-based systems and applied object detection algorithms to locate the regions of interest where vital signs are estimated. With the development of deep learning, the Convolutional Neural Network (CNN) model now has many real-world applications. We applied CNN-based frameworks to different types of camera-based systems and improved the efficiency of contactless vital signs estimation. In the field of medical healthcare, contactless monitoring has drawn a lot of attention in recent years because of the wide use of different sensors. However, most of the methods are still in the experimental phase and have never been used in real applications. We were interested in monitoring the vital signs of patients lying in bed or sitting around the bed at a hospital. This required using sensors with a range of 2 to 5 meters. We developed a system based on a depth camera for detecting people's chest area and a radar for estimating the respiration signal. We applied a CNN-based object detection method to locate the position of a subject lying in the bed covered with a blanket, and the respiratory-like signal is estimated from the radar device based on the detected subject's location. We also created a manually annotated dataset containing 1,320 depth images. In each depth image, the silhouette of the subject's upper body is annotated, as well as its class. In addition, a small subset of the depth images is also labeled with four keypoints for positioning people's chest area. This substantial dataset is built on data collected from anonymous patients at the hospital. Another problem in the field of human vital signs monitoring is that systems seldom monitor multiple vital signs at the same time.
Although a few recent works attempt to address this problem, they are all still prototypes and have many limitations, such as short operating distance. In this application, we focused on contactless estimation of subjects' temperature, breathing rate and heart rate at different distances, with or without a mask. We developed a system based on a thermal and an RGB camera, and also explored the feasibility of CNN-based object detection algorithms for detecting the vital signs from human faces with specifically defined RoIs based on our thermal camera system. We proposed methods to estimate respiratory rate and heart rate from the thermal and RGB videos. The mean absolute error (MAE) between the estimated HR using the proposed method and the baseline HR for all subjects at different distances is 4.24 ± 2.47 beats per minute; the MAE between the estimated RR and the reference RR for all subjects at different distances is 1.55 ± 0.78 breaths per minute.
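The final step of such pipelines, turning the extracted chest or face signal into a rate, is often just a dominant-frequency search over a physiological band. A minimal sketch, where the sampling rate, band limits and synthetic signal are invented for illustration and are not the thesis's method:

```python
import numpy as np

def rate_from_signal(x, fs, band=(0.1, 0.6)):
    """Rate in cycles/min from the dominant FFT peak inside `band` (Hz)."""
    x = np.asarray(x) - np.mean(x)                 # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec = np.abs(np.fft.rfft(x))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(spec[mask])]

# 60 s of a synthetic chest-displacement signal at 0.25 Hz (15 breaths/min).
fs = 20.0
t = np.arange(int(60 * fs)) / fs
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.standard_normal(t.size)
rr = rate_from_signal(signal, fs)                  # ~15 breaths per minute
```

With a 60 s window the frequency resolution is 1/60 Hz, i.e. 1 breath/min, which is why longer observation windows give finer rate estimates.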
APA, Harvard, Vancouver, ISO, and other styles
37

Ragy, Sammy. "Resources in quantum imaging, detection and estimation." Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29097/.

Full text
Abstract:
The research included in this thesis comes in two main bodies. In the first, the focus is on intensity interferometric schemes, and I attempt to identify the types of correlations dominant in their operation. This starts with the, now rather historical, Hanbury Brown and Twiss setup from the 1950s and progresses to more recent interests such as ghost imaging and a variant of `quantum illumination', which is a quantum-enhanced detection scheme. These schemes are considered in the continuous variable regime, with Gaussian states in particular. Intensity interferometry has been the cause of a number of disputes between quantum opticians over the past 60 years and I weigh in on the arguments using relatively recent techniques from quantum information theory. In the second half, the focus turns away from the optical imaging and detection schemes, and onto quantum estimation -- multiparameter quantum estimation to be precise. This is an intriguing area of study where one has to carefully juggle tradeoffs in choosing both the optimal measurement and optimal state for performing an estimation in two or more parameters. I lay out a framework for circumventing some of the difficulties involved in this and apply it to several physical examples, revealing some interesting and at times counterintuitive features of multiparameter estimation.
APA, Harvard, Vancouver, ISO, and other styles
38

Shen, Juei-Chin. "Detection and estimation techniques in cognitive radio." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/detection-and-estimation-techniques-in-cognitive-radio(8d246e71-4484-4843-a1f4-4cce4504dd1c).html.

Full text
Abstract:
Faced with imminent spectrum scarcity largely due to inflexible licensed band arrangements, cognitive radio (CR) has been proposed to facilitate higher spectrum utilization by allowing cognitive users (CUs) to access the licensed bands without causing harmful interference to primary users (PUs). To achieve this without the aid of PUs, the CUs have to perform spectrum sensing, reliably detecting the presence or absence of PU signals. Without reliable spectrum sensing, the discovery of spectrum opportunities will be inefficient, resulting in limited utilization enhancement. This dissertation examines three major techniques for spectrum sensing: matched filtering, energy detection, and cyclostationary feature detection. After evaluating the advantages and disadvantages of these techniques, we narrow our research to a focus on cyclostationary feature detection (CFD). Our first contribution is to boost the performance of an existing and prevailing CFD method. This boost is achieved by our proposed optimal and sub-optimal schemes for identifying the best hypothesis test points. The optimal scheme incorporates prior knowledge of the PU signals into test point selection, while the sub-optimal scheme circumvents the need for this knowledge. The results show that our proposed schemes can significantly outperform other existing schemes. Secondly, in view of multi-antenna deployment in CR networks, we generalize the CFD method to the multi-antenna case. This requires effort to justify the joint asymptotic normality of vector-valued statistics and to show the consistency of covariance estimates. Meanwhile, to effectively integrate the received multi-antenna signals, a novel cyclostationary-feature-based channel estimation is devised to obtain channel side information. The simulation results demonstrate that the errors of the channel estimates diminish sharply as the sample size or the average signal-to-noise ratio increases.
In addition, no prior research has analytically assessed CFD performance over fading channels. We contribute to such analysis by providing tight bounds on the average detection probability over Nakagami fading channels and tight approximations of diversity reception performance subject to independent and identically distributed Rayleigh fading. For successful coexistence with the primary system, interference management in cognitive radio networks plays a prominent part. Normally, certain average or peak transmission power constraints have to be placed on the CR system. Depending on the available channel side information and the fading type (fast or slow) experienced by the PU receiver, we derive the corresponding constraints that should be imposed. These constraints indicate that the second moment of the interference channel gain is an important parameter when CUs allocate transmission power. Hence, we develop a cooperative estimation procedure that provides a robust estimate of this parameter based on geolocation information. With little aid from the primary system, the success of this procedure relies on statistically correlated channel measurements from cooperative CUs. The robustness of our proposed procedure to the uncertainty of geolocation information is analytically presented. Simulation results show that this procedure can achieve better mean-square error performance than other existing estimates, and that the effects of using inaccurate geolocation information diminish steadily as the number of cooperative cognitive users increases.
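Of the three sensing techniques the abstract surveys, energy detection is the simplest to state: compare the average received energy against a threshold tied to the noise power. The sketch below is purely illustrative context (it is not the thesis's CFD method, and the threshold factor and test signals are assumptions):

```python
import numpy as np

def energy_detector(samples, noise_var, threshold_factor=1.5):
    """Declare a primary-user signal present if the average energy of
    the received samples exceeds a threshold scaled by the noise
    variance. Illustrative only; the thesis focuses on cyclostationary
    feature detection, which is more robust at low SNR."""
    test_stat = np.mean(np.abs(samples) ** 2)
    return bool(test_stat > threshold_factor * noise_var)

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 1000)                     # noise-only observation
signal = noise + 2.0 * np.sin(0.1 * np.arange(1000))   # PU signal + noise
print(energy_detector(noise, noise_var=1.0))    # → False
print(energy_detector(signal, noise_var=1.0))   # → True
```

The weakness motivating CFD is visible in the threshold: it requires knowing `noise_var`, and any uncertainty in it degrades detection, whereas cyclostationary features discriminate signal structure from stationary noise.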
APA, Harvard, Vancouver, ISO, and other styles
39

Peng, Qinmu. "Visual attention: saliency detection and gaze estimation." HKBU Institutional Repository, 2015. https://repository.hkbu.edu.hk/etd_oa/207.

Full text
Abstract:
Visual attention is an important characteristic of the human vision system, which is capable of allocating cognitive resources to selected information. Many researchers have studied this mechanism and achieved a wide range of successful applications. Generally, visual attention research involves two tasks: visual saliency detection and gaze estimation. The former is normally described as the distinctiveness or prominence resulting from a visual stimulus: given images or videos as input, saliency detection methods try to simulate the human vision system, predicting and locating the salient parts. The latter uses a physical device to track eye movements and estimate gaze points. Saliency detection is an effective technique for studying and mimicking the mechanism of the human vision system. Most saliency models can predict visual saliency with the boundary or rough location of the true salient object, but miss appearance or shape information. Moreover, they pay little attention to image quality problems such as low resolution or noise. To handle these problems, this thesis proposes to model visual saliency from local and global perspectives for better detection. Combining local and global saliency schemes that employ different visual cues makes full use of their respective advantages in computing saliency. Compared with existing models, the proposed method provides better saliency with more appearance and shape information, and works well even on low-resolution or noisy images. The experimental results demonstrate the superiority of the proposed algorithm. Next, video saliency detection is another issue in visual saliency computation. Numerous works have been proposed to extract video saliency for object detection tasks.
However, one might not be able to obtain desirable saliency for inferring the region of foreground objects when the video presents low contrast or a complicated background. Thus, this thesis develops a salient object detection approach with less demanding assumptions, which gives higher detection performance. The method computes the visual saliency in each frame using a weighted multiple manifold ranking algorithm. It then computes motion cues to estimate the motion saliency and a localization prior. In a new energy function, the data term depends on the visual saliency and the localization prior, and the smoothness term enforces constraints in time and space. Compared to existing methods, our approach automatically segments the persistent foreground object while preserving its potential shape. We apply our method to challenging benchmark videos, and show competitive or better results than the existing counterparts. Additionally, to address the problem of gaze estimation, we present a low-cost and efficient approach to obtain the gaze point. Unlike eye gaze estimation techniques that require specific hardware, e.g. an infrared high-resolution camera and infrared light sources, as well as a cumbersome calibration process, we concentrate on visible imaging and present an approach for gaze estimation using a web camera in a desktop environment. We combine intensity energy and edge strength to locate the iris center and utilize a piecewise eye corner detector to detect the eye corners. To compensate for gaze error caused by head movement, we adopt a sinusoidal head model (SHM) to simulate the 3D head shape, and propose adaptive weighted facial features embedded in the pose from the orthography and scaling with iterations algorithm (AWPOSIT), whereby the head pose can be estimated. Consequently, the gaze estimate is obtained by integrating the eye vector and head movement information.
The proposed method is not sensitive to lighting conditions, and the experimental results show the efficacy of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
40

Qin, Li. "Nonparametric estimation in actuarial ruin theory." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Khorramzadeh, Yasamin. "Network Reliability: Theory, Estimation, and Applications." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/64383.

Full text
Abstract:
Network reliability is the probabilistic measure that determines whether a network remains functional when its elements fail at random. The definition of functionality varies depending on the problem of interest, so network reliability has much potential as a unifying framework to study a broad range of problems arising in complex network contexts. However, since its introduction in the 1950s, network reliability has remained more of an interesting theoretical construct than a practical tool. In large part, this is due to well-established complexity costs for both its evaluation and approximation, which have led to the classification of network reliability as an NP-hard problem. In this dissertation we present an algorithm to estimate network reliability and then utilize it to evaluate the reliability of large networks under various descriptions of functionality. The primary goal of this dissertation is to pose network reliability as a general scheme that provides a practical and efficiently computable observable to distinguish different networks. Employing this concept, we are able to demonstrate how local structural changes can impose global consequences. We further use network reliability to assess the most critical network entities which ensure a network's reliability. We investigate each of these aspects of reliability by demonstrating some example applications.
Ph. D.
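Because exact evaluation is NP-hard, reliability is typically estimated by sampling. A minimal Monte Carlo sketch for the two-terminal case follows (the graph, failure probability, and trial count are illustrative assumptions, not the dissertation's algorithm):

```python
import random
from collections import deque

def two_terminal_reliability(nodes, edges, s, t, p_fail, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that s and t remain
    connected when each edge fails independently with prob p_fail."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        alive = [e for e in edges if rng.random() > p_fail]
        adj = {v: [] for v in nodes}
        for u, v in alive:
            adj[u].append(v)
            adj[v].append(u)
        seen, q = {s}, deque([s])      # BFS over surviving edges
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        hits += t in seen
    return hits / trials

# 4-node cycle: two edge-disjoint paths between opposite corners,
# so exact reliability is 2(0.9^2) - 0.9^4 = 0.9639 at p_fail = 0.1
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(two_terminal_reliability(nodes, edges, 0, 2, p_fail=0.1))
```

The same sampler generalizes to other functionality definitions by swapping the BFS connectivity check for any predicate on the surviving subgraph.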
APA, Harvard, Vancouver, ISO, and other styles
42

Yang, Qian. "Stock bubbles : The theory and estimation." Thesis, Brunel University, 2006. http://bura.brunel.ac.uk/handle/2438/3597.

Full text
Abstract:
This work attempts to make a breakthrough in the empirical research of market inefficiency by introducing a new approach, the value frontier method, to estimate the magnitude of stock bubbles, a topic that has attracted considerable research attention. The theoretical framework stems from the basic argument of Blanchard & Watson (1982) that the rational expectation of asset value should equal the fundamental value of the stock, and the argument of Scheinkman & Xiong (2003) and Hong, Scheinkman & Xiong (2006) that bubbles are formed by heterogeneous beliefs, which can be refined as the optimism effect and the resale option effect. The applications of the value frontier methodology are demonstrated in this work at the market level and the firm level respectively. The estimated bubbles at the market level enable us to analyse bubble changes over time among 37 countries across the world, which helps further examine the relationship between economic factors (e.g. inflation) and bubbles. Firm-level bubbles are estimated in two developed markets, the US and the UK, as well as one emerging market, China. We found that the market-average bubble is less volatile than industry-level bubbles. This finding provides a compelling explanation for the failure of many existing studies to confirm the existence of bubbles at the whole-market level. In addition, the significant decreasing trend of Chinese bubbles and their tendency to co-move with the UK and US markets offer evidence in support of our argument that even in an immature market, investors can improve their investment perceptions towards rationality by learning not only from previous experience but also from other open markets.
Furthermore, following the arguments of “sustainable bubbles” from Binswanger (1999) and Scheinkman & Xiong (2003), we reinforce their claims at the end that a market with bubbles can also be labelled efficient; in particular, it has three forms of efficiency. First, a market without bubbles is completely efficient from the perspective of investors’ responsiveness to given information; secondly, a market with “sustainable bubbles” (bubbles that co-move with the economy), which results from rational responses to economic conditions, is in the strong form of information-responsive efficiency; thirdly, a market with “non-sustainable bubbles”, i.e. the bubble changes are not linked closely with economic foundations, is in the weak form of information-responsive efficiency.
APA, Harvard, Vancouver, ISO, and other styles
43

Whaley, Dewey Lonzo. "The Interquartile Range: Theory and Estimation." Digital Commons @ East Tennessee State University, 2005. https://dc.etsu.edu/etd/1030.

Full text
Abstract:
The interquartile range (IQR) is used to describe the spread of a distribution. In an introductory statistics course, the IQR might be introduced as simply the “range within which the middle half of the data points lie.” In other words, it is the distance between the two quartiles, IQR = Q3 - Q1. We will compute the population IQR, the expected value, and the variance of the sample IQR for various continuous distributions. In addition, a bootstrap confidence interval for the population IQR will be evaluated.
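The two quantities the abstract studies, the sample IQR and a percentile bootstrap confidence interval for the population IQR, can be sketched in a few lines (the data vector is hypothetical, and quartiles here use NumPy's default linear interpolation, one of several conventions):

```python
import numpy as np

def iqr(x):
    """Sample interquartile range: Q3 - Q1."""
    q1, q3 = np.percentile(x, [25, 75])
    return q3 - q1

def bootstrap_iqr_ci(x, level=0.95, n_boot=5000, seed=0):
    """Percentile bootstrap confidence interval for the population IQR:
    resample with replacement, recompute the IQR each time, and take
    the empirical quantiles of the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    stats = [iqr(rng.choice(x, size=x.size, replace=True))
             for _ in range(n_boot)]
    alpha = (1 - level) / 2
    lo, hi = np.percentile(stats, [100 * alpha, 100 * (1 - alpha)])
    return float(lo), float(hi)

data = np.array([2.0, 4.0, 7.0, 1.0, 9.0, 3.0, 5.0, 8.0, 6.0, 10.0])
print(iqr(data))            # sample IQR of 1..10 → 4.5
print(bootstrap_iqr_ci(data))
```

For small samples like this, the bootstrap interval is wide, which mirrors the variance results the thesis derives for the sample IQR.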
APA, Harvard, Vancouver, ISO, and other styles
44

Zemp, Roger James. "Detection theory in ultrasonic imaging /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2004. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Sadough, Seyed Mohammad Sajad. "Ultra wideband OFDM systems : channel estimation and improved detection accounting for estimation inaccuracies." Paris 11, 2008. http://www.theses.fr/2008PA112001.

Full text
Abstract:
The aim of this thesis is to study the problem of iterative data detection in an ultra wideband (UWB) OFDM system, where the receiver has only an imperfect (and possibly poor) estimate of the unknown channel parameters. First, we propose an efficient receiver jointly estimating the channel and the transmitted symbols in an iterative manner. This receiver is based on a wavelet representation of the unknown channel and exploits the sparseness of UWB channels in the wavelet domain to reduce the receiver’s computational complexity. Second, we rely on the statistics characterizing the quality of the channel estimate as a means to integrate imperfect channel knowledge into the design of iterative receivers. In this way, we formulate an improved maximum likelihood (ML) detection metric taking into account the presence of channel estimation errors. A modified iterative MAP detector is derived by an appropriate use of this metric. The results are compared to those obtained with the classical mismatched ML detector, which uses the channel estimate as if it were the perfect channel. Furthermore, we calculate the throughputs achieved by both the improved and the mismatched ML detectors, in terms of achievable outage rates. Finally, we propose an improved low-complexity iterative detector based on soft parallel interference cancellation and linear MMSE filtering, where we take into account the presence of channel estimation errors in the formulation of the detector. The important point is that the performance improvements reported in this thesis are obtained while imposing practically no additional complexity on the receiver.
APA, Harvard, Vancouver, ISO, and other styles
46

Al, Haj Murad. "Looking at Faces: Detection, Tracking and Pose Estimation." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/113482.

Full text
Abstract:
Humans can effortlessly perceive faces, follow them over space and time, and decode their rich content, such as pose, identity and expression. However, despite many decades of research on automatic facial perception in areas like face detection, expression recognition, pose estimation and face recognition, and despite many successes, a complete solution remains elusive. Automatic facial perception encompasses many important and challenging areas of computer vision and its applications span a very wide range; these applications include video surveillance, human-computer interaction, content-based image retrieval, biometric identification, video coding and age/gender recognition. This thesis is dedicated to three problems in automatic face perception, namely face detection, face tracking and pose estimation. In face detection, an initial simple model is presented that uses pixel-based heuristics to segment skin locations and hand-crafted rules to return the locations of the faces present in the image. Different colorspaces are studied to judge whether a colorspace transformation can aid skin color detection. Experimental results show that the separability does not increase in other colorspaces when compared to the RGB space. The output of this study is used in the design of a more complex face detector that is able to successfully generalize to different scenarios. In face tracking, we present a framework that combines estimation and control in a joint scheme to track a face with a single pan-tilt-zoom camera. An extended Kalman filter is used to jointly estimate the object world-coordinates and the camera position. The output of the filter is used to drive a PID controller in order to reactively track a face, taking correct decisions when to zoom-in on the face to maximize the size and when to zoom-out to reduce the risk of losing the target. 
While this work is mainly motivated by tracking faces, it can easily be applied atop any detector to track different objects. The applicability of the method is demonstrated on simulated as well as real-life scenarios. The last and most important part of this thesis is dedicated to monocular head pose estimation. In most prior work on head pose estimation, the positions of the faces on which the pose is to be estimated are specified manually. Therefore, the results are reported without studying the effect of misalignment. Regression, as well as classification, algorithms are generally sensitive to localization error: if the object is not accurately registered with the learned model, the comparison between the object features and the model features leads to errors. In this chapter, we propose a method based on partial least squares regression to estimate pose and solve the alignment problem simultaneously. The contributions of this part are two-fold: 1) we show that the proposed method achieves better than state-of-the-art results on the estimation problem, and 2) we develop a technique to reduce misalignment based on the learned PLS factors that outperforms multiple instance learning (MIL) without the need for any re-training or the inclusion of misaligned samples in the training process, as normally done in MIL.
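The reactive tracking loop described above feeds the EKF's pixel-error estimate into a PID controller that drives the pan-tilt-zoom commands. A minimal discrete PID sketch follows (the gains and the 30 fps time step are illustrative assumptions, not values from the thesis):

```python
class PID:
    """Minimal discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        """One control update: proportional + integral + derivative terms."""
        self.integral += error * self.dt
        deriv = (0.0 if self.prev_error is None
                 else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive the pan command from the horizontal pixel offset of the face
# centre relative to the image centre (hypothetical gains, 30 fps loop).
pan = PID(kp=0.5, ki=0.05, kd=0.1, dt=1 / 30)
error_px = 40.0  # face centre 40 px right of image centre
print(pan.step(error_px))
```

Analogous controllers on the vertical offset and the face size would drive the tilt and zoom axes; the EKF's joint state estimate is what keeps the error signal smooth enough for such a simple controller to act on.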
APA, Harvard, Vancouver, ISO, and other styles
47

Ni, Xuelei. "New results in detection, estimation, and model selection." Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-12042005-190654/.

Full text
Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2006.
Xiaoming Huo, Committee Chair ; C. F. Jeff Wu, Committee Member ; Brani Vidakovic, Committee Member ; Liang Peng, Committee Member ; Ming Yuan, Committee Member.
APA, Harvard, Vancouver, ISO, and other styles
48

Isaksson, Marcus. "Face Detection and Pose Estimation using Triplet Invariants." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1060.

Full text
Abstract:

Face detection and pose estimation are two widely studied problems - mainly because of their use as subcomponents in important applications, e.g. face recognition. In this thesis I investigate a new approach to the general problem of object detection and pose estimation and apply it to faces. Face detection can be considered a special case of this general problem, but is complicated by the fact that faces are non-rigid objects. The basis of the new approach is the use of scale and orientation invariant feature structures - feature triplets - extracted from the image, as well as a biologically inspired associative structure which maps from feature triplets to desired responses (position, pose, etc.). The feature triplets are constructed from curvature features in the image and coded in a way to represent distances between major facial features (eyes, nose and mouth). The final system has been evaluated on different sets of face images.

APA, Harvard, Vancouver, ISO, and other styles
49

Ni, Xuelei. "New results in detection, estimation, and model selection." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/10419.

Full text
Abstract:
This thesis contains two parts: the detectability of convex sets and the study of regression models. In the first part of this dissertation, we investigate the problem of the detectability of an inhomogeneous convex region in a Gaussian random field. The first proposed detection method relies on checking a constructed statistic on each convex set within an n × n image, which is proven to be inapplicable. We then consider using h(v)-parallelograms as a surrogate, which leads to a multiscale strategy. We prove that 2/9 is the minimum proportion of the maximally embedded h(v)-parallelogram in a convex set. This constant indicates the effectiveness of the above-mentioned multiscale detection method. In the second part, we study robustness, optimality, and computation for regression models. First, for robustness, M-estimators in a regression model where the residuals are of unknown but stochastically bounded distribution are analyzed. An asymptotic minimax M-estimator (RSBN) is derived. Simulations demonstrate its robustness and advantages. Second, for optimality, the analysis of least angle regression inspired us to consider the conditions under which a vector is the solution of two optimization problems. Of these two problems, one can be solved by certain stepwise algorithms; the other is the objective function in many existing subset selection criteria (including Cp, AIC, BIC, MDL, RIC, etc.) and is proven to be NP-hard. Several conditions are derived; they tell us when a vector is the common optimizer. Finally, extending the above idea of finding conditions to exhaustive subset selection in regression, we improve the widely used leaps-and-bounds algorithm (Furnival and Wilson). The proposed method further reduces the number of subsets that need to be considered in the exhaustive subset search by considering not only the residuals but also the model matrix and the current coefficients.
APA, Harvard, Vancouver, ISO, and other styles
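The subset-selection criteria named in this abstract (Cp, AIC, BIC, MDL, RIC) all trade residual fit against model size, and the NP-hardness claim concerns searching over all subsets. A minimal illustrative sketch of exhaustive best-subset search under BIC, with hypothetical data (this is the brute-force baseline the thesis's improved leaps-and-bounds algorithm prunes, not the thesis's own method):

```python
import itertools
import math

def ols_rss(X, y, cols):
    """Residual sum of squares of the least-squares fit on the selected columns,
    via the normal equations solved by Gaussian elimination with partial pivoting."""
    n, k = len(y), len(cols)
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in cols] for a in cols]
    b = [sum(X[i][a] * y[i] for i in range(n)) for a in cols]
    for p in range(k):
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):
        beta[p] = (b[p] - sum(A[p][c] * beta[c] for c in range(p + 1, k))) / A[p][p]
    return sum((y[i] - sum(beta[j] * X[i][cols[j]] for j in range(k))) ** 2
               for i in range(n))

def best_subset_bic(X, y):
    """Exhaustively score every nonempty subset of predictors with BIC
    (n*log(RSS/n) + k*log(n)) and return the minimizer."""
    n, p = len(y), len(X[0])
    best = (math.inf, ())
    for k in range(1, p + 1):
        for cols in itertools.combinations(range(p), k):
            rss = ols_rss(X, y, cols)
            bic = n * math.log(rss / n) + k * math.log(n)
            best = min(best, (bic, cols))
    return best

# Hypothetical example: y depends only on columns 0 and 2, plus small noise.
X = [[i, (i * i) % 7, i % 5] for i in range(12)]
y = [2 * X[i][0] + 3 * X[i][2] + 0.1 * (-1) ** i for i in range(12)]
best_bic, best_cols = best_subset_bic(X, y)
```

The search visits all 2^p - 1 subsets, which is exactly the exponential cost that branch-and-bound methods such as leaps-and-bounds avoid by discarding subsets whose bound on RSS already exceeds the current best.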
50

Ma, Jun. "Channel estimation and signal detection for wireless relay." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37082.

Full text
Abstract:
Wireless relay can be utilized to extend signal coverage, achieve spatial diversity through user cooperation, or shield mobile terminals from adverse channel conditions over the direct link. In a two-hop multi-input multi-output (MIMO) amplify-and-forward (AF) relay system, the overall noise at the destination station (DS) consists of the colored noise forwarded from the relay station (RS) and the local white noise. We propose blind noise-correlation estimation at the DS that utilizes statistics of the broadband relay channel over the RS-DS hop, which effectively improves signal detection at the DS. For further performance improvement, we also propose estimating, at the DS, the two cascaded MIMO relay channels over the source-RS and RS-DS links, based on the overall channel between the source and the DS and the amplifying matrix applied at the RS. To cancel cross-talk interference at a channel-reuse relay station (CRRS), we utilize the random forwarded signals of the CRRS as equivalent pilots for local coupling-channel estimation and achieve a much higher post-cancellation signal-to-interference ratio (SIR) than conventional dedicated-pilot-assisted cancellers, without causing any in-band interference at the DS. When an OFDM-based RS is deployed on a high-speed train to shield mobile terminals from the high Doppler frequency over the direct link, inter-subchannel interference (ICI) mitigation is required at the RS. By utilizing statistics of the channel between the base station and the train, we develop both full-rate and reduced-rate OFDM transmission with inherent ICI self-cancellation via transmit and/or receive preprocessing, which achieves significant performance improvement over existing ICI self-cancellation schemes.
APA, Harvard, Vancouver, ISO, and other styles
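The colored-plus-white noise structure described in this abstract can be made concrete: if the RS applies an amplifying matrix G and the RS-DS channel is H2, the overall destination noise H2·G·n_R + n_D has covariance σ_R²·(H2G)(H2G)ᵀ + σ_D²·I, which is generally non-diagonal, i.e. colored. A minimal sketch with hypothetical 2×2 channel values (illustrating why the DS noise is correlated, not the thesis's blind estimator):

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def af_noise_cov(H2, G, var_relay, var_dest):
    """Covariance of the overall DS noise H2 @ G @ n_R + n_D, assuming the relay
    noise n_R and local noise n_D are white with variances var_relay and var_dest."""
    A = matmul(H2, G)              # cascade seen by the forwarded relay noise
    C = matmul(A, transpose(A))    # (H2 G)(H2 G)^T
    n = len(C)
    return [[var_relay * C[i][j] + (var_dest if i == j else 0.0) for j in range(n)]
            for i in range(n)]

H2 = [[1.0, 0.5], [0.2, 1.0]]   # hypothetical RS-DS MIMO channel
G = [[0.8, 0.0], [0.0, 0.8]]    # hypothetical amplifying matrix at the RS
cov = af_noise_cov(H2, G, var_relay=1.0, var_dest=0.5)
```

The nonzero off-diagonal entries of `cov` are exactly the noise correlation that the proposed blind estimation recovers at the DS; a detector that whitens with this covariance outperforms one that treats the overall noise as white.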
