Dissertations / Theses on the topic 'Signals’ analysis methods'

To see the other types of publications on this topic, follow the link: Signals’ analysis methods.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Signals’ analysis methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Sava, Herkole P. "Spectral analysis of phonocardiographic signals using advanced parametric methods." Thesis, University of Edinburgh, 1995. http://hdl.handle.net/1842/12903.

Full text
Abstract:
The research detailed in this thesis investigates the performance of several advanced signal processing techniques when analysing heart sounds, and investigates the feasibility of such methods for monitoring the condition of bioprosthetic heart valves. A data-acquisition system was designed which records and digitises heart sounds in a wide variety of cases, ranging from sounds produced by native heart valves to mechanical prosthetic heart valves. Heart sounds were recorded from more than 150 patients, including subjects with normal and abnormal native, bioprosthetic, and mechanical prosthetic heart valves. The acquired sounds were pre-processed in order to extract the signal of interest. Various spectral estimation techniques were investigated with a view to assessing their performance and suitability when analysing the first and second heart sounds. The performance of the following methods is analysed: the classical Fourier transform, autoregressive modelling based on two different approaches, autoregressive-moving-average modelling, and Prony's spectral method. In general, it was found that all parametric methods based on the singular value decomposition technique produce a more accurate spectral representation than the conventional methods (i.e. Fourier transform and autoregressive modelling) in terms of spectral resolution; among these, Prony's method performs best. In addition, a modified forward-backward overdetermined Prony's algorithm is proposed for analysing heart sounds, which produces an improvement of more than 10% over previous methods in terms of normalised mean-square error. Furthermore, a new method for estimating the model order in the case of heart sounds is proposed, based on the distribution of the eigenvalues of the data matrix.
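The parametric estimators compared in this abstract share one core idea: fit a signal model, then read the spectrum off the model parameters. As a minimal illustration (not the thesis's algorithms), the sketch below fits an autoregressive model to a noisy tone via the Yule-Walker equations and locates the spectral peak; the signal, model order and noise level are assumed for the example, and frequencies are normalized (cycles/sample).

```python
import numpy as np

def yule_walker_psd(x, order, ngrid=512):
    """AR spectral estimate: solve the Yule-Walker equations for the AR
    coefficients, then evaluate sigma^2 / |A(e^{j2*pi*f})|^2 on a grid."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, -r[1:])            # AR coefficients a_1..a_p
    sigma2 = r[0] + np.dot(a, r[1:])          # innovation variance
    freqs = np.linspace(0.0, 0.5, ngrid)
    A = np.ones(ngrid, dtype=complex)
    for k, ak in enumerate(a, start=1):
        A += ak * np.exp(-2j * np.pi * freqs * k)
    return freqs, sigma2 / np.abs(A) ** 2

# A noisy tone at normalized frequency 0.1: the AR(4) peak should find it.
rng = np.random.default_rng(0)
n = np.arange(512)
x = np.sin(2 * np.pi * 0.1 * n) + 0.3 * rng.standard_normal(512)
freqs, psd = yule_walker_psd(x, order=4)
f_peak = freqs[np.argmax(psd)]
```

The sharp AR peak at the tone illustrates why parametric methods can outresolve the raw Fourier transform on short records such as individual heart sounds.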
APA, Harvard, Vancouver, ISO, and other styles
2

Balli, Tugce. "Nonlinear analysis methods for modelling of EEG and ECG signals." Thesis, University of Essex, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.528852.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pham, Duong Hung. "Contributions to the analysis of multicomponent signals : synchrosqueezing and associated methods." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM044/document.

Full text
Abstract:
Many physical signals, including audio (music, speech), medical data (ECG, PCG), marine mammal calls and gravitational waves, can be accurately modeled as a superposition of amplitude- and frequency-modulated waves (AM-FM modes), called multicomponent signals (MCSs). Time-frequency (TF) analysis plays a central role in characterizing such signals, and in that framework numerous methods have been proposed over the last decade. However, these methods suffer from an intrinsic limitation known as the uncertainty principle. In this regard, the reassignment method (RM) was developed with the purpose of sharpening the TF representations (TFRs) given by the short-time Fourier transform (STFT) or the continuous wavelet transform (CWT). Unfortunately, it does not allow for mode reconstruction, unlike its recent variant known as the synchrosqueezing transform (SST). Nevertheless, many critical problems associated with the latter remain to be addressed, such as handling strong frequency modulation, retrieving the modes of an MCS from its downsampled STFT, or estimating the TF signatures of irregular and discontinuous signals. This dissertation mainly deals with such problems in order to provide more powerful and accurate invertible TF methods for analyzing MCSs. It makes six contributions. The first introduces a second-order extension of wavelet-based SST, along with a discussion of its theoretical analysis and practical implementation. The second puts forward a generalization of existing STFT-based synchrosqueezing techniques, known as the high-order STFT-based SST (FSSTn), that better handles a wide range of MCSs. The third proposes a new technique built on the second-order STFT-based SST (FSST2) and a demodulation procedure, called the demodulation-FSST2-based technique (DSST2), enabling better mode reconstruction.
The fourth contribution is a novel approach allowing for the retrieval of the modes of an MCS from its downsampled STFT. The fifth presents an improved method developed in the reassignment framework, called adaptive contour representation computation (ACRC), for efficient estimation of the TF signatures of a larger class of MCSs. The last contribution is a joint analysis combining ACRC with non-negative matrix factorization (NMF) to enable effective denoising of phonocardiogram (PCG) signals.
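At the heart of every synchrosqueezing variant mentioned above is a reassignment rule: at each TF point, re-estimate the instantaneous frequency from the ratio of a derivative-window STFT to the ordinary STFT, then move the coefficient's energy there. A minimal single-frame sketch of that first-order frequency estimate (the higher-order FSSTn/FSST2 operators refine this; the tone and window length are assumptions):

```python
import numpy as np

N = 256
n = np.arange(N)
f0 = 0.123                                   # true normalized frequency (cycles/sample)
x = np.cos(2 * np.pi * f0 * n)

g = 0.5 * (1 - np.cos(2 * np.pi * n / N))    # Hann window
dg = (np.pi / N) * np.sin(2 * np.pi * n / N) # its analytic derivative

V = np.fft.rfft(x * g)                       # ordinary STFT frame
Vd = np.fft.rfft(x * dg)                     # frame computed with the derivative window

k = int(np.argmax(np.abs(V)))                # dominant bin: coarse estimate k/N
# Reassigned instantaneous-frequency estimate used by SST:
f_hat = k / N - np.imag(Vd[k] / V[k]) / (2 * np.pi)
```

Even though the tone falls between FFT bins, the reassigned estimate recovers it far more precisely than the bin spacing 1/N, which is exactly the sharpening that synchrosqueezing exploits before squeezing energy along the frequency axis.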
APA, Harvard, Vancouver, ISO, and other styles
4

Lin, Chao. "P and T wave analysis in ECG signals using Bayesian methods." Phd thesis, Toulouse, INPT, 2012. http://oatao.univ-toulouse.fr/8990/1/lin.pdf.

Full text
Abstract:
This thesis studies Bayesian estimation/detection algorithms for P and T wave analysis in ECG signals. In this work, different statistical models and associated Bayesian methods are proposed to solve simultaneously the P and T wave delineation task (determination of the positions of the peaks and boundaries of the individual waves) and the waveform-estimation problem. These models take into account appropriate prior distributions for the unknown parameters (wave locations and amplitudes, and waveform coefficients). These prior distributions are combined with the likelihood of the observed data to provide the posterior distribution of the unknown parameters. Due to the complexity of the resulting posterior distributions, Markov chain Monte Carlo algorithms are proposed for (sample-based) detection/estimation. In addition, to take full advantage of the sequential nature of the ECG, a dynamic model is proposed under a similar Bayesian framework, and sequential Monte Carlo (SMC) methods are considered for delineation and waveform estimation. In the last part of the thesis, two of the Bayesian models introduced are adapted to address a specific clinical research problem referred to as T wave alternans (TWA) detection. One of the proposed approaches has served as an efficient analysis tool in the Endocardial T Wave Alternans Study (ETWAS) project, in collaboration with St. Jude Medical, Inc. and Toulouse Rangueil Hospital. This project was devoted to prospectively assessing the feasibility of TWA detection in repolarisation using electrograms (EGM) stored in implantable cardioverter-defibrillator (ICD) memories.
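The MCMC idea behind such delineation can be shown in miniature: a random-walk Metropolis sampler locating a single wave of known shape in noise (the thesis's models, priors and joint waveform estimation are far richer; the wave width, noise level and proposal scale here are assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(200)
wave = lambda loc: np.exp(-0.5 * ((n - loc) / 8.0) ** 2)  # known Gaussian-shaped wave
y = wave(120.0) + 0.2 * rng.standard_normal(n.size)       # observed beat, true peak at 120

def log_post(loc):
    # Flat prior on [0, 200) plus Gaussian likelihood -> log-posterior up to a constant
    if not 0.0 <= loc < 200.0:
        return -np.inf
    return -0.5 * np.sum((y - wave(loc)) ** 2) / 0.2 ** 2

loc, samples = 100.0, []
for _ in range(4000):
    prop = loc + 3.0 * rng.standard_normal()              # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(loc):
        loc = prop                                        # Metropolis accept
    samples.append(loc)
post_mean = float(np.mean(samples[1000:]))                # discard burn-in
```

The retained samples approximate the posterior of the wave location, so the same chain that delineates the wave also quantifies the uncertainty of the delineation.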
APA, Harvard, Vancouver, ISO, and other styles
5

Nagappa, Sharad. "Time-varying frequency analysis of bat echolocation signals using Monte Carlo methods." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4622.

Full text
Abstract:
Echolocation in bats is a subject that has received much attention over the last few decades. Bat echolocation calls have evolved over millions of years and can be regarded as well suited to the task of active target detection. In analysing the time-frequency structure of bat calls, it is hoped that some insight can be gained into their capabilities and limitations. Most analysis of calls is performed using non-parametric techniques such as the short-time Fourier transform. The resulting time-frequency distributions are often ambiguous, leading to further uncertainty in any subsequent analysis which depends on them. There is thus a need for a method which allows improved time-frequency characterisation of bat echolocation calls. The aim of this work is to develop a parametric approach for signal analysis, specifically taking into account the varied nature of bat echolocation calls in the signal model. A time-varying harmonic signal model with a polynomial chirp basis is used to track the instantaneous frequency components of the signal. The model is placed within a Bayesian context, and a particle filter is used for inference. Marginalisation of parameters is considered, leading to the development of a new marginalised particle filter (MPF) used to implement the algorithm. Efficient reversible-jump moves are formulated for estimation of the unknown (and varying) number of frequency components and higher harmonics. The algorithm is applied to the analysis of synthetic signals, and its performance is compared with an existing algorithm in the literature which relies on the Rao-Blackwellised particle filter (RBPF) for online state estimation and a jump Markov system for estimation of the unknown number of harmonic components. A comparison of the relative complexity of the RBPF and the MPF is presented.
Additionally, it is shown that the MPF-based algorithm performs no worse than the RBPF, and in some cases better, on the test signals considered. Comparisons are also presented for various reversible-jump sampling schemes for estimation of the time-varying number of tones and harmonics. The algorithm is subsequently applied to the analysis of bat echolocation calls to establish the improvements obtained from the new algorithm. The calls considered are both amplitude- and frequency-modulated and are of varying durations. The calls are analysed using polynomial basis functions of different orders, and the performance of these basis functions is compared. Inharmonicity, the deviation of overtones from integer multiples of the fundamental frequency, is examined in echolocation calls from several bat species. The results conclude with an application of the algorithm to the analysis of calls from the feeding buzz, a sequence of extremely short calls emitted at high pulse repetition frequency, where it is shown that reasonable time-frequency characterisation can be achieved for these calls.
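The filtering recursion at the core of such trackers can be illustrated with a bare-bones bootstrap particle filter following the time-varying frequency of a single noisy tone (the thesis adds polynomial chirp bases, harmonics, marginalisation and reversible-jump moves on top of this; all noise levels and particle counts below are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic chirp: slowly increasing normalized frequency
T = 400
f_true = np.linspace(0.05, 0.10, T)
y = np.cos(2 * np.pi * np.cumsum(f_true)) + 0.3 * rng.standard_normal(T)

P = 2000
f = rng.uniform(0.01, 0.2, P)             # particle frequencies
ph = rng.uniform(0, 2 * np.pi, P)         # particle phases
est = np.empty(T)
for t in range(T):
    f = f + 0.0005 * rng.standard_normal(P)   # random-walk frequency dynamics
    ph = ph + 2 * np.pi * f                   # phase propagation
    w = np.exp(-0.5 * ((y[t] - np.cos(ph)) / 0.3) ** 2)   # Gaussian likelihood
    w /= w.sum()
    est[t] = np.dot(w, f)                     # posterior-mean frequency estimate
    idx = rng.choice(P, P, p=w)               # multinomial resampling
    f, ph = f[idx], ph[idx]
```

Particles whose frequency drifts away lose phase coherence with the data and are resampled away, so the cloud locks onto the chirp and follows it.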
APA, Harvard, Vancouver, ISO, and other styles
6

Ramnarain, Pallavi. "A Comparative Analysis of Methods for Baseline Drift Removal in Preterm Infant Respiration Signals." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/138.

Full text
Abstract:
Breathing is a vital function intrinsic to the survival of any human being. In preterm infants it is an important indicator of maturation and feeding competency, which is a hallmark for hospital release. The recommended method of measurement of infant respiration is the use of thermistors. Accurate event detection within thermistor generated signals relies heavily upon effective noise reduction, specifically baseline drift removal. Baseline drift originates from several sensor-based factors, including thermistor placement within the sensor and in relation to the infant nares. This work compares four methods for baseline drift removal using the same event detection algorithm. The methods compared were a linear spline subtraction, a cubic spline subtraction, a neural network baseline approximation, and a double differentiation of the thermistor signal. The method yielding the highest event detection rate was shown to be the double differentiation method, which serves to attenuate the baseline drift to zero without approximating and subtracting it.
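The winning method is easy to see at work: differencing twice multiplies each frequency component by roughly (2*pi*f/fs)^2, so slow baseline wander is crushed relative to the respiration band without ever being estimated. A tiny sketch with an assumed 25 Hz sampling rate and synthetic drift:

```python
import numpy as np

fs = 25.0                                   # assumed thermistor sampling rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)
breaths = np.sin(2 * np.pi * 0.6 * t)       # ~36 breaths/min respiration component
drift = 0.05 * t + 0.5 * np.sin(2 * np.pi * 0.01 * t)   # slow baseline wander
x = breaths + drift

d2 = np.diff(x, n=2)                        # double differentiation of the signal
drift_d2 = np.diff(drift, n=2)              # what remains of the drift afterwards
```

After double differentiation the drift's contribution is orders of magnitude below the respiration component, which is why event detection on d2 needs no explicit baseline model.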
APA, Harvard, Vancouver, ISO, and other styles
7

Fuchs, Karen [Verfasser], and Gerhard [Akademischer Betreuer] Tutz. "Functional data analysis methods for the evaluation of sensor signals / Karen Fuchs ; Betreuer: Gerhard Tutz." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2017. http://d-nb.info/1156533767/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vaerenbergh, Steven Van. "Kernel Methods for Nonlinear Identification, Equalization and Separation of Signals." Doctoral thesis, Universidad de Cantabria, 2010. http://hdl.handle.net/10803/10673.

Full text
Abstract:
In the last decade, kernel methods have become established techniques for nonlinear signal processing. Thanks to their foundation in the solid mathematical framework of reproducing kernel Hilbert spaces (RKHS), kernel methods yield convex optimization problems. In addition, they are universal nonlinear approximators and require only moderate computational complexity. These properties make them an attractive alternative to traditional nonlinear techniques such as Volterra series, polynomial filters and neural networks. This work studies the application of kernel methods to nonlinear problems in signal processing and communications. Specifically, the problems treated in this thesis are the identification and equalization of nonlinear systems, both in supervised and blind scenarios, kernel adaptive filtering, and nonlinear blind source separation. In a first contribution, a framework for identification and equalization of nonlinear Wiener and Hammerstein systems is designed, based on kernel canonical correlation analysis (KCCA). As a result of this study, several related techniques are proposed, including two kernel recursive least squares (KRLS) algorithms with fixed memory size, and a KCCA-based blind equalization technique for Wiener systems that uses oversampling. The second part of this thesis treats two nonlinear blind decoding problems involving sparse data, posed under conditions that do not permit the application of traditional clustering techniques. For these problems, which include the blind decoding of fast time-varying MIMO channels, a set of algorithms based on spectral clustering is designed. The effectiveness of the proposed techniques is demonstrated through various simulations.
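The appeal of kernel regression is visible in a few lines: a nonlinear fit is obtained by solving one linear system in the RKHS. The sketch below is plain batch kernel ridge regression with a Gaussian kernel, a simpler cousin of the KCCA and KRLS algorithms in the thesis; the bandwidth, regularisation value and tanh nonlinearity are assumptions for the example.

```python
import numpy as np

def gauss_kernel(A, B, sigma=0.5):
    """Gaussian (RBF) kernel matrix between row-sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (200, 1))
y = np.tanh(2 * X[:, 0]) + 0.05 * rng.standard_normal(200)  # unknown static nonlinearity

lam = 1e-2
K = gauss_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(200), y)  # ridge regression in the RKHS

Xt = np.array([[0.5]])
y_hat = gauss_kernel(Xt, X) @ alpha                # prediction at a new input
```

The convexity mentioned in the abstract is visible here: the whole fit is one positive-definite linear solve, with no local minima.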
APA, Harvard, Vancouver, ISO, and other styles
9

Anand, K. "Methods for Blind Separation of Co-Channel BPSK Signals Arriving at an Antenna Array and Their Performance Analysis." Thesis, Indian Institute of Science, 1995. http://hdl.handle.net/2005/123.

Full text
Abstract:
Capacity improvement of wireless communication systems is a very important area of current research. The goal is to increase the number of users supported by the system per unit bandwidth allotted. One important way of achieving this improvement is to use multiple antennas backed by intelligent signal processing. In this thesis, we present methods for blind separation of co-channel BPSK signals arriving at an antenna array. These methods consist of two parts, Constellation Estimation and Assignment. We give two methods for constellation estimation, the Smallest Distance Clustering and the Maximum Likelihood Estimation. While the latter is theoretically sound, the former is computationally simple and intuitively appealing. We show that the Maximum Likelihood Constellation Estimation is well approximated by the Smallest Distance Clustering algorithm at high SNR. The Assignment algorithm exploits the structure of the BPSK signals. We observe that both methods for estimating the constellation vectors perform very well at high SNR and nearly attain the Cramér-Rao bounds. Using this fact, and noting that the Assignment algorithm causes negligible error at high SNR, we derive an upper bound on the probability of bit error for the above methods at high SNR. This upper bound falls very rapidly with increasing SNR, showing that our constellation estimation-assignment approach is very efficient. Simulation results are given to demonstrate the usefulness of the bounds.
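A minimal sketch of the constellation-estimation step (the array responses and distance threshold below are hypothetical): each received vector either starts a new cluster or refines the nearest centre with a running mean, and at high SNR the four centres +/-h1 +/-h2 emerge, after which the assignment step can map centres back to bit pairs.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two co-channel BPSK sources seen by a 2-element array
h = np.array([[1.0, 0.3], [0.4, -0.9]])        # columns: assumed array responses
bits = rng.choice([-1.0, 1.0], size=(2, 400))  # source symbols
y = h @ bits + 0.05 * rng.standard_normal((2, 400))

# Smallest-distance clustering: a point farther than `thresh` from every
# existing centre starts a new cluster, otherwise it refines the nearest one.
thresh = 0.5
centers, counts = [], []
for p in y.T:
    if centers:
        d = [np.linalg.norm(p - c) for c in centers]
        i = int(np.argmin(d))
    if not centers or min(d) > thresh:
        centers.append(p.copy())
        counts.append(1)
    else:
        counts[i] += 1
        centers[i] += (p - centers[i]) / counts[i]   # running-mean update
```

At this SNR the four recovered centres sit essentially on the true constellation points, consistent with the abstract's observation that the simple clustering approximates maximum likelihood estimation at high SNR.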
APA, Harvard, Vancouver, ISO, and other styles
10

Baccherini, Simona. "Pattern recognition methods for EMG prosthetic control." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12033/.

Full text
Abstract:
In this work we focus on pattern recognition methods related to EMG upper-limb prosthetic control. After giving a detailed review of the most widely used classification methods, we propose a new classification approach. It comes as a result of a comparison, in the Fourier domain, between able-bodied and trans-radial amputee subjects. We thus suggest a classification method that considers each surface electrode's contribution separately, together with five time-domain features, obtaining an average classification accuracy equal to 75% on a sample of trans-radial amputees. We also propose an automatic feature selection procedure, cast as a minimization problem, in order to improve the method and its robustness.
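For concreteness, here is a sketch of five time-domain features commonly used in this literature (the Hudgins-style set is an assumption; the thesis does not name its exact five), computed per analysis window before classification.

```python
import numpy as np

def td_features(x, eps=1e-4):
    """Five classic EMG time-domain features: MAV, WL, ZC, SSC, RMS."""
    x = np.asarray(x, float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                       # mean absolute value
    wl = np.sum(np.abs(dx))                        # waveform length
    zc = int(np.sum((x[:-1] * x[1:] < 0)
                    & (np.abs(dx) > eps)))         # zero crossings above threshold
    ssc = int(np.sum((dx[:-1] * dx[1:] < 0)
                     & ((np.abs(dx[:-1]) > eps)
                        | (np.abs(dx[1:]) > eps))))  # slope sign changes
    rms = np.sqrt(np.mean(x ** 2))                 # root mean square
    return mav, wl, zc, ssc, rms

# Sanity check on a 5 Hz tone sampled at 1 kHz for 1 s:
t = np.arange(0.0, 1.0, 0.001)
mav, wl, zc, ssc, rms = td_features(np.sin(2 * np.pi * 5 * t))
```

Stacking these five values per electrode, rather than pooling electrodes, gives exactly the kind of per-channel feature vector the abstract describes.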
APA, Harvard, Vancouver, ISO, and other styles
11

MOREIRA, GREGORI de A. "Métodos para obtenção da altura da camada limite planetária a partir de dados de Lidar." Repositório Institucional do IPEN, 2013. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10564.

Full text
Abstract:
Dissertação (Mestrado)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
12

Hajian-Tilaki, Karimollah. "Methodologic contributions to ROC analysis : a study of the robustness of the binormal model for quantitative data and methods for studies involving multiple signals." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=29034.

Full text
Abstract:
The purpose of this dissertation is twofold: (i) to examine the robustness of the binormal model as a "semi-parametric" approach to ROC analysis for quantitative diagnostic tests; (ii) to develop nonparametric methods for ROC analysis of data concerning multiple "signals".
Metz et al (1990) adapted the binormal model, used previously for rating data only, for ROC analysis of quantitative diagnostic tests. Their investigation of its performance was limited to data generated from the binormal model itself. Part (i) of this thesis describes a broader numerical investigation to assess how it performs in various configurations of non-binormal pairs of distributions, where one or both pair members were mixtures of Gaussian (MG) distributions. We also investigated the effects of sample size and the number of data categories used. Three criteria were used to assess the impact of departures from binormality: bias in estimates of the area under the curve (AUC), bias in estimated true positive fractions (TPFs) at specific false positive fraction (FPF) points, and discrepancies between the estimated and true TPF over the wider portion of the ROC curve. The bias in the estimates of AUC was small for all configurations studied, no matter what amount of discretization and what sample sizes were used. By the other criteria, the binormal model was robust to departures involving {G, MG-skewed or bimodal} pairs. The fits were less appropriate at FPF = 0.05 and 0.10 when both pair members were skewed to the right, but even then the bias in estimates of TPF was less than 0.06. The "semi-parametric" and nonparametric approaches yielded very similar estimates of AUC and of the corresponding sampling variability.
Part (ii) develops nonparametric ROC analysis for the situation when pathology and test interpretation data for each patient are K-dimensional. The approach computes K "pseudo-accuracies" for each patient; from these, K U-statistics are derived. One can form a summary index from these K components, as well as the standard error (SE) of this index based on the observed correlations among the pseudo-accuracies. The applicability of a simplified formula for the SE was assessed. The method was also extended to comparisons of two diagnostic systems. The procedures are illustrated using data sets from two clinical studies. The approach can handle the complex structure of multi-signal ROC data; it takes the various inter-correlations into account, and makes efficient use of the data.
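The pseudo-accuracy machinery builds on the classic identity that the nonparametric AUC is itself a two-sample U-statistic: the proportion of (diseased, healthy) pairs ranked correctly, with ties counted half. A minimal single-signal sketch:

```python
import numpy as np

def auc_u(neg, pos):
    """Nonparametric AUC as a Mann-Whitney U-statistic: the probability
    that a diseased case scores higher than a healthy one, ties counted half."""
    neg = np.asarray(neg, float)
    pos = np.asarray(pos, float)
    gt = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    eq = (pos[:, None] == neg[None, :]).sum()  # tied pairs
    return (gt + 0.5 * eq) / (len(pos) * len(neg))
```

The multi-signal extension in the thesis averages K such pseudo-accuracies per patient and derives the standard error from their observed inter-correlations; the single-signal kernel above is the building block.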
APA, Harvard, Vancouver, ISO, and other styles
13

Burchardt, Lara Sophie [Verfasser]. "Rhythm in Animals’ Acoustic Signals: Novel Methods for the Analysis of Rhythm Production and Perception on the Example of Bats, Birds, and Whales / Lara Sophie Burchardt." Berlin : Freie Universität Berlin, 2021. http://d-nb.info/1241117810/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Wang, Yuan. "Heart rate variability and respiration signals as late onset sepsis diagnostic tools in neonatal intensive care units." Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S106/document.

Full text
Abstract:
Late-onset sepsis, defined as a systemic infection in neonates older than 3 days, occurs in approximately 10% of all neonates and in more than 25% of very low birth weight infants who are hospitalized in Neonatal Intensive Care Units (NICU). Recurrent and severe spontaneous apneas and bradycardias (AB) are among the major clinical early indicators of systemic infection in the premature infant. Various hematological and biochemical markers have been evaluated for this indication, but they require invasive procedures that cannot be repeated several times. The objective of this Ph.D. dissertation was to determine whether heart rate variability (HRV), respiration and the analysis of their relationships help in the diagnosis of infection in premature infants via non-invasive means in the NICU. We therefore carried out Mono-Channel (MC) and Bi-Channel (BC) analysis in two selected groups of premature infants: sepsis (S) vs. non-sepsis (NS). (1) Firstly, we studied the RR series not only by distribution methods (moy, varn, skew, kurt, med, SpAs) and by linear methods in the time domain (SD, RMSSD) and frequency domain (p_VLF, p_LF, p_HF), but also by non-linear methods from chaos theory (alphaS, alphaF) and information theory (AppEn, SamEn, PermEn, Regul). For each method, we considered three window sizes (1024, 2048, 4096) and then compared these methods in order to find the optimal ways to distinguish S from NS. The results show that alphaS, alphaF and SamEn are the optimal parameters for recognising sepsis in premature infants with unusual and recurrent AB. (2) The question of the functional coupling between HRV and nasal respiration was then addressed. Linear and non-linear relationships were explored. The linear indexes were the correlation (r²), the coherence function (Cohere) and a time-frequency index (r2t,f), while a non-linear regression coefficient (h²) was used to analyse non-linear relationships.
We computed both coupling directions when evaluating the non-linear regression index h². From the entire analysis process, it is clear that the three indexes (r2tf_rn_raw_0p2_0p4, h2_rn_raw and h2_nr_raw) are complementary means of diagnosing sepsis non-invasively in such delicate patients. (3) Furthermore, a feasibility study was carried out on the candidate parameters selected from the MC and BC analyses. We found that the proposed test, based on an optimal fusion of the six features, shows good performance, with the largest Area Under the Curve (AUC) and the lowest Probability of False Alarm (PFA). In conclusion, we believe that the measures selected from MC and BC signal analysis have good repeatability and accuracy for a diagnostic test of sepsis via non-invasive NICU monitoring, which can reliably confirm or refute the diagnosis of infection at an early stage.
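Of the retained HRV indexes (alphaS, alphaF, SamEn), sample entropy is the easiest to state compactly: the negative log of the conditional probability that sequences matching for m points also match for m+1. A short sketch (the m and r values are the usual defaults, an assumption here, and the test signals are synthetic rather than RR series):

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) with tolerance r given as a fraction of std."""
    x = np.asarray(x, float)
    tol = r * np.std(x)
    def count(mm):
        # Same number of templates for lengths m and m+1, so the counts compare
        templ = np.array([x[i:i + mm] for i in range(len(x) - m)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)  # Chebyshev distance
            c += int(np.sum(d <= tol))
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B)

# Regular dynamics score low; the same values shuffled score high.
x = np.sin(np.linspace(0, 20 * np.pi, 500))
xs = np.random.default_rng(0).permutation(x)
se_reg, se_rand = sampen(x), sampen(xs)
```

The loss of this regularity in the RR series, i.e. a shift in SamEn, is precisely the kind of change the sepsis classifier exploits.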
APA, Harvard, Vancouver, ISO, and other styles
15

Ravirala, Narayana. "Device signal detection methods and time frequency analysis." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.umr.edu/thesis/pdf/Ravirala_09007dcc803fea67.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed March 18, 2008) Includes bibliographical references (p. 89-90).
APA, Harvard, Vancouver, ISO, and other styles
16

Rowley, Alexander. "Signal processing methods for cerebral autoregulation." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:3d85ab53-9c9b-4b50-98f2-2e67848e5da4.

Full text
Abstract:
Cerebral autoregulation describes the clinically observed phenomenon that cerebral blood flow remains relatively constant in healthy human subjects despite large systemic changes in blood pressure, dissolved blood gas concentrations, heart rate and other systemic variables. Cerebral autoregulation is known to be impaired after ischaemic stroke, after severe head injury, in patients suffering from autonomic dysfunction and under the action of various drugs. It is a dynamic, multivariate phenomenon, and sensitive techniques are required to monitor it in a clinical setting. This thesis presents four related signal processing studies of cerebral autoregulation. The first study shows how considering changes in blood gas concentrations simultaneously with changes in blood pressure can improve the accuracy of an existing frequency-domain technique for monitoring cerebral autoregulation from spontaneous fluctuations in blood pressure and a transcranial Doppler measure of cerebral blood flow velocity. The second study shows how the continuous wavelet transform can be used to investigate coupling between blood pressure and near-infrared spectroscopy measures of cerebral haemodynamics in patients with autonomic failure. This introduces time information into the frequency-based assessment, but neglects the contribution of blood gas concentrations. The third study shows how this limitation can be resolved by introducing a new time-varying multivariate system identification algorithm based on the dual-tree undecimated wavelet transform. All frequency and time-frequency domain methods of monitoring cerebral autoregulation assume linear coupling between the variables under consideration. The fourth study therefore considers nonlinear techniques for monitoring cerebral autoregulation and illustrates some of the difficulties inherent in this form of analysis.
The general approach taken in this thesis is to formulate a simple system model, usually in the form of an ODE or a stochastic process. The form of the model is adapted to encapsulate a hypothesis about features of cerebral autoregulation, particularly those features that may be difficult to recover using existing methods of analysis. The performance of the proposed method of analysis is then evaluated under these conditions. After this testing, the techniques are applied to data provided by the Laboratory of Human Cerebrovascular Physiology in Alberta, Canada, and the National Hospital for Neurology and Neurosurgery in London, UK.
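The frequency-domain assessment described in the first study can be illustrated with Welch-type spectral estimates: magnitude-squared coherence and transfer-function gain between a pressure-like and a flow-like signal. The surrogate signals below are synthetic stand-ins (not patient data), and the sampling rate and band edges are assumptions for the demo:

```python
import numpy as np
from scipy.signal import coherence, csd, welch

# Synthetic surrogates: "flow" is a delayed, scaled copy of "pressure" plus
# noise, so the two should cohere strongly near the driven 0.1 Hz component.
fs = 10.0                                   # Hz (assumed resampling rate)
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(1)
pressure = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(t.size)
flow = 0.8 * np.roll(pressure, 5) + 0.3 * rng.standard_normal(t.size)

# Magnitude-squared coherence: near 1 where linear coupling is strong
f, Cxy = coherence(pressure, flow, fs=fs, nperseg=512)

# Transfer-function gain |H(f)| = |S_xy(f)| / S_xx(f) from Welch estimates
_, Sxy = csd(pressure, flow, fs=fs, nperseg=512)
_, Sxx = welch(pressure, fs=fs, nperseg=512)
gain = np.abs(Sxy) / Sxx

band = (f >= 0.07) & (f <= 0.15)            # band containing the driven 0.1 Hz
```

In autoregulation studies the gain and phase in such low-frequency bands, rather than the raw spectra, are the quantities usually interpreted.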
APA, Harvard, Vancouver, ISO, and other styles
17

Clifford, Gari D. "Signal processing methods for heart rate variability analysis." Thesis, University of Oxford, 2002. http://ora.ox.ac.uk/objects/uuid:5129701f-1d40-425a-99a3-59a05e8c1b23.

Full text
Abstract:
Heart rate variability (HRV), the changes in the beat-to-beat heart rate calculated from the electrocardiogram (ECG), is a key indicator of an individual's cardiovascular condition. Assessment of HRV has been shown to aid clinical diagnosis and intervention strategies. However, the variety of HRV estimation methods and contradictory reports in this field indicate that there is a need for a more rigorous investigation of these methods as aids to clinical evaluation. This thesis investigates the development of appropriate HRV signal processing techniques in the context of pilot studies in two fields of potential application, sleep and head-up tilting (HUT). A novel method for characterising normality in the ECG using both timing information and morphological characteristics is presented. A neural network, used to learn the beat-to-beat variations in ECG waveform morphology, is shown to provide a highly sensitive technique for identifying normal beats. Fast Fourier Transform (FFT) based frequency-domain HRV techniques, which require re-sampling of the inherently unevenly sampled heart beat time-series (RR tachogram) to produce an evenly sampled time series, are then explored using a new method for producing an artificial RR tachogram. Re-sampling is shown to produce a significant error in the estimation of an (entirely specified) artificial RR tachogram. The Lomb periodogram, a method which requires no re-sampling and is applicable to the unevenly sampled nature of the signal is investigated. Experiments demonstrate that the Lomb periodogram is superior to the FFT for evaluating HRV measured by the LF/HF-ratio, a ratio of the low to high frequency power in the RR tachogram within a specified band (0.04-0.4 Hz). The effect of adding artificial ectopic beats in the RR tachogram is then considered and it is shown that ectopic beats significantly alter the spectrum and therefore must be removed or replaced. 
Replacing ectopic beats by phantom beats is compared with ectopic-related RR interval removal for the FFT and Lomb methods at varying levels of ectopy. The Lomb periodogram is shown to provide a significantly better estimate of the LF/HF-ratio under these conditions and is a robust method for measuring the LF/HF-ratio in the presence of (a possibly unknown number of) ectopic beats or artefacts. The Lomb periodogram and FFT-based techniques are applied to a database of sleep apnoeic and normal subjects. A new method of assessing HRV during sleep is proposed to minimise the confounding effects on HRV of changes due to changing mental activity. Estimation of the LF/HF-ratio using the Lomb technique is shown to separate these two patient groups more effectively than FFT-based techniques. Results are also presented for the application of these methods to controlled (HUT) studies on subjects with syncope, an autonomic nervous system disorder, which indicate that the techniques developed in this thesis may provide a method for differentiating between sub-classes of syncope.
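The Lomb periodogram's key advantage, that the unevenly sampled RR tachogram needs no resampling, can be sketched directly with SciPy. The artificial tachogram and the band edges (0.04-0.15 Hz LF, 0.15-0.4 Hz HF) follow common HRV conventions rather than the thesis's exact experimental setup:

```python
import numpy as np
from scipy.signal import lombscargle

# Artificial RR tachogram: ~1 s mean interval with a 0.25 Hz (respiratory,
# HF-band) modulation; beat times are inherently unevenly spaced.
rng = np.random.default_rng(2)
t_beat, times, rr = 0.0, [], []
for _ in range(600):
    rr_k = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t_beat) \
               + 0.01 * rng.standard_normal()
    t_beat += rr_k
    times.append(t_beat)
    rr.append(rr_k)
times, rr = np.array(times), np.array(rr)

# lombscargle expects zero-mean data and *angular* frequencies
freqs = np.linspace(0.04, 0.4, 200)
pgram = lombscargle(times, rr - rr.mean(), 2 * np.pi * freqs)

lf = pgram[(freqs >= 0.04) & (freqs < 0.15)].sum()
hf = pgram[(freqs >= 0.15) & (freqs <= 0.4)].sum()
lf_hf = lf / hf
```

Because the beat times themselves are the sample points, no interpolation error is introduced, which is the source of the bias the thesis reports for FFT-based estimates.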
APA, Harvard, Vancouver, ISO, and other styles
18

Palladini, Alessandro <1981&gt. "Statistical methods for biomedical signal analysis and processing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1358/.

Full text
Abstract:
Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems and primate neural activity analysis and modelling. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio-frequency signal measured from an ultrasonic transducer is derived. This model is then employed to develop, in a statistical framework, a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue and to extract different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the 7a parietal region of primate monkeys during the execution of learned behavioural tasks.
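A standard closed-form Tikhonov-regularized deconvolution gives the flavour of a regularized procedure of the kind mentioned above. This is a generic frequency-domain sketch with illustrative parameters, not the stochastic-model-based method developed in the thesis:

```python
import numpy as np

def regularized_deconv(y, h, lam=1e-2):
    """Tikhonov-regularized deconvolution in the frequency domain:
    x_hat = conj(H) * Y / (|H|^2 + lam). The penalty lam damps the
    noise amplification where |H| is small."""
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

# Demo: blur a sparse reflectivity sequence with a Gaussian pulse, add noise,
# then recover the spike positions.
rng = np.random.default_rng(3)
x = np.zeros(256)
x[[40, 90, 170]] = [1.0, -0.7, 0.5]
h = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)   # short Gaussian pulse
y = np.convolve(x, h, mode="full")[:256] + 0.01 * rng.standard_normal(256)
x_hat = regularized_deconv(y, h, lam=1e-2)
```

The regularization parameter trades resolution against noise amplification; the thesis instead derives this trade-off from a stochastic signal model.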
APA, Harvard, Vancouver, ISO, and other styles
19

Christian, Sprinkle. "Analysis of Contrast Enhancement Methods for Infrared Images." DigitalCommons@CalPoly, 2011. https://digitalcommons.calpoly.edu/theses/659.

Full text
Abstract:
Contrast enhancement techniques are used widely to improve the visual quality of images. The difference in luminance reflected from two adjacent surfaces creates contrast between the surfaces in the image: the greater the contrast, the easier it is to recognize and differentiate objects. Object contrast is thus an important factor in the perceived visual quality of an image and in its usefulness for object recognition and image analysis applications. The focus of this thesis is on studying how well different contrast enhancement techniques developed for visible-spectrum photographic images work on infrared images, in order to determine which techniques might be best suited for incorporation into commercial infrared imaging applications. The database for this thesis consists of night-vision infrared images taken using a Photon 320 camera by FLIR Systems, Inc. Numerous contrast enhancement techniques found in the literature were reviewed in this project, out of which four representative techniques were selected and are presented in detail. Homomorphic filtering, fuzzy-logic enhancement and single-scale retinex techniques were implemented based on published papers. These were each compared to the classical technique of histogram equalization using the metrics of computational time, histogram standard deviation, sharpness and user observations. These metrics provide both quantitative and qualitative analyses of the implemented techniques that are relevant to the end-user applications of infrared imaging. Based on the metric calculations and results, homomorphic filtering and histogram equalization produced better results than the other techniques. All the implemented techniques are global contrast enhancement methods; for future work, local contrast enhancement techniques may be applied to the results obtained in this investigation as a post-processing step.
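Histogram equalization, the classical baseline in this comparison, is compact enough to sketch directly. This is a generic implementation (assuming a non-constant 8-bit grayscale image), not the thesis's code; the synthetic low-contrast image stands in for an infrared frame:

```python
import numpy as np

def histogram_equalization(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap gray levels through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # CDF at the lowest occupied level
    # Stretch the CDF so occupied levels span the full 0..255 range
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

# Demo: a synthetic low-contrast image confined to gray levels 100..140
rng = np.random.default_rng(4)
low_contrast = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
enhanced = histogram_equalization(low_contrast)
```

After equalization the occupied gray levels are spread across the full dynamic range, which is exactly the property the histogram-standard-deviation metric in the thesis measures.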
APA, Harvard, Vancouver, ISO, and other styles
20

Чижевський, Володимир Валерійович. "Оцінювання в режимі реального часу загрози коливного порушення стійкості енергооб’єднання." Thesis, НТУУ "КПІ", 2016. https://ela.kpi.ua/handle/123456789/17652.

Full text
Abstract:
This dissertation proposes an approach to the solution of the topical scientific and technical problem of real-time assessment of the threat of oscillatory instability of an interconnected power system (IPS) caused by the emergence of low-frequency oscillations (LFO). The approach is based on applying signal analysis methods to process measurements of the IPS operating parameters obtained from phasor measurement units. Based on a set of requirements, signal analysis methods capable of determining the parameters of the dominant LFO modes in real time were selected. For reliable real-time assessment of the threat of oscillatory instability, the use of an ensemble of pre-selected and tuned signal analysis methods is proposed for the first time, and a procedure is developed for generalizing the estimates of the dominant LFO mode parameters obtained with this ensemble. It is proved experimentally that digital filtering of the signals before their processing by the ensemble of signal analysis methods, as well as the use of instantaneous signal values, increases the reliability and accuracy of determining the parameters of the dominant LFO modes. The use of an integrated LFO damping system is proposed, and its expediency justified, as a means of preventing oscillatory instability of the IPS.
In this thesis, an approach is proposed to the solution of the topical scientific and technical problem of real-time assessment of the threat of oscillatory instability of an interconnected power system caused by the emergence of low-frequency oscillations (LFO). The approach is based on the use of a specially prepared ensemble of signal analysis methods for processing the results of synchronized measurements of the power system's state parameters. Against a set of requirements, methods suitable for real-time identification of LFO parameters were selected. The necessity of using power system stabilizers with settings adapted to the actual frequencies of the dominant LFO modes, in order to increase the reliability of LFO damping within an integrated LFO damping system, is established. The use of an ensemble of pre-selected signal analysis methods for reliable real-time assessment of the threat of oscillatory instability is proposed for the first time. A procedure is developed for generalizing the estimates of the dominant LFO mode parameters obtained from the proposed ensemble. An increase in the precision and reliability of estimating the dominant LFO mode parameters through the use of instantaneous signal values and digital filtering of the signals before their processing by the ensemble is proved experimentally.
This dissertation proposes an approach to the solution of the topical scientific and technical problem of real-time assessment of the threat of oscillatory instability of an interconnected power system (IPS) caused by low-frequency oscillations (LFO), based on the use of a specially prepared ensemble of signal analysis methods for processing synchronized measurements of the IPS operating parameters. Results are presented of experiments, performed on digital models of a test power system and of an IPS, investigating the conditions for the effective use of automatic excitation control systems of synchronous machines (SM) for damping LFO in an IPS. It is established that, to increase the reliability of LFO damping, power system stabilizers (PSS) must be used with settings oriented towards the actual frequencies of the dominant LFO modes. Requirements are defined for the real-time solution of the problem of assessing the threat of oscillatory instability, using measurements of the IPS operating parameters and signal analysis methods to compute the parameters of the dominant LFO modes. The use of an integrated LFO damping system is proposed to prevent oscillatory instability of the IPS, and its expediency is substantiated. Model-based computational studies confirm the higher effectiveness of the proposed system compared with traditional non-adaptive automatic excitation control systems of SM. For the first time, the use of an ensemble of pre-selected signal analysis methods is proposed for reliable real-time assessment of the threat of oscillatory instability of an IPS.
For the first time, a selection of signal analysis methods capable of reliable real-time identification and estimation of the parameters of the dominant LFO modes was performed against a set of requirements in order to create this ensemble. The selected methods are combined into two groups: a main group (the Hankel total least squares method, the matrix pencil method, and the "classical" and modified (using a least-squares-based singular value decomposition) Prony methods) and a reference group (methods based on the discrete Fourier transform and on a modified Hilbert-Huang transform, the latter using a single interpolation when computing the mean value of the envelope in each sifting step of the iterative determination of the intrinsic mode functions). It is proved experimentally that the model order of the signals analysed to determine the parameters of the dominant LFO modes can be determined adequately and promptly using the minimum description length (MDL) principle. It is established that, for an adequate assessment of the damping character of the LFO modes, the time evolution of each mode's amplitude must be tracked in addition to the damping index. It is determined experimentally that the required adequacy (accuracy) and promptness of estimating the dominant LFO mode parameters can be achieved using observation windows of 2 to 5 s, with a sampling rate no lower than the fundamental frequency of the industrial current. An algorithm is developed, and a procedure implemented in software, for generalizing the estimates of the dominant LFO mode parameters obtained with the ensemble of signal analysis methods.
It is established and proved experimentally that digital filtering of the measurements of the IPS operating parameters obtained from phasor measurement units, before their processing by the ensemble of signal analysis methods, increases the reliability of estimating the parameters of the dominant LFO modes. It is likewise established and proved experimentally that using instantaneous signal values, instead of RMS values of the same signal, increases the accuracy of estimating the parameters of the LFO components (above all, the frequencies of the dominant LFO modes). Software tools for identifying and estimating the parameters of the dominant LFO modes were developed for reliable real-time assessment of the threat of oscillatory instability of the Ukrainian IPS. These software tools have been put into operation by the small private enterprise "Aniger" (Kyiv, Ukraine) to extend the functionality of the upper object-level software suite of the "Regina-Ch" electrical measuring and recording devices installed at facilities of the Integrated Power System of Ukraine.
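Of the parametric methods in this entry's main group, classical least-squares Prony analysis is the simplest to sketch. The example below estimates the frequency and damping of a single synthetic inter-area-style mode; the mode parameters, sampling step and model order are illustrative, not taken from the dissertation:

```python
import numpy as np

def prony_modes(x, p, dt):
    """Classical least-squares Prony analysis: fit x[n] with p damped
    exponentials and return (frequency in Hz, damping in 1/s) per mode."""
    n = len(x)
    # 1) Linear prediction: solve x[n] = sum_k a_k x[n-k] for a (order p)
    A = np.column_stack([x[p - 1 - k: n - 1 - k] for k in range(p)])
    b = x[p:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # 2) Roots of the characteristic polynomial are the discrete modes z_i
    z = np.roots(np.concatenate(([1.0], -a)))
    s = np.log(z) / dt                      # continuous-time poles
    return np.abs(s.imag) / (2 * np.pi), s.real

# Demo: a 0.5 Hz inter-area-style oscillation decaying at sigma = -0.1 1/s
dt = 0.02
t = np.arange(0, 10, dt)
x = np.exp(-0.1 * t) * np.cos(2 * np.pi * 0.5 * t)
freq, damp = prony_modes(x, p=2, dt=dt)
```

Real PMU data would require the pre-filtering and model-order selection (e.g. by MDL) discussed in the abstract; on noiseless data the estimate is exact.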
APA, Harvard, Vancouver, ISO, and other styles
21

Maji, Suman Kumar. "Multiscale methods in signal processing for adaptive optics." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00909085.

Full text
Abstract:
In this thesis, we introduce a new approach to wavefront phase reconstruction in Adaptive Optics (AO) from the low-resolution gradient measurements provided by a wavefront sensor, using a non-linear approach derived from the Microcanonical Multiscale Formalism (MMF). MMF comes from established concepts in statistical physics and is naturally suited to the study of the multiscale properties of complex natural signals, mainly owing to the precise numerical estimation of geometrically localized critical exponents, called singularity exponents. These exponents quantify the degree of predictability locally, at each point of the signal domain, and provide information on the dynamics of the associated system. We show that multiresolution analysis carried out on the singularity exponents of a high-resolution turbulent phase (obtained from a model or from data) allows the low-resolution gradients obtained from the wavefront sensor to be propagated along the scales to a higher resolution. We compare our results with those obtained by linear approaches, which allows us to offer an innovative approach to wavefront phase reconstruction in Adaptive Optics.
APA, Harvard, Vancouver, ISO, and other styles
22

Roodaki, Alireza. "Signal decompositions using trans-dimensional Bayesian methods." Phd thesis, Supélec, 2012. http://tel.archives-ouvertes.fr/tel-00765464.

Full text
Abstract:
This thesis addresses the challenges encountered when dealing with signal decomposition problems with an unknown number of components in a Bayesian framework. In particular, we focus on the issue of summarizing the variable-dimensional posterior distributions that typically arise in such problems. These posterior distributions are defined over a union of subspaces of differing dimensionality and can be sampled using modern Monte Carlo techniques, for instance the increasingly popular Reversible-Jump MCMC (RJ-MCMC) sampler. No generic approach is available, however, to summarize the resulting variable-dimensional samples and extract component-specific parameters from them. One of the main challenges that needs to be addressed to this end is the label-switching issue, caused by the invariance of the posterior distribution to permutations of the components. We propose a novel approach to this problem, which consists in approximating the complex posterior of interest by a "simple", but still variable-dimensional, parametric distribution. We develop stochastic EM-type algorithms, driven by the RJ-MCMC sampler, to estimate the parameters of the model through the minimization of a divergence measure between the two distributions. Two signal decomposition problems are considered to show the capability of the proposed approach both for relabelling and for summarizing variable-dimensional posterior distributions: the classical problem of detecting and estimating sinusoids in white Gaussian noise on the one hand, and a particle counting problem motivated by the Pierre Auger project in astrophysics on the other.
APA, Harvard, Vancouver, ISO, and other styles
23

Liu, Jingfang. "Adaptive iterative filtering methods for nonlinear signal analysis and applications." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/52169.

Full text
Abstract:
Time-frequency analysis of non-linear and non-stationary signals is extraordinarily challenging. To capture the changes in such signals, the analysis methods need to be local, adaptive and stable. In recent years, decomposition-based analysis methods have been developed by different researchers to deal with non-linear and non-stationary signals. These methods share the feature that a signal is decomposed into a finite number of components to which time-frequency analysis can be applied; they differ in the strategy used to extract these components, by iteration or by optimization. However, judged against the requirements of being local, adaptive and stable, none of these decompositions is perfectly satisfactory. Motivated by the search for a local, adaptive and stable decomposition of a signal, this thesis presents the Adaptive Local Iterative Filtering (ALIF) algorithm. Adaptivity is obtained by having the filter lengths determined by the signal itself. Locality is ensured by a filter designed from a PDE model. The stability of the algorithm is shown and its convergence is proved. Moreover, we also propose a local definition of instantaneous frequency in order to achieve a completely local analysis of non-linear and non-stationary signals. Examples show that this decomposition helps in both simulated data analysis and real-world applications.
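The core idea of iterative filtering, repeatedly subtracting a moving average so that only the fast oscillation survives, can be sketched on a periodic toy signal. This fixed-filter-length, circular-boundary version is a deliberate simplification: the thesis's ALIF instead adapts the filter length locally from the signal itself:

```python
import numpy as np

def iterative_filter_periodic(x, window, n_iter=30):
    """Extract the fast oscillatory component of a periodic signal by
    repeatedly subtracting a circular triangular moving average.
    The triangular kernel has a nonnegative spectrum, so the iteration
    (I - M)^n is stable."""
    n = len(x)
    box = np.ones(window) / window
    tri = np.convolve(box, box)              # triangular kernel (sums to 1)
    kern = np.zeros(n)
    kern[: len(tri)] = tri
    kern = np.roll(kern, -(len(tri) // 2))   # center the kernel at index 0
    K = np.fft.rfft(kern)
    comp = x.astype(float).copy()
    for _ in range(n_iter):
        trend = np.fft.irfft(np.fft.rfft(comp) * K, n)
        comp = comp - trend
    return comp, x - comp                    # (fast component, remainder)

t = np.arange(1000) / 1000.0                 # 1 s at fs = 1000 Hz
fast = np.sin(2 * np.pi * 50 * t)            # component to extract
slow = np.sin(2 * np.pi * 2 * t)             # slow trend to reject
c, r = iterative_filter_periodic(fast + slow, window=20)
```

The window is chosen here so that the averaging filter nulls the 50 Hz tone exactly; choosing that length from the signal, pointwise, is precisely what ALIF's adaptivity contributes.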
APA, Harvard, Vancouver, ISO, and other styles
24

Coughlin, Kathleen T. "Stratospheric and tropospheric signals extracted using the empirical mode decomposition method /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/6781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Dahl, Jason F. "Time Aliasing Methods of Spectrum Estimation." Diss., CLICK HERE for online access, 2003. http://contentdm.lib.byu.edu/ETD/image/etd157.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Blaney, S. "Gamma radiation methods for clamp-on multiphase flow metering." Thesis, Cranfield University, 2008. http://dspace.lib.cranfield.ac.uk/handle/1826/5655.

Full text
Abstract:
The development of a cost-effective multiphase flow meter to determine the individual phase flow rates of oil, water and gas was investigated through the exploitation of a single clamp-on gamma densitometer and signal processing techniques. A fast-sampling (250 Hz) gamma densitometer was installed at the top of the 10.5 m high, 108.2 mm internal diameter, stainless steel catenary riser in the Cranfield University multiphase flow test facility. Gamma radiation attenuation data was collected for two photon energy ranges of the caesium-137 radioisotope based densitometer for a range of air, water and oil flow mixtures, spanning the facility’s delivery range. Signal analysis of the gamma densitometer data revealed the presence of quasi-periodic waveforms in the time-varying multiphase flow densities and discriminatory correlations between statistical features of the gamma count data and key multiphase flow parameters. The development of a mechanistic approach to infer the multiphase flow rates from the gamma attenuation information was investigated. A model for the determination of the individual phase flow rates was proposed based on the gamma attenuation levels; while quasi-periodic waveforms identified in the multiphase fluid density were observed to exhibit a strong correlation with the gas and liquid superficial phase velocity parameters at fixed water cuts. Analysis of the use of pattern recognition techniques to correlate the gamma densitometer data with the individual phase superficial velocities and the water cut was undertaken. Two neural network models were developed for comparison: a single multilayer-perceptron and a multilayer hierarchical flow regime dependent model. The pattern recognition systems were trained to map the temporal fluctuations in the multiphase mixture density with the individual phase flow rates using statistical features extracted from the gamma count signals as their inputs. 
Initial results yielded individual phase flow rate predictions to within ±10%, based on flow-regime-specific correlations.
APA, Harvard, Vancouver, ISO, and other styles
27

Vadali, Venkata Akshay Bhargav Krishna. "A Comparative Study of Signal Processing Methods for Fetal Phonocardiography Analysis." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7373.

Full text
Abstract:
More than one million fetal deaths occur in the United States every year [1]. Monitoring long-term heart rate variability provides a great amount of information about the fetal health condition, but requires continuous monitoring of the fetal heart rate. The existing technologies have complex instrumentation, need a trained professional at all times, or both, and have proven impractical for continuous monitoring [2]. Hence, there is increased interest in non-invasive, continuously monitoring and less expensive technologies such as fetal phonocardiography. The fetal phonocardiography (FPCG) signal is obtained by placing an acoustic transducer on the abdomen of the mother. FPCG is rich in physiological bio-signals and can continuously monitor the fetal heart rate non-invasively. Despite its high diagnostic potential, it is still relegated to a secondary point of care. Two sets of challenges explain this: those in the data acquisition system and those in the signal processing methodologies. The data acquisition challenges include, but are not limited to, sensor placement, maternal obesity and multiple heart rates, while the signal processing challenges include the dynamic nature of the FPCG signal, multiple known and unknown signal components, and the low SNR of the signal. Hence, to improve FPCG-based care, the challenges in FPCG signal processing methodologies have been addressed in this study. A comparative evaluation is presented of various advanced signal processing techniques for extracting the bio-signals with fidelity. Advanced signal processing approaches, namely empirical mode decomposition, spectral subtraction, wavelet decomposition and adaptive filtering, were used to extract the vital bio-signals.
However, extracting these bio-signals with fidelity is a challenging task in the context of FPCG, as all the bio-signals and the unwanted artifacts overlap in both time and frequency. Additionally, the signal is corrupted by noise induced by fetal and maternal movements, as well as by the background and the sensor. The empirical mode decomposition algorithm was able to denoise the signal and extract the maternal and fetal heart sounds in a single step. Spectral subtraction, in contrast, was used to denoise the signal, which was then subjected to wavelet decomposition to extract the signal of interest. Adaptive filtering, on the other hand, was used to estimate the fetal heart sound from a noisy FPCG, with the maternal heart sound as the reference input. The extracted signals were validated by examining the frequency ranges computed with the Short-Time Fourier Transform (STFT). The bandwidths of the extracted fetal and maternal heart sounds were found to be consistent with the existing gold standards. Furthermore, as an additional validation, the heart rates were calculated. Finally, the results obtained from all these methods were compared and contrasted qualitatively and quantitatively.
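Of the four approaches compared, adaptive filtering is the most self-contained to sketch. The normalized-LMS canceller below uses synthetic sinusoidal stand-ins for the maternal and fetal heart sounds (real FPCG is far richer), and the filter order and step size are illustrative choices:

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.05):
    """Normalized-LMS canceller: adapt an FIR filter so that the reference
    (maternal) channel predicts its own contribution to the primary
    (abdominal) mixture; the running error is the fetal-sound estimate."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        u = reference[n - order:n][::-1]      # most recent samples first
        e = primary[n] - w @ u                # residual after cancelling
        w += mu * e * u / (u @ u + 1e-8)      # NLMS weight update
        out[n] = e
    return out

fs = 1000
t = np.arange(0, 5, 1 / fs)
maternal = np.sin(2 * np.pi * 25 * t)         # stand-in maternal heart sound
fetal = 0.2 * np.sin(2 * np.pi * 70 * t)      # weaker stand-in fetal sound
primary = 0.9 * np.roll(maternal, 3) + fetal  # delayed, scaled abdominal mix
fetal_hat = lms_cancel(primary, maternal)
```

After the weights converge, the error output retains the fetal component while the maternal contribution is cancelled, mirroring the reference-input arrangement described in the abstract.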
APA, Harvard, Vancouver, ISO, and other styles
28

Bengtsson, Thomas. "Time series discrimination, signal comparison testing, and model selection in the state-space framework /." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Qin, Lei. "Online machine learning methods for visual tracking." Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0017/document.

Full text
Abstract:
We study the problem of tracking a target in a video sequence with no prior knowledge other than an annotated reference in the first image. To solve this problem, we propose a new real-time tracking method based both on an original representation of the object to be tracked (descriptor) and on an adaptive algorithm capable of following the target even in the most difficult conditions, such as when the target disappears and reappears in the scene (re-identification). First, for the representation of an image region tracked over time, we propose improvements to the covariance descriptor. This new descriptor is able to extract features specific to the target while adapting to variations in the target's appearance. Next, the algorithmic stage consists in cascading generative and discriminative models so as to jointly exploit their abilities to distinguish the target from the other objects present in the scene. The generative models are deployed in the first layers to eliminate the easiest candidates, whereas the discriminative models are deployed in the following layers to distinguish the target from the objects most similar to it. Partial least squares discriminant analysis (PLS-DA) is employed to build the discriminative models. Finally, a new online PLS-DA learning algorithm is proposed for the incremental update of the discriminative models.
We study the challenging problem of tracking an arbitrary object in video sequences with no prior knowledge other than a template annotated in the first frame. To tackle this problem, we build a robust tracking system consisting of the following components. First, for image region representation, we propose some improvements to the region covariance descriptor: characteristics of the specific object are taken into consideration before the covariance descriptor is constructed. Second, for building the object appearance model, we propose to combine the merits of both generative and discriminative models by organizing them in a detection cascade. Specifically, generative models are deployed in the early layers to eliminate most easy candidates, whereas discriminative models are deployed in the later layers to distinguish the object from a few similar "distracters". Partial Least Squares Discriminant Analysis (PLS-DA) is employed for building the discriminative object appearance models. Third, for updating the generative models, we propose a weakly supervised model updating method based on cluster analysis using the mean-shift gradient density estimation procedure. Fourth, a novel online PLS-DA learning algorithm is developed for incrementally updating the discriminative models. The final tracking system, which integrates all these building blocks, exhibits good robustness against most challenges in visual tracking. Comparative results on challenging video sequences show that the proposed tracking system performs favourably with respect to a number of state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Ke. "Measurement and analysis of grip strength using advanced methods." Troyes, 2009. http://www.theses.fr/2009TROY0038.

Full text
Abstract:
Grip strength is a valuable indicator that can be used to describe not only hand function but also the overall status of the upper limb, or even of the entire body. Several aspects of its measurement remain to be improved or explored. This thesis contributes to the development of new methods for measuring and analysing grip strength. After a thorough literature review, a new device suited to ecological (home) settings is presented. This tool, the Grip-Ball, consists of a pressure sensor and a wireless communication system inside a watertight, supple ball, which makes it possible to measure and transmit the pressure inside the ball as it is squeezed. A second study compares another innovative device, the Myogrip, suited to very weak grip strength, with the two most widely used devices (Jamar and Martin Vigorimeter). In addition, the effects of elbow position and handle size were tested for these three dynamometers. The development of a predictive model based solely on hand circumference is the subject of a third study, yielding a simple model that can easily be used routinely. The last three chapters are devoted to advanced signal processing methods for sustained contractions: the Hilbert-Huang transform, fractal analysis, and recurrence analysis. These methods have shown their ability to characterise the effects of fatigue, tremor, disease, or age during such contractions.
Grip strength is a valuable indicator that can be used to describe not only hand function, but also the overall functional status of the upper limb or even of the entire body. Several aspects of its measurement remain to be improved. The aim of this thesis is to contribute to the development of new methods for the measurement and analysis of grip strength. After an in-depth literature review of the most relevant aspects of grip-strength testing, an intelligent dynamometer for home-based testing, the Grip-Ball, is presented. This dynamometer consists of a pressure sensor and a wireless communication system, which are inserted inside a supple, air-tight ball in order to measure the pressure inside the ball when it is squeezed. In addition to the Grip-Ball, another innovative dynamometer, the Myogrip, which is well suited to the measurement of very weak grip strength, was compared to two of the most widely used dynamometers (Jamar and Martin Vigorimeter). Furthermore, an investigation was performed to evaluate the effects of elbow position and of handle size when using these three dynamometers. The development of a simple predictive model for maximal grip strength based solely on hand circumference is presented in a third study, with this simple model suitable for routine use. The last three chapters are devoted to the presentation of advanced methods of signal processing applied to sustained grip-strength contractions: the Hilbert-Huang transform, fractal analysis, and recurrence analysis. These methods are able to characterise the effects of fatigue, tremor, disease, or age during these sustained contractions.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhao, Tao. "A new method for detection and classification of out-of-control signals in autocorrelated multivariate processes." Morgantown, W. Va. : [West Virginia University Libraries], 2008. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5615.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2008.
Title from document title page. Document formatted into pages; contains x, 111 p. : ill. Includes abstract. Includes bibliographical references (p. 102-106).
APA, Harvard, Vancouver, ISO, and other styles
32

Zywicki, Daren Joseph. "Advanced signal processing methods applied to engineering analysis of seismic surface waves." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/20232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Peng, Tingying. "Signal processing methods for the analysis of cerebral blood flow and metabolism." Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:ca84ac5b-7df0-488d-b390-d6f4f6e3ee52.

Full text
Abstract:
An important protective feature of the cerebral circulation is its ability to maintain sufficient cerebral blood flow (CBF) and oxygen supply in accordance with the energy demands of the brain despite variations in a number of external factors such as arterial blood pressure, heart rate and respiration rate. If cerebral autoregulation is impaired, abnormally low or high CBF can lead to cerebral ischemia, intracranial hypertension or even capillary damage, thus contributing to the onset of cerebrovascular events. The control and regulation of cerebral blood flow is a dynamic, multivariate phenomenon. Sensitive techniques are required to monitor and process experimental data concerning cerebral blood flow and metabolic rate in a clinical setting. This thesis presents a model simulation study and four related signal processing studies concerned with CBF regulation. The first study models the regulation of the cerebral vasculature in response to systemic changes in blood pressure, dissolved blood gas concentration and neural activation within an integrated haemodynamic system. The model simulations show that the three pathways generally thought to be independent (pressure, CO₂ and activation) greatly influence each other; it is therefore vital to consider parallel changes in unmeasured variables when performing a single-pathway study. The second study shows how simultaneously measured blood gas concentration fluctuations can improve the accuracy of an existing frequency-domain technique for recovering cerebral autoregulation dynamics from spontaneous fluctuations in blood pressure and cerebral blood flow velocity. The third study shows how the continuous wavelet transform can recover both time and frequency information about dynamic autoregulation, including the contribution of blood gas concentration. The fourth study shows how the discrete wavelet transform can be used to investigate frequency-dependent coupling between cerebral and systemic cardiovascular dynamics.
The final study then uses these techniques to investigate the systemic effects on resting BOLD variability. The general approach taken in this thesis combines modelling with data analysis. Physiologically-based models encapsulate hypotheses about features of CBF regulation, particularly those features that may be difficult to recover using existing analysis methods, and thus provide the motivation for developing both new analysis methods and criteria to evaluate these methods. On the other hand, the statistical features extracted directly from experimental data can be used to validate and improve the model.
APA, Harvard, Vancouver, ISO, and other styles
34

Sandgren, Niclas. "Parametric methods for frequency-selective MR spectroscopy /." Uppsala : Univ. : Dept. of Information Technology, Univ, 2004. http://www.it.uu.se/research/reports/lic/2004-001/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ruusunen, M. (Mika). "Signal correlations in biomass combustion – an information theoretic analysis." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526201924.

Full text
Abstract:
Abstract Increasing environmental and economic awareness are driving the development of combustion technologies to efficient biomass use and clean burning. To accomplish these goals, quantitative information about combustion variables is needed. However, for small-scale combustion units the existing monitoring methods are often expensive or complex. This study aimed to quantify correlations between flue gas temperatures and combustion variables, namely typical emission components, heat output, and efficiency. For this, data acquired from four small-scale combustion units and a large circulating fluidised bed boiler was studied. The fuel range varied from wood logs, wood chips, and wood pellets to biomass residue. Original signals and a defined set of their mathematical transformations were applied to data analysis. In order to evaluate the strength of the correlations, a multivariate distance measure based on information theory was derived. The analysis further assessed time-varying signal correlations and relative time delays. Ranking of the analysis results was based on the distance measure. The uniformity of the correlations in the different data sets was studied by comparing the 10-quantiles of the measured signal. The method was validated with two benchmark data sets. The flue gas temperatures and the combustion variables measured carried similar information. The strongest correlations were mainly linear with the transformed signal combinations and explicable by the combustion theory. Remarkably, the results showed uniformity of the correlations across the data sets with several signal transformations. This was also indicated by simulations using a linear model with constant structure to monitor carbon dioxide in flue gas. Acceptable performance was observed according to three validation criteria used to quantify modelling error in each data set. 
In general, the findings demonstrate that the presented signal transformations enable real-time approximation of the studied combustion variables. The potential of flue gas temperatures to monitor the quality and efficiency of combustion allows development toward cost-effective control systems. Moreover, the uniformity of the presented signal correlations could enable such systems to be replicated straightforwardly. This would have a cumulative impact on the reduction of emissions and fuel consumption in small-scale biomass combustion.
Tiivistelmä: Growing environmental and cost awareness is steering the development of combustion technologies toward ever more efficient use of biomass and cleaner burning. Measurement data on combustion variables are needed to achieve these goals. However, existing solutions for monitoring combustion are often expensive or complex for small-scale combustion units. This work studied the dependency between measured flue gas temperatures and typical gas components, heat output and efficiency. For this purpose, measurement data sets from four different types of small-scale combustion units and from a large circulating fluidised bed boiler were analysed. The wood fuels were logs, wood chips, pellets and logging residue. The analysis was performed on the original measurement signals and on signals mathematically transformed from them. To determine the dependencies, a multivariate distance measure based on information theory was derived, whose value quantifies the similarity of signals. The presented analysis method also included an assessment of the temporal change of the dependencies and of relative time delays. The ranking of the results was based on the value of the distance measure. The similarity of the dependencies between the measurement data sets was compared using 10-quantiles. The validity of the analysis method was confirmed with two well-known benchmark data sets. The measurement signals of the flue gas temperatures and the combustion variables carried similar information content. The strongest dependencies, obtained with combinations of transformed signals, were mostly linear and consistent with combustion theory. Notably, certain pairs of signal transformations and combustion variables showed the same dependency in every measurement data set. This was also confirmed in simulations estimating the carbon dioxide content of the flue gas with a linear model of fixed structure. The accuracy of the model was sufficient according to three different criteria in every data set. Based on the results, the signal transformations can be used to estimate combustion variables in real time. 
The potential of flue gas temperatures for monitoring the quality and efficiency of combustion enables the development of cost-effective control solutions. Exploiting the generalisable dependencies that were found would ease their deployment in numerous combustion units. The reduction of emissions and fuel consumption in small-scale combustion would then be cumulative.
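The thesis derives its own multivariate, information-theoretic distance measure, which the abstract does not spell out. As a hedged illustration of the underlying idea, the sketch below estimates mutual information between two discretised signals from joint histogram counts; the function name, the equal-width binning and the bin count are illustrative assumptions, not the method used in the thesis.

```python
import math
from collections import Counter

def mutual_information(x, y, bins=8):
    """Estimate mutual information (in bits) between two equal-length
    sequences by discretising each into `bins` equal-width bins."""
    def discretise(s):
        lo, hi = min(s), max(s)
        width = (hi - lo) / bins or 1.0  # guard against a constant signal
        return [min(int((v - lo) / width), bins - 1) for v in s]
    xd, yd = discretise(x), discretise(y)
    n = len(xd)
    pxy = Counter(zip(xd, yd))        # joint bin counts
    px, py = Counter(xd), Counter(yd)  # marginal bin counts
    mi = 0.0
    for (i, j), c in pxy.items():
        p = c / n
        # p * log2( p(x,y) / (p(x) * p(y)) ), rewritten with raw counts
        mi += p * math.log2(p * n * n / (px[i] * py[j]))
    return mi
```

For the flue-gas application, `x` and `y` would be a temperature signal (or a transformation of it) and a combustion variable such as CO₂ concentration; a larger value indicates a stronger statistical dependency.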
APA, Harvard, Vancouver, ISO, and other styles
36

Larsson, Felix, and Robin Linna. "An Analysis of Passenger Demand Forecast Evaluation Methods." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139942.

Full text
Abstract:
In the field of aviation, forecasting is used, among other things, to determine the number of passengers to expect for each flight. This is beneficial in the practice of revenue management, as the forecast is used as a base when setting the price for each flight. In this study, a forecast evaluation has been done on seven different routes with a total of 61 different flights, using four different methods. These are: Mean Absolute Scaled Error (MASE), Mean Absolute Percentage Error (MAPE), Tracking Signal, and a goodness-of-fit test to determine if the forecast errors are normally distributed. The MASE has been used to determine if the passenger forecasts are better or worse than a naïve forecast, while the MAPE provides an error value for internal comparisons between the flights. The Tracking Signal and the normal distribution test have been used in order to determine whether a flight has bias or not towards under- or overforecasting. The results point towards a general underforecast across all studied flights. A total of 89 % of the forecasts perform better than the naïve forecast, with an average MASE value of 0.78. As such, the forecast accuracy is better than that of the naïve forecast. There are however large error values among the observed flights, affecting the MAPE average. The MAPE average is 38.53 % while the median is 30.60 %. The measure can be used for internal comparisons, and one such way is to use the average value as a benchmark in order to focus on improving those forecasts with a higher than average MAPE. The authors have found that the MASE and MAPE are useful in measuring forecast accuracy and as such the recommendation of the authors is that these two error measures can be used together to evaluate forecast accuracy at frequent intervals. In addition to this there is value in examining the error distribution in conjunction with the Mean Error when searching for bias, as this will indicate if there is systematic error present.
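The two headline error measures are straightforward to compute. The sketch below is a minimal implementation following the usual textbook definitions of MAPE and of MASE (forecast MAE scaled by the in-sample MAE of the one-step naïve forecast); it is an illustration, not the study's own code, and the variable names are assumptions.

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (actuals must be nonzero)."""
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))

def mase(actual, forecast):
    """Mean Absolute Scaled Error: forecast MAE divided by the MAE of the
    one-step naive forecast (each value predicted by its predecessor)."""
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    naive_mae = sum(abs(a - b) for a, b in zip(actual[1:], actual[:-1])) \
        / (len(actual) - 1)
    return mae / naive_mae
```

A MASE below 1 (such as the study's average of 0.78) means the forecast beats the naïve forecast on average; a value of exactly 1 matches it.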
APA, Harvard, Vancouver, ISO, and other styles
37

Kwag, Jae-Hwan. "A comparative study of LP methods in MR spectral analysis /." free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9962536.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Rau, Christian. "Curve Estimation and Signal Discrimination in Spatial Problems." The Australian National University, School of Mathematical Sciences, 2003. http://thesis.anu.edu.au./public/adt-ANU20031215.163519.

Full text
Abstract:
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first of these is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally-parametric nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a 'tracking' approach. These topics are discussed in Chapters 2 and 3, with pertaining background material being given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability. The construction can be likened to only a few existing approaches, and may thus be considered as our main contribution.

Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics which originated in the fields of computer vision and signal processing, and which surround edge detection. These connections are exploited so as to obtain greater robustness of the likelihood estimator, such as in the presence of sharp corners.

Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step.
A case study for remotely-sensed navigation radar data exemplifies the methodology of Chapter 6.
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Tian. "Abnormal detection in video streams via one-class learning methods." Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0018/document.

Full text
Abstract:
Video surveillance is one of the privileged research areas in computer vision. The scientific challenge in this area includes the implementation of automatic systems for obtaining detailed information about the behaviour of individuals and groups. In particular, detecting abnormal movements of groups of individuals requires fine-grained analysis of the frames of the video stream. In this thesis, the detection of abnormal movements is based on the design of an efficient image descriptor together with nonlinear classification methods. We propose three features for building the motion descriptor: (i) the global optical flow, (ii) histograms of the orientation of the optical flow (HOFO) and (iii) the covariance descriptor (COV), fusing the optical flow with other spatial characteristics of the image. Based on these descriptors, one-class machine learning algorithms (support vector machines, SVM) are used to detect abnormal events. Two online one-class SVM strategies are proposed: the first is based on SVDD (online SVDD) and the second on a least-squares version of the SVM algorithms (online LS-OC-SVM).
One of the major research areas in computer vision is visual surveillance. The scientific challenge in this area includes the implementation of automatic systems for obtaining detailed information about the behavior of individuals and groups. Particularly, detection of abnormal individual movements requires sophisticated image analysis. This thesis focuses on the problem of abnormal event detection, including feature descriptor design characterizing the movement information and one-class kernel-based classification methods. In this thesis, three different image features have been proposed: (i) global optical flow features, (ii) histograms of optical flow orientations (HOFO) descriptor and (iii) covariance matrix (COV) descriptor. Based on these proposed descriptors, one-class support vector machines (SVM) are proposed in order to detect abnormal events. Two online strategies of one-class SVM are proposed: the first strategy is based on support vector data description (online SVDD) and the second strategy is based on online least squares one-class support vector machines (online LS-OC-SVM).
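The online SVDD and LS-OC-SVM algorithms proposed in the thesis are not part of standard libraries, but the batch one-class SVM they build on is. The sketch below is a hedged stand-in rather than the thesis's method: it trains scikit-learn's `OneClassSVM` on made-up two-dimensional "normal" descriptor vectors, with the data and the `nu`/kernel settings being illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# "Normal" training descriptors: a tight cluster around the origin.
train = rng.normal(0.0, 0.5, size=(200, 2))

# nu upper-bounds the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(train)

# predict() returns +1 for points consistent with the training data
# (normal events) and -1 for abnormal ones.
normal_pred = clf.predict(np.array([[0.1, -0.2]]))
abnormal_pred = clf.predict(np.array([[5.0, 5.0]]))
```

In the thesis's setting, each row would instead be a HOFO or covariance-based descriptor computed from a video frame, and the model would be updated online as new frames arrive.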
APA, Harvard, Vancouver, ISO, and other styles
40

Lenis, Gustavo [Verfasser], and O. [Akademischer Betreuer] Dössel. "Signal Processing Methods for the Analysis of the Electrocardiogram / Gustavo Lenis ; Betreuer: O. Dössel." Karlsruhe : KIT-Bibliothek, 2017. http://d-nb.info/1132998018/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

El Ouardi, Abdelghafour. "Applying Autonomous Methods for Signal Analysis and Correction with Applications in the Ship Industry." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39629.

Full text
Abstract:
The manufacturing and transportation industries generate large amounts of data, often of inconsistent quality. The goal of this project is to find the mathematical principles of a system which automatically learns the essential statistical and analytical properties of data sets in order to detect and correct certain classes of faults in real time.
APA, Harvard, Vancouver, ISO, and other styles
42

Keen, Alan G. "Planar transmission line analyses using the Method of Lines." Thesis, University of Kent, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Anderson, Joseph T. "Geometric Methods for Robust Data Analysis in High Dimension." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488372786126891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ahlström, Christer. "Nonlinear phonocardiographic Signal Processing." Doctoral thesis, Linköpings universitet, Fysiologisk mätteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11302.

Full text
Abstract:
The aim of this thesis work has been to develop signal analysis methods for a computerized cardiac auscultation system, the intelligent stethoscope. In particular, the work focuses on classification and interpretation of features derived from the phonocardiographic (PCG) signal by using advanced signal processing techniques. The PCG signal is traditionally analyzed and characterized by morphological properties in the time domain, by spectral properties in the frequency domain or by nonstationary properties in a joint time-frequency domain. The main contribution of this thesis has been to introduce nonlinear analysis techniques based on dynamical systems theory to extract more information from the PCG signal. Especially, Takens' delay embedding theorem has been used to reconstruct the underlying system's state space based on the measured PCG signal. This processing step provides a geometrical interpretation of the dynamics of the signal, whose structure can be utilized for both system characterization and classification as well as for signal processing tasks such as detection and prediction. In this thesis, the PCG signal's structure in state space has been exploited in several applications. Change detection based on recurrence time statistics was used in combination with nonlinear prediction to remove obscuring heart sounds from lung sound recordings in healthy test subjects. Sample entropy and mutual information were used to assess the severity of aortic stenosis (AS) as well as mitral insufficiency (MI) in dogs. A large number of, partly nonlinear, features was extracted and used for distinguishing innocent murmurs from murmurs caused by AS or MI in patients with probable valve disease. Finally, novel work related to very accurate localization of the first heart sound by means of ECG-gated ensemble averaging was conducted. 
In general, the presented nonlinear processing techniques have shown considerably improved results in comparison with other PCG-based techniques. In modern health care, auscultation has found its main role in primary or home health care, when deciding if special care and more extensive examinations are required. Making a decision based on auscultation is however difficult, which is why a simple tool able to screen and assess murmurs would be both time- and cost-saving while relieving many patients from needless anxiety. In the emerging field of telemedicine and home care, an intelligent stethoscope with decision support abilities would be of great value.
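Takens' delay embedding, the core reconstruction step the abstract describes, is simple to sketch. The function below maps a scalar time series to delay-coordinate vectors in state space; the embedding dimension and lag used here are illustrative assumptions (in practice they are typically chosen via criteria such as false nearest neighbours and the first minimum of mutual information).

```python
def delay_embed(signal, dim=3, lag=2):
    """Takens delay-coordinate embedding: map a scalar time series to
    state-space vectors (x[t], x[t+lag], ..., x[t+(dim-1)*lag])."""
    n = len(signal) - (dim - 1) * lag  # number of reconstructable points
    return [[signal[t + k * lag] for k in range(dim)] for t in range(n)]
```

Applied to a PCG recording, `delay_embed(pcg, dim, lag)` yields the cloud of state-space points whose geometry feeds recurrence statistics, sample entropy and the other nonlinear measures described above.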
APA, Harvard, Vancouver, ISO, and other styles
45

Salvi, Giampiero. "Mining Speech Sounds : Machine Learning Methods for Automatic Speech Recognition and Analysis." Doctoral thesis, Stockholm : KTH School of Computer Science and Comunication, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Georgakis, Apostolos. "Time-frequency methods of signal analysis and filtering with applications in biomedical and communications engineering." Thesis, Lancaster University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Dreyer, Anna Alexandra. "Likelihood and Bayesian signal processing methods for the analysis of auditory neural and behavioral data." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45908.

Full text
Abstract:
Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, 2008.
Includes bibliographical references.
Developing a consensus on how to model neural and behavioral responses and to quantify important response properties is a challenging signal processing problem because models do not always adequately capture the data and different methods often yield different estimates of the same response property. The threshold, the first stimulus level for which a difference between baseline activity and stimulus-driven activity exists, is an example of such a response property for both neural and behavioral responses. In the first and second sections of this work, we show how the state-space model framework can be used to represent neural and behavioral responses to auditory stimuli with a high degree of model goodness-of-fit. In the first section, we use likelihood methods to develop a state-space generalized linear model and estimate maximum likelihood parameters for neural data. In the second section, we develop the alternative Bayesian state-space model for behavioral data. Based on the estimated joint density, we then illustrate how important response properties, such as the neural and behavioral threshold, can be estimated, leading to lower threshold estimates than current methods by at least 2 dB. Our methods provide greater sensitivity, obviation of the hypothesis testing framework, and a more accurate description of the data. Formulating appropriate models to describe neural data in response to natural sound stimulation is another problem that currently represents a challenge. In the third section of the thesis, we develop a generalized linear model for responses to natural sound stimuli and estimate maximum likelihood parameters. Our methodology has the advantage of describing neural responses as point processes, capturing aspects of the stimulus response such as past spiking history and estimating the contributions of the various response covariates, resulting in a high degree of model goodness-of-fit.
Using our model parameter estimates, we illustrate that decoding of the natural sound stimulus in our model framework produces neural discrimination performance on par with behavioral data. These findings have important implications for developing theoretically-sound and practical definitions of the neural response properties, for understanding information transmission within the auditory system and for the design of auditory prostheses.
by Anna A. Dreyer.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
48

Victorin, Amalia. "Multi-taper method for spectral analysis and signal reconstruction of solar wind data." Thesis, KTH, Rymd- och plasmafysik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91824.

Full text
Abstract:
Fluctuations in the solar wind characteristics such as speed, temperature, magnetic strength and density are associated with pulsations in the magnetosphere. Coherent magnetohydrodynamic waves in the solar wind may sometimes be a direct source of periodic pulsations in the frequency interval 1 to 7 mHz in the magnetosphere. In studies of the solar wind and the way its variation affects the magnetosphere, the significance of different frequency components and their signal form are of interest. Spectral analysis and signal reconstruction are important tools in these studies and in this report the Multi-Taper Method (MTM) of spectral analysis is compared to the "classic" method, using the Hanning window and Fourier transformation. The MTM-SSA toolkit, developed by the Department of Atmospheric Science at the University of California, is used to ascertain whether the MTM might be suitable. The advantages of the MTM are reduced information loss in analysed data sequences and statistical support in the analysis. Besides the compared methods of spectral analysis, an attempt has been made to test the validity of the adiabatic law, assumed as the relation between the thermal pressure and the density in the solar wind plasma. It was unfortunately difficult to estimate the gamma parameter of this relation, possibly due to the turbulent behaviour of the solar wind.
APA, Harvard, Vancouver, ISO, and other styles
49

Le, Faucheur Xavier Jean Maurice. "Statistical methods for feature extraction in shape analysis and bioinformatics." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33911.

Full text
Abstract:
The presented research explores two different problems of statistical data analysis. In the first part of this thesis, a method for 3D shape representation, compression and smoothing is presented. First, a technique for encoding non-spherical surfaces using second generation wavelet decomposition is described. Second, a novel model is proposed for wavelet-based surface enhancement. This part of the work aims to develop an efficient algorithm for removing irrelevant and noise-like variations from 3D shapes. Surfaces are encoded using second generation wavelets, and the proposed methodology consists of separating noise-like wavelet coefficients from those contributing to the relevant part of the signal. The empirically-based Bayesian models developed in this thesis threshold wavelet coefficients in an adaptive and robust manner. Once thresholding is performed, irrelevant coefficients are removed and the inverse wavelet transform is applied to the clean set of wavelet coefficients. Experimental results show the efficiency of the proposed technique for surface smoothing and compression. The second part of this thesis proposes using a non-parametric clustering method for studying RNA (RiboNucleic Acid) conformations. The local conformation of RNA molecules is an important factor in determining their catalytic and binding properties. RNA conformations can be characterized by a finite set of parameters that define the local arrangement of the molecule in space. Their analysis is particularly difficult due to the large number of degrees of freedom, such as torsion angles and inter-atomic distances among interacting residues. In order to understand and analyze the structural variability of RNA molecules, this work proposes a methodology for detecting repetitive conformational sub-structures along RNA strands. Clusters of similar structures in the conformational space are obtained using a nearest-neighbor search method based on the statistical mechanical Potts model. 
The proposed technique is a mostly automatic clustering algorithm and may be applied to problems where there is no prior knowledge on the structure of the data space, in contrast to many other clustering techniques. First, results are reported for both single-residue conformations (where the parameter set of the data space includes four to seven torsional angles) and base pair geometries. For both types of data sets, a very good match is observed between the results of the proposed clustering method and other known classifications, with only few exceptions. Second, new results are reported for base stacking geometries. In this case, the proposed classification is validated with respect to specific geometrical constraints, while the content and geometry of the new clusters are fully analyzed.
APA, Harvard, Vancouver, ISO, and other styles
50

Pérez, López Andrés. "Parametric analysis of ambisonic audio: a contributions to methods, applications and data generation." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/669962.

Full text
Abstract:
Due to the recent advances in virtual and augmented reality, ambisonics has emerged as the de facto standard for immersive audio. Ambisonic audio can be captured using spherical microphone arrays, which are becoming increasingly popular. Yet, many methods for acoustic and microphone array signal processing are not specifically tailored for spherical geometries. Therefore, there is still room for improvement in the field of automatic analysis and description of ambisonic recordings. In the present thesis, we tackle this problem using methods based on the parametric analysis of the sound field. Specifically, we present novel contributions in the scope of blind reverberation time estimation, diffuseness estimation, and sound event localization and detection. Furthermore, several software tools developed for ambisonic dataset generation and management are also presented.
APA, Harvard, Vancouver, ISO, and other styles
