Dissertations on the topic "EEG DENOISING"

To view other types of publications on this topic, follow the link: EEG DENOISING.

Format your reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 21 dissertations for your research on the topic "EEG DENOISING."

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Zhang, Shuoyue [Verfasser], and Jürgen [Akademischer Betreuer] Hennig. "Artifacts denoising of EEG acquired during simultaneous EEG-FMRI." Freiburg : Universität, 2021. http://d-nb.info/1228786968/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Becker, Hanna. "Débruitage, séparation et localisation de sources EEG dans le contexte de l'épilepsie." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4075/document.

Full text of the source
Abstract:
Electroencephalography (EEG) is a routinely used technique for the diagnosis and management of epilepsy. In this context, the objective of this thesis consists in providing algorithms for the extraction, separation, and localization of epileptic sources from the EEG recordings. In the first part of the thesis, we consider two preprocessing steps applied to raw EEG data. The first step aims at removing muscle artifacts by means of Independent Component Analysis (ICA). In this context, we propose a new semi-algebraic deflation algorithm that extracts the epileptic sources more efficiently than conventional methods as we demonstrate on simulated and real EEG data. The second step consists in separating correlated sources that can be involved in the propagation of epileptic phenomena. To this end, we explore deterministic tensor decomposition methods exploiting space-time-frequency or space-time-wave-vector data. We compare the two preprocessing methods using computer simulations to determine in which cases ICA, tensor decomposition, or a combination of both should be used. The second part of the thesis is devoted to distributed source localization techniques. After providing a survey and a classification of current state-of-the-art methods, we present an algorithm for distributed source localization that builds on the results of the tensor-based preprocessing methods. The algorithm is evaluated on simulated and real EEG data. Furthermore, we propose several improvements of a source imaging method based on structured sparsity. Finally, a comprehensive performance study of various brain source imaging methods is conducted on physiologically plausible, simulated EEG data
APA, Harvard, Vancouver, ISO, and other styles
3

Hajipour, Sardouie Sepideh. "Signal subspace identification for epileptic source localization from electroencephalographic data." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S185/document.

Full text of the source
Abstract:
In the process of recording electrical activity of the brain, the signal of interest is usually contaminated with different activities arising from various sources of noise and artifact such as muscle activity. This renders denoising as an important preprocessing stage in some ElectroEncephaloGraphy (EEG) applications such as source localization. In this thesis, we propose six methods for noise cancelation of epileptic signals. The first two methods, which are based on Generalized EigenValue Decomposition (GEVD) and Denoising Source Separation (DSS) frameworks, are used to denoise interictal data. To extract a priori information required by GEVD and DSS, we propose a series of preprocessing stages including spike peak detection, extraction of exact time support of spikes and clustering of spikes involved in each source of interest. Two other methods, called Time Frequency (TF)-GEVD and TF-DSS, are also proposed in order to denoise ictal EEG signals for which the time-frequency signature is extracted using the Canonical Correlation Analysis method. We also propose a deflationary Independent Component Analysis (ICA) method, called JDICA, that is based on Jacobi-like iterations. Moreover, we propose a new direct algorithm, called SSD-CP, to compute the Canonical Polyadic (CP) decomposition of complex-valued multi-way arrays. The proposed algorithm is based on the Simultaneous Schur Decomposition (SSD) of particular matrices derived from the array to process. We also propose a new Jacobi-like algorithm to calculate the SSD of several complex-valued matrices. The last two algorithms are used to denoise both interictal and ictal data. We evaluate the performance of the proposed methods to denoise both simulated and real epileptic EEG data with interictal or ictal activity contaminated with muscular activity. 
In the case of simulated data, the effectiveness of the proposed algorithms is evaluated in terms of the relative root mean square error between the original noise-free signals and the denoised ones, the number of required operations, and the locations of the original and denoised epileptic sources. For both interictal and ictal data, we also present some examples on real data recorded in patients with drug-resistant partial epilepsy
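The GEVD-based denoising described above maximizes, over spatial filters w, the power during epileptic epochs relative to overall power, which leads to the generalized eigenvalue problem Rs w = λ Rx w. A toy numpy/scipy sketch under that reading (the data, mixing matrix, and spike prior are all synthetic stand-ins for the thesis's preprocessing chain):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 10000
t = np.arange(n)

# Toy sources: one "epileptic" source active only during brief spike
# epochs, plus two ongoing background sources (values illustrative).
spike = np.where(t % 500 < 10, 5.0, 0.0)
S = np.vstack([spike, rng.standard_normal(n), rng.standard_normal(n)])
A = rng.standard_normal((3, 3))          # unknown mixing matrix
X = A @ S                                # observed "EEG" channels

# GEVD: find w maximizing power during spike epochs relative to overall
# power, i.e. solve  Rs w = lambda Rx w  and keep the top eigenvector.
mask = spike > 0                         # a-priori spike support
Rx = X @ X.T / n
Rs = X[:, mask] @ X[:, mask].T / mask.sum()
vals, vecs = eigh(Rs, Rx)                # ascending generalized eigenvalues
w = vecs[:, -1]
y = w @ X                                # extracted spike source

corr = abs(np.corrcoef(y, spike)[0, 1])
assert corr > 0.8
```

The extracted component correlates strongly with the hidden spike source even though the mixing is unknown, because only that source concentrates its power inside the spike support.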
APA, Harvard, Vancouver, ISO, and other styles
4

Romo, Vazquez Rebeca del Carmen. "Contribution à la détection et à l'analyse des signaux EEG épileptiques : débruitage et séparation de sources." Thesis, Vandoeuvre-les-Nancy, INPL, 2010. http://www.theses.fr/2010INPL005N/document.

Full text of the source
Abstract:
The goal of this research is the preprocessing of electroencephalographic (EEG) signals. More precisely, we aim to develop a methodology for obtaining a "clean" EEG through the identification and elimination of extra-cerebral artefacts (ocular movements, eye blinks, high-frequency and cardiac activity) and noise. After identification, the artefacts and noise must be eliminated with a minimal loss of cerebral-activity information, as this information is potentially useful for analysis (visual or automatic) and therefore for medical diagnosis. Several preprocessing steps are needed to accomplish this objective: separation and identification of the artefact sources, noise elimination, and "clean" EEG reconstruction. Through a blind source separation (BSS) approach, the first step separates the EEG signals into informative and artefact sources. Once the sources are separated, the second step is to classify and eliminate the identified artefact sources; this step implies supervised classification. The EEG is reconstructed only from the informative sources, and the noise is finally eliminated using a wavelet denoising approach. A methodology ensuring an optimal interaction of these three techniques (BSS, classification, and wavelet denoising) is the main contribution of this thesis. The methodology developed here, as well as the results obtained on a large database of real EEG recordings (ictal and inter-ictal), are subjected to a detailed analysis by medical experts, which validates the proposed approach
APA, Harvard, Vancouver, ISO, and other styles
5

NIBHANUPUDI, SWATHI. "SIGNAL DENOISING USING WAVELETS." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1070577417.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Villaron, Emilie. "Modèles aléatoires harmoniques pour les signaux électroencéphalographiques." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4815.

Full text of the source
Abstract:
This thesis addresses the problem of multichannel biomedical signal analysis using stochastic methods. EEG signals exhibit specific features that are both time and frequency localized, which motivates the use of time-frequency signal representations. In this document the (time-frequency labelled) coefficients are modelled as multivariate random variables. In the first part of this work, multichannel signals are expanded using a local cosine basis (called the MDCT basis). The approach we propose models the distribution of time-frequency coefficients (here MDCT coefficients) in terms of latent variables through a hidden Markov model. In the framework of application to EEG signals, the latent variables describe hidden mental states of the subject, which control the covariance matrices of Gaussian vectors of fixed-time, multi-channel, multi-frequency MDCT coefficients. After presenting classical algorithms for estimating the parameters, we define a new model in which the (space-frequency) covariance matrices are expanded as tensor products (also called Kronecker products) of frequency and channel matrices. Inference for the proposed model is developed and yields estimates of the model parameters, together with maximum-likelihood estimates of the sequences of latent variables. The model is applied to electroencephalogram data, and it is shown that variance-covariance matrices labelled by sensor and frequency indices can yield relevant information about the analyzed signals. This is illustrated with a case study, namely the detection of alpha waves in resting EEG for multiple sclerosis patients and control subjects
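The computational benefit of the Kronecker-factorized covariances described above can be made concrete: the inverse and log-determinant of the large space-frequency covariance follow directly from the small channel and frequency factors. A numpy sketch (sizes and matrices are toy assumptions, not the thesis's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_fr = 4, 8  # channels x frequency bins (toy sizes)

# Hypothetical channel and frequency covariance factors (made SPD)
A = rng.standard_normal((n_ch, n_ch)); A = A @ A.T + n_ch * np.eye(n_ch)
B = rng.standard_normal((n_fr, n_fr)); B = B @ B.T + n_fr * np.eye(n_fr)

# Full covariance as a Kronecker product: (n_ch*n_fr)^2 entries ...
Sigma = np.kron(A, B)

# ... but its inverse and log-determinant come from the small factors alone:
Sigma_inv = np.kron(np.linalg.inv(A), np.linalg.inv(B))
logdet = n_fr * np.linalg.slogdet(A)[1] + n_ch * np.linalg.slogdet(B)[1]

assert np.allclose(Sigma @ Sigma_inv, np.eye(n_ch * n_fr))
assert np.isclose(logdet, np.linalg.slogdet(Sigma)[1])
```

Working with the factors costs O(n_ch³ + n_fr³) instead of O((n_ch·n_fr)³), which is the complexity reduction (and numerical stabilization) the abstract refers to.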
APA, Harvard, Vancouver, ISO, and other styles
7

Tikkanen, P. (Pauli). "Characterization and application of analysis methods for ECG and time interval variability data." Doctoral thesis, University of Oulu, 1999. http://urn.fi/urn:isbn:9514252144.

Full text of the source
Abstract:
The quantitation of the variability in cardiovascular signals provides information about the autonomic neural regulation of the heart and the circulatory system. Several factors have an indirect effect on these signals, and artifacts and several types of noise are contained in the recorded signal. The dynamics of RR and QT interval time series have also been analyzed in order to predict and diagnose the risk of adverse cardiac events. An ambulatory measurement setting is an important and demanding condition for the recording and analysis of these signals, so sophisticated and robust signal analysis schemes are increasingly needed. In this thesis, essential points related to ambulatory data acquisition and analysis of cardiovascular signals are discussed, including the accuracy and reproducibility of the variability measurement. The origin of artifacts in RR interval time series is discussed, and their effects and possible correction procedures are considered. Time series that include intervals differing from normal sinus rhythm sometimes carry important information, but may not as such be suitable for analysis by all approaches. A significant variation in the results of either intra- or inter-subject analysis is unavoidable and should be kept in mind when interpreting the results. In addition to heart rate variability (HRV) measurement using RR intervals, the dynamics of ventricular repolarization duration (VRD) is considered using the invasively obtained action potential duration (APD) and different estimates of the QT interval taken from a surface electrocardiogram (ECG). Estimating the small VRD variability obviously involves potential errors and imposes stricter requirements. In this study, the accuracy of VRD measurement was improved by a better time resolution obtained through interpolating the ECG.
Furthermore, the RTmax interval was chosen as the best QT interval estimate using simulated noise tests, and a computer program was developed for time interval measurement from ambulatory ECGs. This thesis reviews the most commonly used analysis methods for cardiovascular variability signals, including time- and frequency-domain approaches. The estimation of the power spectrum is presented for the approach using an autoregressive (AR) model of the time series, and a method for estimating the powers and spectra of its components is also presented. Time-frequency and time-variant spectral analysis schemes, with applications to HRV analysis, are presented. As a novel approach, wavelet and wavelet packet transforms and the theory of signal denoising, with several principles for threshold selection, are examined. The wavelet-packet-based noise removal approach makes use of an optimized signal decomposition scheme called the best tree structure. Wavelet and wavelet packet transforms are further used to test their efficiency in removing simulated noise from the ECG. Power spectrum analysis is examined by means of wavelet transforms, which are then applied to estimate the nonstationary RR interval variability. Chaotic modelling is discussed, with important questions related to HRV analysis.
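The threshold-selection principles surveyed in this thesis follow a common recipe: transform, shrink the detail coefficients, invert. A minimal numpy sketch of soft thresholding with the universal threshold and a one-level Haar transform (a generic illustration, not the thesis's optimized wavelet-packet scheme; the test signal is a stand-in, not real ECG):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x):
    a, d = haar_step(x)
    # Universal threshold sigma * sqrt(2 log n), noise sigma from the MAD
    sigma = np.median(np.abs(d)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(x.size))
    return haar_inv(a, soft(d, t))

rng = np.random.default_rng(1)
n = 1024
clean = np.sin(2 * np.pi * np.arange(n) / 64)      # stand-in waveform
noisy = clean + 0.3 * rng.standard_normal(n)
out = denoise(noisy)
assert np.mean((out - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

Even one decomposition level reduces the error, because the slowly varying signal leaves almost all detail-band energy to the noise, which the threshold removes.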
APA, Harvard, Vancouver, ISO, and other styles
8

Pantelopoulos, Alexandros A. "PROGNOSIS: A WEARABLE SYSTEM FOR HEALTH MONITORING OF PEOPLE AT RISK." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1284754643.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Akhbari, Mahsa. "Analyse des intervalles ECG inter- et intra-battement sur des modèles d'espace d'état et de Markov cachés." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT026.

Full text of the source
Abstract:
Cardiovascular diseases are one of the major causes of mortality in humans. One way to diagnose heart diseases and abnormalities is the processing of cardiac signals such as the ECG. In many of these processes, inter-beat and intra-beat features of the ECG signal must be extracted. These features include the peak, onset and offset of the ECG waves, and the meaningful intervals and segments that can be defined for the ECG signal. ECG fiducial point (FP) extraction refers to identifying the location of the peak as well as the onset and offset of the P-wave, QRS complex and T-wave, which convey clinically useful information. However, the precise segmentation of each ECG beat is a difficult task, even for experienced cardiologists. In this thesis, we use a Bayesian framework based on the McSharry ECG dynamical model for ECG FP extraction. Since this framework is based on the morphology of ECG waves, it can be useful for ECG segmentation and interval analysis. In order to account for the time-sequential property of the ECG signal, we also use the Markovian approach and hidden Markov models (HMM). In brief, in this thesis we use a dynamic model (the Kalman filter), a sequential model (the HMM) and their combination (the switching Kalman filter, SKF). We propose three Kalman-based methods, an HMM-based method and an SKF-based method, and use them for ECG FP extraction and ECG interval analysis. The Kalman-based methods are also used for ECG denoising, T-wave alternans (TWA) detection and fetal ECG R-peak detection. To evaluate the performance of the proposed methods for ECG FP extraction, we use the "Physionet QT database" and a "Swine ECG database" that include ECG signal annotations by physicians. For ECG denoising, we use the "MIT-BIH Normal Sinus Rhythm", "MIT-BIH Arrhythmia" and "MIT-BIH noise stress test" databases. The "TWA Challenge 2008 database" is used for TWA detection and, finally, the "Physionet Computing in Cardiology Challenge 2013 database" is used for R-peak detection of the fetal ECG.
For ECG FP extraction, the performance of the proposed methods is evaluated in terms of the mean, standard deviation and root mean square of the error; we also calculate the sensitivity of the methods. For ECG denoising, we compare the methods in terms of the obtained SNR improvement
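The Kalman-based denoising referred to in this abstract builds on the standard predict/update recursion. A hedged illustration only (a generic scalar random-walk model, not the thesis's McSharry-model-based filters; sampling rate and noise levels are toy assumptions):

```python
import numpy as np

def kalman_denoise(y, q=1e-3, r=1e-1):
    """Scalar Kalman filter with a random-walk state model:
       x_k = x_{k-1} + w_k,  y_k = x_k + v_k,  w~N(0,q), v~N(0,r)."""
    x, p = y[0], 1.0
    out = np.empty_like(y)
    for k, yk in enumerate(y):
        p = p + q                      # predict: state variance grows
        g = p / (p + r)                # Kalman gain
        x = x + g * (yk - x)           # update with the new measurement
        p = (1.0 - g) * p
        out[k] = x
    return out

rng = np.random.default_rng(2)
t = np.arange(2000) / 360.0            # ~360 Hz, a common ECG sampling rate
clean = np.sin(2 * np.pi * 1.2 * t)    # stand-in for a slow ECG component
noisy = clean + 0.4 * rng.standard_normal(t.size)
den = kalman_denoise(noisy)
assert np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

The ratio q/r sets the trade-off between tracking speed and smoothing; the model-based filters in the thesis replace the random walk with an ECG-morphology state model.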
APA, Harvard, Vancouver, ISO, and other styles
10

Casaca, Wallace Correa de Oliveira. "Restauração de imagens digitais com texturas utilizando técnicas de decomposição e equações diferenciais parciais /." São José do Rio Preto : [s.n.], 2010. http://hdl.handle.net/11449/94247.

Full text of the source
Abstract:
Advisor: Maurílio Boaventura
Committee member: Evanildo Castro Silva Júnior
Committee member: Alagacone Sri Ranga
Abstract: In this work we propose four new approaches to the problem of restoring real images containing textures, from the perspective of reconstructing damaged areas, removing objects, and denoising. The first two approaches are designed to reconstruct missing parts or to remove objects from a real image using formulations based on image decomposition and exemplar-based inpainting, while the last two approaches are used to remove noise, with formulations based on a three-term decomposition and nonlinear partial differential equations. Experimental results attest to the good performance of the presented prototypes when compared to related models in the literature.
Master's degree
APA, Harvard, Vancouver, ISO, and other styles
11

JAMIL, MD DANISH. "EEG DENOISING USING WAVELET ENHANCED ICA." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14846.

Full text of the source
Abstract:
In this work we present a new approach to wavelet-enhanced ICA (wICA) in which mMSE and kurtosis are used to detect artifactual components automatically; Mahajan et al. [28] reported the performance of these measures in terms of sensitivity (90%) and specificity (98%). mMSE is good at recognizing EEG patterns because of their randomness, while kurtosis is good at recognizing peaked signals, which have high kurtosis values. We compared our method with a zeroing-ICA baseline in terms of correlation, mutual information, and coherence, and it is superior on all three measures. In terms of correlation, our method not only gives better results for unaffected recording channels but also improves the result from 0.44 to 0.58 for the most affected recording channel, which indicates that it suppresses the noise without introducing additional noise. In terms of mutual information, the result improves from 0.30 to 0.42 for the most affected recording channel. The coherence graphs show that the wICA method also affects frequencies that are not present in the ocular-artifact frequency range, whereas our method affects only the 0-16 Hz range, the frequency band of ocular artifacts. This work can be extended by using different mother wavelets to better approximate eye blinks and other ocular artifacts.
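The automatic detection step described above relies on eye-blink components being strongly peaked, hence high-kurtosis. A toy numpy sketch of the kurtosis screening (the threshold and data are illustrative assumptions, and the flagged components are simply zeroed here rather than wavelet-filtered as in the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Toy "independent components": two EEG-like Gaussian sources and one
# spiky, blink-like source (rare large deflections -> heavy tails).
comps = np.vstack([
    rng.standard_normal(n),
    rng.standard_normal(n),
    rng.standard_normal(n) * (rng.random(n) < 0.01) * 20.0,
])

def excess_kurtosis(c):
    """Fisher excess kurtosis per row: 0 for Gaussian data."""
    c = c - c.mean(axis=1, keepdims=True)
    return (c ** 4).mean(axis=1) / (c ** 2).mean(axis=1) ** 2 - 3.0

k = excess_kurtosis(comps)
artifact = k > 5.0                       # hypothetical threshold
assert artifact.tolist() == [False, False, True]

# Suppress the flagged components before remixing back to channel space.
cleaned = np.where(artifact[:, None], 0.0, comps)
```

Only the blink-like row is flagged: its rare large deflections give it an excess kurtosis far above the Gaussian baseline, which is the property the detector exploits.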
APA, Harvard, Vancouver, ISO, and other styles
12

KUMAR, MANISH. "EEG DENOISING USING ARTIFICIAL NEURAL NETWORK WITH DIFFERENT LEARNING ALGORITHMS AND ACTIVATION FUNCTIONS." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15548.

Full text of the source
Abstract:
Electroencephalogram (EEG) recordings often suffer interference from different kinds of noise, including white noise, muscle activity and baseline wander, which severely limits their utility. Artificial neural networks (ANNs) are effective and powerful tools for removing interference from EEGs. Several methods have been developed, but ANNs appear to be the most effective for reducing muscle and baseline contamination, especially when the contamination is greater in amplitude than the brain signal. An ANN filter for EEG recordings is proposed in this dissertation, which develops a novel framework for investigating and comparing the relative performance of ANNs on real EEG recordings. The method is based on a growing ANN that optimizes the number of nodes in the hidden layer and the coefficient matrices, which are trained with different learning algorithms. The ANN improves on the results obtained with conventional EEG filtering techniques: wavelet denoising, singular value decomposition, principal component analysis, adaptive filtering and independent component analysis. The system has been evaluated on a wide range of EEG signals. The present study introduces a new method for reducing all EEG interference signals in one step, with low EEG distortion and high noise reduction.
APA, Harvard, Vancouver, ISO, and other styles
13

Su, Chih-wei, and 蘇致瑋. "A Denoising Method based on the Hilbert-Huang Transform and an application to Current Source Identification from EEG Data." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/25805284754459631990.

Full text of the source
Abstract:
Master's thesis
National Central University
Institute of Physics
Academic year 95 (ROC calendar, i.e. 2006)
We present a method for reducing or removing noise in a data set, such as a time series. The method is based on the empirical mode decomposition (EMD) of a data set, which is itself a by-product of the Hilbert-Huang transform. After decomposition, the modes are partitioned into a signal set and a noise set, and the standard signal-to-noise ratio (SNR) is computed. By experimenting on sets of data composed of signals and noise of a variety of intensities, and by studying the behavior of the computed SNR in relation to the actual SNR, an algorithm for finding the best partition is derived. It is shown that under conditions that are not pathological, the algorithm is capable of giving a reliable estimation of the SNR and works well for signals of considerable complexity, provided the true SNR is greater than -10. We apply the denoising algorithm to the so-called current source-density (CSD) method used in the construction of current sources from electroencephalogram (EEG) data. The method requires a discrete form of the second derivative to be taken on data simultaneously collected at a small set of equally spaced points; at each point the data is a time series. Although there is a mathematically correct procedure, based on the Taylor expansion, for taking an increasingly accurate discrete second derivative, the procedure breaks down when the data is contaminated with noise. This has led to the widespread practice of spatially smoothing the data, for noise removal, before computing the derivative. However, this method has the unfortunate drawback that the act of spatial smoothing itself compromises the very meaning of the second derivative. The method also lacks a provision for accuracy estimation.
We show that by applying our denoising method to the time series of the EEG data, we are able to: (1) estimate the SNR in the data to be of the order of 35; (2) denoise the EEG and obtain second spatial derivatives of the data that converge in the Taylor expansion; (3) show that spatial smoothing of the data is not only unnecessary but also, when applied, qualitatively alters the results.
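The failure mode described above, a Taylor-based discrete second derivative breaking down on noisy data, is easy to reproduce numerically. A minimal NumPy sketch (illustrative only, not code from the thesis):

```python
import numpy as np

def second_derivative(y, h):
    """O(h^2) central-difference second derivative from the Taylor expansion:
    f''(x) ~ (f(x+h) - 2 f(x) + f(x-h)) / h^2."""
    return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2

h = 0.01
x = np.arange(0.0, 1.0, h)
d2_clean = second_derivative(x**2, h)   # exact for f(x) = x^2: f'' = 2 everywhere

# Even tiny measurement noise is multiplied by 1/h^2 and swamps the result,
# which is why the time series must be denoised before differentiating.
rng = np.random.default_rng(0)
d2_noisy = second_derivative(x**2 + 1e-4 * rng.standard_normal(x.size), h)
noise_amplification = np.abs(d2_noisy - d2_clean).max()
```

With noise of amplitude 1e-4, the error in the derivative grows to order one or more, which illustrates both why spatial smoothing came into use and why denoising the time series first is the cleaner remedy.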
APA, Harvard, Vancouver, ISO, and other styles
14

Kumar, Shailesh. "Denoising of Electriocardiogram (ECG) Signal." Thesis, 2017. http://ethesis.nitrkl.ac.in/8900/1/2017_MT_SKumar.pdf.

Full text of the source
Abstract:
Plenty of information related to the physiology of the heart can be extracted from the electrocardiogram (ECG) signal. In reality, the ECG signal is corrupted by various noise sources, so the noise present in the signal must be cancelled before correct physiological information can be extracted. In this thesis, the effectiveness of empirical mode decomposition (EMD) combined with the non-local means (NLM) technique is investigated, using the differential standard deviation for denoising of the ECG signal. The differential standard deviation is calculated to gather information about the input noise so that the EMD and NLM stages can be configured appropriately. The EMD stage of the proposed methodology reduces the noise in the ECG signal; its output is then passed through the NLM stage, which preserves the edges of the ECG signal and cancels the noise remaining after the EMD process. The performance of the proposed EMD-with-NLM methodology has been validated by adding white and colored Gaussian noise to clean ECG signals from the MIT-BIH arrhythmia database at different signal-to-noise ratios (SNR). The proposed denoising technique shows a lower percentage root-mean-square difference (PRD), lower mean square error (MSE), and better SNR improvement than other well-known methods. This thesis also compares different techniques for the removal of muscle artifacts and baseline wander noise. The techniques considered for muscle-artifact removal are conventional filtering, wavelet denoising, and the non-local means (NLM) technique; among these, wavelet denoising gives the best SNR improvement and the lowest MSE and PRD.
Similarly, for baseline wander removal, the performance of several techniques, including two-stage and single-stage median filters, two-stage and single-stage moving-average filters, a low-pass filter, and a band-pass filter, has been evaluated by adding baseline wander noise to a synthetic ECG signal at different sampling frequencies; among these, the two-stage median filter gives the best SNR improvement and the lowest MSE and PRD.
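The two-stage median filter that wins the baseline-wander comparison can be sketched as follows. The window lengths of roughly 200 ms and 600 ms are typical assumed choices, not parameters taken from the thesis: the first stage suppresses QRS complexes, the second suppresses T waves, and what remains is a baseline estimate that is subtracted from the signal.

```python
import numpy as np

def medfilt(x, k):
    """Sliding median with odd window length k and edge padding."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.median(np.lib.stride_tricks.sliding_window_view(xp, k), axis=1)

def remove_baseline_wander(ecg, fs):
    """Two-stage median filter for baseline wander removal."""
    k1 = int(0.2 * fs) | 1          # ~200 ms window, forced odd
    k2 = int(0.6 * fs) | 1          # ~600 ms window, forced odd
    baseline = medfilt(medfilt(ecg, k1), k2)
    return ecg - baseline
```

Because the median is robust to the short, sharp QRS and T deflections, the cascaded filter tracks only the slow drift, which a simple moving average would smear across the beats.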
APA, Harvard, Vancouver, ISO, and other styles
15

Srivastava, Shubhranshu. "Denoising and Artifacts Removal in ECG Signals." Thesis, 2015. http://ethesis.nitrkl.ac.in/7444/1/2015_Denoising_Shrivastava.pdf.

Full text of the source
Abstract:
The ECG signal is a non-stationary biological signal and plays a pivotal role in the diagnosis of cardiac abnormalities. Noise reduction in electrocardiography signals is a crucial and important problem because the artifacts corrupting the signal possess frequency characteristics similar to those of the signal itself. Conventional techniques such as filtering have proved incapable of eliminating these interferences. The electrocardiography signal therefore requires a novel and efficient denoising strategy that delivers satisfactory noise-removal performance. A new adaptive, data-driven method for denoising ECG signals using EMD and DFA algorithms has been investigated...The proposed algorithm has been tested with ECG signals (MIT-BIH database) with added noise such as baseline wander and muscle contraction noise. Performance parameters are calculated to determine the effectiveness of the algorithm on a variety of signal types. The obtained results show that the proposed denoising algorithm is easy to implement and suitable for application to electrocardiography signals.
APA, Harvard, Vancouver, ISO, and other styles
16

Adithya, Godishala. "Denoising of ECG Signal Using TMS320C6713 Processor." Thesis, 2015. http://ethesis.nitrkl.ac.in/7447/1/2015_Denoising_Adithya.pdf.

Full text of the source
Abstract:
The aim of this thesis is the denoising of the ECG signal using the TMS320C6713 DSK processor. The use of an adaptive filtering technique provides an effective method for filtering a signal. The ECG signal is generated in Matlab and used as the reference samples, i.e., it serves as the desired signal against which the input signal is compared. We use the LMS algorithm as the adaptive algorithm for processing and analyzing the ECG signal. Adaptive filters are best utilized in situations where signal conditions or system constraints vary gradually and the filter must adjust to compensate for this change. Among the many adaptive algorithms, the LMS algorithm is one of the more accurate and precise. The C6713 is a modern DSK processor which has both floating-point and fixed-point units, whereas earlier versions offered only fixed-point processing. The random noise is removed from the ECG signal with the help of LMS filter code loaded into CCS (Code Composer Studio), and the output from the processor is verified.
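The LMS update at the heart of this approach takes only a few lines. The following Python version (function name and step size are illustrative choices, not code from the thesis) implements the standard adaptive noise canceller, in which the error signal e = d - y becomes the denoised estimate:

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.005):
    """Adaptive noise cancelling with the least-mean-squares (LMS) algorithm.

    x -- reference input correlated with the interference
    d -- desired signal (clean signal plus interference)
    Returns (y, e): y is the filter output (interference estimate) and
    e = d - y is the error signal, i.e. the denoised signal.
    """
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
        y[n] = w @ u
        e[n] = d[n] - y[n]
        w += 2.0 * mu * e[n] * u            # LMS weight update
    return y, e
```

The step size mu trades convergence speed against steady-state misadjustment; on a fixed-point DSP the same loop would be written with scaled integer arithmetic.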
APA, Harvard, Vancouver, ISO, and other styles
17

Yang, Kai-Hsiang, and 楊凱翔. "Hilbert-Huang Transform based ICA for ECG Denoising." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/93426925857298956925.

Full text of the source
Abstract:
Master's thesis
Yuan Ze University
Department of Information Management
97
Common techniques for ECG noise removal include wavelet transform, principal component analysis, independent component analysis (ICA) and Hilbert-Huang transform (HHT). This study proposes a two-stage ICA-based algorithm that integrates HHT with ICA to improve the performance of noise removal in ECG signals. Our performance results show that this two-stage algorithm outperforms pure ICA-based methods.
APA, Harvard, Vancouver, ISO, and other styles
18

Tiwari, Tarun Mani. "An efficient algorithm for ECG denoising and beat detection /." 2008. http://proquest.umi.com/pqdweb?did=1654493421&sid=2&Fmt=2&clientId=10361&RQT=309&VName=PQD.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
19

Chiang, Hsin-Tien, and 江欣恬. "ECG Compression and Denoising based on Fully Convolutional Autoencoder." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/6e327t.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
107
The electrocardiogram (ECG) is an efficient and non-invasive tool for the detection and prevention of arrhythmias such as premature ventricular contractions (PVCs). Because long-term ECG monitoring generates huge amounts of data, an effective compression method is essential. This thesis proposes an autoencoder-based method utilizing a fully convolutional network (FCN). The proposed approach is applied to 10 records of the MIT-BIH Arrhythmia database. Compared with the discrete wavelet transform (DWT) and a DNN-based autoencoder (AE), the FCN achieves better performance for both normal subjects and those with abnormal PVCs, and it attains a lower percentage root-mean-square difference (PRD) than the DNN at an identical compression ratio (CR). The proposed compression scheme achieves CR = 33.64 with an average PRD of 13.65% across all subjects. Because ECG signals are prone to contamination by noise during monitoring, the FCN is also validated in a denoising application. Results on noisy ECG signals at different signal-to-noise ratios (SNR) show that the FCN has better denoising performance than the DNN. In summary, the FCN yields a higher CR and lower PRD in compression, as well as a higher SNR improvement in denoising, and is believed to have good application prospects in clinical practice.
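The two figures of merit quoted in the abstract, CR and PRD, have standard definitions; a small NumPy sketch of them (following common usage, not code from the thesis):

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference between the original and the
    reconstructed signal; lower is better."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compression_ratio(n_original, n_compressed):
    """Ratio of original size to compressed size; higher is better."""
    return n_original / n_compressed
```

A CR of 33.64 at PRD of 13.65% thus means the latent representation is roughly 34 times smaller than the input while the reconstruction error stays below 14% of the signal energy in RMS terms.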
APA, Harvard, Vancouver, ISO, and other styles
20

Hamed, Khald. "STUDY ON THE EFFECTIVENESS OF WAVELETS FOR DENOISING ECG SIGNALS USING SUBBAND DEPENDENT THRESHOLD." 2012. http://hdl.handle.net/10222/15832.

Full text of the source
Abstract:
An electrocardiogram (ECG) is a bioelectrical signal which records the heart's electrical activity versus time on the body surface via contact electrodes. The recorded ECG signal is often contaminated by noise and artifacts that can lie within the frequency band of interest and can hide important features of the ECG signal. The focus of this thesis is the application of new modified versions of the universal threshold to allow additional enhancement in the reduction of ECG noise. Although there are many types of contaminating noise in ECG signals, only white noise and baseline wander are considered. These types of noise are undesirable and need to be removed prior to any additional signal processing for proper analysis and display of the ECG signal.
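The universal threshold that the thesis modifies is commonly credited to Donoho and Johnstone: lambda = sigma * sqrt(2 ln N), applied by soft thresholding of the wavelet detail coefficients. A minimal one-level Haar sketch in NumPy (the single decomposition level and the MAD noise estimate are simplifying assumptions, not the thesis's exact setup):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform (length of x must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def universal_soft_threshold(x):
    """Universal threshold lambda = sigma * sqrt(2 ln N), with sigma
    estimated from the detail coefficients via the median absolute
    deviation (MAD), followed by soft thresholding."""
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d)) / 0.6745
    lam = sigma * np.sqrt(2.0 * np.log(len(x)))
    d_t = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)  # soft thresholding
    return haar_idwt(a, d_t)
```

Subband-dependent variants, as studied in the thesis, replace the single lambda with a threshold tuned per decomposition level.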
APA, Harvard, Vancouver, ISO, and other styles
21

Su, Aron Wei-Hsiang. "ECG Noise Filtering Using Online Model-Based Bayesian Filtering Techniques." Thesis, 2013. http://hdl.handle.net/10012/7917.

Full text of the source
Abstract:
The electrocardiogram (ECG) is a time-varying electrical signal that reflects the electrical activity of the heart. It is obtained by a non-invasive technique using surface electrodes and is used widely in hospitals. There are many clinical contexts in which ECGs are used, such as medical diagnosis, physiological therapy and arrhythmia monitoring. In medical diagnosis, medical conditions are interpreted by examining information and features in ECGs. Physiological therapy involves the control of some aspect of the physiological effort of a patient, such as the use of a pacemaker to regulate the beating of the heart. Arrhythmia monitoring involves observing and detecting life-threatening conditions, such as myocardial infarction or heart attack, in a patient. ECG signals are usually corrupted by various types of unwanted interference, such as muscle artifacts, electrode artifacts, power-line noise and respiration interference, and are distorted in such a way that it can be difficult to perform medical diagnosis, physiological therapy or arrhythmia monitoring. Consequently, signal processing of ECGs is required to remove noise and interference for successful clinical applications. Existing signal processing techniques can remove some of the noise in an ECG signal, but are typically inadequate for extraction of the weak ECG components contaminated by background noise and for retention of various subtle features of the ECG. For example, noise from the electromyogram (EMG) usually overlaps the fundamental ECG cardiac components in the frequency domain, in the range of 0.01 Hz to 100 Hz, and simple filters are inadequate to remove noise that overlaps with the ECG cardiac components. Sameni et al. have proposed a Bayesian filtering framework to resolve these problems, and it gives results clearly superior to those obtained by applying conventional signal processing methods to the ECG.
However, a drawback of this Bayesian filtering framework is that it must run offline, which is undesirable for clinical applications such as arrhythmia monitoring and physiological therapy, both of which require online operation in near real-time. To resolve this problem, this thesis proposes a dynamical model which permits the Bayesian filtering framework to function online. The framework with the proposed dynamical model loses less than 4% in performance compared to the previous (offline) version of the framework. The proposed dynamical model is based on theory from fixed-lag smoothing.
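Model-based Bayesian filtering of this kind reduces, in its simplest linear-Gaussian form, to the Kalman filter. The scalar sketch below is a generic illustration of the predict/update cycle that an online framework must execute in real time, not Sameni et al.'s ECG dynamical model:

```python
import numpy as np

def kalman_filter(z, f, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the model x_k = f*x_{k-1} + w, z_k = x_k + v,
    with process noise variance q and measurement noise variance r."""
    x, p = x0, p0
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        # predict step: propagate state and uncertainty through the model
        x = f * x
        p = f * f * p + q
        # update step: blend prediction with the new measurement
        kgain = p / (p + r)
        x = x + kgain * (zk - x)
        p = (1.0 - kgain) * p
        out[k] = x
    return out
```

Fixed-lag smoothing, as used in the proposed dynamical model, additionally revises the last few estimates once a handful of later measurements have arrived, trading a small constant delay for accuracy closer to the offline smoother.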
APA, Harvard, Vancouver, ISO, and other styles