Dissertations / Theses on the topic 'EEG DENOISING'
Consult the top 21 dissertations / theses for your research on the topic 'EEG DENOISING.'
Zhang, Shuoyue [Author], and Jürgen [Academic supervisor] Hennig. "Artifacts denoising of EEG acquired during simultaneous EEG-fMRI." Freiburg : Universität, 2021. http://d-nb.info/1228786968/34.
Becker, Hanna. "Débruitage, séparation et localisation de sources EEG dans le contexte de l'épilepsie." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4075/document.
Electroencephalography (EEG) is a routinely used technique for the diagnosis and management of epilepsy. In this context, the objective of this thesis consists in providing algorithms for the extraction, separation, and localization of epileptic sources from the EEG recordings. In the first part of the thesis, we consider two preprocessing steps applied to raw EEG data. The first step aims at removing muscle artifacts by means of Independent Component Analysis (ICA). In this context, we propose a new semi-algebraic deflation algorithm that extracts the epileptic sources more efficiently than conventional methods, as we demonstrate on simulated and real EEG data. The second step consists in separating correlated sources that can be involved in the propagation of epileptic phenomena. To this end, we explore deterministic tensor decomposition methods exploiting space-time-frequency or space-time-wave-vector data. We compare the two preprocessing methods using computer simulations to determine in which cases ICA, tensor decomposition, or a combination of both should be used. The second part of the thesis is devoted to distributed source localization techniques. After providing a survey and a classification of current state-of-the-art methods, we present an algorithm for distributed source localization that builds on the results of the tensor-based preprocessing methods. The algorithm is evaluated on simulated and real EEG data. Furthermore, we propose several improvements of a source imaging method based on structured sparsity. Finally, a comprehensive performance study of various brain source imaging methods is conducted on physiologically plausible, simulated EEG data.
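The deflationary ICA step described in this abstract can be illustrated with a minimal numpy sketch. This is a generic kurtosis-based FastICA deflation, not the semi-algebraic algorithm the thesis proposes; all names and parameter choices here are illustrative assumptions:

```python
import numpy as np

def fastica_deflation(X, n_components, n_iter=200, seed=0):
    """Extract components one at a time (deflation) from mixtures X (channels x samples)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten: decorrelate channels and normalize variance.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    W = np.zeros((n_components, Z.shape[0]))
    for k in range(n_components):
        w = rng.standard_normal(Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            y = w @ Z
            # Fixed-point update with the cubic nonlinearity (kurtosis contrast).
            w_new = (Z * y ** 3).mean(axis=1) - 3 * w
            # Deflation: stay orthogonal to the components already extracted.
            w_new -= W[:k].T @ (W[:k] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-10
            w = w_new
            if converged:
                break
        W[k] = w
    return W @ Z  # estimated sources (components x samples)
```

On a toy two-source mixture, each extracted row correlates strongly with one of the true sources (up to sign and ordering), which is the property a deflation procedure relies on.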
Hajipour, Sardouie Sepideh. "Signal subspace identification for epileptic source localization from electroencephalographic data." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S185/document.
In the process of recording electrical activity of the brain, the signal of interest is usually contaminated with different activities arising from various sources of noise and artifact such as muscle activity. This renders denoising an important preprocessing stage in some ElectroEncephaloGraphy (EEG) applications such as source localization. In this thesis, we propose six methods for noise cancelation of epileptic signals. The first two methods, which are based on Generalized EigenValue Decomposition (GEVD) and Denoising Source Separation (DSS) frameworks, are used to denoise interictal data. To extract a priori information required by GEVD and DSS, we propose a series of preprocessing stages including spike peak detection, extraction of the exact time support of spikes and clustering of the spikes involved in each source of interest. Two other methods, called Time Frequency (TF)-GEVD and TF-DSS, are also proposed in order to denoise ictal EEG signals, for which the time-frequency signature is extracted using the Canonical Correlation Analysis method. We also propose a deflationary Independent Component Analysis (ICA) method, called JDICA, that is based on Jacobi-like iterations. Moreover, we propose a new direct algorithm, called SSD-CP, to compute the Canonical Polyadic (CP) decomposition of complex-valued multi-way arrays. The proposed algorithm is based on the Simultaneous Schur Decomposition (SSD) of particular matrices derived from the array to process. We also propose a new Jacobi-like algorithm to calculate the SSD of several complex-valued matrices. The last two algorithms are used to denoise both interictal and ictal data. We evaluate the performance of the proposed methods to denoise both simulated and real epileptic EEG data with interictal or ictal activity contaminated with muscular activity.
In the case of simulated data, the effectiveness of the proposed algorithms is evaluated in terms of the Relative Root Mean Square Error between the original noise-free signals and the denoised ones, the number of required operations, and the locations of the original and denoised epileptic sources. For both interictal and ictal data, we present some examples on real data recorded in patients with a drug-resistant partial epilepsy.
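To make the GEVD-based denoising idea concrete, here is a hedged numpy sketch of the generic rank-reduction scheme the abstract refers to. It is not the thesis's algorithm: the spike-detection preprocessing that supplies the a priori information is replaced by an assumed `signal_mask` marking the samples of interest:

```python
import numpy as np

def gevd_denoise(X, signal_mask, n_keep=1):
    """Rank-reduction denoising via generalized eigenvalue decomposition (GEVD).

    X           : channels x samples EEG array
    signal_mask : boolean mask marking samples containing the activity of
                  interest (stand-in for detected spike epochs)
    n_keep      : number of generalized components kept as 'signal'
    """
    X = X - X.mean(axis=1, keepdims=True)
    Cs = np.cov(X[:, signal_mask])      # covariance during epochs of interest
    Cx = np.cov(X)                      # covariance of the whole recording
    # Solve Cs w = lambda Cx w via symmetric whitening of Cx.
    d, E = np.linalg.eigh(Cx)
    Wh = E @ np.diag(d ** -0.5) @ E.T
    lam, U = np.linalg.eigh(Wh @ Cs @ Wh)
    W = Wh @ U[:, ::-1]                 # generalized eigenvectors, best ratio first
    S = W.T @ X                         # source-space signals
    A = np.linalg.pinv(W.T)             # back-projection (mixing) matrix
    S[n_keep:] = 0                      # discard the low-ratio components
    return A @ S
```

Components are ranked by the ratio of epoch power to total power, so the retained subspace favors activity that is strong during the masked epochs.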
Romo, Vazquez Rebeca del Carmen. "Contribution à la détection et à l'analyse des signaux EEG épileptiques : débruitage et séparation de sources." Thesis, Vandoeuvre-les-Nancy, INPL, 2010. http://www.theses.fr/2010INPL005N/document.
The goal of this research is the preprocessing of electroencephalographic (EEG) signals. More precisely, we aim to develop a methodology to obtain a "clean" EEG by identifying and eliminating extra-cerebral artefacts (ocular movements, eye blinks, high-frequency and cardiac activity) and noise. After identification, the artefacts and noise must be eliminated with a minimal loss of cerebral activity information, as this information is potentially useful for the analysis (visual or automatic) and therefore for the medical diagnosis. To accomplish this objective, several preprocessing steps are needed: separation and identification of the artefact sources, noise elimination and "clean" EEG reconstruction. Through a blind source separation (BSS) approach, the first step aims to separate the EEG signals into informative and artefact sources. Once the sources are separated, the second step is to classify and eliminate the identified artefact sources; this step implies a supervised classification. The EEG is then reconstructed only from the informative sources. The noise is finally eliminated using a wavelet denoising approach. A methodology ensuring an optimal interaction of these three techniques (BSS, classification and wavelet denoising) is the main contribution of this thesis. The methodology developed here, as well as the results obtained from a large real EEG database (ictal and inter-ictal), was subjected to a detailed analysis by medical experts, which validates the proposed approach.
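The wavelet-denoising stage mentioned above can be sketched with a single-wavelet example. The following is a minimal Haar-wavelet soft-thresholding pass with the universal threshold, written in plain numpy as an illustration; the thesis's actual wavelet choice and threshold rule may differ:

```python
import numpy as np

def haar_denoise(x, n_levels=4):
    """Haar DWT -> soft-threshold the detail coefficients -> inverse DWT."""
    n = len(x)
    approx = np.asarray(x, dtype=float)
    details, lengths = [], []
    for _ in range(n_levels):
        lengths.append(len(approx))
        if len(approx) % 2:                      # pad odd lengths by repetition
            approx = np.append(approx, approx[-1])
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        details.append(d)
        approx = a
    # Noise level estimated from the finest-scale details (MAD estimator).
    sigma = np.median(np.abs(details[0])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(n))         # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
    for d, m in zip(reversed(details), reversed(lengths)):
        out = np.empty(2 * len(d))               # inverse Haar step
        out[0::2] = (approx + d) / np.sqrt(2)
        out[1::2] = (approx - d) / np.sqrt(2)
        approx = out[:m]                         # undo padding at this level
    return approx
```

Small detail coefficients, which are dominated by noise, are shrunk to zero, while large coefficients carrying signal structure survive.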
NIBHANUPUDI, SWATHI. "SIGNAL DENOISING USING WAVELETS." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1070577417.
Villaron, Emilie. "Modèles aléatoires harmoniques pour les signaux électroencéphalographiques." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4815.
This thesis addresses the problem of multichannel biomedical signal analysis using stochastic methods. EEG signals exhibit specific features that are both time and frequency localized, which motivates the use of time-frequency signal representations. In this document the (time-frequency labelled) coefficients are modelled as multivariate random variables. In the first part of this work, multichannel signals are expanded using a local cosine basis (called MDCT basis). The approach we propose models the distribution of time-frequency coefficients (here MDCT coefficients) in terms of latent variables by the use of a hidden Markov model. In the framework of application to EEG signals, the latent variables describe some hidden mental state of the subject. The latter control the covariance matrices of the Gaussian vectors of multi-channel, multi-frequency MDCT coefficients at fixed time points. After presenting classical algorithms to estimate the parameters, we define a new model in which the (space-frequency) covariance matrices are expanded as tensor products (also named Kronecker products) of frequency and channel matrices. Inference for the proposed model is developed and yields estimates for the model parameters, together with maximum likelihood estimates for the sequences of latent variables. The model is applied to electroencephalogram data, and it is shown that variance-covariance matrices labelled by sensor and frequency indices can yield relevant information on the analyzed signals. This is illustrated with a case study, namely the detection of alpha waves in rest EEG for multiple sclerosis patients and control subjects.
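The Kronecker-structured covariance mentioned in this abstract is easy to illustrate numerically. The sketch below (an illustration, not the thesis's estimation algorithm; dimensions are arbitrary assumptions) shows the structure Sigma = Sigma_freq ⊗ Sigma_chan and one computational benefit, the factorized log-determinant used in Gaussian likelihoods:

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_chan = 6, 4

def random_spd(n, rng):
    """Random symmetric positive-definite matrix."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

Sigma_f = random_spd(n_freq, rng)        # frequency factor
Sigma_c = random_spd(n_chan, rng)        # channel factor
Sigma = np.kron(Sigma_f, Sigma_c)        # full 24 x 24 space-frequency covariance

# det(A (x) B) = det(A)^m * det(B)^n for A n x n and B m x m, so the
# log-determinant factorizes and likelihood evaluations stay cheap:
logdet_full = np.linalg.slogdet(Sigma)[1]
logdet_factored = (n_chan * np.linalg.slogdet(Sigma_f)[1]
                   + n_freq * np.linalg.slogdet(Sigma_c)[1])
assert np.isclose(logdet_full, logdet_factored)
```

The structural constraint also cuts the parameter count sharply: a free 24 x 24 symmetric covariance has 300 entries, while the two factors together have only 21 + 10.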
Tikkanen, P. (Pauli). "Characterization and application of analysis methods for ECG and time interval variability data." Doctoral thesis, University of Oulu, 1999. http://urn.fi/urn:isbn:9514252144.
Pantelopoulos, Alexandros A. "PROGNOSIS: A WEARABLE SYSTEM FOR HEALTH MONITORING OF PEOPLE AT RISK." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1284754643.
Akhbari, Mahsa. "Analyse des intervalles ECG inter- et intra-battement sur des modèles d'espace d'état et de Markov cachés." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT026.
Cardiovascular diseases are one of the major causes of mortality in humans. One way to diagnose heart diseases and abnormalities is the processing of cardiac signals such as the ECG. In many of these processes, inter-beat and intra-beat features of the ECG signal must be extracted. These features include the peaks, onsets and offsets of the ECG waves, and meaningful intervals and segments that can be defined for the ECG signal. ECG fiducial point (FP) extraction refers to identifying the location of the peak as well as the onset and offset of the P-wave, QRS complex and T-wave, which convey clinically useful information. However, the precise segmentation of each ECG beat is a difficult task, even for experienced cardiologists. In this thesis, we use a Bayesian framework based on the McSharry ECG dynamical model for ECG FP extraction. Since this framework is based on the morphology of ECG waves, it can be useful for ECG segmentation and interval analysis. In order to take into account the time-sequential property of the ECG signal, we also use the Markovian approach and hidden Markov models (HMM). In brief, in this thesis we use a dynamic model (Kalman filter), a sequential model (HMM) and their combination (the switching Kalman filter, SKF). We propose three Kalman-based methods, an HMM-based method and an SKF-based method. We use the proposed methods for ECG FP extraction and ECG interval analysis. The Kalman-based methods are also used for ECG denoising, T-wave alternans (TWA) detection and fetal ECG R-peak detection. To evaluate the performance of the proposed methods for ECG FP extraction, we use the "Physionet QT database" and a "Swine ECG database" that include ECG signal annotations by physicians. For ECG denoising, we use the "MIT-BIH Normal Sinus Rhythm", "MIT-BIH Arrhythmia" and "MIT-BIH noise stress test" databases. The "TWA Challenge 2008 database" is used for TWA detection and, finally, the "Physionet Computing in Cardiology Challenge 2013 database" is used for R-peak detection of fetal ECG.
In ECG FP extraction, the performance of the proposed methods is evaluated in terms of the mean, standard deviation and root mean square of the error. We also calculate the sensitivity of each method. For ECG denoising, we compare the methods in terms of the SNR improvement obtained.
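As a toy illustration of the model-based Bayesian filtering theme (not the McSharry-model filters the thesis actually uses), a scalar Kalman filter with an assumed random-walk state model already denoises a slowly varying signal; `q` and `r` are assumed process- and measurement-noise variances:

```python
import numpy as np

def kalman_denoise(y, q=1e-3, r=1e-1):
    """Minimal scalar Kalman filter sketch with a random-walk state model."""
    y = np.asarray(y, dtype=float)
    x, p = y[0], 1.0                  # initial state estimate and variance
    out = np.empty_like(y)
    for i, z in enumerate(y):
        p = p + q                     # predict: state follows a random walk
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the new measurement
        p = (1 - k) * p
        out[i] = x
    return out
```

A richer state model (such as the ECG dynamical model named in the abstract) replaces the random-walk prediction step while the gain and update structure stays the same.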
Casaca, Wallace Correa de Oliveira. "Restauração de imagens digitais com texturas utilizando técnicas de decomposição e equações diferenciais parciais /." São José do Rio Preto : [s.n.], 2010. http://hdl.handle.net/11449/94247.
Committee member: Evanildo Castro Silva Júnior
Committee member: Alagacone Sri Ranga
Abstract: In this work we propose four new approaches to address the problem of restoring real images containing textures, from the perspective of reconstruction of damaged areas, object removal, and denoising. The first two approaches are designed to reconstruct missing parts or to remove objects from a real image using formulations based on image decomposition and exemplar-based inpainting, while the last two approaches are used to remove noise, with formulations based on three-term decomposition and nonlinear partial differential equations. Experimental results attest to the good performance of the presented prototypes when compared to related models in the literature.
Degree: Master's
JAMIL, MD DANISH. "EEG DENOISING USING WAVELET ENHANCED ICA." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14846.
KUMAR, MANISH. "EEG DENOISING USING ARTIFICIAL NEURAL NETWORK WITH DIFFERENT LEARNING ALGORITHMS AND ACTIVATION FUNCTIONS." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15548.
Su, Chih-wei, and 蘇致瑋. "A Denoising Method based on the Hilbert-Huang Transform and an application to Current Source Identification from EEG Data." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/25805284754459631990.
National Central University
Institute of Physics
95
We present a method for reducing or removing noise in a data set, such as a time series. The method is based on the empirical mode decomposition (EMD) of a data set, which is itself a by-product of the Hilbert-Huang transform. After decomposition the modes are partitioned into a signal set and a noise set, and the standard signal-to-noise ratio (SNR) is computed. By experimenting on sets of data composed of signal and noise of a variety of intensities and studying the behavior of the computed SNR in relation to the actual SNR, an algorithm for finding the best partition is derived. It is shown that under conditions that are not pathological, the algorithm is capable of giving a reliable estimation of the SNR and works well for signals of considerable complexity, provided the true SNR is greater than -10. We apply the denoising algorithm to the so-called current source-density (CSD) method used in the construction of current sources from electroencephalogram (EEG) data. The method requires a discrete form of the second derivative to be taken on data simultaneously collected at a small set of equally spaced points. At each point the data is a time series. Although there is a mathematically correct procedure for taking an increasingly accurate discrete second derivative based on Taylor expansion, the procedure breaks down when the data is contaminated with noise. This has led to the widespread practice of spatially smoothing the data (for noise removal) before computing the derivative. However, this method has the unfortunate drawback that the act of spatial smoothing itself compromises the very meaning of the second derivative. The method also lacks a provision for accuracy estimation.
We show that by applying our denoising method to the time series of the EEG data, we are able to: (1) estimate the SNR in the data to be of the order of 35; (2) denoise the EEG and obtain second spatial derivatives of the data that converge in the Taylor expansion; and (3) show that spatial smoothing of the data is not only unnecessary, but that when it is done, it qualitatively alters the results.
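The partition search described in this abstract can be sketched as follows, assuming the IMFs have already been computed by EMD (the decomposition itself is omitted); the thesis derives a more careful rule for choosing the split, so this is only the skeleton of the idea:

```python
import numpy as np

def partition_snrs(imfs):
    """For IMFs ordered fine-to-coarse (as EMD produces them), evaluate every
    split 'first k modes = noise, rest = signal' and return the SNR estimate
    (in dB) for each candidate split k = 1 .. n_modes - 1."""
    imfs = np.asarray(imfs)
    snrs = []
    for k in range(1, len(imfs)):
        noise = imfs[:k].sum(axis=0)
        signal = imfs[k:].sum(axis=0)
        snrs.append(10 * np.log10(signal.var() / noise.var()))
    return snrs

def denoise_by_partition(imfs, k):
    """Reconstruct the signal from the modes judged to carry signal."""
    return np.asarray(imfs)[k:].sum(axis=0)
```

Because EMD places the fastest oscillations (where broadband noise concentrates) in the first modes, discarding an initial block of modes acts as an adaptive denoiser once the best split is identified.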
Kumar, Shailesh. "Denoising of Electrocardiogram (ECG) Signal." Thesis, 2017. http://ethesis.nitrkl.ac.in/8900/1/2017_MT_SKumar.pdf.
Srivastava, Shubhranshu. "Denoising and Artifacts Removal in ECG Signals." Thesis, 2015. http://ethesis.nitrkl.ac.in/7444/1/2015_Denoising_Shrivastava.pdf.
Adithya, Godishala. "Denoising of ECG Signal Using TMS320C6713 Processor." Thesis, 2015. http://ethesis.nitrkl.ac.in/7447/1/2015_Denoising_Adithya.pdf.
Yang, Kai-Hsiang, and 楊凱翔. "Hilbert-Huang Transform based ICA for ECG Denoising." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/93426925857298956925.
Yuan Ze University
Department of Information Management
97
Common techniques for ECG noise removal include wavelet transform, principal component analysis, independent component analysis (ICA) and Hilbert-Huang transform (HHT). This study proposes a two-stage ICA-based algorithm that integrates HHT with ICA to improve the performance of noise removal in ECG signals. Our performance results show that this two-stage algorithm outperforms pure ICA-based methods.
Tiwari, Tarun Mani. "An efficient algorithm for ECG denoising and beat detection /." 2008. http://proquest.umi.com/pqdweb?did=1654493421&sid=2&Fmt=2&clientId=10361&RQT=309&VName=PQD.
Chiang, Hsin-Tien, and 江欣恬. "ECG Compression and Denoising based on Fully Convolutional Autoencoder." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/6e327t.
National Taiwan University
Graduate Institute of Electronics Engineering
107
The electrocardiogram (ECG) is an efficient and non-invasive tool for the detection and prevention of arrhythmias such as premature ventricular contractions (PVCs). Due to the huge amounts of data generated by long-term ECG monitoring, an effective compression method is essential. This thesis proposes an autoencoder-based method utilizing a fully convolutional network (FCN). The proposed approach is applied to 10 records of the MIT-BIH Arrhythmia database. Compared with the discrete wavelet transform (DWT) and a DNN-based autoencoder (AE), the FCN achieves better performance for both normal individuals and individuals with abnormal PVCs. Moreover, the FCN outperforms the DNN with a lower percentage root-mean-square difference (PRD) at an identical compression ratio (CR). With the proposed compression scheme, the FCN achieves CR = 33.64 with PRD = 13.65% on average over all subjects. Because ECG signals are prone to contamination with noise during monitoring, we also validate the FCN in a denoising application. The results, obtained on noisy ECG signals at different signal-to-noise ratio (SNR) levels, show that the FCN has better denoising performance than the DNN. In summary, the FCN yields a higher CR and lower PRD in compression as well as a higher SNR improvement in denoising, and is believed to have good application prospects in clinical practice.
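The two figures of merit quoted in this abstract, PRD and CR, are straightforward to compute. Below is a minimal numpy sketch using one common convention for each (bit-level CR definitions also exist, and some PRD variants subtract the signal mean first):

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference between an original ECG
    segment and its reconstruction (lower is better)."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compression_ratio(n_original, n_compressed):
    """CR as the ratio of original to compressed sample counts."""
    return n_original / n_compressed
```

A perfect reconstruction gives PRD = 0, and reporting PRD alongside CR makes compression results comparable across methods, as in the DWT/DNN/FCN comparison above.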
Hamed, Khald. "STUDY ON THE EFFECTIVENESS OF WAVELETS FOR DENOISING ECG SIGNALS USING SUBBAND DEPENDENT THRESHOLD." 2012. http://hdl.handle.net/10222/15832.
Full textSu, Aron Wei-Hsiang. "ECG Noise Filtering Using Online Model-Based Bayesian Filtering Techniques." Thesis, 2013. http://hdl.handle.net/10012/7917.