To see the other types of publications on this topic, follow this link: ECG extraction.

Dissertations on the topic "ECG extraction"

Consult the top 50 dissertations for research on the topic "ECG extraction".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, if the relevant details are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Peddaneni, Hemanth. „Comparison of algorithms for fetal ECG extraction“. [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0007480.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Niknazar, Mohammad. „Extraction et débruitage de signaux ECG du foetus“. Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00954175.

Full text of the source
Abstract:
Congenital heart defects are the leading cause of birth-defect-related deaths. The fetal electrocardiogram (fECG), which is expected to contain much more information than conventional echographic methods, can be measured by electrodes placed on the mother's abdomen. However, it is very weak and mixed with several sources of noise and interference, including the maternal ECG (mECG), whose level is very strong. In previous studies, several methods have been proposed for extracting the fECG from signals recorded by electrodes placed on the surface of the mother's body. However, these methods require a large number of sensors and prove ineffective with only one or two sensors. In this study, three novel approaches based on algebraic, statistical, and state-space parameterizations are proposed. These three methods implement different models of the quasi-periodicity of the cardiac signal. In the first approach, the cardiac signal and its variability are modeled by a Kalman filter. In the second approach, the signal is cut into beat-aligned windows, whose stacking forms a tensor for which a decomposition is sought. In the third approach, the signal is not modeled directly but is treated as a Gaussian process characterized by its second-order statistics. In these different models, in contrast to previous studies, the mECG and the fECG(s) are modeled explicitly. The performance of the proposed methods, which use a minimum number of sensors, is evaluated on synthetic data and on real recordings, including the cardiac signals of twin fetuses.
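To make the beat-stacking idea of the second approach concrete, here is a minimal sketch (not code from the thesis) that cuts a single-channel ECG into fixed-length windows centred on given R-peak positions; for multichannel data the same stacking yields a third-order tensor suitable for a CP/PARAFAC-type decomposition. The names and window length are illustrative assumptions.

```python
import numpy as np

def stack_beats(ecg, r_peaks, fs, half_window=0.3):
    """Stack beat-centred windows of a 1-D ECG into a (n_beats, n_samples) array.

    `r_peaks` are R-peak sample indices (assumed already detected elsewhere),
    `fs` is the sampling rate in Hz and `half_window` the half-width of each
    window in seconds.
    """
    half = int(half_window * fs)
    windows = [ecg[p - half:p + half] for p in r_peaks
               if p - half >= 0 and p + half <= len(ecg)]
    return np.vstack(windows)

# For a multichannel recording X of shape (n_channels, n_samples), applying the
# same stacking to every channel yields a (n_channels, n_beats, n_samples)
# third-order tensor of the kind whose decomposition the abstract refers to.
```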
APA, Harvard, Vancouver, ISO, and other citation styles
3

Michael, Pratheek. „Simulation Studies on ECG Vector Dipole Extraction in Liquid Medium“. Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6625.

Full text of the source
Abstract:
To circumvent some inherent problems in the conventional ECG, this research reinvestigates an ‘unassisted’ approach which enables ECG measurement without the placement of leads on the body. Employed in this research is a widely accepted assumption that the electrical activity of the heart may be represented, largely, by a 3-D time-varying current dipole (3D-CD). From the PhysioBank database, mECG and fECG data were obtained, and Singular Value Decomposition (SVD) was performed to estimate the time-varying vector ECG dipole. To determine the sensing matrix responsible for transforming the activity of the 3D-CD into the potential distribution on the surface of the medium, the ECG vector dipole signals are used to excite a 3D-CD in a water medium of a specific shape containing ellipsoid model(s) in the COMSOL tool. The sensing matrix thereby estimated is then utilized to reconstruct the 3D-CD signals from the signals measured by the probes on the surface of the medium. Fairly low Normalized Root-Mean-Squared Errors (NRMSEs) are attained. The approach is also successfully extended to the case of two ellipsoids, one inside the other, representing a pregnant female subject; low NRMSEs are again observed.
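As a rough illustration of the SVD step described above, and not the thesis's exact procedure, the sketch below keeps the three dominant components of a leads-by-samples matrix as a time-varying vector dipole; the variable names and the toy data are assumptions.

```python
import numpy as np

def estimate_dipole(X, rank=3):
    """Truncated SVD of a (n_leads, n_samples) ECG matrix.

    Returns the lead (mixing) vectors and the `rank` dominant time courses,
    interpreted here as a time-varying vector dipole.  The split of the
    singular values between the two factors is an arbitrary convention.
    """
    X = X - X.mean(axis=1, keepdims=True)          # remove per-lead baseline
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lead_vectors = U[:, :rank] * s[:rank]          # projection of the dipole onto each lead
    dipole = Vt[:rank, :]                          # 3-D time-varying dipole signals
    return lead_vectors, dipole

# Example with random data standing in for an 8-lead recording:
A, d = estimate_dipole(np.random.randn(8, 5000))
print(A.shape, d.shape)                            # (8, 3) (3, 5000)
```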
APA, Harvard, Vancouver, ISO, and other citation styles
4

Darrington, John Mark. „Real time extraction of ECG fiducial points using shape based detection“. University of Western Australia. School of Computer Science and Software Engineering, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0152.

Full text of the source
Abstract:
The electrocardiograph (ECG) is a common clinical and biomedical research tool used for both diagnostic and prognostic purposes. In recent years computer aided analysis of the ECG has enabled cardiographic patterns to be found which were hitherto not apparent. Many of these analyses rely upon the segmentation of the ECG into separate time delimited waveforms. The instants delimiting these segments are called the fiducial points.
APA, Harvard, Vancouver, ISO, and other citation styles
5

Bin, Safie Sairul Izwan. „Pulse domain novel feature extraction methods with application to ecg biometric authentication“. Thesis, University of Strathclyde, 2012. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=17829.

Full text of the source
Abstract:
This thesis presents the concept of representing finite signals in terms of sequential output pulses, called the pulse domain, to extract Electrocardiogram (ECG) features for biometric authentication systems. Two novel methods based on the pulse domain philosophy, namely the Pulse Active (PA) and Adaptive Pulse Active (APA) techniques, are presented in this thesis. A total of 11 algorithms are derived from these two methods and used to generate novel ECG feature vectors. The six algorithms of the PA technique are named Pulse Active Bit (PAB), Pulse Active Width (PAW), Pulse Active Area (PAA), Pulse Active Mean (PAM), Pulse Active Ratio (PAR) and Pulse Active Harmonic (PAH). The five APA algorithms are named Adaptive Pulse Active Bit (APAB), Adaptive Pulse Active Width (APAW), Adaptive Pulse Active Area (APAA), Adaptive Pulse Active Mean (APAM) and Adaptive Pulse Active Harmonic (APAH). The proposed techniques are validated using experimental ECG data from 112 subjects. Simulation results indicate that APAW generates the best biometric performance of all 11 algorithms. Ranges of PA and APA parameters that generate approximately similar biometric performance are also determined in this thesis. Within these suggested ranges, the parameters are then used as a personal identification number (PIN), which forms part of the proposed PA-APA ECG-based multilevel security biometric authentication system.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Janjarasjitt, Suparerk. „A NEW QRS DETECTION AND ECG SIGNAL EXTRACTION TECHNIQUE FOR FETAL MONITORING“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=case1144263231.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Tang, Yu. „Feature Extraction for the Cardiovascular Disease Diagnosis“. Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33742.

Full text of the source
Abstract:
Cardiovascular disease is a serious, life-threatening disease. It can occur suddenly and progresses rapidly. Finding the right disease features at an early stage is important to decrease the number of deaths and to make sure that the patient can fully recover. Though there are several methods of examination, describing heart activity in signal form is the most cost-effective way. In this case, the ECG is the best choice because it can record heart activity in signal form and it is safer, faster and more convenient than other methods of examination. However, there are still problems with the ECG. For example, not all ECG features are clear and easily understood. In addition, frequency features are not present in the traditional ECG. To solve these problems, the project uses the optimized CWT algorithm to transform data from the time domain into the time-frequency domain. The result is evaluated by three data mining algorithms with different mechanisms. The evaluation shows that the features in the ECG are successfully extracted and that important diagnostic information in the ECG is preserved. A user interface is designed to increase efficiency and facilitate the implementation.
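As a generic illustration of mapping an ECG into the time-frequency domain with a continuous wavelet transform (the thesis uses its own optimized CWT algorithm; the PyWavelets call and the toy signal below are only assumed stand-ins):

```python
import numpy as np
import pywt

fs = 360.0                                  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)           # toy stand-in for an ECG trace

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(ecg, scales, "morl", sampling_period=1 / fs)
# `coeffs` has shape (len(scales), len(ecg)): a time-frequency image whose rows
# correspond to the pseudo-frequencies listed in `freqs`.
print(coeffs.shape, freqs[:3])
```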
APA, Harvard, Vancouver, ISO, and other citation styles
8

Islam, Mohd Siblee. „A Decision Support System for StressDiagnosis using ECG Sensor“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-11769.

Full text of the source
Abstract:
Diagnosis of stress is important because it can cause many diseases, e.g., heart disease, headache, migraine, sleep problems and irritability. Diagnosis of stress in patients often involves acquisition of biological signals, for example heart rate, finger temperature, electrocardiogram (ECG), electromyogram (EMG) and skin conductance (SC), followed by a careful analysis of the acquired signals. The accuracy is totally dependent on the experience of an expert, and the number of such experts is very limited. Heart rate is considered an important parameter in determining stress. It reflects the status of the autonomic nervous system (ANS) and is thus very effective in monitoring any imbalance in a patient's stress level. Therefore, a computer-aided system is useful to determine stress level based on various features that can be extracted from a patient's heart rate signals. Stress diagnosis using biomedical signals is difficult, and since the biomedical signals are too complex to generate any rules, an experienced person or expert is needed to determine stress levels. Also, it is not feasible to use all the features that are available or possible to extract from the signal, so relevant features capable of diagnosing stress should be chosen from the extracted features. Moreover, the ECG signal is frequently contaminated by outliers produced by loose contact of the electrodes due to sneezing, itching, etc., which hampers the values of the features. A Case-Based Reasoning (CBR) system is helpful when it is hard to formulate rules and domain knowledge is weak. A CBR system is developed to evaluate how closely it can diagnose stress levels compared to an expert. A study is done to find the most frequently used features in order to reduce the number of features used in the system and in the case library. A software prototype is developed that can collect the ECG signal from a patient through an ECG sensor and calculate the Inter-Beat Interval (IBI) signal and features from it. Instead of manual visual inspection, a new way to remove outliers from the IBI signal is also proposed and implemented here. The case base has been initiated with 22 reference cases classified by an expert. A performance analysis has been done, and the results show how closely the system can perform compared to the expert. On the basis of the evaluations, an accuracy of 86% is obtained compared to the expert. However, the proportion of correctly classified cases in the stressed group (sensitivity) was 57%, and it is quite important to increase this as it relates to patient safety. The reasons for the relatively low sensitivity and possible ways to improve it are also investigated and explained.
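A minimal sketch of the IBI step described above, assuming R-peaks can be found with a generic peak detector; the 30% median rule is an illustrative stand-in, not the thesis's outlier-removal method.

```python
import numpy as np
from scipy.signal import find_peaks

def ibi_from_ecg(ecg, fs, tol=0.3):
    """Inter-beat intervals (seconds) with a crude outlier rule.

    R-peaks are found with a generic amplitude/distance heuristic; IBIs
    deviating more than `tol` (30 %) from the median are discarded, a simple
    stand-in for the outlier-removal step rather than the thesis's actual rule.
    """
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.percentile(ecg, 95))
    ibi = np.diff(peaks) / fs
    med = np.median(ibi)
    return ibi[np.abs(ibi - med) <= tol * med]
```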
APA, Harvard, Vancouver, ISO, and other citation styles
9

Koc, Bengi. „Detection And Classification Of Qrs Complexes From The Ecg Recordings“. Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610328/index.pdf.

Full text of the source
Abstract:
Electrocardiography (ECG) is the most important noninvasive tool used for diagnosing heart diseases. An ECG interpretation program can help the physician state the diagnosis correctly and take corrective action. Detection of the QRS complexes from the ECG signal is usually the first step for an interpretation tool. The main goal of this thesis was to develop robust and high performance QRS detection algorithms and, using the results of the QRS detection step, to classify these beats according to their different pathologies. In order to evaluate their performance, these algorithms were tested and compared on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) database, which was developed for research in cardiac electrophysiology. In this thesis, four promising QRS detection methods were taken from the literature and implemented: a derivative based method (Method I), a digital filter based method (Method II), Tompkins' method, which utilizes the morphological features of the ECG signal (Method III), and a neural network based QRS detection method (Method IV). Overall sensitivity and positive predictivity values above 99% are achieved with each method, which are compatible with the results reported in the literature. Method III has the best overall performance among them, with a sensitivity of 99.93% and a positive predictivity of 100.00%. Based on the detected QRS complexes, some features were extracted and classification of some beat types was performed. In order to classify the detected beats, three methods were taken from the literature and implemented in this thesis: a Kth nearest neighbor rule based method (Method I), a neural network based method (Method II) and a rule based method (Method III). Overall results of Method I and Method II have sensitivity values above 92.96%. These findings are also compatible with those reported in the related literature. The classification made by the rule based approach, Method III, did not coincide well with the annotations provided in the MIT-BIH database. The best results were achieved by Method II, with an overall sensitivity value of 95.24%.
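For reference, detector figures like the sensitivity and positive predictivity quoted above are typically computed by matching detected beats to annotated beats within a small tolerance window; the sketch below is a generic version of that bookkeeping, not the evaluation code used in the thesis.

```python
import numpy as np

def se_ppv(detected, annotated, fs, tol_ms=150):
    """Sensitivity and positive predictivity of QRS detections.

    `detected` and `annotated` are sample indices; a detection falling within
    `tol_ms` of an unmatched annotated beat counts as a true positive.
    """
    tol = int(tol_ms * fs / 1000)
    annotated = np.asarray(sorted(annotated))
    used = np.zeros(len(annotated), dtype=bool)
    tp = 0
    for d in detected:
        i = int(np.argmin(np.abs(annotated - d)))
        if not used[i] and abs(int(annotated[i]) - int(d)) <= tol:
            used[i] = True
            tp += 1
    fn = len(annotated) - tp
    fp = len(detected) - tp
    se = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return se, ppv
```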
APA, Harvard, Vancouver, ISO, and other citation styles
10

Noorzadeh, Saman. „Extraction de l'ECG du foetus et de ses caractéristiques grâce à la multi-modalité“. Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT135/document.

Full text of the source
Abstract:
Fetal health must be carefully monitored during pregnancy to detect fetal cardiac diseases early and provide appropriate treatment. Technological developments allow monitoring during pregnancy using the non-invasive fetal electrocardiogram (ECG). The noninvasive fetal ECG is a method not only for detecting the fetal heart rate, but also for analyzing the morphology of the fetal ECG, which is currently limited to the analysis of the invasive ECG recorded during delivery. However, the noninvasive fetal ECG recorded from the mother's abdomen is contaminated with several noise sources, among which the maternal ECG is the most prominent. In the present study, the problem of noninvasive fetal ECG extraction is tackled using multi-modality. Besides the ECG signal, this approach benefits from the phonocardiogram (PCG) signal as another modality, which can provide complementary information about the fetal ECG. A general method for quasi-periodic signal analysis and modeling is first described, and its application to ECG denoising and fetal ECG extraction is explained. Considering the difficulties caused by the synchronization of the two modalities, event detection in quasi-periodic signals is also studied, which can be specialized to the detection of R-peaks in the ECG signal. The method considers both the clinical and the signal processing aspects of the application to ECG and PCG signals. These signals are introduced and their characteristics are explained. Then, using the PCG signal as the reference, Gaussian process modeling is employed to provide flexible models and nonlinear estimations. The method also tries to facilitate the practical implementation of the device by using as few channels as possible and by using only a 1-bit reference signal. The method is tested on synthetic data and also on real data that was recorded to provide a synchronous multi-modal data set. Since a standard protocol for the acquisition of these modalities has not yet received much consideration, the factors which influence the signals in the recording procedure are introduced and their difficulties and effects are investigated. The results show that the multi-modal approach is efficient in the detection of R-peaks, and thus in the extraction of the fetal heart rate, and it also provides results on the morphology of the fetal ECG.
APA, Harvard, Vancouver, ISO, and other citation styles
11

Hapuarachchi, Pasan. „Feature selection and artifact removal in sleep stage classification“. Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2879.

Full text of the source
Abstract:
The use of Electroencephalograms (EEG) is essential to the analysis of sleep disorders in patients. With the use of electroencephalograms, electro-oculograms (EOG), and electromyograms (EMG), doctors and EEG technicians can draw conclusions about the sleep patterns of patients. In particular, the classification of the sleep data into various stages, such as NREM I-IV, REM, and Awake, is extremely important. The EEG signal itself is highly sensitive to physiological and non-physiological artifacts. Trained human experts can accommodate these artifacts while they are analyzing the EEG signal.

However, if some of these artifacts are removed prior to analysis, their job will become easier. Furthermore, one of the biggest motivations of our team's research is the construction of a portable device that can analyze the sleep data as they are being collected. For this task, the sleep data must be analyzed completely automatically in order to make the classifications.

The research presented in this thesis concerns itself with the denoising and the feature selection aspects of the team's goals. Since humans are able to process artifacts and ignore them prior to classification, an automated system should have the same capabilities, or close to them. As such, the denoising step is performed to condition the data prior to any other stages of the sleep stage classification. As mentioned before, the denoising step, by itself, is useful to human EEG technicians as well.

The denoising step in this research mainly looks at EOG artifacts and artifacts isolated to a single EEG channel, such as electrode pop artifacts. The first two algorithms use Wavelets exclusively (BWDA and WDA), while the third algorithm is a mixture of Wavelets and Independent Component Analysis (IDA). With the BWDA algorithm, determining consistent thresholds proved to be a difficult task. With the WDA algorithm, the performance was better, since the selection of the thresholds was more straightforward and since there was more control over defining the duration of the artifacts. The IDA algorithm performed worse than the WDA algorithm. This could have been due to the small number of measurement channels or the automated sub-classifier used to select the denoised EEG signal from the set of ICA demixed signals.

The feature selection stage is extremely important as it selects the most pertinent features to make a particular classification. Without such a step, the classifier will have to process useless data, which might result in a poorer classification. Furthermore, unnecessary features will take up valuable computer cycles as well. In a portable device, due to battery consumption, wasting computer cycles is not an option. The research presented in this thesis shows the importance of a systematic feature selection step in EEG classification. The feature selection step produced excellent results with a maximum use of just 5 features. During automated classification, this is extremely important as the automated classifier will only have to calculate 5 features for each given epoch.
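As a generic point of comparison for the wavelet-based denoising discussed above (the BWDA/WDA algorithms themselves are not reproduced; this is plain soft-threshold wavelet denoising with an assumed universal threshold):

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=5):
    """Soft-threshold wavelet denoising of a 1-D signal.

    Uses the common 'universal' threshold estimated from the finest detail
    coefficients; purely illustrative, not the thesis's BWDA/WDA procedure.
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```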
APA, Harvard, Vancouver, ISO, and other citation styles
12

Andreotti, Lage Fernando. „Extraction and Detection of Fetal Electrocardiograms from Abdominal Recordings“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-222460.

Full text of the source
Abstract:
The non-invasive fetal ECG (NIFECG), derived from abdominal surface electrodes, offers novel diagnostic possibilities for prenatal medicine. Despite its straightforward applicability, NIFECG signals are usually corrupted by many interfering sources, most significantly by the maternal ECG (MECG), whose amplitude usually exceeds that of the fetal ECG (FECG) by multiple times. The presence of additional noise sources (e.g. muscular/uterine noise, electrode motion, etc.) further affects the signal-to-noise ratio (SNR) of the FECG. These interfering sources, which typically show a strong non-stationary behavior, render FECG extraction and fetal QRS (FQRS) detection demanding signal processing tasks. In this thesis, several of the challenges regarding NIFECG signal analysis were addressed. In order to improve NIFECG extraction, the dynamic model of a Kalman filter approach was extended, thus providing a more adequate representation of the mixture of FECG, MECG, and noise. In addition, aiming at FECG signal quality assessment, novel metrics were proposed and evaluated. Further, these quality metrics were applied to improving FQRS detection and fetal heart rate estimation, based on an innovative evolutionary algorithm and Kalman filtering signal fusion, respectively. The elaborated methods were characterized in depth using both simulated and clinical data produced throughout this thesis. To stress-test extraction algorithms under ideal circumstances, a comprehensive benchmark protocol was created and contributed to an extensively improved NIFECG simulation toolbox. The developed toolbox and a large simulated dataset were released under an open-source license, allowing researchers to compare results in a reproducible manner. Furthermore, to validate the developed approaches under more realistic and challenging situations, a clinical trial was performed in collaboration with the University Hospital of Leipzig. Aside from serving as a test set for the developed algorithms, the clinical trial enabled exploratory research, providing a better understanding of the pathophysiological variables and measurement setup configurations that lead to changes in the abdominal signal's SNR. With such broad scope, this dissertation addresses many of the current aspects of NIFECG analysis and provides suggestions for establishing the NIFECG in clinical settings.
APA, Harvard, Vancouver, ISO, and other citation styles
13

Behar, Joachim. „Extraction of clinical information from the non-invasive fetal electrocardiogram“. Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:94b866ff-dd57-4446-85ae-79dd6d983cac.

Full text of the source
Abstract:
Estimation of the fetal heart rate (FHR) has gained interest in the last century; low heart rate variability has been studied to identify intrauterine growth restricted fetuses (prepartum), and abnormal FHR patterns have been associated with fetal distress during delivery (intrapartum). Several monitoring techniques have been proposed for FHR estimation, including auscultation and Doppler ultrasound. This thesis focuses on the extraction of the non-invasive fetal electrocardiogram (NI-FECG) recorded from a limited set of abdominal sensors. The main challenge with NI-FECG extraction techniques is the low signal-to-noise ratio of the FECG signal on the abdominal mixture signal which consists of a dominant maternal ECG component, FECG and noise. However the NI-FECG offers many advantages over the alternative fetal monitoring techniques, the most important one being the opportunity to enable morphological analysis of the FECG which is vital for determining whether an observed FHR event is normal or pathological. In order to advance the field of NI-FECG signal processing, the development of standardised public databases and benchmarking of a number of published and novel algorithms was necessary. Databases were created depending on the application: FHR estimation with or without maternal chest lead reference or directed toward FECG morphology analysis. Moreover, a FECG simulator was developed in order to account for pathological cases or rare events which are often under-represented (or completely missing) in the existing databases. This simulator also serves as a tool for studying NI-FECG signal processing algorithms aimed at morphological analysis (which require underlying ground truth annotations). An accurate technique for the automatic estimation of the signal quality level was also developed, optimised and thoroughly tested on pathological cases. Such a technique is mandatory for any clinical applications of FECG analysis as an external confidence index of both the input signals and the analysis outputs. Finally, a Bayesian filtering approach was implemented in order to address the NI-FECG morphology analysis problem. It was shown, for the first time, that the NI-FECG can allow accurate estimation of the fetal QT interval, which opens the way for new clinical studies on the development of the fetus during the pregnancy.
APA, Harvard, Vancouver, ISO, and other citation styles
14

Akhbari, Mahsa. „Analyse des intervalles ECG inter- et intra-battement sur des modèles d'espace d'état et de Markov cachés“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT026.

Full text of the source
Abstract:
Cardiovascular diseases are one of the major causes of mortality in humans. One way to diagnose heart diseases and abnormalities is the processing of cardiac signals such as the ECG. In many of these processes, inter-beat and intra-beat features of the ECG signal must be extracted. These features include the peak, onset and offset of the ECG waves, and meaningful intervals and segments that can be defined for the ECG signal. ECG fiducial point (FP) extraction refers to identifying the location of the peak as well as the onset and offset of the P-wave, QRS complex and T-wave, which convey clinically useful information. However, the precise segmentation of each ECG beat is a difficult task, even for experienced cardiologists. In this thesis, we use a Bayesian framework based on the McSharry ECG dynamical model for ECG FP extraction. Since this framework is based on the morphology of ECG waves, it can be useful for ECG segmentation and interval analysis. In order to consider the time-sequential property of the ECG signal, we also use the Markovian approach and hidden Markov models (HMM). In brief, in this thesis we use a dynamic model (Kalman filter), a sequential model (HMM) and their combination (switching Kalman filter (SKF)). We propose three Kalman-based methods, an HMM-based method and a SKF-based method. We use the proposed methods for ECG FP extraction and ECG interval analysis. The Kalman-based methods are also used for ECG denoising, T-wave alternans (TWA) detection and fetal ECG R-peak detection. To evaluate the performance of the proposed methods for ECG FP extraction, we use the "Physionet QT database" and a "Swine ECG database" that include ECG signal annotations by physicians. For ECG denoising, we use the "MIT-BIH Normal Sinus Rhythm", "MIT-BIH Arrhythmia" and "MIT-BIH noise stress test" databases. The "TWA Challenge 2008 database" is used for TWA detection and, finally, the "Physionet Computing in Cardiology Challenge 2013 database" is used for R-peak detection of the fetal ECG. In ECG FP extraction, the performance of the proposed methods is evaluated in terms of the mean, standard deviation and root mean square of the error. We also calculate the sensitivity of the methods. For ECG denoising, we compare the methods in terms of the SNR improvement obtained.
APA, Harvard, Vancouver, ISO, and other citation styles
15

Waloszek, Vojtěch. „Identifikace a verifikace osob pomocí záznamu EKG“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442492.

Full text of the source
Abstract:
In recent years, the use of the ECG for verification and identification in biometrics has been investigated; this topic is studied in this thesis. Recordings from the PhysioNet ECG-ID database and our own ECG recordings made with an Apple Watch 4 are used for training and testing the method. Many of the existing methods have demonstrated the possibility of using the ECG for biometrics; however, they used clinical ECG devices. This thesis investigates the use of recordings from wearable devices, specifically a smart watch. 16 features are extracted from the ECG recordings and a random forest classifier is used for verification and identification. The features include time intervals between fiducial points, voltage differences between fiducial points, and the variability of PR intervals within a recording. The average performance of the verification model over 14 people is a TRR of 96.19 % and a TAR of 84.25 %.
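A minimal sketch of the classification stage described above, assuming a ready-made feature matrix with one row per sample and the 16 interval/amplitude features as columns; the scikit-learn classifier and the synthetic data are assumed stand-ins for the thesis's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))        # 16 fiducial-point features per sample
y = rng.integers(0, 14, size=300)     # subject identities (14 people)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))
```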
APA, Harvard, Vancouver, ISO, and other citation styles
16

Ředina, Richard. „Model fibrilace síní“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442495.

Full text of the source
Abstract:
The aim of this master thesis is to create a 3D electroanatomical model of the heart atria capable of reproducing atrial fibrillation. To control the model, the differential equations of the FitzHugh-Nagumo model were chosen. These equations describe the change of voltage across the cell membrane and have established parameters whose modification changes the behavior of the model. The simulations were performed in the COMSOL Multiphysics environment. In the first step, the simulations were performed on 2D models; simulations of a healthy heart, atrial flutter and atrial fibrillation were created. The acquired knowledge served as a basis for the creation of a 3D model, on which atrial fibrillation was simulated on the basis of ectopic activity and a reentry mechanism. Convincing results were obtained in accordance with the literature used. The advantages of computational modeling are its availability, zero ethical burden and the ability to simulate even rarer arrhythmias. The disadvantage of the procedure is the need to compromise between the accuracy and the computational complexity of the simulations.
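For reference, the FitzHugh-Nagumo equations mentioned above take the form dv/dt = v - v^3/3 - w + I and dw/dt = eps(v + a - b w); the sketch below integrates a single cell with commonly quoted parameter values, which are assumptions here rather than the thesis's settings.

```python
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, I=0.5, a=0.7, b=0.8, eps=0.08):
    """Single-cell FitzHugh-Nagumo dynamics: v is the membrane voltage,
    w the recovery variable, I an external stimulus current."""
    v, w = y
    dv = v - v**3 / 3 - w + I
    dw = eps * (v + a - b * w)
    return [dv, dw]

sol = solve_ivp(fitzhugh_nagumo, (0.0, 200.0), [-1.0, 1.0], max_step=0.1)
# sol.y[0] contains an oscillating, action-potential-like voltage trace; in a
# 2D/3D tissue model the same local dynamics are coupled through diffusion.
```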
APA, Harvard, Vancouver, ISO, and other citation styles
17

Chan, Weng Chi. „ECG parameter extractor of intelligent home healthcare embedded system“. Thesis, University of Macau, 2005. http://umaclib3.umac.mo/record=b1445845.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
18

Aydin, Serap. „Extraction Of Auditory Evoked Potentials From Ongoing Eeg“. Phd thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606641/index.pdf.

Full text of the source
Abstract:
In estimating auditory Evoked Potentials (EPs) from ongoing EEG, the number of sweeps should be reduced to decrease the experimental time and to increase the reliability of diagnosis. The first goal of this study is to demonstrate the use of basic estimation techniques in extracting auditory EPs (AEPs) from a small number of sweeps relative to ensemble averaging (EA). For this purpose, three groups of basic estimation techniques are compared to the traditional EA with respect to the signal-to-noise ratio (SNR) improvements in extracting the template AEP. Group A includes the combinations of the Subspace Method (SM) with the Wiener Filtering (WF) approaches (the conventional WF and coherence weighted WF (CWWF)). Group B consists of standard adaptive algorithms (Least Mean Square (LMS), Recursive Least Square (RLS), and one-step Kalman filtering (KF)). The regularization techniques (the Standard Tikhonov Regularization (STR) and the Subspace Regularization (SR) methods) form Group C. All methods are tested in simulations and pseudo-simulations, which are performed with white noise and EEG measurements, respectively. The same methods are also tested with experimental AEPs. Comparisons based on the output signal-to-noise ratio (SNR) show that: 1) the KF and STR methods are the best methods among the algorithms tested in this study, 2) the SM can remove a large amount of the background EEG noise from the raw data, 3) the LMS and WF algorithms show poor performance compared to EA, and the SM should be used as a pre-filter to increase their performance, 4) the CWWF works better than the WF when it is combined with the SM, 5) the STR method is better than the SR method. It is observed that most of the basic estimation techniques show definitely better performance compared to EA in extracting the EPs. The KF or the STR effectively reduce the experimental time (to one-fourth of that required by EA). The SM is a useful pre-filter to significantly reduce the noise on the raw data. The KF and STR are shown to be computationally inexpensive tools to extract the template AEPs and should be used instead of EA. They provide a clear template AEP for various analysis methods. To reduce the noise level on single sweeps, the SM can be used as a pre-filter before various single sweep analysis methods. The second goal of this study is to present a new approach to extract single sweep AEPs without using a template signal. The SM and a modified scale-space filter (MSSF) are applied consecutively. The SM is applied to the raw data to increase the SNR. The less-noisy sweeps are then individually filtered with the MSSF. This new approach is assessed in both pseudo-simulations and experimental studies. The MSSF is also applied to actual auditory brainstem response (ABR) data to obtain a clear ABR from a relatively small number of sweeps. The wavelet transform coefficients (WTCs) corresponding to the signal and noise become distinguishable after the SM. The MSSF is an effective filter in selecting the WTCs of the noise. The estimated single sweep EPs highly resemble the grand average EP although a smaller number of sweeps is evaluated. Small amplitude variations are observed among the estimations. The MSSF applied to the EA of 50 sweeps yields an ABR that best fits the grand average of 250 sweeps. We conclude that the combination of SM and MSSF is an efficient tool to obtain clear single sweep AEPs. The MSSF reduces the recording time to one-fifth of that required by EA in template ABR estimation. The proposed approach does not use a template signal (which is generally obtained using the average of a small number of sweeps). It provides unprecedented results that support the basic assumptions in the additive signal model.
APA, Harvard, Vancouver, ISO, and other citation styles
19

Palmadottir, Julia. „Extracting ECA rules from UML“. Thesis, University of Skövde, Department of Computer Science, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-541.

Full text of the source
Abstract:

Active technology in database management systems (DBMS) enables the movement of behaviour dependent on the system's state from the application software to a rule base in the DBMS. With active technology in database systems, the problem of how to design active behaviour has become an important issue. Modelling processes do not provide support for the design of active rules, which can lead to conflicts between the event-condition-action (ECA) rules representing the active behaviour and the application systems using the active DBMS. The unified modelling language (UML) is a widely used notation language and is the main subject of this project. Its features are investigated to establish to what extent UML modelling diagrams provide information that can be used to formulate ECA rules.

To achieve this, two methods were developed. One of the methods was applied to use-case UML modelling diagrams. The use-case models were developed to reflect a real-life organisation. The results from applying the method to the use-case models show that there are features in UML that can be expressed with ECA rules.

APA, Harvard, Vancouver, ISO, and other citation styles
20

Galvan, D'Alessandro Leandro. „Eco-procédés pour la récupération sélective d’antioxydants à partir d’Aronia melanocarpa et ses co-produits“. Thesis, Lille 1, 2013. http://www.theses.fr/2013LIL10134/document.

Full text of the source
Abstract:
Black chokeberry (Aronia melanocarpa) fruits are one of the richest sources of phenolic antioxidant compounds, particularly anthocyanins. The influence of the main operating parameters on the kinetics and yields of ultrasound-assisted extraction of anthocyanins and total polyphenols from aronia and its co-products was studied. The extraction kinetics were described by equations based on Peleg's model. The development of a global model (including time, temperature, solvent composition and ultrasound power) was proposed as a tool for optimizing the conditions of antioxidant extraction. After preliminary studies on extract enrichment using different solid supports, the macroporous resin Amberlite XAD7HP was chosen as the most suitable for the adsorption of antioxidant molecules from aronia. Finally, a new ecological integrated extraction-adsorption process was proposed for the selective recovery of antioxidant molecules. This process made it possible to extract antioxidant substances and simultaneously purify the extracts, increasing the anthocyanin content of the extracts by more than 15 times. The feasibility of this eco-process at semi-pilot scale was also demonstrated.
APA, Harvard, Vancouver, ISO, and other citation styles
21

Kaufman, Irene Jennifer. „The Recovery of Protein from Egg Yolk Protein Extraction Granule Byproduct“. DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1738.

Full text of the source
Abstract:
In addition to providing an excellent source of nutrients, eggs are used in the food, cosmetic, and biotechnology industries for their rheological and bioactive properties. Much of the potential for added value is in individual components of the egg, rather than the whole egg. At low speed centrifugation, yolk separates into two distinct fractions: granules and plasma. It is becoming increasingly popular in the industry to remove the plasma fraction of the egg yolk to use for its livetins, particularly immunoglobulin Y, leaving behind a granule by-product ("yellow cake"). Previous research has shown potential added value from the granule fraction, especially from its phosvitin and phospholipids. Granules are protein aggregates with complexes of phosvitin and high density lipoproteins linked by phosphocalcic bridges. In their native form, the proteins are mostly insoluble; however, previous studies have shown the links can be broken by alterations in pH, ionic strength, and mechanical treatments. This thesis project seeks to find potential uses for the egg yolk by-product after the removal of the livetin fraction by means of further fractionation with mechanical treatment (filtration). Two variables were tested to extract more proteins from the yellow cake. Salt was added to a 10% solids solution of yellow cake in water before filtration at four different NaCl levels: 0%, 0.05%, 1%, and 2.5%. Additionally, pH was tested at four different levels: 4.6, 4.8, 5.0, 5.2. The samples were also tested for antibacterial properties against Escherichia coli with a minimum inhibitory concentration (MIC) assay. Analysis with BCA showed salt concentration had a significant effect on the yield of protein. The highest concentration of salt tested, 2.5%, had the highest protein yield. Additionally, SDS-PAGE showed 2.5% salt had the most unique protein bands. This could be due to the disruption of the phosphocalcic links between the phosvitin and HDL by NaCl, allowing the protein to solubilize. pH did not have a significant effect on the yield or types of proteins in the range tested in this experiment. There is no conclusive evidence of antibacterial properties against E. coli from the protein extract. The MIC assay showed growth in all wells with the protein extract; however, there was a visible decrease in turbidity with higher concentrations of the protein extract. This could mean that the protein extract does have some antibacterial properties, but it needs testing at higher concentrations or with isolated proteins/peptides. The SDS-PAGE revealed bands showing phosvitin present, which has known antibacterial properties. Overall, improvements to the methods for further protein extraction from egg yolk by-products will help lead the industry to finding novel uses and product applications.
APA, Harvard, Vancouver, ISO, and other citation styles
22

Wahl, Casper. „Training Autoencoders for feature extraction of EEG signals for motor imagery“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-53392.

Full text of the source
Abstract:
Electroencephalography (EEG) is a common technique used to read brain activity from an individual and can be used for a wide range of applications; one example is the rehabilitation process of stroke victims. Loss of motor function is a common side effect of strokes, and the EEG signals can show whether sufficient activation of the part of the brain related to the motor function the patient is training has been achieved. Reading and understanding such data manually requires extensive training. This thesis proposes using machine learning to automate the process of determining whether sufficient activation has been achieved. The process consists of a Long Short-Term Memory (LSTM) Autoencoder that is trained to extract features of the EEG data, which are then used for classification with various machine learning classification methods. Two research questions are addressed: "How to extract features from EEG signals using Autoencoders?" and "Which supervised machine learning algorithm identifies as the best classification based on the features generated by the Autoencoder?" The results show that the accuracy varies greatly from individual to individual, and that the number of features created by the Autoencoder for the classification algorithms to work with has a large impact on accuracy. The choice of classification algorithm played a role in the result as well, with the Support Vector Machine (SVM) performing best, but it had less impact than the previously mentioned factors.
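A minimal sketch of an LSTM autoencoder of the kind described above, with Keras as an assumed framework and an assumed window shape; the encoder output is the feature vector that would be passed to a downstream classifier such as an SVM.

```python
import numpy as np
from tensorflow.keras import layers, models

timesteps, channels, n_features = 128, 8, 16        # assumed EEG window shape

inp = layers.Input(shape=(timesteps, channels))
code = layers.LSTM(n_features)(inp)                    # compressed feature vector
x = layers.RepeatVector(timesteps)(code)
out = layers.LSTM(channels, return_sequences=True)(x)  # reconstruct the window

autoencoder = models.Model(inp, out)
encoder = models.Model(inp, code)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.randn(64, timesteps, channels).astype("float32")   # toy EEG windows
autoencoder.fit(X, X, epochs=2, batch_size=16, verbose=0)         # learn to reconstruct
features = encoder.predict(X, verbose=0)                          # (64, n_features) for an SVM
```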
APA, Harvard, Vancouver, ISO, and other citation styles
23

Nahman, Michal Rachel. „Israeli extraction : an ethnographic study of egg donation and national imaginaries“. Thesis, Lancaster University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431739.

Full text of the source
Abstract:
This thesis derives from ethnographic research undertaken in sites of Israeli IVF and egg donation between January and September 2002. The thesis begins with an examination of some features of the general context of Israeli ova donation through an analysis of a set of stories about the theft of ova and an egg shortage crisis, which emerged in the year prior to my fieldwork in Israel (2001). It then moves to an examination of IVF and egg donation at a state-run clinic in Jerusalem. From there I trace some new practices of transnational ova donation in three sites and sets of practices: an IVF clinic in Tel Aviv; donor trait selection at this Tel Aviv clinic; and an Israeli egg donation and extraction clinic in Romania. I trace some key features of these sites and practices. Through this analysis, I explore some of the ways in which discursive practices of Israeli extraction, exchange, and implantation are important sites in the making of gender, religious, race and kinship relations, and are thereby implicated in the making of the Israeli nation. The study frames egg donation practices as 'national imaginaries', which are resonant with, and implicated in, the politics of (re)producing the state of Israel as Jewish and Euro-American. One element of this identified here has been the shift towards privatisation of health care. I document some of the features and consequences of this privatisation in the sphere of Israeli IVF and transnational ova trafficking. Conducted during a period in which political and military negotiations of Israeli borders were intense, this research examines another, but related, site of border struggles: medically assisted reproduction.
APA, Harvard, Vancouver, ISO, and other citation styles
24

Dejoye, Céline. „Eco-extraction et analyse de lipides de micro-algues pour la production d'algo-carburant“. Thesis, Avignon, 2013. http://www.theses.fr/2013AVIG0251/document.

Full text of the source
Abstract:
The biodiversity of microalgae represents real potential for research and industry. Compared to terrestrial plants, they are a promising route for biofuels. Still, a number of technological hurdles remain to be overcome, such as the extraction of the algal oil. The objective of this thesis was therefore the innovation and development of new, so-called "green" methodologies for the extraction of lipids from microalgae for biofuel applications. The first part of this manuscript proposes an alternative to the use of petrochemical solvents (n-hexane) for the extraction of lipids from dry biomass, using terpene solvents of plant origin. These encouraging results led, in a second part, to the development of a technique for the extraction of lipids from wet microalgae (80% moisture) with terpene solvents: the SDEP (Simultaneous Distillation and Extraction Process). In the third and final part of this work, microwaves were used to intensify and accelerate the extraction process. Microwaves allow fast heating of the water surrounding and contained in the cells, leading to a rapid release of the cell contents into the external medium. This work consisted of the intensification and optimization of the extraction technique (SDEP) intended for the extraction of lipids from wet microalgae: microwave-assisted SDEP. The apparatus allows rapid, non-destructive extractions that can be generalized to the different species of microalgae discussed in this third part.
APA, Harvard, Vancouver, ISO, and other citation styles
25

Grippe, Edward, und Mattias Lönnerberg. „Detecting Epileptic Seizures : Optimal Feature Extraction from EEG for Support Vector Machines“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166702.

Full text of the source
Abstract:
Epilepsy is a chronic neurological brain disorder causing the affected to have seizures. Looking at EEG recordings, an expert is able to identify epileptic activity and diagnose patients with epilepsy. This process is time consuming and calls for automatization. The automation process is done through feature extraction and classification. The feature extraction finds features of the signal, and the classification uses the features to classify the signal as epileptic or not. The accuracy of the classification varies depending on both which features are chosen to represent each signal and which classification method is used. One popular method for classification of data is the SVM. This report tests and analyses six feature extraction methods with a linear SVM to see which method resulted in the best classification performance when classifying epileptic EEG data. The results showed that two different methods resulted in classification accuracies significantly higher than the rest. The wavelet-based method for feature extraction got a classification accuracy of 98.83% and the Hjorth features method got a classification accuracy of 97.42%. However, the results of these two methods were too similar to be considered significantly different, and therefore no conclusion could be drawn as to which was the best.
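A rough sketch of one wavelet-feature-plus-linear-SVM pipeline of the kind compared above, using generic DWT band statistics and synthetic data; it does not reproduce the exact features, dataset or accuracies of the report.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_features(segment, wavelet="db4", level=4):
    """Mean absolute value and standard deviation of each DWT sub-band."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    return np.array([stat for c in coeffs
                     for stat in (np.mean(np.abs(c)), np.std(c))])

rng = np.random.default_rng(1)
segments = rng.normal(size=(200, 1024))   # toy stand-ins for EEG segments
labels = rng.integers(0, 2, size=200)     # 1 = epileptic, 0 = normal (synthetic)

X = np.vstack([dwt_features(s) for s in segments])
scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```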
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Alamedine, Dima. „Selection of EHG parameter characteristics for the classification of uterine contractions“. Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2201/document.

Der volle Inhalt der Quelle
Annotation:
Un des marqueurs biophysique le plus prometteur pour la détection des accouchements prématurés (AP) est l'activité électrique de l'utérus, enregistrée sur l’abdomen des femmes enceintes, l’électrohystérogramme (EHG). Plusieurs outils de traitement du signal (linéaires, non linéaires) ont déjà été utilisés pour l'analyse de l'excitabilité et de la propagation de l’EHG, afin de différencier les contractions de grossesse, qui sont inefficaces, des contractions efficaces d’accouchement, qui pourraient provoquer un AP. Dans ces études nombreuses, les paramètres sont calculés sur des bases de données de signaux différentes, obtenus avec des protocoles d'enregistrement différents. Il est donc difficile de comparer les résultats afin de choisir les «meilleurs» paramètres pour la détection de l’AP. En outre, ce grand nombre de paramètres augmente la complexité de calcul dans un but de diagnostic. Par conséquent, l'objectif principal de cette thèse est de tester, sur une population de femmes donnée, quels outils de traitement du signal EHG permettent une discrimination entre les deux types de contractions (grossesse/accouchement). Dans ce but plusieurs méthodes de sélection de paramètres sont testées afin de sélectionner les paramètres les plus discriminants. La première méthode, développée dans cette thèse, est basée sur la mesure de la distance entre les histogrammes des paramètres pour les différentes classes (grossesse et accouchement) en utilisant la méthode « Jeffrey divergence (JD)». Les autres sont des méthodes de fouille de données existantes issues de la littérature. Les EHG ont été enregistrés en utilisant un système multivoies posé sur l'abdomen de la femme enceinte, pour l'enregistrement simultané de 16 voies d'EHG. Une approche monovariée (caractérisation d’une seule voie) et bivariée (couplage entre deux voies) sont utilisées dans notre travail. Utiliser toutes les voies, analyse monovariée, ou toutes les combinaisons de voies, analyse bivariée, conduit à une grande dimension des paramètres. Par conséquent, un autre objectif de notre thèse est la sélection des voies, ou des combinaisons de voies, qui fournissent l'information la plus utile pour distinguer entre les contractions de grossesse et d’accouchement. Cette étape de sélection de voie est suivie par la sélection des paramètres, sur les voies ou les combinaisons de voies sélectionnées. De plus, nous avons développé cette approche en utilisant des signaux monopolaires et bipolaires.Les résultats de ce travail nous permettent de mettre en évidence, lors du traitement de l’EHG, les paramètres et les voies qui donnent la meilleure discrimination entre les contractions de grossesse et celles d’accouchement. Ces résultats pourront ensuite être utilisés pour la détection des menaces d’accouchement prématuré
One of the most promising biophysical markers of preterm labor is the electrical activity of the uterus, picked up on the woman’s abdomen: the electrohysterogram (EHG). Several processing tools for the EHG signal (linear and nonlinear) allow the analysis of both the excitability and the propagation of the uterine electrical activity, in order to differentiate pregnancy contractions, which are ineffective, from effective labor contractions that might cause preterm birth. In these multiple studies, however, the parameters are computed from different signal databases obtained with different recording protocols, so it is sometimes difficult to compare their results in order to choose the “best” parameter for preterm labor detection. Additionally, this large number of parameters increases the computational complexity for diagnostic purposes. The main objective of this thesis is therefore to select, among all the features of interest extracted in multiple studies, the most pertinent feature subsets in order to discriminate, in a given population, between pregnancy and labor contractions. For this purpose, several feature selection methods are tested. The first one, developed in this work, is based on the measurement of the Jeffrey divergence (JD) distance between the histograms of the parameters for the two classes, pregnancy and labor. The others are “Filter” and “Wrapper” data mining methods taken from the literature. In our work, monovariate analysis (in one given EHG channel) and bivariate analysis (propagation of the EHG, measured through the coupling between channels) are used. The EHG signals are recorded using a multichannel system positioned on the woman’s abdomen for the simultaneous recording of 16 EHG channels. Using all channels for the monovariate analysis, or all combinations of channels for the bivariate analysis, leads to a large number of parameters for each contraction. Therefore, another objective of our thesis is the selection of the best channels (monovariate analysis) or best channel combinations (bivariate analysis) that provide the most useful information to discriminate between the pregnancy and labor classes. This channel selection step is then followed by feature selection on the selected channels or channel combinations. Additionally, we tested all our work using monopolar and bipolar signals. The results of this thesis show which channels and features can be used, with the best chance of success, as inputs of a diagnosis system for discrimination between pregnancy and labor contractions when processing the EHG. This could be further used for preterm labor diagnosis.
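For illustration, the sketch below computes a Jeffrey divergence between the class-wise histograms of one EHG feature (pregnancy vs. labor). The bin count, the placeholder feature values and the exact JD variant are assumptions; the symmetrized Kullback-Leibler form is shown, and the thesis's precise formulation may differ.

```python
# Hedged sketch: Jeffrey divergence between two class histograms of one feature.
import numpy as np

def jeffrey_divergence(x_pregnancy, x_labor, bins=30):
    lo = min(x_pregnancy.min(), x_labor.min())
    hi = max(x_pregnancy.max(), x_labor.max())
    p, _ = np.histogram(x_pregnancy, bins=bins, range=(lo, hi))
    q, _ = np.histogram(x_labor, bins=bins, range=(lo, hi))
    p = p / p.sum() + 1e-12          # normalize and avoid log(0)
    q = q / q.sum() + 1e-12
    return float(np.sum((p - q) * np.log(p / q)))   # KL(P||Q) + KL(Q||P)

# Placeholder feature values (e.g., one parameter computed per contraction)
rng = np.random.default_rng(0)
jd = jeffrey_divergence(rng.normal(0.8, 0.10, 200), rng.normal(1.0, 0.15, 200))
print("JD =", jd)   # a larger JD suggests a more discriminant feature
```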
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Gallis, Rodrigo Bezerra de Araújo. „Extração semi-automática da malha viária em imagens aéreas digitais de áreas rurais utilizando otimização por programação dinâmica no espaço objeto /“. Presidente Prudente : [s.n.], 2006. http://hdl.handle.net/11449/100262.

Der volle Inhalt der Quelle
Annotation:
Resumo: Este trabalho propõe uma nova metodologia para extração de rodovias utilizando imagens aéreas digitais. A inovação baseia-se no algoritmo de Programação dinâmica (PD), que nesta metodologia realiza o processo de otimização no espaço objeto, e não no espaço imagem como as metodologias tradicionais de extração de rodovias por PD. A feição rodovia é extraída no espaço objeto, o qual implica um rigoroso modelo matemático, que é necessário para estabelecer os pontos entre o espaço imagem e objeto. Necessita-se que o operador forneça alguns pontos sementes no espaço imagem para descrever grosseiramente a rodovia, e estes pontos devem ser transformados para o espaço objeto para inicialização do processo de otimização por PD. Esta metodologia pode operar em diferentes modos (modo mono e estéreo), e com diversos tipos de imagens, incluindo imagens multisensores. Este trabalho apresenta detalhes da metodologia mono e estéreo e também os experimentos realizados e os resultados obtidos.
Abstract: This work proposes a novel road extraction methodology for digital images. The innovation is the use of the dynamic programming (DP) algorithm to carry out the optimisation process in the object space, instead of doing it in the image space as in the traditional DP methodologies. Road features are traced in the object space, which implies that a rigorous mathematical model has to be established between image-space and object-space points. The operator is required to measure a few seed points in the image space to describe the roads sparsely and coarsely; these points must be transformed into the object space to make the initialisation of the DP optimisation process possible. Although the methodology can operate in different modes (mono-plotting or stereo-plotting) and with several image types, including multisensor images, this work presents details of our single- and stereo-image methodology, along with the experimental results.
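As a toy illustration of the dynamic-programming idea only, the sketch below chooses, at each station along a coarse seed polyline, the candidate offset that minimizes an accumulated cost with a smoothness penalty. The cost profiles are placeholders, and the thesis's object-space formulation through a photogrammetric model is not reproduced here.

```python
# Viterbi-style DP sketch over candidate road offsets (image-space toy example).
import numpy as np

def dp_refine(cost_profiles, smooth_weight=1.0):
    """cost_profiles: (n_stations, n_candidates) cost of each candidate offset."""
    n_st, n_cand = cost_profiles.shape
    acc = cost_profiles[0].copy()
    back = np.zeros((n_st, n_cand), dtype=int)
    offsets = np.arange(n_cand)
    for s in range(1, n_st):
        # transition cost penalizes jumps between consecutive offsets
        trans = smooth_weight * np.abs(offsets[None, :] - offsets[:, None])
        total = acc[:, None] + trans                  # (previous candidate, candidate)
        back[s] = np.argmin(total, axis=0)
        acc = cost_profiles[s] + np.min(total, axis=0)
    path = [int(np.argmin(acc))]
    for s in range(n_st - 1, 0, -1):
        path.append(int(back[s, path[-1]]))
    return path[::-1]                                 # best offset per station

profiles = np.random.rand(20, 11)                     # placeholder cost profiles
print(dp_refine(profiles))
```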
Orientador: João Fernando Custódio da Silva
Coorientador: Aluir Porfírio Dal Poz
Banca: Júlio Kiyoshi Hasegawa
Banca: Messias Meneguette Júnior
Doutor
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Villalpando, Mayra Bittencourt. „Stimuli and feature extraction methods for EEG-based brain-machine interfaces: a systematic comparison“. Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-19032018-090128/.

Der volle Inhalt der Quelle
Annotation:
A brain-machine interface (BMI) is a system that allows the communication between the central nervous system (CNS) and an external device (Wolpaw et al. 2002). Applications of BMIs include the control of external prostheses, cursors and spellers, to name a few. The BMIs developed by various research groups differ in their characteristics (e.g. continuous or discrete, synchronous or asynchronous, degrees of freedom, others) and, in spite of several initiatives towards standardization and guidelines, the cross-comparison across studies remains a challenge (Brunner et al. 2015; Thompson et al. 2014). Here, we used 64-channel EEG equipment to acquire data from 19 healthy participants during three different tasks (SSVEP, P300 and hybrid) that allowed four choices to the user and required no previous neurofeedback training. We systematically compared the offline performance of the three tasks on the following parameters: a) accuracy, b) information transfer rate, c) illiteracy/inefficiency, and d) individual preferences. Additionally, we selected the best-performing channels per task and evaluated the accuracy as a function of the number of electrodes. Our results demonstrate that the SSVEP task outperforms the other tasks in accuracy, ITR and illiteracy/inefficiency, reaching an average ITR of 52.8 bits/min and a maximum ITR of 104.2 bits/min. Additionally, all participants achieved an accuracy level above 70% (illiteracy/inefficiency threshold) in both SSVEP and P300 tasks. Furthermore, the average accuracy of all tasks did not deteriorate if a reduced set with only the 8 best-performing electrodes was used. These results are relevant for the development of online BMIs, including aspects related to usability, user satisfaction and portability.
A interface cérebro-máquina (ICM) é um sistema que permite a comunicação entre o sistema nervoso central e um dispositivo externo (Wolpaw et al., 2002). Aplicações de ICMs incluem o controle de próteses externas, cursores e teclados virtuais, para citar alguns. As ICMs desenvolvidas por vários grupos de pesquisa diferem em suas características (por exemplo, contínua ou discreta, síncrona ou assíncrona, graus de liberdade, outras) e, apesar de várias iniciativas voltadas para diretrizes de padronização, a comparação entre os estudos continua desafiadora (Brunner et al. 2015, Thompson et al., 2014). Aqui, utilizamos um equipamento EEG de 64 canais para adquirir dados de 19 participantes saudáveis ao longo da execução de três diferentes tarefas (SSVEP, P300 e híbrida) que permitiram quatro escolhas ao usuário e não exigiram nenhum treinamento prévio. Comparamos sistematicamente o desempenho "off-line" das três tarefas nos seguintes parâmetros: a) acurácia, b) taxa de transferência de informação, c) analfabetismo / ineficiência e d) preferências individuais. Além disso, selecionamos os melhores canais por tarefa e avaliamos a acurácia em função do número de eletrodos. Nossos resultados demonstraram que a tarefa SSVEP superou as demais em acurácia, ITR e analfabetismo/ineficiência, atingindo um ITR médio de 52,8 bits/min e um ITR máximo de 104,2 bits/min. Adicionalmente, todos os participantes alcançaram um nível de acurácia acima de 70% (limiar de analfabetismo/ineficiência) nas tarefas SSVEP e P300. Além disso, a acurácia média de todas as tarefas não se deteriorou ao se utilizar um conjunto reduzido composto apenas pelos melhores 8 eletrodos. Estes resultados são relevantes para o desenvolvimento de ICMs "online", incluindo aspectos relacionados à usabilidade, satisfação do usuário e portabilidade.
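For reference, the information transfer rate reported above is usually computed with the Wolpaw formula; the minimal sketch below shows that calculation under example numbers. Whether the thesis used exactly this variant, and the specific selection time, are assumptions.

```python
# Hedged sketch: Wolpaw ITR formula commonly used to report BMI performance.
import math

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)

# Example: 4 choices, 95% offline accuracy, one selection every 3 s (illustrative values)
print(round(itr_bits_per_min(4, 0.95, 3.0), 1), "bits/min")
```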
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Pradal, Delphine. „Eco-procédés d'extraction de polyphénols antioxydants à partir d'un co-produit agro-alimentaire“. Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10190/document.

Der volle Inhalt der Quelle
Annotation:
Dans un contexte de développement durable, des méthodologies pour l'optimisation multicritère d'éco-procédés pour la récupération de polyphénols antioxydants à partir de co-produits ont été proposées, tenant compte des rendements en polyphénols totaux, de l'activité antioxydante des extraits obtenus à partir de marc de chicorée ainsi que de la consommation d'énergie de l'équipement durant le temps de traitement. L'étude d'un procédé d'extraction assistée par ultrasons a permis de mettre en évidence les gains en durée de traitement et en énergie grâce à l'application des ultrasons. Un modèle global a été développé comme outil pour l’optimisation multicritère (rendement en polyphénols, activité antioxydante et consommation d'énergie) des conditions d’extraction des polyphénols (intégrant le temps, la température, la composition du solvant et la puissance des ultrasons). Après une étude préliminaire d’enrichissement des extraits en utilisant différents adsorbants, la résine Amberlite XAD 16 a été choisie comme la plus appropriée pour l’adsorption des polyphénols extraits du marc de chicorée. Un procédé intégré permettant d'extraire et de purifier simultanément a permis un enrichissement en polyphénols de 2 à 4 fois des extraits de marc de chicorée. Un modèle permettant l'optimisation multicritère de ce procédé a été proposé en tenant compte de la quantité de polyphénols récupérés, de l'activité antioxydante des extraits et de la consommation d'énergie de l'équipement sur la base des conditions opératoires temps de traitement, débit de la phase aqueuse et ratio marc de chicorée-adsorbant
In the context of sustainable development, methodologies for the multi-criteria optimization of green processes for the recovery of antioxidant polyphenols from by-products have been proposed, taking into account the total polyphenol yield, the antioxidant activity of the extracts obtained from chicory grounds and the energy consumption of the equipment during the processing time. The study of ultrasound-assisted extraction highlighted the gains in processing time and energy obtained through the application of ultrasound. A comprehensive model was developed as a tool for the multi-criteria optimization (total polyphenol yield, antioxidant activity and energy consumption) of the extraction conditions (including time, temperature, solvent composition and ultrasound power). After preliminary studies on extract enrichment using different adsorbents, the Amberlite XAD 16 resin was chosen as the most suitable for the adsorption of the polyphenols extracted from chicory grounds. An integrated process for simultaneous extraction and purification allowed a 2- to 4-fold enrichment in polyphenols of the chicory ground extracts. A model for the multi-criteria optimization of this process has been proposed, taking into account the amount of recovered polyphenols, the antioxidant activity of the extracts and the energy consumption of the equipment as a function of the operating conditions: processing time, aqueous phase flow rate and chicory grounds-to-adsorbent ratio.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Angoy, Alice. „Eco-extraction par micro-ondes couplée à un champ centrifuge Development of microwave-assisted dynamic extraction by combination with centrifugal force for polyphenols extraction from lettuce Microwave technology for food applications“. Thesis, Avignon, 2019. http://www.theses.fr/2019AVIG0274.

Der volle Inhalt der Quelle
Annotation:
Les préoccupations environnementales actuelles comme l’épuisement des ressources fossiles, l’émission de gaz à effets de serre ou le réchauffement climatique imposent aux industriels de réduire leur impact sur l’environnement et de s’insérer dans une démarche plus verte. Dans le domaine de l’extraction cela se traduit, depuis quelques années, par le développement de techniques innovantes pour remplacer les procédés actuels utilisant des solvants pétro-sourcés et très énergivores. L’objectif de cette thèse a donc consisté en la recherche et le développement d’un nouveau procédé d’éco-extraction de produits végétaux, grâce à la combinaison d’un effet thermique, le chauffage par micro-ondes et d’un effet mécanique, la centrifugation. L’extraction est réalisée directement sur la matrice végétale fraîche, l’eau intracellulaire de la plante jouant le rôle du solvant extracteur. La première partie de ce manuscrit présentera le pilote expérimental à l’échelle semi-industrielle combinant micro-ondes et centrifugation et son adaptation potentielle pour le domaine de l’éco-extraction. Dans la seconde partie, les essais réalisés à l’aide de ce pilote ont été décrits pour l’extraction de métabolites secondaires choisis. Les résultats, obtenus sur des produits modèles comme la salade et les écorces d’oranges, mettent en évidence que ce pilote est opérationnel pour l’extraction de certains micronutriments. De plus, l’utilisation d’une centrifugation combinée à l’application de micro-ondes permet d’intensifier le rendement d’extraction et un gain de temps. Néanmoins, des questions quant à la compréhension relative à l’aéraulique du système et à la distribution du champ de température au cours de l’extraction sont soulevées pour maîtriser parfaitement tous les paramètres d’extraction. Enfin, ce procédé apparaît comme une réelle innovation dans le domaine de l’extraction et est très prometteur car il peut encore être optimisé. Il peut apporter aux industriels une solution alternative aux procédés classiques.
Current environmental concerns such as the depletion of fossil fuels, the emission of greenhouse gases and global warming are forcing industry to reduce its impact on the environment and to adopt a greener approach. In the field of extraction, this has led, for some years now, to the development of innovative techniques to replace current processes using petroleum-based and energy-intensive solvents. The aim of this thesis was therefore to research and develop a new green extraction process for plant products, based on the combination of a thermal effect (microwave heating) and a mechanical effect (centrifugal force). The extraction is carried out directly on the fresh plant matrix, the intracellular water of the plant acting as the extracting solvent. The first part of this manuscript presents the experimental pilot at the semi-industrial scale, combining microwaves and centrifugation, and its potential adaptation to the field of green extraction. In the second part, the tests carried out with this pilot are described for the extraction of selected secondary metabolites. The results, obtained on “model” products such as lettuce and orange peel, show that this pilot is operational for the extraction of certain micronutrients. In addition, the use of centrifugation combined with the application of microwaves makes it possible to increase the extraction yield and to save time. Nevertheless, questions about the airflow behaviour of the system and the distribution of the temperature field during extraction remain, and must be answered in order to fully control all extraction parameters. Finally, this process appears to be a real innovation in the field of extraction and is very promising because it can still be optimized; it can offer manufacturers an alternative to conventional processes.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Chaabani, Emna. „Eco-extraction et valorisation des métabolites primaires et secondaires des différentes parties de Pistacia lentiscus“. Thesis, Avignon, 2019. http://www.theses.fr/2019AVIG0714.

Der volle Inhalt der Quelle
Annotation:
Le développement de la chimie verte, l’épuisement des ressources pétrolières et la prise de conscience des risques liée à l’utilisation des solvants pétroliers ont conduit à la recherche de nouvelles alternatives pour réduire l’utilisation des solvants nocifs non renouvelables tels que l’hexane. L’objectif de cette thèse a donc consisté en la recherche de solvants alternatifs plus respectueux de la santé et de l’environnement pour l’éco-extraction des composés phénoliques et des acides gras à partir des graines de Pistacia lentiscus et des arômes à partir de ces feuilles. Pour ce faire, une première approche in silico basée sur des outils de prédictions tels que COSMO-RS a été complétée par une approche expérimentale associée à des traitements chimiométriques. Cette démarche a conduit à la sélection de quatre solvants verts, le MetHF pour l’extraction des acides gras, l’EtOAc pour l’extraction des arômes, l’EtOH/H2O (70/30) pour l’extraction des polyphénols et des flavonoïdes et l’EtOH/H2O (80/20) pour l’extraction des anthocyanes. Par la suite, l’activité anti-inflammatoire de l’extrait lipidique obtenu par le MeTHF et l’activité antioxydante des extraits aromatiques et des extraits phénoliques de P. lentiscus ont été évaluées in vitro. Ces travaux ont montré que l’huile végétale a présenté une activité anti-inflammatoire potentielle, inhibant de 91,9% la libération d’oxyde nitrique (NO.) dans les macrophages RAW 264,7. De plus, les résultats ont permis de mettre en évidence la richesse des fruits en antioxydants. En effet, l’extrait obtenu par l’EtOH/H2O (80/20) a montré une activité antiradicalaire (IC50 = 2,39 μg/ml) comparable à celle de l’antioxydant de synthèse le Trolox (IC50 = 2,56 μg/ml). En outre, l’extrait aromatique de P. lentiscus obtenu avec l’EtOAc a présenté une activité antiradicalaire intéressante contre le DPPH (IC50 = 5,82 μg/ml)
The development of green chemistry, the depletion of petroleum resources and the awareness of the risks associated with the use of petroleum solvents have led to the search for new alternatives to reduce the use of harmful, non-renewable petrochemical solvents such as hexane. The objective of this thesis was therefore the search for alternative solvents that are safer for health and the environment for the eco-extraction of fatty acids and phenolic compounds from Pistacia lentiscus fruits and of aromas from its leaves. A first in silico approach using COSMO-RS predictions was supplemented by an experimental approach paired with chemometric analysis. This led to the selection of four alternative solvents: MeTHF for oil extraction, EtOAc for aroma extraction, EtOH/H2O (70/30) for polyphenol and flavonoid extraction and EtOH/H2O (80/20) for anthocyanin extraction. Subsequently, the anti-inflammatory activity of the MeTHF lipid extract and the antioxidant activity of the aromatic and phenolic extracts of P. lentiscus were evaluated in vitro. The results showed that the vegetable oil exhibited a potential anti-inflammatory activity, inhibiting the release of nitric oxide (NO) in RAW 264.7 macrophages by 91.9%. In addition, the results highlighted the richness of the fruits in antioxidants: the EtOH/H2O (80/20) extract showed a good antiradical activity (IC50 = 2.39 μg/ml), comparable to that of the synthetic antioxidant Trolox (IC50 = 2.56 μg/ml). Finally, the aromatic extract obtained with EtOAc showed an interesting antiradical activity against DPPH (IC50 = 5.82 μg/ml).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Leouffre, Marc. „Extraction de sources d'électromyogrammes et évaluation des tensions musculaires“. Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT009/document.

Der volle Inhalt der Quelle
Annotation:
L'évaluation des tensions musculaires chez l'Homme dans les sciences du mouvement et les études posturales présente un grand intérêt pour le sport, la santé ou encore l'ergonomie. La biomécanique s'intéresse tout particulièrement à ces problèmes utilise la cinématique inverse pour recalculer, à partir de mesures physiques externes, les tensions musculaires internes. Le verrou scientifique principal de cette technique est la redondance musculaire, propre au vivant. En effet les actionneurs (muscles) sont plus nombreux que les degrés de liberté à contrôler. Les problèmes de cinématique inverse sont sous-déterminés, ils présentent plus d'inconnues que d'équations, et nécessitent l'usage de procédures d'optimisation. Dans ce contexte l'usage de l'électromyographie (EMG), signal électro-physiologique mesurable à la surface de la peau et témoin de l'activité musculaire, peut donner une idée de l'activité des muscles sous-jacents. La connaissance de l'activité des muscles permettrait d'introduire de l'information supplémentaire dans cette méthodologie inverse afin d'améliorer l'estimation des tensions musculaires réelles au cours de mouvements ou dans une posture donnée. De plus certaines applications ne permettent pas ou peu l'enregistrement de forces ou positions articulaires externes qui nécessitent un appareillage conséquent et rendent difficile l'étude de situations de la vie courante. L'électromyographie est dans un tel contexte une mesure non-invasive et peu encombrante, facilement réalisable. Elle a cependant elle aussi ses propres verrous scientifiques. L'EMG de surface sur de petits muscles très rapprochés comme les nombreux muscles des avant-bras peut être sujette à ce qui est communément appelé « cross-talk » ; la contamination croisée des voies. Ce cross-talk est le résultat de la propagation des signaux musculaires sur plusieurs voies simultanément, si bien qu'il est compliqué d'associer l'activité d'un muscle à une unique voie EMG. Le traitement numérique du signal dispose d'outils permettant, dans certaines conditions, de retrouver des sources inconnues mélangées sur plusieurs capteurs. Ainsi la séparation de sources peut être utilisée sur des signaux EMG afin de retrouver de meilleures estimations des signaux sources reflétant plus fidèlement l'activité de muscles sans l'effet du cross-talk. Ce travail de thèse montre dans un premier temps l'intérêt de l'EMG dans l'étude de l'utilisation d'un prototype d'interface homme-machine novateur. L'EMG permet en particulier de mettre en évidence la présence forte de cocontraction musculaire permettant de stabiliser les articulations pour permettre un contrôle précis du dispositif. En outre des perspectives d'analyse plus fines seraient envisageables en utilisant des techniques de séparation de sources performantes en électromyographie. Dans un second temps l'accent est mis sur l'étude des conditions expérimentales précises permettant l'utilisation des techniques de séparation de sources en contexte linéaire instantané en électromyographie de surface. L'hypothèse d'instantanéité du mélange des sources en particulier est étudiée et sa validité est vérifiée sur des signaux réels. Enfin une solution d'amélioration de la robustesse de la séparation de sources à l'hypothèse de l'instantanéité est proposée. Celle-ci repose sur la factorisation en matrices non-négatives (NMF) des enveloppes des signaux EMG
The evaluation of muscle tensions in movement and gait sciences is of great interest in the fields of sports, health and ergonomics. Biomechanics addresses these problems in particular and uses inverse kinematics to compute internal muscle tensions from external physical measurements. Muscular redundancy remains, however, a complex issue: there are more muscles than degrees of freedom, and thus more unknown variables, which makes inverse kinematics an under-determined problem that needs optimization techniques to be solved. In this context, electromyography (EMG), an electro-physiological signal that can be measured on the skin surface, gives an idea of the underlying muscle activities. Knowing the muscle activities could provide additional information to feed the optimization procedures and could help improve the accuracy of the estimated muscle tensions during real gestures or gait. There are also situations in which measuring external physical variables such as forces, positions or accelerations is not feasible, because it might require equipment incompatible with the object of the study. This is often the case in ergonomics, when equipping the object of the study with sensors is either too expensive or physically too cumbersome. In such cases EMG is very handy as a non-invasive measure that does not require the environment to be equipped with other sensors. EMG, however, has its own limits: surface EMG on small and closely located muscles, such as the muscles of the forearm, can be subject to “cross-talk”. Cross-talk is the cross-contamination of several sensors; it is the result of the propagation of the signals of more than one muscle onto one sensor. In the presence of cross-talk it is not possible to associate an EMG sensor with a given muscle. Signal processing offers techniques for dealing with this kind of problem: source separation techniques allow the estimation of unknown sources from several sensors recording mixtures of these sources. Applying source separation techniques to EMG can provide source estimates reflecting individual muscle activities without the effect of cross-talk. First, the benefits of using surface EMG during an ergonomics study of an innovative human-computer interface are shown. EMG revealed a relatively high level of muscle co-contraction, which can be explained by the need to stabilize the joints for a more accurate control of the device. It seems legitimate to think that using source separation techniques would provide signals that better represent single muscle activities and would improve the quality of this study. Then, the precise experimental conditions under which linear instantaneous source separation techniques work are studied. The validity of the instantaneity hypothesis in particular is tested on real surface EMG signals, and its strong dependency on relative sensor locations is shown. Finally, a method to improve the robustness of linear instantaneous source separation with respect to the instantaneity hypothesis is proposed. This method relies on the non-negative matrix factorization (NMF) of EMG signal envelopes.
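To illustrate the final idea above, the sketch below applies non-negative matrix factorization to surface-EMG envelopes (rectification followed by low-pass filtering) rather than to the raw signals. The sampling rate, cut-off frequency, rank and placeholder data are assumptions, not the thesis's settings.

```python
# Hedged sketch: NMF of surface-EMG envelopes on placeholder multichannel data.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import NMF

fs = 1000.0                                   # assumed sampling rate (Hz)
emg = np.random.randn(8, 10 * int(fs))        # placeholder: 8 channels, 10 s

# Envelope extraction: full-wave rectification then 5 Hz low-pass filter
b, a = butter(4, 5.0 / (fs / 2), btype="low")
envelopes = filtfilt(b, a, np.abs(emg), axis=1)
envelopes = np.clip(envelopes, 0, None)       # NMF requires non-negative input

# Factorize: W (channels x sources) mixing pattern, H (sources x time) activations
model = NMF(n_components=3, init="nndsvda", max_iter=500)
W = model.fit_transform(envelopes)
H = model.components_
print(W.shape, H.shape)
```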
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Caparos, Matthieu. „Analyse automatique des crises d'épilepsie du lobe temporal à partir des EEG de surface“. Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2006. http://tel.archives-ouvertes.fr/tel-00118993.

Der volle Inhalt der Quelle
Annotation:
L'objectif de la thèse est le développement d'une méthode de caractérisation des crises d'épilepsie du lobe temporal à partir des EEG de surface et plus particulièrement de la zone épileptogène (ZE) à l'origine des crises.
Des travaux récents validés en stéréoélectroencéphalographie (SEEG) ont démontré une évolution des synchronisations entre structures cérébrales permettant une caractérisation de la dynamique des crises du lobe temporal.
L'originalité des travaux consiste à étendre les méthodes développées en SEEG, à l'étude des signaux EEG de surface. Du point de vue médical, ce travail s'inscrit dans le cadre de l'aide au diagnostic préchirugical.
Des méthodes de mesure de relation, telles que la cohérence, la Directed Transfer Function (DTF), la corrélation linéaire (r²) ou la corrélation non-linéaire (h²), ont été adaptées pour répondre à cette problématique. Différents critères, définis à partir d'indications cliniques, ont permis la mise en évidence des avantages du coefficient de corrélation non-linéaire dans l'étude de l'épilepsie par les EEG de surface.
L'exploitation de l'évolution du coefficient de corrélation non-linéaire est à la base de trois applications de traitement automatique du signal EEG :
– La première est la détermination de la latéralisation de la ZE au départ d'une crise. Cette information constitue l'étape préliminaire lors de la recherche de la localisation de la ZE.
– La recherche d'une signature épileptique constitue la seconde application. La signature est extraite par un algorithme de mise en correspondance et de mesure de similarités en intra-patients.
– Une classification des crises du lobe temporal constitue la troisième application. Elle est réalisée en extrayant un ensemble de caractéristiques des signatures trouvées par l'algorithme de l'étape 2.
La base de données qui contient quarante-trois patients et quatre-vingt-sept crises (deux crises par patient, trois pour l'un d'entre eux) garantit une certaine significativité statistique.
En ce qui concerne les résultats, un taux de bonne latéralisation de l'ordre de 88% est obtenu. Ce taux est très intéressant, car dans la littérature, il peut être quelques fois atteint, mais en exploitant des données multimodalités et avec des méthodes non-automatiques. A l'issue de la classification, 85% des crises mésiales ont été correctement classifiées ainsi que 58% des crises mésio-latérales.
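For illustration, the sketch below shows a simplified version of the nonlinear correlation coefficient h² between two channels: y is predicted from x by a piecewise-linear regression over bins, and h² is the fraction of the variance of y that is explained. The bin count and the toy signals are assumptions, and the clinical lateralization and classification pipeline of the thesis is not reproduced.

```python
# Simplified sketch of the nonlinear correlation coefficient h2 between two signals.
import numpy as np

def h2(x, y, bins=10):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    mids, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        if mask.any():
            mids.append(x[mask].mean())
            means.append(y[mask].mean())
    y_hat = np.interp(x, mids, means)          # piecewise-linear regression curve
    residual = np.sum((y - y_hat) ** 2)
    total = np.sum((y - y.mean()) ** 2)
    return 1.0 - residual / total              # in [0, 1]; asymmetric in (x, y)

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = x ** 2 + 0.3 * rng.standard_normal(2000)   # nonlinear dependence missed by r2
print("h2(x->y) =", round(h2(x, y), 2))
```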
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Barea, Jaqueline Alves. „Extração de DNA de material de arquivo e fontes escassas para utilização em reação de polimerização em cadeia (PCR) /“. Botucatu : [s.n.], 2001. http://hdl.handle.net/11449/94771.

Der volle Inhalt der Quelle
Annotation:
Orientador: Maria Inês de Moura Campos Pardini
Resumo: Este trabalho visou a comparação de 5 diferentes métodos de extração de DNA, a partir de amostras de materiais de arquivo (tecido incluído em parafina, lâmina de hemograma corada e não corada com Leishman, lâmina de mielograma, gotas de sangue em Guthrie card) e fontes escassas (células bucais, 1 e 3 bulbos capilares, 2 mL de urina), para avaliar a facilidade de aplicação dos mesmos e possibilidade de amplificação desse DNA pela técnica de PCR. Os métodos incluíram digestão por proteinase K, seguida e não seguida por purificação com fenol/clorofórmio, utilização de Chelex 100®, utilização de InstaGene® e fervura em água estéril. O DNA obtido, foi testado por PCR, para a amplificação de três fragmentos gênicos: de Brain-derived neurotrophic factor (764 pb), de Fator V Leiden (220 pb) e de Abelson (106 pb), sendo que, a amplificação para o primeiro, eliminava a necessidade dos demais. Conforme o tamanho do fragmento gênico estudado, a fonte potencial de DNA e o método de extração utilizado, os resultados caracterizaram o melhor caminho para padronização dos procedimentos técnicos a serem incluídos e apresentados no manual de Procedimentos Operacionais Padrão do Laboratório de Biologia Molecular do Hemocentro - HC - UNESP - Botucatu.
Abstract: The present work aimed to compare five different methods of DNA extraction from archived material samples (paraffin-embedded tissues, peripheral blood smears stained or not stained with Leishman, aspirated bone marrow smears and blood spots on Guthrie cards) and scarce sources (oral cells, 1 and 3 hair bulbs, 2 mL of urine), in order to evaluate their ease of application and the possibility of amplifying the extracted DNA by PCR. The methods included proteinase K digestion, followed or not by phenol/chloroform purification, Chelex 100® (Bio-Rad), InstaGene® (Bio-Rad) and boiling in sterile water. The DNA obtained was tested for the amplification of three gene fragments: from the Brain-derived neurotrophic factor gene (764 bp), the Factor V Leiden gene (220 bp) and the Abelson gene (106 bp). According to the length of the gene fragment studied, the potential DNA source and the extraction method used, the results defined the best guidelines for the standardization of the technical procedures to be included in the Standard Operating Procedures manual of the Molecular Biology Laboratory of the Blood Center - Medical School - UNESP - Botucatu.
Mestre
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Mnayer, Dima. „Eco-Extraction des huiles essentielles et des arômes alimentaires en vue d'une application comme agents antioxydants et antimicrobiens“. Thesis, Avignon, 2014. http://www.theses.fr/2014AVIG0257/document.

Der volle Inhalt der Quelle
Annotation:
Les huiles essentielles et les arômes des plantes constituent un réel potentiel pour l’industrie dans le but de substituer aux composés synthétiques ayant des effets néfastes sur la santé et l’environnement. Afin de contribuer aux principes de la chimie verte, cette étude porte sur l’éco-extraction et la valorisation des extraits naturels des plantes et le développement d’une nouvelle technologie «verte» pour l’extraction des composés aromatiques naturels. La première partie de ce manuscrit mets en évidence les propriétés biologiques des huiles essentielles et des arômes et l’importance de leurs applications dans différents domaines surtout le domaine agroalimentaire. Dans la deuxième partie, les études sur les propriétés biologiques des huiles essentielles des plantes des Alliacées montrent leurs bonnes activités antioxydantes et antimicrobiennes. Ces résultats encourageants ont permis dans la troisième partie de valoriser les sous-produits d’oignon issus de la turbo hydrodistillation et qui sont considérés normalement comme déchets. La technique offre une bonne extraction des composés phénoliques et des flavonoïdes utilisant l’eau comme solvant naturel. La quatrième et la dernière partie de ce travail s’est orientée vers l’optimisation et le développement d’une technologie « verte» utilisant les ultrasons et l’huile de tournesol comme solvant naturel pour l’extraction des composés aromatiques du thym. Cette nouvelle approche écologique permet l’extraction des absolues dépourvues de cire et des résidus de solvants pétroliers, contenant la teneur la plus élevée en thymol et exerçant la plus forte activité antioxydante
Plant essential oils and aromas offer real potential for industry as substitutes for synthetic compounds that might have harmful effects on human health and the environment. In line with the principles of green chemistry, this study focuses on the eco-extraction and valorization of natural plant extracts and on the development of a new «green» technology for the extraction of aromatic compounds. The first part of this manuscript highlights the biological properties of essential oils and aromas and the importance of their applications in various sectors, especially the food industry. In the second part, studies on the biological properties of the essential oils from plants of the Alliaceae family show their good antioxidant and antimicrobial activities. These encouraging results made it possible, in the third part, to valorize the onion by-products resulting from turbo-hydrodistillation, which are normally considered as waste. The technique offers a good extraction of flavonoids and phenols using water as a natural solvent. The fourth and final part of this work deals with the optimization and development of a «green» technology using ultrasound and sunflower oil as a natural solvent for the extraction of aromatic compounds from thyme. This new ecological approach allows the extraction of absolutes free from waxes and petroleum solvent residues, with the highest thymol content and the strongest antioxidant activity.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

An, Tien Li. „Efeitos da retração dentária sobre o ponto "A" em pacientes submetidos ao tratamento ortodôntico /“. Araçatuba : [s.n.], 2003. http://hdl.handle.net/11449/95808.

Der volle Inhalt der Quelle
Annotation:
Orientador: Osmar Aparecido Cuoghi
Banca: Luiz Gonzaga Gandini Júnior
Banca: Eduardo César Almada Santos
Resumo: Objetivou-se avaliar o efeito da retração dentária sobre o ponto A nos sentidos ântero-posterior e vertical, bem como a correlação e a previsibilidade dos comportamentos dessas estruturas. Utilizou-se 60 telerradiogafias em norma lateral, tomadas no início e no final do tratamento ortodôntico corretivo de 30 pacientes (22 femininos e 8 masculinos) entre 10 e 17 anos, com má oclusão de Classe II, divisão 1 e de Classe I, com extração dos primeiros pré-molares superiores. Além das grandezas 1.NA, 1-NA, 1.PP e 1-A, distâncias lineares horizontais e verticais foram mensuradas utilizando como referência uma linha horizontal 7o abaixo do plano SN e a sua perpendicular. Sendo normalmente distribuídos, todos os dados foram mensurados duas vezes, cujos valores médios foram submetidos ao teste t emparelhado, testes de correlação e de regressão linear. Em média, o ponto A retraiu 0,71 mm e deslocou para baixo 2,38 mm, acompanhando uma retração do ápice radicular de 1,03 mm e da borda incisal de 4,13 mm e uma extrusão dentária de 2,35 mm. Houve correlação positiva entre a retração do ponto A e do ápice radicular (r=0,75; a<0,0001) e da borda incisal (r=0,70; a<0,0001), demonstrando um padrão previsível no comportamento ântero-posterior. Concluiu-se que o ponto A retraiu e deslocou para baixo acompanhando o dente, demonstrando padrão previsível no sentido ântero-posterior.
Abstract: The aim was to evaluate the effect of the retraction of the anterior teeth on point A, anteroposteriorly and vertically, as well as the correlation and the predictability of the behavior of these structures. Sixty lateral cephalometric radiographs were taken, at the beginning and at the end of corrective orthodontic treatment, from thirty patients (22 female and 8 male) aged 10 to 17 years, with Class II division 1 and Class I malocclusion, who underwent extraction of the maxillary first premolars. Besides the variables 1.NA, 1-NA, U1/PP and U1-A vert, horizontal and vertical linear measurements were made in relation to a horizontal reference line constructed 7° below the S-N plane and to its perpendicular. As the data were normally distributed, all measurements were made twice, and the mean values were submitted to the paired t test and to linear correlation and regression tests. On average, point A retracted 0.71 mm and moved 2.38 mm downward, following a retraction of 1.03 mm of the root apex and 4.13 mm of the incisal edge, and a tooth extrusion of 2.35 mm. The retraction of point A was positively correlated with the retraction of the root apex (r=0.75; p<0.0001) and of the incisal edge (r=0.70; p<0.0001), showing a predictable anteroposterior behavior. It was concluded that point A retracted and moved downward following the tooth, and that the retraction of point A in relation to the anterior tooth showed a predictable pattern.
Mestre
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Vachirachewin, Ratchaya [Verfasser]. „Thermostability of selected viruses in the presence of liquid egg yolk and distribution of virus during acetone extraction and microfiltration of egg yolk phospholipids / Ratchaya Vachirachewin“. Berlin : Freie Universität Berlin, 2013. http://d-nb.info/1034073893/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Gaumer, Gaëtan. „Résumé de données en extraction de connaissances à partir des données (ECD) : application aux données relationnelles et textuelles“. Nantes, 2003. http://www.theses.fr/2003NANT2025.

Der volle Inhalt der Quelle
Annotation:
Les travaux présentés dans cette thèse ont été réalisés dans le cadre d'un contrat CNET JMINER, dont le but est l'étude de pré-traitement et post-traitements en extraction de connaissances à partir des données, appliqués aux lettres de réclamation de France Télécom. Les caractéristiques particulières des données de cette application ont orienté nos travaux de recherche. Nous nous sommes tout d'abord intéressés aux problèmes liés à l'extraction de connaissances à partir de très importants volumes de données. Nous proposons, pour résoudre ces problèmes, de remplacer les données à traiter par un résumé de ces données possédant les mêmes caractéristiques. Cette proposition s'est concrétisée par le développement du logiciel CFSUMM, un système de création de résumés de données utilisant des mesures de similarités et d'indiscernabilités entre instances. Nous montrons pourquoi et comment les caractéristiques de ce logiciel le destine particulièrement à la réduction d'importants volumes de données, qu'ils soient issus de bases de données relationnelles ou d'indexation de documents non structurés (texte, html, etc.)…
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Souza, Flávia Vieira de. „Curva de crescimento e exportação de nutrientes e sódio por frutos de mangueira Palmer, Haden e Tommy Atkins /“. Jaboticabal : [s.n.], 2007. http://hdl.handle.net/11449/88321.

Der volle Inhalt der Quelle
Annotation:
Resumo: Objetivou-se com o presente estudo determinar a curva de crescimento e a extração de nutrientes e sódio por frutos de mangueira Haden, Palmer e Tommy Atkins. O estudo foi conduzido em uma área de produção comercial de mangas em Janaúba-MG. O delineamento experimental para cada variedade de manga foi inteiramente casualizado, com cinco repetições e os tratamentos corresponderam as épocas de amostragem das panículas . Cada unidade experimental foi composta por cinco plantas. Durante o pleno florescimento, foram demarcadas seis panículas por planta, iniciando a coleta quando os frutos atingiram o estádio de chumbinho, aproximadamente cinco dias após a antese, e finalizando quando os frutos atingiram o ponto de colheita, totalizando 15, 19 e 19 amostragens, para as variedades Haden, Palmer e Tommy Atkins, respectivamente. Após cada coleta, determinaram-se a massa fresca, massa seca e os teores de nutrientes e sódio . A variedade Haden atingiu o ponto de colheita aos 92 dias após a antese, seguida pela variedade Tommy Atkins aos 115 dias e Palmer aos 117 dias. As curvas de crescimento dos frutos para as três variedades de mangueira apresentaram padrão sigmoidal. A ordem decrescente de extração de nutrientes e sódio pelos frutos da mangueira Haden foi: K>N>Ca>Mg>S>P>Mn>Fe>Na>B>Zn>Cu. Na mangueira variedade Palmer: K>N>Ca>P>Mg>S>Mn>Na>Fe>Cu>B>Zn e na mangueira Tommy Atkins foi: K>N>Ca>P>Mg>S>Mn>Fe>Na>Cu>B>Zn. Os nutrientes extraídos em maiores quantidades pelas três variedades foram: K>N>Ca. A variedade Haden extrai maior quantidade de todos os nutrientes, exceto o P.
Abstract: The aim of the present study was to determine the growth curve and the extraction of nutrients and sodium by fruits of the mango varieties Haden, Palmer and Tommy Atkins. The study was carried out in a commercial mango production area in Janaúba-MG. The experimental design for each mango variety was completely randomized, with five replications, and the treatments corresponded to the sampling times of the panicles. Each experimental unit was composed of five plants. During full bloom, six panicles per plant were marked; collection began five days after anthesis and ended when the fruits reached the harvest point, totaling 15, 19 and 19 samplings for the varieties Haden, Palmer and Tommy Atkins, respectively. After each collection, the fresh mass, dry mass and the concentrations of nutrients and sodium in the fruits were determined. The Haden variety reached the harvest point at 92 days after anthesis, followed by Tommy Atkins at 115 days and Palmer at 118 days. The growth curves of the fruits of the three varieties showed a sigmoidal pattern. The decreasing order of nutrient and sodium extraction by the fruits of the Haden variety was K>N>Ca>Mg>S>P>Mn>Fe>Na>B>Zn>Cu; of the Palmer variety, K>N>Ca>P>Mg>S>Mn>Na>Fe>Cu>B>Zn; and of the Tommy Atkins variety, K>N>Ca>P>Mg>S>Mn>Fe>Na>Cu>B>Zn. The nutrients extracted in the largest amounts by the three varieties were K>N>Ca. The Haden variety extracted the largest amount of all nutrients, except P.
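As a purely illustrative note on the sigmoidal growth pattern mentioned above, the sketch below fits a logistic curve to fruit dry-mass accumulation. The data points are invented placeholders, not measurements from the thesis.

```python
# Illustrative sketch only: logistic (sigmoidal) fit of fruit dry-mass accumulation.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """K: asymptotic mass, r: growth rate, t0: inflection point (days after anthesis)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

days = np.array([5, 15, 25, 35, 45, 55, 65, 75, 85, 92], dtype=float)
dry_mass = np.array([0.3, 1, 4, 12, 30, 55, 75, 88, 94, 96], dtype=float)  # placeholder

params, _ = curve_fit(logistic, days, dry_mass, p0=[100.0, 0.1, 45.0])
print("K = %.1f g, r = %.3f /day, t0 = %.1f days" % tuple(params))
```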
Orientador: William Natale
Coorientador: Dilermando Dourado Pacheco
Banca: José Carlos Barbosa
Banca: José Ricardo Mantovani
Mestre
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Mendes, Tatiana Sussel Gonçalves. „Extração semi-automática de rodovias em imagens digitais usando técnicas de correlação e o princípio de teste ativo /“. Presidente Prudente : [s.n.], 2005. http://hdl.handle.net/11449/88530.

Der volle Inhalt der Quelle
Annotation:
Orientador: Aluir Porfírio Dal Poz
Resumo: É esperado que o operador humano permaneça, por um longo tempo, como parte integrante do sistema de extração de feições. Portanto, as pesquisas que caminham para o desenvolvimento de novos métodos semi-automáticos são ainda de grande importância. Nesta linha, esta pesquisa propõe um método semi-automático para a extração de rodovias em imagens digitais. A metodologia é uma combinação entre técnicas de correlação e estratégia de teste ativo. Os resultados experimentais obtidos da aplicação do método em imagens reais mostram que o método funciona corretamente, demonstrando que pode ser usado em esquemas de captura de dados.
Abstract: The human operator is still expected to remain part of the feature extraction system for a relatively long time. Therefore, research into the development of new semi-automatic methods is still of great importance. Following this line, this research proposes a semi-automatic method for road extraction from digital images. It is based on a combination of correlation techniques and an active testing strategy. In order to initialize the extraction process, the operator needs to supply two close seed points plus another one at the end of the road segment selected for extraction. Experimental results obtained from the application of the method to real image data show that it works properly, demonstrating that the developed method can be used in data capturing schemes.
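For illustration of the correlation part only, the sketch below scores candidate road cross-profiles against a reference profile with normalized cross-correlation; the active-testing strategy and the seed-point geometry described in the thesis are not shown, and the profile values are placeholders.

```python
# Hedged sketch: normalized cross-correlation between road cross-profiles.
import numpy as np

def ncc(template, candidate):
    t = template - template.mean()
    c = candidate - candidate.mean()
    denom = np.sqrt((t ** 2).sum() * (c ** 2).sum())
    return float((t * c).sum() / denom) if denom > 0 else 0.0

template = np.array([60, 90, 200, 210, 205, 95, 55], dtype=float)  # bright road, dark sides
candidates = [template + np.random.normal(0, 10, template.size) for _ in range(5)]
scores = [ncc(template, c) for c in candidates]
best = int(np.argmax(scores))                  # profile most similar to the template
print(best, [round(s, 2) for s in scores])
```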
Mestre
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Kawaguchi, Hirokazu. „Signal Extraction and Noise Removal Methods for Multichannel Electroencephalographic Data“. 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188593.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Sicaire, Anne-Gaëlle. „Solvants alternatifs et techniques innovantes pour l'éco-extraction des huiles végétales à partir de graines oléagineuses“. Thesis, Avignon, 2016. http://www.theses.fr/2016AVIG0260.

Der volle Inhalt der Quelle
Annotation:
Ces dernières années, l’intérêt croissant porté aux considérations environnementales et à la sécurité des procédés pose la question de l’utilisation de solvants pétrochimiques nocifs non renouvelables tels que l’hexane, mais aussi de la quantité d’énergie investie dans le procédé de trituration des graines oléagineuses. L’objectif de cette thèse a donc consisté en la recherche et le développement de procédés d’éco-extraction d’huile végétale, issue de graines oléagineuses, grâce à des technologies innovantes (ultrasons et micro-ondes) et des solvants alternatifs plus respectueux de la santé et de l’environnement. La première partie de ce manuscrit propose en premier lieu l’optimisation du procédé d’extraction à l’hexane d’huile de colza à partir d’écailles de pression avec des ultrasons. Bien que ceux-ci aient un impact positif sur les rendements, le temps d’extraction et la consommation de solvant, l’utilisation de l’hexane reste problématique. Dans une deuxième partie, la substitution de l’hexane par des solvants alternatifs plus « verts » a donc été considérée. Une première approche expérimentale a été complétée par une approche prédictive grâce à l’utilisation d’outils d’aide à la décision : les paramètres de solubilité de Hansen et le modèle COSMO-RS. Cette démarche a conduit à la sélection d’un solvant, le 2-méthyltétrahydrofurane, pour la réalisation d’une étude complète allant de l’échelle laboratoire à l’échelle pilote. Dans une troisième et dernière partie, la combinaison de solvants alternatifs avec une technique innovante, les micro-ondes, pour l’extraction d’huile colza à partir d’écailles de pression a été envisagée. Cette étude a mis en évidence l’intérêt des micro-ondes dans le cas d’une sélectivité de chauffage entre la biomasse et le solvant
In recent years, the growing interest in environmental considerations and process safety has raised questions about the use of harmful, non-renewable petrochemical solvents such as hexane, as well as about the amount of energy invested in the oilseed crushing process. The objective of this thesis was therefore the research and development of green processes for extracting vegetable oil from oilseeds, using innovative technologies (ultrasound and microwaves) and alternative solvents that are safer for health and the environment. The first part of this manuscript describes the ultrasound-assisted optimization of the hexane extraction of oil from rapeseed press cake. Although ultrasound has a positive impact on extraction yield, extraction time and solvent consumption, the use of hexane remains questionable. In the second part, the substitution of hexane with "green" alternative solvents was therefore considered. A first experimental approach was supplemented by a predictive approach using decision-support tools: the Hansen solubility parameters and the COSMO-RS model. This led to the selection of one solvent, 2-methyltetrahydrofuran, for a comprehensive study from laboratory to pilot scale. In the third and final part, the combination of alternative solvents with an innovative technology, microwaves, for the extraction of oil from rapeseed press cake was investigated. This study highlighted the benefit of microwaves in the case of selective heating of the biomass relative to the solvent.
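As an illustration of the Hansen-parameter screening mentioned above, the sketch below computes the Hansen distance Ra and the relative energy difference (RED) between a solvent and a target solute. The delta values and interaction radius are rough, literature-style figures given for illustration only, not the values used in the thesis.

```python
# Hedged sketch: Hansen solubility distance Ra and RED as a solvent-screening tool.
import math

def hansen_distance(solvent, solute):
    dD1, dP1, dH1 = solvent
    dD2, dP2, dH2 = solute
    return math.sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

oil = (16.5, 3.0, 4.0)        # (deltaD, deltaP, deltaH) in MPa**0.5 -- illustrative solute
R0 = 8.0                      # assumed interaction radius of the solute
for name, hsp in {"hexane": (14.9, 0.0, 0.0), "2-MeTHF": (16.9, 5.0, 4.3)}.items():
    ra = hansen_distance(hsp, oil)
    print(f"{name}: Ra = {ra:.1f}, RED = {ra / R0:.2f}")   # RED < 1 suggests a good solvent
```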
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Delvar, Alice. „Valorisation par bioraffinage des co-produits des fruits de la Passion et de Goyavier pour la mise en oeuvre de peintures écoconçues“. Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0047.

Der volle Inhalt der Quelle
Annotation:
Les peintures naturelles, plus écologiques et plus saines, constituent une alternative intéressante aux peintures solvantées pour certaines applications. Les peintures naturelles de la gamme Natura,développée par la société DERIVERY, sont constituées d’un liant à base d’huile végétale mise en émulsion dans l’eau et répondent aux critères du label PURE. Afin de développer une nouvelle gamme de peintures écoconçues adaptées notamment au marché de l'Outre-Mer, de nouvelles sources locales d’approvisionnement en huile végétale tropicale sont nécessaires. Un biocide naturel est également recherché pour remplacer les biocides synthétiques actuellement utilisés dans les peintures en émulsion. Pour répondre à ces objectifs de sélection de nouveaux ingrédients naturels, les matières premières identifiées sont des co-produits issus de l’industrie agroalimentaire réunionnaise, notamment ceux en provenance de la production de jus des fruits de la passion et du goyavier rouge. Ces co-produits, ou ces écarts de production, sont composés de graines, de pulpes résiduelles et de peaux, non ou peu valorisés actuellement. Les travaux réalisés ont montré la faisabilité technique de procédés d'extraction d’ingrédients multifonctionnels limitant les impacts environnementaux. Ainsi, en fonction de la nature du co-produit, les extractions d’huile des graines ont été réalisées par pression à froid, par macération éthanolique et par fluide supercritique (SC-CO2) et, pour les pulpes et peaux, des extraits aqueux et éthanoliques ont été étudiés. Les huiles des graines de fruit de la passion et de goyavier rouge ont des compositions en acides gras les classant comme huiles semi-siccatives, avec plus de 70 % d’acide linoléique. L’huile de fruit de la passion est riche en caroténoïdes agissant pour une meilleure conservation. L’huile de goyavier comporte une teneur élevée en stérols végétaux, intéressants pour la formulation de peinture grâce à leurs propriétés émulsifiantes. Les fractions obtenues à partir des pulpes de ces deux fruits tropicaux présentent des teneurs élevées en polyphénols associées à des activités antioxydantes notables, en particulier pour le goyavier rouge. Les extraits éthanoliques montrent également une activité antimicrobienne vis-à-vis de plusieurs souches bactériennes et d’une souche fongique. Deux autres procédés alternatifs d’extraction basés sur une activation thermo-mécanique ont été mis en œuvre à partir du fruit de la passion, en accord avec une démarche de bioraffinerie. Ces procédés permettent de réaliser simultanément une étape d'extraction et de pré-formulation d’émulsions, avec une extraction combinée des différentes molécules actives, lipophiles et hydrophiles. Le rôle des polyphénols et des protéines dans la stabilisation des émulsions est mis en évidence par rhéologie et par le suivi cinétique du crémage. Des nouvelles émulsions ont été préparées avec les molécules actives obtenues et évaluées en tant que liants dans la fabrication des peintures naturelles de la gamme Natura. Les résultats ont permis de valider le concept d’utilisation de ces actifs végétaux pour l’écoconception de peintures naturelles dont les propriétés correspondent aux critères souhaités par l’industriel. Les tests microbiologiques réalisés sur les formulations ont montré la capacité des extraits éthanoliques des deux fruits à améliorer la résistance des peintures émulsionnées vis-àvis d’une contamination microbienne
Natural paints, with lower environmental and health impacts, are an interesting alternative to solvent-based paints for some applications. The company DERIVERY has developed a range of natural paints called Natura in agreement with the criteria of the PURE ecological label; the binder of these paints consists of an emulsion of vegetable oil in water. In order to develop a new range of eco-designed paints adapted especially to the overseas market, new local sources of tropical vegetable oils are needed. A natural biocide is also sought to replace the synthetic biocides currently used in emulsion paints. To meet these objectives of selecting new natural ingredients, the raw materials identified are co-products from the Réunion food industry, especially those resulting from the production of passion fruit and red strawberry guava juices. These co-products, or production rejects, are composed of seeds, residual pulp and peels, which are currently little or not valorized. In this work, we showed the technical feasibility of extraction processes for multifunctional ingredients with limited environmental impacts. Depending on the nature of the co-product, oil was extracted from the seeds by cold pressing, by ethanolic maceration or with supercritical CO2 (SC-CO2), while aqueous and ethanolic extracts of the pulps and peels were studied. The vegetable oils obtained from the seeds of these two fruits have fatty acid compositions that classify them as semi-drying oils, with more than 70% linoleic acid. Passion fruit oil is rich in carotenoids, which contribute to better preservation. The strawberry guava oil has a high content of plant sterols, whose emulsifying properties are of interest for paint formulation. The fractions obtained from the pulps have high levels of polyphenols associated with significant antioxidant activities, especially for the red strawberry guava. The ethanolic extracts of the two fruits also show antimicrobial activity against several bacterial strains and one fungal strain. Two alternative extraction methods based on thermo-mechanical activation were implemented on the passion fruit, in accordance with a biorefinery approach. These methods make it possible to carry out extraction and emulsion pre-formulation simultaneously, with a combined extraction of the different hydrophilic and lipophilic molecules. The role of polyphenols and proteins in emulsion stabilization was assessed by rheological measurements and by kinetic monitoring of creaming. New emulsions were prepared with the active molecules obtained and were tested as binders for natural paints. The properties of the resulting formulations meet the industrial specifications, validating the use of these new ingredients for the eco-design of natural paints. Microbiological tests carried out on the formulations showed that the ethanolic extracts of both fruits improve the resistance of the emulsified paints against microbial contamination.
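As an aside to the emulsion-stability work described above, the creaming kinetics mentioned in the abstract are commonly summarized by a creaming index, the serum-layer height relative to the total emulsion height. The short Python sketch below uses invented illustrative numbers and is not taken from the thesis:

# Creaming index over time: CI(t) = serum layer height / total emulsion height.
# Illustrative numbers only (hours, millimetres); not data from the thesis.
def creaming_index(serum_height_mm, total_height_mm):
    return 100.0 * serum_height_mm / total_height_mm

observations = [(0, 0.0), (2, 1.5), (6, 4.0), (24, 7.5)]   # (time in h, serum layer in mm)
total = 50.0                                                # total emulsion column height in mm
for hours, serum in observations:
    print(f"t = {hours:>2} h  CI = {creaming_index(serum, total):4.1f} %")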
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Huang, Li-An, und 黃立安. „Development of PC-Based ECG system and Image ECG Features Extraction“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/01214174034474159870.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Yang-Ming University
Institute of Biomedical Engineering
Academic year 94 (ROC calendar)
The image electrocardiogram (ECG) display method gives an integrated view of the temporal evolution and spatial distribution of cardiac signals, presenting all 12 leads so that signal correlations and variations can be differentiated more effectively. In this research, lead II was used not only to detect the P, R, and T waves but also to assist localization within the spatial distribution of the image ECG. The automatic spatial-location function helps less experienced cardiologists identify where the waves lie in the image ECG, keeps users oriented within it, and simultaneously extracts image-ECG features, providing a user-friendly interface and removing unpredictable user-dependent factors. It thereby helps physicians distinguish normal from abnormal cardiac signals. In future work, the cardiac signals can be segmented on the basis of the automatic locator in order to discriminate the wave shapes in the image ECG. A PC-based 12-lead ECG acquisition system was also developed in this research, combining analog front-end and digital signal processing. To reduce the system's volume, weight, and susceptibility to noise, it is battery powered and transmits signals over optical fiber, which also improves convenience and portability in clinical use.
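The abstract does not give the localization algorithm itself; the following Python sketch only illustrates the general idea of using lead II as a temporal anchor, detecting R peaks on lead II and aligning windows from all 12 leads into a beats-by-leads-by-samples block comparable to an image ECG. The synthetic data, sampling rate, and the SciPy peak detector are assumptions for illustration, not the thesis implementation.

import numpy as np
from scipy.signal import find_peaks

fs = 500                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 s of data

# Synthetic stand-in for a 12-lead recording: lead II carries clear R spikes.
rr = 0.8                                        # beat interval in seconds
lead_ii = 0.05 * np.random.randn(t.size)
lead_ii[(np.arange(0, 10, rr) * fs).astype(int)] += 1.0
leads = np.tile(lead_ii, (12, 1)) + 0.02 * np.random.randn(12, t.size)

# 1) R-peak detection on lead II (temporal anchor for the whole image ECG).
r_peaks, _ = find_peaks(lead_ii, height=0.5, distance=int(0.3 * fs))

# 2) Align a fixed window around every R peak in every lead:
#    the result is a beats x leads x samples "image ECG" block.
pre, post = int(0.25 * fs), int(0.45 * fs)
beats = [leads[:, p - pre:p + post] for p in r_peaks
         if p - pre >= 0 and p + post <= leads.shape[1]]
image_ecg = np.stack(beats)                     # shape: (n_beats, 12, pre + post)
print(image_ecg.shape, "beats aligned on lead II R peaks")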
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

YOU, HONG-ZHI, und 游宏志. „Adaptive filter design for ECG waveform extraction“. Thesis, 1989. http://ndltd.ncl.edu.tw/handle/22051571465035696949.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Chiu, Shao-Yu, und 邱少禹. „Extraction of ECG, EGG and respiratory signal from single composite abdominal signal“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/60950518097141706285.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Taiwan University
Institute of Biomedical Engineering
Academic year 97 (ROC calendar)
In the past, the lack of integrated bio-signal acquisition instruments made monitoring multiple physiological parameters rather complicated: many electrodes had to be applied to the body surface at the same time, and the separate recording devices could interfere with one another. In addition, for patients at home who may experience sudden discomfort, an easily operated device that records several essential physiological signals would be extremely helpful, and the recordings could be transferred over a network to health-care specialists. For these purposes, we implemented a portable device that uses a few abdominal electrodes to measure several electrophysiological signals simultaneously. Signals acquired from three electrodes placed on the abdominal wall were separated into the electrocardiogram (ECG), electrogastrogram (EGG), and respiratory rhythm according to their individual rhythmic characteristics. This thesis describes the combined ECG, EGG, and respiration recording system, including the hardware for data acquisition and storage. In the ECG processing, a dynamic-window baseline-wander fitting algorithm was used to remove the drift caused by respiration. The combined monitoring system was validated against synchronous recordings from commercially available single-purpose systems: good ECG correlation was demonstrated in 17 subjects for both long-duration (1 hour) and short-duration (5 minutes) analyses. For the EGG, a specially designed electrode ensured simultaneous recording, and in a 10-subject study both long-duration (1 hour) and short-duration (20 minutes) analyses showed good correlation. The respiratory component, obtained by two stages of down-sampling and two stages of filtering, also showed good correlation in 10 subjects. In brief, we built a system that accurately records three sets of physiological signals with three electrodes on the upper abdomen: the high-frequency, high-amplitude ECG, the low-frequency, low-amplitude EGG, and the respiratory movement signal can be recorded simultaneously and then separated according to their characteristics. This simple design is user friendly and can be applied to ambulatory physiological monitoring, especially for the purpose of symptom correlation.
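A minimal sketch of the frequency-based separation idea described above, assuming typical rhythm bands (QRS energy well above 1 Hz, respiration around 0.1-0.5 Hz, gastric slow waves near 0.05 Hz, i.e. about 3 cycles per minute). It is not the thesis implementation; the cut-off frequencies, synthetic composite signal, and zero-phase Butterworth filters are assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 120, 1 / fs)    # two minutes of data

# Synthetic composite abdominal signal: crude ECG-like spikes + respiration + EGG.
composite = (np.sin(2 * np.pi * 1.2 * t) ** 31        # crude ~72 bpm "ECG"
             + 0.5 * np.sin(2 * np.pi * 0.25 * t)     # ~15 breaths/min
             + 0.3 * np.sin(2 * np.pi * 0.05 * t)     # ~3 cpm gastric rhythm
             + 0.05 * np.random.randn(t.size))

def band(signal, low, high, fs, order=2):
    """Zero-phase Butterworth band-pass between low and high Hz."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

ecg_part = band(composite, 1.0, 40.0, fs)    # keeps the QRS energy
resp_part = band(composite, 0.1, 0.5, fs)    # respiratory band
egg_part = band(composite, 0.015, 0.09, fs)  # electrogastrogram band (~3 cpm)

print([round(float(np.std(x)), 3) for x in (ecg_part, resp_part, egg_part)])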
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Liang, Sz-Ying, und 梁思潁. „Wavelet-Based ECG Features Extraction and Noise Reduction“. Thesis, 2013. http://ndltd.ncl.edu.tw/handle/11716642464682328078.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Liao, Chun-Kai, und 廖俊凱. „Implementation Real-Time ECG Feature Extraction Using FPGA“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/73033282462503282882.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Biomedical Engineering
Academic year 98 (ROC calendar)
The objective of this work is to develop an algorithm that extracts features from electrocardiogram (ECG) signals and to implement it on an FPGA as a test prototype for a System-on-Chip (SoC) design. The algorithm analyzes the components of the ECG signal in real time to identify abnormal rhythms and heartbeats. The control program handles detection and analysis and drives the FPGA board's LCD, USB interface, and flash memory: both the real-time ECG signal and the analysis results can be displayed on the LCD panel, and the information can also be transmitted over USB and presented with self-developed software written in Borland C++ Builder. The algorithm's performance was evaluated in MATLAB and validated against the MIT-BIH Arrhythmia database, which has been annotated by cardiologists; the overall detection tolerance of the algorithm was 0.02 seconds. The prototype system was also tested in real time: ECG signals from five volunteers were acquired, analyzed, and displayed online.
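The abstract quotes a detection tolerance of 0.02 seconds against the MIT-BIH annotations. A small Python sketch of how such beat matching is commonly scored (hypothetical beat times, not the thesis' FPGA or MATLAB code) follows:

# Sketch: score detected beat times against reference annotations with a
# +/- 20 ms tolerance, the figure quoted in the abstract. Times are in seconds.
def score_detections(detected, reference, tol=0.02):
    detected, reference = sorted(detected), sorted(reference)
    tp, used = 0, set()
    for r in reference:
        # find the closest unused detection within the tolerance window
        best = None
        for i, d in enumerate(detected):
            if i in used or abs(d - r) > tol:
                continue
            if best is None or abs(d - r) < abs(detected[best] - r):
                best = i
        if best is not None:
            used.add(best)
            tp += 1
    fn = len(reference) - tp          # missed beats
    fp = len(detected) - tp           # spurious detections
    sensitivity = tp / len(reference) if reference else 0.0
    ppv = tp / len(detected) if detected else 0.0
    return tp, fp, fn, sensitivity, ppv

# toy example with one missed beat and one false alarm
ref = [0.80, 1.62, 2.41, 3.20]
det = [0.81, 1.60, 3.21, 3.90]
print(score_detections(det, ref))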
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Wu, S.-chien, und 吳思謙. „Wavelet-Based 12 Lead ECG Characteristic Waves Extraction“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/95646358979592593779.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Chung Hua University
Master's Program, Department of Computer Science and Information Engineering
Academic year 95 (ROC calendar)
Extraction of the characteristic waves of the 12-lead electrocardiogram (ECG) is a key technique for automatic computer-aided diagnosis of cardiac disease. So far, apart from HP, Pagewrite200, and PHILIPS instruments, 12-lead ECG machines in Taiwan do not offer automatic computer analysis and diagnosis. Developing 12-lead ECG characteristic-point extraction is therefore a prerequisite for building, in Taiwan, an instrument capable of automatic computer-aided cardiac diagnosis. This research performs wavelet-transform-based extraction of ECG characteristic waves across several cardiac conditions, including acute myocardial infarction, hyperkalemia, normal rhythm, and heart rates above 150 bpm. A new algorithm locates the P wave, QRS complex, and T wave from wavelet coefficients at different scales. The results are as follows: (1) the average sensitivity for the R wave is 99% with a specificity of 99.9%; (2) the average sensitivity for the Q and J waves is 97% with a specificity of 99.9%; (3) the average sensitivity for the T wave is 95% with a specificity of 99.9%; (4) the average sensitivity for the P wave is 92.5% with a specificity of 99.9%. In future work, the developed characteristic-wave extraction technique will be applied to achieve automatic computer-aided diagnosis of cardiac disease.
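A hedged sketch of the general wavelet idea the abstract builds on, not the thesis algorithm: decompose the ECG with a discrete wavelet transform, keep the detail scales that carry most of the QRS energy, and threshold the reconstruction to locate R peaks. The PyWavelets library, the db4 wavelet, the chosen levels, and the synthetic signal are assumptions.

import numpy as np
import pywt
from scipy.signal import find_peaks

fs = 360                        # MIT-BIH style sampling rate (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.0 * t) ** 41 + 0.05 * np.random.randn(t.size)

# Discrete wavelet decomposition; mid-scale details carry most QRS energy.
coeffs = pywt.wavedec(ecg, "db4", level=5)
# Keep only detail levels d3-d4 (roughly 11-45 Hz at fs = 360 Hz) and rebuild.
kept = [np.zeros_like(c) for c in coeffs]
kept[-3] = coeffs[-3]           # d3
kept[-4] = coeffs[-4]           # d4
qrs_band = pywt.waverec(kept, "db4")[: ecg.size]

# Threshold the QRS-emphasized signal to locate R-peak candidates.
peaks, _ = find_peaks(np.abs(qrs_band),
                      height=0.5 * np.max(np.abs(qrs_band)),
                      distance=int(0.3 * fs))
print(f"{len(peaks)} candidate QRS complexes located in 10 s of synthetic ECG")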
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Lin, Jun Rong, und 林俊榮. „The study of ECG features extraction and classification“. Thesis, 1995. http://ndltd.ncl.edu.tw/handle/60362558851765244178.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Chung Yuan Christian University
Department of Biomedical Engineering
Academic year 83 (ROC calendar)
For the purpose of real-time ECG diagnosis, this thesis discusses a method for recognizing ECG patterns. The ECG is one of the main cardiac diagnostic tools, and abnormal beats within a continuous sequence of heartbeats are among the most difficult to identify, for example in Holter recordings, bedside monitoring, or exercise ECG recordings. Once training of the self-organizing map (SOM) neural network is finished, the output layer classifies the patterns and the weight values of the feature vectors and the center node are fixed. The weights are combined with the feature vector through a Euclidean-distance equation to pick out abnormal ECG signals. Using the MIT/BIH arrhythmia database, 1150 samples were used for training and 923 samples for testing. Features of the lead II ECG were extracted automatically, and the normalized data were fed to the SOM network for pattern classification. From the classified SOM patterns, the weight of the center node and the feature vector form a pattern-distance (minimum Euclidean distance) equation; feeding all training samples into this equation yields the interval of NOR values for normal beats. Five ECG patterns were tested: normal (NOR), premature ventricular contraction (PVC), fusion premature ventricular contraction (FUS PVC), right bundle branch block (RBBB), and left bundle branch block (LBBB). Twelve ECG features were extracted; the SOM uses 400 processing elements, a neighborhood radius of 20, 80 learning iterations, and a learning-rate coefficient of 0.5. On ECG patterns from the MIT/BIH database, the system achieves more than 98 percent correct classification. For labeling abnormal ECG beats, the distance equation can be reduced to only 8 dominant features, so the method can be implemented as a real-time process to screen ECG data. The method was also tested by screening 14 adult subjects in real time.
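A minimal Python sketch of the screening principle described above: train a self-organizing map on normal-beat feature vectors and flag a beat whose minimum Euclidean distance to the map's weight vectors exceeds the range seen during training. The NumPy implementation, map size, and toy 12-dimensional features are assumptions and are much smaller than the thesis configuration (400 nodes, neighborhood radius 20, 80 iterations, learning rate 0.5).

import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows=10, cols=10, iters=80, lr=0.5, radius=5.0):
    """Tiny SOM trainer: nodes are pulled toward samples under a Gaussian
    neighborhood whose radius and learning rate shrink over the iterations."""
    n_feat = data.shape[1]
    weights = rng.normal(size=(rows, cols, n_feat))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1).astype(float)
    for it in range(iters):
        r = radius * (1.0 - it / iters) + 1e-3
        eta = lr * (1.0 - it / iters)
        for x in data:
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)     # best-matching unit
            h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * r ** 2))
            weights += eta * h[..., None] * (x - weights)
    return weights.reshape(-1, n_feat)

def min_distance(weights, x):
    """Minimum Euclidean distance from a feature vector to any SOM node."""
    return float(np.min(np.linalg.norm(weights - x, axis=1)))

# Toy stand-in for 12-dimensional normal-beat feature vectors.
normal_feats = rng.normal(0.0, 1.0, size=(300, 12))
som = train_som(normal_feats, iters=20)

# Normal range of the distance statistic, learned from the training beats.
train_d = np.array([min_distance(som, x) for x in normal_feats])
upper = train_d.mean() + 3 * train_d.std()

candidate = rng.normal(4.0, 1.0, size=12)       # shifted beat -> should be flagged
print("abnormal" if min_distance(som, candidate) > upper else "normal")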
APA, Harvard, Vancouver, ISO und andere Zitierweisen
