Dissertations / Theses on the topic 'Principle component analysis'

To see the other types of publications on this topic, follow the link: Principle component analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Principle component analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Huanting. "Portfolio Construction Using Principle Component Analysis." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/927.

Full text
Abstract:
"Principal Components Analysis (PCA) is an important mathematical technique widely used in the world of quantitative finance. The ultimate goal of this paper is to construct a portfolio with hedging positions, which is able to outperform the SPY benchmark in terms of the Sharpe ratio. Mathematical techniques implemented in this paper besides principle component analysis are the Sharpe ratio, ARMA, ARCH, GARCH, ACF, and Markowitz methodology. Information about these mathematical techniques is listed in the introduction section. Through conducting in sample analysis, out sample analysis, and back testing, it is demonstrated that the quantitative approach adopted in this paper, such as principle component analysis, can be used to find the major driving factor causing movements of a portfolio, and we can perform a more effective portfolio analysis by using principle component analysis to reduce the dimensions of a financial model."
APA, Harvard, Vancouver, ISO, and other styles
2

Shawli, Alaa. "Scoring the SF-36 health survey in scleroderma using independent component analysis and principle component analysis." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97180.

Full text
Abstract:
The short form SF-36 survey is a widely used survey of patient health-related quality of life. It yields eight subscale scores of functional health and well-being that are summarized by two physical and mental component summary scores. However, recent studies have reported inconsistent results between the eight subscales and the two component summary measures when the scores are from a sick population. They claim that this problem is due to the method used to compute the SF-36 component summary scores, which is based on principal component analysis with orthogonal rotation. In this thesis, we explore various methods in order to identify a method that is more accurate in obtaining the SF-36 physical and mental component summary scores (PCS and MCS), with a focus on diseased patient subpopulations. We first explore traditional data analysis methods such as principal component analysis (PCA) and factor analysis using maximum likelihood estimation, and apply orthogonal and oblique rotations with both methods to data from the Canadian Scleroderma Research Group registry. We compare these common approaches to a recently developed data analysis method from signal processing and neural network research, independent component analysis (ICA). We found that oblique rotation is the only method that reduces the mean mental component scores to best match the mental subscale scores. In order to better elucidate the differences between the orthogonal and oblique rotations, we studied the performance of PCA with the two approaches for recovering the true physical and mental component summary scores in a simulated diseased population where we knew the truth. We explored the methods in situations where the true scores were independent and when they were also correlated. We found that ICA and PCA with orthogonal rotation performed very similarly when the data were generated to be independent, but differently (with ICA performing worse) when the data were generated to be correlated. PCA with oblique rotation tended to perform worse than both methods when the data were independent, but better when the data were correlated. We also discuss the connection between ICA and PCA with orthogonal rotation, which lends strength to the use of the varimax rotation for the SF-36. Finally, we applied ICA to the scleroderma data and found relatively low correlation between ICA and unrotated PCA in estimating the PCS and MCS scores and very high correlation between ICA and PCA with varimax rotation. PCA with oblique rotation also had a relatively high correlation with ICA. Hence, we concluded that ICA could be seen as a compromise solution between the two methods.
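A minimal sketch of the comparison discussed above, contrasting varimax-rotated principal component scores with independent component scores on eight subscale-like variables, is given below. The varimax routine, the simulated data and the two-component choice are illustrative assumptions; they are not the SF-36 scoring coefficients or the registry data used in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal (varimax) rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    last_objective = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        objective = s.sum()
        if last_objective != 0 and objective / last_objective < 1 + tol:
            break
        last_objective = objective
    return loadings @ rotation

rng = np.random.default_rng(1)
subscales = rng.normal(size=(300, 8))             # stand-in for the eight SF-36 subscale scores

pca = PCA(n_components=2).fit(subscales)
loadings = pca.components_.T                      # 8 x 2 unrotated loadings
rotated_loadings = varimax(loadings)              # varimax-rotated loadings

# Summary scores from the rotated components, and ICA-based scores for comparison.
rotated_scores = (subscales - subscales.mean(axis=0)) @ rotated_loadings
ica_scores = FastICA(n_components=2, random_state=1).fit_transform(subscales)
```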
APA, Harvard, Vancouver, ISO, and other styles
3

Yaseen, Muhammad Usman. "Identification of cause of impairment in spiral drawings, using non-stationary feature extraction approach." Thesis, Högskolan Dalarna, Datateknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6473.

Full text
Abstract:
Parkinson’s disease is a clinical syndrome manifesting with slowness and instability. As it is a progressive disease with varying symptoms, repeated assessments are necessary to determine the outcome of treatment changes in the patient. In the recent past, a computer-based method was developed to rate impairment in spiral drawings. The downside of this method is that it cannot separate bradykinetic and dyskinetic spiral drawings. This work aims to construct a computer method that can overcome this weakness by using the Hilbert-Huang Transform (HHT) of the tangential velocity. The work is done under supervised learning, so a target class is used, which is acquired from a neurologist using a web interface. After reducing the dimension of the HHT features by using PCA, classification is performed with the C4.5 classifier. The classification results are close to random guessing, which shows that the computer method is unsuccessful in assessing the cause of drawing impairment in spirals when evaluated against human ratings. One possible reason is that there is no difference between the two classes of spiral drawings. Displaying patients' self-ratings along with the spirals in the web application is another possible reason, as the neurologist may have relied too much on this in his own ratings.
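The pipeline described above (feature extraction, PCA for dimension reduction, then a C4.5 decision tree) can be approximated with standard tooling, as in the hedged sketch below: scikit-learn's entropy-based decision tree stands in for C4.5, and random features stand in for the Hilbert-Huang features, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
hht_features = rng.normal(size=(120, 50))     # stand-in for HHT features of the tangential velocity
labels = rng.integers(0, 2, size=120)         # stand-in for the neurologist's two-class ratings

reduced = PCA(n_components=10).fit_transform(hht_features)

# C4.5 grows trees with an information-gain criterion; an entropy-based CART tree
# from scikit-learn is the closest readily available stand-in.
classifier = DecisionTreeClassifier(criterion="entropy", random_state=0)
scores = cross_val_score(classifier, reduced, labels, cv=5)
print(scores.mean())   # close to 0.5 here, i.e. near random guessing on uninformative features
```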
APA, Harvard, Vancouver, ISO, and other styles
4

Holm, Klaus Herman. "Assessment of Atlanta’s PM2.5 source profiles using principle component analysis and positive matrix factorization." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/20751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mahmood, Muhammad Tariq. "Face Detection by Image Discriminating." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4352.

Full text
Abstract:
Human face recognition systems have gained considerable attention during the last few years. There are many applications with respect to security, sensitivity and secrecy. Face detection is the first and most important step of a recognition system. The human face is non-rigid and shows many variations in image conditions, size, resolution, pose and rotation. Its accurate and robust detection has been a challenge for researchers. A number of methods and techniques have been proposed, but due to the huge number of variations no single technique is successful for all kinds of faces and images. Some methods exhibit good results under certain conditions, while others work well with different kinds of images. Image discriminating techniques are widely used for pattern and image analysis. Common discriminating methods are discussed.
APA, Harvard, Vancouver, ISO, and other styles
6

Chisholm, Daniel J. "Use of Principle Component Analysis for the identification and mapping of phases from energy-dispersive x-ray spectra." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA359572.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Yancan. "The Effects of Ownership on Bank Performance: A Study of Commercial Banks in China." Scholarship @ Claremont, 2012. http://scholarship.claremont.edu/cmc_theses/515.

Full text
Abstract:
Many Chinese commercial banks have experienced ownership transitions during the past decade, along with significant improvements in performance. In order to examine the effect of ownership on bank performance, an empirical study of Chinese commercial banks is performed. A dataset covering 16 Chinese commercial banks over the period 2002–2011 is tested using a linear regression model and principal component analysis. It is found that being a Joint-Stock Commercial Bank has a positive effect on earnings per share (EPS), and being a City Commercial Bank increases return on assets (ROA). On the contrary, operating as a State-Owned Commercial Bank affects both EPS and ROA negatively. The empirical results also indicate that undergoing an initial public offering on the Hong Kong Stock Exchange helps a bank to improve performance, while listing in Mainland China does not.
APA, Harvard, Vancouver, ISO, and other styles
8

Chemistruck, Heather Michelle. "A Galerkin Approach to Define Measured Terrain Surfaces with Analytic Basis Vectors to Produce a Compact Representation." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/29585.

Full text
Abstract:
The concept of simulation-based engineering has been embraced by virtually every research and industry sector (Sinha, Liang et al. 2001; Mocko and Fenves 2003). Engineering and science communities have become increasingly aware that computer simulation is an indispensable tool for resolving a multitude of scientific and technological problems. It is clearly desirable to gain a reliable perspective on the behaviour of a system early in the design stage, long before building costly prototypes (Chul and Ro 2002; Letherwood, Gunter et al. 2004; Makarand Datar 2007; Ersal, Fathy et al. 2008; Mueller, Ferris et al. 2009). Simulation tools have become a critical part of the automotive industry due to their ability to reduce the time and money spent in the development process. Terrain is the principal source of vertical excitation to the vehicle and must be accurately represented in order to correctly predict the vehicle response in simulation. In this dissertation, non-deformable terrain surfaces are defined as a sequence of vectors, where each vector comprises terrain heights at locations oriented perpendicular to the direction of travel. The evolution and implications of terrain surface measurement techniques and existing methods for correcting INS drift are reviewed as a framework for a new compensation method for INS drift in terrain surface measurements. Each measurement is considered a combination of the true surface and the error surface, defined on a Hilbert vector space, in which the error is decomposed into drift (global error) and noise (local error). It is also desirable to develop a compact, path-specific terrain surface representation that exploits the inherent anisotropy of the terrain that vehicles traverse. In order to obtain this, a set of analytic basis vectors is formed from Gegenbauer polynomials, parameterized to approximate the empirical basis vectors of the true terrain surface. It is also desirable to evaluate vehicle models and tire models over a wide range of terrain types, but it is computationally impractical to store long distances of every terrain surface variation. This dissertation examines the terrain surface, rather than the terrain profile, to maximize the information available to the tire model (i.e. wheel path data). A method to decompose the terrain surface as a combination of deterministic and stochastic components is also developed.
Ph. D.
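To illustrate the idea of approximating a measured terrain slice with parameterised analytic basis vectors, the sketch below fits a synthetic height vector with a small Gegenbauer polynomial basis by least squares. The polynomial orders, the alpha parameter and the synthetic terrain are assumptions for the example, not values from the dissertation.

```python
import numpy as np
from scipy.special import eval_gegenbauer

# One transverse slice of terrain heights at locations perpendicular to the direction of travel.
x = np.linspace(-1.0, 1.0, 64)                      # normalised lateral coordinate
rng = np.random.default_rng(3)
heights = 0.05 * np.sin(3 * x) + 0.01 * rng.normal(size=x.size)

# Analytic basis: Gegenbauer polynomials C_n^(alpha)(x) for n = 0..5.
alpha = 1.5
basis = np.column_stack([eval_gegenbauer(n, alpha, x) for n in range(6)])

# Least-squares projection of the measured slice onto the analytic basis vectors.
coefficients, *_ = np.linalg.lstsq(basis, heights, rcond=None)
compact_slice = basis @ coefficients                 # compact representation of the slice
print(coefficients, np.linalg.norm(heights - compact_slice))
```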
APA, Harvard, Vancouver, ISO, and other styles
9

Zito, Tiziano. "Exploring the slowness principle in the auditory domain." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2012. http://dx.doi.org/10.18452/16450.

Full text
Abstract:
In this thesis we develop models and algorithms based on the slowness principle in the auditory domain. Several experimental results, as well as the successful results in the visual domain, indicate that, despite the different nature of the sensory signals, the slowness principle may play an important role in the auditory domain as well, if not in the cortex as a whole. Different modeling approaches have been used, which make use of several alternative representations of the auditory stimuli. We show the limitations of these approaches. In the domain of signal processing, the slowness principle and its straightforward implementation, the Slow Feature Analysis algorithm, have proven useful beyond biologically inspired modeling. A novel algorithm for nonlinear blind source separation is described that is based on a combination of the slowness and statistical independence principles, and is evaluated on artificial and real-world audio signals. The Modular toolkit for Data Processing open source software library is also presented.
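A compact numpy rendering of linear Slow Feature Analysis (whiten the signal, then keep the directions in which the temporal derivative has the least variance) is sketched below. It is a textbook illustration rather than any of the models developed in the thesis; the Modular toolkit for Data Processing mentioned above ships a full SFA implementation.

```python
import numpy as np

def linear_sfa(x, n_slow=1):
    """Linear Slow Feature Analysis: whiten, then keep directions whose derivative varies least."""
    x = x - x.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    whitener = eigvec / np.sqrt(eigval)          # columns scaled by 1/sqrt(eigenvalue)
    z = x @ whitener                             # whitened signal (identity covariance)
    dz = np.diff(z, axis=0)                      # temporal derivative (finite differences)
    _, directions = np.linalg.eigh(np.cov(dz, rowvar=False))
    return z @ directions[:, :n_slow]            # eigenvalues ascend, so slowest features come first

# Toy input: one slowly varying sinusoid hidden among fast noise channels.
t = np.linspace(0, 2 * np.pi, 2000)
rng = np.random.default_rng(4)
signals = np.column_stack([
    np.sin(t) + 0.1 * rng.normal(size=t.size),
    rng.normal(size=t.size),
    rng.normal(size=t.size),
])
slow_feature = linear_sfa(signals, n_slow=1)
```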
APA, Harvard, Vancouver, ISO, and other styles
10

Bloxson, Julie M. "Characterization of the Porosity Distribution within the Clinton Formation, Ashtabula County, Ohio by Geophysical Core and Well Logging." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1341879463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Pham, Duong Hung. "Contributions to the analysis of multicomponent signals : synchrosqueezing and associated methods." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM044/document.

Full text
Abstract:
Many physical signals, including audio (music, speech), medical data (ECG, PCG), marine mammal sounds and gravitational waves, can be accurately modeled as a superposition of amplitude- and frequency-modulated waves (AM-FM modes), called multicomponent signals (MCSs). Time-frequency (TF) analysis plays a central role in characterizing such signals, and in that framework numerous methods have been proposed over the last decade. However, these methods suffer from an intrinsic limitation known as the uncertainty principle. In this regard, the reassignment method (RM) was developed with the purpose of sharpening TF representations (TFRs) given respectively by the short-time Fourier transform (STFT) or the continuous wavelet transform (CWT). Unfortunately, it does not allow for mode reconstruction, in contrast to its recent variant known as the synchrosqueezing transform (SST). Nevertheless, many critical problems associated with the latter still remain to be addressed, such as the weak frequency modulation condition, the retrieval of the modes of an MCS from its downsampled STFT, or the TF signature estimation of irregular and discontinuous signals. This dissertation mainly deals with such problems in order to provide more powerful and accurate invertible TF methods for analyzing MCSs. The dissertation makes six contributions. The first introduces a second-order extension of wavelet-based SST along with a discussion of its theoretical analysis and practical implementation. The second puts forward a generalization of existing STFT-based synchrosqueezing techniques, known as the high-order STFT-based SST (FSSTn), that enables a wider range of MCSs to be handled. The third proposes a new technique built on the second-order STFT-based SST (FSST2) and a demodulation procedure, called the demodulation-FSST2-based technique (DSST2), enabling better mode reconstruction. The fourth contribution is a novel approach allowing for the retrieval of the modes of an MCS from its downsampled STFT. The fifth presents an improved method developed in the reassignment framework, called adaptive contour representation computation (ACRC), for efficient estimation of the TF signatures of a larger class of MCSs. The last contribution is a joint analysis of ACRC with non-negative matrix factorization (NMF) to enable effective denoising of phonocardiogram (PCG) signals.
APA, Harvard, Vancouver, ISO, and other styles
12

Andersson, Moa. "Who supports non-traditional gender roles? : Exploring the Relationship Between Self-interest, Contextual Exposure and Gender Attitudes in Sweden." Thesis, Stockholms universitet, Sociologiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-118772.

Full text
Abstract:
Beliefs about which behaviors and responsibilities should typically be assumed by women and men are central in shaping gender relations and gender equality in society. The belief that women should be responsible for domestic work while men should provide economically for the family gives rise to an uneven opportunity structure, situating women in a disadvantaged position compared to men. In order to achieve gender equality, traditional gender role attitudes need to liberalize. This thesis examines who supports non-traditional gender roles in Sweden. Data representative of the Swedish population between the ages of 18 and 79 were used to explore the relationship between social context, individual self-interest and gender role attitudes. The results showed that women are more likely to be positive towards non-traditional gender roles if they are situated in highly educated social contexts. Conversely, men were found to be more likely to be positive if situated in gender-equal contexts. This indicates that men's beliefs regarding what is appropriate for women might be countered by women in gender-equal contexts, while women may find confirmation of their non-traditional gender role attitudes in other equally liberal women.
APA, Harvard, Vancouver, ISO, and other styles
13

Abuasbeh, Mohammad. "Fault Detection and Diagnosis for Brine to Water Heat Pump Systems." Thesis, KTH, Tillämpad termodynamik och kylteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183595.

Full text
Abstract:
The overall objective of this thesis is to develop methods for fault detection and diagnosis for ground source heat pumps that can be used by servicemen to assist them in accurately detecting and diagnosing faults during the operation of the heat pump. The aim of this thesis is to develop two fault detection and diagnosis methods: a sensitivity ratio method and a data-driven method using principal component analysis. For the sensitivity ratio method, two semi-empirical models of the heat pump unit were built to simulate fault-free and faulty conditions in the heat pump. Both models have been cross-validated with fault-free experimental data. The fault-free model is used as a reference. Then, fault trend analysis is performed in order to select a pair of uniquely sensitive and insensitive parameters to calculate the sensitivity ratio for each fault. When the sensitivity ratio value for a certain fault drops below a predefined value, that fault is diagnosed and an alarm message for that fault appears. Simulated fault data are used to test the model, and the model successfully detected and diagnosed the fault types that were tested under different operating conditions. In the second method, principal component analysis is used to derive linear combinations of the original variables and calculate the principal components to reduce the dimensionality of the system. A simple clustering technique is then used for operating-condition classification and the fault detection and diagnosis process. Each fault is represented by four clusters connected by three lines, where each cluster represents a different fault intensity level. Fault detection is performed by measuring the shortest orthogonal distance between the test point and the lines connecting the fault clusters. Simulated fault-free and faulty data are used to train the model. Then, a new set of simulated fault data is used to test the model, and the model successfully detected and diagnosed all fault types and intensity levels of the tested faults under different operating conditions. Both models use only seven temperature measurements, two pressure measurements (from which the condensation and evaporation temperatures are calculated) and the electrical power as input to the fault detection and diagnosis model, in order to reduce cost and make the methods more convenient to implement. Finally, for each model, a user-friendly graphical user interface is built to facilitate operation of the model by the serviceman.
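The geometric test described above, projecting a measurement into principal-component space and taking the shortest orthogonal distance to the line segments joining a fault's intensity clusters, can be sketched as follows. The fault names, cluster centres and threshold are invented for illustration and do not correspond to the heat pump measurements or faults studied in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA

def point_to_segment(p, a, b):
    """Orthogonal distance from point p to the line segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

rng = np.random.default_rng(5)
# Stand-in for the training measurements: 7 temperatures, 2 pressures and electrical power.
training_data = rng.normal(size=(500, 10))
pca = PCA(n_components=2).fit(training_data)

# Four cluster centres per fault (one per intensity level), joined by three line segments.
fault_clusters = {
    "fault_A": np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.5], [3.0, 3.0]]),
    "fault_B": np.array([[0.0, 0.0], [-1.0, 1.0], [-2.0, 2.5], [-3.0, 4.5]]),
}

def diagnose(sample, threshold=0.5):
    p = pca.transform(sample.reshape(1, -1))[0]
    distance, name = min(
        (min(point_to_segment(p, centres[i], centres[i + 1]) for i in range(len(centres) - 1)), fault)
        for fault, centres in fault_clusters.items()
    )
    return name if distance < threshold else "no fault detected"

print(diagnose(training_data[0]))
```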
APA, Harvard, Vancouver, ISO, and other styles
14

Uygun, Nazli. "Validity Of Science Items In The Student Selection Test In Turkey." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609716/index.pdf.

Full text
Abstract:
This thesis presents content-related and construct-related validity evidence for the science subtests within the Student Selection Test (SST) in Turkey by examining the content, cognitive processes, item characteristics, factorial structure, and group differences based on high school type. A total of 126,245 students from six types of high school were included in the SST 2006 data. Reliability analysis, item analysis, principal component analysis (PCA) and one-way ANOVA were carried out to evaluate the content-related and construct-related evidence of validity of the SST. The SPSS and ITEMAN programs were used to conduct the above-mentioned analyses. According to the results of the content analysis, the science items in the SST 2006 were found to measure various cognitive processes under the knowledge, understanding and problem-solving cognitive domains. According to the PCA findings, those items loaded on three factors measuring very similar dimensions. Moreover, a threat to validity was detected via one-way ANOVA due to significant mean differences across high school types.
APA, Harvard, Vancouver, ISO, and other styles
15

Vadapalli, Hima Bindu. "Recognition of facial action units from video streams with recurrent neural networks : a new paradigm for facial expression recognition." University of the Western Cape, 2011. http://hdl.handle.net/11394/5415.

Full text
Abstract:
Philosophiae Doctor - PhD
This research investigated the application of recurrent neural networks (RNNs) for recognition of facial expressions based on the facial action coding system (FACS). Support vector machines (SVMs) were used to validate the results obtained by RNNs. In this approach, instead of recognizing whole facial expressions, the focus was on the recognition of action units (AUs) that are defined in FACS. Recurrent neural networks are capable of gaining knowledge from temporal data while SVMs, which are time invariant, are known to be very good classifiers. Thus, the research consists of four important components: comparison of the use of image sequences against single static images, benchmarking feature selection and network optimization approaches, study of inter-AU correlations by implementing multiple output RNNs, and study of difference images as an approach for performance improvement. In the comparative studies, image sequences were classified using a combination of Gabor filters and RNNs, while single static images were classified using Gabor filters and SVMs. Sets of 11 FACS AUs were classified by both approaches, where a single RNN/SVM classifier was used for classifying each AU. Results indicated that classifying FACS AUs using image sequences yielded better results than using static images. The average recognition rate (RR) and false alarm rate (FAR) using image sequences were 82.75% and 7.61%, respectively, while the classification using single static images yielded an RR and FAR of 79.47% and 9.22%, respectively. The better performance with image sequences can be attributed to RNNs' ability, as stated above, to extract knowledge from time-series data. Subsequent research then investigated benchmarking dimensionality reduction, feature selection and network optimization techniques, in order to improve the performance provided by the use of image sequences. Results showed that an optimized network, using weight decay, gave the best RR and FAR of 85.38% and 6.24%, respectively. The next study was of the inter-AU correlations existing in the Cohn-Kanade database and their effect on classification models. To accomplish this, a model was developed for the classification of a set of AUs by a single multiple-output RNN. Results indicated that high inter-AU correlations do in fact aid classification models to gain more knowledge and, thus, perform better. However, this was limited to AUs that start and reach apex at almost the same time. This suggests the need for a larger database of AUs, which could provide both individual AUs and AU combinations for further investigation. The final part of this research investigated the use of difference images to track the motion of image pixels. Difference images provide both noise and feature reduction, an aspect that was studied. Results showed that the use of difference image sequences provided the best results, with an RR and FAR of 87.95% and 3.45%, respectively, which is shown to be significant when compared to the use of normal image sequences classified using RNNs. In conclusion, the research demonstrates that the use of RNNs for classification of image sequences is a new and improved paradigm for facial expression recognition.
APA, Harvard, Vancouver, ISO, and other styles
16

Grener, Doreen Elaine. "A Content Analysis of Elementary Science Textbook Series From 1930 Through 1990 for The Presentation of The Principle of Humans as a Component of The Ecosystem /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487933648650631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Bršlíková, Jana. "Analýza úmrtnostních tabulek pomocí vybraných vícerozměrných statistických metod." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-201859.

Full text
Abstract:
Mortality is historically one of the most important demographic indicators and reflects the level of development of each country. The objective of this diploma thesis is to compare mortality rates in the analyzed countries around the world, over time and among each other, using principal component analysis, which allows the data to be assessed in a different way. The big advantage of this method is the minimal loss of information and a quite understandable interpretation of mortality in each country. The thesis offers several interesting graphical outputs that, for example, confirm the higher mortality rates in Eastern European countries compared to Western European countries and show that the Czech Republic is the post-communist country where mortality fell the most between 1990 and 2010. The source of the data is the Human Mortality Database, and all data were processed in the statistical tool SPSS.
APA, Harvard, Vancouver, ISO, and other styles
18

Nunes, Madalena Baioa Paraíso. "Portfolio selection : a study using principal component analysis." Master's thesis, Instituto Superior de Economia e Gestão, 2017. http://hdl.handle.net/10400.5/14598.

Full text
Abstract:
Master's in Finance
In this thesis we apply principal component analysis to the Portuguese stock market using the constituents of the PSI-20 index from July 2008 to December 2016. The first seven principal components were retained, as we verified that these represented the major risk sources in this specific market. Seven principal portfolios were constructed and we compared them with other allocation strategies. The 1/N portfolio (with an equal investment in each of the 26 stocks), the PPEqual portfolio (with an equal investment in each of the 7 principal portfolios) and the MV portfolio (based on Markowitz's (1952) mean-variance strategy) were constructed. We concluded that these last two portfolios presented the best results in terms of return and risk, with PPEqual portfolio being more suitable for an investor with a greater degree of risk aversion and the MV portfolio more suitable for an investor willing to risk more in favour of higher returns. Regarding the level of risk, PPEqual is the portfolio with the best results and, so far, no other portfolio has presented similar values. Therefore, we found an equally-weighted portfolio among all the principal portfolios we built, which was the most risk efficient.
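A hedged sketch of two of the portfolio constructions compared above, the 1/N portfolio and an equal weighting of the first seven principal portfolios obtained from the return covariance matrix, is shown below; the simulated returns are a stand-in for the PSI-20 constituents and the Sharpe-style statistic is only indicative.

```python
import numpy as np

rng = np.random.default_rng(6)
returns = rng.normal(0.0003, 0.012, size=(2000, 26))   # stand-in for the 26 PSI-20 constituents

# Principal portfolios: eigenvectors of the sample covariance matrix of returns,
# each rescaled so the absolute weights sum to one.
cov = np.cov(returns, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]                   # sort by explained variance, descending
principal_weights = eigenvectors[:, order[:7]]          # weights of the first 7 principal portfolios
principal_weights = principal_weights / np.abs(principal_weights).sum(axis=0)

one_over_n = np.full(26, 1.0 / 26)                      # 1/N benchmark portfolio
pp_equal = principal_weights.mean(axis=1)               # equal investment in the 7 principal portfolios

for name, weights in [("1/N", one_over_n), ("PPEqual", pp_equal)]:
    r = returns @ weights
    print(name, r.mean() / r.std(ddof=1) * np.sqrt(252))   # annualised Sharpe-style ratio
```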
APA, Harvard, Vancouver, ISO, and other styles
19

Koyuncu, Fulya. "Validity Of Biology Items In 2006, 2007, And 2008 Student Selection Test In Turkey." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612946/index.pdf.

Full text
Abstract:
The student selection test in Turkey is composed of two parts. The purpose of the first part is to assess students' higher-order thinking skills, such as analytical thinking, interpretation and reasoning, related to the elementary school curriculum and 9th grade curriculum objectives. The second part of the test aims to assess students' higher-order thinking skills given in the high school curriculum. The main aim of this thesis is to analyze to what extent the 2006, 2007, and 2008 student selection test biology items assess higher-order cognitive skills. In accordance with this purpose, the elementary and high school curricula and the appropriateness of the questions in the student selection test with respect to the educational objectives of the curriculum are examined. In addition, the dimensions of the 2006, 2007, and 2008 SST biology items are examined using Exploratory Component Analysis and Confirmatory Component Analysis techniques. The results of those analyses revealed that SST biology items mostly focus on the remembering skill and fail to assess higher-order thinking skills. Additionally, there is no consistency among the 2006, 2007, and 2008 SST biology items in terms of dimensions, which means there is no consistent construct in the biology subtests of the SSTs. The other aim of the present study is to identify how much academic and non-academic factors explain biology achievement. Reading comprehension, mathematics, physics, and chemistry achievement are used as academic factors, while age, gender, and school type are used as non-academic factors. The findings revealed that academic factors, especially chemistry achievement, have a significant effect on biology achievement. In terms of non-academic factors, graduating from a selective high school plays an important role in biology achievement. Additionally, older students and girls tend to have higher grades in biology.
APA, Harvard, Vancouver, ISO, and other styles
20

Brand, Hilmarie. "PCA and CVA biplots : a study of their underlying theory and quality measures." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80363.

Full text
Abstract:
Thesis (MComm)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: The main topics of study in this thesis are the Principal Component Analysis (PCA) and Canonical Variate Analysis (CVA) biplots, with the primary focus falling on the quality measures associated with these biplots. A detailed study of different routes along which PCA and CVA can be derived precedes the study of the PCA biplot and CVA biplot respectively. Different perspectives on PCA and CVA highlight different aspects of the theory that underlies PCA and CVA biplots respectively and so contribute to a more solid understanding of these biplots and their interpretation. PCA is studied via the routes followed by Pearson (1901) and Hotelling (1933). CVA is studied from the perspectives of Linear Discriminant Analysis, Canonical Correlation Analysis as well as a two-step approach introduced in Gower et al. (2011). The close relationship between CVA and Multivariate Analysis of Variance (MANOVA) also receives some attention. An explanation of the construction of the PCA biplot is provided subsequent to the study of PCA. Thereafter follows an in-depth investigation of quality measures of the PCA biplot as well as the relationships between these quality measures. Specific attention is given to the effect of standardisation on the PCA biplot and its quality measures. Following the study of CVA is an explanation of the construction of the weighted CVA biplot as well as two different unweighted CVA biplots based on the two-step approach to CVA. Specific attention is given to the effect that accounting for group sizes in the construction of the CVA biplot has on the representation of the group structure underlying a data set. It was found that larger groups tend to be better separated from other groups in the weighted CVA biplot than in the corresponding unweighted CVA biplots. Similarly, it was found that smaller groups tend to be separated to a greater extent from other groups in the unweighted CVA biplots than in the corresponding weighted CVA biplot. A detailed investigation of previously defined quality measures of the CVA biplot follows the study of the CVA biplot. It was found that the accuracy with which the group centroids of larger groups are approximated in the weighted CVA biplot is usually higher than in the corresponding unweighted CVA biplots. Three new quality measures that assess the accuracy of the Pythagorean distances in the CVA biplot are also defined. These quality measures assess the accuracy of the Pythagorean distances between the group centroids, the Pythagorean distances between the individual samples, and the Pythagorean distances between the individual samples and group centroids in the CVA biplot respectively.
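The basic construction of a PCA biplot (sample markers from the first two principal component scores, variable axes from the component loadings) can be sketched as below. This is the standard textbook construction via the singular value decomposition, not the quality measures developed in the thesis, and the arrow scaling is purely cosmetic.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 5))
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)    # standardised data matrix

# The singular value decomposition of the (standardised) data underlies the PCA biplot.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * s[:2]                           # sample markers: principal component scores
loadings = Vt[:2].T                                 # variable axes: component loadings

fig, ax = plt.subplots()
ax.scatter(scores[:, 0], scores[:, 1], s=10)
arrow_scale = np.abs(scores).max()                  # cosmetic scaling so the variable axes are visible
for j, (dx, dy) in enumerate(loadings * arrow_scale):
    ax.arrow(0, 0, dx, dy, head_width=0.05, length_includes_head=True)
    ax.text(dx, dy, f"var{j + 1}")
ax.set_xlabel("PC1")
ax.set_ylabel("PC2")
plt.show()
```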
APA, Harvard, Vancouver, ISO, and other styles
21

Nandong, Jobrun. "Modelling and control strategies for extractive alcoholic fermentation: partial control approach." Thesis, Curtin University, 2010. http://hdl.handle.net/20.500.11937/2197.

Full text
Abstract:
The vast majority of chemical and biochemical process plants are characterized by a large number of measurements and a relatively small number of manipulated variables; these 'thin' plants have more output than input variables. As the number of manipulated variables restricts the number of controlled variables, thin plants present a daunting challenge to engineers in selecting which measured variables to control. This is an important problem in modern process control, because controlled variable selection is one of the key questions which must be carefully addressed in order to effectively design control strategies for process plants. While the issue of controlled variable selection has remained the key question to be resolved since the articulation of the control structure design (CSD) problem by Foss in the 1970s, the work described in this thesis points to another, equally important question in CSD: what is the sufficient number of controlled variables required? Thinking over this question leads one to the necessity of gaining a rational understanding of the governing principle in partial control design, namely variable interaction. In this thesis, we propose a novel data-oriented approach to solving the control structure problem within the context of the partial control framework. This approach represents a significant departure from the mainstream methods in CSD, which can currently be broadly classified into two major categories: mathematical-oriented and heuristic-hierarchical approaches. The key distinguishing feature of the proposed approach lies in its adoption of a technique based on Principal Component Analysis (PCA), which is used to systematically determine the suitable controlled variables. Conversely, the determination of the controlled variables in mathematical-oriented and heuristic-hierarchical approaches is done via mathematical optimization and process knowledge/engineering experience, respectively. The data-oriented approach in this thesis emerges from the fusion of two important concepts, namely the partial control structure and PCA. While the partial control concept provides a sound theoretical framework for addressing the CSD problem in a systematic manner, the PCA-based technique helps in determining not only the suitable controlled variables but also the sufficient number of controlled variables required. Since the classical framework of partial control is not amenable to a systematic identification of controlled variables, a new framework of partial control is developed in this thesis. Within this new framework the dominant variable can be clearly defined, which in turn allows the incorporation of the PCA-based technique for the systematic identification of controlled variables. The application of the data-oriented approach is demonstrated on a nonlinear multivariable bioprocess case study, the two-stage continuous extractive (TSCE) alcoholic fermentation process. The system consists of 5 interlinked units: 2 bioreactors in series, a centrifuge, a vacuum flash vessel and a treatment tank. The comparison of the two-stage design with the single-stage design reported in the literature shows that: (1) both designs exhibit comparable performance in terms of the maximum allowable trade-off values between yield and productivity, and (2) the two-stage design exhibits stronger nonlinear behaviour than the single-stage design.
Thus, the design of control strategies for the former is expected to be more challenging. Various partial control strategies are developed for the case study, such as a basic partial control strategy, complete partial control strategies with and without a PID enhancement technique, and an optimal-size partial control strategy. Note that this system consists of 16 output variables and only 6 potential manipulated variables, which gives approximately 4,000,000 control structure alternatives. Therefore, the application of a mathematical approach relying on optimization is not practical for this case study: assuming that evaluation of each alternative takes 30 seconds of optimization time, complete screening would require almost 4 years to complete. Several new insights crystallize from the simulation study performed on the case study, two of which are most important from the perspective of effective design of a partial control strategy: 1) There is an optimal size of partial control structure; too many controlled variables can lead to the presence of a bottleneck control loop, which in turn can severely limit the dynamic response of the overall control system, while too few controlled variables can lead to unacceptable variation or loss in performance measures. 2) The nature of variable interaction depends on the choice of control structure. Thus, it is important to ensure that the nature of open-loop variable interaction is preserved by the implementation of a particular control strategy. When this is achieved, we say that the control system works synergistically with the inherent control capability of a given process, i.e. it achieves the synergistic external-inherent control system condition. The proposed approach has been successfully applied to the case study, where the optimal partial control structure is found to be 3x3, i.e. 3 controlled variables are sufficient to meet all 3 types of control objectives: overall (implicit) performance objectives, constraint objectives and inventory control objectives. Finally, the proposed approach effectively unifies the advantages of both the mathematical-oriented and heuristic-hierarchical approaches, while at the same time overcoming many limitations faced by these two mainstream approaches.
APA, Harvard, Vancouver, ISO, and other styles
22

Zhang, Yuyao. "Non-linear dimensionality reduction and sparse representation models for facial analysis." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0019/document.

Full text
Abstract:
Face analysis techniques commonly require a proper representation of images by means of dimensionality reduction leading to embedded manifolds, which aims at capturing relevant characteristics of the signals. In this thesis, we first provide a comprehensive survey of the state of the art of embedded manifold models. Then, we introduce a novel non-linear embedding method, the Kernel Similarity Principal Component Analysis (KS-PCA), into Active Appearance Models, in order to model face appearance under variable illumination. The proposed algorithm successfully outperforms the traditional linear PCA transform in capturing the salient features generated by different illuminations, and reconstructs the illuminated faces with high accuracy. We also consider the problem of automatically classifying human face poses from face views with varying illumination, as well as occlusion and noise. Based on sparse representation methods, we propose two dictionary-learning frameworks for this pose classification problem. The first framework is the Adaptive Sparse Representation pose Classification (ASRC). It trains the dictionary via a linear model called Incremental Principal Component Analysis (Incremental PCA), tending to decrease the intra-class redundancy which may affect the classification performance, while keeping the extra-class redundancy which is critical for sparse representation. The other proposed framework is the Dictionary-Learning Sparse Representation model (DLSR), which learns the dictionary so that it coincides with the classification criterion. This training goal is achieved by the K-SVD algorithm. In a series of experiments, we show the performance of the two dictionary-learning methods, which are respectively based on a linear transform and a sparse representation model. In addition, we propose a novel Dictionary Learning framework for Illumination Normalization (DL-IN). DL-IN is based on sparse representation in terms of coupled dictionaries. The dictionary pairs are jointly optimized from normally illuminated and irregularly illuminated face image pairs. We further utilize a Gaussian Mixture Model (GMM) to enhance the framework's capability of modeling data under complex distributions. The GMM adapts each model to a part of the samples and then fuses them together. Experimental results demonstrate the effectiveness of sparsity as a prior for patch-based illumination normalization of face images.
APA, Harvard, Vancouver, ISO, and other styles
23

Dickens, Peter Martin. "Facilitating Emergence: Complex, Adaptive Systems Theory and the Shape of Change." Antioch University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=antioch1339016565.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Gunnarsson, Fredrik. "Filtered Historical SimulationValue at Risk for Options : A Dimension Reduction Approach to Model the VolatilitySurface Shifts." Thesis, Umeå universitet, Institutionen för fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kpamegan, Neil Racheed. "Robust Principal Component Analysis." Thesis, American University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10784806.

Full text
Abstract:

In multivariate analysis, principal component analysis is a widely used method applied in many different fields. Although it has been shown extensively to work well when the data follow a multivariate normal distribution, classical PCA suffers when the data are heavy-tailed. Assuming instead that the data follow a stable distribution, we develop a modified PCA and show through simulations that it outperforms the classical version: it can be used for heavy-tailed data, it estimates the correct number of components more accurately, and it identifies the subspace spanned by the important components more accurately than classical PCA.
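The stable-distribution machinery of the thesis is not spelled out in the abstract, so the following is only a generic sketch of the underlying issue: on heavy-tailed data the eigenvalues of the classical covariance matrix are inflated by outliers, whereas a robust covariance estimate (here the Minimum Covariance Determinant, used as a stand-in and not the author's method) typically gives a cleaner spectrum.

```python
# Sketch: classical PCA eigenvalues vs. eigenvalues of a robust covariance
# estimate on heavy-tailed data. MinCovDet is a generic stand-in, not the
# stable-distribution method developed in the thesis.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
n, p = 500, 10
scores = rng.normal(size=(n, 2)) @ np.diag([5.0, 3.0])     # two informative directions
loadings = rng.normal(size=(2, p))
X = scores @ loadings + rng.standard_t(df=2, size=(n, p))   # heavy-tailed noise

evals_classical = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
evals_robust = np.linalg.eigvalsh(MinCovDet(random_state=0).fit(X).covariance_)[::-1]

print("classical eigenvalues:", np.round(evals_classical[:4], 2))
print("robust eigenvalues:   ", np.round(evals_robust[:4], 2))
# With heavy tails the classical spectrum is inflated and separates the two
# true components less clearly; the robust spectrum is typically cleaner.
```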

APA, Harvard, Vancouver, ISO, and other styles
26

Akinduko, Ayodeji Akinwumi. "Multiscale principal component analysis." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/36616.

Full text
Abstract:
The problem of approximating multidimensional data with objects of lower dimension is a classical problem in complexity reduction. It is important that the approximation capture the structure and dynamics of the data; however, the distortion introduced by many methods means that some geometric structure of the data may not be preserved. For methods that model the manifold of the data, the quality of the approximation depends crucially on the initialization of the method. The first part of this thesis investigates the effect of initialization on manifold modelling methods. Using Self-Organising Maps (SOM) as a case study, we compared the quality of learning for two popular initialization methods: random initialization and principal component initialization. To further understand the dynamics of manifold learning, the datasets were also classified into linear, quasilinear and nonlinear. The second part of this thesis focuses on revealing geometric structures in high-dimensional data using an extension of Principal Component Analysis (PCA). Feature extraction using PCA favours directions with large variance, which can obscure other interesting geometric structures present in the data. To reveal these intrinsic structures, we analysed the local PCA structures of the dataset. An equivalent definition of PCA is that it seeks subspaces that maximize the sum of squared pairwise distances between the projections of the data; extending this definition, we define localization in terms of scale as maximizing the sum of weighted squared pairwise distances between data projections for various distributions of weights (scales). Since for complex data different regions of the data space can have different PCA structures, we also define localization with respect to the data space. The resulting local PCA structures were represented by the projection matrices corresponding to the subspaces and analysed to reveal structures in the data at various localizations.
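The "equivalent definition" mentioned above rests on the identity that, for projected points z_1, ..., z_n, the sum of squared pairwise distances equals 2n times the sum of squared deviations from their mean, so maximizing pairwise distances of projections is the same as maximizing projected variance. A small numerical check on synthetic data (shapes and values are illustrative only):

```python
# Numerical check of the equivalence: the sum of squared pairwise distances of
# projected points equals 2n times their sum of squared deviations from the
# mean, so the leading PC also maximizes the pairwise-distance criterion.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))   # correlated toy data
Xc = X - X.mean(axis=0)

def pairwise_sq_sum(Z):
    d = Z[:, None, :] - Z[None, :, :]
    return np.sum(d ** 2)

# Leading principal direction from the SVD of the centred data.
u = np.linalg.svd(Xc, full_matrices=False)[2][0]
z = Xc @ u                                  # 1-D projections onto the first PC

lhs = pairwise_sq_sum(z[:, None])
rhs = 2 * len(z) * np.sum((z - z.mean()) ** 2)
print(np.isclose(lhs, rhs))                 # True: the two criteria coincide

# A random unit direction gives a smaller pairwise-distance sum.
v = rng.normal(size=5)
v /= np.linalg.norm(v)
print(pairwise_sq_sum((Xc @ v)[:, None]) <= lhs)   # True
```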
APA, Harvard, Vancouver, ISO, and other styles
27

Der, Ralf, Ulrich Steinmetz, Gerd Balzuweit, and Gerrit Schüürmann. "Nonlinear principal component analysis." Universität Leipzig, 1998. https://ul.qucosa.de/id/qucosa%3A34520.

Full text
Abstract:
We study the extraction of nonlinear data models in high-dimensional spaces with modified self-organizing maps. We present a general algorithm which maps low-dimensional lattices into high-dimensional data manifolds without violation of topology. The approach is based on a new principle exploiting the specific dynamical properties of the first order phase transition induced by the noise of the data. Moreover we present a second algorithm for the extraction of generalized principal curves comprising disconnected and branching manifolds. The performance of the algorithm is demonstrated for both one- and two-dimensional principal manifolds and also for the case of sparse data sets. As an application we reveal cluster structures in a set of real world data from the domain of ecotoxicology.
APA, Harvard, Vancouver, ISO, and other styles
28

Solat, Karo. "Generalized Principal Component Analysis." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/83469.

Full text
Abstract:
The primary objective of this dissertation is to extend the classical Principal Components Analysis (PCA), aiming to reduce the dimensionality of a large number of Normal interrelated variables, in two directions. The first is to go beyond the static (contemporaneous or synchronous) covariance matrix among these interrelated variables to include certain forms of temporal (over time) dependence. The second direction takes the form of extending the PCA model beyond the Normal multivariate distribution to the Elliptically Symmetric family of distributions, which includes the Normal, the Student's t, the Laplace and the Pearson type II distributions as special cases. The result of these extensions is called the Generalized principal component analysis (GPCA). The GPCA is illustrated using both Monte Carlo simulations as well as an empirical study, in an attempt to demonstrate the enhanced reliability of these more general factor models in the context of out-of-sample forecasting. The empirical study examines the predictive capacity of the GPCA method in the context of Exchange Rate Forecasting, showing how the GPCA method dominates forecasts based on existing standard methods, including the random walk models, with or without including macroeconomic fundamentals.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
29

Khwambala, Patricia Helen. "The importance of selecting the optimal number of principal components for fault detection using principal component analysis." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/11930.

Full text
Abstract:
Includes summary.
Includes bibliographical references.
Fault detection and isolation are the two fundamental building blocks of process monitoring. Accurate and efficient process monitoring increases plant availability and utilization. Principal component analysis is one of the statistical techniques used for fault detection, and the number of PCs retained plays a major role in detecting a fault with the PCA technique. This dissertation focuses on methods for determining the number of PCs to retain for accurate and effective fault detection in a laboratory thermal system. The SNR method, a relatively recent criterion, is compared with two commonly used alternatives: the cumulative percent variance (CPV) and scree-test methods.
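Of the criteria compared in the abstract, the cumulative percent variance (CPV) rule is the simplest to state: retain the smallest number of PCs whose cumulative explained variance reaches a chosen target. A minimal sketch, with an illustrative 90% target and synthetic process-like data (neither is taken from the thesis):

```python
# Sketch of the cumulative-percent-variance (CPV) rule for choosing how many
# PCs to retain. The 90 % target and the synthetic data are illustrative only.
import numpy as np

def n_components_cpv(X, target=0.90):
    """Smallest number of PCs whose cumulative explained variance >= target."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2   # proportional to variances
    cpv = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cpv, target) + 1)

rng = np.random.default_rng(3)
# Ten sensors driven by three latent factors plus noise, mimicking a small
# process-monitoring data set.
T = rng.normal(size=(400, 3))
X = T @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(400, 10))
print(n_components_cpv(X))        # typically 3 for this construction
```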
APA, Harvard, Vancouver, ISO, and other styles
30

Fučík, Vojtěch. "Principal component analysis in Finance." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-264205.

Full text
Abstract:
The main objective of this thesis is to summarize and, where possible, interconnect the existing methodology on principal component analysis, hierarchical clustering and topological organization in financial and economic networks, linear regression, and GARCH modelling. The clustering ability of PCA is compared with more conventional approaches on a set of world stock market index returns over different time periods, with the time division defined by the World Financial Crisis of 2007-2009. It is also examined whether the clustering of DJIA index components is driven by the industry sector to which the individual stocks belong. Combining PCA with classical linear regression yields principal components regression, which is then applied to forecasting the logarithmic returns of the German DAX 30 index using various macroeconomic and financial predictors. The correlation between the returns of two energy stocks, Chevron and ExxonMobil, is forecast using orthogonal (PCA-based) GARCH, and the resulting forecast is compared with predictions from conventional multivariate volatility models, EWMA and DCC GARCH.
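Principal components regression, one of the building blocks mentioned above, simply regresses the response on a few PCA scores of the predictors. A minimal sketch with synthetic data standing in for the macroeconomic and financial predictors used in the thesis:

```python
# Minimal principal-components-regression (PCR) sketch: PCA compresses the
# correlated predictors to a few scores, then ordinary least squares is fitted
# on those scores. All data are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
factors = rng.normal(size=(300, 3))                      # a few latent drivers
X = factors @ rng.normal(size=(3, 20)) + 0.2 * rng.normal(size=(300, 20))
y = factors @ np.array([0.5, -0.3, 0.2]) + 0.1 * rng.normal(size=300)

pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pcr.fit(X[:250], y[:250])
print("out-of-sample R^2:", round(pcr.score(X[250:], y[250:]), 3))   # close to 1 here
```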
APA, Harvard, Vancouver, ISO, and other styles
31

Wedlake, Ryan Stuart. "Robust principal component analysis biplots." Thesis, Link to the online version, 2008. http://hdl.handle.net/10019/929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Brennan, Victor L. "Principal component analysis with multiresolution." [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/ank7079/brennan%5Fdissertation.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains xi, 124 p.; also contains graphics. Vita. Includes bibliographical references (p. 120-123).
APA, Harvard, Vancouver, ISO, and other styles
33

Phala, Adeela Colyne. "Application of multivariate regression techniques to paint: for the quantitive FTIR spectroscopic analysis of polymeric components." Thesis, Cape Peninsula University of Technology, 2011. http://hdl.handle.net/20.500.11838/733.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Master of Technology: Chemistry, in the Faculty of Science. Supervisor: Professor T.N. van der Walt. Bellville campus. Date submitted: October 2011.
It is important to quantify the polymeric components in a coating because they greatly influence its performance. The difficulty with analysing polymers by Fourier transform infrared (FTIR) spectroscopy is that collinearities arise from similar or overlapping spectral features. A quantitative attenuated-total-reflectance FTIR method coupled to multivariate (chemometric) analysis is presented. It allows simultaneous quantification of three polymeric components, a rheology modifier, an organic opacifier and a styrene-acrylic binder, with no prior extraction or separation from the paint. The factor-based methods partial least squares (PLS) and principal component regression (PCR) accommodate collinearities by decomposing the spectral data into smaller matrices of principal scores and loading vectors. For model building, spectral information from calibration and validation samples in different analysis regions was incorporated. PCR and PLS were used to inspect the variation within the sample set, and the PLS algorithms were found to predict the polymeric components best. The concentrations of the polymeric components in a coating were predicted with the calibration model. Three PLS models, each with different analysis regions, yielded a coefficient of determination (R2) close to 1 for each of the components, and the root mean square errors of calibration (RMSEC) and of prediction (RMSEP) were less than 5%. The best output was obtained when spectral features of water were included (Trial 3). The prediction residuals for the three models ranged from -2 to 2 and from -10 to 10. The method allows paint samples to be analysed in pure form and opens many opportunities for other coating components to be analysed in the same way.
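The calibration/validation workflow described above, fitting PLS on calibration spectra and reporting RMSEC and RMSEP, can be sketched with scikit-learn's PLSRegression. The "spectra" and concentrations below are synthetic placeholders, not the FTIR data of the thesis:

```python
# Sketch of a PLS calibration workflow: fit on calibration spectra, report
# RMSEC (calibration) and RMSEP (prediction). Data are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n_cal, n_val, n_wavenumbers = 40, 15, 300
C_cal = rng.uniform(0, 1, size=(n_cal, 3))            # 3 polymeric components
C_val = rng.uniform(0, 1, size=(n_val, 3))
bands = rng.normal(size=(3, n_wavenumbers)) ** 2       # component "pure spectra"
A_cal = C_cal @ bands + 0.01 * rng.normal(size=(n_cal, n_wavenumbers))
A_val = C_val @ bands + 0.01 * rng.normal(size=(n_val, n_wavenumbers))

pls = PLSRegression(n_components=3).fit(A_cal, C_cal)
rmsec = np.sqrt(np.mean((pls.predict(A_cal) - C_cal) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(A_val) - C_val) ** 2))
print(f"RMSEC={rmsec:.4f}  RMSEP={rmsep:.4f}")
```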
APA, Harvard, Vancouver, ISO, and other styles
34

Jahirul, Md Islam. "Experimental and statistical investigation of Australian native plants for second-generation biodiesel production." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/83778/9/Jahirul_Islam_Thesis.pdf.

Full text
Abstract:
This work explores the potential of Australian native plants as a source of second-generation biodiesel for internal combustion engine applications. Biodiesels were evaluated from a number of non-edible oil seeds that grow naturally in Queensland, Australia. The quality of the produced biodiesels was investigated by several experimental and numerical methods. The research methodology and numerical model developed in this study can be used for a broad range of biodiesel feedstocks and for the future development of renewable native biodiesel in Australia.
APA, Harvard, Vancouver, ISO, and other styles
35

Cadima, Jorge Filipe Campinos Landerset. "Topics in descriptive Principal Component Analysis." Thesis, University of Kent, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Colin K. "Infrared face recognition." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FLee%5FColin.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2004.
Thesis advisor(s): Monique P. Fargues, Gamani Karunasiri. Includes bibliographical references (p. 135-136). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
37

Isaac, Benjamin. "Principal component analysis based combustion models." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209278.

Full text
Abstract:
Energy generation through the combustion of hydrocarbons continues to dominate as the most common method of energy generation; in the U.S., nearly 84% of energy consumption comes from the combustion of fossil fuels. Because of this demand there is a continued need for improvement, enhancement and understanding of the combustion process. As computational power increases and our methods for modelling these complex combustion systems improve, combustion modelling has become an important tool for gaining deeper insight into and understanding of these systems. The constant change in computational capability leads to a continual need for new combustion models that can take full advantage of the latest computational resources. To this end, the research presented here encompasses the development of new models that can be tailored to the available resources, allowing the modelling error to be increased or decreased according to the computational budget and the desired accuracy. Principal component analysis (PCA) is used to identify the low-dimensional manifolds that exist in turbulent combustion systems. These manifolds are unique in their ability to represent a higher-dimensional space with fewer components while adding minimal error. PCA is well suited to the problem at hand because it allows the user to define the amount of approximation error, depending on the resources available. The research presented here examines various methods that exploit the benefits of PCA in modelling combustion systems, demonstrating several models and providing new and interesting perspectives for PCA-based approaches to modelling turbulent combustion.
Doctorat en Sciences de l'ingénieur
info:eu-repo/semantics/nonPublished
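A simple way to make the "user-defined amount of error" idea in the abstract above concrete is to retain the fewest principal components whose reconstruction error stays within a chosen tolerance. The sketch below uses synthetic data and an illustrative 1% relative-error budget, not the thesis's combustion data or models:

```python
# Sketch of error-budgeted PCA truncation: retain the fewest components whose
# relative reconstruction error stays below a user-chosen tolerance.
import numpy as np
from sklearn.decomposition import PCA

def fit_within_budget(X, rel_tol=0.01):
    """Return a PCA keeping the fewest components with relative MSE <= rel_tol."""
    total = np.var(X - X.mean(axis=0))
    for k in range(1, X.shape[1] + 1):
        pca = PCA(n_components=k).fit(X)
        err = np.mean((X - pca.inverse_transform(pca.transform(X))) ** 2)
        if err / total <= rel_tol:
            return pca
    return pca

rng = np.random.default_rng(6)
latent = rng.normal(size=(1000, 4))                       # 4 underlying degrees of freedom
X = latent @ rng.normal(size=(4, 15)) + 0.05 * rng.normal(size=(1000, 15))
print(fit_within_budget(X, rel_tol=0.01).n_components_)   # typically 4
```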
APA, Harvard, Vancouver, ISO, and other styles
38

Alfonso, Miñambres Javier de. "Face recognition using principal component analysis." Master's thesis, Universidade de Aveiro, 2010. http://hdl.handle.net/10773/10221.

Full text
Abstract:
Mestrado em Engenharia Electrónica e Telecomunicações
The purpose of this dissertation was to analyze the image processing method known as Principal Component Analysis (PCA) and its performance when applied to face recognition. This algorithm spans a subspace (called the facespace) in which the faces in a database are represented with a reduced number of features (called feature vectors). The study focused on performing various exhaustive tests to analyze the conditions under which it is best to apply PCA. First, a facespace was spanned using the images of all the people in the database, and a new representation of each image was obtained by projecting it onto this facespace. We measured the distance between the projected test image and the other projections and took the closest test-train pair (nearest neighbour) as the recognized subject. This first way of applying PCA was evaluated with a Leave-One-Out test, which takes one image of the database as the test image and the rest to build the facespace, and repeats the process until every image has been used as the test image once, counting the successful recognitions. The second test was an 8-Fold Cross-Validation, which takes ten images as test images (there are 10 persons in the database with eight images each) and uses the rest to build the facespace; all test images in a fold are tested for recognition, and the next fold is then carried out until all eight folds are complete, giving a different set of results. The other way we used PCA was to span what we call Single Person Facespaces (SPFs), a group of subspaces each spanned by the images of a single person, and to measure distances between subspaces using the theory of principal angles. Since the database is small, a way to synthesize images from the existing ones was explored as a way of overcoming low recognition rates. All of these tests were performed for a series of thresholds (a variable that selects the number of feature vectors the facespaces are built with, i.e. the facespaces' dimension), and for the database after being preprocessed in two different ways in order to reduce statistically redundant information. The results obtained throughout the tests were in line with what is reported in the literature: success rates of around 85% in some cases. Special mention should be made of the large improvement for SPFs after extending the database with synthetic images. The results revealed that using PCA to project the images onto the group facespace is very accurate for face recognition, even with a small number of samples per subject. Comparing personal facespaces is more effective when we can synthesize images or have a natural way of acquiring new images of the subject, for example from video footage. The tests and results were obtained with custom software with a user interface, designed and programmed by the author of this dissertation.
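The group-facespace pipeline described above (span a PCA facespace, project each image, classify a test image by its nearest projected neighbour, and score with leave-one-out) can be sketched as follows; the images are random placeholders and the 20-component facespace dimension is an illustrative choice:

```python
# Sketch of the group-facespace pipeline: PCA "facespace" + 1-nearest-neighbour
# classification, evaluated leave-one-out. The face data are random placeholders
# standing in for flattened grey-scale crops (10 people, 8 images each).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_people, imgs_per_person, n_pixels = 10, 8, 32 * 32
X = rng.random((n_people * imgs_per_person, n_pixels))    # placeholder images
y = np.repeat(np.arange(n_people), imgs_per_person)

facespace_knn = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=1))
acc = cross_val_score(facespace_knn, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out recognition rate: {acc:.2%}")       # ~chance on random data
```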
APA, Harvard, Vancouver, ISO, and other styles
39

Brubaker, S. Charles. "Extensions of principal components analysis." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29645.

Full text
Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Santosh Vempala; Committee Member: Adam Kalai; Committee Member: Haesun Park; Committee Member: Ravi Kannan; Committee Member: Vladimir Koltchinskii. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
40

Broadbent, Lane David. "Recognition of Infrastructure Events Using Principal Component Analysis." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6197.

Full text
Abstract:
Information Technology systems generate system log messages to allow for the monitoring of the system. In increasingly large and complex systems the volume of log data can overwhelm the analysts tasked with monitoring these systems. A system was developed that utilizes Principal Component Analysis to assist the analyst in the characterization of system health and events. Once trained, the system was able to accurately identify a state of heavy load on a device with a low false positive rate. The system was also able to accurately identify an error condition when trained on a single event. The method employed is able to assist in the real time monitoring of large complex systems, increasing the efficiency of trained analysts.
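A common way to realize this kind of PCA-based event detection, sketched below under assumed synthetic features (per-window counts of log message types), is to fit PCA on normal operation and flag windows whose squared reconstruction error (the Q statistic) exceeds a threshold taken from the training residuals; the details are illustrative, not the system developed in the thesis:

```python
# Sketch of PCA-based event detection on log-derived features: fit PCA on
# normal operation, then flag windows whose squared reconstruction error
# (Q statistic) exceeds a threshold set from the training residuals.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
normal = rng.poisson(lam=20, size=(500, 12)).astype(float)     # 12 message types
anomaly = normal[-5:].copy()
anomaly[:, 3] += 200                                           # burst in one message type
X_train, X_test = normal[:400], np.vstack([normal[400:], anomaly])

pca = PCA(n_components=4).fit(X_train)

def q_stat(M):
    return np.sum((M - pca.inverse_transform(pca.transform(M))) ** 2, axis=1)

threshold = np.percentile(q_stat(X_train), 99.5)               # illustrative choice
flags = q_stat(X_test) > threshold
print("flagged windows:", np.flatnonzero(flags))               # the injected burst is flagged
```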
APA, Harvard, Vancouver, ISO, and other styles
41

Teixeira, Sérgio Coichev. "Utilização de análise de componentes principais em séries temporais." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-09052013-224741/.

Full text
Abstract:
The main objective of principal component analysis (PCA) is to reduce a set of observed variables to a smaller set of uncorrelated variables, called principal components, that help the researcher understand the variability and correlation structure of the original data. The technique is simple and widely used in many areas. The components are constructed from the covariance (or correlation) matrix; however, for data that are sequentially correlated in time, this matrix does not capture the autocorrelation structure, discarding information that is important for interpreting the components. In this work we study extensions of PCA that take autocorrelation into account. In particular, we explore principal component analysis in the frequency domain, which for autocorrelated data provides more specific and detailed results than classical PCA. We also study the SSA (Singular Spectrum Analysis) and MSSA (Multichannel Singular Spectrum Analysis) methods, in which principal component analysis is based on the correlation across time and across the observed variables. These techniques are widely used for atmospheric data sets to identify important characteristics and patterns, such as trend and periodicity.
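Basic SSA, as referenced above, embeds the series in a Hankel trajectory matrix, takes its SVD, and reconstructs components by anti-diagonal averaging. A compact sketch on a synthetic trend-plus-cycle series (window length and rank are illustrative choices):

```python
# Compact SSA sketch: build the Hankel trajectory matrix of the series, take
# its SVD, and reconstruct a low-rank component by anti-diagonal averaging.
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Reconstruct x from the leading `rank` SVD components of its trajectory matrix."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])   # window x k Hankel matrix
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Anti-diagonal averaging back to a series of length n.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        recon[j:j + window] += low_rank[:, j]
        counts[j:j + window] += 1
    return recon / counts

t = np.arange(240)
x = 0.02 * t + np.sin(2 * np.pi * t / 12) + 0.3 * np.random.default_rng(9).normal(size=240)
smooth = ssa_reconstruct(x, window=60, rank=3)        # trend + cycle, largely denoised
print(np.round(smooth[:5], 2))
```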
APA, Harvard, Vancouver, ISO, and other styles
42

Burka, Zak. "Perceptual audio classification using principal component analysis /." Online version of thesis, 2010. http://hdl.handle.net/1850/12247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Patak, Zdenek. "Robust principal component analysis via projection pursuit." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29737.

Full text
Abstract:
In principal component analysis (PCA), the principal components (PCs) are linear combinations of the variables that optimize some objective function. In the classical setup the objective function is the variance of the PCs, which can easily be upset by outlying observations; hence, Chen and Li (1985) proposed a robust alternative obtained by replacing the variance with an M-estimate of scale. This approach cannot achieve a high breakdown point (BP) and high efficiency at the same time. To obtain both, we propose to use MM- and τ-estimates in place of the M-estimate. Although outliers may cause bias in both the direction and the size of the PCs, Chen and Li looked at the scale bias only, whereas we consider both. All of the proposed robust methods are based on the minimization of a non-convex objective function, so a good initial starting point is required. With this in mind, we propose an orthogonal version of the least median of squares (Rousseeuw and Leroy, 1987) and a new method that is orthogonally equivariant, robust and easy to compute. An extensive Monte Carlo study shows promising results for the proposed method. Orthogonal regression and the detection of multivariate outliers are discussed as possible applications of PCA.
Science, Faculty of
Statistics, Department of
Graduate
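In the projection-pursuit formulation referenced in the abstract above, each principal direction maximizes a robust scale of the projected data instead of the variance. The toy sketch below uses the MAD and a crude random search as stand-ins; it illustrates the idea only and is not the M-, MM- or τ-estimators studied in the thesis:

```python
# Toy projection-pursuit PCA sketch: the first "robust PC" is the direction
# maximizing a robust scale (here the MAD) of the projected data, found by a
# crude random search. Illustration only, not the thesis's estimators.
import numpy as np

def mad(z):
    return np.median(np.abs(z - np.median(z)))

def first_robust_pc(X, n_trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    Xc = X - np.median(X, axis=0)
    best_dir, best_scale = None, -np.inf
    for _ in range(n_trials):
        a = rng.normal(size=X.shape[1])
        a /= np.linalg.norm(a)
        s = mad(Xc @ a)
        if s > best_scale:
            best_dir, best_scale = a, s
    return best_dir

rng = np.random.default_rng(10)
X = np.column_stack([rng.normal(scale=5, size=300), rng.normal(scale=1, size=300)])
X[:15] += np.array([0.0, 50.0])          # a few gross outliers along the minor axis
print(np.round(first_robust_pc(X), 2))   # close to (+-1, 0): outliers do not hijack it
```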
APA, Harvard, Vancouver, ISO, and other styles
44

Monahan, Adam Hugh. "Nonlinear principal component analysis of climate data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ48678.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Nilsson, Jakob, and Tim Lestander. "Detecting network failures using principal component analysis." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-132258.

Full text
Abstract:
This thesis investigates the efficiency of a methodology that first performs a Principal Component Analysis (PCA) and then applies a detection algorithm with a static threshold to identify potential network degradation and network attacks. A proof of concept of an online algorithm that uses the same methodology, except that the threshold is set from training data, is also presented and analyzed. The analysis and algorithms are applied to a large crowd-sourced dataset of Internet speed measurements, in this case from the crowd-based speed test application Bredbandskollen.se. The dataset is first analyzed at a basic level by looking at the correlations between the number of measurements and the average download speed for each day. Second, our PCA-based methodology is applied to the dataset, taking into account many factors, including the number of correlated measurements. The results of each analysis are compared and evaluated, and based on these results we give insights into how efficient the tested methods are and what improvements can be made to them.
APA, Harvard, Vancouver, ISO, and other styles
46

Dauwe, Alexander. "Principal component analysis of the yield curve." Master's thesis, NSBE - UNL, 2009. http://hdl.handle.net/10362/9439.

Full text
Abstract:
A Work Project, presented as part of the requirements for the Award of a Masters Degree in Finance from the NOVA – School of Business and Economics
This report deals with one of the remaining key problems in financial decision making: forecasting the term structure at different time horizons. Specifically, I forecast the Euro interest rate swap curve with a macro-factor-augmented autoregressive principal component model. The forecasts significantly outperform the random walk for medium to long horizons when a short rolling time window is used, and including macro factors leads to even better results.
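The factor-augmented autoregressive forecast described above can be sketched as: compress the curve into a few PCA scores, fit a simple autoregression to each score, and rebuild the forecast curve from the predicted scores. The swap-curve data below are synthetic stand-ins and the AR(1) fit is a deliberately minimal choice:

```python
# Sketch of a principal-component autoregressive forecast of a yield curve:
# PCA compresses daily curves into a few scores, an AR(1) is fitted to each
# score by least squares, and the one-step-ahead curve is rebuilt.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
n_days, n_tenors = 500, 10
level = np.cumsum(0.02 * rng.normal(size=n_days))
slope = np.cumsum(0.01 * rng.normal(size=n_days))
tenors = np.linspace(1, 10, n_tenors)
curves = (2.0 + level[:, None] + slope[:, None] * (tenors / 10)
          + 0.02 * rng.normal(size=(n_days, n_tenors)))

pca = PCA(n_components=3).fit(curves)
scores = pca.transform(curves)

def ar1_forecast(z):
    """One-step-ahead forecast from an AR(1) fitted by least squares."""
    phi = np.dot(z[:-1] - z.mean(), z[1:] - z.mean()) / np.sum((z[:-1] - z.mean()) ** 2)
    return z.mean() + phi * (z[-1] - z.mean())

next_scores = np.array([ar1_forecast(scores[:, j]) for j in range(scores.shape[1])])
next_curve = pca.inverse_transform(next_scores.reshape(1, -1))[0]
print(np.round(next_curve, 3))
```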
APA, Harvard, Vancouver, ISO, and other styles
47

Graner, Johannes. "On Asymptotic Properties of Principal Component Analysis." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Liubo Li. "Trend-Filtered Projection for Principal Component Analysis." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1503277234178696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Roy, Samita. "Pyrite oxidation in coal-bearing strata : controls on in-situ oxidation as a precursor of acid mine drainage formation." Thesis, Durham University, 2002. http://etheses.dur.ac.uk/3753/.

Full text
Abstract:
Pyrite oxidation in coal-bearing strata is recognised as the main precursor to Acid Mine Drainage (AMD) generation. Predicting AMD quality and quantity for remediation, or proposed extraction, requires assessment of interactions between oxidising fluids and pyrite, and between oxidation products and groundwater. Current predictive methods and models rarely account for individual mineral weathering rates, or their distribution within rock. Better constraints on the importance of such variables in controlling rock leachate are required to provide more reliable predictions of AMD quality. In this study, assumptions made during modelling of AMD generation were tested, including homogeneity of rock chemical and physical characteristics, controls on the rate of embedded pyrite oxidation, and oxidation front ingress. The main conclusions of this work are:
• The ingress of a pyrite oxidation front into coal-bearing strata depends on the dominant oxidant transport mechanism, pyrite morphology and rock pore-size distribution.
• Although pyrite oxidation rates predicted from rate laws and derived from experimental weathering of coal-bearing strata agree, uncertainty in the surface area of framboids produces at least an order of magnitude error in predicted rates.
• Pyrite oxidation products in partly unsaturated rock are removed to solution via a cycle of dissolution and precipitation at the water-rock interface. Dissolution mainly occurs along rock cleavage planes, as does diffusion of dissolved oxidant.
• Significant variance of whole-seam S and pyrite wt % existed over a 30 m exposure of an analysed coal seam. Assuming a seam-mean pyrite wt % to predict net acid-producing potential for coal and shale seams may be unsuitable, at this scale at least.
• Seasonal variation in AMD discharge chemistry indicates that base-flow is not necessarily representative of extreme poor-quality leachate. Summer and winter storms, following relatively dry periods, tended to release the greatest volume of pyrite oxidation products.
APA, Harvard, Vancouver, ISO, and other styles
50

Janeček, David. "Sdružená EEG-fMRI analýza na základě heuristického modelu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221334.

Full text
Abstract:
The master's thesis deals with joint EEG-fMRI analysis based on a heuristic model that describes the relationship between changes in blood flow in active brain areas and the electrical activity of neurons. The work also discusses various methods for extracting useful information from the EEG and their influence on the final result of the joint analysis. Averaging over electrodes of interest, decomposition by principal component analysis (PCA) and decomposition by independent component analysis (ICA) were tested. The averaging and PCA approaches give similar results, but information about the stimulus vector cannot be extracted. Using ICA decomposition, we are able to obtain information relating to a particular stimulation, but the final interpretation and the selection of the right components remain a problem in a blind search for variability coupled with the experiment. It was found that although the components calculated from the EEG time course are mutually independent, their spectral shifts are correlated. This spectral dependence was removed by PCA/ICA decomposition of the vectors of spectral shifts; with this method, each component brings new information about brain activity. The results of the heuristic approach were compared with the results of the joint analysis based on the relative and absolute power in frequency bands of interest, and similarity between the activation maps was found, especially for the heuristic model and the relative power in the gamma band (20-40 Hz).
APA, Harvard, Vancouver, ISO, and other styles