Doctoral dissertations on the topic "Principal component analysis"

Consult the top 50 doctoral dissertations on the topic "Principal component analysis".


1

Nunes, Madalena Baioa Paraíso. "Portfolio selection : a study using principal component analysis". Master's thesis, Instituto Superior de Economia e Gestão, 2017. http://hdl.handle.net/10400.5/14598.

Abstract:
Master's in Finance
In this thesis we apply principal component analysis to the Portuguese stock market using the constituents of the PSI-20 index from July 2008 to December 2016. The first seven principal components were retained, as we verified that these represented the major risk sources in this specific market. Seven principal portfolios were constructed and we compared them with other allocation strategies. The 1/N portfolio (with an equal investment in each of the 26 stocks), the PPEqual portfolio (with an equal investment in each of the 7 principal portfolios) and the MV portfolio (based on Markowitz's (1952) mean-variance strategy) were constructed. We concluded that these last two portfolios presented the best results in terms of return and risk, with PPEqual portfolio being more suitable for an investor with a greater degree of risk aversion and the MV portfolio more suitable for an investor willing to risk more in favour of higher returns. Regarding the level of risk, PPEqual is the portfolio with the best results and, so far, no other portfolio has presented similar values. Therefore, we found an equally-weighted portfolio among all the principal portfolios we built, which was the most risk efficient.
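The principal-portfolio construction this abstract describes can be sketched in a few lines. The sketch below is illustrative only: it uses synthetic returns rather than the PSI-20 data, and keeps the unit-norm eigenvector weights rather than the portfolio normalisation an actual study would need.

```python
import numpy as np

# Synthetic daily returns: 500 days, 26 assets (stand-in for the PSI-20 constituents).
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(500, 26))

# Eigendecompose the sample covariance; each eigenvector defines a
# "principal portfolio" whose returns are mutually uncorrelated.
cov = np.cov(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
weights = eigvec[:, order][:, :7]   # unit-norm weights of the 7 principal portfolios

pp_returns = returns @ weights      # daily returns of the principal portfolios
ppequal = pp_returns.mean(axis=1)   # PPEqual: equal investment in each principal portfolio
```

Because the weights are eigenvectors of the covariance matrix, the principal portfolios are uncorrelated by construction, which is what makes them a natural decomposition of market risk.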
2

Kpamegan, Neil Racheed. "Robust Principal Component Analysis". Thesis, American University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10784806.

Abstract:

In multivariate analysis, principal component analysis is a widely used method that appears in many different fields. Although it has been shown extensively to work well when data follow a multivariate normal distribution, classical PCA suffers when data are heavy-tailed. Assuming the data follow a stable distribution, we show through simulations that a modified PCA is better suited to heavy-tailed data: it estimates the correct number of components more accurately than classical PCA and more accurately identifies the subspace spanned by the important components.
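The stable-distribution method of the thesis is not reproduced here, but the failure mode it targets is easy to demonstrate. The sketch below uses the spatial-sign covariance matrix, a different but standard robustification of PCA, on synthetic heavy-tailed (Student-t) data.

```python
import numpy as np

def spatial_sign_pca(X):
    """PCA on the spatial-sign covariance: project each (robustly centred)
    observation onto the unit sphere before forming the covariance,
    damping the influence of heavy-tailed outliers."""
    Z = X - np.median(X, axis=0)                  # robust centring
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    S = Z / np.where(norms == 0, 1, norms)        # spatial signs
    C = S.T @ S / len(S)
    eigval, eigvec = np.linalg.eigh(C)
    order = np.argsort(eigval)[::-1]
    return eigval[order], eigvec[:, order]

# Heavy-tailed data: Student-t with 2 degrees of freedom along one direction.
rng = np.random.default_rng(1)
t = rng.standard_t(df=2, size=(1000, 1))
X = t @ np.array([[1.0, 0.5, 0.0]]) + 0.1 * rng.normal(size=(1000, 3))

vals, vecs = spatial_sign_pca(X)
```

The leading eigenvector recovers the dominant direction even though the t(2) distribution has infinite variance, where the classical sample covariance is unstable.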

3

Akinduko, Ayodeji Akinwumi. "Multiscale principal component analysis". Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/36616.

Abstract:
The problem of approximating multidimensional data with objects of lower dimension is a classical problem in complexity reduction. It is important that a data approximation capture the structure and dynamics of the data; however, the distortion introduced by many approximation methods means that some geometric structures of the data may not be preserved. For methods that model the manifold of the data, the quality of approximation depends crucially on the initialization of the method. The first part of this thesis investigates the effect of initialization on manifold modelling methods. Using Self-Organising Maps (SOM) as a case study, we compared the quality of learning of manifold methods for two popular initialization methods: random initialization and principal component initialization. To further understand the dynamics of manifold learning, datasets were classified into linear, quasilinear and nonlinear. The second part of this thesis focuses on revealing geometric structures in high-dimensional data using an extension of Principal Component Analysis (PCA). Feature extraction using PCA favours directions with large variance, which can obscure other interesting geometric structures present in the data. To reveal these intrinsic structures, we analysed the local PCA structures of the dataset. An equivalent definition of PCA is that it seeks subspaces that maximize the sum of pairwise distances between data projections; extending this definition, we define localization in terms of scale as maximizing the sum of weighted squared pairwise distances between data projections for various distributions of weights (scales). Since, for complex data, different regions of the data space can have different PCA structures, we also define localization with respect to the data space.
The resulting local PCA structures were represented by the projection matrix corresponding to the subspaces and analysed to reveal some structures in the data at various localizations.
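The "weighted squared pairwise distances" definition of PCA mentioned in the abstract can be checked numerically. The sketch below (synthetic data; uniform weights as the sanity case) recovers the ordinary first principal component, and non-uniform weight matrices would give the localized variants the abstract defines.

```python
import numpy as np

def pairwise_pca(X, W):
    """Direction u maximising sum_ij W[i,j] * ((x_i - x_j) @ u)**2,
    computed as the top eigenvector of
    Q = sum_ij W[i,j] (x_i - x_j)(x_i - x_j)^T."""
    n, p = X.shape
    Q = np.zeros((p, p))
    for i in range(n):
        D = X - X[i]                       # all differences x_j - x_i
        Q += (W[i][:, None] * D).T @ D
    eigval, eigvec = np.linalg.eigh(Q)
    return eigvec[:, -1]                   # eigh sorts ascending; take the top

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4)) @ np.diag([3.0, 1.0, 0.5, 0.2])

# Uniform weights reduce to ordinary PCA's first component.
u = pairwise_pca(X, np.ones((200, 200)))
```

With uniform weights the matrix Q equals 2n times the scatter about the mean, so its top eigenvector coincides (up to sign) with the classical first PC.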
4

Der, Ralf, Ulrich Steinmetz, Gerd Balzuweit and Gerrit Schüürmann. "Nonlinear principal component analysis". Universität Leipzig, 1998. https://ul.qucosa.de/id/qucosa%3A34520.

Abstract:
We study the extraction of nonlinear data models in high-dimensional spaces with modified self-organizing maps. We present a general algorithm which maps low-dimensional lattices into high-dimensional data manifolds without violation of topology. The approach is based on a new principle exploiting the specific dynamical properties of the first order phase transition induced by the noise of the data. Moreover we present a second algorithm for the extraction of generalized principal curves comprising disconnected and branching manifolds. The performance of the algorithm is demonstrated for both one- and two-dimensional principal manifolds and also for the case of sparse data sets. As an application we reveal cluster structures in a set of real world data from the domain of ecotoxicology.
5

Solat, Karo. "Generalized Principal Component Analysis". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/83469.

Abstract:
The primary objective of this dissertation is to extend classical Principal Component Analysis (PCA), which aims to reduce the dimensionality of a large number of normally distributed, interrelated variables, in two directions. The first is to go beyond the static (contemporaneous or synchronous) covariance matrix among these interrelated variables to include certain forms of temporal (over-time) dependence. The second extends the PCA model beyond the Normal multivariate distribution to the elliptically symmetric family of distributions, which includes the Normal, the Student's t, the Laplace and the Pearson type II distributions as special cases. The result of these extensions is called Generalized Principal Component Analysis (GPCA). The GPCA is illustrated using both Monte Carlo simulations and an empirical study, in an attempt to demonstrate the enhanced reliability of these more general factor models in the context of out-of-sample forecasting. The empirical study examines the predictive capacity of the GPCA method in the context of exchange rate forecasting, showing how it dominates forecasts based on existing standard methods, including random walk models, with or without macroeconomic fundamentals.
6

Fučík, Vojtěch. "Principal component analysis in Finance". Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-264205.

Abstract:
The main objective of this thesis is to summarize and, where possible, interconnect the existing methodology on principal component analysis, hierarchical clustering and topological organization in financial and economic networks, linear regression and GARCH modeling. The clustering ability of PCA is compared with more conventional approaches on a set of world stock market index returns in different time periods, where the time division is marked by the financial crisis of 2007-2009. It is also examined whether the clustering of DJIA index components is driven by the industry sector to which the individual stocks belong. Joining PCA with classical linear regression yields principal components regression, which is then applied to forecasting the logarithmic returns of the German DAX 30 index using various macroeconomic and financial predictors. The correlation between the returns of two energy stocks, Chevron and ExxonMobil, is forecast using orthogonal (PCA) GARCH. The constructed forecast is then compared with predictions from conventional multivariate volatility models: EWMA and DCC GARCH.
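Principal components regression, which this abstract combines with PCA for the DAX forecasting exercise, reduces to two steps: project the predictors onto their leading principal components, then run least squares on the scores. A minimal sketch on synthetic collinear data (not the thesis's DAX predictors):

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal components regression: PCA on centred predictors,
    then ordinary least squares on the first k component scores."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                      # (p, k) loadings
    scores = Xc @ V                   # (n, k) component scores
    beta, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    return mu, y.mean(), V, beta

def pcr_predict(model, Xnew):
    mu, ybar, V, beta = model
    return ybar + (Xnew - mu) @ V @ beta

# Collinear predictors: x2 is nearly a copy of x1, which breaks plain OLS stability.
rng = np.random.default_rng(3)
x1 = rng.normal(size=300)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=300), rng.normal(size=300)])
y = 2 * x1 + 0.1 * rng.normal(size=300)

model = pcr_fit(X, y, k=2)
yhat = pcr_predict(model, X)
```

Dropping the near-degenerate direction (k=2 instead of 3) discards almost no predictive information here, which is the usual motivation for PCR under collinearity.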
7

Wedlake, Ryan Stuart. "Robust principal component analysis biplots". Thesis, 2008. http://hdl.handle.net/10019/929.

8

Brennan, Victor L. "Principal component analysis with multiresolution". [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/ank7079/brennan%5Fdissertation.pdf.

Abstract:
Thesis (Ph. D.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains xi, 124 p.; also contains graphics. Vita. Includes bibliographical references (p. 120-123).
9

Cadima, Jorge Filipe Campinos Landerset. "Topics in descriptive Principal Component Analysis". Thesis, University of Kent, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314686.

10

Isaac, Benjamin. "Principal component analysis based combustion models". Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209278.

Abstract:
Energy generation through combustion of hydrocarbons continues to dominate as the most common method for energy generation. In the U.S., nearly 84% of energy consumption comes from the combustion of fossil fuels. Because of this demand there is a continued need for improvement, enhancement and understanding of the combustion process. As computational power increases and our methods for modelling these complex combustion systems improve, combustion modelling has become an important tool for gaining deeper insight into these complex systems. The constant state of change in computational ability leads to a continual need for new combustion models that can take full advantage of the latest computational resources. To this end, the research presented here encompasses the development of new models which can be tailored to the available resources, allowing one to increase or decrease the amount of modelling error based on the available computational resources and the desired accuracy. Principal component analysis (PCA) is used to identify the low-dimensional manifolds which exist in turbulent combustion systems. These manifolds are unique in their ability to represent a larger-dimensional space with fewer components while introducing minimal error. PCA is well suited to the problem at hand because it allows the user to define the amount of approximation error, depending on the resources at hand. The research presented here investigates various methods which exploit the benefits of PCA in modelling combustion systems, demonstrating several models and providing new and interesting perspectives on PCA-based approaches to modelling turbulent combustion.
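The user-controlled trade-off between retained components and modelling error that the abstract refers to can be illustrated by the rank-k PCA reconstruction error. The sketch below uses a synthetic state space with three latent variables, not actual combustion data.

```python
import numpy as np

def reconstruction_error(X, k):
    """Mean squared error of approximating X by its rank-k PCA
    reconstruction; shrinks as more components are retained."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Xk = mu + (U[:, :k] * s[:k]) @ Vt[:k]
    return float(np.mean((X - Xk) ** 2))

# Synthetic "thermochemical state": 10 observed variables driven by 3 latents.
rng = np.random.default_rng(9)
Z = rng.normal(size=(500, 3))
X = Z @ rng.normal(size=(3, 10)) + 0.05 * rng.normal(size=(500, 10))

errs = [reconstruction_error(X, k) for k in range(1, 6)]
```

The error drops steeply up to the intrinsic dimension (3 here) and then flattens at the noise floor, which is exactly the knob the abstract describes: pick k to meet an error budget given the available resources.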
Doctorate in Engineering Sciences
11

Alfonso, Miñambres Javier de. "Face recognition using principal component analysis". Master's thesis, Universidade de Aveiro, 2010. http://hdl.handle.net/10773/10221.

Abstract:
Master's in Electronics and Telecommunications Engineering
The purpose of this dissertation was to analyze the image processing method known as Principal Component Analysis (PCA) and its performance when applied to face recognition. This algorithm spans a subspace (called the facespace) where the faces in a database are represented with a reduced number of features (called feature vectors). The study focused on performing exhaustive tests to determine under which conditions it is best to apply PCA. First, a facespace was spanned using the images of all the people in the database. We then obtained a new representation of each image by projecting it onto this facespace. We measured the distance between the projected test image and the other projections and determined that the closest test-train couple (k-Nearest Neighbour) identified the recognized subject. This first way of applying PCA was evaluated with the Leave-One-Out test, which takes one image of the database for testing and the rest to build the facespace, and repeats the process until every image has been used as a test image once, adding up the successful recognitions. The second test was an 8-Fold Cross-Validation, which takes ten images as eligible test images (there are 10 persons in the database with eight images each) and uses the rest to build the facespace; all test images are tested for recognition in each fold, until all eight folds are complete. The other way to use PCA was to span what we call Single Person Facespaces (SPFs, a group of subspaces, each spanned with images of a single person) and to measure subspace distance using the theory of principal angles. Since the database is small, a way to synthesize images from the existing ones was explored to overcome low recognition rates. All of these tests were performed for a series of thresholds (a variable which selects the number of feature vectors the facespaces are built with, i.e. the facespaces' dimension), and for the database after being preprocessed in two different ways in order to reduce statistically redundant information. The results obtained were within what can be expected from the literature: success rates of around 85% in some cases. Special mention must be made of the great improvement in results for SPFs after extending the database with synthetic images. The results revealed that using PCA to project the images onto the group facespace is very accurate for face recognition, even with a small number of samples per subject. Comparing personal facespaces is more effective when we can synthesize images or have a natural way of acquiring new images of the subject, for example from video footage. The tests and results were obtained with custom software with a user interface, designed and programmed by the author of this dissertation.
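The facespace-plus-nearest-neighbour pipeline can be sketched as follows. The "faces" here are synthetic random vectors standing in for flattened images, and, for brevity, the facespace is fitted once on all images rather than refitted inside each leave-one-out fold as the dissertation does.

```python
import numpy as np

def fit_facespace(images, k):
    """Span a k-dimensional facespace (PCA basis) from flattened images."""
    mean = images.mean(axis=0)
    _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, Vt[:k].T                      # mean face, k eigenfaces

def nearest_neighbour(train_proj, train_labels, test_proj):
    """1-NN classifier in facespace coordinates."""
    d = np.linalg.norm(train_proj - test_proj, axis=1)
    return train_labels[np.argmin(d)]

# Synthetic "faces": 3 subjects, 8 noisy images of each 64-pixel prototype.
rng = np.random.default_rng(4)
protos = 3.0 * rng.normal(size=(3, 64))
images = np.vstack([p + 0.3 * rng.normal(size=(8, 64)) for p in protos])
labels = np.repeat(np.arange(3), 8)

mean, basis = fit_facespace(images, k=5)
proj = (images - mean) @ basis

# Leave-one-out over the projections: hold each image out, 1-NN on the rest.
preds = np.array([nearest_neighbour(np.delete(proj, i, axis=0),
                                    np.delete(labels, i), proj[i])
                  for i in range(len(images))])
accuracy = (preds == labels).mean()
```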
12

Roveroni, Alessandro. "Principal Component Analysis on ESG data". Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/19941.

Abstract:
The objective of the dissertation is to perform a principal component analysis on ESG data. The ESG database is provided by the ESG-Credit.eu project and contains information on 11,104 firms across Europe. The research focuses on companies residing in France, Italy and Germany. The data in the database are extracted from three different sources: Bloomberg, Thomson Reuters Eikon and CDP. The number of measures collected is 609, distributed among the three main pillars of the ESG score: Environment, Social and Governance. In the 21st century, ESG has played a crucial role in the financial environment, especially in the selection of investments and the analysis of financial performance. The main purpose of the analysis is therefore to determine how clear and transparent the information provided by the ESG data is, and which sub-components have the highest impact on each pillar (E, S, G) and on the overall score. By analysing the principal components, it is also possible to determine whether the information contributed by each pillar is redundant. Finally, a principal component analysis of the data before and after the Paris Agreement on Climate Change, signed in 2016, is performed in order to evaluate the impact of the agreement on ESG data and any change in the influence of the sub-components on the overall score.
13

Burka, Zak. "Perceptual audio classification using principal component analysis". Online version of thesis, 2010. http://hdl.handle.net/1850/12247.

14

Patak, Zdenek. "Robust principal component analysis via projection pursuit". Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29737.

Abstract:
In principal component analysis (PCA), the principal components (PCs) are linear combinations of the variables that minimize some objective function. In the classical setup the objective function is the variance of the PCs. The variance of the PCs can easily be upset by outlying observations; hence, Chen and Li (1985) proposed a robust alternative for the PCs obtained by replacing the variance with an M-estimate of scale. This approach cannot achieve a high breakdown point (BP) and efficiency at the same time. To obtain both high BP and efficiency, we propose to use MM- and τ-estimates in place of the M-estimate. Although outliers may cause bias in both the direction and the size of the PCs, Chen and Li looked at the scale bias only, whereas we consider both. All proposed robust methods are based on the minimization of a non-convex objective function; hence, a good initial starting point is required. With this in mind, we propose an orthogonal version of the least median of squares (Rousseeuw and Leroy, 1987) and a new method that is orthogonally equivariant, robust and easy to compute. An extensive Monte Carlo study shows promising results for the proposed method. Orthogonal regression and detection of multivariate outliers are discussed as possible applications of PCA.
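A crude version of the projection-pursuit idea (a robust scale in place of the variance) can be sketched with a random direction search and the MAD. This illustrates the principle only; it is not the MM-/τ-estimates or the orthogonalised least-median-of-squares start that the thesis proposes.

```python
import numpy as np

def pp_robust_pc1(X, n_dirs=5000, seed=0):
    """Projection-pursuit first PC: among random unit directions, pick
    the one maximising a robust scale (MAD) of the projected data,
    instead of the outlier-sensitive variance."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = X @ dirs.T                               # (n, n_dirs) projections
    med = np.median(proj, axis=0)
    mad = np.median(np.abs(proj - med), axis=0)     # robust scale per direction
    return dirs[np.argmax(mad)]

# Clean structure along the first axis, plus gross outliers along the second.
rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 3)) * np.array([3.0, 1.0, 1.0])
X[:30, 1] += 50.0                                   # 3% contamination

u = pp_robust_pc1(X)
```

The robust search still finds the clean high-variance axis, whereas the classical first PC is pulled onto the contaminated coordinate.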
15

Monahan, Adam Hugh. "Nonlinear principal component analysis of climate data". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ48678.pdf.

16

Nilsson, Jakob, and Tim Lestander. "Detecting network failures using principal component analysis". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-132258.

Abstract:
This thesis investigates the efficiency of a methodology that first performs a Principal Component Analysis (PCA) and then applies a threshold-based algorithm with a static threshold to detect potential network degradation and network attacks. A proof of concept of an online algorithm using the same methodology, except that training data is used to set the threshold, is then presented and analyzed. The analysis and algorithms are applied to a large crowd-sourced dataset of Internet speed measurements, in this case from the crowd-based speed test application Bredbandskollen.se. The dataset is first analyzed at a basic level by looking at the correlations between the number of measurements and the average download speed for each day. Second, our PCA-based methodology is applied to the dataset, taking into account many factors, including the number of correlated measurements. The results from each analysis are compared and evaluated. Based on the results, we give insights into how efficient the tested methods are and what improvements can be made to them.
17

Dauwe, Alexander. "Principal component analysis of the yield curve". Master's thesis, NSBE - UNL, 2009. http://hdl.handle.net/10362/9439.

Abstract:
A Work Project, presented as part of the requirements for the Award of a Masters Degree in Finance from the NOVA – School of Business and Economics
This report deals with one of the remaining key problems in financial decision making: forecasting the term structure at different time horizons. Specifically, I forecast the Euro interest rate swap curve with a macro-factor-augmented autoregressive principal component model. The forecasts significantly outperform the random walk for medium- to long-term horizons when a short rolling time window is used; including macro factors leads to even better results.
18

Graner, Johannes. "On Asymptotic Properties of Principal Component Analysis". Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420649.

19

Li, Liubo Li. "Trend-Filtered Projection for Principal Component Analysis". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1503277234178696.

20

Broadbent, Lane David. "Recognition of Infrastructure Events Using Principal Component Analysis". BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6197.

Abstract:
Information Technology systems generate system log messages to allow for the monitoring of the system. In increasingly large and complex systems the volume of log data can overwhelm the analysts tasked with monitoring these systems. A system was developed that utilizes Principal Component Analysis to assist the analyst in the characterization of system health and events. Once trained, the system was able to accurately identify a state of heavy load on a device with a low false positive rate. The system was also able to accurately identify an error condition when trained on a single event. The method employed is able to assist in the real time monitoring of large complex systems, increasing the efficiency of trained analysts.
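One common way to turn PCA into an event detector of the kind this abstract describes is the squared prediction error (SPE): residual energy outside the retained components flags samples that break the normal correlation structure. The sketch below trains on synthetic "normal load" metrics (hypothetical data, with an empirical 99% control limit standing in for the usual analytic limits).

```python
import numpy as np

def fit_monitor(Xtrain, k):
    """Fit a PCA monitor on normal-operation data; alarms are raised on
    squared prediction error (SPE) outside the first k components."""
    mu, sd = Xtrain.mean(axis=0), Xtrain.std(axis=0)
    Z = (Xtrain - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:k].T
    spe = ((Z - Z @ P @ P.T) ** 2).sum(axis=1)
    limit = np.quantile(spe, 0.99)            # empirical 99% control limit
    return mu, sd, P, limit

def spe_alarm(model, x):
    mu, sd, P, limit = model
    z = (x - mu) / sd
    spe = ((z - z @ P @ P.T) ** 2).sum()
    return spe > limit

# Normal operation: four metrics driven by one latent load factor.
rng = np.random.default_rng(6)
load = rng.normal(size=(2000, 1))
Xtrain = load @ np.array([[1.0, 0.8, 1.2, 0.5]]) + 0.1 * rng.normal(size=(2000, 4))

model = fit_monitor(Xtrain, k=1)
normal = load[0, 0] * np.array([1.0, 0.8, 1.2, 0.5])
fault = normal + np.array([0.0, 3.0, 0.0, 0.0])   # one metric breaks correlation
```

Note the fault is not extreme in any single metric's range; it is the broken correlation between metrics that the residual picks up, which is what makes PCA monitors effective on high-volume log data.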
21

Khwambala, Patricia Helen. "The importance of selecting the optimal number of principal components for fault detection using principal component analysis". Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/11930.

Abstract:
Fault detection and isolation are the two fundamental building blocks of process monitoring. Accurate and efficient process monitoring increases plant availability and utilization. Principal component analysis is one of the statistical techniques used for fault detection, and determining the number of PCs to retain plays a major role in detecting a fault with the PCA technique. This dissertation focuses on methods of determining the number of PCs to retain for accurate and effective fault detection in a laboratory thermal system. The signal-to-noise ratio (SNR) method, a relatively recent approach, is compared with two commonly used methods: the cumulative percent variance (CPV) and scree test methods.
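Of the selection rules compared, the CPV rule is the simplest to state: retain the smallest k whose cumulative explained variance crosses a chosen threshold. A sketch on synthetic sensor data with two latent factors by construction:

```python
import numpy as np

def n_components_cpv(X, threshold=0.90):
    """Cumulative percent variance (CPV) rule: smallest number of PCs
    whose eigenvalues explain at least `threshold` of total variance."""
    eigval = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending
    cpv = np.cumsum(eigval) / eigval.sum()
    return int(np.searchsorted(cpv, threshold) + 1)

# Six noisy sensors driven by two orthogonal latent factors.
rng = np.random.default_rng(7)
F = rng.normal(size=(1000, 2))
L = np.array([[2.0, 2.0, 0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0, 2.0, -1.0, 1.0]])
X = F @ L + 0.3 * rng.normal(size=(1000, 6))

k = n_components_cpv(X, 0.90)
```

With low sensor noise the rule recovers the true factor count; the thesis's point is that rules like this (versus scree or SNR) can disagree, and the choice matters for fault detection.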
22

Chen, Shaokang. "Robust discriminative principal component analysis for face recognition". [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18934.pdf.

23

Dimitrov, Darko. "Geometric applications of principal component analysis". Berlin: Freie Universität Berlin, 2009. http://d-nb.info/102346392X/34.

24

Binongo, Jose Nilo G. "Stylometry and its implementation by principal component analysis". Thesis, University of Ulster, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311585.

25

Tan, Murat Hasan. "Principal component analysis for signal-based system identification". Thesis, University of Southampton, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430735.

26

Kharva, Mohamed. "Monitoring of froth systems using principal component analysis". Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52945.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2002.
ENGLISH ABSTRACT: Flotation is notorious for its susceptibility to process upsets and consequently its poor performance, making successful flotation control systems an elusive goal. The control of industrial flotation plants is often based on the visual appearance of the froth phase, and depends to a large extent on the experience and ability of a human operator. Machine vision systems provide a novel solution to several of the problems encountered in conventional flotation systems for monitoring and control. The rapid development in computer vision, computational resources and artificial intelligence, and the integration of these technologies, are creating new possibilities in the design and implementation of commercial machine vision systems for the monitoring and control of flotation plants. Current machine vision systems are available, but not without shortcomings. These systems cannot deal with fine froths, where the bubbles are very small, owing to the segmentation techniques they employ. These segmentation techniques are cumbersome and computationally expensive, making them slow in real-time operation. The approach followed in this work uses neural networks to solve the problems mentioned above. Neural networks are able to extract information from images of the froth phase without regard to the type and structure of the froth. The parallel processing capability of neural networks, their ease of implementation and the advantages of supervised or unsupervised training make them potentially suited to real-time industrial machine vision systems. In principle, neural network models can be implemented in an adaptive manner, so that changes in the characteristics of processes are taken into account. This work documents the development of linear and non-linear principal component models, which can be used in a real-time machine vision system for the monitoring and control of froth flotation systems.
Features from froth images of flotation processes were extracted via linear and non-linear principal component analysis. Conventional linear principal component analysis and three-layer autoassociative neural networks were used to extract linear principal components from froth images. Non-linear principal components were extracted by three- and five-layer autoassociative neural networks, as well as by localised principal component analysis based on k-means clustering. Three principal components were extracted for each image, with the correlation coefficient used as a measure of the amount of variance captured by each component. The principal components were used to classify the froth images; a probabilistic neural network and a feedforward neural network classifier were developed for this purpose. Multivariate statistical process control models were developed using the linear and non-linear principal component models: Hotelling's T² statistic and the squared prediction error were used in the development of multivariate control charts. It was found that the first three features extracted with autoassociative neural networks captured more variance in froth images than conventional linear principal components, and that the features extracted by the five-layer autoassociative neural networks classified froth images more accurately than features extracted by conventional linear principal component analysis and three-layer autoassociative neural networks. As applied, localised principal component analysis proved ineffective, owing to difficulties with clustering the high-dimensional image data.
Finally, the use of multivariate statistical process control models to detect deviations from normal plant operations is discussed, and it is shown that Hotelling's T² and squared-prediction-error control charts are able to clearly identify non-conforming plant behaviour.
AFRIKAANS ABSTRACT: Flotation is notorious for being susceptible to process disturbances and therefore often performing poorly; successful flotation control systems remain an elusive goal. The control of industrial flotation plants is often based on the visual appearance of the froth phase and depends to a large extent on the experience and skill of the human operator. Machine vision systems provide an ingenious solution to several of the problems encountered in conventional flotation systems with respect to monitoring and control. The rapid development of computer vision, computing resources and artificial intelligence, as well as the integration of these technologies, creates new possibilities in the design and implementation of commercial machine vision systems for monitoring and controlling flotation plants. Current machine vision systems are available, but not without shortcomings. These systems cannot handle fine froths, where the bubbles are very small, because of the segmentation techniques they employ. These segmentation techniques are cumbersome and computationally expensive, which makes them slow in real-time applications. The approach followed in this work applies neural networks to solve the above problems. Neural networks are able to extract information from images of the froth phase without regard to the type and structure of the froth. The parallel processing capability of neural networks, their ease of implementation and the advantages of supervised or unsupervised training make them potentially useful for real-time industrial machine vision systems. In principle, neural networks can be implemented in an adaptive manner, so that changes in the characteristics of the processes are continuously taken into account.
Features of the froth images from the flotation process were obtained by means of linear and non-linear principal component analysis. Conventional linear principal component analysis and three-layer autoassociative neural networks were used to extract linear principal components from the froth images. Non-linear principal components were extracted from the froth images by means of three- and five-layer autoassociative neural networks, as well as by localised principal component analysis based on k-means clustering. Three principal components were extracted for each image. The correlation coefficient was used as a measure of the variance captured by each principal component. The principal components were used to classify the froth images. A probabilistic neural network and a feedforward neural network were designed for the classification of the froth images. Multivariate statistical process control models were designed using the linear and non-linear principal component models. Hotelling's T² statistic and the squared prediction error, based on linear and non-linear principal component models, were used in the development of multivariate control charts. It was found that the first three features extracted by the autoassociative neural networks were able to capture more variance in the froth images than conventional linear principal components. The features extracted by the five-layer autoassociative neural networks were able to classify froth images more accurately than those extracted by conventional linear principal component analysis and three-layer autoassociative neural networks. As applied, localised principal component analysis proved ineffective, owing to the difficulties surrounding the clustering of the high-dimensional image data.
Finally, the use of multivariate statistical process control models to detect deviations in normal plant operations is discussed. It is shown that Hotelling's T² statistic and squared-prediction-error control charts are able to clearly indicate deviating plant performance.
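The multivariate monitoring scheme this record describes, Hotelling's T² and the squared prediction error (SPE) computed from a linear PCA model, can be sketched in a few lines of NumPy. Everything below (data, dimensions, number of retained components) is an illustrative assumption, not the thesis's actual froth-image features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "normal operation" data: 200 samples of 6 correlated
# features, standing in for froth-image features (all sizes assumed).
X = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))

# Linear PCA model on mean-centred data, retaining k components.
mu = X.mean(axis=0)
Xc = X - mu
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
P = Vt[:k].T                        # loadings, shape (6, k)
var = s[:k] ** 2 / (len(X) - 1)     # variance captured by each retained PC

def t2_spe(x):
    """Hotelling's T^2 and squared prediction error (SPE) of one sample."""
    t = (x - mu) @ P                # scores in the retained subspace
    t2 = np.sum(t ** 2 / var)       # Mahalanobis distance in PC space
    resid = (x - mu) - t @ P.T      # the part the PCA model cannot explain
    spe = resid @ resid
    return t2, spe

# An in-control sample versus a grossly perturbed one: the perturbation
# must inflate T^2 (within-model shift), SPE (off-model shift), or both.
t2_ok, spe_ok = t2_spe(X[0])
t2_bad, spe_bad = t2_spe(X[0] + 25.0)
```

In a control-chart setting, each statistic would be compared against a limit estimated from normal-operation data; a sample exceeding either limit flags non-conforming behaviour.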
Styles: APA, Harvard, Vancouver, ISO, etc.
27

See, Kyoungah. "Three-mode principal component analysis in designed experiments". Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40079.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Al-Kandari, Noriah Mohammed. "Variable selection and interpretation in principal component analysis". Thesis, University of Aberdeen, 1998. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU067766.

Full text source
Abstract:
In many research fields, such as medicine, psychology, management and zoology, large numbers of variables are sometimes measured on each individual. As a result, the researcher ends up with a huge data set consisting of a large number of variables, say p. Using this data set in statistical analyses may cause several problems. Thus, many cases demand a prior selection of the best subset of variables of size q, with q « p, to represent the entire data set in any data analysis. Evidently, the best subset of size q for some specified objective can always be determined by investigating systematically all possible subsets of size q, but such a procedure may be computationally difficult, especially for large p. Also, in many applications, when a Principal Component Analysis (PCA) is done on a large number of variables, the resulting Principal Components (PCs) may not be easy to interpret. To aid interpretation, it is useful to reduce the number of variables as much as possible whilst capturing most of the variation of the complete data set, X. Thus, this thesis aims to reduce the number of variables studied in a given data set by selecting the best q out of p measured variables, both to highlight the main features of a structured data set and to aid the simultaneous interpretation of the first k (covariance or correlation) PCs. This aim is achieved by generating several artificial data sets having different types of structure, such as nearly independent variables, highly dependent variables and clustered variables. Then, for each structure, several Variable Selection Criteria (VSC) are applied in order to retain subsets of size q. The efficiencies of the retained subsets are measured in order to determine the best criteria for retaining subsets of size q. Finally, the general results obtained from the artificial data analyses are evaluated on some real data sets having interesting covariance and correlation structures.
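One simple loadings-based selection criterion of the kind evaluated in such studies (in the spirit of Jolliffe's B4 rule; the data, sizes and the specific criterion here are assumptions for illustration) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 3 informative signals plus a noisy near-duplicate of
# each, giving p = 6 measured variables (names and sizes are assumptions).
z = rng.normal(size=(300, 3))
X = np.column_stack([z, z + 0.05 * rng.normal(size=(300, 3))])

# PCA on the correlation matrix (variables on a common scale).
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Walk through the first k PCs and keep, for each, the not-yet-chosen
# variable with the largest absolute loading (a B4-style criterion).
k = 3
chosen = []
for j in range(k):
    load = np.abs(eigvecs[:, j])
    load[chosen] = -1.0             # never pick the same variable twice
    chosen.append(int(np.argmax(load)))
```

The selected subset of size q = 3 would then be judged by how much of the total variation it retains, which is exactly the kind of efficiency comparison the abstract describes.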
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Li, Xiaomeng. "Human Promoter Recognition Based on Principal Component Analysis". Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/3656.

Full text source
Abstract:
This thesis presents an innovative human promoter recognition model, HPR-PCA. Principal component analysis (PCA) is applied to select context features from DNA sequences, and the prediction network is built with an artificial neural network (ANN). A thorough literature review of all the relevant topics in the promoter prediction field is also provided. As the main technique of HPR-PCA, the application of PCA to feature selection is developed first. In order to find informative and discriminative features for effective classification, PCA is applied to the different n-mer promoter and exon combined frequency matrices, and principal components (PCs) of each matrix are generated to construct the new feature space. ANN-based classifiers are used to test the discriminability of each feature space. Finally, the 3- and 5-mer feature matrix is selected as the context feature in this model. Two proposed schemes of the HPR-PCA model are discussed and the implementations of the sub-modules in each scheme are introduced. The context features selected by PCA are used to build three promoter and non-promoter classifiers. CpG-island modules are embedded into the models in different ways. In the comparison, Scheme I obtains better prediction results on two test sets, so it is adopted as the model for HPR-PCA for further evaluation. Three existing promoter prediction systems are used to compare to HPR-PCA on three test sets, including the chromosome 22 sequence. The performance of HPR-PCA is outstanding compared to the other four systems.
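The core idea, PCA applied to n-mer frequency matrices to build a reduced feature space for a downstream classifier, can be sketched as follows; the toy GC-rich versus AT-rich sequences merely stand in for real promoter and non-promoter sets:

```python
import numpy as np
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]   # all 64 3-mers
IDX = {kmer: i for i, kmer in enumerate(KMERS)}

def kmer_freq(seq):
    """Overlapping 3-mer frequency vector of one DNA sequence."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - 2):
        v[IDX[seq[i:i + 3]]] += 1
    return v / (len(seq) - 2)

rng = np.random.default_rng(7)
# Toy "promoter" sequences are GC-rich, toy "non-promoters" AT-rich; this
# merely mimics a frequency contrast between the two classes (assumption).
gc = ["".join(rng.choice(list("GGCCAT"), 60)) for _ in range(40)]
at = ["".join(rng.choice(list("AATTGC"), 60)) for _ in range(40)]
F = np.array([kmer_freq(s) for s in gc + at])

# PCA of the combined frequency matrix: the leading PCs form the reduced
# feature space that a downstream ANN classifier would consume.
Fc = F - F.mean(axis=0)
_, _, Vt = np.linalg.svd(Fc, full_matrices=False)
scores = Fc @ Vt[:3].T              # 3 PC features per sequence
```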
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Li, Xiaomeng. "Human Promoter Recognition Based on Principal Component Analysis". University of Sydney, 2008. http://hdl.handle.net/2123/3656.

Full text source
Abstract:
Master of Engineering
This thesis presents an innovative human promoter recognition model, HPR-PCA. Principal component analysis (PCA) is applied to select context features from DNA sequences, and the prediction network is built with an artificial neural network (ANN). A thorough literature review of all the relevant topics in the promoter prediction field is also provided. As the main technique of HPR-PCA, the application of PCA to feature selection is developed first. In order to find informative and discriminative features for effective classification, PCA is applied to the different n-mer promoter and exon combined frequency matrices, and principal components (PCs) of each matrix are generated to construct the new feature space. ANN-based classifiers are used to test the discriminability of each feature space. Finally, the 3- and 5-mer feature matrix is selected as the context feature in this model. Two proposed schemes of the HPR-PCA model are discussed and the implementations of the sub-modules in each scheme are introduced. The context features selected by PCA are used to build three promoter and non-promoter classifiers. CpG-island modules are embedded into the models in different ways. In the comparison, Scheme I obtains better prediction results on two test sets, so it is adopted as the model for HPR-PCA for further evaluation. Three existing promoter prediction systems are used to compare to HPR-PCA on three test sets, including the chromosome 22 sequence. The performance of HPR-PCA is outstanding compared to the other four systems.
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Khawaja, Antoun. "Automatic ECG analysis using principal component analysis and wavelet transformation". Karlsruhe Univ.-Verl. Karlsruhe, 2007. http://www.uvka.de/univerlag/volltexte/2007/227/.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Schmid, Martin. "Anwendung der Principal Component Analysis auf die Commodity-Preise". St. Gallen, 2007. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/02282663001/$FILE/02282663001.pdf.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Skittides, Christina. "Statistical modelling of wind energy using Principal Component Analysis". Thesis, Heriot-Watt University, 2015. http://hdl.handle.net/10399/2930.

Full text source
Abstract:
The statistical method of Principal Component Analysis (PCA) is developed here from a time-series analysis method used in nonlinear dynamical systems into a forecasting tool and a Measure-Correlate-Predict (MCP) method, and then applied to wind speed data from a set of Met. Office stations in Scotland. PCA for time-series analysis is a method to separate coherent information from measurement noise arising from some underlying dynamics, and can then be used to describe those underlying dynamics. In the first step, this thesis shows that wind speed measurements from one or more weather stations can be interpreted as measurements originating from some coherent underlying dynamics, amenable to PCA time-series analysis. In a second step, the PCA method was used to capture the underlying time-invariant short-term dynamics from an anemometer. These were then used to predict or forecast the wind speeds from some hours ahead to a day ahead. Benchmarking the PCA prediction against persistence, it could be shown that PCA outperforms persistence consistently for forecasting horizons longer than around 8 hours ahead. In the third stage, the PCA method was extended to the MCP problem (PCA-MCP), by which a short set of concurrent data from two sites is used to build a transfer function for the wind speed and direction from one (reference) site to the other (target) site; that transfer function is then applied to a longer period of data from the reference site to predict the expected wind speed and direction at the target site. Unlike currently used MCP methods, which treat the reference site wind speed as the independent variable and the target site wind speed as the dependent variable, PCA-MCP does not impose that link but treats the two sites as joint observables from the same underlying coherent dynamics plus some independent variability for each site. PCA then extracts the joint coherent dynamics.
A key development step was then to extend the identification of the joint dynamics into a transfer function by which the expected values at the target site could be inferred from the available measurements at the reference site using the joint dynamics. This extended PCA-MCP was applied to a set of Met. Office data from Scotland and benchmarked against a standard linear regression MCP method. For the majority of cases, the error of the resource prediction, in terms of the wind speed and wind direction distributions at the target site, was found to be between 10% and 50% of that made using standard linear regression. The mean absolute error at the target site was also found to be only 29% of that of linear regression.
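A toy two-variable analogue of the joint-dynamics idea can be sketched as follows: concurrent reference and target measurements are decomposed by a joint PCA, and the leading component serves as the transfer function. The thesis uses richer embeddings; all numbers and names below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy concurrent record: wind speeds at a reference and a target site that
# share one underlying regional signal plus site-specific noise.
# (All numbers are illustrative assumptions.)
w = 8.0 + 2.5 * rng.normal(size=500)               # shared regional signal
ref = w + 0.4 * rng.normal(size=500)               # reference-site record
tgt = 0.8 * w + 1.0 + 0.4 * rng.normal(size=500)   # target-site record

# Joint PCA on concurrent (ref, tgt) pairs: the leading PC plays the role
# of the shared coherent dynamics; site noise falls into the second PC.
X = np.column_stack([ref, tgt])
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
v = Vt[0]                                          # leading joint direction

def predict_target(r):
    """Infer the expected target-site speed from a reference reading by
    placing it on the leading joint PC (a total-least-squares line)."""
    t = (r - mu[0]) / v[0]                         # joint score from ref
    return mu[1] + t * v[1]

pred = predict_target(ref)
mae = np.mean(np.abs(pred - tgt))                  # resource-prediction error
```

Note the design contrast with regression MCP: neither site is singled out as the dependent variable; both load onto the same joint component.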
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Shannak, Kamal Majed. "On Non-Linear Principal Component Analysis for Process Monitoring". Fogler Library, University of Maine, 2004. http://www.library.umaine.edu/theses/pdf/ShannakKM2004.pdf.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
35

ABDELWAHAB, MOATAZ MAHMOUD. "NOVEL FACIAL IMAGE RECOGNITION TECHNIQUES EMPLOYING PRINCIPAL COMPONENT ANALYSIS". Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2181.

Full text source
Abstract:
Recently, pattern recognition/classification has received considerable attention in diverse engineering fields such as biomedical imaging, speaker identification, fingerprint recognition and face recognition. This study contributes novel techniques for facial image recognition based on two-dimensional principal component analysis in the transform domain. These algorithms reduce the storage requirements by an order of magnitude and the computational complexity by a factor of 2, while maintaining the excellent recognition accuracy of recently reported methods. The proposed recognition systems employ different structures, multicriteria and multitransform. In addition, principal component analysis in the transform domain in conjunction with vector quantization is developed, which results in a further improvement in recognition accuracy and dimensionality reduction. Experimental results confirm the excellent properties of the proposed algorithms.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering PhD
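A common formulation of two-dimensional PCA (the image-covariance variant; whether it matches this dissertation's exact transform-domain setup is an assumption) works directly on image matrices rather than flattened vectors:

```python
import numpy as np

rng = np.random.default_rng(3)

# A tiny synthetic image set: 20 "faces" of 16x12 pixels (sizes assumed).
A = rng.normal(size=(20, 16, 12))
Abar = A.mean(axis=0)

# Image covariance matrix of 2DPCA, G = E[(A - Abar)^T (A - Abar)],
# built directly from image matrices, with no vectorisation step.
G = np.zeros((12, 12))
for Ai in A:
    D = Ai - Abar
    G += D.T @ D
G /= len(A)

# Project every image onto the top-d eigenvectors of G. The feature
# matrix Y = A W is 16x4 instead of 16x12, the kind of storage reduction
# the abstract's techniques build upon.
eigvals, eigvecs = np.linalg.eigh(G)
W = eigvecs[:, ::-1][:, :4]         # top d = 4 projection directions
features = np.stack([Ai @ W for Ai in A])
```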
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Ragozzine, Brett A. "Modeling the Point Spread Function Using Principal Component Analysis". Ohio University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1224684806.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Brock, James L. "Acoustic classification using independent component analysis /". Link to online version, 2006. https://ritdml.rit.edu/dspace/handle/1850/2067.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Chivers, Daniel Stephen. "Human Action Recognition by Principal Component Analysis of Motion Curves". Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1353374113.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Teixeira, Sérgio Coichev. "Utilização de análise de componentes principais em séries temporais". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-09052013-224741/.

Full text source
Abstract:
One of the main objectives of principal component analysis is to reduce the number of observed variables to a set of uncorrelated variables, giving the researcher the means to understand the variability and the correlation structure of the observed data through a smaller number of uncorrelated variables called principal components. The technique is very simple and widely used in studies across different fields. For its construction, we measure the linear relationship between the observed variables through the covariance matrix or the correlation matrix. However, the covariance and correlation matrices may fail to capture important information for data that are sequentially correlated in time, that is, autocorrelated, wasting an important part of the data when interpreting the components. In this work, we study principal component analysis techniques that make it possible to interpret or analyse the autocorrelation structure of the observed data. To that end, we explore principal component analysis in the frequency domain, which provides, for autocorrelated data, a more specific and detailed result than the classical technique. In the SSA (Singular Spectrum Analysis) and MSSA (Multichannel Singular Spectrum Analysis) methods, principal component analysis is based on the correlation over time and between the different observed variables. These techniques are widely used on atmospheric data to identify patterns such as trend and periodicity.
The main objective of principal component analysis (PCA) is to reduce the number of variables to a small uncorrelated set, helping the researcher understand the variation present in all the original variables through a small number of uncorrelated variables, called components. Principal component analysis is very simple and frequently used in several areas. For its construction, the components are calculated through the covariance matrix. However, the covariance matrix does not capture autocorrelation information, wasting important information about the data set. In this research, we present some techniques related to principal component analysis that take autocorrelation into account. In particular, we explore principal component analysis in the frequency domain, which provides more accurate and detailed results than classical component analysis in the time-series case. In the SSA (Singular Spectrum Analysis) and MSSA (Multichannel Singular Spectrum Analysis) methods, we study principal component analysis considering the relationship between locations and time points. These techniques are broadly used on atmospheric data sets to identify important characteristics and patterns, such as trend and periodicity.
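The basic SSA recipe mentioned in the abstract, embed, decompose, and reconstruct by diagonal averaging, can be sketched in NumPy (the window length and the component grouping are illustrative choices):

```python
import numpy as np

# A short series with a linear trend plus a seasonal oscillation (toy
# example; window length L and component grouping are assumed choices).
n, L = 100, 20
t = np.arange(n)
series = 0.05 * t + np.sin(2 * np.pi * t / 12)

# 1) Embed: the L x K trajectory matrix of lagged windows.
K = n - L + 1
traj = np.column_stack([series[i:i + L] for i in range(K)])

# 2) Decompose the trajectory matrix by SVD.
U, s, Vt = np.linalg.svd(traj, full_matrices=False)

# 3) Reconstruct a group of components by diagonal averaging
#    (Hankelisation back to a series of length n).
def reconstruct(idx):
    M = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in idx)
    out = np.zeros(n)
    cnt = np.zeros(n)
    for r in range(L):
        for c in range(K):
            out[r + c] += M[r, c]
            cnt[r + c] += 1
    return out / cnt

trend = reconstruct([0])            # leading component: slowly varying part
```

Grouping further components (for example the pair associated with the oscillation) separates periodicity from trend, which is exactly the pattern-identification use described above.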
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Cao, Zisheng, i 曹子晟. "Incremental algorithms for multilinear principal component analysis of tensor objects". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/208151.

Full text source
Abstract:
In recent years, massive data sets are generated in many areas of science and business, and are gathered by using advanced data acquisition techniques. New approaches are therefore required to facilitate effective data management and data analysis in this big-data era, especially to analyze multidimensional data for real-time applications. This thesis aims at developing generic and effective algorithms for compressing and recovering online multidimensional data, and at applying such algorithms in image processing and other related areas. Since multidimensional data are usually represented by tensors, this research uses multilinear algebra as the mathematical foundation to facilitate development. After reviewing the techniques of singular value decomposition (SVD), principal component analysis (PCA) and tensor decomposition, this thesis deduces an effective multilinear principal component analysis (MPCA) method to process such data by seeking optimal orthogonal basis functions that map the original tensor space to a tensor subspace with minimal reconstruction error. Two real examples, 3D data compression for positron emission tomography (PET) and offline fabric defect detection, are used to illustrate the tensor decomposition method and the deduced MPCA method, respectively. Based on the deduced MPCA method, this research develops an incremental MPCA (IMPCA) algorithm that targets the compression and recovery of online tensor objects.
To reduce the computational complexity of the IMPCA algorithm, this research investigates low-rank updates of singular values in the matrix and tensor domains, which leads to the development of a sequential low-rank update scheme similar to the sequential Karhunen-Loeve (SKL) algorithm for incremental matrix singular value decomposition, a sequential low-rank update scheme for incremental tensor decomposition, and a quick subspace tracking (QST) algorithm to further enhance the low-rank updates of singular values when the matrix is symmetric positive definite. Although QST is slightly inferior to the SKL algorithm in terms of accuracy in estimating eigenvectors and eigenvalues, the algorithm has lower computational complexity. Two fast incremental MPCA (IMPCA) algorithms are then developed by incorporating the SKL algorithm and the QST algorithm separately into the IMPCA algorithm. Results obtained from applying the developed IMPCA algorithms to detect anomalies in online multidimensional data in a number of numerical experiments, and to track and reconstruct the global surface temperature anomalies over the past several decades, clearly confirm the excellent performance of the algorithms. This research also applies the developed IMPCA algorithms to solve an online fabric defect inspection problem. Unlike existing pixel-wise detection schemes, the developed algorithms employ a scanning window to extract tensor objects from fabric images and to detect the occurrence of anomalies. The proposed method is unsupervised because no pre-training is needed. Two image processing techniques, selective local Gabor binary patterns (SLGBP) and multi-channel feature combination, are developed to accomplish the feature extraction of textile patterns and represent the features as tensor objects.
Results of experiments conducted by using a real textile dataset confirm that the developed algorithms are comparable to existing supervised methods in terms of accuracy and computational complexity. A cost-effective parallel implementation scheme is developed to solve the problem in real-time.
published_or_final_version
Industrial and Manufacturing Systems Engineering
Doctoral
Doctor of Philosophy
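A minimal (batch, not incremental) MPCA sketch, mode-wise eigenbases estimated from unfoldings and a multilinear projection to a small core tensor, illustrates the kind of tensor-object compression discussed above; shapes and ranks are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy tensor objects: 50 samples of 8x8x4 "patches" (shapes are assumed).
T = rng.normal(size=(50, 8, 8, 4))
Tc = T - T.mean(axis=0)

def mode_projection(X, mode, d):
    """Top-d eigenbasis of the covariance of mode-n fibres (sample axis 0)."""
    Xm = np.moveaxis(X, mode + 1, -1)       # put the target mode last
    F = Xm.reshape(-1, X.shape[mode + 1])   # rows are mode-n fibres
    w, V = np.linalg.eigh(F.T @ F)
    return V[:, ::-1][:, :d]                # descending eigenvalue order

U1 = mode_projection(Tc, 0, 3)              # 8 -> 3
U2 = mode_projection(Tc, 1, 3)              # 8 -> 3
U3 = mode_projection(Tc, 2, 2)              # 4 -> 2

# Multilinear projection to a small core tensor and back: compression
# then recovery, the operation the incremental variants speed up.
core = np.einsum('nabc,ad,be,cf->ndef', Tc, U1, U2, U3)
recon = np.einsum('ndef,ad,be,cf->nabc', core, U1, U2, U3)
```

The incremental algorithms in the thesis replace the batch eigendecompositions with low-rank updates so new tensor objects can be folded in without recomputing from scratch.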
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Roy, Samita. "Pyrite oxidation in coal-bearing strata : controls on in-situ oxidation as a precursor of acid mine drainage formation". Thesis, Durham University, 2002. http://etheses.dur.ac.uk/3753/.

Full text source
Abstract:
Pyrite oxidation in coal-bearing strata is recognised as the main precursor of Acid Mine Drainage (AMD) generation. Predicting AMD quality and quantity for remediation, or for proposed extraction, requires assessment of the interactions between oxidising fluids and pyrite, and between oxidation products and groundwater. Current predictive methods and models rarely account for individual mineral weathering rates or their distribution within the rock. Better constraints on the importance of such variables in controlling rock leachate are required to provide more reliable predictions of AMD quality. In this study, assumptions made during modelling of AMD generation were tested, including homogeneity of rock chemical and physical characteristics, controls on the rate of embedded pyrite oxidation, and oxidation front ingress. The main conclusions of this work are:
• The ingress of a pyrite oxidation front into coal-bearing strata depends on the dominant oxidant transport mechanism, pyrite morphology and rock pore-size distribution.
• Although pyrite oxidation rates predicted from rate laws and derived from experimental weathering of coal-bearing strata agree, uncertainty in the surface area of framboids produces at least an order of magnitude of error in predicted rates.
• Pyrite oxidation products in partly unsaturated rock are removed to solution via a cycle of dissolution and precipitation at the water-rock interface. Dissolution mainly occurs along rock cleavage planes, as does diffusion of dissolved oxidant.
• Significant variance of whole-seam S and pyrite wt % existed over a 30 m exposure of an analysed coal seam. Assuming a seam-mean pyrite wt % to predict net acid-producing potential for coal and shale seams may be unsuitable, at this scale at least.
• Seasonal variation in AMD discharge chemistry indicates that base-flow is not necessarily representative of extremely poor-quality leachate. Summer and winter storms, following relatively dry periods, tended to release the greatest volume of pyrite oxidation products.
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Söderström, Ulrik. "Very Low Bitrate Video Communication : A Principal Component Analysis Approach". Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1808.

Full text source
Abstract:
A large amount of the information in conversations comes from non-verbal cues such as facial expressions and body gestures. These cues are lost when we don't communicate face-to-face. But face-to-face communication doesn't have to happen in person. With video communication we can at least deliver information about the facial mimic and some gestures. This thesis is about video communication over distances; communication that can be available over networks with low capacity, since the bitrate needed for video communication is low. A visual image needs high quality and resolution to be semantically meaningful for communication. Delivering such video over networks requires that the video be compressed. The standard way to compress video images, used by H.264 and MPEG-4, is to divide the image into blocks and represent each block with mathematical waveforms, usually frequency features. These mathematical waveforms are quite good at representing any kind of video, since they do not resemble anything; they are just frequency features. But since they are completely arbitrary, they cannot compress video enough to enable use over networks with limited capacity, such as GSM and GPRS. Another issue is that such codecs have high complexity because of the redundancy removal with positional shift of the blocks. High complexity and bitrate mean that a device has to consume a large amount of energy for the encoding, decoding and transmission of such video, with energy being a very important factor for battery-driven devices. These drawbacks of standard video coding mean that it isn't possible to deliver video anywhere and anytime when it is compressed with such codecs. To resolve these issues we have developed a totally new type of video coding. Instead of using mathematical waveforms for representation, we use faces to represent faces. This makes the compression much more efficient than if waveforms are used, even though the faces are person-dependent.
By building a model of the changes in the face, the facial mimic, the images can be encoded with this model. The model consists of representative facial images, and we use a powerful mathematical tool to extract it: principal component analysis (PCA). This coding has very low complexity, since encoding and decoding consist only of multiplication operations. The faces are treated as single encoding entities and all operations are performed on full images; no block processing is needed. These features mean that PCA coding can deliver high-quality video at very low bitrates, with low complexity for encoding and decoding. With the use of asymmetrical PCA (aPCA) it is possible to use only semantically important areas for encoding while decoding full frames or a different part of the frames. We show that a codec based on PCA can compress facial video to a bitrate below 5 kbps and still provide high quality; this bitrate can be delivered on a GSM network. We also show the possibility of extending PCA coding to the encoding of high-definition video.
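The low-complexity property claimed above, encoding and decoding as single matrix products against a personal "mimic model", can be sketched as follows; the frame sizes, number of components and bitrate arithmetic are illustrative assumptions, not the thesis's actual figures:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in "facial video": 120 training frames of 32x32 pixels, flattened.
# (A real system would use actual face images; all sizes here are assumed.)
frames = rng.normal(size=(120, 32 * 32))
mu = frames.mean(axis=0)

# Personal mimic model: k principal images extracted by PCA.
U, s, Vt = np.linalg.svd(frames - mu, full_matrices=False)
k = 10
basis = Vt[:k]                    # k principal images, shape (k, 1024)

def encode(frame):
    # one matrix-vector product: k coefficients to transmit per frame
    return (frame - mu) @ basis.T

def decode(coeffs):
    # one matrix-vector product back to pixel space
    return mu + coeffs @ basis

rec = decode(encode(frames[0]))

# Illustrative bitrate arithmetic: 10 coefficients per frame at 16 bits,
# 15 frames per second, gives 2.4 kbps, below the 5 kbps figure above.
bitrate_kbps = k * 16 * 15 / 1000
```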
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Verdebout, Thomas. "Optimal inference for one-sample and multisample principal component analysis". Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210448.

Full text source
Abstract:
Among the most classical tools of multivariate analysis, principal components are also one of the oldest, having been introduced more than a century ago by Pearson (1901) and later rediscovered by Hotelling (1933). Today the method is widely used in the social sciences, economics, biology and geography, to name only a few disciplines. Its purpose is to reduce, optimally in a certain sense, the number of variables contained in a data set.

To date, the inference methods used in Principal Component Analysis by practitioners are generally based on the assumption that the observations are normally distributed, an assumption that can be questioned in many situations.

The aim of this work is to construct testing procedures for Principal Component Analysis that are valid under a larger family of distributions, the family of elliptical distributions. To do so, we use Le Cam's methodology combined with the invariance principle. The latter states that if a null hypothesis remains invariant under the action of a group of transformations, one should restrict attention to test statistics that are also invariant under the action of that group. All the null hypotheses associated with the problems considered in this work are invariant under the action of a group of transformations called radial monotone transformations. The maximal invariant associated with this group is the vector of multivariate signs and of the ranks of the Mahalanobis distances between the observations and the origin.

The parameters of interest in Principal Component Analysis are the eigenvectors and eigenvalues of positive definite matrices, which implies that the parameter space is not linear. We therefore develop a way of obtaining optimal procedures for sequences of curved local experiments.

The test statistics introduced are optimal in the Le Cam sense and measurable with respect to the maximal invariant described above.

The testing procedures based on these statistics possess many attractive properties: they are valid under the family of elliptical distributions, they are efficient under a specified density, and they have very good asymptotic relative efficiencies with respect to their competitors. In particular, when based on Gaussian scores, they are as efficient as the usual Gaussian procedures and are far more efficient than the latter when the normality assumption fails.
Doctorat en Sciences
info:eu-repo/semantics/nonPublished

Styles: APA, Harvard, Vancouver, ISO, etc.
44

Harasti, Paul Robert. "Hurricane properties by principal component analysis of Doppler radar data". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq53836.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
45

Yang, Libin. "An Application of Principal Component Analysis to Stock Portfolio Management". Thesis, University of Canterbury. Department of economics and finance, 2015. http://hdl.handle.net/10092/10293.

Full text source
Abstract:
This thesis investigates the application of principal component analysis to the Australian stock market using the ASX200 index and its constituents from April 2000 to February 2014. The first ten principal components were retained to represent the major risk sources in the stock market. We constructed a portfolio based on each of the ten principal components and named these “principal portfolios”.
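The construction of one portfolio per retained component can be sketched as follows; the synthetic return data, the number of retained components, and the sum-to-one weight normalization are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_normal((500, 10)) * 0.01   # hypothetical daily returns for 10 assets

# Eigen-decomposition of the sample covariance of returns
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                 # sort components by variance, descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Rescale each retained eigenvector so its weights sum to one;
# each column then defines one "principal portfolio"
k = 3
weights = eigvecs[:, :k] / eigvecs[:, :k].sum(axis=0)

# Time series of principal-portfolio returns
pp_returns = returns @ weights
```

Because the eigenvectors are mutually orthogonal, the resulting portfolio returns are (in-sample) uncorrelated, which is what makes them usable as separate risk sources.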
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Rui. "A comparison study of principal component analysis and nonlinear principal component analysis". 2007. http://etd.lib.fsu.edu/theses/available/etd-04042007-191940.

Full text source
Abstract:
Thesis (M.S.)--Florida State University, 2007.
Advisor: Jerry F. Magnan, Florida State University, College of Arts and Sciences, Dept. of Mathematics. Title and description from dissertation home page (viewed July 12, 2007). Document formatted into pages; contains xi, 68 pages. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
47

Kurylowicz, Martin. "Principal Component Analysis of Gramicidin". Thesis, 2010. http://hdl.handle.net/1807/24790.

Full text source
Abstract:
Computational research making use of molecular dynamics (MD) simulations has begun to expand the paradigm of structural biology to include dynamics as the mediator between structure and function. This work aims to expand the utility of MD simulations by developing Principal Component Analysis (PCA) techniques to extract the biologically relevant information in these increasingly complex data sets. Gramicidin is a simple protein with a very clear functional role and a long history of experimental, theoretical and computational study, making it an ideal candidate for detailed quantitative study and the development of new analysis techniques. First we quantify the convergence of our PCA results to underwrite the scope and validity of three 64 ns simulations of gA and two covalently linked analogs (SS and RR) solvated in a glycerol mono-oleate (GMO) membrane. Next we introduce a number of statistical measures for identifying regions of anharmonicity on the free energy landscape and highlight the utility of PCA in identifying functional modes of motion at both long and short wavelengths. We then introduce a simple ansatz for extracting physically meaningful modes of collective dynamics from the results of PCA, through a weighted superposition of eigenvectors. Applied to the gA, SS and RR backbone, this analysis results in a small number of collective modes which relate structural differences among the three analogs to dynamic properties with functional interpretations. Finally, we apply elements of our analysis to the GMO membrane, yielding two simple modes of motion from a large number of noisy and complex eigenvectors. Our results demonstrate that PCA can be used to isolate covariant motions on a number of different length and time scales, and highlight the need for an adequate structural and dynamical account of many more PCs than have been conventionally examined in the analysis of protein motion.
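As a generic illustration of PCA applied to an MD trajectory (not the thesis's specific pipeline), one diagonalizes the covariance of the atomic coordinates over the frames and projects the trajectory onto the leading eigenvectors; the random data below stand in for real, rotationally aligned coordinates:

```python
import numpy as np

rng = np.random.default_rng(2)
traj = rng.standard_normal((1000, 30))    # toy trajectory: 1000 frames, 10 atoms x 3 coords

# Center each coordinate over the trajectory and diagonalize the covariance
dev = traj - traj.mean(axis=0)
cov = dev.T @ dev / len(dev)
eigvals, eigvecs = np.linalg.eigh(cov)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # descending variance order

# Project frames onto the leading PCs to get collective-mode amplitudes
projections = dev @ eigvecs[:, :5]

# Fraction of total fluctuation captured by the leading modes
explained = eigvals[:5].sum() / eigvals.sum()
```

Each column of `projections` is the time series of one collective mode; the thesis's weighted superposition of eigenvectors builds physically interpretable modes out of such columns.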
APA, Harvard, Vancouver, ISO, and other styles
48

Wijnen, Michael. "Online Tensor Robust Principal Component Analysis". Thesis, 2018. http://hdl.handle.net/1885/170630.

Full text source
Abstract:
Tensor Robust Principal Component Analysis (TRPCA) is a procedure for recovering a data structure that has been corrupted by noise. In this thesis, a proof (inspired by Lu et al. (2018)) is given that TRPCA successfully performs this operation. An online optimisation algorithm to perform this procedure for p-dimensional tensors is proposed (based on a similar algorithm for the 3-dimensional case from Z. Zhang, Liu, Aeron, & Vetro (2016)). The required tensor identities to apply a proof of convergence (similar to the approach of Feng, Xu, & Yan (2013) for the matrix case) are derived and then applied. Examples using satellite image data and synthetic data are provided throughout to demonstrate the utility of the theoretical work and examine hypotheses.
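For context, the batch matrix Robust PCA problem that the tensor and online variants generalize decomposes an observed matrix M into a low-rank part L plus a sparse part S. A minimal fixed-penalty sketch of the classical principal component pursuit iteration (not the thesis's online tensor algorithm) is:

```python
import numpy as np

def shrink(X, tau):
    # Entrywise soft-thresholding: proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, n_iter=200):
    # Alternating iteration for  min ||L||_* + lam * ||S||_1  s.t.  M = L + S
    n1, n2 = M.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    mu = n1 * n2 / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                  # dual variable for the constraint M = L + S
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S
```

Tensor RPCA replaces the SVD with a tensor decomposition, and the online setting updates L and S one observation at a time instead of in batch.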
APA, Harvard, Vancouver, ISO, and other styles
49

Tseng, Chi-Chieh, and 鄭期傑. "Earthquake Detection By Principal Component Analysis". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/88853561622128415704.

Full text source
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
104
Earthquake is a major disaster in many countries, and its effect can be devastating. Due to the challenges in predicting earthquakes, researchers have turned their attention to detecting the occurrence of an earthquake as soon as possible, a concept known as earthquake early warning (EEW). In this paper, we propose a novel method for detecting earthquakes based on Principal Component Analysis (PCA), built upon the Palert seismic sensor network in Taiwan. By building statistical models for the behavior of the network, we can better understand the behavior of the noise, allowing us to separate an earthquake from the constant false alarms. Experiment results with real-world data show that our method can detect earthquakes earlier than existing methods without increasing the false alarm rate or decreasing the detection rate, which is pivotal in ensuring the credibility and effectiveness of the system. Our system is ready for real-world deployment, and can potentially save lives and prevent property damage caused by earthquakes.
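The thesis does not publish its statistical model, but a common way to use PCA for event detection is to learn a "normal behavior" subspace from noise-only data and flag readings whose energy outside that subspace exceeds a calibrated threshold; the sketch below shows that generic approach on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
normal = rng.standard_normal((2000, 20))     # hypothetical noise-only sensor snapshots

# Learn the "normal" subspace spanned by the leading principal components
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
basis = Vt[:5]                                # orthonormal rows span the normal subspace

def residual_energy(x):
    # Energy of x outside the normal subspace; large values suggest an event
    d = x - mu
    return float(np.sum((d - basis.T @ (basis @ d)) ** 2))

# Calibrate an alarm threshold on the noise-only data
threshold = np.quantile([residual_energy(x) for x in normal], 0.999)
```

A new reading triggers an alarm when its residual energy exceeds `threshold`; the quantile level trades detection latency against false-alarm rate.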
APA, Harvard, Vancouver, ISO, and other styles
50

Chao, Hsiang-Chi, and 趙湘琪. "3-way data principal component analysis". Thesis, 1995. http://ndltd.ncl.edu.tw/handle/39703156662028311670.

Full text source
APA, Harvard, Vancouver, ISO, and other styles