Dissertations / Theses on the topic 'Linear principal components analysis'

To see the other types of publications on this topic, follow the link: Linear principal components analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Linear principal components analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Shannak, Kamal Majed. "On Non-Linear Principal Component Analysis for Process Monitoring." Fogler Library, University of Maine, 2004. http://www.library.umaine.edu/theses/pdf/ShannakKM2004.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jia, Feng. "Application of linear and non-linear principal component analysis in multivariate statistical process control." Thesis, University of Newcastle Upon Tyne, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

MENNI, CRISTINA. "Population stratification in genome-wide association studies: a comparison among different multivariate analysis methods for dimensionality reduction." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19317.

Full text
Abstract:
INTRODUCTION: Genome-wide association studies (GWAS) are large-scale association mapping studies that use SNPs and make no assumptions about the genomic location of the causal variant. They hold substantial promise for unraveling the genetic basis of common human diseases. A well-known problem with such studies is population stratification (PS), a form of confounding which arises when there are two or more strata in the study population and both the risk of disease and the frequency of marker alleles differ between strata. It may therefore appear that the risk of disease is related to the marker alleles when in fact it is not. Many statistical methods have been developed to account for PS so that association studies can proceed even in the presence of structure, and for GWAS, linear principal components analysis (PCA) represents a sort of gold standard. PCA uses genotype data to extract continuous (principal) axes of variation, which can be used to adjust for association attributable to ancestry along each axis. The assumption underlying PCA, however, is that the variables under study are continuous, and so SNPs are quantified by fixing a reference and a variant allele for each marker and counting the number of mutations. This implies that the distance between homozygous wild type and heterozygous is the same as the distance between heterozygous and homozygous mutant, and it thus implies an additive model of inheritance. This model is conservative and static, and, most importantly, it is not necessarily the correct one. AIM: The aim of this thesis is to treat SNPs as ordinal qualitative variables. This means that there is a distance between homozygous wild type, heterozygous, and homozygous mutant, but that the distance between each pair is not necessarily the same. We therefore no longer assume any model of inheritance and can potentially capture information that linear PCA misses.
METHODS: We apply a multivariate technique for reducing dimensionality in the presence of non-metric data, known as non-linear principal components analysis (NLPCA, also known as PRINCALS: principal components analysis by means of alternating least squares). PRINCALS belongs to Gifi's system, a unified theoretical framework that organises many well-known descriptive multivariate techniques. We apply both PCA and PRINCALS to a sample dataset of 90 individuals belonging to three very distinct subpopulations and 1,000 randomly chosen uncorrelated SNPs, and compare the results graphically, using a Procrustean superimposition approach with the PROTEST test, and finally with a scenario analysis. RESULTS: When we compare the performance of PCA and PRINCALS, we find that the two methods yield similar scores for markers with low or null genotypic variability across the study sample, while the scores differ as the level of genotypic variability increases. This suggests that the two methods capture intra-subject variability differently. Procrustes analysis and the scenario analysis confirm this: the matrix of principal components obtained with PCA and the matrix of dimensions obtained with PRINCALS are shown to be statistically different by the PROTEST test, and, in the scenario analysis, we find that, as the level of PS increases, PRINCALS appears to outperform PCA. CONCLUSION: PCA and PRINCALS behave differently. Validation analyses are needed to confirm these results.
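The additive 0/1/2 genotype coding that this abstract questions is exactly what linear PCA for stratification operates on. A minimal illustrative sketch on synthetic data follows (the sizes mirror the 90-individual, 1,000-SNP setup, but the frequencies and seed are invented, not the thesis's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic genotypes: 90 individuals x 1,000 SNPs, coded additively as
# 0 = homozygous wild type, 1 = heterozygous, 2 = homozygous mutant.
# Three subpopulations with different allele frequencies create stratification.
freqs = rng.uniform(0.1, 0.9, size=(3, 1000))   # allele frequency per stratum/SNP
labels = np.repeat([0, 1, 2], 30)               # 30 individuals per stratum
G = rng.binomial(2, freqs[labels])              # 90 x 1000 genotype matrix

# Linear PCA: standardise each marker, then take the top axes of variation.
# Note that the 0/1/2 coding itself encodes the additive-inheritance
# assumption criticised above: the 0-1 and 1-2 distances are forced equal.
X = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pcs = U[:, :2] * S[:2]                          # scores on the first two axes

print(pcs.shape)   # (90, 2)
```

With strong stratification like this, the leading axes separate the subpopulations, which is what makes them usable as ancestry adjustments.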
APA, Harvard, Vancouver, ISO, and other styles
4

Archer, Cynthia. "A framework for representing non-stationary data with mixtures of linear models /." Full text open access, 2002. http://content.ohsu.edu/u?/etd,585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Savery, James Roy. "A modular non-linear approach to empirical principal component analysis based process modelling." Thesis, University College London (University of London), 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pascoto, Tamara Vieira. "Análises fatorial e de componentes principais aplicadas ao estudo dos fatores influenciadores de processos erosivos /." Bauru, 2020. http://hdl.handle.net/11449/192266.

Full text
Abstract:
Advisor: Anna Silvia Palcheco Peixoto
Erosion is an environmental problem in which soil loss can lead to economic problems and, when it occurs close to urban areas, to social problems as well. The town of São Manuel, in the interior of São Paulo state, has both clayey soils with low susceptibility to erosion and sandy soils with high susceptibility. Since areas susceptible to erosion near the urban area can put the population at risk, there was a need to analyze them in order to support public policies that minimize their consequences. This research therefore proposed a methodology for generating erosion indexes, using Principal Component Analysis (PCA) and Factor Analysis, based on some of the main factors influencing the erosive processes that occur in the municipality's urban area. At this stage the following factors were considered: soil texture, slope, permeability, land use and occupation, rainfall, and soil erodibility. Initially, the erosive features existing in the urban area were surveyed, spatialized, and classified. Of the 9 spatialized features, two resulted from fluvial processes and two had been recovered, leaving five linear erosive features to be studied. One of the five, despite being stabilized, showed significant advance in one of its arms. Of the features studied, only one was classified as a ravine; the others were classified as gullies. After the influencing factors were surveyed, they were evaluated according to two methodologies: Method A was based on the analysis... (Complete abstract: click electronic access below)
Master's
APA, Harvard, Vancouver, ISO, and other styles
7

Marbach, Matthew James. "Use of principal component analysis with linear predictive features in developing a blind SNR estimation system /." Full text available online, 2006. http://www.lib.rowan.edu/find/theses.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Varanis, Marcus Vinicius Monteiro 1979. "Detecção de falhas em motores elétricos através da transformada wavelet packet e métodos de redução de dimensionalidade." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265889.

Full text
Abstract:
Advisor: Robson Pederiva
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Electric motors are among the most important components of industrial plant equipment. The various faults that occur in induction machines can have severe consequences for the industrial process: the main problems are higher production costs, worse process and safety conditions and, above all, poorer quality of the final product. Many of these faults are progressive. This work presents a contribution to the study of signal processing techniques based on the wavelet packet transform for extracting energy and entropy parameters from vibration signals, for fault detection in the non-stationary regime (motor stopping and starting). Together with the wavelet transform, dimensionality reduction methods are used, namely principal component analysis (PCA) and linear discriminant analysis (LDA). Results obtained on an experimental test rig show that the classification achieves high accuracy.
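The band energy and entropy features this abstract describes can be illustrated with a small wavelet packet decomposition. The sketch below uses the Haar wavelet for simplicity (the thesis does not specify this choice here, and the test signal, depth, and sizes are illustrative only):

```python
import numpy as np

def haar_step(x):
    # One Haar analysis step: approximation and detail at half the length.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_packet(x, depth):
    # Full wavelet-packet tree: recursively split every band, not just the
    # approximation (this is what distinguishes the WPT from the plain DWT).
    bands = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        bands = [half for b in bands for half in haar_step(b)]
    return bands

def features(x, depth=3):
    # Energy per terminal band, plus the Shannon entropy of the
    # normalised band-energy distribution.
    bands = wavelet_packet(x, depth)
    energy = np.array([np.sum(b ** 2) for b in bands])
    p = energy / energy.sum()
    entropy = float(-np.sum(p * np.log(p + 1e-12)))
    return energy, entropy

# A tone concentrates energy in few bands (low entropy); broadband noise
# spreads it across bands (high entropy) -- the contrast such features exploit.
t = np.arange(256) / 256.0
_, h_tone = features(np.sin(2 * np.pi * 8 * t))
_, h_noise = features(np.random.default_rng(5).normal(size=256))
print(h_tone < h_noise)
```

Feature vectors of this kind, computed over many signal segments, are what PCA or LDA would then reduce before classification.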
Doctorate
Solid Mechanics and Mechanical Design
Doctor of Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
9

Khosla, Nitin. "Dimensionality Reduction Using Factor Analysis." Griffith University. School of Engineering, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20061010.151217.

Full text
Abstract:
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to solve the problems of high dimensionality is to first reduce the dimensionality of the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, the dimensionality reduction process becomes a pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is a simpler approach to dimensionality reduction. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Some of the techniques which can be used for dimensionality reduction are Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered an extension of Principal Component Analysis. The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observations. The maximization step then provides a new estimate of the parameters. This research work compares the techniques Factor Analysis (based on the expectation-maximization algorithm), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM-based) and Local Principal Component Analysis using Vector Quantization.
APA, Harvard, Vancouver, ISO, and other styles
10

Khosla, Nitin. "Dimensionality Reduction Using Factor Analysis." Thesis, Griffith University, 2006. http://hdl.handle.net/10072/366058.

Full text
Abstract:
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to solve the problems of high dimensionality is to first reduce the dimensionality of the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, the dimensionality reduction process becomes a pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is a simpler approach to dimensionality reduction. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Some of the techniques which can be used for dimensionality reduction are Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered an extension of Principal Component Analysis. The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observations. The maximization step then provides a new estimate of the parameters. This research work compares the techniques Factor Analysis (based on the expectation-maximization algorithm), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM-based) and Local Principal Component Analysis using Vector Quantization.
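The EM procedure for factor analysis that this abstract describes alternates posterior moments of the latent factors with re-estimation of the loadings and noise variances. A compact numpy sketch on synthetic data is below (the model sizes, seed, and iteration count are arbitrary illustrative choices, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a 2-factor model: x = L z + noise with diagonal covariance.
n, d, k = 500, 10, 2
L_true = rng.normal(size=(d, k))
X = rng.normal(size=(n, k)) @ L_true.T + rng.normal(scale=0.3, size=(n, d))
X = X - X.mean(axis=0)                       # centre once, so mu = 0 below

# EM for factor analysis: ML estimates of the loadings L and the diagonal
# noise variances psi, alternating expectation (E) and maximization (M) steps.
L = rng.normal(size=(d, k))
psi = np.ones(d)
for _ in range(200):
    # E-step: posterior moments of the latent factors given each observation.
    G = np.linalg.inv(np.eye(k) + L.T @ (L / psi[:, None]))   # posterior covariance
    Ez = X @ (L / psi[:, None]) @ G                           # E[z | x], n x k
    Ezz = n * G + Ez.T @ Ez                                   # sum of E[z z^T | x]
    # M-step: new loadings, then new noise variances (floored for stability).
    L = (X.T @ Ez) @ np.linalg.inv(Ezz)
    psi = np.maximum(np.mean(X ** 2, axis=0)
                     - np.mean(X * (Ez @ L.T), axis=0), 1e-6)

# The fitted model covariance L L^T + Psi should approach the sample covariance.
err = float(np.abs(L @ L.T + np.diag(psi) - X.T @ X / n).max())
print(err < 0.5)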
Thesis (Masters)
Master of Philosophy (MPhil)
School of Engineering
Full Text
APA, Harvard, Vancouver, ISO, and other styles
11

Bird, Gregory David. "Linear and Nonlinear Dimensionality-Reduction-Based Surrogate Models for Real-Time Design Space Exploration of Structural Responses." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8653.

Full text
Abstract:
Design space exploration (DSE) is a tool used to evaluate and compare designs as part of the design selection process. While evaluating every possible design in a design space is infeasible, understanding design behavior and response throughout the design space may be accomplished by evaluating a subset of designs and interpolating between them using surrogate models. Surrogate modeling is a technique that uses low-cost calculations to approximate the outcome of more computationally expensive calculations or analyses, such as finite element analysis (FEA). While surrogates make quick predictions, accuracy is not guaranteed and must be considered. This research addressed the need to improve the accuracy of surrogate predictions in order to improve DSE of structural responses. This was accomplished by performing comparative analyses of linear and nonlinear dimensionality-reduction-based radial basis function (RBF) surrogate models for emulating various FEA nodal results. A total of four dimensionality reduction methods were investigated, namely principal component analysis (PCA), kernel principal component analysis (KPCA), isometric feature mapping (ISOMAP), and locally linear embedding (LLE). These methods were used in conjunction with surrogate modeling to predict nodal stresses and coordinates of a compressor blade. The research showed that using an ISOMAP-based dual-RBF surrogate model for predicting nodal stresses decreased the estimated mean error of the surrogate by 35.7% compared to PCA. Using nonlinear dimensionality-reduction-based surrogates did not reduce surrogate error for predicting nodal coordinates. A new metric, the manifold distance ratio (MDR), was introduced to measure the nonlinearity of the data manifolds. When applied to the stress and coordinate data, the stress space was found to be more nonlinear than the coordinate space for this application. 
The upfront training cost of the nonlinear dimensionality-reduction-based surrogates was larger than that of their linear counterparts but small enough to remain feasible. After training, all the dual-RBF surrogates were capable of making real-time predictions. This same process was repeated for a separate application involving the nodal displacements of mode shapes obtained from a FEA modal analysis. The modal assurance criterion (MAC) calculation was used to compare the predicted mode shapes, as well as their corresponding true mode shapes obtained from FEA, to a set of reference modes. The research showed that two nonlinear techniques, namely LLE and KPCA, resulted in lower surrogate error in the more complex design spaces. Using a RBF kernel, KPCA achieved the largest average reduction in error of 13.57%. The results also showed that surrogate error was greatly affected by mode shape reversal. Four different approaches of identifying reversed mode shapes were explored, all of which resulted in varying amounts of surrogate error. Together, the methods explored in this research were shown to decrease surrogate error when performing DSE of a turbomachine compressor blade. As surrogate accuracy increases, so does the ability to correctly make engineering decisions and judgements throughout the design process. Ultimately, this will help engineers design better turbomachines.
APA, Harvard, Vancouver, ISO, and other styles
12

Azarmehr, Ramin. "Real-time Embedded Age and Gender Classification in Unconstrained Video." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32463.

Full text
Abstract:
Recently, automatic demographic classification has found its way into embedded applications such as targeted advertising in mobile devices, and in-car warning systems for elderly drivers. In this thesis, we present a complete framework for video-based gender classification and age estimation which can perform accurately on embedded systems in real-time and under unconstrained conditions. We propose a segmental dimensionality reduction technique utilizing Enhanced Discriminant Analysis (EDA) to minimize the memory and computational requirements, and enable the implementation of these classifiers for resource-limited embedded systems which otherwise is not achievable using existing resource-intensive approaches. On a multi-resolution feature vector we have achieved up to 99.5% compression ratio for training data storage, and a maximum performance of 20 frames per second on an embedded Android platform. Also, we introduce several novel improvements such as face alignment using the nose, and an illumination normalization method for unconstrained environments using bilateral filtering. These improvements could help to suppress the textural noise, normalize the skin color, and rectify the face localization errors. A non-linear Support Vector Machine (SVM) classifier along with a discriminative demography-based classification strategy is exploited to improve both accuracy and performance of classification. We have performed several cross-database evaluations on different controlled and uncontrolled databases to assess the generalization capability of the classifiers. Our experiments demonstrated competitive accuracies compared to the resource-demanding state-of-the-art approaches.
APA, Harvard, Vancouver, ISO, and other styles
13

Fokoue, Harold Hilarion. "Emprego de estatística multivariada no estudo quimiossistemática da família Asteraceae e da sua tribo Heliantheae." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/46/46135/tde-12082013-152437/.

Full text
Abstract:
Este trabalho análise as ocorrências de 12 classes de substâncias (monoterpenos, sesquiterpenos, lactonas sesquiterpênicas, diterpenos, triterpenos, cumarinas, flavonóides, poliacetilenos, benzofuranos, benzopiranos, acetofenonas e fenilpropanóides) na família Asteraceae e na sua tribo Heliantheae. Pretende-se demonstrar a existência de correlações na produção de metabólitos secundários em níveis taxonômicos baixos (tribos, subtribos e gêneros). Utilizou-se um banco de dados com cerca de 36.000 ocorrências das principais substâncias isoladas em plantas da família. O estudo do equilíbrio químico na produção de metabólitos secundários foi feito utilizando-se Regressão Linear Múltipla. As afinidades entre os grupos com base na sua Química foram pesquisadas por vários métodos tais como: Análise de componentes principais, Análise de Cluster e Análises cladísticas. Observou-se também o grau de oxidação médio de vários metabólitos e sua utilidade como ferramenta em análises quimiotaxonômicas. Foi possível mostrar a existência de um equilíbrio na produção das 12 classes de metabólitos em níveis das tribos e subtribos. Mas, no nível dos gêneros um equilíbrio moderado foi encontrado. Também foi possível mostrar a existência de um equilíbrio oxidativo em vários níveis (tribos, subtribos). No nível dos gêneros nenhum equilíbrio foi encontrado utilizando-se o parâmetro passo oxidativo. Foi possível agrupar algumas das subfamílias de Asteraceae segundo Bremer e subtribos da tribo Heliantheae segundo Stuessy usando Análises de componentes principais e Análise de Cluster
This work analyse the occurrence of 12 classes of substances (monoterpenes, sesquiterpenes, sesquiterpene lactones, diterpenes, triterpenes, coumarins, flavonoids, polyacetylenes, Benzofurans, benzopyrans, acetophenones and phenylpropanoids) in the Asteraceae family and its Heliantheae tribe. This study intends to demonstrate the existence of correlations in the production of secondary metabolites in lower taxonomic levels (tribes, subtribes and genera). We used a database of about 36,000 occurrences of the main substances isolated from the plant family. The study of chemical equilibrium in the production of secondary metabolites was done using Multiple Linear Regression. The affinities between the groups based on their chemistry were investigated by various methods such as principal component analysis, Cluster and cladistic analysis. There was also the average degree of oxidation of various metabolites and their usefulness as a tool in chemotaxonomic analysis. It was possible to show the existence of a balance in the production of 12 classes of metabolites in the levels of the tribes and subtribes. But the level of the genus balance was found moderate. It was also possible to show the existence of an oxidative equilibrium in various levels (tribes, subtribes). The level of genus balance was not found using the parameter oxidation step. We could group some of the subfamilies of Asteraceae according to Bremer and the subtribes of Heliantheae according to Stuessy using the principal component analysis and Cluster Analysis
APA, Harvard, Vancouver, ISO, and other styles
14

DUARTE, Daniel Duarte. "Classificação de lesões em mamografias por análise de componentes independentes, análise discriminante linear e máquina de vetor de suporte." Universidade Federal do Maranhão, 2008. http://tedebc.ufma.br:8080/jspui/handle/tede/1816.

Full text
Abstract:
Submitted by Rosivalda Pereira (mrs.pereira@ufma.br) on 2017-08-14T18:15:08Z No. of bitstreams: 1 DanielCosta.pdf: 1087754 bytes, checksum: ada5f863f42efd8298fff788c37bded3 (MD5)
Made available in DSpace on 2017-08-14T18:15:08Z (GMT). No. of bitstreams: 1 DanielCosta.pdf: 1087754 bytes, checksum: ada5f863f42efd8298fff788c37bded3 (MD5) Previous issue date: 2008-02-25
Female breast cancer is the major cause of death in western countries. Efforts in Computer Vision have been made in order to add improve the diagnostic accuracy by radiologists. In this work, we present a methodology that uses independent component analysis (ICA) along with support vector machine (SVM) and linear discriminant analysis (LDA) to distinguish between mass or non-mass and benign or malign tissues from mammograms. As a result, it was found that: LDA reaches 90,11% of accuracy to discriminante between mass or non-mass and 95,38% to discriminate between benign or malignant tissues in DDSM database and in mini-MIAS database we obtained 85% to discriminate between mass or non-mass and 92% of accuracy to discriminate between benign or malignant tissues; SVM reaches 99,55% of accuracy to discriminate between mass or non-mass and the same percentage to discriminate between benign or malignat tissues in DDSM database whereas, and in MIAS database it was obtained 98% to discriminate between mass or non-mass and 100% to discriminate between benign or malignant tissues.
Câncer de mama feminino é o câncer que mais causa morte nos países ocidentais. Esforços em processamento de imagens foram feitos para melhorar a precisão dos diagnósticos por radiologistas. Neste trabalho, nós apresentamos uma metodologia que usa análise de componentes independentes (ICA) junto com análise discriminante linear (LDA) e máquina de vetor de suporte (SVM) para distinguir as imagens entre nódulos ou não-nódulos e os tecidos em benignos ou malignos. Como resultado, obteve-se com LDA 90,11% de acurácia na discriminação entre nódulo ou não-nódulo e 95,38% na discriminação de tecidos benignos ou malignos na base de dados DDSM. Na base de dados mini- MIAS, obteve-se 85% e 92% na discriminação entre nódulos ou não-nódulos e tecidos benignos ou malignos respectivamente. Com SVM, alcançou-se uma taxa de até 99,55% na discriminação de nódulos ou não-nódulos e a mesma porcentagem na discriminação entre tecidos benignos ou malignos na base de dados DDSM enquanto que na base de dados mini-MIAS, obteve-se 98% e até 100% na discriminação de nódulos ou não-nódulos e tecidos benignos ou malignos, respectivamente.
APA, Harvard, Vancouver, ISO, and other styles
15

Bastos, Claudio. "MODELOS DE PREVISÃO DE RECURSOS PARA ANTIMICROBIANOS NO HOSPITAL UNIVERSITÁRIO DE SANTA MARIA." Universidade Federal de Santa Maria, 2009. http://repositorio.ufsm.br/handle/1/8122.

Full text
Abstract:
The scarce resources of public health makes the administrator manage the destination of resources, aiming to rationalize and optimize its collection, in order to improve the assistance to patients because the hospital is a public institution and does not get profits but promotes the community well-being. Thus, the hospital infection is acquired after the patient comes to the hospital of after he goes home and might be associated with his staying in hospital or with hospital procedures. This cost must be avoided. Once the complete eradication is not impossible, it is necessary to analyze and to control the monthly cost of the main antibiotics used for its treatment so that there is enough knowledge to foresee the resource collection to buy them. In this context, the main reason of this research is to carry out a forecast of the monthly cost and of the resource collection needed to purchase those medicine used in the treatment of hospital infections at the University Hospital of Santa Maria. To do so, a methodology for forecast by dynamic and multiple linear regressions was used. They were combined with to a multivariate technique by principal components. The technique of principal components was used to eliminate the multiple linearity existing among the original variants so, the resulting principal components were used as variables in the construction of the model of multiple linear regression and of dynamic regression. Therefore, these methodologies are applied to a case study of public health, in order to foresee and to conclude about which model is more suitable to forecast the monthly cost of antibiotics in hospital infections. The results obtained from the two models were considered satisfactory but the model of dynamic regression was chosen to be more suitable because it presented a mean absolute percentage error (MAPE). 
Finally, the findings might be a managerial tool for hospital administration when they offer subsides for the budget of planning and of the resource finances, especially in a time when resources are globally scarce, making health even more expensive.
The scarce resources of public health require administrators to manage the allocation of funds, seeking to rationalize and optimize their use and thereby improve patient care, since a public hospital does not aim at profit but at promoting the well-being of the community. Hospital infection, which is acquired after a patient's admission and manifests during the stay or even after discharge, and which may be related to the hospitalization or to hospital procedures, must therefore be avoided. Since its complete eradication is not possible, it is necessary to analyse and control the monthly cost of the main antibiotics used in its treatment in order to have a sufficient basis for forecasting the resources needed for their purchase. In this context, the main objective of this research is to forecast the monthly cost and the allocation of resources needed to purchase the drugs used in the treatment of hospital infections at the University Hospital of Santa Maria. To this end, forecasting by multiple linear regression and by dynamic regression was combined with the multivariate technique of principal components, used to eliminate the multicollinearity among the original variables. The resulting principal components were then used as independent variables in building the multiple linear regression and dynamic regression models. These methodologies are applied to a public health case study in order to make forecasts and to determine which model is more suitable for predicting the monthly cost of antibiotics for hospital infections. The results of both models were considered satisfactory, but the dynamic regression model was chosen as the more suitable for forecasting because it presented the lowest mean absolute percentage error (MAPE).
Finally, the forecasts obtained may serve as a managerial tool for hospital administration by supporting budgetary and financial resource planning, especially at a time of global resource scarcity with very strong effects on health costs.
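Read as a recipe, the pipeline this abstract describes - extract principal components to remove multicollinearity, regress the monthly cost on the component scores, and judge the model by MAPE - can be sketched in a few lines of numpy. The data below are synthetic stand-ins, not the hospital's series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly data: three strongly correlated drug-consumption series
n = 60
base = rng.normal(size=n)
X = np.column_stack([base + 0.1 * rng.normal(size=n) for _ in range(3)])
cost = 100 + 5.0 * base + rng.normal(scale=2.0, size=n)  # monthly cost

# PCA on the standardized predictors removes the multicollinearity
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
scores = Z @ eigvecs[:, order[:1]]          # keep the first component

# Ordinary least squares on the principal-component scores
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, cost, rcond=None)
pred = A @ coef

# MAPE, the criterion the thesis used to pick the final model
mape = 100 * np.mean(np.abs((cost - pred) / cost))
```

The dynamic-regression variant preferred in the thesis would add lagged terms to the regressor matrix; the structure of the comparison by MAPE is the same.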
APA, Harvard, Vancouver, ISO, and other styles
16

Twagirumukiza, Etienne. "Analysis of Faculty Evaluation by Students as a Reliable Measure of Faculty Teaching Performance." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/105.

Full text
Abstract:
Most American universities and colleges require students to provide faculty evaluation at the end of each academic term, as a way of measuring faculty teaching performance. Although some analysts think that this kind of evaluation does not necessarily provide a good measurement of teaching effectiveness, there is growing agreement in the academic world about its reliability. This study attempts to find strong statistical evidence supporting faculty evaluation by students as a measure of faculty teaching effectiveness. Emphasis is on analyzing relationships between instructor ratings by students and the corresponding students' grades. Various statistical methods are applied to analyze a sample of real data and derive conclusions. Methods considered include multivariate statistical analysis, principal component analysis, Pearson's correlation coefficient, Spearman's and Kendall's rank correlation coefficients, and linear and logistic regression analysis.
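The three correlation coefficients named in the abstract are easy to compute side by side. A minimal self-contained sketch, using invented rating/grade pairs rather than the study's data:

```python
import numpy as np

# Hypothetical paired data: mean instructor rating and mean course grade
ratings = np.array([4.2, 3.8, 4.5, 2.9, 3.5, 4.8, 3.1, 4.0])
grades  = np.array([3.4, 3.0, 3.7, 2.5, 2.9, 3.9, 2.6, 3.2])

def pearson(x, y):
    # linear association of the raw values
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    # rank-transform, then Pearson on the ranks (no ties assumed here)
    return pearson(x.argsort().argsort(), y.argsort().argsort())

def kendall(x, y):
    # Kendall's tau-a: (concordant - discordant) / total pairs
    n, s = len(x), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return s / (n * (n - 1) / 2)
```

With monotonically related data like the sample above, the two rank coefficients reach 1.0 while Pearson stays just below it, which is the kind of contrast such a study would examine.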
APA, Harvard, Vancouver, ISO, and other styles
17

Tinoco, Bruno Miguel Aleixo. "O impacto da comunicação social na tomada de decisão da compra e venda de acções." Master's thesis, Instituto Superior de Economia e Gestão, 2014. http://hdl.handle.net/10400.5/8114.

Full text
Abstract:
Master's in Economic and Business Decision-Making
This study sought to assess the influence of the media on decision-making when buying and selling shares, determining which news items most influenced those decisions. The study was applied to the PSI 20 index. The data were collected between 15-12-2008 and 16-05-2011, considering all the news on the front pages of the Jornal de Negócios and Diário Económico and the closing stock prices of EDP, ALTRI SGPS and BES, all companies listed on the PSI 20 index. The analysis began with the categorization of the collected news using the IBM SPSS Modeler software. After this process, and given the possible relationships among some of the categories, principal components analysis was applied, yielding components formed by two or more categories, which in practice can be seen as topics of news published in those newspapers. Finally, in order to assess the relationship between the components obtained and investors' decisions, the data were analysed through a multiple linear regression using the IBM SPSS Statistics software, which showed that the decision to buy and sell shares is influenced by news related to the current crisis, by deals involving the purchase or sale of a considerable percentage of holdings in national companies, and by high-profile cases of crime and corruption in Portugal.
The goal of this work is to assess and demonstrate the influence of the media on decision making when buying or selling market stocks, and to determine which news items influence such decisions. The work was applied to the real conditions of the Portuguese market and its primary stock market index, the PSI 20. The data for this study were collected between 15 December 2008 and 16 May 2011, covering three major stocks (EDP, ALTRI SGPS and BES) and all the related news published on the front pages of the most influential Portuguese financial newspapers, namely Jornal de Negócios and Diário Económico. At the beginning of the analysis, the collected data were categorized with IBM SPSS Modeler. After concluding this process, and bearing in mind that relations may exist among some categories, principal components analysis was performed. Naturally, there were components formed by two or more categories, which can be seen as different topics published in the referred journals. Finally, in order to assess the relationship between the obtained components and the decisions made by investors, the data were analysed through a multiple linear regression using IBM SPSS Statistics. This analysis led to the conclusion that the decision whether to buy or sell a stock is influenced by news related to the current financial crisis on the world market, by news of the purchase or disposal of considerable holdings owned by large national companies, and by high-profile cases of crime and corruption in Portugal.
APA, Harvard, Vancouver, ISO, and other styles
18

WANG, Xinguang. "The Dimensionality and Control of Human Walking." Thesis, The University of Sydney, 2012. http://hdl.handle.net/2123/8945.

Full text
Abstract:
The aim of the work presented in this thesis was to investigate the control mechanism of human walking. From motor control theory, a motor synergy has two main features, sharing and error compensation (Latash, 2008). Therefore, this thesis focused on these two aspects of the mechanism by investigating: the coupling and correlations between the joint angles, and the variability due to the compensation of “errors” during walking. Thus, a more complete picture of walking in terms of coordination and control would be drawn. In order to evaluate the correlations between joint angles and detect the dimensionality of human walking, a new approach was developed as presented in Chapter 3 that overcame an important limitation of current methods for assessing the dimensionality of data sets. In Chapter 4, this new method is applied to 40 whole body joint angles to detect the coordinative structure of walking. Chapters 5 and 6 focus on between-subject and within-subject kinematic variability of walking, respectively, and investigate the effects of gender and speed on variability. The findings on walking variability inspired us to further determine the relationships between joint angles and walking speed, the results of which are shown in Chapter 7. A summary of each individual study is presented in the following text. Chapter 3 Principal components analysis is a powerful and popular technique for the decomposition of muscle activity and kinematic patterns into independent modular components or synergies. The analysis is based on a matrix of either correlations or covariances between all pairs of signals in the data set. A primary limitation of such matrices is that they do not account for dynamic relations between signals - characterised by phase differences or frequency-dependent variations in amplitude ratio - yet such relations are widespread in the sensorimotor system.
Low correlations may thus be obtained and signals may appear ‘independent’ despite a dynamic linear relation between them. To address this limitation, the matrix of overall coherence values between signal pairs may be used. Overall coherence can be calculated using linear systems analysis and provides a measure of the strength of the relationship between signals taking both phase differences and frequency-dependent variation in amplitude ratio into account. Using the ankle, knee and hip sagittal-plane angles from six healthy subjects during over-ground walking at preferred speed, it is shown that with conventional correlation matrices the first principal component accounted for ~ 50% of total variance in the data set, while with overall coherence matrices the first component accounted for > 95% of total variance. The results demonstrate that the dimensionality of the coordinative structure can be overestimated using conventional correlation, whereas with overall coherence a more parsimonious structure is identified. Overall coherence can enhance the power of principal components analysis in capturing redundancy in human motor output. Chapter 4 The control of human movement is simplified by organising actions into linkages or couplings between body segments known as ‘synergies’. Many studies have supported the existence of ‘synergies’ during human walking and demonstrated that multi-segmental movements are highly coupled and correlated. Since correlations in the movements between body segments can be used to understand the control of walking by identifying synergies, the nature of the coordinative structure of walking was investigated. Principal components analysis uses information about the relationship between segments in movement and can identify independent synergies. A dynamic linear systems analysis was employed to compute the overall coherence between the movements of body segments.
This is a measure of the strength of the relationship between movements where both amplitude and phase differences in the movements can be accounted for. In contrast, the Pearson moment product correlation coefficient only accounts for amplitude differences in the movements. Therefore, overall coherence was assumed to be a better estimate of the true relationship between segments. The present study investigated whole body movement in terms of 40 joint angles during normal walking. Principal components analysis showed that one synergy (component) could cumulatively account for over 86% of total variance when applying overall coherence, while seven components were required when using the Pearson correlation coefficient. The findings suggested that the relationships between joint angles are more complex than the simple linear relations described by the Pearson correlation coefficient. When the dynamic linear relation was considered, a higher correlation between joint angles and greater reduction of degree of freedom could be obtained. The coordinative structure of human walking could therefore be low dimensional and even simply explained by a single component. An additional degree of freedom could be required to perform an additional voluntary task during walking by superimposing the voluntary task control signal on the basic walking motor control program. Chapter 5 Walking is a complex task which requires coordinated movement of many body segments. As a practised motor skill, walking has a low level of variability. Information regarding the variability of walking can provide valuable insight into control mechanisms and locomotor deficits. Most previous studies have assessed the stride-to-stride walking variability within subjects; little information is available for between-subject variability, especially for whole body movement. This information could provide an indication of how similar the control mechanism is between subjects during walking.
Forty joint angles from the whole body were recorded using a motion analysis system in 22 healthy subjects at four walking speeds. The between-subject variability of the waveform patterns of the joint angles was evaluated using the amplitude of the mean kinematic pattern (MP) and the standard deviation of the pattern (SDP) for each angle. Regression analyses of SDP onto MP showed that at each walking speed, SDP across subjects increased with MP at a similar rate for all angles except the hip and knee in the sagittal plane. This may indicate a different control mechanism for hip and knee sagittal-plane movements, which had a lower ‘signal to noise’ ratio than all other angles. A strong linear relationship was observed between SDP and MP for all joint angles. The variability between male subjects was comparable to the variability between female subjects. A trend of decreasing slopes of the regression lines with walking speed was observed with fast walking showing least variability, possibly reflecting higher angular accelerations producing a greater ‘tightening’ of the joints compared to slow walking, so that the rate of increase of waveform variability with increased waveform magnitude is reduced. The existence of an intercept other than zero in the SDP - MP relations suggested that the coefficient of variation should be used carefully when quantifying kinematic walking variability, because it may contain sources of variability independent of the mean amplitude of the angles. Chapter 6 Although most previous studies of walking variability have examined within-subject variability, little information is available for the variability of the whole body. This study measured the within-subject variability of both upper and lower body joint angles to increase the understanding of the mechanism of whole body movement.
Whereas the between-subject variability was investigated in Chapter 5, the within-subject variability of the waveform patterns of the joint angles was evaluated here, again using the amplitude of the mean kinematic pattern (MP) and the standard deviation of the pattern (SDP) for each angle. The within-subject variability was clearly less than the between-subject variability reported in Chapter 5, showing as would be expected that the repeatability of joint motion was greater within than across individuals. The results again showed that hip and knee flexion-extension demonstrated a consistently lower variability compared to all other joint angles. Comparison of males and females showed that the repeatability of joint motion was lower in females, this difference being mostly centred around the angles of the foot. The within-subject variability showed a quadratic relationship with walking speed, with minimum variability at preferred speed. Analysis of the regressions between SDP and MP of the joint angles also showed significant differences between females and males, with females showing a higher slope of the SDP and MP relation. As was the case for between-subject variability, the slopes of the SDP vs MP regression lines again decreased with walking speed for within-subject variability. Chapter 7 The relationship between walking parameters and speed has been widely investigated but most studies have investigated only a few joint angles and little has been reported about the relationship between the kinematics of the upper body and walking speed. In this study the relationship between walking speed and the range of the joint angles was evaluated. Linear correlations with walking speed were observed in both upper and lower body joint angles. Different mechanisms may be applied by the upper and lower limbs in relation to changes in walking speed.
While hip and knee flexion-extension were found to play the most important role in changing walking speed, changes of large magnitude associated with walking speed occurred at the shoulder, elbow and trunk, apparently the result of changes in balance requirements and to help stabilise the body motion.
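The limitation this abstract attributes to Pearson correlation in Chapter 3 can be illustrated numerically: three phase-shifted copies of one sinusoidal "drive" are perfectly dynamically related, yet PCA on their Pearson correlation matrix spreads the variance over two components. The signals below are synthetic, not the thesis's joint-angle data:

```python
import numpy as np

# Three "joint angle" signals: phase-shifted copies of a single sinusoidal drive
t = np.linspace(0, 2 * np.pi, 200)
angles = np.column_stack([np.sin(t), np.sin(t + 1.0), np.sin(t + 2.0)])

# PCA on the Pearson correlation matrix: the phase shifts lower the pairwise
# correlations, so the first component captures only part of the variance even
# though one underlying drive generates every signal
R = np.corrcoef(angles, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]       # eigenvalues, largest first
share_first = eigvals[0] / eigvals.sum()    # variance explained by PC1
```

Here `share_first` comes out around 0.5 rather than 1.0, which is the overestimated dimensionality the thesis argues an overall-coherence matrix would correct.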
APA, Harvard, Vancouver, ISO, and other styles
19

Santos, Sérgio Manuel Rodrigues dos. "Characterization of the methanol recovery process at Prio Biocombustíveis S. A." Master's thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/16068.

Full text
Abstract:
Master's in Chemical Engineering
In an industrial environment, knowing the process one is working with is crucial to ensuring its good functioning. In the present work, developed at the Prio Biocombustíveis S.A. facilities, the methanol recovery process was characterized using process data collected during the work together with historical process data, starting with the characterization of key process streams. Based on the information retrieved from the stream characterization, the Aspen Plus® process simulation software was used to replicate the process and perform a sensitivity analysis with the objective of assessing the relative importance of certain key process variables (reflux/feed ratio, reflux temperature, reboiler outlet temperature, and methanol, glycerol and water feed compositions). The work proceeded with the application of a set of statistical tools, starting with Principal Components Analysis (PCA), from which the interactions between process variables and their contributions to process variability were studied. Next, Design of Experiments (DoE) was used to acquire experimental data and, with it, create a model for the water content of the distillate; however, the conditions necessary for this method were not met, and it was abandoned. The Multiple Linear Regression (MLR) method was then used with the available data, creating several empirical models for the water in the distillate, the best of which had an R² of 92.93% and an AARD of 19.44%. Although the AARD is still relatively high, the model is adequate for fast estimates of the distillate's quality. As for fouling, its presence was noticed many times during this work. Since it is not possible to measure fouling directly, the reboiler inlet steam pressure was used as an indicator of fouling growth and of how that growth varies with the amount of Used Cooking Oil incorporated in the whole process.
Comparing the steam cost associated with the reboiler's operation when fouling is low (steam pressure of 1.5 bar) and when fouling is high (steam pressure of 3 bar), an increase of about 58% occurs as the fouling builds up.
In an industrial environment, knowing the process one is working with is crucial to ensure its good functioning. In the present work, carried out at the Prio Biocombustíveis facilities, the methanol recovery process was characterized using process data collected during the work and historical production data, starting with the characterization of its key streams. Based on the information obtained from the stream characterization, the chemical process simulation software Aspen Plus® was used to replicate the process and perform a sensitivity analysis in order to discern the relative importance of key process variables (reflux/feed ratio, reflux temperature, reboiler outlet temperature, and methanol, glycerol and water feed compositions). The work continued with the application of a set of statistical tools, starting with Principal Components Analysis, in which the interactions among variables and their contribution to process variability were studied. Next, the Design of Experiments method was used to obtain experimental data with which to create a model capable of simulating the amount of water in the distillate; however, the conditions necessary for this method were not met, leading to its abandonment. The Multiple Linear Regression method was then applied to observational data, from which several empirical models emerged, the best presenting an R² of 92.93% and an AARD of 19.44%. Although the AARD is still relatively high, the model is considered adequate for quick estimates of the distillate condition in the column. The influence of fouling on the process was also noted many times throughout this work.
Since direct measurement of fouling in the process was not possible, the steam pressure at the reboiler inlet was used as an indicator of the fouling state, and served to study the development of fouling and the influence of the amount of UCO incorporated in the process on its formation. Comparing the steam cost associated with the reboiler's operation when the column operates with fouling (3 bar of steam pressure) and without fouling (1.5 bar of steam pressure), an increase of about 58% in costs is observed in the case with greater fouling.
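The regression step this abstract reports - an MLR model scored with R² and AARD - can be sketched with numpy on synthetic data; the variable ranges and coefficients below are assumptions for illustration, not Prio's process values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical process data: reflux/feed ratio, reflux T (°C), reboiler outlet T (°C)
n = 40
X = rng.uniform([0.5, 40.0, 95.0], [2.0, 60.0, 110.0], size=(n, 3))
# Assumed response: water content of the distillate (arbitrary units)
y = 0.8 + 0.5 * X[:, 0] - 0.01 * X[:, 1] + 0.02 * X[:, 2] \
    + rng.normal(scale=0.05, size=n)

# Multiple linear regression by ordinary least squares
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta

# The two goodness-of-fit metrics quoted in the abstract
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
aard = 100 * np.mean(np.abs((y - y_hat) / y))   # average absolute relative deviation, %
```

On real observational data the AARD would be much larger than on this clean synthetic set, as the thesis's 19.44% figure suggests.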
APA, Harvard, Vancouver, ISO, and other styles
20

Sakarya, Hatice. "A Contribution To Modern Data Reduction Techniques And Their Applications By Applied Mathematics And Statistical Learning." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612819/index.pdf.

Full text
Abstract:
High-dimensional data arise in digital image processing, gene expression microarrays, neuronal population activity and financial time series. Dimensionality reduction - extracting low-dimensional structure from high dimension - is a key problem in many areas such as information processing, machine learning, data mining, information retrieval and pattern recognition, where several data reduction techniques are found. This thesis gives a survey of modern data reduction techniques, representing the state of the art in theory, methods and applications, introducing the language of mathematics along the way. This requires special care concerning questions of, e.g., how to understand discrete structures as manifolds, how to identify their structure in preparation for dimension reduction, and how to face the complexity of the algorithmic methods. Special emphasis is placed on Principal Component Analysis, Locally Linear Embedding and the Isomap algorithm. These algorithms have been studied by a research group from Vilnius, Lithuania, by Zeev Volkovich of the Software Engineering Department, ORT Braude College of Engineering, Karmiel, and by others. The main purpose of this study is to compare the results of the three algorithms. In making the comparison, we focus on the results and on running time.
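The three algorithms the survey emphasises can be run side by side with scikit-learn (assuming that library is available); a minimal sketch on the standard swiss-roll data set, a stand-in rather than the thesis's own data:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding, Isomap

# A classic non-linear manifold: points on a "swiss roll" embedded in 3-D
X, _ = make_swiss_roll(n_samples=300, random_state=0)

# Reduce to 2-D with each of the three surveyed methods
emb = {
    "PCA": PCA(n_components=2).fit_transform(X),
    "LLE": LocallyLinearEmbedding(n_neighbors=10, n_components=2,
                                  random_state=0).fit_transform(X),
    "Isomap": Isomap(n_neighbors=10, n_components=2).fit_transform(X),
}
```

On such a curved manifold, PCA (a linear projection) folds the roll onto itself, while LLE and Isomap unroll it; comparing the embeddings and their running times mirrors the comparison the thesis performs.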
APA, Harvard, Vancouver, ISO, and other styles
21

Musafer, Gnai Nishani. "Non-linear univariate and multivariate spatial modelling and optimal design." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/95625/1/Gnai%20Nishani_Musafer_Thesis.pdf.

Full text
Abstract:
This thesis developed a novel adaptive methodology for the optimal design of additional sampling based on a geostatistical model that can preserve both multivariate non-linearity and spatial non-linearity present in spatial variables. This methodology can be applied in mining or any other field that deals with spatial data. The results from the different environment case studies demonstrated the potential of the proposed design methodology.
APA, Harvard, Vancouver, ISO, and other styles
22

Elnady, Maged Elsaid. "On-shaft vibration measurement using a MEMS accelerometer for faults diagnosis in rotating machines." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/onshaft-vibration-measurement-using-a-mems-accelerometer-for-faults-diagnosis-in-rotating-machines(cf9b9848-972d-49ff-a6b0-97bef1ad0e93).html.

Full text
Abstract:
The healthy condition of a rotating machine leads to safe and cheap operation of almost all industrial facilities and mechanical systems. To achieve such a goal, vibration-based condition monitoring has proved to be a well-accepted technique that detects incipient fault symptoms. The conventional way of On-Bearing Vibration Measurement (OBVM) captures symptoms of different faults; however, it requires a relatively expensive setup, additional space for the auxiliary devices and cabling, and an experienced analyst. On-Shaft Vibration Measurement (OSVM) is an emerging method proposed to offer more reliable Faults Diagnosis (FD) tools with fewer sensors, minimal processing time and lower system and maintenance costs. The advancement in sensor and wireless communications technologies enables attaching a MEMS accelerometer with a miniaturised wireless data acquisition unit directly to the rotor without altering the machine dynamics. In this study, OSVM is analysed during constant speed and run-up operations of a test rig. The observations showed response modulation, hence a Finite Element (FE) analysis has been carried out to help interpret the experimental observations. The FE analysis confirmed that the modulation is due to the rotary motion of the on-shaft sensor. A demodulation method has been developed to solve this problem. The FD capability of OSVM has been compared to that of OBVM using conventional analysis, where the former provided more efficient diagnosis with fewer sensors. To incorporate more features, a method has been developed to diagnose faults based on Principal Component Analysis and a Nearest Neighbour classifier. Furthermore, the method is enhanced using Linear Discriminant Analysis to perform the diagnosis without the need for a classifier.
Another fault diagnosis method has been developed that ensures the generalisation of fault features extracted from OSVM data of a specific machine to similar machines mounted on different foundations.
APA, Harvard, Vancouver, ISO, and other styles
23

Andersson, Veronika, and Hanna Sjöstedt. "Improved effort estimation of software projects based on metrics." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5269.

Full text
Abstract:

Saab Ericsson Space AB develops products for space for a predetermined price. Since the price is fixed, it is crucial to have a reliable prediction model to estimate the effort needed to develop the product. Software effort estimation is difficult in general, and it is a problem at the company's software department.

By analyzing metrics collected from former projects, different prediction models are developed to estimate the number of person hours a software project will require. Models for predicting the effort before a project begins are developed first; only a few variables are known at this stage of a project. The models developed are compared to a model currently used at the company. Linear regression models improve the estimation error by nine percentage points, and nonlinear regression models improve the result even more. The model used today is also calibrated to improve its predictions, and a principal component regression model is developed as well. Finally, a model to refine the estimate during an ongoing project is developed. This is a new approach, and comparison with the first estimate is its only evaluation.

The result is an improved prediction model. There are several models that perform better than the one used today. In the discussion, positive and negative aspects of the models are debated, leading to the choice of a model, recommended for future use.
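The core of the approach described above - regress effort on project metrics and judge the model by its relative error - can be sketched with numpy; the project metrics and hours below are invented for illustration, not Saab's data:

```python
import numpy as np

# Hypothetical metrics from past projects: size (kLOC) and requirements count
size  = np.array([ 5.0, 12.0,  3.5, 20.0,  8.0, 15.0, 10.0,  6.5])
reqs  = np.array([  40,   90,   25,  150,   60,  110,   80,   50])
hours = np.array([ 900, 2100,  650, 3600, 1500, 2700, 1900, 1200])  # effort

# Linear regression model for effort as a function of the two metrics
A = np.column_stack([np.ones(len(size)), size, reqs])
beta, *_ = np.linalg.lstsq(A, hours, rcond=None)
pred = A @ beta

# Mean magnitude of relative error, a common effort-estimation criterion
mmre = np.mean(np.abs((hours - pred) / hours))
```

The principal component regression variant mentioned in the abstract would first project `size` and `reqs` (and any further metrics) onto principal components before the least-squares step, which matters when the metrics are strongly correlated, as size and requirements count typically are.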

APA, Harvard, Vancouver, ISO, and other styles
24

Gul, Ahmet Bahtiyar. "Holistic Face Recognition By Dimension Reduction." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1056738/index.pdf.

Full text
Abstract:
Face recognition is a popular research area in which different approaches are studied in the literature. In this thesis, a holistic Principal Component Analysis (PCA) based method, namely the Eigenface method, is studied in detail, and three methods based on it are compared. These are Bayesian PCA, where a Bayesian classifier is applied after dimension reduction with PCA; Subspace Linear Discriminant Analysis (LDA), where LDA is applied after PCA; and Eigenface, where a Nearest Mean Classifier is applied after PCA. All three methods are implemented on the Olivetti Research Laboratory (ORL) face database, the Face Recognition Technology (FERET) database and the CNN-TURK Speakers face database. The results are compared with respect to the effects of changes in illumination, pose and aging. Simulation results show that Subspace LDA and Bayesian PCA perform slightly better than PCA under changes in pose; however, even Subspace LDA and Bayesian PCA do not perform well under changes in illumination and aging, although they perform better than PCA.
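The Eigenface variant compared above (PCA for dimension reduction, then a nearest-mean classifier) has a compact numpy sketch. The 16-pixel "faces" here are random synthetic vectors, not ORL or FERET images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "face" vectors: two subjects, tiny 16-pixel images
n_per, dim = 10, 16
faces_a = rng.normal(size=(n_per, dim)) + 3.0   # subject 0 cluster
faces_b = rng.normal(size=(n_per, dim)) - 3.0   # subject 1 cluster
X = np.vstack([faces_a, faces_b])
labels = np.array([0] * n_per + [1] * n_per)

# Eigenfaces: principal components of the mean-centred training images
mean_face = X.mean(axis=0)
Xc = X - mean_face
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = Vt[:3]                      # keep 3 components
proj = Xc @ eigenfaces.T                 # training projections

# Nearest-mean classification in the reduced eigenface space
class_means = np.array([proj[labels == c].mean(axis=0) for c in (0, 1)])

def classify(img):
    p = (img - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(class_means - p, axis=1)))
```

Subspace LDA would insert an LDA step between the projection and the classifier; Bayesian PCA would replace the nearest-mean rule with a Bayesian classifier on the same projections.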
APA, Harvard, Vancouver, ISO, and other styles
25

Bayik, Tuba Makbule. "Automatic Target Recognition In Infrared Imagery." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605388/index.pdf.

Full text
Abstract:
The task of automatically recognizing targets in IR imagery has a history of approximately 25 years of research and development. ATR is an application of pattern recognition and scene analysis in the defense industry, and it is still a challenging problem. This thesis may be viewed as an exploratory study of the ATR problem in which recognition algorithms reported to perform well in the literature are implemented. Throughout the study, PCA, subspace LDA, ICA, the nearest mean classifier, the K nearest neighbors classifier, the nearest neighbor classifier and the LVQ classifier are implemented, and their performances are compared in terms of recognition rate. According to the simulation results, the system that uses ICA as the feature extractor and LVQ as the classifier gives the best results. The good performance of this system is due to the higher-order statistics of the data and the success of LVQ in modifying the decision boundaries.
APA, Harvard, Vancouver, ISO, and other styles
26

Ankoud, Farah. "Modélisation d’un parc de machines pour la surveillance. : Application aux composants en centrale nucléaire." Thesis, Vandoeuvre-les-Nancy, INPL, 2011. http://www.theses.fr/2011INPL102N/document.

Full text
Abstract:
This thesis deals with the design of system monitoring methods from data collected on identically designed components operated by several processes. We are interested in diagnosis approaches without an a priori model, and more particularly in building models of the normal behaviour of the components from the data collected on the fleet. We approached this as a multi-task learning problem, which consists in building the models of each component jointly, the underlying assumption being that these models share common parts. In the second chapter, we first consider linear multiple-input/single-output models with a priori known structures. In a first approach, after analysing the models obtained by linear regression for the machines taken independently of one another, their common parts are identified, and the model coefficients are then re-estimated to take the common parts into account. In a second approach, the model coefficients and their common parts are identified simultaneously. We then seek to obtain directly, by PCA, the redundancy relations existing among the measured variables; this removes the assumptions on the knowledge of the model structures and takes into account the presence of errors on all the variables. In the third chapter, a study of the discernibility of the models is carried out: the aim is to determine the domains of variation of the input variables that guarantee the discernibility of the model outputs. This set-inversion problem is solved either by using boxes circumscribing the different domains or by a paving approximation of these domains. Finally, the proposed approaches are applied to heat exchanger simulators
This thesis deals with the design of diagnosis systems using data collected on identical machines working under different conditions. We are interested in fault diagnosis methods without an a priori model and in modelling a fleet of machines using the data collected on all the machines. The problem can hence be formulated as a multi-task learning problem in which models of the different machines are constructed simultaneously; these models are supposed to share some common parts. In the second chapter, we first consider linear models of the multiple-input/single-output type. A first approach consists in analyzing the linear regression models generated using the data of each machine independently of the others in order to identify their common parts; using this knowledge, new models for the machines are generated. The second approach consists in identifying the coefficients of the models and their common parts simultaneously. Secondly, redundancy models are sought using PCA. This way, no hypothesis is needed on the structures of the models describing the normal behavior of each machine. In addition, this method takes into consideration the errors existing on all the variables, since it does not differentiate between input and output variables. In the third chapter, a study of the discernibility of the outputs of the models is carried out. The problem consists in identifying the ranges of variation of the input variables that lead to discernible model outputs. This problem is solved using either boxes circumscribing the different domains or a paving method. Finally, the multi-task modelling approaches are applied to simulators of heat exchangers
APA, Harvard, Vancouver, ISO, and other styles
27

Vaizurs, Raja Sarath Chandra Prasad. "Atrial Fibrillation Signal Analysis." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3386.

Full text
Abstract:
Atrial fibrillation (AF) is the most common type of cardiac arrhythmia encountered in clinical practice and is associated with increased mortality and morbidity. Identification of the sources of AF has been a goal of researchers for over 20 years. Current treatments such as cardioversion, radiofrequency ablation, and multiple drugs have reduced the incidence of AF. Nevertheless, these treatments succeed in only 35-40% of AF patients, as they have limited effect in maintaining the patient in normal sinus rhythm. The problem stems from the fact that no methods have been developed to analyze the electrical activity generated by the cardiac cells during AF and to detect the aberrant atrial tissue that triggers it. In clinical practice, the sources triggering AF are generally expected to be at one of the four pulmonary veins in the left atrium. Classifying the signals originating from the four pulmonary veins in the left atrium is the mainstay of the signal analysis in this thesis, ultimately leading to correct localization of the source triggering AF. Unlike much current research, which uses ECG signals for AF analysis, we collect intracardiac signals along with ECG signals. AF signals collected from catheters placed inside the heart give a better understanding of AF characteristics than the ECG. In recent years, the mechanisms leading to AF induction have begun to be explored, but current research and diagnosis of AF mainly involve inspection of the 12-lead ECG, QRS subtraction methods, and spectral analysis to find the fibrillation rate, and are limited to establishing its presence or absence. The main goal of this thesis research is to develop a methodology and algorithm for finding the source of AF. Pattern recognition techniques were used to classify the AF signals originating from the four pulmonary veins.
The classification of AF signals recorded by a stationary intracardiac catheter was based on dominant frequency, frequency distribution and normalized power. Principal Component Analysis was used to reduce the dimensionality and, further, Linear Discriminant Analysis was used as the classification technique. An algorithm was developed and tested during recorded periods of AF with promising results.
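The PCA-then-LDA pipeline described above is a standard one. As a rough, hypothetical illustration (synthetic feature vectors standing in for the intracardiac recordings, and class labels 0-3 standing in for the four pulmonary veins; component counts are arbitrary assumptions, not the thesis's choices), it can be sketched with scikit-learn:

```python
# Illustrative sketch only: PCA for dimensionality reduction followed by
# LDA classification, on synthetic data standing in for AF signal features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_per_class, n_features = 50, 40
# Four synthetic classes, shifted means as a stand-in for the four veins.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On well-separated synthetic classes like these, the pipeline classifies nearly perfectly; the point is only the shape of the workflow, not the numbers.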
APA, Harvard, Vancouver, ISO, and other styles
28

Thrush, Corey. "Modern Analysis of Passing Plays in the National Football League." Bowling Green State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1624982176445411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Einestam, Ragnar, and Karl Casserfelt. "PiEye in the Wild: Exploring Eye Contact Detection for Small Inexpensive Hardware." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20696.

Full text
Abstract:
Eye contact detection sensors make it possible to infer user attention, which a system can use in a multitude of ways, including supporting human-computer interaction and measuring human attention patterns. In this thesis we attempt to build a versatile eye contact sensor using a Raspberry Pi that is suited for real-world practical usage. To ensure practicality, we constructed a set of criteria for the system based on previous implementations. To meet these criteria, we opted for an appearance-based machine learning method in which we train a classifier with training images to infer whether users are looking at the camera. Our aim was to investigate how well we could detect eye contact on the Raspberry Pi in terms of accuracy, speed and range. After extensive testing of combinations of four different feature extraction methods, we found that Linear Discriminant Analysis compression of pixel data provided the best overall accuracy, while Principal Component Analysis compression performed best when tested on images from the same dataset as the training data. When investigating the speed of the system, we found that down-scaling input images had a large effect on speed, but also lowered the accuracy and range. While we managed to mitigate the effect that a reduced image scale had on the accuracy, the range of the sensor remains relative to the scale of the input images and, by extension, to the speed.
APA, Harvard, Vancouver, ISO, and other styles
30

Onder, Murat. "Face Detection And Active Robot Vision." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605290/index.pdf.

Full text
Abstract:
The main task in this thesis is to design a robot vision system with face detection and tracking capability. There are therefore two main parts to the work. First, a face must be detected in an image taken from the camera on the robot. This is a demanding real-time image processing task, so timing constraints are very important; a processing rate of 1 frame/second was targeted, requiring a fast face detection algorithm. The Eigenface method and the Subspace LDA (Linear Discriminant Analysis) method were implemented, tested and compared for face detection, and the Eigenface method proposed by Turk and Pentland was chosen. The images are first passed through a number of preprocessing algorithms, such as skin detection and histogram equalization, to obtain better performance. After this filtering, the face candidate regions are put through the face detection algorithm to determine whether the image contains a face. Some modifications were applied to the eigenface algorithm to detect faces better and faster. Second, the robot must move towards the face in the image. This task involves robot motion. The robot used for this purpose is a Pioneer 2-DX8 Plus, a product of ActivMedia Robotics Inc.; only the interfaces needed to move the robot were implemented in the thesis software. The robot is to detect faces at different distances and adjust its position according to the distance between the human and the robot. A scaling mechanism must therefore be applied either to the training images or to the input image taken from the camera. Because of the timing constraint and the low camera resolution, only a limited number of scales is used in the face detection process; consequently, faces of people who are very far from or very close to the robot are not detected. A background-independent face detection system was attempted; however, the resulting algorithm is slightly dependent on the background. There are no other constraints in the system.
APA, Harvard, Vancouver, ISO, and other styles
31

Solomon, Mary Joanna. "Multivariate Analysis of Korean Pop Music Audio Features." Bowling Green State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1617105874719868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition." Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030619.162803.

Full text
Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: a parameter extraction step and a feature extraction step. In the parameter extraction step, information relevant for pattern classification is extracted from the input data in the form of a parameter vector. In the feature extraction step, the parameter vector is transformed into a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are the two popular independent feature extraction algorithms. Both extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix, but they optimize the transformation matrix with different intentions. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation to within-class variation when projecting the original feature space to a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria differ from the classifier's minimum classification error criterion, which may cause inconsistency between the feature extraction and classification stages of a pattern recognizer and, consequently, degrade classifier performance. A direct way to overcome this problem is to conduct feature extraction and classification jointly with a consistent criterion, and the Minimum Classification Error (MCE) training algorithm provides such an integrated framework. The MCE algorithm was first proposed for optimizing classifiers; it is a type of discriminative learning algorithm, but it achieves minimum classification error directly. The flexibility of the MCE framework makes it convenient to conduct feature extraction and classification jointly.
Conventional feature extraction and pattern classification algorithms (LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier) are linear algorithms. The advantage of linear algorithms is their simplicity and ability to reduce feature dimensionality. However, they have the limitation that the decision boundaries they generate are linear and have little computational flexibility. SVM is a more recently developed, integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that classification which affords dot-products can be computed efficiently in higher-dimensional feature spaces. Classes which are not linearly separable in the original parametric space can be linearly separated in the higher-dimensional feature space. Because of this, SVM has the advantage that it can handle classes with complex nonlinear decision boundaries. However, SVM is a highly integrated and closed pattern classification system, and it is very difficult to adopt feature extraction into its framework; thus SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction, and proposes the application of MCE training algorithms to joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithm in joint feature extraction and classification tasks. SVM, as a non-linear pattern classification system, is also investigated in this thesis. A reduced-dimensional SVM (RDSVM) is proposed to enable SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are tested and compared, first on a number of small databases, such as the Deterding Vowels database, Fisher's IRIS database and the German GLASS database. They are then tested in a large-scale speech recognition experiment based on the TIMIT database.
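The contrast the abstract draws between PCA's and LDA's optimization criteria can be made concrete with a small synthetic sketch (illustrative only, not from the thesis): when the highest-variance direction is not the discriminative one, the two methods pick different projection directions.

```python
# Two classes separated along x, but with large shared variance along y.
# PCA (max total variance) picks the y axis; Fisher's LDA (max ratio of
# between-class to within-class variation) picks the discriminative x axis.
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[0.1, 0.0], [0.0, 4.0]])    # variance dominated by y
X0 = rng.multivariate_normal([-1.0, 0.0], cov, size=300)
X1 = rng.multivariate_normal([+1.0, 0.0], cov, size=300)
X = np.vstack([X0, X1])

# PCA direction: leading eigenvector of the total covariance matrix.
total_cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(total_cov)
pca_dir = eigvecs[:, np.argmax(eigvals)]

# Two-class Fisher LDA direction: Sw^{-1} (m1 - m0).
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
lda_dir = np.linalg.solve(Sw, m1 - m0)
lda_dir /= np.linalg.norm(lda_dir)
```

Here `pca_dir` ends up essentially along y and `lda_dir` along x, which is exactly the inconsistency between the two criteria that motivates joint MCE-style training.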
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365680.

Full text
Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: a parameter extraction step and a feature extraction step. In the parameter extraction step, information relevant for pattern classification is extracted from the input data in the form of a parameter vector. In the feature extraction step, the parameter vector is transformed into a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are the two popular independent feature extraction algorithms. Both extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix, but they optimize the transformation matrix with different intentions. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation to within-class variation when projecting the original feature space to a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria differ from the classifier's minimum classification error criterion, which may cause inconsistency between the feature extraction and classification stages of a pattern recognizer and, consequently, degrade classifier performance. A direct way to overcome this problem is to conduct feature extraction and classification jointly with a consistent criterion, and the Minimum Classification Error (MCE) training algorithm provides such an integrated framework. The MCE algorithm was first proposed for optimizing classifiers; it is a type of discriminative learning algorithm, but it achieves minimum classification error directly. The flexibility of the MCE framework makes it convenient to conduct feature extraction and classification jointly.
Conventional feature extraction and pattern classification algorithms (LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier) are linear algorithms. The advantage of linear algorithms is their simplicity and ability to reduce feature dimensionality. However, they have the limitation that the decision boundaries they generate are linear and have little computational flexibility. SVM is a more recently developed, integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that classification which affords dot-products can be computed efficiently in higher-dimensional feature spaces. Classes which are not linearly separable in the original parametric space can be linearly separated in the higher-dimensional feature space. Because of this, SVM has the advantage that it can handle classes with complex nonlinear decision boundaries. However, SVM is a highly integrated and closed pattern classification system, and it is very difficult to adopt feature extraction into its framework; thus SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction, and proposes the application of MCE training algorithms to joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithm in joint feature extraction and classification tasks. SVM, as a non-linear pattern classification system, is also investigated in this thesis. A reduced-dimensional SVM (RDSVM) is proposed to enable SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are tested and compared, first on a number of small databases, such as the Deterding Vowels database, Fisher's IRIS database and the German GLASS database. They are then tested in a large-scale speech recognition experiment based on the TIMIT database.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Microelectronic Engineering
Full Text
APA, Harvard, Vancouver, ISO, and other styles
34

Mikušková, Martina. "Statistické modelování znečištění ovzduší prašným aerosolem." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231405.

Full text
Abstract:
The diploma thesis deals with multivariate statistical methods and their environmental applications. The theoretical part is devoted to selected methods of linear regression analysis and principal component analysis, and the models of classical and robust factor analysis are also described. In the practical part of the thesis, the main emission sources of PM1 aerosols in the summer and winter periods in Brno and Šlapanice are determined using classical factor analysis. The main aerosol emission sources in summer and winter in Šlapanice are also identified using robust factor analysis. Furthermore, the concentrations of PM1 aerosols in the summer and winter periods in Brno and Šlapanice are predicted using a linear regression model.
APA, Harvard, Vancouver, ISO, and other styles
35

Aygar, Alper. "Doppler Radar Data Processing And Classification." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609890/index.pdf.

Full text
Abstract:
In this thesis, improving the performance of automatic recognition of Doppler radar targets is studied. The radar used in this study is a ground-surveillance Doppler radar. The target types are car, truck, bus, tank, helicopter, moving man and running man. The input to this thesis is the output of real Doppler radar signals, normalized and preprocessed (TRP vectors: Target Recognition Pattern vectors) in the doctoral thesis by Erdogan (2002). TRP vectors are Doppler radar target signals normalized and homogenized with respect to target speed, target aspect angle and target range. Some target classes have repetitions in time in their TRPs, and the use of these repetitions to improve target type classification performance is studied. K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for Doppler radar target classification and the results are evaluated. Before classification, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), NMF (Nonnegative Matrix Factorization) and ICA (Independent Component Analysis) are applied to the normalized Doppler radar signals for efficient feature extraction and dimension reduction. These techniques transform the input vectors, the normalized Doppler radar signals, into another space. The effects of these feature extraction algorithms, and of the use of the repetitions in Doppler radar target signals, on the Doppler radar target classification performance are studied.
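The extract-then-classify workflow described above can be sketched roughly as follows (synthetic non-negative data standing in for the normalized TRP vectors; the component counts and k value are arbitrary assumptions, not taken from the thesis):

```python
# Illustrative sketch: reduce dimensionality with PCA, NMF and FastICA in
# turn, then score a k-nearest-neighbour classifier on each reduced space.
import numpy as np
from sklearn.decomposition import NMF, PCA, FastICA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
# Non-negative inputs so that NMF is applicable; three synthetic classes.
X = np.vstack([rng.gamma(shape=2.0 + c, scale=1.0, size=(60, 30))
               for c in range(3)])
y = np.repeat(np.arange(3), 60)

scores = {}
for name, reducer in [("PCA", PCA(n_components=5)),
                      ("NMF", NMF(n_components=5, max_iter=500)),
                      ("ICA", FastICA(n_components=5, max_iter=500))]:
    Z = reducer.fit_transform(X)
    scores[name] = cross_val_score(
        KNeighborsClassifier(n_neighbors=5), Z, y, cv=3).mean()
```

On real radar data the relative ranking of the extractors is an empirical question, which is exactly what the thesis evaluates.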
APA, Harvard, Vancouver, ISO, and other styles
36

Mohammadzadeh, Soroush. "System identification and control of smart structures: PANFIS modeling method and dissipativity analysis of LQR controllers." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/868.

Full text
Abstract:
"Maintaining an efficient and reliable infrastructure requires continuous monitoring and control. In order to accomplish these tasks, algorithms are needed to process large sets of data and to build models from these processed data sets. For this reason, computationally efficient and accurate modeling algorithms, along with data compression techniques and optimal yet practical control methods, are in demand. These tools can help model structures and improve their performance. In this thesis, these two aspects are addressed separately. A principal component analysis based adaptive neuro-fuzzy inference system is proposed for fast and accurate modeling of the time-dependent behavior of a structure integrated with a smart damper. Since a smart damper can only dissipate energy from structures, a challenge is to evaluate the dissipativity of optimal control methods for smart dampers, to decide whether the optimal controller can be realized using the smart damper. Therefore, a generalized deterministic definition of dissipativity is proposed, and a commonly used controller, LQR, is proved to be dissipative. Examples are provided to illustrate the effectiveness of the proposed modeling algorithm and the dissipativity of the LQR controller."
APA, Harvard, Vancouver, ISO, and other styles
37

Lamonica, Laura de Castro. "Avaliação da qualidade do diagnóstico do meio biótico de EIAs do Estado de São Paulo." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/100/100136/tde-16112016-161910/.

Full text
Abstract:
The Brazilian National Environmental Policy aims to reconcile socio-economic development with environmental quality, and Environmental Impact Assessment (EIA), one of its instruments, is applied to development projects through the Environmental Impact Statement (EIS). EIS preparation involves a baseline step for analyzing the environmental quality of the area. The quality of the EIS and of the baseline has been criticized and discredited by society, especially by the scientific community and environmental groups, and this quality directly influences the procedural effectiveness of the EIA and its role in decision making; an evaluation of the quality of this EIS step therefore contributes to a more effective application of the instrument. The research aimed to evaluate the quality of the biotic baseline studies of EISs from the state of São Paulo prepared between 2005 and 2014. We assessed 55 biotic baseline studies and 35 terms of reference (TRs) of EISs with a checklist consisting of recommendations from the literature and regulations for biotic baseline studies. The results were analyzed qualitatively and compared with the recommendations of the TRs. The baseline quality was then examined from three perspectives: approval of the studies, project type, and year of EIS preparation. Finally, nonlinear principal component analysis (NLPCA) was applied to the baseline quality data to test this tool for identifying the criteria that determine baseline quality and the possible relations among these criteria and among the studies. The quality of the biotic baselines analyzed was more satisfactory for descriptive than for analytical aspects. According to the NLPCA, criteria related to quantitative data collection and surveys of rare species were determinants of study quality. Survey time and seasonality were considered unsatisfactory and were statistically related to the identification of the degree of vulnerability of the area. The results highlight the importance of systematizing biodiversity data in reliable, up-to-date sources for the preparation and review of baselines, and of more specific TRs: although the TRs are being complied with by the studies, they are generic and contain more descriptive than analytical recommendations. There was no representative difference between the baseline quality of approved and non-approved studies; the hydraulic works sector showed the most satisfactory evaluations, which was emphasized by the NLPCA and may be related to project size; and the temporal analysis showed a trend of improvement in both studies and TRs. Both the checklist and NLPCA proved to be suitable tools for investigating the quality of biotic baseline studies in EIA
APA, Harvard, Vancouver, ISO, and other styles
38

Macenauer, Oto. "Identifikace obličeje." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237221.

Full text
Abstract:
This document introduces the reader to the area of face recognition. Miscellaneous methods are mentioned and categorized so that the process of face recognition can be understood. The main focus of this document is on the issues of current face recognition and the possibilities for solving these shortcomings so that face recognition can be widely deployed. The second part of this work is focused on the implementation of the selected methods, Linear Discriminant Analysis and Principal Component Analysis. These methods are compared to each other and results are given at the end of the work.
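Both methods compared above start from a PCA-style projection of vectorized face images. A minimal, assumption-laden sketch of that eigenface-style PCA step (random vectors standing in for real images, tiny dimensions chosen purely for illustration):

```python
# Eigenface-style PCA sketch: centre the training "images", take the top-k
# right singular vectors as an orthonormal basis, project and reconstruct.
import numpy as np

rng = np.random.default_rng(3)
n_images, n_pixels, k = 40, 64, 8          # tiny stand-in for real face data
images = rng.normal(size=(n_images, n_pixels))

mean_face = images.mean(axis=0)
centered = images - mean_face
# SVD of the centred data: rows of Vt are the principal directions.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:k]                         # (k, n_pixels) orthonormal basis

def project(img):
    """Coordinates of one image in the eigenface subspace."""
    return eigenfaces @ (img - mean_face)

weights = project(images[0])
reconstruction = mean_face + eigenfaces.T @ weights
```

Recognition then compares `weights` vectors of a probe image against those of enrolled images; the LDA variant additionally applies a discriminant projection on top of this subspace.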
APA, Harvard, Vancouver, ISO, and other styles
39

Santos, Anderson Rodrigo dos. "Identificação de faces humanas através de PCA-LDA e redes neurais SOM." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-21042006-222231/.

Full text
Abstract:
The use of biometric face data for automatic identity verification is one of the biggest challenges in secure access control systems. The process is extremely complex and influenced by many factors related to the shape, position, illumination, rotation, translation, disguise and occlusion of facial features. Many face recognition techniques exist today. This work investigates the identification of faces in the ORL database using different training sets. An algorithm for face recognition is proposed based on the subspace LDA (PCA + LDA) technique, using a SOM neural network to represent each class (face) in the classification/identification stage. The subspace LDA method extracts the features most important for identifying the previously known faces in the database, creating a lower-dimensional space that is more discriminative than the original one. The SOM networks are responsible for memorizing the characteristics of each class. The algorithm offers high performance (recognition rates between 97% and 98%) in the face of the adversities and sources of error that hamper traditional face recognition methods.
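The SOM's role here is to memorize class characteristics by fitting a small grid of prototype vectors to the feature data. A toy NumPy sketch of SOM training (grid size, learning-rate schedule and data are arbitrary choices for illustration, not the thesis's implementation):

```python
# Toy self-organizing map (SOM): prototype grid trained with a shrinking
# Gaussian neighbourhood, pulling node weights toward the data cloud.
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    dim = data.shape[1]
    weights = rng.normal(size=(rows, cols, dim))
    # Grid coordinates of each node, for neighbourhood distances.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1.0 - t)                 # linearly decaying rate
            sigma = sigma0 * (1.0 - t) + 1e-3    # shrinking neighbourhood
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # Gaussian neighbourhood around the BMU on the grid.
            grid_d2 = ((coords - np.array(bmu, float)) ** 2).sum(axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

data = np.random.default_rng(4).normal(loc=2.0, size=(100, 3))
weights = train_som(data)
```

In the thesis's setting, one such map per class would be trained on that class's subspace-LDA features, and a probe face assigned to the class whose map matches it best.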
APA, Harvard, Vancouver, ISO, and other styles
40

Pinto, Adena. "The Landscape of Food and Beverage Advertising to Children and Adolescents on Canadian Television." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41408.

Full text
Abstract:
Background: Obesity and its comorbidities among Canadian youth have paralleled trends in the consumption of nutrient-poor foods marketed by the food industry. In Canada, food marketing is largely self-regulated by the food industry under the Canadian Children's Food and Beverage Advertising Initiative (CAI). Methods: Records of public television programming were used to benchmark the volume of food advertising targeted at preschoolers, children, adolescents, and adults on Canadian television. Food advertising rates and frequencies were compared by age group, television station, month, food category, and company, using regression modelling, chi-square tests and principal component analysis. Results: Food advertising rates differed significantly by all independent variables. Fast food companies dominated advertising during adolescent programming, while food and beverage manufacturers dominated advertising during programming for all other age groups. CAI signatories contributed more advertising during children's programming than non-signatories. Conclusion: The failure of self-regulation to limit food advertising to Canadian youth demonstrates the need for statutory restrictions to rectify youth's obesogenic media environments and their far-reaching health effects.
APA, Harvard, Vancouver, ISO, and other styles
41

LI, Songyu. "A New Hands-free Face to Face Video Communication Method : Profile based frontal face video reconstruction." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-152457.

Full text
Abstract:
This thesis proposes a method to reconstruct a frontal facial video based on encoding done with the facial profile of another video sequence. The reconstructed facial video will have facial expression changes similar to the changes in the profile video. First, the profiles for both the reference video and the test video are captured by edge detection. Then, asymmetrical principal component analysis is used to model the correspondence between the profile and the frontal face. This allows encoding from a profile and decoding of the frontal face of another video. Another solution is to use dynamic time warping to match the profiles and select the best matching corresponding frontal face frame for reconstruction. With this method, we can reconstruct the test frontal video so that its facial expressions change similarly to those in the reference video. To improve the quality of the resulting video, Local Linear Embedding is used to give it a smoother transition between frames.
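The dynamic-time-warping alternative mentioned in the abstract — matching a test profile against stored reference profiles and selecting the best-matching frame — can be sketched as follows. The 1-D "profile signatures" here are synthetic sine waves standing in for edge-detected facial profiles; the real method operates on profile curves extracted from video.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Synthetic stand-ins for profile signatures: the test profile is a
# phase-shifted copy of the third reference frame's signature.
t = np.linspace(0, 2 * np.pi, 60)
reference_frames = [np.sin(t), np.sin(2 * t), np.sin(3 * t)]
test_profile = np.sin(3 * (t + 0.2))

# Pick the reference frame whose profile is closest under DTW
best = min(range(len(reference_frames)),
           key=lambda k: dtw_distance(test_profile, reference_frames[k]))
print(best)
```

Because DTW aligns sequences non-linearly in time, the phase-shifted test profile still matches its own reference frame better than the other two.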
APA, Harvard, Vancouver, ISO, and other styles
42

Švábek, Hynek. "Nalezení a rozpoznání dominantních rysů obličeje." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237180.

Full text
Abstract:
This thesis deals with the increasingly developing field of biometric systems, specifically the identification of faces. It discusses the possibilities of face localization in pictures and their normalization, which is necessary due to external influences and the influence of different scanning techniques. It describes various techniques for localizing dominant features of the face such as the eyes, mouth or nose, and, not least, different approaches to the identification of faces. Furthermore, it deals with an implementation of the Dominant Face Features Recognition application, which demonstrates chosen methods for localization of the dominant features (Hough Transform for Circles, localization of the mouth using the location of the eyes) and for identification of a face (Linear Discriminant Analysis, Kernel Discriminant Analysis). The last part of the thesis contains a summary of achieved results and a discussion.
APA, Harvard, Vancouver, ISO, and other styles
43

Gao, Hui. "Extracting key features for analysis and recognition in computer vision." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1141770523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hu, Wenbiao. "Applications of Spatio-temporal Analytical Methods in Surveillance of Ross River Virus Disease." Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16109/1/Wenbiao_Hu_Thesis.pdf.

Full text
Abstract:
The incidence of many arboviral diseases is largely associated with social and environmental conditions. Ross River virus (RRV) is the most prevalent arboviral disease in Australia. It has long been recognised that the transmission pattern of RRV is sensitive to socio-ecological factors including climate variation, population movement, mosquito-density and vegetation types. This study aimed to assess the relationships between socio-environmental variability and the transmission of RRV using spatio-temporal analytic methods. Computerised data files of daily RRV disease cases and daily climatic variables in Brisbane, Queensland during 1985-2001 were obtained from the Queensland Department of Health and the Australian Bureau of Meteorology, respectively. Available information on other socio-ecological factors was also collected from relevant government agencies as follows: 1) socio-demographic data from the Australia Bureau of Statistics; 2) information on vegetation (littoral wetlands, ephemeral wetlands, open freshwater, riparian vegetation, melaleuca open forests, wet eucalypt, open forests and other bushland) from Brisbane City Council; 3) tidal activities from the Queensland Department of Transport; and 4) mosquito-density from Brisbane City Council. Principal components analysis (PCA) was used as an exploratory technique for discovering spatial and temporal pattern of RRV distribution. The PCA results show that the first principal component accounted for approximately 57% of the information, which contained the four seasonal rates and loaded highest and positively for autumn. K-means cluster analysis indicates that the seasonality of RRV is characterised by three groups with high, medium and low incidence of disease, and it suggests that there are at least three different disease ecologies. 
The variation in spatio-temporal patterns of RRV indicates a complex ecology that is unlikely to be explained by a single dominant transmission route across these three groupings. Therefore, there is need to explore socio-economic and environmental determinants of RRV disease at the statistical local area (SLA) level. Spatial distribution analysis and multiple negative binomial regression models were employed to identify the socio-economic and environmental determinants of RRV disease at both the city and local (ie, SLA) levels. The results show that RRV activity was primarily concentrated in the northeast, northwest and southeast areas in Brisbane. The negative binomial regression models reveal that RRV incidence for the whole of the Brisbane area was significantly associated with Southern Oscillation Index (SOI) at a lag of 3 months (Relative Risk (RR): 1.12; 95% confidence interval (CI): 1.06 - 1.17), the proportion of people with lower levels of education (RR: 1.02; 95% CI: 1.01 - 1.03), the proportion of labour workers (RR: 0.97; 95% CI: 0.95 - 1.00) and vegetation density (RR: 1.02; 95% CI: 1.00 - 1.04). However, RRV incidence for high risk areas (ie, SLAs with higher incidence of RRV) was significantly associated with mosquito density (RR: 1.01; 95% CI: 1.00 - 1.01), SOI at a lag of 3 months (RR: 1.48; 95% CI: 1.23 - 1.78), human population density (RR: 3.77; 95% CI: 1.35 - 10.51), the proportion of indigenous population (RR: 0.56; 95% CI: 0.37 - 0.87) and the proportion of overseas visitors (RR: 0.57; 95% CI: 0.35 - 0.92). It is acknowledged that some of these risk factors, while statistically significant, are small in magnitude. However, given the high incidence of RRV, they may still be important in practice. The results of this study suggest that the spatial pattern of RRV disease in Brisbane is determined by a combination of ecological, socio-economic and environmental factors. 
The possibility of developing an epidemic forecasting system for RRV disease was explored using the multivariate Seasonal Auto-regressive Integrated Moving Average (SARIMA) technique. The results of this study suggest that climatic variability, particularly precipitation, may have played a significant role in the transmission of RRV disease in Brisbane. This finding cannot entirely be explained by confounding factors such as other socio-ecological conditions because they have been unlikely to change dramatically on a monthly time scale in this city over the past two decades. SARIMA models show that monthly precipitation at a lag of 2 months (β = 0.004, p = 0.031) was statistically significantly associated with RRV disease. It suggests that there may be 50 more cases a year for an increase of 100 mm precipitation on average in Brisbane. The predictive values in the model were generally consistent with actual values (root-mean-square error (RMSE): 1.96). Therefore, this model may have applications as a decision support tool in disease control and risk-management planning programs in Brisbane. Polynomial distributed lag (PDL) time series regression models were fitted to examine the associations between rainfall, mosquito density and the occurrence of RRV after adjusting for season and auto-correlation. The PDL model was used because rainfall and mosquito density can affect not merely RRV occurring in the same month, but in several subsequent months. The rationale for the use of the PDL technique is that it increases the precision of the estimates. We developed an epidemic forecasting model to predict incidence of RRV disease. The results show that 95% and 85% of the variation in the RRV disease was accounted for by the mosquito density and rainfall, respectively. The predictive values in the model were generally consistent with actual values (RMSE: 1.25). The model diagnosis reveals that the residuals were randomly distributed with no significant auto-correlation.
The results of this study suggest that PDL models may be better than SARIMA models (R-square increased and RMSE decreased). The findings of this study may facilitate the development of early warning systems for the control and prevention of this widespread disease. Further analyses were conducted using classification trees to identify major mosquito species of Ross River virus (RRV) transmission and explore the threshold of mosquito density for RRV disease in Brisbane, Australia. The results show that Ochlerotatus vigilax (RR: 1.028; 95% CI: 1.001 - 1.057) and Culex annulirostris (RR: 1.013, 95% CI: 1.003 - 1.023) were significantly associated with RRV disease cycles at a lag of 1 month. The presence of RRV was associated with average monthly mosquito density of 72 Ochlerotatus vigilax and 52 Culex annulirostris per light trap. These results may also have applications as a decision support tool in disease control and risk management planning programs. As RRV has significant impact on population health, industry, and tourism, it is important to develop an epidemic forecast system for this disease. The results of this study show the disease surveillance data can be integrated with social, biological and environmental databases. These data can provide additional input into the development of epidemic forecasting models. These attempts may have significant implications in environmental health decision-making and practices, and may help health authorities determine public health priorities more wisely and use resources more effectively and efficiently.
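The distributed-lag idea behind the PDL models in this abstract — rainfall influencing case counts not only in the current month but at lags of one and two months — can be illustrated with a minimal least-squares sketch on synthetic data. The thesis fits polynomial distributed lag and SARIMA models; the coefficients, lag structure and data below are invented purely for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
months = 240

# Synthetic monthly rainfall (mm) and case counts responding at lags 1 and 2
rain = rng.gamma(shape=2.0, scale=50.0, size=months)
noise = rng.normal(0, 1.0, months)
cases = np.empty(months)
for t in range(2, months):
    cases[t] = 5.0 + 0.04 * rain[t - 1] + 0.02 * rain[t - 2] + noise[t]

# Distributed-lag design matrix: intercept, lag-1 rainfall, lag-2 rainfall
y = cases[2:]
X = np.column_stack([np.ones(months - 2),
                     rain[1:months - 1],   # lag 1
                     rain[0:months - 2]])  # lag 2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # roughly [5, 0.04, 0.02]
```

A full PDL model additionally constrains the lag coefficients to lie on a polynomial, which is what buys the precision gain mentioned in the abstract.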
APA, Harvard, Vancouver, ISO, and other styles
45

Hu, Wenbiao. "Applications of Spatio-temporal Analytical Methods in Surveillance of Ross River Virus Disease." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16109/.

Full text
Abstract:
The incidence of many arboviral diseases is largely associated with social and environmental conditions. Ross River virus (RRV) is the most prevalent arboviral disease in Australia. It has long been recognised that the transmission pattern of RRV is sensitive to socio-ecological factors including climate variation, population movement, mosquito-density and vegetation types. This study aimed to assess the relationships between socio-environmental variability and the transmission of RRV using spatio-temporal analytic methods. Computerised data files of daily RRV disease cases and daily climatic variables in Brisbane, Queensland during 1985-2001 were obtained from the Queensland Department of Health and the Australian Bureau of Meteorology, respectively. Available information on other socio-ecological factors was also collected from relevant government agencies as follows: 1) socio-demographic data from the Australia Bureau of Statistics; 2) information on vegetation (littoral wetlands, ephemeral wetlands, open freshwater, riparian vegetation, melaleuca open forests, wet eucalypt, open forests and other bushland) from Brisbane City Council; 3) tidal activities from the Queensland Department of Transport; and 4) mosquito-density from Brisbane City Council. Principal components analysis (PCA) was used as an exploratory technique for discovering spatial and temporal pattern of RRV distribution. The PCA results show that the first principal component accounted for approximately 57% of the information, which contained the four seasonal rates and loaded highest and positively for autumn. K-means cluster analysis indicates that the seasonality of RRV is characterised by three groups with high, medium and low incidence of disease, and it suggests that there are at least three different disease ecologies. 
The variation in spatio-temporal patterns of RRV indicates a complex ecology that is unlikely to be explained by a single dominant transmission route across these three groupings. Therefore, there is need to explore socio-economic and environmental determinants of RRV disease at the statistical local area (SLA) level. Spatial distribution analysis and multiple negative binomial regression models were employed to identify the socio-economic and environmental determinants of RRV disease at both the city and local (ie, SLA) levels. The results show that RRV activity was primarily concentrated in the northeast, northwest and southeast areas in Brisbane. The negative binomial regression models reveal that RRV incidence for the whole of the Brisbane area was significantly associated with Southern Oscillation Index (SOI) at a lag of 3 months (Relative Risk (RR): 1.12; 95% confidence interval (CI): 1.06 - 1.17), the proportion of people with lower levels of education (RR: 1.02; 95% CI: 1.01 - 1.03), the proportion of labour workers (RR: 0.97; 95% CI: 0.95 - 1.00) and vegetation density (RR: 1.02; 95% CI: 1.00 - 1.04). However, RRV incidence for high risk areas (ie, SLAs with higher incidence of RRV) was significantly associated with mosquito density (RR: 1.01; 95% CI: 1.00 - 1.01), SOI at a lag of 3 months (RR: 1.48; 95% CI: 1.23 - 1.78), human population density (RR: 3.77; 95% CI: 1.35 - 10.51), the proportion of indigenous population (RR: 0.56; 95% CI: 0.37 - 0.87) and the proportion of overseas visitors (RR: 0.57; 95% CI: 0.35 - 0.92). It is acknowledged that some of these risk factors, while statistically significant, are small in magnitude. However, given the high incidence of RRV, they may still be important in practice. The results of this study suggest that the spatial pattern of RRV disease in Brisbane is determined by a combination of ecological, socio-economic and environmental factors. 
The possibility of developing an epidemic forecasting system for RRV disease was explored using the multivariate Seasonal Auto-regressive Integrated Moving Average (SARIMA) technique. The results of this study suggest that climatic variability, particularly precipitation, may have played a significant role in the transmission of RRV disease in Brisbane. This finding cannot entirely be explained by confounding factors such as other socio-ecological conditions because they have been unlikely to change dramatically on a monthly time scale in this city over the past two decades. SARIMA models show that monthly precipitation at a lag of 2 months (β = 0.004, p = 0.031) was statistically significantly associated with RRV disease. It suggests that there may be 50 more cases a year for an increase of 100 mm precipitation on average in Brisbane. The predictive values in the model were generally consistent with actual values (root-mean-square error (RMSE): 1.96). Therefore, this model may have applications as a decision support tool in disease control and risk-management planning programs in Brisbane. Polynomial distributed lag (PDL) time series regression models were fitted to examine the associations between rainfall, mosquito density and the occurrence of RRV after adjusting for season and auto-correlation. The PDL model was used because rainfall and mosquito density can affect not merely RRV occurring in the same month, but in several subsequent months. The rationale for the use of the PDL technique is that it increases the precision of the estimates. We developed an epidemic forecasting model to predict incidence of RRV disease. The results show that 95% and 85% of the variation in the RRV disease was accounted for by the mosquito density and rainfall, respectively. The predictive values in the model were generally consistent with actual values (RMSE: 1.25). The model diagnosis reveals that the residuals were randomly distributed with no significant auto-correlation.
The results of this study suggest that PDL models may be better than SARIMA models (R-square increased and RMSE decreased). The findings of this study may facilitate the development of early warning systems for the control and prevention of this widespread disease. Further analyses were conducted using classification trees to identify major mosquito species of Ross River virus (RRV) transmission and explore the threshold of mosquito density for RRV disease in Brisbane, Australia. The results show that Ochlerotatus vigilax (RR: 1.028; 95% CI: 1.001 - 1.057) and Culex annulirostris (RR: 1.013, 95% CI: 1.003 - 1.023) were significantly associated with RRV disease cycles at a lag of 1 month. The presence of RRV was associated with average monthly mosquito density of 72 Ochlerotatus vigilax and 52 Culex annulirostris per light trap. These results may also have applications as a decision support tool in disease control and risk management planning programs. As RRV has significant impact on population health, industry, and tourism, it is important to develop an epidemic forecast system for this disease. The results of this study show the disease surveillance data can be integrated with social, biological and environmental databases. These data can provide additional input into the development of epidemic forecasting models. These attempts may have significant implications in environmental health decision-making and practices, and may help health authorities determine public health priorities more wisely and use resources more effectively and efficiently.
APA, Harvard, Vancouver, ISO, and other styles
46

Savan, Emanuel-Emil. "Consumer liking and sensory attribute prediction for new product development support : applications and enhancements of belief rule-based methodology." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/consumer-liking-and-sensory-attribute-prediction-for-new-product-development-support-applications-and-enhancements-of-belief-rulebased-methodology(0582be52-a5ce-47da-836d-e30b5506fb41).html.

Full text
Abstract:
Methodologies designed to support new product development are receiving increasing interest in recent literature. A significant percentage of new product failure is attributed to a mismatch between designed product features and consumer liking. A variety of methodologies have been proposed and tested for consumer liking or preference prediction, ranging from statistical methodologies, e.g. multiple linear regression (MLR), to non-statistical approaches, e.g. artificial neural networks (ANN), support vector machines (SVM), and belief rule-based (BRB) systems. BRB has been previously tested for consumer preference prediction and target setting in case studies from the beverages industry. Results have indicated a number of technical and conceptual advantages which BRB holds over the aforementioned alternative approaches. This thesis focuses on presenting further advantages and applications of the BRB methodology for consumer liking prediction. The features and advantages are selected in response to challenges raised by the three case studies addressed. The first case study addresses a novel industry for BRB application: the fast moving consumer goods industry, the personal care sector. A series of challenges are tackled. Firstly, stepwise linear regression, principal component analysis and AutoEncoder are tested for predictor selection and data reduction. Secondly, an investigation is carried out to analyse the impact of employing complete distributions, instead of averages, for sensory attributes. Moreover, the effect of modelling instrumental measurement error is assessed. The second case study addresses a different product from the personal care sector. A bi-objective prescriptive approach for BRB model structure selection and validation is proposed and tested. Genetic Algorithms and Simulated Annealing are benchmarked against complete enumeration for searching the model structures.
A novel criterion based on an adjusted Akaike Information Criterion is designed for identifying the optimal model structure from the Pareto frontier based on two objectives: model complexity and model fit. The third case study introduces yet another novel industry for BRB application: the pastry and confectionary specialties industry. A new prescriptive framework, for rule validation and random training set allocation, is designed and tested. In all case studies, the BRB methodology is compared with the most popular alternative approaches: MLR, ANN, and SVM. The results indicate that BRB outperforms these methodologies both conceptually and in terms of prediction accuracy.
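The complete-enumeration benchmark with an AIC-style criterion described in this abstract can be illustrated on an ordinary linear model: enumerate every predictor subset, score each with AIC, and keep the best. This is a generic sketch with synthetic data, not the thesis's adjusted-AIC/BRB formulation; the predictors and coefficients are invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n = 300
X = rng.normal(0, 1.0, (n, 5))            # five candidate predictors
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 1.0, n)

def aic_linear(cols):
    """AIC of an OLS fit on the given predictor columns (Gaussian errors)."""
    Xd = np.column_stack([np.ones(n), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = float(np.sum((y - Xd @ beta) ** 2))
    k = Xd.shape[1] + 1                   # coefficients plus error variance
    return n * np.log(rss / n) + 2 * k

# Complete enumeration of all non-empty predictor subsets
subsets = [c for r in range(1, 6) for c in combinations(range(5), r)]
best = min(subsets, key=aic_linear)
print(best)  # the informative predictors 0 and 2 should appear in the chosen set
```

For larger model spaces, exactly as in the thesis, heuristics such as Genetic Algorithms or Simulated Annealing replace the exhaustive `subsets` loop.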
APA, Harvard, Vancouver, ISO, and other styles
47

Aurich, Allan. "Modelle zur Beschreibung der Verkehrssicherheit innerörtlicher Hauptverkehrsstraßennetze unter besonderer Berücksichtigung der Umfeldnutzung." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-125311.

Full text
Abstract:
In der Arbeit wird eine Methodik einer zusammenhängenden Analyse und modellhaften Beschreibung der Verkehrssicherheit in städtischen Hauptstraßennetzen am Beispiel der Stadt Dresden entwickelt. Die dabei gewonnenen Modelle dienen der Abschätzung von Erwartungswerten von Unfallhäufigkeiten mit und ohne Personenschaden unter Berücksichtigung der Verkehrsbeteiligungsart. Die Grundlage bilden multivariate Regressionsmodelle auf Basis verallgemeinerter linearer Modelle (GLM). Die Verwendung verallgemeinerter Regressionsmodelle erlaubt eine Berücksichtigung von Verteilungen, die besser geeignet sind, den Unfallentstehungsprozess wiederzugeben, als die häufig verwendete Normalverteilung. Im konkreten Fall werden hierzu die Poisson-Verteilung sowie die negative Binomialverteilung verwendet. Um Effekte im Hauptverkehrsstraßennetz möglichst trennscharf abbilden zu können, werden vier grundsätzliche Netzelemente differenziert und das Netz entsprechend zerlegt. Unterschieden werden neben Streckenabschnitten und Hauptverkehrsknotenpunkten auch Annäherungsbereiche und Anschlussknotenpunkte. Die Kollektive der Knotenpunkte werden ferner in signalisierte und nicht-signalisierte unterteilt. Es werden zunächst Modelle unterschiedlicher Unfallkollektive getrennt für alle Kollektive der vier Netzelemente berechnet. Anschließend werden verschiedene Vorgehensweisen für eine Zusammenfassung zu Netzmodellen entwickelt. Neben der Verwendung verkehrstechnischer und infrastruktureller Größen als erklärende Variable werden in der Arbeit auch Kenngrößen zur Beschreibung der Umfeldnutzung ermittelt und im Rahmen der Regression einbezogen. Die Quantifizierung der Umfeldnutzung erfolgt mit Hilfe von Korrelations-, Kontingenz- und von Hauptkomponentenanalysen (PCA). Im Ergebnis werden Modelle präsentiert, die eine multivariate Quantifizierung erwarteter Unfallhäufigkeiten in Hauptverkehrsstraßennetzen erlauben. 
Die vorgestellte Methodik bildet eine mögliche Grundlage für eine differenzierte Sicherheitsbewertung verkehrsplanerischer Variantenabschätzungen
A methodology is developed in order to predict the number of accidents within an urban main road network. The analysis was carried out by surveying the road network of Dresden. The resulting models allow the calculation of individual expectancy values for accidents with and without injury involving different traffic modes. The statistical modelling process is based on generalized linear models (GLM). These were chosen due to their ability to take into account certain non-normal distributions. In the specific case of accident counts, both the Poisson distribution and the negative binomial distribution are more suitable for reproducing the origination process than the normal distribution. Thus they were chosen as underlying distributions for the subsequent regressions. In order to differentiate overlaying influences, the main road network is separated into four basic elements: major intersections, road sections, minor intersections and approaches. Furthermore the major and minor intersections are additionally subdivided into signalised and non-signalised intersections. Separate models are calculated for different accident collectives for the various types of elements. Afterwards several methodologies for calculating aggregated network models are developed and analysed. Apart from traffic-related and infrastructural attributes, environmental parameters are derived taking into account the adjacent building structure as well as the surrounding land-use, and incorporated as explanatory variables within the regression. The environmental variables are derived from statistical analyses including correlation matrices, contingency tables and principal components analyses (PCA). As a result, a set of models is introduced which allows a multivariate calculation of expected accident counts for urban main road networks. The methodology developed can serve as a basis for a differentiated safety assessment of varying scenarios within a traffic planning process
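The count-regression setting described above — a generalized linear model of accident counts with a log link — can be sketched with a hand-rolled Poisson regression fitted by iteratively reweighted least squares (IRLS). The data and the single exposure covariate are synthetic; the thesis additionally uses the negative binomial distribution for overdispersed counts.

```python
import numpy as np

def poisson_irls(X, y, iters=50):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # working response
        W = mu                           # IRLS weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(1)
n = 500
exposure = rng.uniform(0.5, 2.0, n)      # e.g. a traffic-volume proxy
X = np.column_stack([np.ones(n), exposure])
true_beta = np.array([0.3, 0.8])
y = rng.poisson(np.exp(X @ true_beta))   # simulated accident counts

beta_hat = poisson_irls(X, y.astype(float))
print(beta_hat)  # close to the true coefficients [0.3, 0.8]
```

Swapping the Poisson variance for a negative binomial one changes only the weights and working response, which is why GLM software handles both distributions uniformly.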
APA, Harvard, Vancouver, ISO, and other styles
48

Mahmood, Muhammad Tariq. "Face Detection by Image Discriminating." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4352.

Full text
Abstract:
Human face recognition systems have gained considerable attention during the last few years, with many applications in security, sensitivity and secrecy. Face detection is the first and most important step of a recognition system. The human face is non-rigid and varies greatly with image conditions, size, resolution, pose and rotation, so its accurate and robust detection has been a challenge for researchers. A number of methods and techniques have been proposed, but owing to this huge number of variations no single technique is very successful for all kinds of faces and images: some methods exhibit good results under certain conditions, while others work well with different kinds of images. Image discriminating techniques are widely used for pattern and image analysis; common discriminating methods are discussed.
APA, Harvard, Vancouver, ISO, and other styles
49

Sher, Rabnawaz Jan. "Classification of a Sensor Signal Attained By Exposure to a Complex Gas Mixture." Thesis, Linköpings universitet, Statistik och maskininlärning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-172769.

Full text
Abstract:
This thesis was carried out in collaboration with a private company, DANSiC AB, and extends research work started by DANSiC AB in 2019 to classify a source. The task is to classify a source into two classes, with higher sensitivity required for one class because that source has greater importance. The data provided for this thesis is based on sensor measurements over different temperature cycles; it is high-dimensional and is expected to exhibit a drift in measurements. Principal component analysis (PCA) is used for dimensionality reduction, and "Differential", "Relative" and "Fractional" drift compensation techniques are used to compensate for the drift in the data. A comparative study was performed using three different classification algorithms: Linear Discriminant Analysis (LDA), the Naive Bayes classifier (NB) and Random Forest (RF). The highest accuracy achieved is 59%; Random Forest is observed to perform better than the other classifiers.
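The pipeline in this abstract — drift compensation, PCA for dimensionality reduction, then classification — can be sketched on synthetic sensor cycles. The "differential" compensation below subtracts a simulated reference-gas reading, and a nearest-centroid rule stands in for the LDA/NB/RF classifiers actually compared in the thesis; all data, dimensions and the drift model are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 200, 30

# Slow additive sensor drift, shared by the sample and the reference reading
drift = np.outer(np.linspace(0.0, 3.0, n), rng.normal(1.0, 0.1, p))
reference = drift + rng.normal(0, 0.05, (n, p))    # simulated baseline reading
signal = rng.normal(0, 1.0, (n, p))
signal[100:, :10] += 2.0                           # class-1 gas response
X = signal + drift
y = np.repeat([0, 1], 100)

# "Differential" drift compensation: subtract the reference measurement
X_comp = X - reference

# PCA via SVD, keeping two components
Xc = X_comp - X_comp.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Nearest-centroid rule in the PCA space (stand-in for LDA/NB/RF)
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
print((pred == y).mean())
```

Without the subtraction step the drift dominates the leading principal components, which is exactly the problem the compensation techniques in the thesis address.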

This work was done with DANSiC AB in collaboration with Linköping University.

APA, Harvard, Vancouver, ISO, and other styles
50

Nakamura, Luiz Ricardo. "Métodos multivariados para agrupamento de bovinos de raça Hereford em função dos parâmetros de curvas de crescimento." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-07022012-112022/.

Full text
Abstract:
Após o ajuste individual das 55 vacas estudadas pelo modelo Gompertz difásico com estrutura de erros autorregressiva de ordem 1 (totalizando 7 parâmetros), notou-se que apenas 6 vacas tinham problemas nas estimativas de seus parâmetros (não convergentes ou não significativos), dessa forma continuou-se o trabalho proposto com 49 animais. Com as estimativas de cada um dos parâmetros (variáveis nessa etapa) foi realizada a análise de componentes principais e observação do gráfico biplot, sendo possível a constatação de que 2 dos parâmetros do modelo continham informações ambíguas com pelo menos um dos demais parâmetros e estes foram retirados da análise, restando 5 parâmetros para o estudo. A análise de componentes principais foi realizada novamente apenas com os 5 parâmetros restantes e os três primeiros componentes principais (escolhidos pelo critério da percentagem de variância original explicada) foram utilizados como variáveis em um processo de agrupamento hierárquico. Após a realização da análise de agrupamentos, observou-se que 5 grupos homogêneos de animais foram formados, cada um com características distintas. Desta forma, foi possível identificar animais que se destacavam, positiva ou negativamente, no que tange ao seu peso assintótico e taxa de crescimento.
After individual adjustment of the 55 cows studied using the diphasic Gompertz model with an autoregressive error structure of order 1 (totalizing 7 parameters), it was noted that only 6 cows had problems in the estimates of their parameters (not converged or not significant), so the proposed work continued with 49 animals. With the estimates of each of the parameters (the variables at this stage) a principal component analysis was performed and the biplot observed, and it was possible to find that two of the model parameters contained information ambiguous with at least one of the other parameters; these 2 parameters were then removed from the analysis, leaving 5 parameters for the study. The principal component analysis was performed again with only the five remaining parameters and the first three principal components (chosen by the criterion of percentage of original explained variance) were used as variables in a hierarchical clustering process. After performing the cluster analysis, we found that five homogeneous groups of animals were formed, each with distinct characteristics. Thus, it was possible to identify animals that stood out, positively or negatively, in terms of their asymptotic weight and growth rate.
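The workflow in this abstract — fit a Gompertz growth curve per animal, run PCA on the parameter estimates, and cluster the principal-component scores hierarchically — can be sketched with SciPy on synthetic weights. For brevity a single-phase Gompertz replaces the diphasic model with AR(1) errors used in the thesis, and all growth parameters, time points and group sizes are invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.cluster.hierarchy import linkage, fcluster

def gompertz(t, A, b, k):
    """Single-phase Gompertz growth curve: asymptote A, displacement b, rate k."""
    return A * np.exp(-b * np.exp(-k * t))

rng = np.random.default_rng(3)
t = np.linspace(1, 120, 40)  # months

# Synthetic weights for 20 animals drawn from two growth profiles
params = []
for A_true, k_true in [(500.0, 0.05)] * 10 + [(650.0, 0.03)] * 10:
    w = gompertz(t, A_true, 3.0, k_true) + rng.normal(0, 5.0, t.size)
    popt, _ = curve_fit(gompertz, t, w, p0=[600.0, 2.0, 0.04], maxfev=10000)
    params.append(popt)
params = np.array(params)

# PCA on the standardized parameter estimates (A, b, k)
Z = (params - params.mean(axis=0)) / params.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T                    # first two principal-component scores

# Ward hierarchical clustering on the PC scores
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(labels)
```

With two clearly distinct growth profiles, the clustering recovers the two groups from the fitted parameters alone, mirroring the grouping of animals by asymptotic weight and growth rate described above.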
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography