
Dissertations / Theses on the topic 'Factor analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Factor analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Cheng, Wei. "Factor Analysis for Stock Performance." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-050405-180040/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Conti, Gabriella, Sylvia Frühwirth-Schnatter, James J. Heckman, and Rémi Piatek. "Bayesian exploratory factor analysis." Elsevier, 2014. http://dx.doi.org/10.1016/j.jeconom.2014.06.008.

Full text
Abstract:
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. (authors' abstract)
3

關志威 and Chi-wai Kwan. "Influential observations in factor analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B29803895.

Full text
4

Kwan, Chi-wai. "Influential observations in factor analysis /." Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B19003110.

Full text
5

Cool, Deborah E. "Characterization of the human factor XII (Hageman factor) cDNA and the gene." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26980.

Full text
Abstract:
A human liver cDNA library was screened by colony hybridization with two mixtures of synthetic oligodeoxyribonucleotides as probes. These oligonucleotides encoded regions of β-factor XIIa as predicted from the amino acid sequence. Four positive clones were isolated that contained DNA coding for most of factor XII mRNA. A second human liver cDNA library was screened by colony hybridization with ³²P-labeled cDNA clones obtained from the first screen and two identical clones were isolated. DNA sequence analysis of these overlapping clones showed that they contained DNA coding for the signal peptide sequence, the complete amino acid sequence of plasma factor XII, a TGA stop codon, a 3' untranslated region of 150 nucleotides, and a poly A⁺ tail. The cDNA sequence predicts that plasma factor XII consists of 596 amino acid residues. Within the predicted amino acid sequence of factor XII, three peptide bonds were identified that are cleaved by kallikrein during the formation of β-factor XIIa. Comparison of the structure of factor XII with other proteins revealed extensive sequence identity with regions of tissue-type plasminogen activator (the epidermal growth factor-like region and the kringle region) and fibronectin (type I and type II homologies). As the type II region of fibronectin contains a collagen-binding site, the homologous region in factor XII may be responsible for the binding of factor XII to collagen. The carboxyl-terminal region of factor XII shares considerable amino acid sequence homology with other serine proteases including trypsin and many clotting factors. A human genomic phage library was screened by using a human factor XII cDNA as a hybridization probe. Two overlapping phage clones were isolated which contain the entire human factor XII gene. DNA sequence and restriction enzyme analysis of the clones indicate that the gene is approximately 12 kbp in size and is comprised of 13 introns and 14 exons. 
Exons 3 through 14 are contained in a genomic region of only 4.2 kbp with introns ranging in size from 80 to 554 bp. The multiple regions found in the coding sequence of FXII that are homologous to putative domains in fibronectin and tissue-type plasminogen activator are contained on separate exons in the factor XII gene. The intron/exon gene organization is similar to the serine protease gene family of plasminogen activators and not to the clotting factor family. Analysis of the 5' flanking region of the gene shows that it does not contain the typical TATA and CAAT sequences found in other genes. This is consistent with the finding that transcription of the gene is initiated at multiple start sites.
Medicine, Faculty of
Biochemistry and Molecular Biology, Department of
Graduate
6

Khosla, Nitin. "Dimensionality Reduction Using Factor Analysis." Griffith University. School of Engineering, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20061010.151217.

Full text
Abstract:
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to solve the problems of high dimensions is to first reduce the dimensionality of the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, the dimensionality reduction process becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is a simpler approach to dimensionality reduction. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Some of the techniques which can be used for dimensionality reduction are Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered an extension of Principal Component Analysis. The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation, conditioned upon the observations. The maximization step then provides a new estimate of the parameters. This research work compares Factor Analysis (based on the Expectation-Maximization algorithm), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM-based) and Local Principal Component Analysis using Vector Quantization.
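The EM algorithm for the factor analysis model sketched in the abstract (x = Λz + ε, with diagonal noise covariance Ψ) fits in a few lines of NumPy. This is an illustrative sketch of the standard EM updates, not the thesis's code; the function name and defaults are my own:

```python
import numpy as np

def fa_em(X, k, n_iter=500, seed=0):
    """Minimal EM for the factor analysis model x = Lambda z + eps,
    eps ~ N(0, diag(psi)). Returns ML loadings and unique variances.
    Illustrative sketch only, not an optimized implementation."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)
    n, d = X.shape
    S = X.T @ X / n                        # sample covariance
    L = rng.normal(size=(d, k)) * 0.1      # initial loadings
    psi = np.diag(S).copy()                # initial unique variances
    for _ in range(n_iter):
        # E-step: posterior moments of the latent factors z given x
        Pinv = 1.0 / psi
        G = np.linalg.inv(np.eye(k) + (L.T * Pinv) @ L)   # Cov(z | x)
        B = G @ (L.T * Pinv)                              # E[z | x] = B x
        Ezx = S @ B.T                                     # (1/n) sum_i x_i E[z_i]^T
        Ezz = G + B @ S @ B.T                             # (1/n) sum_i E[z_i z_i^T]
        # M-step: maximize expected complete-data log-likelihood
        L = Ezx @ np.linalg.inv(Ezz)
        psi = np.maximum(np.diag(S) - np.sum(Ezx * L, axis=1), 1e-6)
    return L, psi
```

After convergence, the model covariance ΛΛᵀ + Ψ approximates the sample covariance up to the low-rank-plus-diagonal structure the model assumes.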
7

Khosla, Nitin. "Dimensionality Reduction Using Factor Analysis." Thesis, Griffith University, 2006. http://hdl.handle.net/10072/366058.

Full text
Abstract:
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to solve the problems of high dimensions is to first reduce the dimensionality of the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, the dimensionality reduction process becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is a simpler approach to dimensionality reduction. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Some of the techniques which can be used for dimensionality reduction are Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered an extension of Principal Component Analysis. The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation, conditioned upon the observations. The maximization step then provides a new estimate of the parameters. This research work compares Factor Analysis (based on the Expectation-Maximization algorithm), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM-based) and Local Principal Component Analysis using Vector Quantization.
Thesis (Masters)
Master of Philosophy (MPhil)
School of Engineering
Full Text
8

Wang, Jing. "Analogy Between Two Approaches to Separately Identify Specific Factors in Factor Analysis." Bowling Green State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1182784851.

Full text
9

Wu, Amery Dai Ling. "Pratt's importance measures in factor analysis : a new technique for interpreting oblique factor models." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2333.

Full text
Abstract:
This dissertation introduces a new method, Pratt's measure matrix, for interpreting multidimensional oblique factor models in both exploratory and confirmatory contexts. Overall, my thesis, supported by empirical evidence, refutes the currently recommended and practiced methods for understanding an oblique factor model; that is, interpreting the pattern matrix or structure matrix alone or juxtaposing both without integrating the information. Chapter Two reviews the complexities of interpreting a multidimensional factor solution due to factor correlation (i.e., obliquity). Three major complexities highlighted are (1) the inconsistency between the pattern and structure coefficients, (2) the distortion of additive properties, and (3) the inappropriateness of the traditional cut-off rules as being "meaningful". Chapter Three provides the theoretical rationale for adapting Pratt's importance measures from their use in multiple regression to that of factor analysis. The new method is demonstrated and tested with both continuous and categorical data in exploratory factor analysis. The results show that Pratt's measures are applicable to factor analysis and are able to resolve three interpretational complexities arising from factor obliquity. In the context of confirmatory factor analysis, Chapter Four warns researchers that a structure coefficient could be entirely spurious due to factor obliquity as well as zero constraint on its corresponding pattern coefficient. Interpreting such structure coefficients as Graham et al. (2003) suggested can be problematic. The mathematically more justified method is to transform the pattern and structure coefficients into Pratt's measures. The last chapter describes eight novel contributions in this dissertation. The new method is the first attempt ever at ordering the importance of latent variables for multivariate data. 
It is also the first attempt at demonstrating and explicating the existence, mechanism, and implications of the suppression effect in factor analyses. Specifically, the new method resolves the three interpretational problems due to factor obliquity, assists in identifying a better-fitting exploratory factor model, proves that a structure coefficient in a confirmatory factor analysis with a zero pattern constraint is entirely spurious, avoids the debate over the choice between oblique and orthogonal factor rotation, and, last but not least, provides a tool for consolidating the role of factors as the underlying causes.
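In the multiple-regression setting from which the dissertation adapts them, Pratt's importance measures have a simple closed form: the importance of predictor j is d_j = β_j·r_j / R², where β_j is the standardized coefficient and r_j the zero-order correlation with the outcome, and the d_j are additive and sum to one. A minimal sketch of that regression version (the function name is mine, and this is not the dissertation's code):

```python
import numpy as np

def pratt_measures(X, y):
    """Pratt's importance measures d_j = beta_j * r_j / R^2 for a
    standardized multiple regression; the d_j sum to one.
    Illustrative sketch of the regression version only."""
    n = len(y)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize predictors
    ys = (y - y.mean()) / y.std()                 # standardize outcome
    r = Xs.T @ ys / n                             # zero-order correlations
    beta = np.linalg.solve(Xs.T @ Xs / n, r)      # standardized betas
    R2 = beta @ r                                 # squared multiple correlation
    return beta * r / R2
```

The additivity (Σ d_j = 1) is what lets the measures partition "importance" across correlated predictors, which is the property the thesis carries over to oblique factors.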
10

Zhang, Guangjian. "Bootstrap procedures for dynamic factor analysis." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1153782819.

Full text
11

Lo, Siu-ming, and 盧小皿. "Factor analysis for ranking data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B30162464.

Full text
12

Hwang, Peggy May T. "Factor analysis of time series /." The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487944660933305.

Full text
13

Jiang, Huangqi. "FACTOR ANALYSIS OF COGNITIVE CONTROL." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1562597562093455.

Full text
14

Lo, Siu-ming. "Factor analysis for ranking data /." Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20792967.

Full text
15

Kay, Cheryl Ann. "A comparison of traditional and IRT factor analysis." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc4695/.

Full text
Abstract:
This study investigated the item parameter recovery of two methods of factor analysis. The methods researched were a traditional factor analysis of tetrachoric correlation coefficients and an IRT approach to factor analysis which utilizes marginal maximum likelihood estimation using an EM algorithm (MMLE-EM). Dichotomous item response data was generated under the 2-parameter normal ogive model (2PNOM) using PARDSIM software. Examinee abilities were sampled from both the standard normal and uniform distributions. True item discrimination, a, was normal with a mean of .75 and a standard deviation of .10. True b, item difficulty, was specified as uniform [-2, 2]. The two distributions of abilities were completely crossed with three test lengths (n= 30, 60, and 100) and three sample sizes (N = 50, 500, and 1000). Each of the 18 conditions was replicated 5 times, resulting in 90 datasets. PRELIS software was used to conduct a traditional factor analysis on the tetrachoric correlations. The IRT approach to factor analysis was conducted using BILOG 3 software. Parameter recovery was evaluated in terms of root mean square error, average signed bias, and Pearson correlations between estimated and true item parameters. ANOVAs were conducted to identify systematic differences in error indices. Based on many of the indices, it appears the IRT approach to factor analysis recovers item parameters better than the traditional approach studied. Future research should compare other methods of factor analysis to MMLE-EM under various non-normal distributions of abilities.
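The data-generating model in the study, the 2-parameter normal ogive model (2PNOM), is easy to simulate directly. This is an illustrative simulator loosely following the parameter distributions stated in the abstract, not the PARDSIM software the study used; the function name is mine:

```python
import numpy as np
from scipy.stats import norm

def simulate_2pnom(n_persons, n_items, seed=0):
    """Generate dichotomous item responses under the 2-parameter normal
    ogive model: P(correct) = Phi(a_j * (theta_i - b_j)), with
    a ~ N(0.75, 0.10) and b ~ Uniform[-2, 2] as in the study's design.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_persons)       # examinee abilities
    a = rng.normal(0.75, 0.10, n_items)          # item discriminations
    b = rng.uniform(-2.0, 2.0, n_items)          # item difficulties
    p = norm.cdf(a * (theta[:, None] - b))       # response probabilities
    return (rng.random((n_persons, n_items)) < p).astype(int)
```

Datasets like this (here 500 examinees by 30 items) are what both the tetrachoric-correlation factor analysis and the MMLE-EM approach would then be fit to.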
16

Tsou, Hsiao-Hui Sophie. "Factor analysis of cross-classified data." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2962.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Mathematical Statistics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
17

Li, Ya. "An empirical analysis of factor seasonalities." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/421.

Full text
Abstract:
I establish the existence of seasonality in 42 popular risk factors in the asset pricing literature. I document extensive empirical evidence for the Keloharju et al. (2016) hypothesis that seasonalities in individual asset returns stem from their exposures to risk factors. It is the seasonal patterns in risk factors that lead to the seasonalities in individual asset portfolios. The empirical findings show that seasonalities are widely present among individual asset portfolios. However, both the all-factor model and the Fama-French (2014) five-factor model demonstrate that these patterns largely disappear after I eliminate their exposures to the corresponding risk factors. Overall, 76.17% of the returns on 235 test equal-weighted portfolios I examine contain seasonality. My key finding is that 48.68% of equal-weighted portfolio returns with seasonalities no longer contain seasonality after I control for their exposures to all risk factors. Only 52.08% of the equal-weighted portfolio Fama-French five-factor model residuals exhibit substantial seasonal patterns in the Wald test. Regarding seasonalities in risk factors, specific seasonal patterns include the January effect, higher returns during February, March, and July, and autocorrelations at irregular lags. The Wald test, a stable seasonality test, the Kruskal-Wallis chi-square test, a combined seasonality test, Fisher's Kappa test, and Bartlett's Kolmogorov-Smirnov test are used to identify the seasonal patterns in individual risk factors. Fama-French SMB (the size factor) and HML (the value factor) in the three-factor model, Fama-French RMW (the operating profitability factor) in the five-factor model, earnings/price, cash flow/price, momentum, short-term reversal, long-term reversal, daily variance, daily residual variance, growth rate of industrial production (value-weighted), term premium (equal-weighted and value-weighted), and profitability display robust seasonalities. 
Therefore, the first part of the research confirms that risk factors possess substantial seasonal patterns.
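One of the seasonality tests named above, the Kruskal-Wallis test, checks whether returns differ across calendar months without assuming normality. A hedged sketch of such a test (the function name and data layout are mine, not the thesis's code):

```python
import numpy as np
from scipy.stats import kruskal

def monthly_seasonality_test(returns, months):
    """Kruskal-Wallis test for calendar-month effects in a return series.
    `months` holds the calendar-month label (1-12) of each observation.
    Illustrative sketch of one seasonality test used in the thesis."""
    groups = [returns[months == m] for m in range(1, 13)]
    stat, pvalue = kruskal(*groups)
    return stat, pvalue
```

A small p-value indicates that at least one month's return distribution differs, e.g. a January effect of the kind documented for several factors.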
18

Michael, Steven T. "Attributional style : a confirmatory factor analysis." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/770937.

Full text
Abstract:
The purpose of the current study was to investigate three aspects of the construct validity of attributional style assessment instruments. The first purpose was to determine the independence of stability and globality. The second was to determine if controllability was a dimension of attributional style. The third purpose was to determine if inventories that use real or hypothetical events measure attributional style equally well. One hundred fifty-nine female and one hundred fifty-five male subjects completed four questionnaires that assessed attributional style. Results provided some support for the general construct of attributional style. All four factors were found, which demonstrates support for the four-factor model. However, the two-factor model may be the best overall method. No method factor (real or hypothetical stimulus event) solution was obtained. Possible sex differences are discussed. The findings are discussed in terms of attribution theory. Suggestions for further research are presented.
Department of Psychological Science
19

Kim, Dongyoung M. Eng Massachusetts Institute of Technology. "Spectral factor model and risk analysis." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106115.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 69-70).
In this paper, we apply spectral analysis tools to portfolio management. Recognizing volatility and factor beta as major risk sources, we analyze the short-term and long-term components of risk for any given portfolio. We model the portfolio weights as an LTI system filter and describe how the risk metrics behave as one holds the portfolio over an increasing horizon. Then, we propose dynamic portfolios to shift frequency-specific risks without changing the investment period or net dollar exposure.
by Dongyoung Kim.
M. Eng.
20

Cruz, Cavalcanti Yanna. "Factor analysis of dynamic PET images." Thesis, Toulouse, INPT, 2018. http://www.theses.fr/2018INPT0078/document.

Full text
Abstract:
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool to quantify biological processes. Several quantification techniques from the PET imaging literature require a previous estimation of global time-activity curves (TACs) (herein called factors) representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first one is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models herein proposed introduce an additional degree of freedom to the factors related to specific binding. To this end, a spatially-variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. 
Therefore, we propose to consider a popular and quite general loss function, called the β-divergence, that is able to generalize conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
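The β-divergence mentioned above has a standard closed form whose limits recover the familiar losses: β = 2 gives half the squared Euclidean distance (Gaussian), β = 1 the generalized Kullback-Leibler divergence (Poisson), and β = 0 the Itakura-Saito divergence (Gamma). A small sketch of that formula (illustrative, not the thesis's implementation):

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Elementwise beta-divergence d_beta(x | y), summed over entries.
    beta=2 -> half squared Euclidean distance, beta=1 -> generalized
    Kullback-Leibler, beta=0 -> Itakura-Saito. Inputs must be positive
    for beta <= 1. Illustrative sketch only."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 0:                         # Itakura-Saito
        return np.sum(x / y - np.log(x / y) - 1.0)
    if beta == 1:                         # generalized Kullback-Leibler
        return np.sum(x * np.log(x / y) - x + y)
    return np.sum((x**beta + (beta - 1) * y**beta
                   - beta * x * y**(beta - 1)) / (beta * (beta - 1)))
```

Treating β as a tuning knob lets one factorization framework cover all three noise models, which is the point of using it as the fitting loss.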
21

Yeh, Thomas. "Analysis of power factor correction converters /." Online version of thesis, 1992. http://hdl.handle.net/1850/11220.

Full text
22

Rapley, Patrica. "Self-efficacy Theory: Relevance of General and Specific Efficacy Beliefs for Psychosocial Adaptation to Chronic Illness Over Time." Thesis, Curtin University, 2001. http://hdl.handle.net/20.500.11937/2542.

Full text
Abstract:
Over the last decade or more, chronic illness research has consistently found that the linear relationship between knowledge and behaviour, or between behaviour change and improved health outcomes, does not exist. Furthermore, the link between behaviour and health status is not as strong as the link between illness-specific efficacy belief and health status. Strategies to increase confidence in illness-specific behaviours have gradually assumed more importance in improving health outcomes. Strategies to improve behaviour-specific efficacy belief can assist individuals to change their behaviour by influencing behavioural choices, effort and persistence with task demands. Concomitantly, it has been suggested that there is a positive relationship between efficacy belief and psychosocial functioning. It is unclear whether this empirical evidence also applies to chronic illness conditions with a complex self-care regimen. The degree to which a more general level of confidence, or efficacy belief, can also contribute to psychosocial functioning is unknown. The focus of this study was to examine the relative impact of general and illness-specific efficacy expectations on psychosocial adaptation to illness over nine months. The study measured illness-specific efficacy beliefs when it was expected that they were still developing. The illness-specific beliefs were compared to the purportedly more stable general efficacy belief. This longitudinal study employed an exploratory predictive design to measure efficacy beliefs in the natural setting. Data were collected at entry to the study, and at three and nine months. Participants included adults from three chronic illness groups: arthritis (n = ), diabetes type 1 (n = 104) and type 2 (n = 122). The self-report questionnaires used to collect the data were three illness-specific efficacy belief measures, general self-efficacy and the Psychosocial Adjustment to Illness Scale. 
The dependent variable of interest was psychosocial adaptation to illness. Multiple regression analysis provided evidence of between-group differences in the positive contribution of general and illness-specific efficacy beliefs to psychosocial adaptation for chronic illness groups with different regimen attributes. The variables best able to predict psychosocial adaptation to illness over time, after being adjusted for perceived level of stress and general self-efficacy (belief in abilities in general), were illness-specific efficacy beliefs. A general efficacy belief contributed to the illness adaptation process initially, but its influence reduced as the influence of illness-specific beliefs increased. Repeated measures MANOVA confirmed the stability of general efficacy belief. The contribution of this study to current knowledge of self-efficacy theory is its application to self-management programs for chronic illness groups. The findings suggest that the more stable general efficacy belief has a role in psychosocial adaptation to chronic illness during the period when illness-specific efficacy beliefs, targeted by self-management programs, are still developing.
23

Siketina, Natalya Hennadievna. "Necessity application dispersion analysis of enterprises in modern conditions." Thesis, Харківський національний університет міського господарства ім. О. М. Бекетова, 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/39038.

Full text
Abstract:
Dispersion analysis is used to determine the relationship between indicators and to assess risk parameters quantitatively. Different methods and models of analysis can be used in the analysis of enterprises. Their number and the breadth of their application depend on the specific objectives of the analysis and are determined by its tasks in each individual case. When compulsory statistical information is collected, it is quite easy to conduct a dispersion analysis of domestic activities to ensure financial and economic security.
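Dispersion analysis in this sense is a one-way analysis of variance (ANOVA). As a hedged illustration of the technique, not the author's study, here is a one-way ANOVA comparing a financial indicator across three hypothetical groups of enterprises (all data invented for the example):

```python
import numpy as np
from scipy.stats import f_oneway

# Invented indicator values (e.g. profitability, %) for three
# hypothetical groups of enterprises; purely illustrative data.
rng = np.random.default_rng(0)
small = rng.normal(10.0, 2.0, 40)
medium = rng.normal(12.0, 2.0, 40)
large = rng.normal(15.0, 2.0, 40)

# One-way ANOVA: does the mean indicator differ across groups?
stat, p = f_oneway(small, medium, large)
```

A small p-value indicates that between-group variance in the indicator is large relative to within-group variance, i.e. group membership explains part of the dispersion.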
24

Karimi, Mahdad. "Functional analysis of the -308G/A polymorphism in the tumour necrosis factor promoter." University of Western Australia. School of Biomedical, Biomolecular and Chemical Sciences, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0140.

Full text
Abstract:
[Truncated abstract] Tumor Necrosis Factor (TNF) is a potent pro-inflammatory cytokine involved in a range of biological functions including the differentiation, proliferation and survival of many cell types. The TNF gene lies in the class III region of the major histocompatibility complex (MHC), approximately 250 Kbp centromeric of the HLA-B locus and 850 Kbp telomeric of HLA-DR. Due to the genomic location and biological relevance of TNF, it is thought that genetic heterogeneity at this locus may be associated with autoimmune and infectious diseases. A G-to-A single nucleotide polymorphism (SNP) at position -308 (relative to the transcriptional start site) in the TNF promoter has been well described. The less common -308A variant has been shown to be linked with the HLA-A1, B8, DR3 haplotype which in turn has been associated to a high TNF producing phenotype. Determining whether the -308 polymorphism contributes to elevated levels of expression has therefore been a priority for many research groups. Some investigators have shown differences in transcription between the -308G and -308A alleles while others could not. These contradicting results have led to conflicting views regarding the functional relevance of the -308 SNP. In this study, statistical analysis of 18 independent transient transfections of -308 biallelic TNF reporter constructs have provided evidence for a functional consequence of the polymorphism. ... In addition, chromatin accessibility of this region was maximal at greater levels of transcription suggesting a role for both chromatin structure and YY1 binding in -308G regulation. Surprisingly, chromatin structure did not seem to play a role in -308A regulation nor was there any significant binding of YY1, suggesting the -308 region does not affect transcriptional control of TNF. Taken as a whole, the G-to-A SNP relieves YY1 binding and demonstrates an allele-specific regulatory mechanism controlling expression. 
A growing list of promoter polymorphisms in the human genome has associations with certain diseases. Determining the functional consequence of these SNPs has proven difficult and has mainly relied on in vitro approaches. In this thesis, a unique approach to investigating the functionality of promoter polymorphisms has been developed, utilizing in vivo techniques which test their effects in a more natural system. It is hoped that the identification of the allele-specific YY1-mediated control of the -308 region of the TNF promoter may provide insight into overexpression as a consequence of the polymorphism and its role in the genetic susceptibility to MHC-associated autoimmune disease.
APA, Harvard, Vancouver, ISO, and other styles
25

Brewer, Carl G. "A comparative study of iterative and noniterative factor analytic techniques in small to moderate sample sizes /." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65540.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Fagan, Marcus A. "Factor Retention Strategies with Ordinal Variables in Exploratory Factor Analysis: A Simulation." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1707377/.

Full text
Abstract:
Previous research has individually assessed parallel analysis and minimum average partial for factor retention in exploratory factor analysis using ordinal variables. The current study is a comprehensive simulation study including the manipulation of eight conditions (type of correlation matrix, sample size, number of variables per factor, number of factors, factor correlation, skewness, factor loadings, and number of response categories) and three types of retention methods (minimum average partial, parallel analysis, and empirical Kaiser criterion), resulting in a 2 × 2 × 2 × 2 × 2 × 3 × 3 × 4 × 5 design that totals 5,760 condition combinations tested over 1,000 replications each. Results show that each retention method performed worse when utilizing polychoric correlation matrices. Moreover, minimum average partial is quite sensitive to factor loadings and overall performs poorly compared to parallel analysis and the empirical Kaiser criterion. The empirical Kaiser criterion performed almost identically to parallel analysis in normally distributed data, but much worse under highly skewed conditions. Based on these findings, it is recommended to use parallel analysis utilizing principal components analysis with a Pearson correlation matrix to determine the number of factors to retain when dealing with ordinal data.
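As an illustration of the retention strategy this abstract recommends, Horn's parallel analysis on a Pearson correlation matrix can be sketched as follows (a minimal sketch, not the study's simulation code; function and variable names are ours):

```python
import numpy as np

def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
    """Horn's parallel analysis via principal components on a Pearson
    correlation matrix: retain components whose observed eigenvalues
    exceed the chosen percentile of eigenvalues from random normal
    data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues, sorted descending
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues of n_sims random data sets of the same dimensions
    rand_eig = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold))

# Example: two uncorrelated factors, three indicators each -> expect 2
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
loadings = np.zeros((2, 6))
loadings[0, :3] = 0.8
loadings[1, 3:] = 0.8
x = f @ loadings + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(x))  # -> 2
```

With 500 observations and loadings of 0.8, the two block eigenvalues clearly exceed the random-data thresholds, so two factors are retained.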
APA, Harvard, Vancouver, ISO, and other styles
27

Fluke, Ricky. "A Comparison of Three Correlational Procedures for Factor-Analyzing Dichotomously-Scored Item Response Data." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc332583/.

Full text
Abstract:
In this study, an improved correlational procedure for factor-analyzing dichotomously-scored item response data is described and tested. The procedure involves (a) replacing the dichotomous input values with continuous probability values obtained through Rasch analysis; (b) calculating interitem product-moment correlations among the probabilities; and (c) subjecting the correlations to unweighted least-squares factor analysis. Two simulated data sets and an empirical data set (Kentucky Comprehensive Listening Test responses) were used to compare the new procedure with two more traditional techniques, using (a) phi and (b) tetrachoric correlations calculated directly from the dichotomous item-response values. The three methods were compared on three criterion measures: (a) maximum internal correlation; (b) product of the two largest factor loadings; and (c) proportion of variance accounted for. The Rasch-based procedure is recommended for subjecting dichotomous item response data to latent-variable analysis.
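The motivation for replacing dichotomous scores with continuous probabilities can be seen in how the phi coefficient (a Pearson correlation of 0/1 scores) attenuates an underlying continuous correlation; the following is an illustrative sketch with simulated data, not the study's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 20000, 0.7

# Latent bivariate-normal "ability" scores with correlation rho
cov = [[1, rho], [rho, 1]]
latent = rng.multivariate_normal([0, 0], cov, size=n)

# Dichotomize at the median to mimic 0/1 item scoring
items = (latent > 0).astype(int)

pearson_latent = np.corrcoef(latent[:, 0], latent[:, 1])[0, 1]
phi = np.corrcoef(items[:, 0], items[:, 1])[0, 1]

# phi is systematically smaller than the latent correlation
# (about (2/pi)*arcsin(rho) ~= 0.49 for rho = 0.7)
print(round(pearson_latent, 2), round(phi, 2))
```

This attenuation is what distorts factor solutions computed from phi matrices, and why correlations among estimated response probabilities can behave better.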
APA, Harvard, Vancouver, ISO, and other styles
28

Upadrasta, Bharat. "Boolean factor analysis: a review of a novel method of matrix decomposition and neural network Boolean factor analysis." Diss., Online access via UMI:, 2009.

Find full text
Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
29

Yang, Shanshan. "Improving Seasonal Factor Estimates for Adjustment of Annual Average Daily Traffic." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/709.

Full text
Abstract:
Traffic volume data are input to many transportation analyses including planning, roadway design, pavement design, air quality, roadway maintenance, funding allocation, etc. Annual Average Daily Traffic (AADT) is one of the most often used measures of traffic volume. Acquiring the actual AADT data requires the collection of traffic counts continuously throughout a year, which is expensive and thus can only be conducted at a very limited number of locations. Typically, AADTs are estimated by applying seasonal factors (SFs) to short-term counts collected at portable traffic monitoring sites (PTMSs). Statewide in Florida, the Florida Department of Transportation (FDOT) operates about 300 permanent traffic monitoring sites (TTMSs) to collect traffic counts at these sites continuously. TTMSs are first manually classified into different groups (known as seasonal factor categories) based on both engineering judgment and similarities in the traffic and roadway characteristics. A seasonal factor category is then assigned to each PTMS according to the site's functional classification and geographical location. The SFs of the assigned category are then used to adjust traffic counts collected at PTMSs to estimate the final AADTs. This dissertation research aims to develop a more objective and data-driven method to improve the accuracy of SFs for adjusting PTMSs. A statewide investigation was first conducted to identify potential influential factors that contribute to seasonal fluctuations in traffic volumes in both urban and rural areas in Florida. The influential factors considered include roadway functional classification, demographic, socioeconomic, land use, etc. Based on these factors, a methodology was developed for assigning seasonal factors from one or more TTMSs to each PTMS. The assigned seasonal factors were validated with data from existing TTMSs. The results show that the errors of the estimated seasonal factors average about 4 percent.
Nearly 95 percent of the estimated monthly SFs contain errors of no more than 10 percent. It was concluded that the method could be applied to improve the accuracy in AADT estimation for both urban and rural areas in Florida.
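The adjustment itself is simple arithmetic. Under the common convention SF = AADT / MADT (monthly average daily traffic), a short-term count is scaled by the seasonal factor of its month; the numbers below are hypothetical and only illustrate the mechanics:

```python
# Hypothetical MADT values (vehicles/day) at an assigned permanent count site
madt = {"Jan": 9000, "Feb": 9000, "Mar": 9000, "Apr": 9000,
        "May": 11000, "Jun": 11000, "Jul": 11000, "Aug": 11000,
        "Sep": 11000, "Oct": 11000, "Nov": 9000, "Dec": 9000}

aadt_permanent = sum(madt.values()) / 12            # 10000.0 vehicles/day
sf = {m: aadt_permanent / v for m, v in madt.items()}

# A 24-hour portable count of 9,500 vehicles taken in July is scaled
# down, since July traffic at this site runs above the annual average
estimated_aadt = 9500 * sf["Jul"]
print(round(estimated_aadt))  # -> 8636
```

Assigning the wrong seasonal factor category to a portable site biases this product directly, which is why the dissertation focuses on improving the assignment.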
APA, Harvard, Vancouver, ISO, and other styles
30

Jangerstad, August. "Transcription factor analysis of longitudinal mRNA expression data." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278693.

Full text
Abstract:
Transcription factors (TFs) are key regulatory proteins that regulate transcription through precise, but highly variable, binding events to cis-regulatory elements. The complexity of their regulatory patterns makes it difficult to determine the roles of different TFs, a task which the field is still struggling with. Experimental procedures for this purpose, such as knock-out experiments, are however costly and time-consuming, and with the ever-increasing availability of sequencing data, computational methods for inferring the activity of TFs from such data have become of great interest. Current methods are however lacking in several regards, which necessitates further exploration of alternatives. A novel tool for estimating the activity of individual TFs over time from longitudinal mRNA expression data was therefore put together in this project and tested on data from Mus musculus liver and brain. The tool is based on principal component analysis, which is applied to data subsets containing the expression data of genes likely regulated by a specific TF to acquire an estimation of its activity. Though initial tests on 17 selected TFs showed issues with unspecific trends in the estimations, further testing is required for a statement on the potential of the estimator.
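The estimator's core idea — the first principal component of a TF's putative target genes as a proxy for its activity over time — can be sketched as follows (our own minimal reconstruction, not the thesis code; the gene names and data are made up):

```python
import numpy as np

def tf_activity(expr, genes, targets):
    """Estimate a TF's activity over time as the first principal
    component of its putative target genes' expression.

    expr: (time points x genes) expression matrix
    """
    idx = [genes.index(g) for g in targets]
    sub = expr[:, idx]
    centered = sub - sub.mean(axis=0)
    # SVD of the centered submatrix; PC1 scores = u1 * s1
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

# Synthetic example: three target genes share a temporal trend
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
trend = np.sin(t)
genes = ["g1", "g2", "g3", "g4"]
expr = np.column_stack([
    2.0 * trend + 0.1 * rng.standard_normal(50),
    1.5 * trend + 0.1 * rng.standard_normal(50),
    1.0 * trend + 0.1 * rng.standard_normal(50),
    rng.standard_normal(50),            # unrelated gene
])
activity = tf_activity(expr, genes, ["g1", "g2", "g3"])
```

Up to an arbitrary sign (inherent to PCA), the estimated activity tracks the shared trend; the "unspecific trends" the abstract mentions arise when the target-gene subset shares variation not driven by the TF.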
APA, Harvard, Vancouver, ISO, and other styles
31

Brockwell, Timothy Graham. "Application of factor analysis to spectroscopic methods." Thesis, University of Greenwich, 1992. http://gala.gre.ac.uk/6117/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Garrett, Keith. "Vulnerability Analysis of Multi-Factor Authentication Protocols." UNF Digital Commons, 2016. http://digitalcommons.unf.edu/etd/715.

Full text
Abstract:
In this thesis, the author hypothesizes that the use of computationally intensive mathematical operations in password authentication protocols can lead to security vulnerabilities in those protocols. In order to test this hypothesis: 1. A generalized algorithm for cryptanalysis was formulated to perform a clogging attack (a form of denial of service) on protocols that use computationally intensive modular exponentiation to guarantee security. 2. This technique was then applied to cryptanalyze four recent password authentication protocols, to determine their susceptibility to the clogging attack. The protocols analyzed in this thesis differ in their usage of factors (smart cards, memory drives, etc.) or their method of communication (encryption, nonces, timestamps, etc.). Their similarity lies in their use of computationally intensive modular exponentiation as a medium of authentication. It is concluded that the strengths of all the protocols studied in this thesis can be combined to make each of the protocols secure from the clogging attack. The conclusion is supported by designing countermeasures for each protocol against the clogging attack.
APA, Harvard, Vancouver, ISO, and other styles
33

Dinsmore, Kimberly R., and L. Lee Glenn. "Factor Analysis of the Learning Orientation Questionnaire." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/7553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Bussey, Heidi Celeste. "Special Education Teacher Burnout: A Factor Analysis." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/9244.

Full text
Abstract:
The poor retention of special education teachers negatively impacts student academic outcomes. Special education teachers commonly cite burnout as a primary reason for leaving the field; however, there is a deficit of literature available to validate claims concerning special educators and their level of burnout. This study analyzed the psychometric properties of the Maslach Burnout Inventory: Educators' Survey using a sample of 349 special education teachers from schools across the nation (201 resource room special education teachers and 148 self-contained special education teachers). The Maslach Burnout Inventory: Educators' Survey measures three factors (i.e., subscales): emotional exhaustion, depersonalization, and personal accomplishment. A confirmatory factor analysis, an exploratory factor analysis, and a multi-group measurement invariance confirmatory factor analysis were conducted. The results showed measurement invariance between the two groups of teachers. During the exploratory factor analysis, a significant fourth factor, collaborative stress, emerged. These findings suggest the current factor structure of the Maslach Burnout Inventory: Educators' Survey needs to be modified when measuring burnout among special education teachers. This includes the need to further explore how collaboration stress relates to special education teachers and how to implement formative collaboration practices to retain special education teachers.
APA, Harvard, Vancouver, ISO, and other styles
35

Xu, Ruoyan. "Analysis of fibroblast growth factor-heparin interactions." Thesis, University of Liverpool, 2012. http://livrepository.liverpool.ac.uk/8833/.

Full text
Abstract:
The functions of a large number (> 435) of extracellular regulatory proteins are controlled by their interactions with heparan sulfate (HS). In the case of fibroblast growth factors (FGFs), HS binding controls their transport between cells and is required for the assembly of a high affinity signaling complex with the cognate FGF receptor. However, the specificity of the interaction of FGFs with HS is still debated. In this thesis, a panel of FGFs (FGF-1, FGF-2, FGF-7, FGF-9, FGF-18 and FGF-21) spanning five FGF sub-families was used to probe their specificities for HS/heparin at different levels: recombinant FGF proteins were expressed and purified and their biological activities tested in a DNA synthesis assay. Then, the proteins were tested for their heparin binding specificity using a variety of complementary approaches: 1. Measurement of the binding parameters of FGFs and a model heparin sugar in an optical biosensor or by microscale thermophoresis; 2. Identification of the heparin binding site (HBS) in the proteins using a Protect and Label strategy; 3. Determination of stability changes in FGFs when bound to different heparin sugars and related glycosaminoglycans employing differential scanning fluorimetry; 4. Measurement of the conformational changes in FGFs when binding to a variety of molar ratios of heparin and chemically modified heparins using synchrotron radiation circular dichroism (SRCD); 5. Direct measurement of the binding of FGF-2 to cellular HS, using nanoparticles (NPs) to label FGF-2 and transmission electron microscopy. For interaction with heparin, the FGFs have KDs varying between 38 nM (FGF-18) and 620 nM (FGF-9) and association rate constants spanning over 20-fold (FGF-1, 2,900,000 M-1s-1; FGF-9, 130,000 M-1s-1). The canonical HBS in FGF-1, FGF-2, FGF-7, FGF-9 and FGF-18 differs in its size, and these FGFs have a different complement of secondary HBS, ranging from none (FGF-9) to two (FGF-1).
Differential scanning fluorimetry identified clear preferences in these FGFs for distinct structural features in the polysaccharide. SRCD revealed conformational changes in FGFs induced by binding to heparin and the changes were distinct at different heparin concentrations. Moreover, there was evidence that the conformational changes of FGFs differed with chemically modified heparins, indicating that the conformational change caused by binding to heparin is related to the sulfation pattern. At the cellular level, FGF-2 labeled with nanoparticles allowed the distribution of FGF-2 to be determined in the pericellular matrix of Rama 27 fibroblasts. The results showed that the FGF-2-NPs were specifically bound to cellular HS and were clustered. Taken together, these data suggest that the differences in heparin binding sites in both the protein and the sugar are greatest between FGF sub-families and may be more restricted within a FGF sub-family in accord with the known conservation of function within FGF sub-families, which supports the idea that heparin binding of these proteins is specific, but in terms of consensus sites on the GAG chain, rather than precisely defined chemical structures.
APA, Harvard, Vancouver, ISO, and other styles
36

LAPI, MICHELA. "STRUCTURAL ANALYSIS OF TRANSCRIPTION FACTOR/DNA COMPLEXES." Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/834212.

Full text
Abstract:
Binding of transcription factors (TFs) to discrete sequences in gene promoters and enhancers is crucial to the process by which genetic information is transferred to biological functions. TF structural analysis is the key to understanding their DNA-binding mode and to the design of specific inhibitors. In this context, the present PhD project focuses on two TFs: (1) NFIX, a TF with unknown structure that binds to the palindromic motif TTGGC(n5)GCCAA and plays an essential role in skeletal muscle development; and (2) NF-Y, a histone-like TF that binds the CCAAT box in promoters of cell cycle genes. (1) There is a lack of structural information on NFIX and related TF family members because of the challenge of expressing and purifying soluble protein constructs in the amount required for structural characterization. We were able to obtain functional NFIX constructs using E. coli cells, and to purify them with a high yield. NFIX constructs were tested for correct folding, stability, and DNA-binding through a series of biochemical and biophysical methods. Furthermore, we managed to produce well-diffracting NFIX crystals, which were used for Single Anomalous Diffraction (SAD) phasing. We collected two different datasets of the same NFIX construct at 2.7 Å and 3.5 Å resolution, in two different space groups. Structural analysis of this NFIX construct shed first light on this class of TFs and laid the basis for the understanding of its DNA-binding mode. (2) Genomic data on NF-Y locations at gene promoters indicate that NF-Y plays a key role in oncogenic activation. The knowledge of the 3D structure of NF-Y in complex with its CCAAT box provided the rationale for developing inhibitors able to interfere with DNA-binding.
The pipeline used to search for NF-Y inhibitors consisted of in silico screenings of compounds that interfere with NF-Y functional trimerization and/or with the CCAAT box interaction, followed by in vitro biochemical/biophysical confirmation of inhibition and X-ray crystallography validation. The compound selected from the initial screening was suramin, which proved to bind to NF-Y and to functionally inhibit the binding of DNA. We obtained suramin/protein complex co-crystals, which diffracted up to 2.3 Å resolution. The crystal structure of the suramin/NF-Y complex provides the first evidence of NF-Y inhibition by a small molecule.
APA, Harvard, Vancouver, ISO, and other styles
37

Saxena, Vishal. "Interval finite element analysis for load pattern and load combination." Thesis, Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04072004-180207/unrestricted/saxena%5Fvishal%5F200312%5Fms.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

"Evolutionary factor analysis." Université catholique de Louvain, 2009. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-02032009-123244/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lai, Shu-Hui, and 賴淑慧. "Generalized Factor Analysis." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/28707984679276199196.

Full text
Abstract:
Doctoral dissertation
National Taiwan University
Graduate Institute of Epidemiology
88
Factor analysis (FA) has been used widely in various areas of science to explore or examine the latent measurement structure from a set of observed indicator variables. Both the observed and the latent variables are usually assumed to be continuous and, at least, symmetrically distributed. In the past 30 years or so, several methods have been proposed to extend the FA method for categorical observed indicator variables and/or latent variables, which include latent structure analysis, latent profile analysis, latent class analysis, latent trait analysis, and factor analysis of categorical data. See, for example, the books written by Bartholomew (1987) and Basilevsky (1994) and the references therein. We are interested in developing a general framework for FA, called "generalized factor analysis" (GFA), for continuous, discrete, or mixed observed indicator variables, as long as they belong to the exponential family of distributions such as the Normal, Binomial, and Poisson distributions. Just like the generalized linear models (GLMs), which include analysis of variance (ANOVA), linear regression, logistic regression, and Poisson regression as special cases, we hope that the GFA method extends the standard FA method to build a measurement structure of continuous latent variable(s) from observed continuous, binary, ordinal, count, or mixed indicator variables in a unified way. Yet, before doing that, we investigate the equivalence between exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). To estimate the factor loadings in a GFA model, we apply the iterative reweighted least squares (IRLS) algorithm of GLMs to "linearize" the generalized factor model first, and then use the usual estimation methods of factor analysis to obtain the estimates of the factor loadings. Specifically, we develop independently a unified three-step estimation procedure for GFA, which is similar to the E-M algorithm discussed in Bartholomew (1987, Sec. 6.1, pp. 107-115). On the other hand, we treat the estimation of factor loadings in GFA models as an errors-in-variables problem of GLMs, and then take an econometrician's instrumental variable (IV) approach for simultaneous equations models (SiEM) to estimate factor loadings. We shall discuss the results of our simulation study and compare the performances of different estimators numerically.
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Ke-Chin, and 陳可芹. "Factor Analysis II." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/09894515994453335121.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Epidemiology
87
In this study we considered the effects of exogenous variables on a factor analysis model first, and then modified the standard factor analysis method (called the "Factor Analysis I") to adjust for the covariates' effects. Factor analysis is one of the most popular statistical methods for discovering or examining the latent measurement structure. It extracts the information from the correlations between the observed indicator variables to identify the latent variables of interest. For example, one may be interested in studying students' intelligence through their grades in various courses. Notice that most of the factor analyses conducted before ignored exogenous variables such as sex, age, race, treatment, and so on. Yet, those covariates might affect the mean of the factor scores and/or the estimation of the factor loadings of a factor analysis model. The standard factor analysis method implicitly assumes that the covariate has an effect, if any, only on the latent variable so that it would just affect the mean of the factor scores. In this study, we found that (1) if the covariate has an effect only on the latent variable, then the estimated factor loadings and error variances are the same as ignoring the covariate; (2) if the covariate has an effect only on some of the observed indicator variables, then the estimated factor loadings are the same as ignoring the covariate but some of the estimated error variances are different; and (3) if the covariate has effects both on the latent variable and on some of the observed indicator variables, then the estimated factor loadings and error variances are different from ignoring the covariate. Hence, we developed a general factor analysis method (called the "Factor Analysis II"), which includes the stratified factor analysis as a special case, to account for the dual effects of exogenous variables.
APA, Harvard, Vancouver, ISO, and other styles
41

Gaucher, Beverly Jane. "Factor Analysis for Skewed Data and Skew-Normal Maximum Likelihood Factor Analysis." Thesis, 2013. http://hdl.handle.net/1969.1/149548.

Full text
Abstract:
This research explores factor analysis applied to data from skewed distributions for the general skew model, the selection-elliptical model, the selection-normal model, the skew-elliptical model, and the skew-normal model for finite sample sizes. In terms of asymptotics, or large sample sizes, quasi-maximum likelihood methods are broached numerically. The skewed models are formed using selection distribution theory, which is based on Rao's weighted distribution theory. The models assume the observed variable of the factor model is from a skewed distribution by defining the distribution of the unobserved common factors as skewed and that of the unobserved unique factors as symmetric. Numerical examples are provided using maximum likelihood selection skew-normal factor analysis. The numerical examples, such as maximum likelihood parameter estimation with the resolution of the "sign switching" problem and model fitting using likelihood methods, illustrate that the selection skew-normal factor analysis model fits skew-normal data better than the normal factor analysis model does.
APA, Harvard, Vancouver, ISO, and other styles
42

Pietersen, Jacobus Johannes. "Bilevel factor analysis models." Thesis, 2000. http://hdl.handle.net/2263/30460.

Full text
Abstract:
The theory of ordinary factor analysis and its application by means of software packages do not make provision for data sampled from populations with hierarchical structures. Since data are often obtained from such populations - educational data, for example - the lack of procedures to analyse data of this kind needs to be addressed. A review of the ordinary factor analysis model and maximum likelihood estimation of the parameters in exploratory and confirmatory models is provided, together with practical applications. Subsequently, the concept of hierarchically structured populations and their models, called multilevel models, are introduced. A general framework for the estimation of the unknown parameters in these models is presented. It contains two estimation procedures. The first is the marginal maximum likelihood method, in which an iterative expected maximisation approach is used to obtain the maximum likelihood estimates. The second is the Fisher scoring method, which also provides estimated standard errors for the maximum likelihood parameter estimates. For both methods, the theory is presented for unconstrained as well as for constrained estimation. A two-stage procedure - combining the mentioned procedures - is proposed for parameter estimation in practice. Multilevel factor analysis models are introduced next, and subsequently a particular two-level factor analysis model is presented. The general estimation theory that was presented earlier is applied to this model - in exploratory and confirmatory analysis. First, the marginal maximum likelihood method is used to obtain the equations for determining the parameter estimates. It is then shown how an iterative expected maximisation algorithm is used to obtain these estimates in unconstrained and constrained optimisation. This method is applied to real life data using a FORTRAN program.
Secondly, equations are derived by means of the Fisher scoring method to obtain the maximum likelihood estimates of the parameters in the two-level factor analysis model for exploratory and confirmatory analysis. A FORTRAN program was written to apply this method in practice. Real life data are used to illustrate the method. Finally, flowing from this research, some areas for possible further research are proposed.
Thesis (PhD (Applied Statistics))--University of Pretoria, 2007.
Statistics
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
43

Hsieh, Pi Hsia, and 謝碧霞. "Children burns factor analysis." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/70676177249674010176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

"Estimation of factor scores in a three-level confirmatory factor analysis model." 1998. http://library.cuhk.edu.hk/record=b5889599.

Full text
Abstract:
by Yuen Wai-ying.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 50-51).
Abstract also in Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Estimation of Factor Scores in a Three-level Factor Analysis Model
Chapter 2.1 --- The Three-level Factor Analysis Model --- p.5
Chapter 2.2 --- Estimation of Factor Scores in Between-group --- p.7
Chapter 2.2.1 --- REG Method --- p.9
Chapter 2.2.2 --- GLS Method --- p.11
Chapter 2.3 --- Estimation of Factor Scores in Second Level Within-group --- p.13
Chapter 2.3.1 --- REG Method --- p.15
Chapter 2.3.2 --- GLS Method --- p.17
Chapter 2.4 --- Estimation of Factor Scores in First Level Within-group
Chapter 2.4.1 --- First Approach --- p.19
Chapter 2.4.2 --- Second Approach --- p.24
Chapter 2.4.3 --- Comparison of the Two Approaches in Estimating Factor Scores in First Level Within-group --- p.31
Chapter 2.5 --- Summary on the REG and GLS Methods --- p.35
Chapter 3 --- Simulation Studies
Example 1 --- p.37
Example 2 --- p.42
Chapter 4 --- Conclusion and Discussion --- p.48
References --- p.50
Figures --- p.52
APA, Harvard, Vancouver, ISO, and other styles
45

Lin, Hui-Chu, and 林惠珠. "Stratified Data in Factor Analysis." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/39878436022507385287.

Full text
Abstract:
Master's thesis
National Yang-Ming University
Institute of Public Health
92
Factor analysis is one method of exploring the data structure. The main point is to classify correlated variables so as to reduce the dimension of the data. Based on the covariance matrix, factor analysis extracts common factors and retains most of the variance through matrix algebra. In practice, stratifying data can reduce the additional variance due to an extra variable. In modeling, the conditional logistic regression model and the stratified PH model are available for stratified data. Factor analysis for stratified data is somewhat complicated and not available in software. In this thesis, two methods, the pooled-amount-variation and group-mean-corrected methods, are proposed for data correction before factor analysis is used. In the fetal data, Pulsatility Index (PI) values are measured from several parts of vessels. The gestational-age effect is reduced by the correction methods. Two factors retaining most of the variation are generated from factor analysis. Combined with the binary fetal outcome (normal or abnormal), logistic regression analysis and CART analysis are used for binary discrimination. The discrimination result has a lower misclassification rate for corrected data when the data are highly correlated with gestational age. Keywords: Stratified Data, Factor Analysis, Fetal Data, Pulsatility Index.
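The group-mean-corrected idea — removing each stratum's mean (here, a gestational-age effect) before the pooled correlation matrix is factor-analyzed — can be sketched as follows (a hedged reconstruction with made-up data; the thesis's exact correction may differ):

```python
import numpy as np

def group_mean_correct(X, groups):
    """Center each variable within its stratum so that between-stratum
    variation (e.g., a gestational-age effect) is removed before the
    pooled correlation matrix is computed for factor analysis."""
    Xc = np.asarray(X, dtype=float).copy()
    groups = np.asarray(groups)
    for g in np.unique(groups):
        mask = groups == g
        Xc[mask] -= Xc[mask].mean(axis=0)
    return Xc

# Two gestational-age strata whose PI measurements differ in mean level
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.2, size=(30, 3)),
               rng.normal(2.0, 0.2, size=(30, 3))])
g = np.array([0] * 30 + [1] * 30)
Xc = group_mean_correct(X, g)
```

After correction, each stratum has zero mean on every variable, so correlations computed from the pooled corrected data no longer reflect the between-stratum shift.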
APA, Harvard, Vancouver, ISO, and other styles
46

Kuo, Ching-Yi, and 郭靜宜. "Factor Analysis in Data Mining." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/15758716436758148363.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
88
In this study, we proposed a method of factor analysis for a huge database so that not only can the independence among the factors be considered, but also the levels of their importance can be measured. To keep the independence between factors, a statistical correlation analysis and the concept of fuzzy set theory are employed, and to measure the importance of factors a neural-based model is developed. A fuzzy set 'factors are almost dependent' is used to measure the degree of dependence between factors, and then a hierarchical clustering method is adopted to detect the dependent factors with an α-level of dependence. Hence, the independent factors also satisfy the same level of requirement. Then, a supervised feedforward neural network is developed to learn the weights of importance of the independent factors. In addition, with the designed hierarchical structure, the proposed model facilitates the extraction of new factors when the information on the system is not complete. The applicability of the proposed model is evaluated in two cases, customers' contribution analysis and churn analysis of a telecom company, with 0.08% and 1% error rates, respectively.
APA, Harvard, Vancouver, ISO, and other styles
47

Chiou, Mu-Lin, and 邱木霖. "Purchasing Factor Analysis on Smartphones." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/84454604889453842395.

Full text
Abstract:
Master's thesis
I-Shou University
Graduate Program of Management (Master's and Doctoral)
102 (ROC calendar, 2013)
The fast global uptake of smartphones has completely changed the way we communicate and use the internet. The latest research, from an IDC survey in 2013, showed that the smartphone has become the mainstream product in the mobile phone market. With the rise of low-cost, high-quality handsets, the major mobile phone brand manufacturers keep launching products into the market one after another. How to win the trust of customers is an important issue in such a competitive market, and the demands and considerations of consumers when choosing a smartphone are even more crucial for suppliers to explore. This study explores the purchasing factors for the current three major smartphone operating systems, and how they may influence sales targets, using the expert weighting method and regression analysis. The empirical results on purchasing factors not only give customers a more appropriate basis for choosing a smartphone, but also provide sellers a more effective focus for catering to buyers, which could greatly improve their sales and market competitiveness.
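The expert weighting method combined with regression, as mentioned in the abstract, can be sketched as a weighted-sum composite score regressed against an outcome. The factor names, weights, and data below are purely illustrative assumptions; the thesis's actual factors and weights are not given here.

```python
import numpy as np

# Hypothetical purchasing factors and expert weights (illustrative).
factors = ["price", "brand", "camera", "battery"]
expert_weights = np.array([0.35, 0.25, 0.20, 0.20])  # normalized to sum to 1

rng = np.random.default_rng(2)
scores = rng.uniform(1, 5, size=(30, 4))   # factor ratings for 30 phones
composite = scores @ expert_weights        # expert-weighted composite score

# Regress a simulated outcome (e.g. sales) on the composite score
# via ordinary least squares.
sales = 10 * composite + rng.normal(scale=2, size=30)
A = np.column_stack([np.ones(30), composite])
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)
print(f"intercept={coef[0]:.2f}, slope={coef[1]:.2f}")
```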
APA, Harvard, Vancouver, ISO, and other styles
48

Hsieh, Tsung-yi, and 謝宗益. "Stock Market Seasonality in Taiwan: Empirical Analysis of Conglomerate Factor, Scale Factor and Industry Factor." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/94388654329381396603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Xie, Zong-Yi, and 謝宗益. "Stock Market Seasonality in Taiwan: Empirical Analysis of Conglomerate Factor, Scale Factor and Industry Factor." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/52819123302673400334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Pal-Hsuan, and 吳柏璇. "The skew-t factor analysis model." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/a39dwt.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Graduate Institute of Statistics
101 (ROC calendar, 2012)
Factor analysis (FA) is a classical data-reduction technique that seeks a potentially lower number of unobserved variables accounting for most of the correlation among the observed variables. This thesis presents an extension of the FA model that jointly assumes a restricted version of the multivariate skew t distribution for the latent factors and the unobservable errors, called the skew-t FA model. The proposed model is robust to violations of the normality assumption on the underlying latent factors and provides flexibility in capturing extra skewness as well as heavier tails in the observed data. A computationally feasible EM-type algorithm is developed for computing maximum likelihood estimates of the parameters. The usefulness of the proposed methodology is illustrated by a real-life example, and the results also demonstrate its better performance over various existing methods.
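The EM-type algorithm mentioned here extends the classical EM for Gaussian factor analysis. As background, a minimal sketch of the Gaussian baseline (Rubin-Thayer style), which the skew-t model generalizes by replacing the normal assumptions on factors and errors with a restricted multivariate skew t; the simulated data and iteration count are assumptions for illustration only.

```python
import numpy as np

def fa_em(X, q=2, n_iter=200):
    """EM for the Gaussian factor model x = Lambda f + e,
    with f ~ N(0, I_q) and e ~ N(0, Psi), Psi diagonal."""
    X = X - X.mean(axis=0)
    n, p = X.shape
    S = X.T @ X / n                                     # sample covariance
    L = np.linalg.cholesky(S + 1e-6 * np.eye(p))[:, :q]  # crude init
    Psi = np.diag(S).copy()
    for _ in range(n_iter):
        # E-step: posterior moments of the latent factors.
        B = L.T / Psi                        # q x p, Lambda' Psi^{-1}
        M = np.linalg.inv(np.eye(q) + B @ L)  # posterior factor covariance
        Ez = X @ B.T @ M                     # n x q, E[f | x]
        Ezz = n * M + Ez.T @ Ez              # sum_i E[f f' | x_i]
        # M-step: update loadings and uniquenesses.
        L = (X.T @ Ez) @ np.linalg.inv(Ezz)
        Psi = np.maximum(np.diag(S - L @ (Ez.T @ X) / n), 1e-6)
    return L, Psi

# Simulated data from a 2-factor model with 6 observed variables.
rng = np.random.default_rng(3)
true_L = rng.normal(size=(6, 2))
F = rng.normal(size=(500, 2))
X = F @ true_L.T + rng.normal(scale=0.3, size=(500, 6))
L, Psi = fa_em(X)
print(L.shape, Psi.shape)
```

The skew-t extension keeps this E-step/M-step structure but requires additional conditional expectations for the latent skewness and scale variables.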
APA, Harvard, Vancouver, ISO, and other styles