
Dissertations / Theses on the topic 'Multivariate analysis'



Consult the top 50 dissertations / theses for your research on the topic 'Multivariate analysis.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Wolting, Duane. "MULTIVARIATE SYSTEMS ANALYSIS." International Foundation for Telemetering, 1985. http://hdl.handle.net/10150/615760.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1985 / Riviera Hotel, Las Vegas, Nevada
In many engineering applications, a systems analysis is performed to study the effects of random error propagation throughout a system. Often these errors are not independent, and have joint behavior characterized by arbitrary covariance structure. The multivariate nature of such problems is compounded in complex systems, where overall system performance is described by a q-dimensional random vector. To address this problem, a computer program was developed which generates Taylor series approximations for multivariate system performance in the presence of random component variability. A summary of an application of this approach is given in which an analysis was performed to assess simultaneous design margins and to ensure optimal component selection.
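The abstract's central device is first-order (Taylor-series) propagation of correlated component errors into a q-dimensional performance vector. Below is a minimal numpy sketch of that linearised propagation, Sigma_y ~= J Sigma_x J^T; the Jacobian and component covariance values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def propagate_covariance(jacobian, cov_components):
    """First-order (linearised Taylor-series) error propagation:
    Sigma_y ~= J @ Sigma_x @ J.T maps component covariance to the
    covariance of the q-dimensional system-performance vector."""
    return jacobian @ cov_components @ jacobian.T

# hypothetical 2-output system driven by 3 correlated component errors
J = np.array([[1.0, 0.5, -0.2],
              [0.0, 2.0,  1.0]])            # d(performance)/d(component) at the design point
Sigma_x = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.25]])    # arbitrary (non-diagonal) component covariance
print(propagate_covariance(J, Sigma_x))     # approximate performance covariance
```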
APA, Harvard, Vancouver, ISO, and other styles
2

Nisa, Khoirin. "On multivariate dispersion analysis." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2025.

Full text
Abstract:
This thesis examines the multivariate dispersion of normal stable Tweedie (NST) models. Three generalized variance estimators of some NST models are discussed. Then, within the framework of the natural exponential family, two characterizations of the normal Poisson model, which is a special case of NST models with a discrete component, are shown: first by variance function and then by generalized variance function. The latter provides a solution to a particular Monge-Ampère equation problem. Finally, to illustrate the application of the generalized variance of normal stable Tweedie models, examples from real data are provided.
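For intuition, the generalized variance referred to in the abstract is the determinant of the covariance matrix. The snippet below is a toy illustration with simulated multivariate normal data rather than an NST model; it simply computes the plain sample version of that quantity.

```python
import numpy as np

# generalized variance = determinant of the covariance matrix; the thesis studies
# estimators of this quantity for normal stable Tweedie models (not simulated here)
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.3, 0.0],
                [0.3, 2.0, 0.5],
                [0.0, 0.5, 1.5]])
X = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=200)
S = np.cov(X, rowvar=False)                  # sample covariance matrix
print(np.linalg.det(S), np.linalg.det(cov))  # sample vs population generalized variance
```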
APA, Harvard, Vancouver, ISO, and other styles
3

Ahrabian, Alireza. "Multivariate time-frequency analysis." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/28958.

Full text
Abstract:
Recent advances in time-frequency theory have led to the development of high resolution time-frequency algorithms, such as the empirical mode decomposition (EMD) and the synchrosqueezing transform (SST). These algorithms provide enhanced localization in representing time varying oscillatory components over conventional linear and quadratic time-frequency algorithms. However, with the emergence of low cost multichannel sensor technology, multivariate extensions of time-frequency algorithms are needed in order to exploit the inter-channel dependencies that may arise for multivariate data. Applications of this framework range from filtering to the analysis of oscillatory components. To this end, this thesis first seeks to introduce a multivariate extension of the synchrosqueezing transform, so as to identify a set of oscillations common to the multivariate data. Furthermore, a new framework for multivariate time-frequency representations is developed using the proposed multivariate extension of the SST. The performance of the proposed algorithms is demonstrated on a wide variety of both simulated and real world data sets, such as in phase synchrony spectrograms and multivariate signal denoising. Finally, multivariate extensions of the EMD have been developed that capture the inter-channel dependencies in multivariate data. This is achieved by processing such data directly in the higher dimensional spaces where they reside, and by accounting for the power imbalance across multivariate data channels that are recorded from real world sensors, thereby preserving the multivariate structure of the data. This optimises the performance of such data driven algorithms when processing multivariate data with power imbalances and inter-channel correlations, as demonstrated on real world examples of Doppler radar processing.
APA, Harvard, Vancouver, ISO, and other styles
4

Ahmed, Mosabber Uddin. "Multivariate multiscale complexity analysis." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10204.

Full text
Abstract:
Established dynamical complexity analysis measures operate at a single scale and thus fail to quantify inherent long-range correlations in real world data, a key feature of complex systems. They are designed for scalar time series; however, multivariate observations are common in modern real world scenarios and their simultaneous analysis is a prerequisite for the understanding of the underlying signal generating model. To that end, this thesis first introduces a notion of multivariate sample entropy and thus extends the current univariate complexity analysis to the multivariate case. The proposed multivariate multiscale entropy (MMSE) algorithm is shown to be capable of addressing the dynamical complexity of such data directly in the domain where they reside, and at multiple temporal scales, thus making full use of all the available information, both within and across the multiple data channels. Next, the intrinsic multivariate scales of the input data are generated adaptively via the multivariate empirical mode decomposition (MEMD) algorithm. This allows both for generating comparable scales from multiple data channels, and for temporal scales of the same length as the input signal, thus removing the critical limitation on input data length in current complexity analysis methods. The resulting MEMD-enhanced MMSE method is also shown to be suitable for non-stationary multivariate data analysis owing to the data-driven nature of the MEMD algorithm, as non-stationarity is the biggest obstacle for meaningful complexity analysis. This thesis presents a quantum step forward in this area, by introducing robust and physically meaningful complexity estimates of real-world systems, which are typically multivariate, finite in duration, and of a noisy and heterogeneous nature. This also allows us to gain a better understanding of the complexity of the underlying multivariate model and more degrees of freedom and rigor in the analysis. Simulations on both synthetic and real world multivariate data sets support the analysis.
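The multiscale part of the MMSE idea rests on two ingredients: coarse-graining the signal at successive temporal scales, then computing a sample-entropy value at each scale. The sketch below shows those ingredients for a single channel only; the thesis's multivariate sample entropy, which embeds across channels, is not reproduced here, and the white-noise test signal is purely illustrative.

```python
import numpy as np

def coarse_grain(x, scale):
    """Coarse-graining step of multiscale entropy: average consecutive,
    non-overlapping windows of length `scale`."""
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Plain univariate sample entropy (the thesis's multivariate version
    generalises the template-matching step across data channels)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(length):
        templ = np.lib.stride_tricks.sliding_window_view(x, length)[:len(x) - m]
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        return np.sum(dist <= tol) - len(templ)      # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# complexity profile across temporal scales for one channel of white noise
rng = np.random.default_rng(0)
signal = rng.normal(size=1000)
print({s: round(sample_entropy(coarse_grain(signal, s)), 3) for s in (1, 2, 4, 8)})
```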
APA, Harvard, Vancouver, ISO, and other styles
5

Alashwali, Fatimah Salem. "Robustness and multivariate analysis." Thesis, University of Leeds, 2013. http://etheses.whiterose.ac.uk/5299/.

Full text
Abstract:
Invariant coordinate selection (ICS) is a method for finding structures in multivariate data using the eigenvalue-eigenvector decomposition of two different scatter matrices. The performance of ICS depends on the structure of the data and the choice of the scatter matrices. The main goal of this thesis is to understand how ICS works in some situations, and does not in others. In particular, we look at ICS under three different structures: two-group mixtures, long-tailed distributions, and parallel line structure. Under two-group mixtures, we explore ICS based on the fourth-order moment matrix, K̂, and the covariance matrix S. We find the explicit form of K̂, and the ICS criterion under this model. We also explore the projection pursuit (PP) method, a variant of ICS, based on the univariate kurtosis. A comparison is made between PP, based on kurtosis, and ICS, based on K̂ and S, through a simulation study. The results show that PP is more accurate than ICS. The asymptotic distributions of the ICS and PP estimates of the group separation direction are derived. We explore ICS and PP based on two robust measures of spread, under two-group mixtures. The use of common location measures, and pairwise differencing of the data, in robust ICS and PP is investigated using simulations. The simulation results suggest that using a common location measure can sometimes be useful. The second structure considered in this thesis, the long-tailed distribution, is modelled by a two-dimensional errors-in-variables model, where the signal can have a non-normal distribution. ICS based on K̂ and S is explored. We gain insight into how ICS finds the signal direction in the errors-in-variables problem. We also compare the accuracy of the ICS estimate of the signal direction and Geary's fourth-order cumulant-based estimates through simulations. The results suggest that some of the cumulant-based estimates are more accurate than ICS, but ICS has the advantage of affine equivariance. The third structure considered is the parallel lines structure. We explore ICS based on the W-estimate based on the pairwise differencing of the data, V̂, and S. We give a detailed analysis of the effect of the separation between points, overall and conditional on the horizontal separation, on the power of ICS based on V̂ and S.
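As a concrete illustration of the ICS recipe the abstract describes (an eigen-decomposition of one scatter matrix relative to another), here is a short numpy/scipy sketch using the covariance matrix and a fourth-order moment scatter. The two-group toy data, the particular choice of the second scatter, and all numbers are assumptions for illustration, not the thesis's own simulation setup.

```python
import numpy as np
from scipy.linalg import eigh

def ics_directions(X):
    """Sketch of invariant coordinate selection (ICS): eigendecomposition of a
    fourth-order moment scatter matrix relative to the ordinary covariance."""
    Xc = X - X.mean(axis=0)                       # centre the data
    S = np.cov(Xc, rowvar=False)                  # first scatter: covariance matrix
    d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc)          # squared Mahalanobis distances
    K = (Xc * d2[:, None]).T @ Xc / (X.shape[0] * (X.shape[1] + 2))  # one common fourth-order scatter
    vals, B = eigh(K, S)                          # generalized eigenproblem K b = lambda S b
    return vals[::-1], B[:, ::-1]                 # order by descending kurtosis-based eigenvalue

# toy two-group mixture: the extreme eigenvalues flag the group-separation direction
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 3)),
               rng.normal([4.0, 0.0, 0.0], 1.0, (100, 3))])
vals, B = ics_directions(X)
print(np.round(vals, 2))      # generalized kurtosis values of the invariant coordinates
print(np.round(B[:, 0], 2))   # leading invariant coordinate (roughly the separation direction)
```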
APA, Harvard, Vancouver, ISO, and other styles
6

Yang, Di. "Analysis guided visual exploration of multivariate data." Worcester, Mass. : Worcester Polytechnic Institute, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-050407-005925/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lans, Ivo A. van der. "Nonlinear multivariate analysis for multiattribute preference data." [Leiden] : DSWO Press, Leiden University, 1992. http://catalog.hathitrust.org/api/volumes/oclc/28733326.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chongcharoen, Samruam. "One-sided multivariate tests /." free to MU campus, to others for purchase, 1998. http://wwwlib.umi.com/cr/mo/fullcit?p9924874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Salter, Amy Beatrix. "Multivariate dependencies in survival analysis." Title page, contents and introduction only, 1999. http://web4.library.adelaide.edu.au/theses/09PH/09phs177.pdf.

Full text
Abstract:
Bibliography: leaves 177-181. This thesis investigates determinants of factors associated with retention of injecting drug users on the South Australian methadone program over the decade 1981 to mid 1991. Truncated multivariate survival models are proposed for the analysis of data from the program, and the theory of graphical chain models applied to the data. A detailed analysis is presented which gives further insight into the nature of the relationships that exist amongst these data. This provides an application of graphical chain models to survival data.
APA, Harvard, Vancouver, ISO, and other styles
10

Tavares, Nuno Filipe Ramalho da Cunha. "Multivariate analysis applied to clinical analysis data." Master's thesis, Faculdade de Ciências e Tecnologia, 2014. http://hdl.handle.net/10362/12288.

Full text
Abstract:
Dissertation submitted to obtain the degree of Master in Industrial Engineering and Management
Folate, vitamin B12, iron and hemoglobin are essential for metabolic functions in the body. The deficiency of these can be the cause of several known pathologies and, untreated, can be responsible for severe morbidity and even death. The objective of this study is to characterize a population, residing in the metropolitan area of Lisbon and Setubal, concerning serum levels of folate, vitamin B12, iron and hemoglobin, as well as finding evidence of correlations between these parameters and illnesses, mainly cardiovascular, gastrointestinal and neurological diseases and anemia. Clinical analysis data was collected and submitted to multivariate analysis. First, the data were screened with Spearman correlation and Kruskal-Wallis analysis of variance to study correlations and variability between groups. To characterize the population, we used cluster analysis with Ward's linkage method. Finally, a sensitivity analysis was performed to strengthen the results. A positive correlation of iron with ferritin, transferrin and hemoglobin was observed with the Spearman correlation. The Kruskal-Wallis analysis of variance test showed significant differences between these biomarkers in persons aged 0 to 29, 30 to 59 and over 60 years old. Cluster analysis proved to be a useful tool when characterizing a population based on its biomarkers, showing evidence of low folate levels for the population in general, and hemoglobin levels below the reference values. Iron and vitamin B12 were within the reference range for most of the population. Low levels of the parameters were registered mainly in patients with cardiovascular, gastrointestinal, and neurological diseases and anemia.
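The analysis pipeline named in the abstract (Spearman screening, Kruskal-Wallis tests across age groups, and Ward's-linkage clustering) maps directly onto standard scipy calls. The sketch below runs that pipeline on randomly generated stand-in biomarker values; the column meanings, group coding and all numbers are assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr, kruskal
from scipy.cluster.hierarchy import linkage, fcluster

# hypothetical biomarker matrix: columns = folate, vitamin B12, iron, hemoglobin
rng = np.random.default_rng(0)
X = rng.normal(loc=[15, 400, 100, 13.5], scale=[5, 120, 30, 1.5], size=(300, 4))
age_group = rng.integers(0, 3, size=300)               # 0: 0-29, 1: 30-59, 2: 60+

rho, pval = spearmanr(X)                                # pairwise Spearman correlations
stat, p_kw = kruskal(*(X[age_group == g, 2] for g in range(3)))  # iron across the age groups

Z = linkage((X - X.mean(0)) / X.std(0), method='ward')  # Ward's linkage on standardised biomarkers
profiles = fcluster(Z, t=3, criterion='maxclust')        # cut the dendrogram into three profiles
print(np.round(rho, 2), round(p_kw, 3), np.bincount(profiles)[1:])
```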
APA, Harvard, Vancouver, ISO, and other styles
11

Malan, Karien. "Stationary multivariate time series analysis." Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-06132008-173800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Hagen, Reidar Strand. "a Multivariate Image Analysis Toolbox." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9275.

Full text
Abstract:

The toolkit has been implemented as planned: the groundwork for visualisation mappings and relationships between datasets has been finished. Wavelet transforms have been used to compress datasets in order to reduce computational time. Principal Component Analysis and other transforms are working. Examples of use have been provided, along with several ways of visualising them. Multivariate Image Analysis is viable on regular workstations.
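A core operation in a multivariate image analysis toolbox of this kind is unfolding the image cube into a pixels-by-channels matrix and applying a transform such as PCA. The snippet below is a minimal sketch of that step with scikit-learn on a random stand-in cube; the dimensions and data are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

# hypothetical multivariate image: 64 x 64 pixels, 10 spectral channels
rng = np.random.default_rng(0)
cube = rng.normal(size=(64, 64, 10))
X = cube.reshape(-1, 10)                        # unfold: pixels become rows, channels become columns
scores = PCA(n_components=3).fit_transform(X)   # principal-component scores for every pixel
score_images = scores.reshape(64, 64, 3)        # fold the scores back into image form for display
print(score_images.shape)
```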

APA, Harvard, Vancouver, ISO, and other styles
13

Calder, P. "Influence functions in multivariate analysis." Thesis, University of Kent, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Powell, Kenneth John. "Multivariate analysis of underwater sounds." Thesis, University of Exeter, 1997. http://create.canterbury.ac.uk/10470/.

Full text
Abstract:
This thesis considers the use of multivariate statistical methods in relation to a common signal processing problem, that of detecting features in sound recordings which contain interference, distortion and background noise. Two separate but related areas of study are undertaken: first, the compression and noise reduction of sounds; second, the detection of intermittent departures ('signals') from the background sound environment ('noise'), where the latter may be evolving and changing over time. Compression and noise reduction are two closely related areas that have been studied for a wide range of signals, both one dimensional (such as sound) and two dimensional (such as images). Many well known techniques used in this field are based on the Fourier transform. In this work, we show how the comparatively recent wavelet transform is superior for sound data involving short duration signals (such as shrimp clicks) whilst being at least as good as the Fourier transform for longer duration signals (such as dolphin whistles). Various noise reduction techniques involving thresholding wavelet transforms are examined and compared. We show that none of the standard threshold methods copes with underwater sounds to any reasonable degree, and propose a new technique, known as RunsThresh, to overcome the perceived problems. The performance of this new method is contrasted with that of various standard thresholds. Signal identification for underwater sounds is an area that has been examined in detail in much previous work. Here, we build upon the results gleaned from noise reduction to develop a methodology for detecting signals. The underwater noise environment is dynamically modelled using recursive density estimation of certain summary features of its wavelet decomposition. Observations which are considered to be outliers from this distribution are flagged as 'signal'. The performance of our signal detection method is illustrated on artificial data, containing known signals, and on real data. This performance is compared with standard Fourier based methods for both cases. Finally in this thesis, several ideas are presented and discussed which consider how the noise reduction and signal detection techniques examined in earlier chapters could be developed further, for example, in order to classify detected signals into different classes. These ideas are presented in outline only and are not followed up in detail, since they represent interesting directions for future study, rather than a primary focus of this thesis.
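For orientation, wavelet-shrinkage denoising of the kind compared in the thesis typically follows decompose, threshold the detail coefficients, reconstruct. The sketch below implements the standard universal soft threshold with the PyWavelets package (assumed installed); it is not the thesis's RunsThresh rule, and the synthetic 'click' signal is invented for illustration.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet='db4', level=5):
    """Wavelet shrinkage with the standard universal soft threshold
    (the thesis's RunsThresh rule is a different, bespoke threshold)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from the finest detail level
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

# synthetic short-duration 'click' buried in noise
t = np.arange(4096)
clean = np.exp(-0.5 * ((t - 2048) / 10.0) ** 2) * np.sin(0.6 * t)
noisy = clean + np.random.default_rng(0).normal(0.0, 0.3, t.size)
print(np.std(noisy - clean), np.std(wavelet_denoise(noisy) - clean))   # error before vs after
```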
APA, Harvard, Vancouver, ISO, and other styles
15

Oliveira, Irene. "Correlated data in multivariate analysis." Thesis, University of Aberdeen, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401414.

Full text
Abstract:
After presenting Principal Component Analysis (PCA) and its relationship with time series data sets, we describe most of the existing techniques in this field. Various techniques, e.g. Singular Spectrum Analysis (SSA), Hilbert EOF, Extended EOF or Multichannel Singular Spectrum Analysis (MSSA), and Principal Oscillation Pattern (POP) Analysis, can be used for such data. The way we use the matrix of data, or the covariance or correlation matrix, makes each method different from the others. SSA may be considered as a PCA performed on lagged versions of a single time series, where we may decompose the original time series into some main components. Following SSA we have its multivariate version (MSSA), where we try to augment the initial matrix of data to get information on lagged versions of each variable (time series), so that past (or future) behaviour can be used to reanalyse the information between variables. In POP Analysis a linear system involving the vector field is analysed, x_{t+1} = A x_t + n_t, in order to "know" x_{t+1} given the information from time t. The matrix A is estimated by using not only the covariance matrix but also the matrix of covariances between the system at the current time and at lag 1. In Hilbert EOF we try to get some (future) information from the internal correlation in each variable by using the Hilbert transform of each series in an augmented complex matrix, with the data themselves in the real part and the Hilbert time series in the imaginary part, X_t + i X_t^H. In addition to all these ideas from the statistics and other literature, we develop a new methodology as a modification of HEOF and POP Analysis, namely Hilbert Oscillation Patterns (HOP) Analysis, or the related idea of Hilbert Canonical Correlation Analysis (HCCA), by using the system x_t^H = A x_t + n_t. Theory and assumptions are presented, and HOP results are related to the results extracted from a Canonical Correlation Analysis between the time series data matrix and its Hilbert transform. Some examples are given to show the differences and similarities of the results of the HCCA technique with those from PCA, MSSA, HEOF and POPs. We also present PCA for time series as observations, where a technique of linear algebra (PCA) becomes a problem in functional analysis, leading to Functional PCA (FPCA). We also adapt PCA to allow for this and discuss the theoretical and practical behaviour of using PCA on the even part (EPCA) and odd part (OPCA) of the data, and its application to functional data. Comparisons are made between PCA and this modification, for the reconstruction of data sets for which considerations of symmetry are especially relevant.
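The POP step in this abstract is concrete enough to sketch: fit the propagator A in x_{t+1} = A x_t + n_t from the lag-0 and lag-1 covariance matrices and read the POPs off its eigenvectors. The code below is a minimal illustration on a simulated damped rotation; it does not implement the HOP/HCCA methodology the thesis develops.

```python
import numpy as np

def pop_matrix(X):
    """Estimate the POP propagator A in x_{t+1} = A x_t + n_t from the lag-0
    covariance C0 and the lag-1 cross-covariance C1, via A ~= C1 @ inv(C0)."""
    Xc = X - X.mean(axis=0)                      # rows are times, columns are variables
    C0 = Xc[:-1].T @ Xc[:-1] / (len(Xc) - 1)
    C1 = Xc[1:].T @ Xc[:-1] / (len(Xc) - 1)
    A = C1 @ np.linalg.inv(C0)
    eigvals, pops = np.linalg.eig(A)             # the POPs are the (possibly complex) eigenvectors of A
    return A, eigvals, pops

# toy example: a noisy damped rotation generated from a known propagator
rng = np.random.default_rng(1)
A_true = np.array([[0.9, -0.3],
                   [0.3,  0.9]])
x = np.zeros((500, 2))
for t in range(499):
    x[t + 1] = A_true @ x[t] + rng.normal(0.0, 0.1, 2)
A_hat, lam, pops = pop_matrix(x)
print(np.round(A_hat, 2))                        # should be close to A_true
```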
APA, Harvard, Vancouver, ISO, and other styles
16

Prelorendjos, Alexios. "Multivariate analysis of metabonomic data." Thesis, University of Strathclyde, 2014. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=24286.

Full text
Abstract:
Metabonomics is one of the main technologies used in biomedical sciences to improve understanding of how various biological processes of living organisms work. It is considered a more advanced technology than e.g. genomics and proteomics, as it can provide important evidence of molecular biomarkers for the diagnosis of diseases and the evaluation of beneficial and adverse drug effects, by studying the metabolic profiles of living organisms. This is achievable by studying samples of various types such as tissues and biofluids. The findings of a metabonomics study for a specific disease, disorder or drug effect could be applied to other diseases, disorders or drugs, making metabonomics an important tool for biomedical research. This thesis aims to review and study various multivariate statistical techniques which can be used in the exploratory analysis of metabonomics data. To motivate this research, a metabonomics data set containing the metabolic profiles of a group of patients with epilepsy was used. More specifically, the metabolic fingerprints (proton NMR spectra) of 125 patients with epilepsy, of blood serum type, have been obtained from the Western Infirmary, Glasgow, for the purposes of this project. These data were originally collected as baseline data in a study to investigate if the treatment with Anti-Epileptic Drugs (AEDs) of patients with pharmacoresistant epilepsy affects the seizure levels of the patients. The response to the drug treatment in terms of the reduction in seizure levels of these patients enabled two main categories of response to be identified, i.e. responders and non-responders to AEDs. We explore the use of statistical methods used in metabonomics to analyse these data. Novel aspects of the thesis are the use of Self Organising Maps (SOM) and of Fuzzy Clustering Methods for pattern recognition in metabonomics data. Part I of the thesis defines metabonomics and the other main "omics" technologies, and gives a detailed description of the metabonomics data to be analysed, as well as a description of the two main analytical chemical techniques, Mass Spectrometry (MS) and Nuclear Magnetic Resonance Spectroscopy (NMR), that can be used to generate metabonomics data. Pre-processing and pre-treatment methods that are commonly used in NMR-generated metabonomics data to enhance the quality and accuracy of the data are also discussed. In Part II, several unsupervised statistical techniques are reviewed and applied to the epilepsy data to investigate the capability of these techniques to discriminate the patients according to their type of response. The techniques reviewed include Principal Components Analysis (PCA), Multi-dimensional scaling (both Classical scaling and Sammon's non-linear mapping) and Clustering techniques. The latter include Hierarchical clustering (with emphasis on Agglomerative Nesting algorithms), Partitioning methods (Fuzzy and Hard clustering algorithms) and Competitive Learning algorithms (Self Organizing maps). The advantages and disadvantages of the different methods are examined for this kind of data. Results of the exploratory multivariate analyses showed that no natural clusters of patients existed with regard to their response to AEDs; therefore none of these techniques was capable of discriminating these patients according to their clinical characteristics.
To examine the capability of an unsupervised technique such as PCA to identify groups in data such as the metabolic fingerprints of patients with epilepsy, a simulation algorithm was developed to run a series of experiments, covered in Part III of the thesis. The aim of the simulation study is to investigate the extent of the difference in the clusters of the data, and under what conditions this difference is detectable by unsupervised techniques. Furthermore, the study examines whether the existence or lack of variation in the mean-shifted variables affects the discriminating ability of the unsupervised techniques (in this case PCA) or not. In each simulation experiment, a reference and a test data set were generated based on the original epilepsy data, and the discriminating capability of PCA was assessed. A test set was generated by mean-shifting a pre-selected number of variables in a reference set. Three methods of selecting the variables to mean-shift (maximum and minimum standard deviations and maximum means), five subsets of variables of sizes 1, 3, 20, 120 and 244 (the total number of variables in the data sets) and three sample sizes (100, 500 and 1000) were used. Average values over 100 runs of an experiment for two statistics, i.e. the misclassification rate and the average separation (Webb, 2002), were recorded. Results showed that the number of mean-shifted variables (in general) and the methods used to select the variables (in some cases) are important factors for the discriminating ability of PCA, whereas the sample size of the two data sets does not play any role in the experiments (although experiments with large sample sizes showed greater stability in the results for the two statistics over 100 runs of any experiment). The results have implications for the use of PCA with metabonomics data generally.
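The simulation design described above (generate a reference set, mean-shift a chosen subset of variables to form a test set, then ask whether PCA separates the two) can be sketched in a few lines. The code below is only a schematic reproduction under invented settings: it uses Gaussian data instead of the epilepsy spectra, one selection rule (maximum standard deviation), and a crude PC1 midpoint classifier rather than the thesis's misclassification and separation statistics.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, p, n_shift, shift = 100, 244, 20, 1.0              # sample size, variables, shifted variables, shift size

reference = rng.normal(size=(n, p))
test = rng.normal(size=(n, p))
idx = np.argsort(reference.std(axis=0))[-n_shift:]    # 'maximum standard deviation' selection rule
test[:, idx] += shift                                 # mean-shift the selected variables

scores = PCA(n_components=2).fit_transform(np.vstack([reference, test]))
labels = np.repeat([0, 1], n)
cut = 0.5 * (scores[labels == 0, 0].mean() + scores[labels == 1, 0].mean())
mis = np.mean((scores[:, 0] > cut).astype(int) != labels)
print(min(mis, 1.0 - mis))                            # crude misclassification rate on PC1
```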
APA, Harvard, Vancouver, ISO, and other styles
17

Zhu, Liang. "Semiparametric analysis of multivariate longitudinal data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/6044.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2008.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on August 3, 2009). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
18

Seasholtz, Mary Beth. "Parsimonious construction of multivariate calibration models in chemometrics /." Thesis, Connect to this title online; UW restricted, 1992. http://hdl.handle.net/1773/8705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Bush, Helen Meyers. "Nonparametric multivariate quality control." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/25571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Fang, Hong-bin. "Some non-classical multivariate distributions." HKBU Institutional Repository, 1998. https://repository.hkbu.edu.hk/etd_ra/259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Goff, Matthew. "Multivariate discrete phase-type distributions." Online access for everyone, 2005. http://www.dissertations.wsu.edu/Dissertations/Spring2005/m%5Fgoff%5F032805.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

何志興 and Chi-hing Ho. "The statistical analysis of multivariate counts." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31232218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Ho, Chi-hing. "The statistical analysis of multivariate counts /." [Hong Kong] : University of Hong Kong, 1991. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12922602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Shani, Najah Turki. "Multivariate analysis and survival analysis with application to company failure." Thesis, Bangor University, 1991. https://research.bangor.ac.uk/portal/en/theses/multivariate-analysis-and-survival-analysis-with-application-to-company-failure(a031bf91-13bc-4367-b4fc-e240ab54a73b).html.

Full text
Abstract:
This thesis offers an explanation of the statistical modelling of corporate financial indicators in the context where the life of a company is terminated. Whilst it is natural for companies to fail or close down, an excess of failure causes a reduction in the activity of the economy as a whole. Therefore, studies on business failure identification leading to models which may provide early warnings of impending financial crisis may make some contribution to improving economic welfare. This study considers a number of bankruptcy prediction models such as multiple discriminant analysis and logit, and then introduces survival analysis as a means of modelling corporate failure. Then, with a data set of UK companies which failed, or were taken over, or were still operating when the information was collected, we provide estimates of failure probabilities as a function of survival time, and we specify the significance of financial characteristics which are covariates of survival. Three innovative statistical methods are introduced. First, a likelihood solution is provided to the problem of takeovers and mergers in order to incorporate such events into the dichotomous outcome of failure and survival. Second, we move away from the more conventional matched pairs sampling framework to one that reflects the prior probabilities of failure and construct a sample of observations which are randomly censored, using stratified sampling to reflect the structure of the group of failed companies. The third innovation concerns the specification of survival models, which relate the hazard function to the length of survival time and to a set of financial ratios as predictors. These models also provide estimates of the rate of failure and of the parameters of the survival function. The overall adequacy of these models has been assessed using residual analysis and it has been found that the Weibull regression model fitted the data better than other parametric models. The proportional hazard model also fitted the data adequately and appears to provide a promising approach to the prediction of financial distress. Finally, the empirical analysis reported in this thesis suggests that survival models have lower classification error than discriminant and logit models.
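The modelling strategy in this abstract, relating a hazard to survival time and a set of financial ratios through proportional-hazards and Weibull regressions, can be illustrated with the lifelines package (assumed installed). The data below are fabricated company records, the covariates and censoring rule are invented, and the thesis's likelihood treatment of takeovers and its stratified sampling scheme are not reproduced.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

# fabricated company-failure data: survival time, failure indicator, two financial ratios
rng = np.random.default_rng(0)
n = 300
liquidity = rng.normal(1.5, 0.5, n)
leverage = rng.normal(0.4, 0.15, n)
hazard = np.exp(-1.0 * liquidity + 2.0 * leverage)          # higher leverage -> earlier failure
time = rng.exponential(1.0 / hazard)
failed = (time < 10).astype(int)                            # firms surviving past year 10 are censored
df = pd.DataFrame({'time': np.minimum(time, 10.0), 'failed': failed,
                   'liquidity': liquidity, 'leverage': leverage})

# semi-parametric proportional-hazards fit and a parametric Weibull regression fit
CoxPHFitter().fit(df, duration_col='time', event_col='failed').print_summary()
WeibullAFTFitter().fit(df, duration_col='time', event_col='failed').print_summary()
```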
APA, Harvard, Vancouver, ISO, and other styles
25

Guan, Puhua. "Factorization of multivariate polynomials /." The Ohio State University, 1985. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487263399023278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Zhou, Huajun. "Multivariate compound point processes with drifts." Online access for everyone, 2006. http://www.dissertations.wsu.edu/Dissertations/Summer2006/h%5Fzhou%5F051606.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Chiang, Joyce Hsien-yin. "Multivariate analysis of surface electromyography signals." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31587.

Full text
Abstract:
As the primary method of measuring muscle activation, surface electromyography (sEMG) is of great importance in the study of motor deficits seen in patients with brain injuries and neuromuscular disorders. While clinicians have long intuitively understood that deficits in motor control are related to inappropriate recruitment of muscle synergies across several muscles, sEMG recordings are still typically examined in a univariate fashion. However, most traditional univariate techniques are unable to quantitatively capture the complex interactions between muscles during natural movements. To address this issue, multivariate signal processing techniques are employed in this thesis to study muscle co-activation patterns in patient populations. A method for classification of multivariate sEMG recordings between stroke and healthy subjects is proposed. The proposed classification scheme utilizes the eigenspectra of time-varying covariance patterns between sEMG channels as feature vectors and support vector machines (SVM) as classifiers. Despite the minimal differences in the RMS profiles of individual muscles, the proposed scheme is able to effectively differentiate between healthy and stroke subjects. Moreover, the classification rate is shown to be monotonically related to the severity of motor impairment. This simple, biologically-inspired approach is able to quantitatively capture the subtle differences in muscle recruitment patterns between two populations and appears to be a promising means to measure motor performance. The other approach to modeling multivariate sEMG utilizes the HMM-mAR framework, which combines hidden Markov models (HMMs) and multivariate autoregressive (mAR) models. Different forms of sEMG data are analyzed, including raw sEMG, amplitude sEMG and carrier sEMG. The classification between healthy and stroke subjects is performed using structural features derived from estimated model parameters. Both the raw and carrier data produce excellent classification performance. The proposed method represents a fundamental departure from most existing classification methods where only amplitude sEMG is analyzed or mAR coefficients are directly used as feature vectors. In contrast, our analysis shows that the structural features of the carrier sEMG can enhance the classification performance and provide additional insights into motor control.
Faculty of Applied Science; Department of Electrical and Computer Engineering; Graduate
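One of the two feature constructions in the abstract, eigenspectra of windowed between-channel covariance matrices fed to an SVM, is compact enough to sketch. The code below does exactly that on random stand-in recordings; the window length, channel count and labels are invented, and the HMM-mAR part of the thesis is not shown.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def eigenspectrum_features(emg, win=200):
    """Eigenvalues of the between-channel covariance matrix in consecutive
    windows, concatenated into one feature vector per trial."""
    feats = []
    for start in range(0, emg.shape[0] - win + 1, win):
        C = np.cov(emg[start:start + win], rowvar=False)
        feats.append(np.sort(np.linalg.eigvalsh(C))[::-1])
    return np.concatenate(feats)

# invented recordings: 40 trials x 1000 samples x 8 muscles, with binary group labels
rng = np.random.default_rng(0)
trials = rng.normal(size=(40, 1000, 8))
labels = rng.integers(0, 2, size=40)
X = np.array([eigenspectrum_features(t) for t in trials])
print(cross_val_score(SVC(kernel='rbf'), X, labels, cv=5))   # chance-level accuracy on random data
```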
APA, Harvard, Vancouver, ISO, and other styles
28

Droop, Alastair Philip. "Correlation Analysis of Multivariate Biological Data." Thesis, University of York, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Medri, Sander. "Multivariate Conditional Distribution Estimation and Analysis." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-234657.

Full text
Abstract:
The goals of this thesis were to implement different methods for estimating conditional distributions from data and to evaluate the performance of these methods on data sets with different characteristics. The methods were implemented in C++ and several existing software libraries were also used. Tests were run on artificially generated data sets and on some real-world data sets. The accuracy, run time and memory usage of the methods were measured. Based on the results, the natural or smoothing spline methods or the k-nearest neighbors method would potentially be a good first choice to apply to a data set if not much is known about it. In general, the wavelet method did not seem to perform particularly well. The noisy-OR method could be a faster and possibly more accurate alternative to the popular logistic regression in certain cases.
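As a concrete reference point for one of the methods compared (k-nearest neighbours as an estimator of a conditional distribution), the snippet below estimates P(y | x) at a query point with scikit-learn; the data-generating process and neighbour count are invented, and the spline, wavelet and noisy-OR estimators from the thesis are not reproduced.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# invented data: estimate the conditional class distribution P(y | x) with k nearest neighbours
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 2))
y = (X[:, 0] + rng.normal(0.0, 1.0, 500) > 0.0).astype(int)
knn = KNeighborsClassifier(n_neighbors=25).fit(X, y)
print(knn.predict_proba([[1.0, 0.0]]))    # estimated conditional distribution at one query point
```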
APA, Harvard, Vancouver, ISO, and other styles
30

Collins, Gary Stephen. "Multivariate analysis of flow cytometry data." Thesis, University of Exeter, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hopkins, Julie Anne. "Sampling designs for exploratory multivariate analysis." Thesis, University of Sheffield, 2000. http://etheses.whiterose.ac.uk/14798/.

Full text
Abstract:
This thesis is concerned with problems of variable selection, influence of sample size and related issues in the applications of various techniques of exploratory multivariate analysis (in particular, correspondence analysis, biplots and canonical correspondence analysis) to archaeology and ecology. Data sets (both published and new) are used to illustrate these methods and to highlight the problems that arise - these practical examples are returned to throughout as the various issues are discussed. Much of the motivation for the development of the methodology has been driven by the needs of the archaeologists providing the data, who were consulted extensively during the study. The first (introductory) chapter includes a detailed description of the data sets examined and the archaeological background to their collection. Chapters Two, Three and Four explain in detail the mathematical theory behind the three techniques. Their uses are illustrated on the various examples of interest, raising data-driven questions which become the focus of the later chapters. The main objectives are to investigate the influence of various design quantities on the inferences made from such multivariate techniques. Quantities such as the sample size (e.g. number of artefacts collected), the number of categories of classification (e.g. of sites, wares, contexts) and the number of variables measured compete for fixed resources in archaeological and ecological applications. Methods of variable selection and the assessment of the stability of the results are further issues of interest and are investigated using bootstrapping and Procrustes analysis. Jack-knife methods are used to detect influential sites, wares, contexts, species and artefacts. Some existing methods of investigating issues such as those raised above are applied and extended to correspondence analysis in Chapters Five and Six. Adaptations of them are proposed for biplots in Chapters Seven and Eight and for canonical correspondence analysis in Chapter Nine. Chapter Ten concludes the thesis.
APA, Harvard, Vancouver, ISO, and other styles
32

Hallam, Robert Kenneth. "Dual optical detection and multivariate analysis." Thesis, Loughborough University, 2003. https://dspace.lboro.ac.uk/2134/33747.

Full text
Abstract:
The application of flow injection analysis to the simultaneous determination of two or more components has been challenging for many years. Various detectors, such as ultraviolet/visible absorption, fluorescence, and electrochemical detectors, have been used individually or in combination with each other. Combining two optical detectors such as fluorescence and ultraviolet/visible absorbance, however, has always been challenging due to their incompatibilities. However, the recent developments in fibre optics, solid-state light sources and miniaturised charge-coupled devices (CCDs) allow novel designs, and most of the incompatibilities can be circumvented. A flow injection manifold can now be adapted so that only one flow cell is used along with a diode array CCD detector that can detect both fluorescence and absorbance simultaneously. The initial development and testing of such a dual detection system is described in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
33

Nicolini, Olivier. "LIBS Multivariate Analysis with Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286595.

Full text
Abstract:
Laser-Induced Breakdown Spectroscopy (LIBS) is a spectroscopic technique used for chemical analysis of materials. By analyzing the spectrum obtained with this technique it is possible to understand the chemical composition of a sample. The possibility of analyzing materials in a contactless and online fashion, without sample preparation, makes LIBS one of the most interesting techniques for chemical composition analysis. However, despite its intrinsic advantages, LIBS analysis suffers from poor accuracy and limited reproducibility of the results due to interference effects caused by the chemical composition of the sample or other experimental factors. How to improve the accuracy of the analysis by extracting useful information from LIBS high-dimensional data remains the main challenge of this technique. In the present work, with the purpose of proposing a robust analysis method, I present a pipeline for multivariate regression on LIBS data composed of preprocessing, feature selection, and regression. First, raw data are preprocessed by applying intensity filtering, normalization and baseline correction to mitigate the effect of interference factors such as laser energy fluctuations or the presence of a baseline in the spectrum. Feature selection allows finding the most informative lines for an element, which are then used as input in the subsequent regression phase to predict the element concentration. Partial Least Squares (PLS) and Elastic Net showed the best predictive ability among the regression methods investigated, while Interval PLS (iPLS) and Iterative Predictor Weighting PLS (IPW-PLS) proved to be the best feature selection algorithms for this type of data. By applying these feature selection algorithms to the full LIBS spectrum before regression with PLS or Elastic Net it is possible to get accurate predictions in a robust fashion.
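The regression stage of the pipeline described above can be sketched with scikit-learn's PLS implementation. The example below runs cross-validated PLS on randomly generated stand-in spectra after a simple total-intensity normalization; the spectra, the relationship to concentration and the number of latent variables are all invented, and the iPLS/IPW-PLS feature selection and baseline correction steps are not shown.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# invented stand-in for LIBS data: rows = laser shots, columns = wavelength channels
rng = np.random.default_rng(0)
X = normalize(np.abs(rng.normal(size=(80, 2048))), norm='l1')   # total-intensity normalisation
y = 3.0 * X[:, 500] + rng.normal(0.0, 1e-4, 80)                 # pretend one emission line drives concentration

pls = PLSRegression(n_components=5)                 # regression on a few latent variables
print(cross_val_score(pls, X, y, cv=5))             # cross-validated R^2 for the concentration prediction
```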
APA, Harvard, Vancouver, ISO, and other styles
34

Haydock, Richard. "Multivariate analysis of Raman spectroscopy data." Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/30697/.

Full text
Abstract:
This thesis is concerned with developing techniques for analysing Raman spectroscopic images. A Raman spectroscopic image differs from a standard image in that, in place of red, green and blue quantities, a Raman image contains a spectrum of light intensities at each pixel. These spectra are used to identify the chemical components of which the image subject, for example a tablet, is composed. The study of these types of images is known as chemometrics, with the majority of chemometric methods based on multivariate statistical and image analysis techniques. The work in this thesis has two main foci. The first of these is on the spectral decomposition of a Raman image, the purpose of which is to identify the component chemicals and their concentrations. The standard method for this is to fit a bilinear model to the image, where both parts of the model, representing components and concentrations, must be estimated. As the standard bilinear model is non-identifiable in its solutions, we investigate the range of possible solutions in the solution space with a random walk. We also derive an improved model for spectral decomposition, combining cluster analysis techniques and the standard bilinear model. For this purpose we apply the expectation maximisation algorithm to a Gaussian mixture model with bilinear means, to represent our spectra and concentrations. This reduces noise in the estimated chemical components by separating the Raman image subject from the background. The second focus of this thesis is on the analysis of our spectral decomposition results. For testing the chemical components for uniform mixing, we derive test statistics for identifying patterns in the image based on Minkowski measures, grey-level co-occurrence matrices and neighbouring pixel correlations. However, with a non-identifiable model, any hypothesis test performed on a solution will be specific to that solution only. Therefore, to obtain conclusions for a range of solutions, we combined our test statistics with our random walk. We also investigate the analysis of a time series of Raman images as the subject dissolved. Using models composed of Gaussian cumulative distribution functions, we are able to estimate the changes in concentration levels of dissolving tablets between the scan times. The results allowed us to describe the dissolution process in terms of the quantities of component chemicals.
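The bilinear decomposition mentioned above factorises the unfolded image into concentrations times component spectra, Y ~= C S, and is non-identifiable without extra constraints. The sketch below uses scikit-learn's non-negative matrix factorisation as one standard way to fit such a bilinear model to fabricated data; it is not the thesis's Gaussian-mixture-with-bilinear-means model or its random-walk exploration of the solution space.

```python
import numpy as np
from sklearn.decomposition import NMF

# fabricated Raman image: 900 pixels (30 x 30), 500 wavenumber channels, 3 components
rng = np.random.default_rng(0)
true_spectra = np.abs(rng.normal(size=(3, 500)))         # component spectra
true_conc = rng.dirichlet(np.ones(3), size=900)          # per-pixel concentrations
Y = true_conc @ true_spectra + 0.01 * np.abs(rng.normal(size=(900, 500)))

# bilinear model Y ~= C @ S: non-negative matrix factorisation is one standard way to fit it
model = NMF(n_components=3, init='nndsvda', max_iter=500)
C_hat = model.fit_transform(Y)     # estimated concentration maps (900 pixels x 3 components)
S_hat = model.components_          # estimated component spectra (3 components x 500 channels)
print(C_hat.shape, S_hat.shape)
```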
APA, Harvard, Vancouver, ISO, and other styles
35

Zappi, Alessandro <1990&gt. "Chemometrics applied to direct multivariate analysis." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/8898/1/Zappi_Alessandro_Thesis.pdf.

Full text
Abstract:
The present Ph.D. thesis is focused on applications and developments of chemometrics. After a short introduction about chemometrics (Chapter 1), the present work is divided into three chapters, reflecting the research activities addressed during the three-year PhD work:
• Chapter 2 concerns the application of classification tools to food traceability (Chapter 2.1), plant metabolomics (Chapter 2.2), and food-fraud detection (Chapter 2.3) problems.
• Chapter 3 concerns the application of design of experiments to bio-remediation research (Chapter 3.1) and to machine optimization (Chapter 3.2).
• Chapter 4 concerns the development of the net analyte signal (NAS) procedure and its application to several analytical problems. The main aim of this research is to face the matrix-effect problem using a multivariate approach.
Chemometrics is the science that extracts useful information from chemical data. The development of instruments and computers is leading to ever more sophisticated analytical methodologies, and the consequence is that huge amounts of data are collected. In parallel with this rapid evolution, it is therefore important to develop chemometric methods able to handle and process the data. Moreover, attention is also focusing on analytical techniques that do not destroy the analyzed samples. Chemometrics and its application to non-destructive analytical methods are the main topics of this research project. Several analytical techniques have been used during this project: gas chromatography (GC), bioluminescence, atomic absorption spectroscopy (AAS), liquid chromatography (HPLC), near-infrared spectroscopy, UV-Vis spectroscopy, Raman spectroscopy, X-ray powder diffraction (XRPD), and attenuated total reflectance (ATR) spectroscopy. Moreover, this research activity was carried out in collaboration with several external research groups and companies.
APA, Harvard, Vancouver, ISO, and other styles
36

Chiari, Diana Elisa. "Network identification via multivariate correlation analysis." Doctoral thesis, Università degli studi di Trento, 2019. https://hdl.handle.net/11572/368752.

Full text
Abstract:
In this thesis an innovative approach to assess connectivity in a complex network was proposed. In network connectivity studies, a major problem is to estimate the links between the elements of a system in a robust and reliable way. To address this issue, a statistical method based on Pearson's correlation coefficient was proposed. The former inherits the versatility of the latter, reflected in a general applicability to any kind of system and in the capability to evaluate cross-correlation of time series pairs both simultaneously and at different time lags. In addition, our method has an increased "investigation power", allowing correlation to be estimated at different time-scale resolutions. The method was tested on two very different kinds of systems: the brain and a set of meteorological stations in the Trentino region. In both cases, the purpose was to reconstruct the existence of significant links between the elements of the two systems at different temporal resolutions. In the first case, the signals used to reconstruct the networks are magnetoencephalographic (MEG) recordings acquired from human subjects in the resting state. Zero-delay cross-correlations were estimated on a set of MEG time series corresponding to the regions belonging to the default mode network (DMN) to identify the structure of the fully connected brain networks at different time-scale resolutions. Great attention was devoted to testing the correlation significance, estimated by means of surrogates of the original signal. The network structure is defined by means of the selection of four parameter values: the level of significance α, the efficiency η0, and two ranking parameters, R1 and R2, used to merge the results obtained from the whole dataset into a single average behavior. In the case of MEG signals, the functional fully connected networks estimated at different time-scale resolutions were compared to identify the best observation window at which the network dynamics can be highlighted. The resulting best time scale of observation was ∼30 s, in line with the results present in the scientific literature. The same method was also applied to meteorological time series to possibly assess wind circulation networks in the Trentino region. Although this study is preliminary, the first results identify an interesting clusterization of the meteorological stations used in the analysis.
APA, Harvard, Vancouver, ISO, and other styles
37

Chiari, Diana Elisa. "Network identification via multivariate correlation analysis." Doctoral thesis, University of Trento, 2019. http://eprints-phd.biblio.unitn.it/3773/1/PhDThesis_tosend.pdf.

Full text
Abstract:
In this thesis an innovative approach to assess connectivity in a complex network was proposed. In network connectivity studies, a major problem is to estimate the links between the elements of a system in a robust and reliable way. To address this issue, a statistical method based on Pearson's correlation coefficient was proposed. The former inherits the versatility of the latter, reflected in a general applicability to any kind of system and in the capability to evaluate cross-correlation of time series pairs both simultaneously and at different time lags. In addition, our method has an increased "investigation power", allowing correlation to be estimated at different time-scale resolutions. The method was tested on two very different kinds of systems: the brain and a set of meteorological stations in the Trentino region. In both cases, the purpose was to reconstruct the existence of significant links between the elements of the two systems at different temporal resolutions. In the first case, the signals used to reconstruct the networks are magnetoencephalographic (MEG) recordings acquired from human subjects in the resting state. Zero-delay cross-correlations were estimated on a set of MEG time series corresponding to the regions belonging to the default mode network (DMN) to identify the structure of the fully connected brain networks at different time-scale resolutions. Great attention was devoted to testing the correlation significance, estimated by means of surrogates of the original signal. The network structure is defined by means of the selection of four parameter values: the level of significance α, the efficiency η0, and two ranking parameters, R1 and R2, used to merge the results obtained from the whole dataset into a single average behavior. In the case of MEG signals, the functional fully connected networks estimated at different time-scale resolutions were compared to identify the best observation window at which the network dynamics can be highlighted. The resulting best time scale of observation was ∼30 s, in line with the results present in the scientific literature. The same method was also applied to meteorological time series to possibly assess wind circulation networks in the Trentino region. Although this study is preliminary, the first results identify an interesting clusterization of the meteorological stations used in the analysis.
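The core statistical step in this abstract, deciding whether a Pearson correlation between two series is significant by comparison with surrogates of the original signal, can be sketched as follows. The surrogate scheme shown (phase randomisation) and the toy series are assumptions for illustration; the thesis's own significance, efficiency and ranking parameters (α, η0, R1, R2) are not implemented.

```python
import numpy as np

def corr_significant(x, y, n_surr=200, alpha=0.05, rng=None):
    """Pearson correlation of two series, tested against phase-randomised
    surrogates of x (one simple surrogate scheme, for illustration only)."""
    rng = np.random.default_rng() if rng is None else rng
    r_obs = np.corrcoef(x, y)[0, 1]
    spec = np.fft.rfft(x)
    r_surr = np.empty(n_surr)
    for k in range(n_surr):
        phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
        phases[0] = 0.0                                        # keep the mean untouched
        surr = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))
        r_surr[k] = np.corrcoef(surr, y)[0, 1]
    p = np.mean(np.abs(r_surr) >= np.abs(r_obs))
    return r_obs, p, p < alpha

# two toy series sharing a common driver: a strong, significant link is expected
rng = np.random.default_rng(1)
common = rng.normal(size=1000)
a = common + 0.5 * rng.normal(size=1000)
b = common + 0.5 * rng.normal(size=1000)
print(corr_significant(a, b, rng=rng))
```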
APA, Harvard, Vancouver, ISO, and other styles
38

Schubert, Daniel Dice. "A multivariate adaptive trimmed likelihood algorithm /." Access via Murdoch University Digital Theses Project, 2005. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20061019.132720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wong, Hoi-lam. "On two tests for multivariate normality." HKBU Institutional Repository, 1993. https://repository.hkbu.edu.hk/etd_ra/12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

鄧明基 and Ming-kei Tang. "Assessment of influence in multivariate regression." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31219949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Tang, Ming-kei. "Assessment of influence in multivariate regression /." Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B19853658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Burnham, Alison J. "Multivariate latent variable regression : modelling and estimation /." *McMaster only, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
43

Noble, Robert Bruce. "Multivariate Applications of Bayesian Model Averaging." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/30180.

Full text
Abstract:
The standard methodology when building statistical models has been to use one of several algorithms to systematically search the model space for a good model. If the number of variables is small then all possible models or best subset procedures may be used, but for data sets with a large number of variables, a stepwise procedure is usually implemented. The stepwise procedure of model selection was designed for its computational efficiency and is not guaranteed to find the best model with respect to any optimality criteria. While the model selected may not be the best possible of those in the model space, commonly it is almost as good as the best model. Many times several models will exist that may be competitors of the best model in terms of the selection criterion, but classical model building dictates that a single model be chosen to the exclusion of all others. An alternative to this is Bayesian model averaging (BMA), which uses the information from all models based on how well each is supported by the data. Using BMA allows a variance component due to the uncertainty of the model selection process to be estimated. The variance of any statistic of interest is conditional on the model selected, so if there is model uncertainty then variance estimates should reflect this. BMA methodology can also be used for variable assessment since the probability that a given variable is active is readily obtained from the individual model posterior probabilities. The multivariate methods considered in this research are principal components analysis (PCA), canonical variate analysis (CVA), and canonical correlation analysis (CCA). Each method is viewed as a particular multivariate extension of univariate multiple regression. The marginal likelihood of a univariate multiple regression model has been approximated using the Bayes information criterion (BIC), hence the marginal likelihood for these multivariate extensions also makes use of this approximation. One of the main criticisms of multivariate techniques in general is that they are difficult to interpret. To aid interpretation, BMA methodology is used to assess the contribution of each variable to the methods investigated. A second issue that is addressed is displaying the results of an analysis graphically. The goal here is to effectively convey the germane elements of an analysis when BMA is used in order to obtain a clearer picture of what conclusions should be drawn. Finally, the model uncertainty variance component can be estimated using BMA. The variance due to model uncertainty is ignored when the standard model building tenets are used, giving overly optimistic variance estimates. Even though the model attained via standard techniques may be adequate, in general, it would be difficult to argue that the chosen model is in fact the correct model. It seems more appropriate to incorporate the information from all plausible models that are well supported by the data to make decisions and to use variance estimates that account for the uncertainty in the model estimation as well as model selection.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Yau-wing. "Modelling multivariate survival data using semiparametric models." Click to view the E-thesis via HKUTO, 2000. http://sunzi.lib.hku.hk/hkuto/record/B4257528X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Prosser, Robert James. "Robustness of multivariate mixed model ANOVA." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25511.

Full text
Abstract:
In experimental or quasi-experimental studies in which a repeated measures design is used, it is common to obtain scores on several dependent variables on each measurement occasion. Multivariate mixed model (MMM) analysis of variance (Thomas, 1983) is a recently developed alternative to the MANOVA procedure (Bock, 1975; Timm, 1980) for testing multivariate hypotheses concerning effects of a repeated factor (called occasions in this study) and interaction between repeated and non-repeated factors (termed group-by-occasion interaction here). If a condition derived by Thomas (1983), multivariate multi-sample sphericity (MMS), regarding the equality and structure of orthonormalized population covariance matrices is satisfied (given multivariate normality and independence for distributions of subjects' scores), valid likelihood-ratio MMM tests of group-by-occasion interaction and occasions hypotheses are possible. To date, no information has been available concerning actual (empirical) levels of significance of such tests when the MMS condition is violated. This study was conducted to begin to provide such information.
Departure from the MMS condition can be classified into three types, termed departures of types A, B, and C respectively: (A) the covariance matrix for population ℊ (ℊ = 1, ..., G), when orthonormalized, has an equal-diagonal-block form but the resulting matrix for population ℊ is unequal to the resulting matrix for population ℊ' (ℊ ≠ ℊ'); (B) the G populations' orthonormalized covariance matrices are equal, but the matrix common to the populations does not have equal-diagonal-block structure; or (C) one or more populations has an orthonormalized covariance matrix which does not have equal-diagonal-block structure and two or more populations have unequal orthonormalized matrices.
In this study, Monte Carlo procedures were used to examine the effect of each type of violation in turn on the Type I error rates of multivariate mixed model tests of group-by-occasion interaction and occasions null hypotheses. For each form of violation, experiments modelling several levels of severity were simulated. In these experiments: (a) the number of measured variables was two; (b) the number of measurement occasions was three; (c) the number of populations sampled was two or three; (d) the ratio of average sample size to number of measured variables was six or 12; and (e) the sample size ratios were 1:1 and 1:2 when G was two, and 1:1:1 and 1:1:2 when G was three. In experiments modelling violations of types A and C, the effects of negative and positive sampling were studied.
When type A violations were modelled and samples were equal in size, actual Type I error rates did not differ significantly from nominal levels for tests of either hypothesis except under the most severe level of violation. In type A experiments using unequal groups in which the largest sample was drawn from the population whose orthonormalized covariance matrix has the smallest determinant (negative sampling), actual Type I error rates were significantly higher than nominal rates for tests of both hypotheses and for all levels of violation. In contrast, empirical levels of significance were significantly lower than nominal rates in type A experiments in which the largest sample was drawn from the population whose orthonormalized covariance matrix had the largest determinant (positive sampling). Tests of both hypotheses tended to be liberal in experiments which modelled type B violations. No strong relationships were observed between actual Type I error rates and any of: severity of violation, number of groups, ratio of average sample size to number of variables, and relative sizes of samples.
In equal-groups experiments modelling type C violations in which the orthonormalized pooled covariance matrix departed at the more severe level from equal-diagonal-block form, actual Type I error rates for tests of both hypotheses tended to be liberal. Findings were more complex under the less severe level of structural departure. Empirical significance levels did not vary with the degree of interpopulation heterogeneity of orthonormalized covariance matrices. In type C experiments modelling negative sampling, tests of both hypotheses tended to be liberal. Degree of structural departure did not appear to influence actual Type I error rates but degree of interpopulation heterogeneity did. Actual Type I error rates in type C experiments modelling positive sampling were apparently related to the number of groups. When two populations were sampled, both tests tended to be conservative, while for three groups, the results were more complex.
In general, under all types of violation the ratio of average group size to number of variables did not greatly affect actual Type I error rates. The report concludes with suggestions for practitioners considering use of the MMM procedure based upon the findings and recommends four avenues for future research on Type I error robustness of MMM analysis of variance. The matrix pool and computer programs used in the simulations are included in appendices.
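To make the simulation design concrete, here is a minimal Python sketch of the Monte Carlo logic described above: data are generated under a true null hypothesis while the covariance-equality assumption is violated, and the proportion of rejections at the nominal 5% level is recorded. A two-sample Hotelling's T² test stands in for the MMM likelihood-ratio tests actually studied in the thesis, and the sample sizes and covariance matrices are invented for illustration.

```python
# Monte Carlo estimate of an empirical Type I error rate under a
# violated covariance-equality assumption (a stand-in for the MMS
# condition studied in the thesis, not Thomas's MMM tests themselves).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def hotelling_t2_pvalue(x1, x2):
    n1, n2, p = len(x1), len(x2), x1.shape[1]
    d = x1.mean(axis=0) - x2.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x1, rowvar=False)
                + (n2 - 1) * np.cov(x2, rowvar=False)) / (n1 + n2 - 2)
    t2 = n1 * n2 / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
    f = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2
    return stats.f.sf(f, p, n1 + n2 - p - 1)

# Both groups share the same mean (the null is true) but have unequal
# covariance matrices, and the larger sample comes from the less
# variable population (the "negative sampling" condition described above).
mu = np.zeros(2)
cov1, cov2 = np.eye(2), 3.0 * np.eye(2)
n1, n2, reps = 24, 12, 5000
rejections = 0
for _ in range(reps):
    x1 = rng.multivariate_normal(mu, cov1, size=n1)
    x2 = rng.multivariate_normal(mu, cov2, size=n2)
    rejections += hotelling_t2_pvalue(x1, x2) < 0.05
print("empirical Type I error rate:", rejections / reps)
```

An empirical rate well above 0.05 under this setup mirrors the liberal behaviour reported for negative sampling in the abstract.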
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
46

Cheung, Chung-pak, and 張松柏. "Multivariate time series analysis on airport transportation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31976499.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Tardif, Geneviève. "Multivariate Analysis of Canadian Water Quality Data." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32245.

Full text
Abstract:
Physical-chemical water quality data from lotic water monitoring sites across Canada were integrated into one dataset. Two overlapping matrices of data were analyzed with principal component analysis (PCA) and cluster analysis to uncover structure and patterns in the data. The first matrix (Matrix A) had 107 sites located throughout Canada and the following water quality parameters: pH, specific conductance (SC), and total phosphorus (TP). The second matrix (Matrix B) included more variables, calcium (Ca), chloride (Cl), total alkalinity (T_ALK), dissolved oxygen (DO), water temperature (WT), pH, SC, and TP, for a subset of 42 sites. Landscape characteristics were calculated for each water quality monitoring site, and their importance in explaining water quality data was examined through redundancy analysis. The first principal components in the analyses of Matrices A and B were most correlated with SC, suggesting this parameter is the most representative of water quality variance at the scale of Canada. Overlaying cluster analysis results on PCA information proved an excellent means of identifying the major water characteristics defining each group; mapping cluster analysis group membership provided information on the groups' spatial distribution and was found informative with regard to the probable environmental influences on each group. Redundancy analyses produced significant predictive models of water quality, demonstrating that landscape characteristics are determinant factors in water quality at the country scale. The proportion of cropland and the mean annual total precipitation in the drainage area were the landscape variables that explained the most variance. Assembling a consistent dataset of water quality data from monitoring locations throughout Canada proved difficult due to the unevenness of the monitoring programs in place. It is therefore recommended that a standard for the monitoring of a minimum core set of water quality variables be implemented throughout the country to support future nation-wide analyses of water quality data.
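As a rough illustration of the PCA-plus-clustering workflow the abstract describes, the following Python sketch standardizes a sites-by-variables matrix, projects the sites onto principal components, and overlays hierarchical cluster membership. The variable names follow Matrix A, but the values are random placeholders rather than the Canadian monitoring data.

```python
# Minimal PCA + cluster-overlay sketch on an illustrative sites-by-
# variables matrix (placeholder data, not the thesis dataset).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(2)
variables = ["pH", "SC", "TP"]                   # Matrix A-style variables
sites = rng.normal(size=(107, len(variables)))   # placeholder for real data

# Standardize so each variable contributes comparably, then project the
# sites onto their leading principal components.
z = StandardScaler().fit_transform(sites)
pca = PCA(n_components=2)
scores = pca.fit_transform(z)
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))

# Cluster the sites and attach group membership to the PCA scores, the
# overlay used to characterise and map each group.
groups = AgglomerativeClustering(n_clusters=4).fit_predict(z)
for g in np.unique(groups):
    centroid = scores[groups == g].mean(axis=0)
    print(f"group {g}: n = {np.sum(groups == g)}, PC centroid = {np.round(centroid, 2)}")
```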
APA, Harvard, Vancouver, ISO, and other styles
48

Zhou, Feifei, and 周飞飞. "Cure models for univariate and multivariate survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45700977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Spindler, Susan L. "Evaluation of some multivariate CUSUM schemes /." Online version of thesis, 1987. http://hdl.handle.net/1850/10330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Mlynski, David. "On the multivariate analysis of animal networks." Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690727.

Full text
Abstract:
From the individual to the species level, it is common for animals to have connections with one another. These connections can exist in a variety of forms, from the social relationships within an animal society to hybridisation between species. The structure of these connections in animal systems can be depicted using networks, often revealing non-trivial structure which can be biologically informative. Understanding the factors that drive the structure of animal networks can help us understand the costs and benefits of forming and maintaining relationships. Multivariate modelling provides a means to evaluate the relative contributions of a set of explanatory factors to a response variable. However, conventional modelling approaches use statistical tests that are unsuitable for the dependencies inherent in network and relational data. A solution to this problem is to use specialised models developed in the social sciences, which have a long history in modelling human social networks. Taking predictive multivariate models from the social sciences and applying them to animal networks is attractive given that current analytical approaches are predominantly descriptive. However, these models were developed for human social networks, where participants can self-identify relationships. In contrast, relationships between animals have to be inferred through observations of associations or interactions, which can introduce sampling bias and uncertainty to the data. Without appropriate care, these issues could lead us to draw incorrect or overconfident conclusions about our data. In this thesis, we use an established network model, the multiple regression quadratic assignment procedure (MRQAP), and propose approaches to facilitate the application of this model in animal network studies. By demonstrating these approaches on three animal systems, we make new biological findings and highlight the importance of considering data-sampling issues when analysing networks. Additionally, our approaches have wider applications to animal network studies where relationships are inferred through observing dyadic interactions.
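For context, the sketch below shows the basic shape of an MRQAP-style analysis: a dyadic response matrix is regressed on predictor matrices, and significance is judged against node-label permutations of the response (the simple Y-permutation variant, not necessarily the exact procedure used in the thesis). The networks here are random placeholders.

```python
# MRQAP-style regression on dyadic matrices with node-label permutation
# of the response matrix (placeholder networks, not the thesis data).
import numpy as np

rng = np.random.default_rng(3)
n = 20  # number of animals / nodes

def offdiag(m):
    """Vectorise the off-diagonal entries of a square matrix."""
    mask = ~np.eye(len(m), dtype=bool)
    return m[mask]

def ols_coefs(y_mat, x_mats):
    X = np.column_stack([np.ones(offdiag(y_mat).size)]
                        + [offdiag(x) for x in x_mats])
    beta, *_ = np.linalg.lstsq(X, offdiag(y_mat), rcond=None)
    return beta[1:]  # drop the intercept

# Placeholder predictor networks (e.g. relatedness, spatial overlap) and
# a response network built partly from the first predictor.
x1 = (lambda a: (a + a.T) / 2)(rng.random((n, n)))
x2 = (lambda a: (a + a.T) / 2)(rng.random((n, n)))
y = 0.5 * x1 + rng.normal(scale=0.2, size=(n, n))
y = (y + y.T) / 2

observed = ols_coefs(y, [x1, x2])

# Permute node labels of the response to build a null distribution that
# respects the dependency structure of dyadic data.
perms = 2000
null = np.empty((perms, 2))
for i in range(perms):
    order = rng.permutation(n)
    null[i] = ols_coefs(y[np.ix_(order, order)], [x1, x2])
pvals = (np.abs(null) >= np.abs(observed)).mean(axis=0)
print("coefficients:", np.round(observed, 3), "p-values:", np.round(pvals, 3))
```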
APA, Harvard, Vancouver, ISO, and other styles
