Dissertations / Theses on the topic 'Multivariate Ratio'

To see the other types of publications on this topic, follow the link: Multivariate Ratio.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 43 dissertations / theses for your research on the topic 'Multivariate Ratio.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

North, Robert. "Applications of the dependence ratio association measure for multivariate categorical data." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/378642/.

Full text
2

Liang, Yuli. "Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-115347.

Full text
Abstract:
This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data. In particular, estimation in balanced random effects models with block circular covariance matrices is considered. The spectral properties of such patterned covariance matrices are provided. Maximum likelihood estimation is performed through the spectral decomposition of the patterned covariance matrices. Existence of the explicit maximum likelihood estimators is discussed and sufficient conditions for obtaining explicit and unique estimators for the variance-covariance components are derived. Different restricted models are discussed and the corresponding maximum likelihood estimators are presented. This thesis also deals with hypothesis testing of block covariance structures, especially block circular Toeplitz covariance matrices. We consider both so-called external tests and internal tests. In the external tests, various hypotheses about testing block covariance structures, as well as mean structures, are considered, and the internal tests are concerned with testing specific covariance parameters given the block circular Toeplitz structure. Likelihood ratio tests are constructed, and the null distributions of the corresponding test statistics are derived.
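The spectral decomposition that makes explicit estimation tractable here can be illustrated numerically. The sketch below (Python with NumPy/SciPy; synthetic block values, not the thesis' estimators) builds a small block circular Toeplitz covariance, i.e., a block-circulant matrix with circulant blocks, and checks that its eigenvalues fall out of a two-dimensional DFT of the first block columns — the classical spectral property such patterned matrices admit.

```python
import numpy as np
from scipy.linalg import circulant

# Two-level data: n2 "outer" units, each with n1 "inner" units.
n1, n2 = 4, 3

def sym_circ(first_col):
    """Symmetric circulant matrix from a palindromic first column."""
    return circulant(np.asarray(first_col, dtype=float))

B0 = sym_circ([3.0, 1.0, 0.5, 1.0])   # within-unit block
B1 = sym_circ([1.0, 0.4, 0.2, 0.4])   # between-unit block
blocks = [B0, B1, B1]                 # B_k = B_{n2-k} keeps Sigma symmetric

Sigma = np.block([[blocks[(j - i) % n2] for j in range(n2)]
                  for i in range(n2)])
assert np.allclose(Sigma, Sigma.T)

# Spectrum via a 2-D FFT of the array of first columns of the blocks.
C = np.stack([blocks[k][:, 0] for k in range(n2)])   # shape (n2, n1)
fft_eigs = np.sort(np.fft.fft2(C).real.ravel())
dense_eigs = np.sort(np.linalg.eigvalsh(Sigma))
assert np.allclose(fft_eigs, dense_eigs)             # same spectrum
```

Because the spectrum is available in closed form, likelihood computations reduce to work on the small array of eigenvalues rather than on the full patterned matrix.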
3

Karawatzki, Roman, Josef Leydold, and Klaus Pötzelberger. "Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1400/1/document.pdf.

Full text
Abstract:
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited for an algorithm to generate points from quite arbitrary distributions, which include all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. An implementation of these algorithms in C is available from the authors. (author's abstract)
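For readers unfamiliar with the Ratio-of-Uniforms ingredient, the following minimal Python sketch implements the classical one-dimensional version for a log-concave target (a standard normal), using plain rejection instead of the authors' Hit-and-Run walk over the acceptance region; the bounding rectangle is specific to this density and is not part of the paper's automatic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Unnormalised log-concave target: standard normal density kernel."""
    return np.exp(-0.5 * x * x)

# Ratio-of-uniforms: if (U, V) is uniform on
#   A = {(u, v) : 0 < u <= sqrt(f(v / u))},
# then X = V / U has density proportional to f.  For this f, the region A
# fits inside the rectangle (0, 1] x [-sqrt(2/e), sqrt(2/e)].
u_max, v_max = 1.0, np.sqrt(2.0 / np.e)

def rou_sample(n):
    out = []
    while len(out) < n:
        u = rng.uniform(0.0, u_max)
        v = rng.uniform(-v_max, v_max)
        if u > 0.0 and u * u <= f(v / u):    # accept iff (u, v) lies in A
            out.append(v / u)
    return np.array(out)

x = rou_sample(10_000)
print(x.mean(), x.std())                     # approximately 0 and 1
```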
Series: Research Report Series / Department of Statistics and Mathematics
4

Sheppard, Therese. "Extending covariance structure analysis for multivariate and functional data." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/extending-covariance-structure-analysis-for-multivariate-and-functional-data(e2ad7f12-3783-48cf-b83c-0ca26ef77633).html.

Full text
Abstract:
For multivariate data, when testing homogeneity of covariance matrices arising from two or more groups, Bartlett's (1937) modified likelihood ratio test statistic is appropriate to use under the null hypothesis of equal covariance matrices where the null distribution of the test statistic is based on the restrictive assumption of normality. Zhang and Boos (1992) provide a pooled bootstrap approach when the data cannot be assumed to be normally distributed. We give three alternative bootstrap techniques to testing homogeneity of covariance matrices when it is both inappropriate to pool the data into one single population as in the pooled bootstrap procedure and when the data are not normally distributed. We further show that our alternative bootstrap methodology can be extended to testing Flury's (1988) hierarchy of covariance structure models. Where deviations from normality exist, we show, by simulation, that the normal theory log-likelihood ratio test statistic is less viable compared with our bootstrap methodology. For functional data, Ramsay and Silverman (2005) and Lee et al (2002) together provide four computational techniques for functional principal component analysis (PCA) followed by covariance structure estimation. When the smoothing method for smoothing individual profiles is based on using least squares cubic B-splines or regression splines, we find that the ensuing covariance matrix estimate suffers from loss of dimensionality. We show that ridge regression can be used to resolve this problem, but only for the discretisation and numerical quadrature approaches to estimation, and that choice of a suitable ridge parameter is not arbitrary. We further show the unsuitability of regression splines when deciding on the optimal degree of smoothing to apply to individual profiles. To gain insight into smoothing parameter choice for functional data, we compare kernel and spline approaches to smoothing individual profiles in a nonparametric regression context. Our simulation results justify a kernel approach using a new criterion based on predicted squared error. We also show by simulation that, when taking account of correlation, a kernel approach using a generalized cross validatory type criterion performs well. These data-based methods for selecting the smoothing parameter are illustrated prior to a functional PCA on a real data set.
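As a point of reference for the bootstrap discussion, here is a hedged Python sketch of the pooled bootstrap idea attributed above to Zhang and Boos (1992), built around a Box/Bartlett-type M statistic; the thesis' three alternative (non-pooled) bootstrap techniques are not reproduced here, and the toy data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_m(groups):
    """Bartlett/Box M statistic for equality of covariance matrices."""
    k = len(groups)
    ns = np.array([g.shape[0] for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    Sp = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - k)
    return (ns.sum() - k) * np.linalg.slogdet(Sp)[1] - sum(
        (n - 1) * np.linalg.slogdet(S)[1] for n, S in zip(ns, covs))

def pooled_bootstrap_pvalue(groups, B=999):
    """Pooled bootstrap null: centre each group, pool, resample per group."""
    m_obs = box_m(groups)
    pooled = np.vstack([g - g.mean(axis=0) for g in groups])
    ns = [g.shape[0] for g in groups]
    m_star = []
    for _ in range(B):
        boot = [pooled[rng.integers(0, len(pooled), size=n)] for n in ns]
        m_star.append(box_m(boot))
    return (1 + np.sum(np.array(m_star) >= m_obs)) / (B + 1)

# Toy check: two trivariate samples with clearly unequal covariances.
g1 = rng.multivariate_normal(np.zeros(3), np.eye(3), size=60)
g2 = rng.multivariate_normal(np.zeros(3), np.diag([1.0, 4.0, 9.0]), size=60)
print(pooled_bootstrap_pvalue([g1, g2]))   # a small p-value is expected
```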
5

Wang, Sai. "GLR Control Charts for Monitoring the Mean Vector or the Dispersion of a Multivariate Normal Process." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/77227.

Full text
Abstract:
In many applications, the quality of process outputs is described by more than one characteristic variable. These quality variables usually follow a multivariate normal (MN) distribution. This dissertation discusses the monitoring of the mean vector and the covariance matrix of MN processes. The first part of this dissertation develops a statistical process control (SPC) chart based on a generalized likelihood ratio (GLR) statistic to monitor the mean vector. The performance of the GLR chart is compared to the performance of the Hotelling Χ² chart, the multivariate exponentially weighted moving average (MEWMA) chart, and a multi-MEWMA combination. Results show that the Hotelling Χ² chart and the MEWMA chart are only effective for a small range of shift sizes in the mean vector, while the GLR chart and some carefully designed multi-MEWMA combinations can give similar and overall better performance in detecting a wide range of shift magnitudes. Unlike most of these other options, the GLR chart does not require specification of tuning parameter values by the user. The GLR chart also has an advantage in process diagnostics: at the time of a signal, estimates of the change point and of the out-of-control mean vector are immediately available to the user. All these advantages of the GLR chart make it a favorable option for practitioners. For the design of the GLR chart, a series of easy-to-use equations is provided to users for calculating the control limit to achieve the desired in-control performance. The use of this GLR chart with a variable sampling interval (VSI) scheme has also been evaluated and discussed. The rest of the dissertation considers the problem of monitoring the covariance matrix. Three GLR charts with different covariance matrix estimators have been discussed. Results show that the GLR chart with a multivariate exponentially weighted moving covariance (MEWMC) matrix estimator is slightly better than the existing method for detecting any general changes in the covariance matrix, and the GLR chart with a constrained maximum likelihood estimator (CMLE) gives much better overall performance for detecting a wide range of shift sizes than the best available options for detecting only variance increases.
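A minimal sketch of the mean-vector GLR statistic described above, assuming a known in-control mean and covariance: at each time the statistic maximises the likelihood ratio for a sustained shift over candidate change points, and the maximising change point and averaged deviation are exactly the diagnostic estimates the abstract mentions. The control limit h and window length below are illustrative placeholders, not the dissertation's design values.

```python
import numpy as np

def glr_mean_chart(X, mu0, Sigma0, h, window=50):
    """GLR chart for a sustained shift in the mean of a MN process.

    At time t the statistic is R_t = max_tau 0.5*(t - tau)*d' Sigma0^{-1} d,
    where d is the average deviation from mu0 over observations tau+1..t;
    the maximising tau and d estimate the change point and the
    out-of-control mean vector at the time of a signal.
    """
    Sinv = np.linalg.inv(Sigma0)
    cum = np.cumsum(X - mu0, axis=0)           # cumulative deviations
    best = 0.0
    for t in range(1, len(X) + 1):
        best, best_tau = 0.0, None
        for tau in range(max(0, t - window), t):
            seg = cum[t - 1] - (cum[tau - 1] if tau > 0 else 0.0)
            d = seg / (t - tau)
            r = 0.5 * (t - tau) * d @ Sinv @ d
            if r > best:
                best, best_tau = r, tau
        if best > h:                           # signal: limit exceeded
            return t, best_tau, best
    return None, None, best

rng = np.random.default_rng(2)
mu0, Sigma0 = np.zeros(2), np.eye(2)
X = rng.multivariate_normal(mu0, Sigma0, size=100)
X[60:] += np.array([1.0, 0.5])                 # sustained shift after t = 60
print(glr_mean_chart(X, mu0, Sigma0, h=6.0))   # h chosen for illustration only
```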
Ph. D.
6

Karawatzki, Roman, and Josef Leydold. "Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/294/1/document.pdf.

Full text
Abstract:
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited for an algorithm to generate points from quite arbitrary distributions, which include all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
7

Bhatia, Krishan. "USE OF NEAR INFRARED SPECTROSCOPY AND MULTIVARIATE CALIBRATION IN PREDICTING THE PROPERTIES OF TISSUE PAPER MADE OF RECYCLED FIBERS AND VIRGIN PULP." Miami University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=miami1077768497.

Full text
8

Yamane, Danilo Ricardo [UNESP]. "Nutrient diagnosis of orange crops applying compositional data analysis and machine learning techniques." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/180576.

Full text
Abstract:
Efficient nutrient management is crucial to attain high fruit productivity. Results of tissue analysis are commonly interpreted using critical nutrient concentration ranges (CNCR) and the Diagnosis and Recommendation Integrated System (DRIS) on orange crops. Nevertheless, both methods ignore the inherent properties of the compositional data class, not accounting adequately for nutrient interactions and varietal influence on the plant ionome. Therefore, effective modeling tools are needed to rectify biases and incorporate genetic effects on nutrient composition. The objective of this study was to develop an accurate diagnostic approach to evaluate the nutritional status across orange (Citrus sinensis) canopy varieties using compositional data analysis and machine learning algorithms. We collected 716 foliar samples from fruit-bearing shoots in plots of non-irrigated commercial orange orchards (“Valencia”, “Hamlin”, “Pera”, “Natal”, “Valencia Americana” and “Westin”) distributed across São Paulo state (Brazil), analyzed N, S, P, K, Ca, Mg, B, Cu, Zn, Mn and Fe, and measured fruit yields. Sound nutrient balances were computed as isometric log-ratios (ilr). Discriminant analysis of ilr values differentiated the nutrient profiles of canopy varieties, indicating plant-specific ionomes. Diagnostic accuracy of nutrient balances reached 88% at a cutoff yield of 60 Mg ha-1 using ilrs and a k-nearest neighbors classification, allowing the development of reliable nutritional standards at a high fruit yield level. Citrus growers from São Paulo state should adopt the concept of yield-limiting nutrient balances, where groups of nutrients are optimally balanced. Supplying more Ca as lime or gypsum materials, reducing the P and K fertilizer applications and enhancing soil B fertilization could re-establish the [Mg | Ca], [Ca, Mg | K], [P | N, S], [K, Ca, Mg | N, S, P] and [B | N, S, P, K, Ca, Mg] balances in orange orchards yielding less than 60 Mg ha-1. The software “CND-Citros” can assist citrus growers, agronomy engineers and technicians to diagnose the nutrient status of orange crops based on the proposed method, using the results of leaf chemical analysis.
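For concreteness, a small Python sketch of the isometric log-ratio transform named above, using a generic Helmert orthonormal basis rather than the study's specific [Mg | Ca]-style balance design; the leaf composition is invented for illustration.

```python
import numpy as np
from scipy.linalg import helmert

def closure(x):
    """Scale a composition to sum to 1 (the analysis is scale-invariant)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum(axis=-1, keepdims=True)

def ilr(x):
    """Isometric log-ratio coordinates with a Helmert orthonormal basis.

    The rows of helmert(D) are orthonormal and sum to zero, so applying
    them to log(x) is the same as applying them to the centred
    log-ratio transform of x.
    """
    x = closure(x)
    H = helmert(x.shape[-1])          # (D-1, D) contrast matrix
    return np.log(x) @ H.T

# Toy leaf composition (shares of N, P, K, Ca, Mg) -- illustrative values,
# not the study's data or its balance design.
leaf = np.array([0.46, 0.05, 0.20, 0.21, 0.08])
print(ilr(leaf))                      # 4 orthonormal log-ratio coordinates
```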
9

Yamane, Danilo Ricardo. "Nutrient diagnosis of orange crops applying compositional data analysis and machine learning techniques." Jaboticabal, 2018. http://hdl.handle.net/11449/180576.

Full text
Abstract:
Advisor: Arthur Bernardes Cecílio Filho
Efficient nutrient management is crucial to attain high fruit productivity. Results of tissue analysis are commonly interpreted using critical nutrient concentration ranges (CNCR) and the Diagnosis and Recommendation Integrated System (DRIS) on orange crops. Nevertheless, both methods ignore the inherent properties of the compositional data class, not accounting adequately for nutrient interactions and varietal influence on the plant ionome. Therefore, effective modeling tools are needed to rectify biases and incorporate genetic effects on nutrient composition. The objective of this study was to develop an accurate diagnostic approach to evaluate the nutritional status across orange (Citrus sinensis) canopy varieties using compositional data analysis and machine learning algorithms. We collected 716 foliar samples from fruit-bearing shoots in plots of non-irrigated commercial orange orchards (“Valencia”, “Hamlin”, “Pera”, “Natal”, “Valencia Americana” and “Westin”) distributed across São Paulo state (Brazil), analyzed N, S, P, K, Ca, Mg, B, Cu, Zn, Mn and Fe, and measured fruit yields. Sound nutrient balances were computed as isometric log-ratios (ilr). Discriminant analysis of ilr values differentiated the nutrient profiles of canopy varieties, indicating plant-specific ionomes. Diagnostic accuracy of nutrient balances reached 88% at a cutoff yield of 60 Mg ha-1 using ilrs and a k-nearest neighbors classification, allowing the development of reliable nutritional standards at high fruit... (Complete abstract click electronic access below)
Doctorate
10

Mahmoud, Mahmoud A. "The Monitoring of Linear Profiles and the Inertial Properties of Control Charts." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/29544.

Full text
Abstract:
The Phase I analysis of data when the quality of a process or product is characterized by a linear function is studied in this dissertation. It is assumed that each sample collected over time in the historical data set consists of several bivariate observations for which a simple linear regression model is appropriate, a situation common in calibration applications. Using a simulation study, the researcher compares the performance of some of the recommended approaches used to assess the stability of the process. Also in this dissertation, a method based on using indicator variables in a multiple regression model is proposed. This dissertation also proposes a change point approach based on the segmented regression technique for testing the constancy of the regression parameters in a linear profile data set. The performance of the proposed change point method is compared to that of the most effective Phase I linear profile control chart approaches using a simulation study. The advantage of the proposed change point method over the existing methods is greatly improved detection of sustained step changes in the process parameters. Any control chart that combines sample information over time, e.g., the cumulative sum (CUSUM) chart and the exponentially weighted moving average (EWMA) chart, has an ability to detect process changes that varies over time depending on the past data observed. The chart statistics can take values such that some shifts in the parameters of the underlying probability distribution of the quality characteristic are more difficult to detect. This is referred to as the "inertia problem" in the literature. This dissertation shows under realistic assumptions that the worst-case run length performance of control charts becomes as informative as the steady-state performance. Also this study proposes a simple new measure of the inertial properties of control charts, namely the signal resistance. The conclusions of this study support the recommendation that Shewhart limits should be used with EWMA charts, especially when the smoothing parameter is small. This study also shows that some charts proposed by Pignatiello and Runger (1990) and Domangue and Patch (1991) have serious disadvantages with respect to inertial properties.
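To make the inertia discussion concrete, here is a hedged Python sketch of a univariate EWMA chart with optional Shewhart limits, the combination the study recommends; the limits and data are illustrative, and the dissertation's signal-resistance measure itself is not reproduced.

```python
import numpy as np

def ewma_chart(x, lam=0.1, L=2.7, shewhart=None, mu0=0.0, sigma=1.0):
    """EWMA chart z_t = (1 - lam) z_{t-1} + lam x_t with asymptotic
    limits mu0 +/- L*sigma*sqrt(lam/(2 - lam)).  Optional Shewhart
    limits at mu0 +/- shewhart*sigma guard against inertia: an EWMA
    statistic stranded on the wrong side of mu0 reacts slowly to a
    shift, while the raw observation does not.
    """
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))
    z = mu0
    for t, xt in enumerate(x, start=1):
        z = (1.0 - lam) * z + lam * xt
        if abs(z - mu0) > limit:
            return t, "EWMA"
        if shewhart is not None and abs(xt - mu0) > shewhart * sigma:
            return t, "Shewhart"
    return None, None

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 40), rng.normal(1.5, 1.0, 40)])
print(ewma_chart(x, shewhart=3.5))    # signal time and which limit fired
```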
Ph. D.
11

Andersson, Aron, and Shabnam Mirkhani. "Portfolio Performance Optimization Using Multivariate Time Series Volatilities Processed With Deep Layering LSTM Neurons and Markowitz." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273617.

Full text
Abstract:
The stock market is a non-linear field, but many of the best-known portfolio optimization algorithms are based on linear models. In recent years, the rapid development of machine learning has produced flexible models capable of complex pattern recognition. In this paper, we propose two different methods of portfolio optimization: one is based on the development of a multivariate time-dependent neural network, the long short-term memory (LSTM), capable of finding long- and short-term price trends. The other is the linear Markowitz model, where we add an exponential moving average to the input price data to capture underlying trends. The input data to our neural network are daily prices, volumes and market indicators such as the volatility index (VIX). The output variables are the prices predicted for each asset the following day, which are then further processed to produce metrics such as expected returns, volatilities and prediction error to design a portfolio allocation that optimizes a custom utility function like the Sharpe ratio. The LSTM model produced a portfolio with a return and risk that was close to the actual market conditions for the date in question, but with a high error value, indicating that our LSTM model is insufficient as a sole forecasting tool. However, the ability to predict upward and downward trends was somewhat better than expected, and we therefore conclude that multiple neural networks can be used as indicators, each responsible for some specific aspect of what is to be analysed, to draw a conclusion from the result. The findings also suggest that the input data should be more thoroughly considered, as the prediction accuracy is enhanced by the choice of variables and the external information used for training.
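The Markowitz side of the comparison can be sketched compactly. The snippet below (Python/NumPy) computes maximum-Sharpe (tangency) weights from an expected-return vector and covariance matrix; in the thesis' pipeline these inputs would come from the LSTM forecasts and the return history, but here they are invented placeholder values, and the LSTM itself is omitted.

```python
import numpy as np

def tangency_weights(mu, Sigma, rf=0.0):
    """Maximum-Sharpe (tangency) portfolio under the Markowitz model:
    w proportional to Sigma^{-1} (mu - rf), normalised to sum to one."""
    raw = np.linalg.solve(Sigma, mu - rf)
    return raw / raw.sum()

def sharpe(w, mu, Sigma, rf=0.0):
    """Ex-ante Sharpe ratio of portfolio w."""
    return (w @ mu - rf) / np.sqrt(w @ Sigma @ w)

# Illustrative expected daily returns and covariance for three assets.
mu = np.array([0.0005, 0.0003, 0.0004])
Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 3.0, 0.6],
                  [0.8, 0.6, 2.5]]) * 1e-4
w = tangency_weights(mu, Sigma)
print(w, sharpe(w, mu, Sigma))
```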
12

Ribeiro, Ana Karenina Fernandes de Sousa. "Atributos de solos sob sistemas de uso agropecuários na mesorregião do Oeste Potiguar - RN." Universidade Federal Rural do Semi-Árido, 2016. http://bdtd.ufersa.edu.br:80/tede/handle/tede/593.

Full text
Abstract:
The semi-arid region is extremely diverse in terms of its natural resources, which vary according to factors such as location, soil types, lithology and climate. However, the region under study is fragile with regard to human action, which makes the site more susceptible to degradation processes. Studies evaluating soil properties in the Oeste Potiguar mesoregion of Rio Grande do Norte state are scarce, yet their quantification across different uses and environments, in an integrated manner, is necessary for understanding local conditions and subsequently adopting practices suited to them. This study aimed to evaluate physical and chemical soil properties under different agricultural uses, detecting those most sensitive in distinguishing environments. The survey was conducted in the municipalities of Pau dos Ferros, São Francisco do Oeste, Mossoró and Governador Dix-Sept Rosado. The areas under study have particular characteristics as to the classification of their soils and agricultural uses. Fertility analyses and physical analyses were performed, including particle size, plasticity and liquidity limits, plasticity index and gravimetric moisture. The results were interpreted by means of multivariate analysis as the main tool, specifically factor analysis and clustering. There was a greater contribution of total organic carbon (TOC) in the Gleysol (favouring the increase in P, Ca2+ and K+), promoted by organic residues and poor drainage associated with the clay fraction. The soils showed a eutrophic character (V > 50%), influenced by lithology, except for the Latosol. In the Gleysol and Cambisol, the liquidity and plasticity limits increased because of the increase in the clay fraction and total organic carbon, raising the gravimetric moisture at which friability is reached; the exception was the Planosol, which showed low permeability in the B horizon, where the plasticity and liquidity limits diverged, yielding a greater plasticity index. In the particle-size analysis the profiles showed variation in textural classes, especially the Gleysol, which had the highest silt fraction, an indication of young soils with little weathering activity. We conclude that the physical attributes moisture, liquidity limit, plasticity limit, plasticity index, clay and fine sand were the most sensitive in distinguishing the environments, as were the chemical attributes pH, (H + Al), V and PST. The Planosol showed low permeability in the B horizon and thus the greatest plasticity index, with the limits farthest apart. The studied areas showed reactions ranging from acidity to alkalinity, with the presence of Al3+ and (H + Al), and high salinity. The parent material favoured the increase in calcium, sodium, magnesium and potassium contents.
13

Larroza, Eliane Gonçalves. "Caracterização das nuvens cirrus na região metropolitana de São Paulo (RMSP) com a técnica de Lidar de retroespalhamento elástico." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/85/85134/tde-19122011-153154/.

Full text
Abstract:
This pioneering work in Brazil aimed at investigating cirrus clouds over the metropolitan region of São Paulo (23.33º S / 46.44º W), SP, observed by the MSP-Lidar system in June and July 2007. During this period, cirrus clouds were observed during approximately 54% of the time of all Lidar measurements available. The Lidar provided high spatial and temporal resolution measurements of these clouds, which allowed characterizing and classifying them according to their macro- and microphysical properties. For such parameters, a dedicated methodology was developed for the Lidar data retrieval and robust statistics were applied to determine the different classes of cirrus. The following steps were adopted to characterize the observations: (a) the determination of stationary periods (or observations) during the time evolution of cirrus detection, (b) the determination of the base and top of clouds through a so-called threshold value to derive the macrophysical variables (altitude, temperature, geometrical thickness), and (c) the application of the transmittance method for each layer and the determination of the cloud microphysical variables (optical depth and Lidar ratio). In this process, the Lidar ratio is calculated iteratively until convergence is achieved. Multivariate statistical analyses were performed to determine the classes of cirrus. These classes are based on geometric thickness, average altitude and the respective temperature, relative altitude (difference between tropopause height and cloud top) and optical depth. The successive use of Principal Component Analysis (PCA), the Hierarchical Clustering Method (HCM) and Discriminant Analysis (DA) allowed the identification of four classes of cirrus. It is important to point out that such methods were applied only to cases identified as single layers of clouds, due to the rare occurrence of multilayered clouds. The origin of formation for the four cirrus classes, though they have distinct macro- and microphysical properties, was found to be basically the same, i.e., the injection of water vapor into the atmosphere by frontal systems, followed by the cooling process that forms ice crystals. The same formation mechanism is also attributed to the subtropical jet. An analysis of the temperature profile and comparison with the literature showed that the cirrus crystals possibly have the form of hexagonal plates and columns. The Lidar ratio (LR) was also found to be in accordance with the literature.
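The successive PCA, hierarchical clustering and discriminant analysis pipeline can be sketched in a few lines of Python with scikit-learn; the five input variables stand in for the thesis' macro- and microphysical cloud properties and are synthetic, as is the choice of three components and four clusters.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
# Stand-ins for the per-observation variables used in the thesis:
# geometric thickness, mean altitude, mean temperature, tropopause-relative
# altitude and optical depth (synthetic, not the MSP-Lidar data).
X = rng.normal(size=(200, 5))

Xs = StandardScaler().fit_transform(X)
scores = PCA(n_components=3).fit_transform(Xs)                       # PCA
labels = AgglomerativeClustering(n_clusters=4).fit_predict(scores)  # HCM
lda = LinearDiscriminantAnalysis().fit(scores, labels)              # DA
print(lda.score(scores, labels))   # resubstitution accuracy of the classes
```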
14

Hosler, Deborah Susan. "Models and Graphics in the Analysis of Categorical Variables: The Case of the Youth Tobacco Survey." [Johnson City, Tenn. : East Tennessee State University], 2002. http://etd-submit.etsu.edu/etd/theses/available/etd-0716102-095453/unrestricted/HoslerD080202.pdf.

Full text
15

Jaradat, Rasheed Abdelkareem. "Prediction of reservoir properties of the N-sand, vermilion block 50, Gulf of Mexico, from multivariate seismic attributes." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/2236.

Full text
Abstract:
The quantitative estimation of reservoir properties directly from seismic data is a major goal of reservoir characterization. Integrated reservoir characterization makes use of different varieties of well and seismic data to construct detailed spatial estimates of petrophysical and fluid reservoir properties. The advantage of data integration is the generation of consistent and accurate reservoir models that can be used for reservoir optimization, management and development. This is particularly valuable in mature field settings where hydrocarbons are known to exist but their exact location, pay, lateral variations and other properties are poorly defined. Recent approaches to reservoir characterization make use of individual seismic attributes to estimate inter-well reservoir properties. However, these attributes share a considerable amount of information among them and can lead to spurious correlations. An alternative approach is to evaluate reservoir properties using multiple seismic attributes. This study reports the results of an investigation of the use of multivariate seismic attributes to predict the lateral reservoir properties of gross thickness, net thickness, gross effective porosity, net-to-gross ratio and net reservoir porosity-thickness product. This approach uses principal component analysis and principal factor analysis to transform eighteen relatively correlated original seismic attributes into a set of mutually orthogonal or independent PCs and PFs, which are designated as multivariate seismic attributes. Data from the N-sand interval of the Vermilion Block 50 field, Gulf of Mexico, were used in this study. Multivariate analyses produced eighteen PC and three PF grid maps. A collocated cokriging geostatistical technique was used to estimate the spatial distribution of reservoir properties from eighteen wells penetrating the N-sand interval. Reservoir property maps generated by using multivariate seismic attributes yield highly accurate predictions of reservoir properties when compared to predictions produced with the original individual seismic attributes. Contrary to the original seismic attribute results, predicted reservoir properties from the multivariate seismic attributes honor the lateral geological heterogeneities embedded within the seismic data and strongly maintain the proposed geological model of the N-sand interval. The results suggest that the multivariate seismic attribute technique can be used to predict various reservoir properties and can be applied to a wide variety of geological and geophysical settings.
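The decorrelation step that underlies the multivariate-attribute idea is easy to demonstrate: PCA of a centred attribute matrix yields component scores whose covariance is diagonal. The NumPy sketch below uses synthetic correlated "attributes", not the Vermilion Block 50 data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for 18 correlated seismic attributes at 500 traces:
# a low-rank mixing of latent factors plus noise.
latent = rng.normal(size=(500, 4))
mixing = rng.normal(size=(4, 18))
A = latent @ mixing + 0.1 * rng.normal(size=(500, 18))

# PCA by SVD of the centred attribute matrix: the principal-component
# scores are mutually uncorrelated, unlike the raw attributes.
Ac = A - A.mean(axis=0)
U, s, Vt = np.linalg.svd(Ac, full_matrices=False)
scores = Ac @ Vt.T                       # principal component scores

cov_pc = np.cov(scores, rowvar=False)
off_diag = cov_pc - np.diag(np.diag(cov_pc))
print(np.abs(off_diag).max())            # ~0: the PCs are orthogonal
```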
16

LISBOA, Francy Junio Gonçalves. "Uso da abordagem estatística procrusteana em Ecologia de Solo: caso de estudo envolvendo sistema de integração lavoura-pecuária-floresta no Cerrado." Universidade Federal Rural do Rio de Janeiro, 2015. https://tede.ufrrj.br/jspui/handle/jspui/1570.

Full text
Abstract:
This thesis is part of a multi-institutional scientific effort seeking to support the replacement of degraded Brazilian pastures by systems which integrate different land use types such as crop, pasture and forest plantation (collectively known as iCLF systems). Here, the focus was also to discuss the potentialities of an unusual statistical multivariate approach called Procrustes analysis in the plant and soil ecology framework. The thesis has three chapters through which details of Procrustes analysis are presented in both a technical and an intuitive manner. The first chapter describes roadmaps showing how the procrustean residual vector (the so-called PAM: Procrustean association metric), representing the multivariate correlation between two or more data tables, can be used as a univariate variable in more traditional statistical approaches such as ecological ordination, regression analysis and ANOVA followed by mean comparisons. The second chapter discussed a case study whose general objective was to use PAMs, depicting the relationships between distance matrices from individual soil microbial structure variables (PLFA: Phospholipid Fatty Acid) and distance matrices from soil properties (chemical and physical), as response variables in an ANOVA framework with land use type as categorical predictor (degraded pasture, improved pasture, native fragment and iCLF system). The hypothesis in this case was that the fungi:bacteria ratio given by PLFA analysis, a good index of changes in microbial structure in response to land use alteration and associated with more conservative soils in terms of carbon mineralization, is favored by the man-introduced vegetal heterogeneity which characterizes crop-livestock-forest integration. The last chapter was entirely dedicated to answering some technical questions which arose after the publication of the first chapters. Basically the two most common questions were: i) Does the increasing number of columns/variables within a data table affect Procrustes outcomes? ii) Can the procrustean residual vector, the PAM, translate differences between treatments in terms of the strength of the multivariate correlation between two data tables when used in mean comparisons? For the case study, Procrustes was useful in supporting iCLF systems as a potential alternative to degraded pasture by raising insights that the man-introduced vegetal heterogeneity in such integrated agroecosystems favors shifts in microbial structure toward fungal dominance.
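A minimal Python sketch of the Procrustean association metric (PAM) idea described above, using scipy.spatial.procrustes on two synthetic data tables over the same sites; SciPy's routine requires equal column counts, so both tables are generated with six variables, which sidesteps the ordination step a real analysis would use.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(6)

# Two synthetic data tables over the same n sites: "soil" properties and
# a correlated "microbial" table (stand-ins for soil chemistry vs. PLFA).
n = 40
soil = rng.normal(size=(n, 6))
microbes = soil @ rng.normal(size=(6, 6)) + rng.normal(scale=2.0, size=(n, 6))

# Procrustes superimposition scales, rotates and translates one table
# onto the other; the per-site residual lengths form the PAM vector,
# a univariate quantity usable in ANOVA, regression or ordination.
mtx1, mtx2, disparity = procrustes(soil, microbes)
pam = np.linalg.norm(mtx1 - mtx2, axis=1)
print(disparity, pam[:5])
```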
17

Higgs, Helen. "Price and volatility relationships in the Australian electricity market." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16404/1/Helen_Higgs_Thesis.pdf.

Full text
Abstract:
This thesis presents a collection of papers that have been published, accepted or submitted for publication. They assess price, volatility and market relationships in the five regional electricity markets in the Australian National Electricity Market (NEM): namely, New South Wales (NSW), Queensland (QLD), South Australia (SA), the Snowy Mountains Hydroelectric Scheme (SNO) and Victoria (VIC). The transmission networks that link regional systems via interconnectors across the eastern states have played an important role in the connection of the regional markets into an efficient national electricity market. During peak periods, the interconnectors become congested and the NEM separates into its regions, promoting price differences across the market and exacerbating reliability problems in regional utilities. This thesis is motivated in part by the fact that assessment of these prices and volatility within and between regional markets allows for better forecasts by electricity producers, transmitters and retailers and the efficient distribution of energy on a national level. The first two papers explore whether the lagged price and volatility information flows of the connected spot electricity markets can be used to forecast the pricing behaviour of individual markets. A multivariate generalised autoregressive conditional heteroskedasticity (MGARCH) model is used to identify the source and magnitude of price and volatility spillovers within (intra-relationship) and across (inter-relationship) the various spot markets. The results show evidence that prices in one market can be explained by their own price lagged one period and are independent of the lagged spot prices of any other market when daily data are employed. This implies that the regional spot electricity markets are not fully integrated. However, there is also evidence of a large number of significant own-volatility and cross-volatility spillovers in all five markets, indicating that shocks in some markets will affect price volatility in others. Similar conclusions are obtained when the daily data are disaggregated into peak and off-peak periods, suggesting that the spot electricity markets are still rather isolated. These results inspired the research underlying the third paper of the thesis on modelling the dynamics of spot electricity prices in each regional market. A family of generalised autoregressive conditional heteroskedasticity (GARCH), RiskMetrics, normal Asymmetric Power ARCH (APARCH), Student APARCH and skewed Student APARCH models is used to model the time-varying variance in prices, with the inclusion of news arrival as proxied by the contemporaneous volume of demand, and time-of-day, day-of-week and month-of-year effects as exogenous explanatory variables. The important contribution of this paper lies in the use of the two latter methodologies, namely the Student APARCH and skewed Student APARCH, which take account of the skewness and fat-tailed characteristics of the electricity spot price series. The results indicate significant innovation spillovers (ARCH effects) and volatility spillovers (GARCH effects) in the conditional standard deviation equation, even with market and calendar effects included. Intraday prices also exhibit significant asymmetric responses of volatility to the flow of information (that is, positive shocks or good news are associated with higher volatility than negative shocks or bad news).
The fourth research paper attempts to capture the salient features of price hikes or spikes in wholesale electricity markets. The results show that electricity prices exhibit stronger mean-reversion after a price spike than in the normal period, suggesting the electricity price quickly returns from an extreme position (such as a price spike) to equilibrium; that is, extreme price spikes are short-lived. Mean-reversion can be measured in a separate regime from the normal regime using Markov transition probabilities to identify the different regimes. The fifth and final paper investigates whether interstate/regional trade has enhanced the efficiency of each spot electricity market. Multiple variance ratio tests are used to determine if Australian spot electricity markets follow a random walk; that is, if they are informationally efficient. The results indicate that despite the presence of a national market, only the Victorian market during the off-peak period is informationally (or market) efficient and follows a random walk. This thesis makes a significant contribution by estimating the volatility and the efficiency of wholesale electricity prices using four advanced time series techniques that have not previously been explored in the Australian context. An understanding of the modelling and forecastability of electricity spot price volatility across and within the Australian spot markets is vital for generators, distributors and market regulators. Such an understanding influences the pricing of derivative contracts traded on the electricity markets and enables market participants to better manage their financial risks.
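The variance ratio machinery used in the fifth paper can be illustrated with a simplified Lo-MacKinlay statistic (overlapping q-period returns, homoscedastic standard error, no finite-sample bias corrections); the series below is a synthetic random walk, not NEM spot prices, which are spiky and would typically be tested per region and per peak/off-peak period.

```python
import numpy as np

def variance_ratio(prices, q):
    """Simplified Lo-MacKinlay variance ratio test of the random walk.

    VR(q) compares the variance of q-period log returns with q times
    the variance of 1-period log returns; VR(q) = 1 under a random
    walk.  Returns VR(q) and the homoscedastic z-statistic.
    """
    logp = np.log(prices)
    r = np.diff(logp)
    T = len(r)
    vr = np.var(logp[q:] - logp[:-q], ddof=0) / (q * np.var(r, ddof=0))
    z = (vr - 1.0) / np.sqrt(2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * T))
    return vr, z

rng = np.random.default_rng(7)
p = np.exp(np.cumsum(rng.normal(0.0, 0.02, 2000)))   # synthetic random walk
print(variance_ratio(p, q=5))                        # VR near 1, |z| small
```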
18

Higgs, Helen. "Price and volatility relationships in the Australian electricity market." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16404/.

Full text
Abstract:
This thesis presents a collection of papers that have been published, accepted or submitted for publication. They assess price, volatility and market relationships in the five regional electricity markets in the Australian National Electricity Market (NEM): namely, New South Wales (NSW), Queensland (QLD), South Australia (SA), the Snowy Mountains Hydroelectric Scheme (SNO) and Victoria (VIC). The transmission networks that link regional systems via interconnectors across the eastern states have played an important role in connecting the regional markets into an efficient national electricity market. During peak periods, the interconnectors become congested and the NEM separates into its regions, promoting price differences across the market and exacerbating reliability problems in regional utilities. This thesis is motivated in part by the fact that assessing these prices and volatility within and between regional markets allows for better forecasts by electricity producers, transmitters and retailers and for the efficient distribution of energy at a national level. The first two papers explore whether the lagged price and volatility information flows of the connected spot electricity markets can be used to forecast the pricing behaviour of individual markets. A multivariate generalised autoregressive conditional heteroskedasticity (MGARCH) model is used to identify the source and magnitude of price and volatility spillovers within (intra-relationship) and across (inter-relationship) the various spot markets. The results show that prices in one market can be explained by their own price lagged one period and are independent of the lagged spot prices of any other market when daily data are employed. This implies that the regional spot electricity markets are not fully integrated. However, there is also evidence of a large number of significant own-volatility and cross-volatility spillovers in all five markets, indicating that shocks in some markets will affect price volatility in others. Similar conclusions are obtained when the daily data are disaggregated into peak and off-peak periods, suggesting that the spot electricity markets are still rather isolated. These results inspired the research underlying the third paper of the thesis on modelling the dynamics of spot electricity prices in each regional market. A family of generalised autoregressive conditional heteroskedasticity (GARCH), RiskMetrics, normal Asymmetric Power ARCH (APARCH), Student APARCH and skewed Student APARCH models is used to model the time-varying variance in prices, with news arrival as proxied by the contemporaneous volume of demand, and time-of-day, day-of-week and month-of-year effects included as exogenous explanatory variables. The important contribution of this paper lies in the use of the two latter methodologies, namely the Student APARCH and skewed Student APARCH, which take account of the skewness and fat-tailed characteristics of the electricity spot price series. The results indicate significant innovation spillovers (ARCH effects) and volatility spillovers (GARCH effects) in the conditional standard deviation equation, even with market and calendar effects included. Intraday prices also exhibit significant asymmetric responses of volatility to the flow of information (that is, positive shocks or good news are associated with higher volatility than negative shocks or bad news).
The fourth research paper attempts to capture the salient features of price hikes or spikes in wholesale electricity markets. The results show that electricity prices exhibit stronger mean-reversion after a price spike than in the normal period, suggesting that the electricity price quickly returns from an extreme position (such as a price spike) to equilibrium; that is, extreme price spikes are short-lived. Mean-reversion can be measured in a separate regime from the normal regime, using Markov transition probabilities to identify the different regimes. The fifth and final paper investigates whether interstate/regional trade has enhanced the efficiency of each spot electricity market. Multiple variance ratio tests are used to determine if Australian spot electricity markets follow a random walk; that is, if they are informationally efficient. The results indicate that, despite the presence of a national market, only the Victorian market during the off-peak period is informationally (or market) efficient and follows a random walk. This thesis makes a significant contribution to estimating the volatility and the efficiency of wholesale electricity prices by employing four advanced time series techniques that have not previously been explored in the Australian context. An understanding of the modelling and forecastability of electricity spot price volatility across and within the Australian spot markets is vital for generators, distributors and market regulators. Such an understanding influences the pricing of derivative contracts traded on the electricity markets and enables market participants to better manage their financial risks.
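As an illustrative aside, not part of the thesis record: the random-walk check in the fifth paper rests on the variance ratio, since for uncorrelated returns the variance of a q-period return is q times the one-period variance. A minimal Lo-MacKinlay-style sketch in Python, assuming simple overlapping estimators and the homoskedastic standard error (the thesis's multiple variance ratio tests are more elaborate, and the price series here is invented):

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay variance ratio VR(q) and its z-statistic under an
    i.i.d. (homoskedastic) random-walk null; simplified estimators."""
    r = np.asarray(returns, dtype=float)
    n = r.size
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / n               # 1-period return variance
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period sums
    varq = np.sum((rq - q * mu) ** 2) / (n * q)
    vr = varq / var1
    se = np.sqrt(2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * n))
    return vr, (vr - 1.0) / se

# a pure random walk should give VR(q) near 1 (the efficiency benchmark)
rng = np.random.default_rng(0)
log_prices = np.cumsum(rng.normal(size=2000))
print(variance_ratio(np.diff(log_prices), q=4))
```

A VR(q) significantly different from 1 rejects the random walk, i.e. informational efficiency.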
APA, Harvard, Vancouver, ISO, and other styles
19

Sistanizadeh, Mohammad K. "Weak narrow-band signal detection in multivariate non-gaussian clutter." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/71187.

Full text
Abstract:
This dissertation is concerned with the development and performance analysis of non-linear receivers for the detection of weak narrow-band signals in multivariate non-Gaussian clutter. The novelty of the detection scheme lies in the utilization of both the complex measurement and the multivariate non-Gaussian character of the clutter. Two clutter models are developed based on the available partial information. Model (I) is based on a priori knowledge of the first-order density, the correlation structure of the amplitude, and the circular symmetry assumption for the in-phase and quadrature-phase components. Model (II) is based on the first-order in-phase and quadrature-phase densities and the complex correlation structure. These models completely specify a multivariate complex non-Gaussian density and can be used for clutter generation. A class of optimum non-linear receiver structures for weak signal levels, canonically known as Locally Optimum Detectors (LOD), is derived under clutter Model (I). This can be considered a generalization of the LOD for independent and identically distributed (i.i.d.) clutter. The detectors utilize complex measurements, and their structures depend on whether the underlying hypothesis-testing model is real or complex. The performance of each of the proposed detector structures is formulated based on the concept of efficacy. The performance of the detectors is then evaluated with respect to a reference detector using the Asymptotic Relative Efficiency (ARE) criterion. Numerical evaluation of the performance expression is carried out for a constant signal in Weibull-distributed clutter for various density parameters. Simulation results indicate that the performance of the developed detectors, based on ARE, is superior to the i.i.d. LOD detector and the matched filter. Finally, the sensitivity of the detector performance to parameter variation of the structural non-linearities is investigated.
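For orientation, a hedged sketch rather than the dissertation's own derivation: in the simplest real-valued i.i.d. case a locally optimum detector for a weak known signal \( \{\theta_i\} \) in clutter with first-order density \( f \) compares the statistic

\[ T_{\mathrm{LO}}(\mathbf{x}) \;=\; \sum_{i=1}^{n} \theta_i \, g(x_i), \qquad g(x) \;=\; -\,\frac{f'(x)}{f(x)}, \]

to a threshold, where \( g \) is the score function of the clutter density. The dissertation's contribution is the generalization of this score-function construction to correlated, complex-valued non-Gaussian clutter, with performance ranked through efficacy-based asymptotic relative efficiency.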
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Hirk, Rainer, Kurt Hornik, and Laura Vana. "Multivariate Ordinal Regression Models: An Analysis of Corporate Credit Ratings." WU Vienna University of Economics and Business, 2017. http://epub.wu.ac.at/5389/1/Report132_lvana.pdf.

Full text
Abstract:
Correlated ordinal data typically arise from multiple measurements on a collection of subjects. Motivated by an application in credit risk, where multiple credit rating agencies assess the creditworthiness of a firm on an ordinal scale, we consider multivariate ordinal models with a latent variable specification and correlated error terms. Two different link functions are employed, by assuming a multivariate normal and a multivariate logistic distribution for the latent variables underlying the ordinal outcomes. Composite likelihood methods, more specifically the pairwise and tripletwise likelihood approach, are applied for estimating the model parameters. We investigate how sensitive the pairwise likelihood estimates are to the number of subjects and to the presence of observations missing completely at random, and find that these estimates are robust for both link functions and reasonable sample size. The empirical application consists of an analysis of corporate credit ratings from the big three credit rating agencies (Standard & Poor's, Moody's and Fitch). Firm-level and stock price data for publicly traded US companies as well as an incomplete panel of issuer credit ratings are collected and analyzed to illustrate the proposed framework.
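As a hedged illustration of the pairwise-likelihood idea described above (a minimal sketch with invented data, known thresholds and a probit link; the report's setting has three raters, covariates and estimated thresholds):

```python
import numpy as np
from scipy.stats import multivariate_normal

def pairwise_loglik(rho, y1, y2, cuts):
    """Pairwise log-likelihood of two ordinal ratings under a latent
    bivariate normal with correlation rho and known thresholds cuts."""
    cov = [[1.0, rho], [rho, 1.0]]
    F = lambda a, b: multivariate_normal.cdf([a, b], mean=[0.0, 0.0], cov=cov)
    ll = 0.0
    for i, j in zip(y1, y2):  # 0-based category indices
        # rectangle probability by inclusion-exclusion over the four corners
        p = (F(cuts[i + 1], cuts[j + 1]) - F(cuts[i], cuts[j + 1])
             - F(cuts[i + 1], cuts[j]) + F(cuts[i], cuts[j]))
        ll += np.log(max(p, 1e-300))
    return ll

# three rating classes; +/-10 stands in for the infinite outer thresholds
cuts = np.array([-10.0, -0.5, 0.7, 10.0])
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=200)
y1 = np.searchsorted(cuts[1:-1], z[:, 0])  # latent scores -> categories
y2 = np.searchsorted(cuts[1:-1], z[:, 1])
rhos = np.linspace(0.05, 0.95, 19)
print(max(rhos, key=lambda r: pairwise_loglik(r, y1, y2, cuts)))  # near 0.6
```

With only two raters the pairwise likelihood coincides with the full likelihood; the computational gain appears once three or more correlated ratings are modelled, as in the paper.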
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
21

Hirk, Rainer, Kurt Hornik, and Laura Vana. "Multivariate ordinal regression models: an analysis of corporate credit ratings." Springer Berlin Heidelberg, 2018. http://dx.doi.org/10.1007/s10260-018-00437-7.

Full text
Abstract:
Correlated ordinal data typically arises from multiple measurements on a collection of subjects. Motivated by an application in credit risk, where multiple credit rating agencies assess the creditworthiness of a firm on an ordinal scale, we consider multivariate ordinal regression models with a latent variable specification and correlated error terms. Two different link functions are employed, by assuming a multivariate normal and a multivariate logistic distribution for the latent variables underlying the ordinal outcomes. Composite likelihood methods, more specifically the pairwise and tripletwise likelihood approach, are applied for estimating the model parameters. Using simulated data sets with varying number of subjects, we investigate the performance of the pairwise likelihood estimates and find them to be robust for both link functions and reasonable sample size. The empirical application consists of an analysis of corporate credit ratings from the big three credit rating agencies (Standard & Poor's, Moody's and Fitch). Firm-level and stock price data for publicly traded US firms as well as an unbalanced panel of issuer credit ratings are collected and analyzed to illustrate the proposed framework.
APA, Harvard, Vancouver, ISO, and other styles
22

DeBord, Joshua S. "Predicting the Geographic Origin of Heroin by Multivariate Analysis of Elemental Composition and Strontium Isotope Ratios." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3802.

Full text
Abstract:
The goal of this research was to aid in the fight against the heroin and opioid epidemic by developing new methodology for heroin provenance determination and forensic sample comparison. Over 400 illicit heroin powder samples were analyzed using quadrupole and high-resolution inductively coupled plasma mass spectrometry (Q-ICP-MS and HR-ICP-MS) in order to measure and identify elemental contaminants useful for associating heroin samples of common origin and differentiating heroin of different geographic origins. Additionally, 198 heroin samples were analyzed by multi-collector ICP-MS (MC-ICP-MS) to measure radiogenic strontium isotope ratios (87Sr/86Sr) with high precision for heroin provenance determination, for the first time. Supervised discriminant analysis models were constructed to predict heroin origin using elemental composition. The model was able to correctly associate 88% of the samples to their region of origin. When 87Sr/86Sr data were combined with Q-ICP-MS elemental data, the correct association of heroin samples improved to ≥90% for all groups, with an average of 93% correct classification. For forensic sample comparisons, quantitative elemental data (11 elements measured) from 120 samples, 30 from each of the four regions, were compared in order to assess the rate of discrimination (5400 total comparisons). Using a match criterion of ±3 standard deviations about the mean, only 14 of the 5400 possible comparison pairs were not discriminated, resulting in a discrimination rate of 99.7%. For determining the rate of correct associations, 3 replicates of 24 duplicate samples were prepared and analyzed on separate days. Only 1 of the 24 correct pairs was not associated, for a correct association rate of 95.8%. The new methods for provenance determination and sample comparison are expected to be highly useful to intelligence agencies and law enforcement working to reduce the proliferation of heroin.
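A hedged sketch of the supervised classification step (random placeholder arrays stand in for the element concentrations, isotope ratios and region labels; the study's actual variables and preprocessing may differ):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# rows = heroin samples, columns = element concentrations (plus 87Sr/86Sr);
# labels 0..3 stand for the four regions of origin
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 12))
y = rng.integers(0, 4, size=400)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
rate = cross_val_score(clf, X, y, cv=5).mean()  # correct-classification rate
print(rate)
```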
APA, Harvard, Vancouver, ISO, and other styles
23

Ma, San-San, and Patrick Truong. "The influence of financial ratios on different sectors : A Multivariate Regression of OMXS stocks to determine what financial ratios influence stock growth in different sectors most." Thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169905.

Full text
Abstract:
Financial ratios are used to indicate a stock's performance. This thesis aims to clarify whether there are any differences in how sectors respond to financial ratios. Through deductive research, it establishes that the most prominent financial ratios differ across the various sectors, while also establishing that financial ratios account for only a small part of stock growth. The thesis also contains a qualitative study which discusses the forces behind stock growth. The results indicate that stock growth is mainly driven by fundamental factors, a notion also supported by previous research.
APA, Harvard, Vancouver, ISO, and other styles
24

Pereira, Fernando. "Analyse spatio-temporelle du champ géomagnétique et des processus d'accélération solaires observés en émission radio." Orléans, 2004. http://www.theses.fr/2004ORLE2011.

Full text
Abstract:
The study of Sun-Earth relations frequently requires the analysis of multivariate data, which depend on several variables (time, space, ...). To characterize the physical processes, we propose the use of multivariate statistical methods (SVD, ICA, ...). Such methods project the data onto a small number of modes that capture the salient features of their behaviour and that must then be given a physical interpretation. We apply them to two examples: (1) the geomagnetic field, measured at different locations around the globe, and (2) the acceleration processes in the solar corona observed by the Nançay radioheliograph. Starting from purely statistical modes, we show that it is possible to bring out physically known processes and to better isolate very weak disturbances such as geomagnetic jerks.
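As a hedged illustration of the mode decomposition described above (an SVD on an invented space-time data matrix; the thesis also uses ICA and works with real geomagnetic and radio data):

```python
import numpy as np

# data matrix: rows = time samples, columns = magnetometer stations
# (placeholder random data, not the thesis's observations)
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 12))

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                    # keep a few dominant modes
temporal_modes = U[:, :k] * s[:k]        # time behaviour of each mode
spatial_modes = Vt[:k]                   # station pattern of each mode
explained = s[:k] ** 2 / np.sum(s ** 2)  # share of variance captured
```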
APA, Harvard, Vancouver, ISO, and other styles
25

PENNONI, FULVIA. "Metodi statistici multivariati applicati all'analisi del comportamento dei titolari di carta di credito di tipo revolving." Bachelor's thesis, Universita' degli studi di Perugia, 2000. http://hdl.handle.net/10281/50024.

Full text
Abstract:
This thesis illustrates an application of graphical models to behavioural credit scoring, defined by Thomas (1999) as 'the systems and models that allow lenders to make better decisions in managing existing clients by forecasting their future performance'. The class of graphical models considered is that of chain graph models: multivariate statistical models that make it possible to model appropriately the relations between the variables describing the behaviour of credit card holders. Since they are based on a log-linear expansion of the density function of the variables, they allow oriented associations between subsets of variables to be represented graphically, the most parsimonious structure describing those relations to be identified, and more than one response variable to be modelled simultaneously. They are particularly useful when there is an at least partial ordering of the variables, allowing them to be divided into purely exogenous variables, groups of mutually linked intermediate variables, and responses. In graphical models the independence structure of the variables is represented visually by a graph: variables are represented by nodes joined by edges that show the dependences in probability between the variables, and a missing edge means that two nodes are independent given the other nodes. Such models are also valuable for the theory that links them to expert systems: once the model has been selected, the expert system can be queried to model the joint and marginal probability distributions of the variables. The first chapter presents the main statistical models adopted in credit scoring. The second chapter considers categorical variables, since the information on credit card holders is collected in contingency tables; it introduces the notions of independence between two variables and of conditional independence among more than two variables, and lists some measures of association, in particular the odds ratios that form the basis of the multivariate models used. The third chapter illustrates the log-linear and logistic models, which belong to the family of generalized linear models; being multivariate methods, they allow the association between variables to be studied simultaneously. A special log-linear parameterization is described that takes into account the ordinal scale on which some of the categorical variables are measured and that is also useful for finding the best categorization of the continuous variables. Results on maximum likelihood estimation of the model parameters are recalled, together with the iterative numerical algorithms needed to solve the likelihood equations for the unknown parameters, and the likelihood ratio test is used to evaluate the goodness of fit of the model to the data. Chapter four introduces graph theory, presenting its main concepts and the properties that allow the model to be represented visually by a graph, highlighting the interpretative advantages; it also touches on the problem of sparsity of the contingency table when its dimensions are large, describing some methods adopted to deal with this problem, with emphasis on the definitions of collapsibility.
The fifth chapter illustrates an application of the described methods to a sample of about sixty thousand holders of a revolving credit card issued by one of the largest Italian financial companies operating in the sector. The variables examined describe the socioeconomic characteristics of the card holder, taken from the form completed when applying for credit, and the state of the customer's account in two subsequent periods: each month customers are classified by the company as 'active', 'inactive' or 'dormant' according to the balance of the account. The aim of the work was to find conditional independences between the variables, in particular with respect to the two response variables, and to define the profile of those who use the card most. Conclusions regarding the analyses of chapter five are reported in the final section, and the appendix describes some of the main programs of the statistical software used for the computations.
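As a brief numerical aside on the odds ratio the abstract names as the basis of the model formulation (the 2x2 table below is invented for illustration, not the thesis data):

```python
import numpy as np

# hypothetical 2x2 table: card status (active/inactive) by an applicant trait
table = np.array([[420.0, 180.0],
                  [260.0, 340.0]])
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
se_log_or = np.sqrt((1.0 / table).sum())     # Woolf SE of log(OR)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(odds_ratio, ci)                        # OR = 1 means independence
```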
APA, Harvard, Vancouver, ISO, and other styles
26

GHILARDELLI, FRANCESCA. "USE OF MULTIVARIATE AND MACHINE LEARNING STATISTICS TO RELATE FEED QUALITY AND SAFETY CHARACTERISTICS TO NUTRIENT UTILIZATION EFFICIENCY AND MILK TRAITS: A HEURISTIC APPROACH." Doctoral thesis, Università Cattolica del Sacro Cuore, 2022. http://hdl.handle.net/10280/119856.

Full text
Abstract:
Adequate nutritional practices are the basis of profitability and sustainability in animal production and are one of the main factors influencing animal welfare. In evaluating feeds, beyond chemical composition, sanitary quality, in terms of fermentation quality and microbial contamination, plays an important role in determining the actual palatability and safety of the feed. In this PhD thesis, the interactions between feed quality and animal performance were studied through a heuristic method of data and sample collection; in particular, the interactions between corn silage quality and diet quality were evaluated. Given the complexity of these matrices in terms of the microbial populations that influence and drive feed quality, the new challenges of nutritional evaluation for cattle must move toward multi-parameter assessments that include chemical-biological, microbiological and sanitary characterizations. Collecting this information without predetermined aims made it possible to analyse, with multivariate statistics and machine learning techniques, the relationships between feed quality and its effects on herd performance, and to propose new approaches for classifying feed quality and the nutritional strategies adopted on dairy farms.
APA, Harvard, Vancouver, ISO, and other styles
27

Jotta, César Augusto Degiato. "Análise de variância multivariada nas estimativas dos parâmetros do modelo log-logístico para susceptibilidade do capim-pé-de-galinha ao glyphosate." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-29112016-163511/.

Full text
Abstract:
The national agricultural scenario has become increasingly competitive over the years; maintaining productivity growth at a low operating cost and with low environmental impact are the three most relevant concerns in the area. Productivity, in turn, is a function of several variables, weed control being one of them. This work analyses a dataset from an experiment conducted in the Plant Production Department of ESALQ-USP, Piracicaba - SP. Four goosegrass (capim-pé-de-galinha) biotypes from three Brazilian states were evaluated at three morphological stages, with 4 replicates for each biotype. The response variable was dry mass (g) and the regressor variable was the dose of glyphosate, at concentrations ranging from 1/16 D to 16 D plus an untreated control, where D is 480 grams of glyphosate acid equivalent per hectare (g a.e. ha-1) for the 2-3 tiller stage, 720 g a.e. ha-1 for the 6-8 tiller stage and 960 g a.e. ha-1 for the 10-12 tiller stage. The primary objective was to evaluate whether, over the years, goosegrass populations have become resistant to the herbicide glyphosate, aiming at the detection of resistant biotypes. The experiment was conducted under a completely randomized design, carried out at three different stages. For the data analysis, the non-linear log-logistic model proposed in Knezevic, S. and Ritz (2007) was used as the univariate method, and the maximum likelihood method was used to verify the equality of the parameter e. The model converged for almost all replicates, but no systematic behaviour was observed that would explain the non-convergence of a particular replicate. Secondly, the estimates of the three model parameters were taken as dependent variables in a multivariate analysis of variance. Observing that the three, jointly, were significant by the Pillai, Wilks, Roy and Hotelling-Lawley tests, Tukey's test was performed for the parameter e and compared with the first method. With the same significance level, this procedure was less able to identify differences between the parameter means of the grass varieties than the method proposed by Regazzi (2015).
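A hedged sketch of fitting the three-parameter log-logistic dose-response curve of Knezevic and Ritz (2007) with generic tools (the dose-mass pairs below are invented; the thesis fits each replicate separately):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, b, d, e):
    """Three-parameter log-logistic: upper limit d, slope b, ED50 e."""
    return d / (1.0 + np.exp(b * (np.log(dose) - np.log(e))))

# hypothetical dry-mass responses (g) at increasing glyphosate doses
dose = np.array([30.0, 60, 120, 240, 480, 960, 1920, 3840])
mass = np.array([9.8, 9.5, 8.9, 7.2, 4.1, 1.9, 0.8, 0.4])

(b, d, e), _ = curve_fit(log_logistic, dose, mass, p0=[1.0, 10.0, 500.0])
print(f"ED50 estimate: {e:.0f} g a.e. ha^-1")
```

The per-replicate estimates of (b, d, e) are then the dependent variables fed into the multivariate analysis of variance.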
APA, Harvard, Vancouver, ISO, and other styles
28

Russo, Cibele Maria. ""Análise de um modelo de regressão com erros nas variáveis multivariado com intercepto nulo"." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-01082006-214556/.

Full text
Abstract:
To analyze characteristics of interest in a real odontological data set presented in Hadgu & Koch (1999), we fit a multivariate null-intercept errors-in-variables regression model. This data set consists of measurements of dental plaque index (subject to measurement error) taken on three groups of volunteers before and after the use of two experimental mouth rinses (A and B) or a control mouth rinse, with measurements at the beginning of the study, after three months and after six months of use. In this case, a possible dependency structure between the measurements taken on the same individual must be incorporated into the model and, in addition, there are two response variables for each individual. After presenting the statistical model, we obtain the maximum likelihood estimates of the parameters using the iterative EM algorithm, and we test the hypotheses of interest using asymptotic tests (Wald, likelihood ratio and score). Since no optimal test exists in this case, a simulation study is presented to verify the behaviour of the three test statistics for different sample sizes and different parameter values. Finally, we carry out a diagnostic study to identify possible influential observations in the model, considering the local influence approach proposed by Cook (1986) and the conformal normal curvature developed by Poon & Poon (1999).
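For orientation only, the three asymptotic tests compared in the simulation study have the familiar generic forms, for a hypothesis \( \theta = \theta_0 \) with log-likelihood \( \ell \), score \( U \) and information \( I \):

\[ W = (\hat\theta-\theta_0)^{\top}\,\widehat{\operatorname{Var}}(\hat\theta)^{-1}(\hat\theta-\theta_0), \qquad LR = 2\{\ell(\hat\theta)-\ell(\theta_0)\}, \qquad S = U(\theta_0)^{\top} I(\theta_0)^{-1} U(\theta_0), \]

all asymptotically chi-squared under the null, which is why their finite-sample behaviour has to be compared by simulation.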
APA, Harvard, Vancouver, ISO, and other styles
29

Schwab, Nicolas Vilczaki 1986. "Determinação de dióxido de titânio em cremes dentais por fluorescência de raios X e calibração multivariada." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/248800.

Full text
Abstract:
Advisor: Maria Izabel Maretti Silveira Bueno
Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Química
The direct application of the fundamental parameters (FP) method in elemental determinations by X-ray fluorescence (XRF) is not feasible for complex samples, such as dentifrices or toothpastes, as it is for simpler samples such as alloys or mixtures of elemental oxides. Chemometric methods, by contrast, do not rely on theoretical coefficients related to the sample matrix or on geometrical and instrumental parameters, and allow models with adequate prediction ability to be obtained. This work proposes a methodology for determining titanium dioxide directly in toothpastes by multivariate calibration (PLS), with sample pretreatment limited, in some cases, to homogenization, and requiring only about 5 minutes per analysis. Twenty-two toothpaste samples of different Brazilian brands and presentations were used to build and validate the model. Direct XRF spectra of the toothpastes were combined with chemometrics, with reference TiO2 concentrations obtained by the fundamental parameters method applied to the ashes of the same samples, a procedure that requires at least 8 hours per analysis. Eight latent variables were necessary to describe the whole sample set, making the model suitable for direct analysis of the different brands found on the Brazilian market without over-fitting. The model was able to predict the TiO2 content of external samples with errors of up to 16% for 100 s and 9% for 700 s of irradiation, with no significant difference between the methods, as indicated statistically by a t-test at the 95% confidence level. Thus, the proposed approach is effective for determining TiO2 contents in complex matrices such as toothpastes, quickly and with minimal sample preparation.
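A hedged sketch of the calibration step (random arrays stand in for the XRF spectra and the fundamental-parameters reference values; the channel count and validation scheme are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: direct XRF spectra (rows = toothpastes, columns = energy channels);
# y: reference TiO2 contents from fundamental parameters on the ashes
rng = np.random.default_rng(5)
X = rng.normal(size=(22, 1024))
y = rng.uniform(0.1, 2.0, size=22)

pls = PLSRegression(n_components=8)         # eight latent variables, as reported
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))  # cross-validated prediction error
print(rmsecv)
```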
Master's
Analytical Chemistry
Master in Chemistry
APA, Harvard, Vancouver, ISO, and other styles
30

PEREIRA, Fernando. "Analyse spatio-temporelle du champ géomagnétique et des processus d'accélération solaires observés en émission radio." PhD thesis, Université d'Orléans, 2004. http://tel.archives-ouvertes.fr/tel-00006128.

Full text
Abstract:
The study of Sun-Earth relations frequently requires the analysis of multivariate data, which depend on several variables (time, space, ...). To characterize the physical processes, we propose the use of multivariate statistical methods (SVD, ICA, ...). Such methods project the data onto a small number of modes that capture the salient features of their behaviour and that must then be given a physical interpretation. We apply them to two examples: (1) the geomagnetic field, measured at different locations around the globe, and (2) the acceleration processes in the solar corona observed by the Nançay radioheliograph. Starting from purely statistical modes, we show that it is possible to bring out physically known processes and to better isolate very weak disturbances such as geomagnetic jerks.
APA, Harvard, Vancouver, ISO, and other styles
31

Brito, Geysa Barreto. "Estratégias para determinação direta de elementos químicos em amostras de macroalgas marinhas por técnicas espectroanalíticas." Instituto de Química, 2015. http://repositorio.ufba.br/ri/handle/ri/19131.

Full text
Abstract:
CNPq and FAPESB
This work, developed in the Analytical Chemistry Research Group at UFBA within the scope of the FAPESB project Assessment of Pollution and Identification of Recovery Processes for Mangrove Regions under the Influence of Industrial Activities in All Saints Bay, had the objective of studying and developing two methods for the direct determination of chemical elements in marine macroalgae. These organisms are being successfully used for monitoring environmental quality and for the bioremediation of aquatic contamination. Macroalgae are also known for their high nutritional value and great potential for the production of biofuels. Given their environmental, nutritional and energetic importance, the study of their mineral composition is relevant for evaluating potential applications and consequences. Several studies have addressed the qualitative and quantitative determination of macro, micro and trace elements in macroalgae, but few have been dedicated to their direct analysis. The application of direct solid-sample analysis is an attractive alternative for decreasing costs, reagent consumption, analysis time and residue generation, while minimizing sample handling and thus avoiding analyte losses and contamination. In the work described here, energy dispersive X-ray fluorescence (EDXRF) and laser-induced breakdown spectroscopy (LIBS) were evaluated for the elemental analysis of marine macroalgae. The main difficulty of both techniques for the direct analysis of solids is establishing an external calibration strategy, because solid samples may be heterogeneous, may have irregular surfaces, and solid standards compatible with the studied matrices are frequently unavailable. These factors may negatively affect the accuracy, precision and reliability of the method. Calibration strategies based on the use of same-matrix samples and multivariate analysis were therefore investigated. For comparison purposes, a validated method based on microwave-assisted acid digestion with determination by inductively coupled plasma optical emission spectrometry (ICP OES) was applied in parallel, together with seven certified reference materials (CRMs) of different plant materials. EDXRF enabled the determination of Ca, K and Mg. The r2 values of the calibration models, precision (%) for n = 10, LOQ (µg g-1) and recovery ranges (%) in different CRMs were: Ca (0.9233, 2.07, 109.5 and 85.0-89.3), K (0.9964, 3.82, 207.0 and 126.6-129.6) and Mg (0.9432, 4.07, 195.6 and 92.7-115.4). LIBS, using PLS (partial least squares) multivariate regression, generated validation models with number of variables, latent variables (LVs), mean cross-validation error (RMSECV, in µg g-1), r2 and recovery range (%) for the CRMs of: 55, 3, 9094, 0.9174 and 124-134 (Ca); 75, 1, 4264, 0.9626 and 84-90.7 (K); 235, 1, 1315, 0.5299 and 60-4953 (Mg); and 180, 2, 2580, 0.9781 and
APA, Harvard, Vancouver, ISO, and other styles
32

Nicollin, Florence. "Traitement de profils sismiques "ECORS" par projection sur le premier vecteur propre de la matrice spectrale." Grenoble INPG, 1989. http://www.theses.fr/1989INPG0101.

Full text
Abstract:
Matrix filtering, obtained by projection onto the first eigenvector of the spectral matrix, is a powerful tool for the study of seismic profiles. It is a multidimensional processing method that is fairly simple to implement and that, by improving the signal-to-noise ratio, makes it possible to bring out the energy arrivals that characterize geological structures. The method requires few a priori assumptions; it is based on the frequency-domain relations between signals, whose transfer function is defined by the first eigenvector of the matrix. Its application to various ECORS profiles and their structural interpretation make it possible to delineate the performance and the limits of the method: the processing of wide-angle reflection profiles (preliminary ECORS Alpes 85 campaign) is not very effective because of the very noisy character of the data; on the other hand, the processing of sections of the large-profile type gives good results (ECORS Alpes 86 vertical reflection and wide-angle reflection profiles, ECORS northern France refraction profile).
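A hedged time-domain analogue of the projection described above (the thesis filters frequency by frequency using the spectral matrix; this zero-lag covariance sketch with invented traces only conveys the idea):

```python
import numpy as np

# traces: rows = seismic channels, columns = time samples (placeholder data:
# a common sinusoidal arrival buried in channel noise)
rng = np.random.default_rng(6)
signal = np.sin(np.linspace(0, 40 * np.pi, 2048))
traces = 0.8 * signal + rng.normal(scale=1.0, size=(24, 2048))

C = traces @ traces.T / traces.shape[1]  # cross-covariance between channels
w, V = np.linalg.eigh(C)                 # eigenvalues in ascending order
v1 = V[:, -1]                            # first (dominant) eigenvector
filtered = np.outer(v1, v1 @ traces)     # projection onto the first mode
```

The coherent arrival concentrates in the first mode, which is what improves the signal-to-noise ratio after projection.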
APA, Harvard, Vancouver, ISO, and other styles
33

HUANG, YAU-YI, and 黃耀億. "A Comparison of Multivariate Ratio Estimators." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/19315467521060165397.

Full text
Abstract:
Master's thesis
National Taipei University
Department of Statistics
95
This paper compares multivariate ratio estimators using a Monte Carlo approach. The multivariate ratio estimators explored in this paper are derived from univariate ratio estimators summarized from previous studies. Apart from the traditional and the Hartley & Ross multivariate ratio estimators proposed by Olkin, no other univariate ratio estimators had been extended to the multivariate setting. Therefore, following Olkin's concept of expanding a univariate ratio estimator into a multivariate ratio estimator, the multivariate ratio estimators and their variances are derived from the corresponding univariate ratio estimators summarized from previous studies. Using a Monte Carlo approach, the efficiency of the proposed multivariate ratio estimators is then compared in terms of bias, variance and MSE. The simulation results show that all the other ratio estimators have smaller bias than the traditional ratio estimator for estimating the population total, in both the univariate and the multivariate case. The simulation results also show that the bias can be reduced as the sample size increases, and that the variance of the ratio estimators is smaller than the variance of the mean-per-unit estimator for estimating the population total. This implies that we can reduce the variance of the estimator and increase estimation efficiency by increasing the sample size or increasing the number of groups.
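As a hedged illustration of the univariate building block being compared (a minimal Monte Carlo with an invented population, mirroring the bias/variance/MSE comparison against the mean-per-unit estimator; the thesis extends this to several auxiliary variables):

```python
import numpy as np

rng = np.random.default_rng(7)
N, n, reps = 10_000, 100, 2_000
x = rng.gamma(4.0, 2.0, size=N)               # auxiliary variable
y = 3.0 * x + rng.normal(0.0, 2.0, size=N)    # study variable, correlated with x
Y_total, X_total = y.sum(), x.sum()           # population totals

est_ratio, est_mean = [], []
for _ in range(reps):
    s = rng.choice(N, size=n, replace=False)            # SRS without replacement
    est_ratio.append(y[s].sum() / x[s].sum() * X_total) # classical ratio estimator
    est_mean.append(N * y[s].mean())                    # mean-per-unit estimator

for name, est in (("ratio", est_ratio), ("mean-per-unit", est_mean)):
    e = np.asarray(est)
    print(name, "bias:", e.mean() - Y_total, "MSE:", np.mean((e - Y_total) ** 2))
```

With a strong x-y correlation, the ratio estimator's MSE comes out well below the mean-per-unit MSE, which is the effect the thesis quantifies for the multivariate extensions.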
APA, Harvard, Vancouver, ISO, and other styles
34

Yilmaz, Yildiz Elif. "Estimation and Goodness of Fit for Multivariate Survival Models Based on Copulas." Thesis, 2009. http://hdl.handle.net/10012/4571.

Full text
Abstract:
We provide ways to test the fit of a parametric copula family for bivariate censored data with or without covariates. The proposed copula family is tested by embedding it in an expanded parametric family of copulas. When parameters in the proposed and the expanded copula models are estimated by maximum likelihood, a likelihood ratio test can be used. However, when they are estimated by two-stage pseudolikelihood estimation, the corresponding test is a pseudolikelihood ratio test. The two-stage procedures offer less computation, which is especially attractive when the marginal lifetime distributions are specified nonparametrically or semiparametrically. It is shown that the likelihood ratio test is consistent even when the expanded model is misspecified. Power comparisons of the likelihood ratio and the pseudolikelihood ratio tests with some other goodness-of-fit tests are performed both when the expanded family is correct and when it is misspecified. They indicate that model expansion provides a convenient, powerful and robust approach. We introduce a semiparametric maximum likelihood estimation method in which the copula parameter is estimated without assumptions on the marginal distributions. This method and the two-stage semiparametric estimation method suggested by Shih and Louis (1995) are generalized to regression models with Cox proportional hazards margins. The two-stage semiparametric estimator of the copula parameter is found to be about as good as the semiparametric maximum likelihood estimator. Semiparametric likelihood ratio and pseudolikelihood ratio tests are considered to provide goodness of fit tests for a copula model without making parametric assumptions for the marginal distributions. Both when the expanded family is correct and when it is misspecified, the semiparametric pseudolikelihood ratio test is almost as powerful as the parametric likelihood ratio and pseudolikelihood ratio tests while achieving robustness to the form of the marginal distributions. The methods are illustrated on applications in medicine and insurance. Sequentially observed survival times are of interest in many studies but there are difficulties in modeling and analyzing such data. First, when the duration of followup is limited and the times for a given individual are not independent, the problem of induced dependent censoring arises for the second and subsequent survival times. Non-identifiability of the marginal survival distributions for second and later times is another issue, since they are observable only if preceding survival times for an individual are uncensored. In addition, in some studies, a significant proportion of individuals may never have the first event. Fully parametric models can deal with these features, but lack of robustness is a concern, and methods of assessing fit are lacking. We introduce an approach to address these issues. We model the joint distribution of the successive survival times by using copula functions, and provide semiparametric estimation procedures in which copula parameters are estimated without parametric assumptions on the marginal distributions. The performance of semiparametric estimation methods is compared with some other estimation methods in simulation studies and shown to be good. The methodology is applied to a motivating example involving relapse and survival following colon cancer treatment.
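A hedged sketch of the two-stage semiparametric idea for complete (uncensored) data, using a Clayton copula as the parametric family (the thesis handles censoring, covariates and other families; all names and data here are invented):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_loglik(theta, u, v):
    """Log-likelihood of the Clayton copula density at pseudo-observations."""
    s = u ** (-theta) + v ** (-theta) - 1.0
    return np.sum(np.log1p(theta)
                  - (theta + 1.0) * (np.log(u) + np.log(v))
                  - (2.0 + 1.0 / theta) * np.log(s))

def two_stage_clayton(x, y):
    """Stage 1: nonparametric margins via rescaled ranks (pseudo-observations).
    Stage 2: maximize the copula likelihood in the dependence parameter."""
    u = rankdata(x) / (len(x) + 1.0)
    v = rankdata(y) / (len(y) + 1.0)
    res = minimize_scalar(lambda t: -clayton_loglik(t, u, v),
                          bounds=(1e-3, 20.0), method="bounded")
    return res.x

rng = np.random.default_rng(8)
x = rng.exponential(size=300)
y = x + rng.exponential(size=300)   # induces positive dependence
print(two_stage_clayton(x, y))      # estimated Clayton theta
```

The two-stage split is what keeps the marginal distributions free of parametric assumptions while the dependence parameter is still estimated by likelihood.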
APA, Harvard, Vancouver, ISO, and other styles
35

Liao, Ran. "Joint modeling of bivariate time to event data with semi-competing risk." Diss., 2016. http://hdl.handle.net/1805/12076.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Survival analysis often encounters situations of correlated multiple events, including the same type of event observed from siblings or multiple events experienced by the same individual. In this dissertation, we focus on the joint modeling of bivariate time to event data, with the estimation of the association parameters, and also on the situation of a semi-competing risk. This dissertation contains three related topics on bivariate time to event models. The first topic is estimating the cross ratio, which is an association parameter between bivariate survival functions. One advantage of using the cross ratio as a dependence measure is that it has an attractive hazard ratio interpretation by comparing two groups of interest. We compare parametric, two-stage semiparametric and nonparametric approaches in simulation studies to evaluate the estimation performance among the three estimation approaches. The second part is on semiparametric models of univariate time to event with a semi-competing risk. The third part is on semiparametric models of bivariate time to event with semi-competing risks. A frailty-based model framework was used to accommodate potential correlations among the multiple event times. We propose two estimation approaches. The first approach is a two-stage semiparametric method in which the cumulative baseline hazards are first estimated by nonparametric methods and then used in the likelihood function. The second approach is a penalized partial likelihood approach. Simulation studies were conducted to compare the estimation accuracy between the proposed approaches. Data from an elderly cohort were used to examine factors associated with times to multiple diseases, considering death as a semi-competing risk.
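For reference, the cross ratio estimated in the first topic is usually defined (following Oakes) from the joint survivor function \( S(t_1, t_2) \) as

\[ \theta^{*}(t_1, t_2) \;=\; \frac{S(t_1,t_2)\; \partial^2 S(t_1,t_2)/\partial t_1 \partial t_2}{\bigl\{\partial S(t_1,t_2)/\partial t_1\bigr\}\bigl\{\partial S(t_1,t_2)/\partial t_2\bigr\}}, \]

the ratio of the hazard of one event time given the other has just occurred to the hazard given the other has not yet occurred; \( \theta^{*} = 1 \) corresponds to independence, which is the hazard-ratio interpretation the abstract mentions.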
APA, Harvard, Vancouver, ISO, and other styles
36

PAI, TSAI-LING, and 白彩綾. "Integrating Dynamic Principal Component Analysis-Decorrelated Residuals with Generalized Likelihood Ratio Test for Autocorrelated Multivariate Process Fault Detection." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/595h64.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Department of Industrial Engineering and Management
105
Principal Component Analysis (PCA) has been widely used for multivariate process fault detection. PCA can effectively detect process faults under the premise of independent observations. However, data acquired from a real process usually exhibit autocorrelation. Therefore, Ku, Storer and Georgakis (1995) suggested introducing lagged variables into the original data matrix and then applying the traditional PCA algorithm to the augmented matrix, calling this method Dynamic PCA (DPCA). Moreover, Rato and Reis (2013) discovered that the T^2 and Q monitoring statistics calculated from DPCA still present autocorrelation. To tackle this issue, Rato and Reis (2013) developed a Dynamic PCA based on Decorrelated Residuals (DPCA-DR) method in an attempt to reduce the autocorrelation of T^2 and Q. Even though the implementation of DPCA-DR can lower the autocorrelation of the monitoring metrics, the autocorrelation cannot be eliminated. Furthermore, T^2 and Q are essentially calculated from the Mahalanobis distance, in which only the most recent observation is taken into consideration, leading to ineffective detection of small process changes. Accordingly, this study develops a DPCA-DR-GLR method in an effort to detect a wide range of process changes. DPCA-DR is used to reduce the data dimensionality and the autocorrelation of T^2 and Q. The Generalized Likelihood Ratio (GLR) is adopted as the monitoring statistic because it simultaneously considers the most recent observation and past observations. The advantages of the proposed method include: 1) it can detect a wide range of process changes; 2) it can estimate the process change point, which provides practitioners with fault-diagnosis information; 3) no further parameters need to be given during monitoring. The efficiency of the proposed method is verified via three examples: a simulated multivariate autocorrelated process, the Tennessee Eastman process and white-wine inspection. Results demonstrate that the proposed method can effectively detect multivariate autocorrelated process faults.
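A hedged sketch of the lag-augmentation step that defines DPCA (invented AR(1)-type data; the decorrelated-residual construction and the GLR monitoring statistic proposed in the thesis are omitted):

```python
import numpy as np
from sklearn.decomposition import PCA

def augment_with_lags(X, lags):
    """Stack each observation with its `lags` predecessors (DPCA data matrix)."""
    n = X.shape[0]
    return np.hstack([X[lags - k : n - k] for k in range(lags + 1)])

rng = np.random.default_rng(9)
e = rng.normal(size=(500, 3))
X = np.empty_like(e)
X[0] = e[0]
for t in range(1, 500):            # autocorrelated (AR(1)-type) process
    X[t] = 0.8 * X[t - 1] + e[t]

Xa = augment_with_lags(X, lags=2)  # columns: X_t, X_{t-1}, X_{t-2}
pca = PCA(n_components=3).fit(Xa)  # DPCA: ordinary PCA on the augmented data
scores = pca.transform(Xa)         # basis for T^2 / Q style monitoring
```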
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Mei-Hua, and 陳美華. "Study of biomarker and diagnostic ratio approaches for oil spill identification- application of modified oil spill identification flowchart with multivariate statistical analysis techniques." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/bj5qg4.

Full text
Abstract:
Master's thesis
National United University
Department of Safety, Health and Environmental Engineering (Master's Program)
99
Oil-source tracking and identification technologies are constantly being developed and applied because oil spill accidents occur frequently and greatly impact environmental and ecological systems as well as the economy. In this study, fresh crude oils from different regions and countries were analyzed and recognized by using chemical fingerprint chromatograms together with source-sensitive diagnostic ratios. Following the proposed oil spill identification flowchart with appropriate biomarkers and source-specific ratios, it is possible to identify the characteristics of crude oils from unknown sources. Moreover, oil characteristics can be classified even more effectively by multivariate statistical approaches such as principal component analysis (PCA), hierarchical cluster analysis (HCA), repeatability limits and Student's t-test, which statistically evaluate the imperceptible differences between oils. The proposed oil spill identification flowchart with appropriate biomarkers and diagnostic ratios, together with the multivariate statistical analysis techniques, was also applied effectively to identify different diesel types, establish oil-to-oil correlations, and track oil sources.
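A hedged sketch of the chemometric step (placeholder arrays stand in for the measured diagnostic ratios; the study's ratio set, sample count and preprocessing may differ):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = oil samples, columns = source-sensitive diagnostic ratios
# (e.g. biomarker peak-area ratios); values here are invented
rng = np.random.default_rng(10)
ratios = rng.normal(size=(30, 8))

Z = StandardScaler().fit_transform(ratios)
scores = PCA(n_components=2).fit_transform(Z)  # PCA score plot groups oils
tree = linkage(Z, method="ward")               # HCA dendrogram structure
groups = fcluster(tree, t=3, criterion="maxclust")
```

Oils of common origin should cluster together in the score plot and in the dendrogram, which is how PCA and HCA support oil-to-oil correlation.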
APA, Harvard, Vancouver, ISO, and other styles
38

Dzikiti, Weston. "Banking sector, stock market development and economic growth in Zimbabwe : a multivariate causality framework." Diss., 2017. http://hdl.handle.net/10500/22818.

Full text
Abstract:
The thesis examined the comprehensive causal relationship between the banking sector, stock market development and economic growth in a multivariate framework using Zimbabwean time series data from 1988 to 2015. Three banking sector development proxies (total financial sector credit, banking credit to the private sector and broad money M3) and three stock market development proxies (stock market capitalization, value traded and turnover ratio) were employed to estimate both long- and short-run relationships between the banking sector, the stock market and economic growth in Zimbabwe. The study employs the vector error correction model (VECM) as the main estimation technique and the autoregressive distributed lag (ARDL) approach as a robustness testing technique. Results showed that in Zimbabwe a significant causal relationship runs from banking sector and stock market development to economic growth in the long run, without any feedback effects. In the short run, however, a negative yet statistically significant causal relationship runs from economic growth to banking sector and stock market development. The study further concludes that there is a unidirectional causal relationship running from stock market development to banking sector development in Zimbabwe in both the short and long run, although this relationship is more significant in the short run than in the long run. The thesis adopts the complementarity view and recommends that monetary policies be implemented in step with economic growth: monetary authorities should formulate policies to promote both banks and stock markets in line with corresponding growth in Zimbabwe's economy.
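A minimal sketch of the VECM estimation step with the urca package follows; the variable names and simulated series are stand-ins for the Zimbabwean data, not the thesis's dataset:

```r
# Johansen cointegration test and restricted VECM on simulated stand-ins.
library(urca)

set.seed(42)
n    <- 28                                        # 1988-2015, annual
gdp  <- cumsum(rnorm(n, 0.03, 0.05))              # placeholder log GDP
cred <- 0.8 * gdp + cumsum(rnorm(n, 0, 0.04))     # placeholder credit proxy
mcap <- 0.6 * gdp + cumsum(rnorm(n, 0, 0.06))     # placeholder market cap proxy
zw   <- data.frame(gdp, cred, mcap)

jo <- ca.jo(zw, type = "trace", ecdet = "const", K = 2)  # trace test for rank
summary(jo)

vecm <- cajorls(jo, r = 1)   # VECM restricted to cointegration rank 1
summary(vecm$rlm)            # short-run dynamics and error correction term
```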
Business Management
M. Com. (Business Management)
APA, Harvard, Vancouver, ISO, and other styles
39

Austin, Elizabeth. "Regression Analysis for Ordinal Outcomes in Matched Study Design: Applications to Alzheimer's Disease Studies." 2018. https://scholarworks.umass.edu/masters_theses_2/628.

Full text
Abstract:
Alzheimer's Disease (AD) affects nearly 5.4 million Americans as of 2016 and is the most common form of dementia. The disease is characterized by the presence of neurofibrillary tangles and amyloid plaques [1]. The amount of plaques is measured by Braak stage, post-mortem. It is known that AD is positively associated with hypercholesterolemia [16]. As statins are the most widely used cholesterol-lowering drugs, there may be associations between statin use and AD. We hypothesize that those who use statins, specifically lipophilic statins, are more likely to have a low Braak stage in post-mortem analysis. In order to address this hypothesis, we wished to fit a regression model for ordinal outcomes (e.g., high, moderate, or low Braak stage) using data collected from the National Alzheimer's Coordinating Center (NACC) autopsy cohort. As the outcomes were matched on the length of follow-up, a conditional likelihood-based method is often used to estimate the regression coefficients. However, it can be challenging to solve the conditional likelihood-based estimating equation numerically, especially when there are many matching strata. Given that the likelihood of a conditional logistic regression model is equivalent to the partial likelihood from a stratified Cox proportional hazards model, the existing R function for a Cox model, coxph( ), can be used for estimation of a conditional logistic regression model. We investigate whether this strategy can be extended to a regression model for ordinal outcomes. More specifically, our aims are to (1) demonstrate the equivalence between the exact partial likelihood of a stratified discrete time Cox proportional hazards model and the likelihood of a conditional logistic regression model, (2) prove equivalence, or lack thereof, between the exact partial likelihood of a stratified discrete time Cox proportional hazards model and the conditional likelihood of models appropriate for multiple ordinal outcomes: an adjacent categories model, a continuation-ratio model, and a cumulative logit model, and (3) clarify how to set up a stratified discrete time Cox proportional hazards model for multiple ordinal outcomes with matching using the existing coxph( ) R function and interpret the resulting regression coefficient estimates. We verified these theoretical results through simulation studies: we simulated data from the three models of interest (an adjacent categories model, a continuation-ratio model, and a cumulative logit model), fit a Cox model using the existing coxph( ) R function to the simulated data produced by each model, and compared the coefficient estimates obtained. Lastly, we fit a Cox model to the NACC dataset, using Braak stage, with three ordinal categories, as the outcome variable. We included predictors for age at death, sex, genotype, education, comorbidities, number of days having taken lipophilic statins, number of days having taken hydrophilic statins, and time to death, and matched cases to controls on the length of follow-up. All findings and their implications are discussed in detail.
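The computational trick the thesis starts from can be shown for the binary base case: in the survival package, a conditional logistic regression is exactly a stratified Cox fit with a constant event time and exact ties. The sketch below uses illustrative variable names, not the NACC fields; the ordinal extension is the thesis's contribution:

```r
# Conditional logistic regression via coxph(): all subjects share one event
# time, strata() encodes the matched sets, and exact ties recover the
# conditional likelihood. Data are simulated for illustration.
library(survival)

set.seed(4)
d <- data.frame(
  stratum     = rep(1:50, each = 2),     # matched pairs (e.g. on follow-up)
  braak_low   = rbinom(100, 1, 0.5),     # binary stand-in for the outcome
  statin_days = rpois(100, 200),
  age_death   = rnorm(100, 80, 6)
)

fit <- coxph(Surv(rep(1, nrow(d)), braak_low) ~ statin_days + age_death +
               strata(stratum), data = d, ties = "exact")
summary(fit)

# survival::clogit() wraps exactly this construction:
fit2 <- clogit(braak_low ~ statin_days + age_death + strata(stratum), data = d)
```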
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, Yen-chun, and 林妍君. "Testing for Constant Hedge Ratios in Futures Markets: A Multivariate GARCH Approach." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/46900172873032679154.

Full text
Abstract:
Master's thesis
Nanhua University
Graduate Institute of Economics
94
Allowing for a more flexible BEKK form of time-varying volatility, and with the day-of-the-week effect embedded in the variance-covariance matrix, the study follows a bivariate GARCH parameterization from Moschini and Myers (2002) to test the hypotheses that the optimal futures hedge ratios of MSCI Taiwan Index futures and TAIFEX Stock Index futures are constant over time. The period covered is September 1, 1998 through December 30, 2005, comprising 1867 daily observations over a span of 2921 calendar days. The empirical results show that the null hypothesis of a constant hedge ratio is rejected at conventional significance levels and that the time-varying optimal hedge ratios cannot be explained solely by the day-of-the-week effect. It is also found that over 80% of the variance of the unhedged portfolio's returns can be reduced by the hedging strategies suggested in the study for both MSCI Taiwan Index futures and TAIFEX Stock Index futures.
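For orientation, here is a back-of-envelope R sketch of the quantity under test, the minimum-variance hedge ratio h* = Cov(Δs, Δf)/Var(Δf), estimated once by OLS (the constant-ratio null) and over rolling windows as a crude time-varying view; the thesis tests constancy formally inside a bivariate BEKK GARCH model, which this toy computation does not reproduce:

```r
# Constant vs. rolling-window hedge ratio on simulated spot/futures prices.
set.seed(7)
f  <- cumsum(rnorm(1867, 0, 0.012))            # toy log futures prices
s  <- 0.9 * f + cumsum(rnorm(1867, 0, 0.004))  # toy log spot prices
ds <- diff(s); df <- diff(f)                   # daily returns

h_const <- coef(lm(ds ~ df))["df"]             # constant OLS hedge ratio

window <- 250
h_roll <- sapply(window:length(df), function(t) {
  i <- (t - window + 1):t
  cov(ds[i], df[i]) / var(df[i])               # local minimum-variance ratio
})
c(constant = unname(h_const), roll_range = range(h_roll))
```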
APA, Harvard, Vancouver, ISO, and other styles
41

Yao, Chia-Chu, and 姚嘉初. "Hedge Ratios and Hedging Effectiveness of Multivariate GARCH Model - Evidence from Taiwan Futures Market." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/03341698904880789632.

Full text
Abstract:
Master's thesis
Soochow University
Department of Business Administration
97
This study uses TAIFEX Taiwan stock index futures as the hedging instrument, aiming to eliminate the risk of the Taiwan stock index. Since academic research finds that financial asset returns exhibit ARCH effects, the GARCH model is expected to achieve more effective hedging performance than other models. The optimal hedge ratios are estimated from the OLS model, the VAR model, the VECM model, and a multivariate diagonal Vec GARCH model (MVGARCH). The effectiveness of the hedge ratios is measured by minimum return variance and maximum utility under different hedging periods. The sample encompasses daily series of both the stock index and futures from August 24, 1998 to December 31, 2008. The results demonstrate that the MVGARCH model has the best hedging performance only for short-term hedging periods, and that the VECM model, which incorporates error correction, achieves higher effectiveness than the VAR model.
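A short sketch of the two effectiveness criteria named in the abstract, variance reduction and mean-variance utility, applied to any supplied hedge ratio; the data and the risk-aversion parameter are assumptions for illustration:

```r
# Hedging effectiveness: percentage variance reduction and a mean-variance
# utility for hedged returns ds - h * df (risk_aversion is assumed).
hedge_effectiveness <- function(ds, df, h, risk_aversion = 4) {
  hedged <- ds - h * df
  c(variance_reduction = 1 - var(hedged) / var(ds),
    utility            = mean(hedged) - risk_aversion * var(hedged))
}

set.seed(9)
df <- rnorm(2500, 0, 0.012)                 # toy futures returns
ds <- 0.95 * df + rnorm(2500, 0, 0.005)     # toy spot returns
hedge_effectiveness(ds, df, h = cov(ds, df) / var(df))
```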
APA, Harvard, Vancouver, ISO, and other styles
42

LO, YU-HUI, and 羅玉惠. "INTEGRATING FINANCIAL RATIOS AND CORPORATE GOVERNANCE INDICES TO BUILD THE MODEL OF CREDIT RATING PREDICTION—APPLICATION OF MULTIVARIATE DISCRIMINANT ANALYSIS AND ARTIFICIAL NEURAL NETWORK." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/23148047855524939097.

Full text
Abstract:
Master's thesis
National Taipei University
Department of Business Administration
95
After the Asian financial crisis in 1997, many well-known enterprises faced operating crises and lost investors' confidence. One of the main reasons is that the mechanism of corporate governance was not sound enough. Many studies have found that non-financial information can reflect business crises better than financial information. The function of a credit rating is to evaluate a company's ability to meet its financial obligations, so it can be seen as an indicator of business financial crisis, especially as many troubled "tank stocks" have recently appeared in Taiwan. This research therefore examines how non-financial information, in the form of corporate governance variables, can predict corporate credit ratings, and compares models built with two distinct forecasting technologies. The empirical results are as follows. First, the research uses multivariate discriminant analysis to build a prediction model and screen key variables; the hit ratio of the model that integrates financial ratios with corporate governance indices is better. A genetic algorithm is then used to extract the final 9 variables with the heaviest impact on the credit rating result; most of these variables are corporate governance indices, which indicates that corporate governance is an important information source for business evaluation. Second, we compare the forecasting ability of the multivariate discriminant analysis model with an artificial neural network model. The latter, built with only the 9 selected variables, attains better whole validity (90% hit ratio), internal validity (89.29% hit ratio) and external validity (88.57% hit ratio) than the discriminant analysis model. The artificial neural network model thus generalizes better and can help external stakeholders apply it to different sample businesses to forecast their degree of risk.
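An illustrative sketch of the two model families compared in the thesis, using standard R packages (MASS for discriminant analysis, nnet for a single-hidden-layer network); the variables and data below are made up and do not represent the thesis's indicator set:

```r
# Multivariate discriminant analysis vs. a small neural network classifier.
library(MASS)
library(nnet)

set.seed(3)
n <- 300
dat <- data.frame(
  rating       = factor(sample(c("A", "B", "C"), n, replace = TRUE)),
  debt_ratio   = runif(n),                 # placeholder financial ratios
  roa          = rnorm(n),
  board_indep  = runif(n),                 # placeholder governance indices
  insider_hold = runif(n)
)

lda_fit <- lda(rating ~ ., data = dat)
mean(predict(lda_fit)$class == dat$rating)           # in-sample hit ratio

ann_fit <- nnet(rating ~ ., data = dat, size = 5, decay = 0.01,
                maxit = 500, trace = FALSE)
mean(predict(ann_fit, type = "class") == dat$rating) # in-sample hit ratio
```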
APA, Harvard, Vancouver, ISO, and other styles
43

Vana, Laura. "Statistical Modeling for Credit Ratings." Thesis, 2018. http://epub.wu.ac.at/6439/1/dissertation_lvana.pdf.

Full text
Abstract:
This thesis deals with the development, implementation and application of statistical modeling techniques which can be employed in the analysis of credit ratings. Credit ratings are one of the most widely used measures of credit risk and are relevant for a wide array of financial market participants, from investors, as part of their investment decision process, to regulators and legislators as a means of measuring and limiting risk. The majority of credit ratings are produced by the "Big Three" credit rating agencies Standard & Poor's, Moody's and Fitch. Especially in the light of the 2007-2009 financial crisis, these rating agencies have been strongly criticized for failing to assess risk accurately and for the lack of transparency in their rating methodology. However, they continue to maintain a powerful role as financial market participants and have a huge impact on the cost of funding. These points of criticism call for the development of modeling techniques that can 1) facilitate an understanding of the factors that drive the rating agencies' evaluations, and 2) generate insights into the rating patterns that these agencies exhibit. This dissertation consists of three research articles. The first one focuses on variable selection and assessment of variable importance in accounting-based models of credit risk. The credit risk measure employed in the study is derived from credit ratings assigned by the rating agencies Standard & Poor's and Moody's. To deal with the lack of theoretical foundation specific to this type of model, state-of-the-art statistical methods are employed. Different models are compared based on a predictive criterion, and model uncertainty is accounted for in a Bayesian setting. Parsimonious models are identified after applying the proposed techniques. The second paper proposes the class of multivariate ordinal regression models for the modeling of credit ratings. The model class is motivated by the fact that correlated ordinal data arise naturally in the context of credit ratings. From a methodological point of view, we extend existing model specifications in several directions by allowing, among others, for a flexible covariate-dependent correlation structure between the continuous variables underlying the ordinal credit ratings. The estimation of the proposed models is performed using composite likelihood methods. Insights into the heterogeneity among the "Big Three" are gained when applying this model class to the multiple credit ratings dataset. A comprehensive simulation study on the performance of the estimators is provided. The third research paper deals with the implementation and application of the model class introduced in the second article. In order to make the class of multivariate ordinal regression models more accessible, the R package mvord and the complementary paper included in this dissertation have been developed. The mvord package is available on the "Comprehensive R Archive Network" (CRAN) for free download and enhances the available ready-to-use statistical software for the analysis of correlated ordinal data. In the creation of the package a strong emphasis has been put on a user-friendly and flexible design, which allows end users to estimate sophisticated models from the implemented model class in an easy way. The end users the package appeals to are practitioners and researchers who deal with correlated ordinal data in areas of application ranging from credit risk to medicine or psychology.
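A sketch of fitting this model class with the mvord package on simulated two-rater data follows; the MMO2( ) interface is taken from the package documentation, and the data are stand-ins rather than the credit ratings dataset analyzed in the thesis:

```r
# Two correlated ordinal ratings of the same firms, one common covariate.
library(mvord)

set.seed(5)
n      <- 200
x      <- rnorm(n)
latent <- x + rnorm(n)
grade  <- function(z) cut(z, breaks = c(-Inf, -1, 1, Inf),
                          labels = c("C", "B", "A"), ordered_result = TRUE)
dat <- data.frame(
  rater1 = grade(latent + rnorm(n, 0, 0.5)),
  rater2 = grade(latent + rnorm(n, 0, 0.5)),
  x      = x
)

# MMO2() declares multiple ordinal responses in wide format; estimation
# uses composite (pairwise) likelihood methods.
fit <- mvord(formula = MMO2(rater1, rater2) ~ 0 + x, data = dat)
summary(fit)
```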
APA, Harvard, Vancouver, ISO, and other styles