Theses on the topic "Discrete multivariate model"

Consult the 24 best theses for your research on the topic "Discrete multivariate model".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Dong, Fanglong. "Bayesian Model Checking in Multivariate Discrete Regression Problems". Bowling Green State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1223329230.

Full text
2

Wu, Hao. "Probabilistic Modeling of Multi-relational and Multivariate Discrete Data". Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/74959.

Full text
Abstract
Modeling and discovering knowledge from multi-relational and multivariate discrete data is a crucial task that arises in many research and application domains, e.g. text mining, intelligence analysis, epidemiology, social science, etc. In this dissertation, we study and address three problems involving the modeling of multi-relational discrete data and multivariate multi-response count data, viz. (1) discovering surprising patterns from multi-relational data, (2) constructing a generative model for multivariate categorical data, and (3) simultaneously modeling multivariate multi-response count data and estimating covariance structures between multiple responses. To discover surprising multi-relational patterns, we first study the "where do I start?" problem originating from intelligence analysis. By studying nine methods with origins in association analysis, graph metrics, and probabilistic modeling, we identify several classes of algorithmic strategies that can supply starting points to analysts, and thus help to discover interesting multi-relational patterns from datasets. To actually mine for interesting multi-relational patterns, we represent the multi-relational patterns as dense and well-connected chains of biclusters over multiple relations, and model the discrete data by the maximum entropy principle, such that in a statistically well-founded way we can gauge the surprisingness of a discovered bicluster chain with respect to what we already know. We design an algorithm for approximating the most informative multi-relational patterns, and provide strategies to incrementally organize discovered patterns into the background model. We illustrate how our method is adept at discovering the hidden plot in multiple synthetic and real-world intelligence analysis datasets. Our approach naturally generalizes traditional attribute-based maximum entropy models for single relations, and further supports iterative, human-in-the-loop, knowledge discovery.
To build a generative model for multivariate categorical data, we apply the maximum entropy principle to propose a categorical maximum entropy model such that in a statistically well-founded way we can optimally use given prior information about the data, and are unbiased otherwise. Generally, inferring the maximum entropy model could be infeasible in practice. Here, we leverage the structure of the categorical data space to design an efficient model inference algorithm to estimate the categorical maximum entropy model, and we demonstrate how the proposed model is adept at estimating underlying data distributions. We evaluate this approach against both simulated data and US census datasets, and demonstrate its feasibility using an epidemic simulation application. Modeling data with multivariate count responses is a challenging problem due to the discrete nature of the responses. Existing methods for univariate count responses cannot be easily extended to the multivariate case since the dependency among multiple responses needs to be properly accounted for. To model multivariate data with multiple count responses, we propose a novel multivariate Poisson log-normal model (MVPLN). By simultaneously estimating the regression coefficients and inverse covariance matrix over the latent variables with an efficient Monte Carlo EM algorithm, the proposed model takes advantage of the association among multiple count responses to improve the model prediction accuracy. Simulation studies and applications to real world data are conducted to systematically evaluate the performance of the proposed method in comparison with conventional methods.
Ph. D.
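The MVPLN construction described in the abstract above (correlated latent Gaussian log-rates driving conditionally independent Poisson counts) can be sketched generatively. A minimal illustration with hypothetical parameter values, not the author's estimation code:

```python
import numpy as np

def sample_mvpln(mu, sigma, n, rng):
    """Draw n observations from a multivariate Poisson log-normal model.

    Latent log-rates are multivariate normal; counts are conditionally
    independent Poisson given the latent rates, which induces dependence
    among the count responses.
    """
    z = rng.multivariate_normal(mu, sigma, size=n)  # latent Gaussian log-rates
    return rng.poisson(np.exp(z))                   # counts, shape (n, d)

rng = np.random.default_rng(0)
mu = np.array([1.0, 1.0])                           # hypothetical latent means
sigma = np.array([[0.5, 0.4],
                  [0.4, 0.5]])                      # positive latent covariance
counts = sample_mvpln(mu, sigma, 20_000, rng)

# Positive latent covariance should induce positively correlated counts.
print(np.corrcoef(counts.T)[0, 1] > 0)
```

Fitting the model (the Monte Carlo EM step in the abstract) is the hard part; this sketch only shows the generative direction.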
3

Zheng, Xiyu. "SENSITIVITY ANALYSIS IN HANDLING DISCRETE DATA MISSING AT RANDOM IN HIERARCHICAL LINEAR MODELS VIA MULTIVARIATE NORMALITY". VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4403.

Full text
Abstract
In a two-level hierarchical linear model (HLM2), the outcome as well as covariates may have missing values at any of the levels. One way to analyze all available data in the model is to estimate a multivariate normal joint distribution of variables, including the outcome, subject to missingness conditional on covariates completely observed by maximum likelihood (ML); draw multiple imputations (MI) of missing values given the estimated joint model; and analyze the hierarchical model given the MI [1,2]. The assumption is data missing at random (MAR). While this method yields efficient estimation of the hierarchical model, it often estimates the model given discrete missing data handled under multivariate normality. In this thesis, we evaluate how robust it is to estimate a hierarchical linear model given discrete missing data by this method. We simulate incompletely observed data from a series of hierarchical linear models given discrete covariates MAR, estimate the models by the method, and assess the sensitivity of handling discrete missing data under the multivariate normal joint distribution by computing bias, root mean squared error, standard error, and coverage probability in the estimated hierarchical linear models via a series of simulation studies. We aim to evaluate the performance of the method in handling binary covariates MAR. We let the missing patterns of level-1 and level-2 binary covariates depend on completely observed variables and assess how the method handles binary missing data given different values of success probabilities and missing rates. Based on the simulation results, the missing data analysis is robust under certain parameter settings. Efficient analysis performs very well for estimation of level-1 fixed and random effects across varying success probabilities and missing rates.
MAR estimation of the level-2 binary covariate is not well estimated when the missing rate in the level-2 binary covariate is greater than 10%. The rest of the thesis is organized as follows: Section 1 introduces the background information, including conventional methods for hierarchical missing data analysis, different missing data mechanisms, and the innovation and significance of this study. Section 2 explains the efficient missing data method. Section 3 presents the sensitivity analysis of the missing data method and explains how we carry out the simulation study using SAS, the software package HLM7, and R. Section 4 illustrates the results and gives useful recommendations for researchers who want to use the missing data method for binary covariates MAR in HLM2. Section 5 presents an illustrative analysis of the National Growth and Health Study (NGHS) by the missing data method. The thesis ends with a list of useful references that will guide future study, and the simulation code we used.
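The core idea evaluated above, imputing a binary covariate MAR under a normal model, can be caricatured in a few lines. A rough sketch under simplified assumptions (single binary covariate, linear regression imputation with rounding back to {0, 1}; not the thesis's SAS/HLM7 workflow):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5_000
z = rng.normal(size=n)                                 # completely observed covariate
x = (z + rng.normal(size=n) > 0).astype(float)         # binary covariate, related to z

# Make x MAR: missingness depends only on the observed z.
missing = rng.random(n) < 0.3 * (z > 0)
x_obs = np.where(missing, np.nan, x)

# Normal-model imputation: regress x on z using complete cases,
# draw from the fitted normal model, then round back to {0, 1}.
m = ~missing
beta1, beta0 = np.polyfit(z[m], x_obs[m], 1)
resid_sd = np.std(x_obs[m] - (beta0 + beta1 * z[m]))
draws = beta0 + beta1 * z[missing] + resid_sd * rng.normal(size=missing.sum())
x_imp = x_obs.copy()
x_imp[missing] = np.clip(np.round(draws), 0, 1)

# Imputation recovers a mean close to the truth despite MAR missingness.
print(abs(x_imp.mean() - x.mean()) < 0.05)
```

The thesis's point is precisely that this normality approximation degrades under some settings (e.g. high level-2 missing rates); the sketch only shows the mechanics.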
4

Yildirak, Sahap Kasirga. "The Identification of a Bivariate Markov Chain Market Model". PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.

Full text
Abstract
This work is an extension of the classical Cox-Ross-Rubinstein discrete-time market model, in which only one risky asset is considered. We introduce another risky asset into the model. Moreover, the random structure of the asset price sequence is generated by a bivariate finite-state Markov chain, and the interest rate varies over time as a function of the generating sequences. We discuss how the model can be adapted to real data. Finally, we illustrate sample implementations to give a better idea of the use of the model.
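The construction described above, two risky assets whose joint moves are driven by one finite-state Markov chain, can be sketched as a simulation. The transition matrix and per-state gross returns below are hypothetical placeholders, not values from the thesis:

```python
import numpy as np

# Joint states for (asset 1, asset 2): each asset moves up or down.
# A 4-state chain on (uu, ud, du, dd) captures dependence between the assets.
STATES = [(1.05, 1.04), (1.05, 0.97), (0.96, 1.04), (0.96, 0.97)]  # gross returns
P = np.array([[0.40, 0.20, 0.20, 0.20],
              [0.25, 0.25, 0.25, 0.25],
              [0.25, 0.25, 0.25, 0.25],
              [0.20, 0.20, 0.20, 0.40]])  # hypothetical transition matrix

def simulate_paths(s0, steps, rng):
    """Simulate both asset prices driven by one bivariate Markov chain."""
    prices = [np.asarray(s0, dtype=float)]
    state = rng.integers(len(STATES))
    for _ in range(steps):
        state = rng.choice(len(STATES), p=P[state])     # next joint state
        prices.append(prices[-1] * np.array(STATES[state]))
    return np.array(prices)

rng = np.random.default_rng(1)
path = simulate_paths([100.0, 100.0], 250, rng)
print(path.shape)
```

A state-dependent interest rate, as in the thesis, would simply be another function of the same state sequence.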
5

Valiquette, Samuel. "Sur les données de comptage dans le cadre des valeurs extrêmes et la modélisation multivariée". Electronic Thesis or Diss., Université de Montpellier (2022-....), 2024. http://www.theses.fr/2024UMONS028.

Full text
Abstract
This thesis focuses on certain theoretical aspects of count data modeling. Two distinct frameworks are addressed: extreme values and multivariate modeling. Our first contribution explores, in terms of extreme behaviors, the existing connections between a Poisson mixture and its mixing distribution. This work allows us to characterize and discriminate several families of Poisson mixtures according to their tail behavior. Although this work is theoretical, we discuss its practical utility, particularly regarding the choice of the mixing distribution. Our second contribution focuses on a new class of multivariate models called Tree Pólya Splitting. This class is based on hierarchical modeling and assumes that a random quantity is successively divided according to a Pólya distribution through a partition tree structure. In this work, we characterize the univariate and multivariate marginal distributions, factorial moments, and the resulting dependency structures (covariance/correlation). Using a dataset on the abundance of Trichoptera, we highlight the interest of this class of models by comparing our results to those obtained, for example, with multivariate Poisson-lognormal models. We conclude this thesis by presenting various perspectives for future research.
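The link between a Poisson mixture and the tail of its mixing distribution can be illustrated numerically: Gamma mixing with the same mean thickens the tail relative to a plain Poisson (the mixture is a negative binomial). A small sketch with parameters chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
lam = 5.0

poisson = rng.poisson(lam, n)                       # no mixing: light tail

# Gamma mixing with the same mean (shape * scale = 5) yields a
# negative binomial, which is overdispersed and heavier-tailed.
mixed_rates = rng.gamma(shape=2.0, scale=lam / 2.0, size=n)
poisson_gamma = rng.poisson(mixed_rates)

print(poisson.mean(), poisson_gamma.mean())         # both means are near 5
print((poisson_gamma > 20).mean() > (poisson > 20).mean())
```

The thesis works this out in general, characterizing families of mixing distributions by the extreme behavior they induce on the mixture.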
6

Comas, Cufí Marc. "Aportacions de l'anàlisi composicional a les mixtures de distribucions". Doctoral thesis, Universitat de Girona, 2018. http://hdl.handle.net/10803/664902.

Full text
Abstract
The present thesis is a compendium of three original works produced between 2014 and 2018. The papers share a common link: they are different contributions of compositional data analysis to the study of models based on mixtures of probability distributions. In brief, compositional data analysis is a methodology for studying a sample of strictly positive measurements from a relative point of view. Mixtures of distributions are a specific type of probability distribution defined as the convex linear combination of other distributions.
7

Wiberg, Viktor. "Terrain machine learning : A predictive method for estimating terrain model parameters using simulated sensors, vehicle and terrain". Thesis, Umeå universitet, Institutionen för fysik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149815.

Full text
Abstract
Predicting the trafficability of deformable terrain is a difficult task with applications in, e.g., forestry, agriculture, and exploratory missions. The techniques currently used are neither practical nor efficient, are not sufficiently accurate, and are inadequate for certain soil types. An online method that predicts terrain trafficability is of interest for any vehicle whose purpose is to reduce ground damage, improve steering and increase mobility. This thesis presents a novel approach for predicting the model parameters used in modelling a virtual terrain. The model parameters include particle stiffness, tangential friction, rolling resistance and two parameters related to particle plasticity and adhesion. Using multi-body dynamics, both vehicle and terrain can be simulated, which allows for an efficient exploration of a great variety of terrains. A vehicle with access to certain sensors can frequently gather sensor data providing information about vehicle-terrain interaction. The proposed method develops a statistical model that uses the sensor data to predict the terrain model parameters. However, these parameters are specified at the model particle level and do not directly explain bulk properties measurable on a real terrain. Simulations were carried out of a single tracked bogie constrained to move in one direction when traversing flat, homogeneous terrains. The statistical model with the best prediction accuracy was ridge regression using polynomial features and interaction terms of second degree. The model proved capable of predicting particle stiffness, tangential friction and particle plasticity with moderate accuracy. However, the current predictors and training scenarios were insufficient for estimating particle adhesion and rolling resistance. Nevertheless, this thesis indicates that it should be possible to develop a method that successfully predicts terrain model properties.
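The best-performing model above, ridge regression on degree-2 polynomial features with interaction terms, can be sketched from scratch. A minimal version using the closed-form ridge solution on synthetic data (the features, target, and regularization strength are illustrative stand-ins for the thesis's simulated sensor data):

```python
import numpy as np

def poly2_features(X):
    """Degree-2 polynomial expansion with interaction terms (and a bias column)."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, j] for j in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: solve (X'X + alpha*I) w = X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))                                    # two synthetic channels
y = 1.0 + 2.0 * X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=500)  # interaction effect

Phi = poly2_features(X)          # columns: 1, x0, x1, x0^2, x0*x1, x1^2
w = ridge_fit(Phi, y, alpha=1e-3)
pred = Phi @ w
print(np.mean((pred - y) ** 2) < 0.05)   # the interaction term is captured
```

A plain linear model on X would miss the `x0*x1` effect entirely, which is why the interaction terms matter in this setting.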
8

Siddiqi, Junaid Sagheer. "Mixture and latent class models for discrete multivariate data". Thesis, University of Exeter, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303877.

Full text
9

Scott, Laurie Croslin. "Bayesian inference via filtering of micro-movement multivariate stock price models with discrete noises". Diss., UMK access, 2006.

Search full text
Abstract
Thesis (Ph. D.)--Dept. of Mathematics and Statistics and Dept. of Economics. University of Missouri--Kansas City, 2006.
"A dissertation in mathematics and economics." Advisor: Yong Zeng. Typescript. Vita. Title from "catalog record" of the print edition Description based on contents viewed Jan. 29, 2007. Includes bibliographical references (leaves 121-124). Online version of the print edition.
10

Olson, Brent. "Evaluating the error of measurement due to categorical scaling with a measurement invariance approach to confirmatory factor analysis". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/332.

Full text
Abstract
It has previously been determined that using 3 or 4 points on a categorized response scale will fail to produce a continuous distribution of scores. However, there is no evidence, thus far, revealing the number of scale points that may indeed possess an approximate or sufficiently continuous distribution. This study provides the evidence to suggest the level of categorization in discrete scales that makes them directly comparable to continuous scales in terms of their measurement properties. To do this, we first introduced a novel procedure for simulating discretely scaled data that was both informed and validated through the principles of the Classical True Score Model. Second, we employed a measurement invariance (MI) approach to confirmatory factor analysis (CFA) in order to directly compare the measurement quality of continuously scaled factor models to that of discretely scaled models. The simulated design conditions of the study varied with respect to item-specific variance (low, moderate, high), random error variance (none, moderate, high), and discrete scale categorization (number of scale points ranged from 3 to 101). A population analogue approach was taken with respect to sample size (N = 10,000). We concluded that there are conditions under which response scales with 11 to 15 scale points can reproduce the measurement properties of a continuous scale. Using response scales with more than 15 points may be, for the most part, unnecessary. Scales having from 3 to 10 points introduce a significant level of measurement error, and caution should be taken when employing such scales. The implications of this research and future directions are discussed.
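The attenuation effect studied above can be reproduced in miniature: discretizing a noisy continuous score onto a k-point scale lowers its correlation with the true score, and coarser scales lose more. A simplified sketch using equal-width binning of normal scores (not the author's CFA/measurement-invariance procedure):

```python
import numpy as np

def discretize(x, k):
    """Map continuous scores onto a k-point scale via equal-width bins."""
    edges = np.linspace(x.min(), x.max(), k + 1)[1:-1]
    return np.digitize(x, edges)   # bin indices 0 .. k-1

rng = np.random.default_rng(4)
true_score = rng.normal(size=100_000)
observed = true_score + 0.5 * rng.normal(size=100_000)   # add measurement error

r_cont = np.corrcoef(true_score, observed)[0, 1]
r_4 = np.corrcoef(true_score, discretize(observed, 4))[0, 1]
r_15 = np.corrcoef(true_score, discretize(observed, 15))[0, 1]

# Coarser categorization attenuates the correlation more.
print(r_4 < r_15 < r_cont)
```

This is only the univariate intuition; the thesis assesses the effect on full factor models via measurement invariance tests.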
11

Scutari, Marco. "Measures of Variability for Graphical Models". Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3422736.

Full text
Abstract
In recent years, graphical models have been successfully applied in several different disciplines, including medicine, biology and epidemiology. This has been made possible by the rapid evolution of structure learning algorithms, from constraint-based ones to score-based and hybrid ones. The main goal in the development of these algorithms has been the reduction of the number of either independence tests or score comparisons needed to learn the structure of the Bayesian network. In most cases the characteristics of the learned networks have been studied using a small number of reference data sets as benchmarks, and differences from the true structure have been measured with purely descriptive measures such as the Hamming distance. This approach to model validation is not possible for real-world data sets, as the true structure of their probability distribution is not known. An alternative is provided by the use of either the parametric or the nonparametric bootstrap. By applying a learning algorithm to a sufficiently large number of bootstrap samples it is possible to obtain the empirical probability of any feature of the resulting network, such as the structure of the Markov blanket of a particular node. The fundamental limit in the interpretation of the results is that the "reasonable" level of confidence for thresholding depends on the data and the learning algorithm. In this thesis we extend the aforementioned bootstrap-based approach for inference on the structure of a Bayesian or Markov network. The graph representing the network structure and its underlying undirected graph (in the case of Bayesian networks) are modelled using a multivariate extension of the Trinomial and Bernoulli distributions; each component is associated with an arc. These assumptions allow the derivation of exact and asymptotic measures of the variability of the network structure or any of its parts. These measures are then applied to some common learning strategies used in the literature, using the implementation provided by the bnlearn R package, implemented and maintained by the author.
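The bootstrap-based structure inference described above can be caricatured with a toy "learner". This sketch replaces a real structure-learning algorithm (such as those in the bnlearn R package) with a simple correlation threshold, purely to show how empirical arc probabilities arise from nonparametric bootstrap resampling:

```python
import numpy as np

def arc_present(data, i, j, thresh=0.1):
    """Toy structure 'learner': declare an arc when |corr| exceeds a threshold."""
    return abs(np.corrcoef(data[:, i], data[:, j])[0, 1]) > thresh

def bootstrap_arc_probability(data, i, j, b, rng):
    """Empirical probability of an arc across b nonparametric bootstrap samples."""
    n = data.shape[0]
    hits = 0
    for _ in range(b):
        idx = rng.integers(0, n, size=n)     # resample rows with replacement
        hits += arc_present(data[idx], i, j)
    return hits / b

rng = np.random.default_rng(5)
n = 2_000
x = rng.normal(size=n)
data = np.column_stack([x, x + rng.normal(size=n), rng.normal(size=n)])

p_dep = bootstrap_arc_probability(data, 0, 1, 200, rng)   # strong dependence
p_ind = bootstrap_arc_probability(data, 0, 2, 200, rng)   # independence
print(p_dep, p_ind)
```

The thesis's contribution is what this sketch lacks: a principled distribution (multivariate Trinomial/Bernoulli over arcs) for deciding where to threshold such empirical probabilities.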
12

Caballero, Aguirre Enrique Sebastián. "Prediccion Multivariable de Recursos Recuperables". Tesis, Universidad de Chile, 2012. http://www.repositorio.uchile.cl/handle/2250/104377.

Full text
Abstract
Mineral resource evaluation is essential to mine design and planning, since it quantifies the distribution of elements of interest, by-products and contaminants within a mineral deposit. Traditionally, resource models are built by inverse distance weighting or kriging, considering one variable at a time and ignoring the spatial correlations between mineral species. Multivariate estimation methods such as cokriging remain little used in practice, mainly because of the difficulty of fitting a variogram model. This work addresses the problem of multivariate prediction of recoverable resources in mineral deposits. To this end, a geostatistical model (the discrete Gaussian model) is used to cosimulate block grades and thus determine their joint local distributions, also considering the case where the mean grades are unknown. From the local distributions obtained, it is possible to compute, for each block (selective mining unit), the expected value of any function of the grades and to evaluate the resources recoverable above given cutoff grades. The methodology is applied to a case study consisting of production data (blast holes) from a nickel laterite deposit. The variables of interest are the nickel, iron, chromium, alumina and silica grades. It is shown that the discrete Gaussian model makes it possible to estimate the expected benefit of each selective mining unit and its best destination (processing plant or waste dump). These estimates depend on the probability of exceeding given cutoff grades and on the silica-to-magnesia ratio, which plays an important role in the metallurgical processing used to obtain ferronickel.
The results of this methodology are compared with a traditional estimation by ordinary cokriging, yielding 24.6% of blocks classified differently (as waste or ore), a discrepancy of 22.5% in the estimated waste/ore ratio, and of US$8.2 million in the estimated benefit for the case study. These discrepancies are explained by the smoothing property of cokriging, which does not reproduce the spatial variability of the grades and biases the estimation of non-additive variables. In contrast, the proposed methodology yields theoretically unbiased results and allows scenario analysis and the computation of the most likely response, particularly when non-additive variables are involved.
13

Silva, Gustavo Rodrigues Gonçalves da. "Especificação do modelo de referência em projeto de controladores multivariáveis discretos". Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/141821.

Full text
Abstract
The choice of the reference model is the main task to be performed by the designer in a model reference control design. A poor choice of the reference model may result in a closed-loop performance that bears no resemblance to the specifications, and the closed loop may even be unstable. In this work we discuss this issue in the control of multivariable plants. Experimental results in a three-tank level control plant show a seemingly correct, yet naive, choice of reference model leading to very poor closed-loop performance. The problem is then analyzed, exposing the naivete of the design example. We start by recognizing the fundamental constraints imposed by the system and then derive general guidelines, respecting these constraints, for the effective choice of the reference model in multivariable systems. We also provide a novel formulation to compute the minimal relative degree of each element of the reference model without needing a complete model of the plant. The application of these guidelines to simulations and to the three-tank plant illustrates their effectiveness.
14

Kato, Fernando Hideki. "Análise de carteiras em tempo discreto". Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/12/12139/tde-24022005-005812/.

Full text
Abstract
In this thesis, Markowitz's portfolio selection model is extended by means of a discrete-time analysis and more realistic hypotheses. A finite tensor product of Erlang densities is used to approximate the multivariate probability density function of the single-period discrete returns of dependent assets. The Erlang is a particular case of the Gamma distribution. A finite mixture can generate multimodal, asymmetric densities, and the tensor product generalizes this concept to higher dimensions. Assuming that the multivariate density was independent and identically distributed (i.i.d.) in the past, the approximation can be calibrated with historical data using the maximum likelihood criterion. This is a large-scale optimization problem, but one with special structure. Assuming that this multivariate density will remain i.i.d. in the future, the density of the discrete returns of a portfolio of assets with nonnegative weights is a finite mixture of Erlang densities. Risk is calculated with the Downside Risk measure, which is convex for certain parameters, is not based on quantiles, does not cause risk underestimation, and makes the single- and multiperiod optimization problems convex. The discrete return is a multiplicative random variable over time. The multiperiod distribution of the discrete returns of a sequence of T portfolios is a finite mixture of Meijer G distributions. After a change to the average compound probability measure, it is possible to calculate the risk and the return, which leads to the multiperiod efficient frontier, where each point represents one or more ordered sequences of T portfolios. The portfolios of each sequence must be calculated from the future to the present, keeping the expected return at the desired level, which can be a function of time. A dynamic asset allocation strategy is to redo the calculations at each period, using newly available information.
If the time horizon tends to infinity, then the efficient frontier, in the average compound probability measure, tends to a single point, given by the Kelly portfolio, whatever the risk measure. To select one among several portfolio optimization models, it is necessary to compare their relative performances. The efficient frontier of each model must be plotted in its respective graph. Since the asset weights of the portfolios on these curves are known, it is possible to plot all the curves in the same graph. For a given expected return, the efficient portfolios of the models can be calculated, and the realized returns and their differences along a backtest can be compared.
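The single-point limit of the efficient frontier can be illustrated numerically: under an i.i.d. discrete-return assumption, the Kelly portfolio maximizes the expected logarithmic growth rate. A minimal sketch follows; the two assets and four return scenarios are invented for illustration and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def kelly_weights(scenarios, probs):
    """Maximize expected log growth E[log(1 + w.r)] over long-only weights."""
    n = scenarios.shape[1]

    def neg_growth(w):
        return -probs @ np.log1p(scenarios @ w)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(neg_growth, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

# Hypothetical assets: one risky, one nearly risk-free; four equally likely scenarios.
scenarios = np.array([[0.20, 0.01],
                      [-0.10, 0.01],
                      [0.15, 0.01],
                      [-0.05, 0.01]])
probs = np.full(4, 0.25)
w = kelly_weights(scenarios, probs)
```

For these (invented) numbers the log-growth gradient still favors the risky asset at the boundary, so the optimizer ends at a corner of the simplex.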
15

Kool, Johnathan. "Connectivity and Genetic Structure in Coral Reef Ecosystems: Modeling and Analysis". Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_dissertations/157.

Full text
Abstract
This dissertation examines aspects of the relationship between connectivity and the development of genetic structure in subdivided coral reef populations using both simulation and algebraic methods. The first chapter develops an object-oriented, individual-based method of simulating the dynamics of genes in subdivided populations. The model is then used to investigate how changes to different components of population structure (e.g., connectivity, birth rate, population size) influence genetic structure through the use of autocorrelation analysis. The autocorrelograms also demonstrate how relationships between populations change at different spatial and temporal scales. The second chapter uses discrete multivariate distributions to model the relationship between connectivity, selection and resource use in subdivided populations. The equations provide a stochastic basis for multiple-niche polymorphism through differential resource use, and the role of scale in changing selective weightings is also considered. The third chapter uses matrix equations to study the expected development of genetic structure among Caribbean coral reefs. The results show an expected break between eastern and western portions of the Caribbean, as well as additional nested structure within the Bahamas, the central Caribbean (Jamaica and the reefs of the Nicaraguan Rise) and the Mesoamerican Barrier Reef. The matrix equations provide an efficient means of modeling the development of genetic structure in subdivided populations through time. The fourth chapter uses matrix equations to examine the expected development of genetic structure among Southeast Asian coral reefs. Projecting genetic structure reveals an expected unidirectional connection from the South China Sea into the Coral Triangle region via the Sulu Sea. Larvae appear to be restricted from moving back into the South China Sea by a cyclonic gyre in the Sulu Sea.
Additional structure is also evident, including distinct clusters within the Philippines, in the vicinity of the Makassar Strait, in the Flores Sea, and near Halmahera and the Banda Sea. The ability to evaluate the expected development of genetic structure over time in subdivided populations offers a number of potential benefits, including the ability to ascertain the expected direction of gene flow, to delineate natural regions of exchange through clustering, or to identify critical areas for conservation or for managing the spread of invasive material via elasticity analysis.
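The matrix-projection approach of the later chapters can be sketched in a few lines (the connectivity rates and four-reef layout below are invented): allele frequencies are propagated as p_{t+1} = M p_t, and strongly connected reefs homogenize faster than weakly connected clusters, so nested structure emerges from connectivity alone.

```python
import numpy as np

# Hypothetical row-stochastic connectivity matrix: M[i, j] is the fraction of
# recruits at reef i originating from reef j. Reefs {0, 1} and {2, 3} form
# two strongly connected pairs with weak exchange between the pairs.
M = np.array([
    [0.90, 0.09, 0.005, 0.005],
    [0.09, 0.90, 0.005, 0.005],
    [0.005, 0.005, 0.90, 0.09],
    [0.005, 0.005, 0.09, 0.90],
])

p = np.array([1.0, 0.9, 0.1, 0.0])  # initial allele frequency at each reef

for _ in range(20):  # project 20 generations of larval exchange
    p = M @ p

# Within-pair differences vanish quickly; the gap between the two clusters
# decays much more slowly, i.e. nested genetic structure.
within_gap = abs(p[0] - p[1])
between_gap = abs(p[1] - p[2])
```

The decay rates are the eigenvalues of M: the within-pair mode shrinks by 0.81 per generation, the between-pair mode by 0.98, which is why the cluster-level break persists.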
16

Roddam, Andrew Wilfred. "Some problems in the theory & application of graphical models". Thesis, University of Oxford, 1999. http://ora.ox.ac.uk/objects/uuid:b90d5dbc-6e9a-4c5e-bdca-0c3558b4ee17.

Full text
Abstract
A graphical model is simply a representation of the results of an analysis of relationships between sets of variables. It can include the study of the dependence of one variable, or a set of variables, on another variable or set of variables, and can be extended to include variables which could be considered as intermediate to the others. This leads to the concept of representing these chains of relationships by means of a graph, where variables are represented by vertices and relationships between the variables are represented by edges. These edges can be either directed or undirected, depending upon the type of relationship being represented. The thesis investigates a number of outstanding problems in the area of statistical modelling, with particular emphasis on representing the results in terms of a graph. It studies models for multivariate discrete data and, in the case of binary responses, gives some theoretical results on the relationship between two common models. In the more general setting of multivariate discrete responses, a general class of models is studied and an approximation to the maximum likelihood estimates in these models is proposed. The thesis also addresses the problem of measurement error. An investigation is given into the effect that measurement error has on sample size calculations, with respect to a general measurement error specification, in both linear and binary regression models. Finally, the thesis presents, in terms of a graphical model, a re-analysis of a set of childhood growth data collected in South Wales during the 1970s. Within this analysis, a new technique is proposed that allows the calculation of derived variables under the assumption that the joint relationships between the variables are constant at each of the time points.
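The conditional independence relationships such graphs encode can be illustrated on a toy 2×2×2 contingency table (counts invented): X and Y are independent within each level of Z — the graph X — Z — Y with no X–Y edge — even though X and Y are marginally associated.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2x2 table, stratified by Z. Within each stratum the rows
# and columns (X and Y) are exactly independent.
table = np.array([
    [[40, 10], [20, 5]],    # stratum Z = 0
    [[5, 20], [10, 40]],    # stratum Z = 1
])

# Conditional independence: G-squared statistic of X vs Y within each stratum.
g2_cond = sum(
    chi2_contingency(table[z], correction=False, lambda_="log-likelihood")[0]
    for z in range(2)
)

# Marginal association: collapse the table over Z and test again.
g2_marg = chi2_contingency(table.sum(axis=0), correction=False,
                           lambda_="log-likelihood")[0]
```

Here the conditional statistic is zero while the marginal one exceeds the 5% chi-squared critical value (3.84 on 1 df) — collapsing over Z manufactures a spurious X–Y association.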
17

Hitz, Adrien. "Modelling of extremes". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:ad32f298-b140-4aae-b50e-931259714085.

Full text
Abstract
This work focuses on statistical methods for understanding how frequently rare events occur and what the magnitude of extreme values, such as large losses, is. It lies in a field called extreme value analysis, whose scope is to provide support for scientific decision making when extreme observations are of particular importance, such as in environmental applications, insurance and finance. In the univariate case, I propose new techniques to model the tails of discrete distributions and illustrate them in an application to word frequency and multiple birth data. Suitably rescaled, the limiting tails of some discrete distributions are shown to converge to a discrete generalized Pareto distribution and a generalized Zipf distribution respectively. In the multivariate high-dimensional case, I suggest modeling tail dependence between random variables by a graph such that its nodes correspond to the variables and shocks propagate through the edges. Relying on the ideas of graphical models, I prove that if the variables satisfy a new notion called asymptotic conditional independence, then the density of the joint distribution can be simplified and expressed in terms of lower-dimensional functions. This generalizes the Hammersley-Clifford theorem and enables us to infer tail distributions from observations in reduced dimension. As an illustration, extreme river flows are modeled by a tree graphical model whose structure appears to recover almost exactly the actual river network. A fundamental concept when studying limiting tail distributions is regular variation. I propose a new notion in the multivariate case, called one-component regular variation, for which Karamata's theorem and the representation theorem, two important results in the univariate case, are generalized. Eventually, I turn my attention to website visit data and fit a censored copula Gaussian graphical model allowing the visualization of users' behavior by a graph.
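A minimal numerical sketch of the discrete tail-limit idea, using the geometric distribution rather than the thesis's general construction: the geometric law is memoryless, so its tail beyond any threshold is again geometric — the discrete analogue of a shape-zero generalized Pareto tail.

```python
import numpy as np

p = 0.3
k = np.arange(60)
sf = (1 - p) ** k          # survival function P(X >= k) for X geometric on {0, 1, ...}

# Conditional tail beyond increasing thresholds m: P(X >= m + j | X >= m).
j = np.arange(10)
cond_tails = [sf[m + j] / sf[m] for m in (5, 10, 20)]

# Every conditional tail coincides with the unconditional one, (1 - p)**j:
# raising the threshold does not change the shape of the excess distribution.
```

For heavier discrete tails (e.g. Zipf-like word frequencies) the rescaled excess distribution converges instead to a discrete generalized Pareto with positive shape; the geometric case is the boundary where nothing changes under thresholding.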
18

SANTOS, Watson Robert Macedo. "Metodos para Solução da Equação HJB-Riccati via Famíla de Estimadores Parametricos RLS Simplificados e Dependentes de Modelo". Universidade Federal do Maranhão, 2014. http://tedebc.ufma.br:8080/jspui/handle/tede/1892.

Full text
Abstract
Due to the demand for high-performance equipment and the rising cost of energy, the industrial sector is developing equipment aimed at minimizing its operational costs. Meeting these requirements creates a demand for the design and implementation of high-performance control systems. Optimal control theory is an alternative for solving this problem, because its design considers the normative specifications of the system as well as those related to operational costs. Motivated by these perspectives, this work presents the study of methods and the development of algorithms for the approximate solution of the Hamilton-Jacobi-Bellman equation, in the form of the discrete Riccati equation, both model-free and dependent on the model of the dynamic system. The proposed solutions are developed in the context of adaptive dynamic programming and are based on methods for the online design of optimal control systems of the Discrete Linear Quadratic Regulator type. The proposed approach is evaluated on multivariable models of dynamic systems in view of the online implementation of the optimal control law.
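A minimal sketch of the model-based ingredient — solving the discrete algebraic Riccati equation by fixed-point (value) iteration. The double-integrator system below is invented for illustration; the dissertation's RLS-based online estimators are not reproduced here.

```python
import numpy as np

def dare_by_iteration(A, B, Q, R, iters=500):
    """Fixed-point iteration on the discrete algebraic Riccati equation:
    P = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA, with DLQR gain K."""
    P = np.copy(Q)
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return P, K

# Hypothetical plant: discretized double integrator with sampling time 0.1 s.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)           # state weighting
R = np.array([[1.0]])   # control weighting

P, K = dare_by_iteration(A, B, Q, R)
```

At convergence P satisfies the Riccati equation and the closed loop A - BK is Schur stable; the ADP methods in the abstract aim to reach the same fixed point online, without iterating on a known model.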
19

Akpoué, Blache Paul. "Modélisation bayésienne des changements aux niches écologiques causés par le réchauffement climatique". Thèse, 2012. http://hdl.handle.net/1866/8503.

Full text
Abstract
This thesis presents some estimation methods and algorithms to analyse count data in particular and discrete data in general. It is also part of an NSERC strategic project, named CC-Bio, which aims to assess the impact of climate change on the distribution of plant and animal species in Québec. After a brief introduction to the concepts and definitions of biogeography and to generalized linear mixed models in chapters 1 and 2 respectively, the thesis focuses on three major new ideas. First, we introduce in chapter 3 a new form of distribution whose components have Poisson or Skellam marginal distributions. This new specification allows relevant information about the nature of the correlations between all the components to be incorporated. In addition, we present some properties of this probability distribution. Unlike the multivariate Poisson distribution it generalizes, this distribution can handle both positive and negative correlations. A simulation study illustrates the estimation in the two-dimensional case. The results obtained by Bayesian methods via Markov chain Monte Carlo (MCMC) suggest a fairly low relative bias, of less than 5%, for the regression coefficients of the mean; those of the covariance term seem a bit more volatile. Chapter 4 then presents an extension of the multivariate Poisson regression with random effects having a gamma density. Aware that species abundance data exhibit high dispersion, which would make the resulting estimators and standard deviations misleading, we introduce an approach based on Monte Carlo integration with importance sampling. The approach remains the same as in the previous chapter: the objective is to simulate independent latent variables in order to transform the multivariate estimation problem into several conventional generalized linear mixed models (GLMMs) with gamma random effects.
While the assumption of a priori knowledge of the dispersion parameters seems too strong and unrealistic, a sensitivity analysis based on a goodness-of-fit measure is used to demonstrate the robustness of the method. Finally, in the last chapter, we focus on the definition and construction of a measure of concordance, i.e. a correlation measure, for zero-augmented count data with Gaussian copula models. In contrast to Kendall's tau, whose values lie in an interval whose bounds depend on the frequency of tied observations, this measure has the advantage of taking its values on the interval (-1, 1). Originally introduced to model correlations between continuous variables, its extension to the discrete case implies certain restrictions, and its values no longer cover the entire interval (-1, 1) but only a subset of it. Indeed, the new measure can be interpreted as the correlation between the continuous random variables whose discretization yields our nonnegative discrete observations. Two estimation methods, based on Gauss-Hermite quadrature and on maximum likelihood, are presented in the Bayesian and frequentist contexts respectively. Simulation studies show the robustness and the limits of our approach.
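The Gaussian-copula construction for counts can be sketched as follows (all parameters invented): correlated normal draws are pushed through Poisson quantile functions, which accommodates both positive and negative dependence — the limitation of the classical multivariate Poisson that motivates Skellam-based generalizations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def copula_poisson(n, mus, rho):
    """Correlated Poisson pair via a bivariate Gaussian copula."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = stats.norm.cdf(z)                      # uniform margins
    return np.column_stack(
        [stats.poisson.ppf(u[:, j], mus[j]) for j in (0, 1)]
    )

# Hypothetical means and dependence strengths.
x_pos = copula_poisson(20000, (4.0, 7.0), rho=0.7)
x_neg = copula_poisson(20000, (4.0, 7.0), rho=-0.7)
r_pos = np.corrcoef(x_pos.T)[0, 1]
r_neg = np.corrcoef(x_neg.T)[0, 1]
```

The realized count correlations sit slightly below |0.7| because discretization attenuates the latent Gaussian dependence, but the sign is preserved in both directions.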
20

Mondal, Anirban. "Bayesian Uncertainty Quantification for Large Scale Spatial Inverse Problems". Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9905.

Full text
Abstract
We considered a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a high-dimensional spatial field. The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loève expansion and the Discrete Cosine Transform were used for dimension reduction of the random spatial field. Furthermore, we used a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we have shown that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. The need for multiple evaluations of the forward model on a high-dimensional spatial field (e.g. in the context of MCMC), together with the high dimensionality of the posterior, results in many computational challenges. We developed a two-stage reversible-jump MCMC method which is able to screen out bad proposals in an inexpensive first stage. Channelized spatial fields were represented by facies boundaries and variogram-based spatial fields within each facies. Using a level-set-based approach, the shape of the channel boundaries was updated with dynamic data using a Bayesian hierarchical model in which the number of points representing the channel boundaries is assumed to be unknown. Statistical emulators on a large-scale spatial field were introduced to avoid the expensive likelihood calculation, which contains the forward simulator, at each iteration of the MCMC step.
To build the emulator, the original spatial field was represented by a low-dimensional parameterization using the Discrete Cosine Transform (DCT); Bayesian multivariate adaptive regression splines (BMARS) were then used to emulate the simulator. Various numerical results were presented by analyzing simulated as well as real data.
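The DCT-based dimension reduction can be sketched as follows (the field below is synthetic, built to be exactly sparse in the DCT basis): a smooth field whose energy sits in the leading k×k DCT coefficients is carried by k² numbers instead of n².

```python
import numpy as np
from scipy.fft import dctn, idctn

n, k = 64, 8

# Hypothetical smooth "spatial field": exactly three low-frequency DCT modes.
true_coeffs = np.zeros((n, n))
true_coeffs[1, 2], true_coeffs[3, 0], true_coeffs[5, 5] = 3.0, -2.0, 1.0
field = idctn(true_coeffs, norm="ortho")       # 64 x 64 grid values

# Dimension reduction: transform, keep only the leading k x k block, invert.
coeffs = dctn(field, norm="ortho")
reduced = np.zeros_like(coeffs)
reduced[:k, :k] = coeffs[:k, :k]
recon = idctn(reduced, norm="ortho")

rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
# 4096 grid values are represented by at most 64 coefficients, losslessly here.
```

In an MCMC setting the sampler then explores the k² retained coefficients rather than the full grid, which is what makes repeated forward evaluations affordable.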
21

Barr, Aila. "New statistical models for discrete uni- and multivariate data sets with special reference to the Dirichlet multinomial distribution". Thesis, 2014. http://hdl.handle.net/10539/15910.

Full text
22

"Minimum Distance Estimation in Categorical Conditional Independence Models". Thesis, 2012. http://hdl.handle.net/1911/70285.

Full text
Abstract
One of the oldest and most fundamental problems in statistics is the analysis of cross-classified data called contingency tables. Analyzing contingency tables is typically a question of association - do the variables represented in the table exhibit special dependencies or lack thereof? The statistical models which best capture these experimental notions of dependence are the categorical conditional independence models; however, until recent discoveries concerning the strongly algebraic nature of the conditional independence models surfaced, the models were widely overlooked due to their unwieldy implicit description. Apart from the inferential question above, this thesis asks the more basic question - suppose such an experimental model of association is known, how can one incorporate this information into the estimation of the joint distribution of the table? In the traditional parametric setting several estimation paradigms have been developed over the past century; however, traditional results are not applicable to arbitrary categorical conditional independence models due to their implicit nature. After laying out the framework for conditional independence and algebraic statistical models, we consider three aspects of estimation in the models using the minimum Euclidean (L2E), minimum Pearson chi-squared, and minimum Neyman modified chi-squared distance paradigms as well as the more ubiquitous maximum likelihood approach (MLE). First, we consider the theoretical properties of the estimators and demonstrate that under general conditions the estimators exist and are asymptotically normal. For small samples, we present the results of large scale simulations to address the estimators' bias and mean squared error (in the Euclidean and Frobenius norms, respectively). 
Second, we identify the computation of such estimators as an optimization problem and, for the case of the L2E, propose two different methods by which the problem can be solved, one algebraic and one numerical. Finally, we present an R implementation via two novel packages, mpoly for symbolic computing with multivariate polynomials and catcim for fitting categorical conditional independence models. It is found that in general minimum distance estimators in categorical conditional independence models behave as they do in the more traditional parametric setting and can be computed in many practical situations with the implementation provided.
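A minimal sketch of the minimum Euclidean distance (L2E) idea for the simplest case — plain independence in a two-way table. The counts and the explicit product parameterization are invented for illustration; the thesis treats the harder, implicitly described conditional independence models.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2x3 contingency table (chosen to satisfy independence exactly).
table = np.array([[30.0, 18.0, 12.0],
                  [20.0, 12.0, 8.0]])
phat = table / table.sum()          # empirical joint distribution

def model(theta):
    """Independence model p[i, j] = r[i] * c[j], softmax-parameterized."""
    r = np.exp(theta[:2]); r /= r.sum()
    c = np.exp(theta[2:]); c /= c.sum()
    return np.outer(r, c)

def l2_dist(theta):
    """Squared Euclidean distance between data and model — the L2E criterion."""
    return np.sum((phat - model(theta)) ** 2)

res = minimize(l2_dist, np.zeros(5), method="BFGS",
               options={"gtol": 1e-8, "maxiter": 5000})
p_l2e = model(res.x)
```

Because this table is exactly independent, the L2E fit recovers the empirical distribution; on tables violating the model, the same objective projects the data onto the model surface in Euclidean distance, the behavior the thesis studies in general.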
23

Fang, Yan. "Extensions to Gaussian copula models". Thesis, 2012. http://hdl.handle.net/1957/29482.

Full text
Abstract
A copula is the representation of a multivariate distribution. Copulas are used to model multivariate data in many fields. Recent developments include copula models for spatial data and for discrete marginals. We present a new methodological approach for modeling discrete spatial processes and for predicting the process at unobserved locations. We employ Bayesian methodology for both estimation and prediction. Comparisons between the new method and a Generalized Additive Model (GAM) are made to test the performance of the prediction. Although a large variety of copula functions exists, only a few are practically manageable, and in certain problems one would like to choose the Gaussian copula to model the dependence. Furthermore, most copulas are exchangeable, thus implying symmetric dependence, yet none of them is flexible enough to capture tailed (upper- or lower-tailed) distributions as well as elliptical distributions. An elliptical copula is the copula corresponding to an elliptical distribution by Sklar's theorem, so it can be used appropriately and effectively only to fit elliptical distributions. In reality, however, data may be better described by a "fat-tailed" or "tailed" copula than by an elliptical one. This dissertation proposes a novel pseudo-copula (the modified Gaussian pseudo-copula), based on the Gaussian copula, to model dependencies in multivariate data. Our modified Gaussian pseudo-copula differs from the standard Gaussian copula in that it can model tail dependence. It captures properties from both elliptical copulas and Archimedean copulas. The modified Gaussian pseudo-copula and its properties are described, with a focus on issues related to the dependence of extreme values. We give the characteristics of our pseudo-copula in the bivariate case, which can be extended to multivariate cases easily.
The proposed pseudo-copula is assessed by estimating the measure of association from two real data sets, one from finance and one from insurance. A simulation study is done to test the goodness-of-fit of this new model.
Graduation date: 2012
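The limitation motivating the modified pseudo-copula — that the standard Gaussian copula exhibits vanishing tail dependence — can be checked empirically. The sketch below compares a Gaussian copula against a Student-t copula at the same correlation; the sample sizes and parameters are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, rho, q, df = 200_000, 0.7, 0.95, 3
cov = np.array([[1.0, rho], [rho, 1.0]])

# Gaussian copula sample: correlated normals mapped to uniform margins.
z = rng.multivariate_normal(np.zeros(2), cov, size=n)
u_gauss = stats.norm.cdf(z)

# t-copula sample: scale the same normals by a chi-square mixing variable.
w = np.sqrt(rng.chisquare(df, size=n) / df)
u_t = stats.t.cdf(z / w[:, None], df)

def upper_tail(u, q):
    """Empirical P(U2 > q | U1 > q): a finite-level tail-dependence proxy."""
    hit = u[:, 0] > q
    return np.mean(u[hit, 1] > q)

dep_gauss = upper_tail(u_gauss, q)
dep_t = upper_tail(u_t, q)
```

At the same overall correlation the t-copula produces markedly more joint exceedances of the 95% quantile; as q → 1 the Gaussian proxy decays to zero while the t-copula's stays bounded away from it, which is the gap the modified pseudo-copula is designed to fill.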
24

Pavão, Diogo Cláudio. "Comparison of discrete and continuum community models : insights from numerical ecology and Bayesian methods applied to Azorean plant communities". Master's thesis, 2018. http://hdl.handle.net/10400.3/4655.

Full text
Abstract
Master's dissertation, Biodiversity and Biotechnology, 6 February 2018, Universidade dos Açores.
ABSTRACT: Our view of community ecology has evolved over time, beginning with two extreme visions of plant communities, which were considered either as species associations driven by random coincidences or as complex organisms with clear interdependencies. The phytosociological approach was mainly linked to the latter, including the development of syntaxonomy. More recently, biological communities have tended to be viewed as a set of local community assemblages linked by the dispersal of multiple, potentially interacting species (i.e. a metacommunity), a concept that has been used to explain spatio-temporal dynamics. Several models have been proposed to explain the distribution patterns of species and communities along environmental gradients, ranging from discrete, individual community types to a continuum of plant communities. The Azorean natural vegetation is a good study model for testing those hypotheses, since it has been described in detail by several authors, creating the opportunity to address theoretical questions within a conceptual metacommunity framework. Through a combination of numerical ecology and Bayesian analyses applied to natural plant community data from the Azores archipelago, the present study evaluated whether the data support the existence of "discrete community types" or of a "continuum of communities". We used hierarchical clustering (Hellinger distance and UPGMA) and non-hierarchical clustering (k-means), as well as a multinomial model in a Bayesian context, to determine the number of plant community groups. A total of 139 plant communities and 85 species were sampled on five islands. The optimum number of plant community groups ranged from 4 to 6 for hierarchical clustering, neared 43 for non-hierarchical clustering, and was about 70 for the multinomial analysis.
The elevation distribution curves estimated for the vascular plant species suggest that species distributions are determined by physiological limits at the extremes and by competition under intermediate conditions, but with some niche partitioning between dominant species. Our results are more in agreement with an ecological view of communities as a continuum than with the existence of discrete community types. Understanding these patterns is an essential ingredient in sustainable vegetation management, especially in this unique natural laboratory, the Azores.
Towards an Ecological and Economic valorization of the Azorean Forest. Forest-Eco2 n.º ACORES-01-0145-FEDER-000014. Programa Operacional AÇORES 2020 - Região Autónoma dos Açores
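The non-hierarchical clustering step can be sketched with simulated data (the two community types and their abundances are invented, not the 139 sampled Azorean relevés): site-by-species counts are Hellinger-transformed and then partitioned with k-means.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)

# Hypothetical site-by-species abundance matrix: 10 sites of each of two
# community types, four species with contrasting mean abundances.
type_a = rng.poisson([10.0, 8.0, 1.0, 0.5], size=(10, 4))
type_b = rng.poisson([0.5, 1.0, 9.0, 12.0], size=(10, 4))
sites = np.vstack([type_a, type_b]).astype(float)

# Hellinger transform: square root of the relative abundance in each site.
rel = sites / sites.sum(axis=1, keepdims=True)
hell = np.sqrt(rel)

# Non-hierarchical clustering of the transformed sites into two groups.
centroids, labels = kmeans2(hell, 2, seed=1, minit="++")
```

The Hellinger transform downweights dominant species and makes Euclidean distance ecologically meaningful before k-means; with such well-separated (invented) community types the two clusters recover the two site groups exactly, whereas on real continuum-like data the optimal number of groups is far less clear-cut — the point of the thesis's comparison.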