
Dissertations on the topic "Bayesian Structural Time Series Models"


Consult the top 50 dissertations for your research on the topic "Bayesian Structural Time Series Models."

Next to every source in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Murphy, James Kevin. "Hidden states, hidden structures : Bayesian learning in time series models." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/250355.

Full text of the source
Abstract:
This thesis presents methods for the inference of system state and the learning of model structure for a number of hidden-state time series models, within a Bayesian probabilistic framework. Motivating examples are taken from application areas including finance, physical object tracking and audio restoration. The work in this thesis can be broadly divided into three themes: system and parameter estimation in linear jump-diffusion systems, non-parametric model (system) estimation and batch audio restoration. For linear jump-diffusion systems, efficient state estimation methods based on the variable rate particle filter are presented for the general linear case (chapter 3) and a new method of parameter estimation based on Particle MCMC methods is introduced and tested against an alternative method using reversible-jump MCMC (chapter 4). Non-parametric model estimation is examined in two settings: the estimation of non-parametric environment models in a SLAM-style problem, and the estimation of the network structure and forms of linkage between multiple objects. In the former case, a non-parametric Gaussian process prior model is used to learn a potential field model of the environment in which a target moves. Efficient solution methods based on Rao-Blackwellized particle filters are given (chapter 5). In the latter case, a new way of learning non-linear inter-object relationships in multi-object systems is developed, allowing complicated inter-object dynamics to be learnt and causality between objects to be inferred. Again based on Gaussian process prior assumptions, the method allows the identification of a wide range of relationships between objects with minimal assumptions and admits efficient solution, albeit in batch form at present (chapter 6). Finally, the thesis presents some new results in the restoration of audio signals, in particular the removal of impulse noise (pops and clicks) from audio recordings (chapter 7).
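As a concrete anchor for the state-estimation theme above, the sketch below implements a plain bootstrap particle filter on a toy linear-Gaussian local-level model. It is only a hedged illustration: the thesis's variable rate particle filter handles jump times with dedicated machinery, and every value here (q, r, the particle count) is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative local-level model (assumed, not from the thesis):
#   x_t = x_{t-1} + q*eps_t,   y_t = x_t + r*eta_t
q, r, T, N = 0.3, 0.5, 100, 1000

# Simulate synthetic data
x = np.cumsum(q * rng.standard_normal(T))
y = x + r * rng.standard_normal(T)

particles = rng.standard_normal(N)            # initial particle cloud
estimates = np.empty(T)
for t in range(T):
    particles += q * rng.standard_normal(N)   # propagate through the dynamics
    logw = -0.5 * ((y[t] - particles) / r) ** 2   # Gaussian log-likelihood weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = w @ particles               # filtered mean E[x_t | y_{1:t}]
    idx = rng.choice(N, size=N, p=w)            # multinomial resampling
    particles = particles[idx]

print("RMSE of filtered mean:", np.sqrt(np.mean((estimates - x) ** 2)))
```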
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Wigren, Richard, and Filip Cornell. "Marketing Mix Modelling: A comparative study of statistical models." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160082.

Full text of the source
Abstract:
Deciding the optimal media advertisement spending is a complex issue that many companies today are facing. With the rise of new ways to market products, the choices can appear infinite. One methodical way to approach this is Marketing Mix Modelling (MMM), in which statistical modelling is used to attribute sales to media spending. However, many problems arise during the modelling. Modelling and mitigation of uncertainty, time-dependencies of sales, incorporation of expert information and interpretation of models are all issues that need to be addressed. This thesis aims to investigate the effectiveness of eight different statistical and machine learning methods in terms of prediction accuracy and certainty, each one addressing one of the previously mentioned issues. It is concluded that while Shapley Value Regression has the highest certainty in terms of coefficient estimation, it sacrifices some prediction accuracy. The overall highest-performing model is the Bayesian hierarchical model, achieving both high prediction accuracy and high certainty.
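As a hedged illustration of the statistical core of MMM, the sketch below attributes synthetic sales to two media channels using a geometric adstock transform followed by conjugate Bayesian linear regression. The channel names, decay rates, priors and noise level are all assumptions made for the example, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def adstock(spend, decay):
    """Geometric adstock: carry-over effect of past spend."""
    out = np.empty_like(spend)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

T = 104                                      # two years of weekly data (assumed)
tv, search = rng.gamma(2.0, 50.0, T), rng.gamma(2.0, 30.0, T)
X = np.column_stack([np.ones(T), adstock(tv, 0.6), adstock(search, 0.2)])
beta_true = np.array([200.0, 0.8, 1.5])
sales = X @ beta_true + 20.0 * rng.standard_normal(T)

# Conjugate Bayesian linear regression: known noise sd 20, prior N(0, 100^2 I)
sigma2, tau2 = 20.0**2, 100.0**2
post_prec = X.T @ X / sigma2 + np.eye(3) / tau2
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (X.T @ sales) / sigma2
print("posterior mean of channel effects:", post_mean.round(2))
```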
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Rahier, Thibaud. "Réseaux Bayésiens pour fusion de données statiques et temporelles." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM083/document.

Full text of the source
Abstract:
Prediction and inference on temporal data are very frequently performed using time series data alone. We believe that these tasks could benefit from leveraging the contextual metadata associated with time series, such as location, type, etc. Conversely, tasks involving prediction and inference on metadata could benefit from information held within time series. However, there exists no standard way of jointly modeling both time series data and descriptive metadata. Moreover, metadata frequently contains highly correlated or redundant information, and may contain errors and missing values. We first consider the problem of learning the inherent probabilistic graphical structure of metadata as a Bayesian network. This has two main benefits: (i) once structured as a graphical model, metadata is easier to use in order to improve tasks on temporal data, and (ii) the learned model enables inference tasks on metadata alone, such as missing data imputation. However, Bayesian network structure learning is a hard mathematical challenge, involving an NP-hard optimization problem. We present a tailor-made structure learning algorithm, inspired by novel theoretical results, that exploits the (quasi-)deterministic dependencies typically present in descriptive metadata. This algorithm is tested on numerous benchmark datasets and on several industrial metadata sets containing deterministic relationships. In both cases it proved to be significantly faster than the state of the art, and even found better-performing structures on industrial data. Moreover, the learned Bayesian networks are consistently sparser and therefore more readable. We then focus on designing a model that includes both static (meta)data and dynamic data. Taking inspiration from state-of-the-art probabilistic graphical models for temporal data (dynamic Bayesian networks) and from our previously described approach to metadata modeling, we present a general methodology to jointly model metadata and temporal data as a hybrid static-dynamic Bayesian network. We propose two main algorithms associated with this representation: (i) a learning algorithm which, while being optimized for industrial data, is still generalizable to any task of static and dynamic data fusion, and (ii) an inference algorithm, enabling both the usual queries on temporal or static data alone and queries that use the two types of data jointly. We then provide results on diverse cross-field applications such as forecasting, metadata replenishment from time series and alarm dependency analysis, using data from some of Schneider Electric's challenging use cases. Finally, we discuss some of the notions introduced during the thesis, including ways to measure the generalization performance of a Bayesian network by a score inspired by the cross-validation procedure from supervised machine learning. We also propose various extensions to the algorithms and theoretical results presented in the previous chapters, and formulate some research perspectives.
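The signal this algorithm exploits can be made concrete with a small conditional-entropy check: a dependency X -> Y is deterministic when H(Y | X) = 0 and quasi-deterministic when it is close to 0. The sketch below is only illustrative; the metadata fields, values and any threshold are hypothetical.

```python
import numpy as np
from collections import Counter

def conditional_entropy(x, y):
    """H(Y | X) in bits for two discrete sequences."""
    n = len(x)
    joint = Counter(zip(x, y))
    marg = Counter(x)
    h = 0.0
    for (xv, yv), c in joint.items():
        h -= (c / n) * np.log2(c / marg[xv])   # -p(x,y) log p(y|x)
    return h

# Hypothetical metadata columns: 'site' determines 'country' exactly,
# and almost determines 'voltage' (one noisy record).
site    = ["s1", "s1", "s2", "s2", "s3", "s3", "s3"]
country = ["FR", "FR", "DE", "DE", "FR", "FR", "FR"]
voltage = [230,  230,  230,  230,  230,  110,  230]

for name, col in [("country", country), ("voltage", voltage)]:
    h = conditional_entropy(site, col)
    tag = "deterministic" if h == 0 else f"quasi-deterministic (H={h:.2f} bits)"
    print(f"site -> {name}: {tag}")
```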
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Bracegirdle, C. I. "Inference in Bayesian time-series models." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1383529/.

Full text of the source
Abstract:
Time series, data accompanied by a sequential ordering, occur and evolve all around us. Analysing a time series is the problem of trying to discern and describe a pattern in the sequential data that develops in a logical way as the series continues, and the study of sequential data has a long history across a vast array of fields, including signal processing, bioinformatics, and finance, to name but a few. Classical approaches are based on estimating the parameters of the temporal evolution of the process according to an assumed model. In the econometrics literature, the field is focussed on parameter estimation of linear (regression) models, with a number of extensions. In this thesis, I take a Bayesian probabilistic modelling approach in discrete time, and focus on novel inference schemes. Fundamentally, Bayesian analysis replaces parameter estimates by quantifying uncertainty in their values, and probabilistic inference is used to update that uncertainty based on what is observed in practice. I make three central contributions. First, I discuss a class of latent Markov models which allows a Bayesian approach to internal process resets, and show how inference in such a model can be performed efficiently, before extending the model to a tractable class of switching time series models. Second, I show how inference in linear-Gaussian latent models can be extended to allow a Bayesian approach to variance, and develop a corresponding variance-resetting model, the heteroskedastic linear dynamical system. Third, I turn my attention to cointegration, a headline topic in finance, and describe a novel estimation scheme implied by Bayesian analysis, which I show to be empirically superior to the classical approach. I offer example applications throughout and conclude with a discussion.
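A hedged sketch of the reset idea, in the spirit of run-length filtering rather than the thesis's exact recursions: a Gaussian level resets to its prior with a fixed hazard probability, and the filter carries a posterior over the time since the last reset. The hazard, prior and noise values below are assumptions made for the example.

```python
import numpy as np

def reset_filter(y, hazard=0.02, m0=0.0, v0=10.0, s2=1.0):
    """Forward filter for a Gaussian level that resets to N(m0, v0)
    with probability `hazard` at each step; returns filtered level means."""
    logw = np.array([0.0])                  # log-weights over run lengths
    m, v = np.array([m0]), np.array([v0])
    means = []
    for yt in y:
        pv = v + s2                          # predictive variance per run
        loglik = -0.5 * (np.log(2 * np.pi * pv) + (yt - m) ** 2 / pv)
        logw = logw + loglik
        logw -= logw.max()
        w = np.exp(logw); w /= w.sum()
        k = v / pv                           # conjugate update of each run's level
        m = m + k * (yt - m)
        v = v * s2 / pv
        means.append(w @ m)
        # grow runs (prob 1-h) and add a fresh reset run (prob h)
        logw = np.log(np.clip(w, 1e-300, None))
        logw = np.concatenate([[np.log(hazard)], logw + np.log(1 - hazard)])
        m = np.concatenate([[m0], m])
        v = np.concatenate([[v0], v])
    return np.array(means)

rng = np.random.default_rng(2)
levels = np.repeat([0.0, 4.0, -2.0], 60)     # two abrupt resets
obs = levels + rng.standard_normal(len(levels))
print(reset_filter(obs)[[55, 65, 125]].round(2))   # tracks 0 -> 4 -> -2
```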
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Johnson, Matthew James. "Bayesian time series models and scalable inference." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/89993.

Full text of the source
Abstract:
With large and growing datasets and complex models, there is an increasing need for scalable Bayesian inference. We describe two lines of work to address this need. In the first part, we develop new algorithms for inference in hierarchical Bayesian time series models based on the hidden Markov model (HMM), hidden semi-Markov model (HSMM), and their Bayesian nonparametric extensions. The HMM is ubiquitous in Bayesian time series models, and it and its Bayesian nonparametric extension, the hierarchical Dirichlet process hidden Markov model (HDP-HMM), have been applied in many settings. HSMMs and HDP-HSMMs extend these dynamical models to provide state-specific duration modeling, but at the cost of increased computational complexity for inference, limiting their general applicability. A challenge with all such models is scaling inference to large datasets. We address these challenges in several ways. First, we develop classes of duration models for which HSMM message passing complexity scales only linearly in the observation sequence length. Second, we apply the stochastic variational inference (SVI) framework to develop scalable inference for the HMM, HSMM, and their nonparametric extensions. Third, we build on these ideas to define a new Bayesian nonparametric model that can capture dynamics at multiple timescales while still allowing efficient and scalable inference. In the second part of this thesis, we develop a theoretical framework to analyze a special case of a highly parallelizable sampling strategy we refer to as Hogwild Gibbs sampling. Thorough empirical work has shown that Hogwild Gibbs sampling works very well for inference in large latent Dirichlet allocation models (LDA), but there is little theory to understand when it may be effective in general. By studying Hogwild Gibbs applied to sampling from Gaussian distributions we develop analytical results as well as a deeper understanding of its behavior, including its convergence and correctness in some regimes.
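The message-passing backbone that all of these models share is the HMM forward recursion, which costs O(TK^2) for T observations and K states; the duration models in the thesis are engineered so that the HSMM analogue keeps the same linear scaling in T. A log-space sketch on an assumed toy two-state Gaussian-emission HMM:

```python
import numpy as np
from scipy.special import logsumexp

def forward_loglik(log_pi, log_A, log_obs):
    """HMM forward algorithm: O(T K^2) log-likelihood.
    log_obs[t, k] = log p(y_t | z_t = k)."""
    alpha = log_pi + log_obs[0]
    for t in range(1, len(log_obs)):
        # alpha[k] = logsum_j exp(alpha[j] + log_A[j, k]) + log_obs[t, k]
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_obs[t]
    return logsumexp(alpha)

# Toy 2-state Gaussian-emission HMM (parameters assumed for illustration)
rng = np.random.default_rng(3)
A = np.array([[0.95, 0.05], [0.10, 0.90]])
mu = np.array([-1.0, 2.0])
z, ys = 0, []
for _ in range(200):
    z = rng.choice(2, p=A[z])
    ys.append(mu[z] + rng.standard_normal())
ys = np.array(ys)
log_obs = -0.5 * (ys[:, None] - mu[None, :]) ** 2 - 0.5 * np.log(2 * np.pi)
print(forward_loglik(np.log([0.5, 0.5]), np.log(A), log_obs))
```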
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Qiang, Fu. "Bayesian multivariate time series models for forecasting European macroeconomic series." Thesis, University of Hull, 2000. http://hydra.hull.ac.uk/resources/hull:8068.

Full text of the source
Abstract:
Research on and debate about the 'wise use' of explicitly Bayesian forecasting procedures has been widespread and often heated. This situation has come about partly in response to dissatisfaction with the poor forecasting performance of conventional methods and partly in view of the development of computational capacity and macro-data availability. Experience with Bayesian econometric forecasting schemes is still rather limited, but it seems to be an attractive alternative to subjectively adjusted statistical models [see, for example, Phillips (1995a), Todd (1984) and West & Harrison (1989)]. It provides effective standards of forecasting performance and has demonstrated success in forecasting macroeconomic variables. There would therefore seem to be a case for seeking some additional insights into the important role of such methods in achieving objectives within the macroeconomics profession. The primary concerns of this study, motivated by the apparent deterioration of mainstream macroeconometric forecasts of the world economy in recent years [Wallis (1989), pp. 34-43], are threefold. The first is to formalize a thorough, yet simple, methodological framework for empirical macroeconometric modelling in a Bayesian spirit. The second is to investigate whether improved forecasting accuracy is feasible within a European-based multicountry context. This is conducted with particular emphasis on the construction and implementation of Bayesian vector autoregressive (BVAR) models that incorporate both a priori and cointegration restrictions. The third is to extend the approach and apply it to the joint modelling of system-wide interactions amongst national economies. The intention is to attempt to generate more accurate answers to a variety of practical questions about the future path towards a united Europe. The use of BVARs has advanced considerably. In particular, the value of joint modelling with time-varying parameters and much more sophisticated prior distributions has been stressed in the econometric methodology literature; see e.g. Doan et al. (1984), Kadiyala and Karlsson (1993, 1997), Litterman (1986a), and Phillips (1995a, 1995b). Although trade-linked multicountry macroeconomic models may not be able to clarify all the structural and finer economic characteristics of each economy, they do provide a flexible and adaptable framework for the analysis of global economic issues. In this thesis, the forecasting record for the main European countries is examined using the 'post mortem' of IMF, OECD and EEC sources. The formulation, estimation and selection of BVAR forecasting models, carried out using the Microfit, MicroTSP, PcGive and RATS packages, are reported. Practical applications of BVAR models especially address the issues of whether combinations of forecasts explicitly outperform the forecasts of a single model, and whether the recent failures of multicountry forecasts can be attributed to an increase in the 'internal volatility' of the world economic environment; see Artis and Holly (1992), and Barrell and Pain (1992, p. 3). The research undertaken consolidates existing empirical and theoretical knowledge of BVAR modelling. It provides a unified coverage of economic forecasting applications and develops a common, effective and progressive methodology for the European economies.
The empirical results show that, in simulated 'out-of-sample' forecasting exercises, the gains in forecast accuracy from imposing prior and long-run constraints are statistically significant, especially for small estimation sample sizes and long forecast horizons.
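For a single equation, the BVAR machinery amounts to shrinking least-squares coefficients toward a random walk under Minnesota-style prior variances. The sketch below shows this on an assumed bivariate VAR(1); the tightness values lam and cross are illustrative, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an assumed bivariate VAR(1): y_t = B y_{t-1} + e_t
B = np.array([[0.7, 0.1], [0.0, 0.5]])
T = 120
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = Y[t - 1] @ B.T + rng.standard_normal(2)

X, y = Y[:-1], Y[1:]

# Minnesota-style prior, equation by equation: prior mean = random walk,
# tighter variance on the cross-variable lag (lam = overall tightness).
lam, cross = 0.2, 0.5
for i in range(2):
    prior_mean = np.eye(2)[i]                      # own lag -> 1, cross lag -> 0
    prior_var = np.where(np.arange(2) == i, lam**2, (lam * cross) ** 2)
    P0 = np.diag(1.0 / prior_var)                  # prior precision
    post_prec = P0 + X.T @ X                       # noise variance taken as 1
    post_mean = np.linalg.solve(post_prec, P0 @ prior_mean + X.T @ y[:, i])
    print(f"equation {i}: posterior mean coefficients {post_mean.round(2)}")
```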
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Fernandes, Cristiano Augusto Coelho. "Non-Gaussian structural time series models." Thesis, London School of Economics and Political Science (University of London), 1991. http://etheses.lse.ac.uk/1208/.

Full text of the source
Abstract:
This thesis aims to develop a class of state space models for non-Gaussian time series. Our models are based on distributions of the exponential family, such as the Poisson, the negative-binomial, the binomial and the gamma. In these distributions the mean is allowed to change over time through a mechanism which mimics a random walk. By adopting a closed sampling analysis we are able to derive finite dimensional filters, similar to the Kalman filter. These are then used to construct the likelihood function and to make forecasts of future observations. In fact for all the specifications here considered we have been able to show that the predictions give rise to schemes based on an exponentially weighted moving average (EWMA). The models may be extended to include explanatory variables via the kind of link functions that appear in GLIM models. This enables nonstochastic slope and seasonal components to be included. The Poisson, negative binomial and bivariate Poisson models are illustrated by considering applications to real data. Monte Carlo experiments are also conducted in order to investigate properties of maximum likelihood estimators and power studies of a post sample predictive test developed for the Poisson model.
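The EWMA structure of the forecasts is visible in closed form in the Poisson case: discounting a gamma posterior between observations keeps its mean but inflates its variance, and the resulting one-step forecast mean is an exponentially weighted average of past counts. A minimal sketch along those lines (the discount factor and data are assumed):

```python
import numpy as np

def poisson_gamma_filter(y, omega=0.8, a0=1.0, b0=1.0):
    """Closed-form filter for a Poisson level model with gamma conjugacy.
    Between observations the gamma(a, b) posterior is discounted by omega,
    which inflates its variance while keeping its mean: the resulting
    one-step forecast mean a/b is an EWMA of past counts."""
    a, b = a0, b0
    forecasts = []
    for yt in y:
        a, b = omega * a, omega * b      # discount step (random-walk evolution)
        forecasts.append(a / b)          # one-step-ahead forecast mean
        a, b = a + yt, b + 1.0           # conjugate Poisson update
    return np.array(forecasts)

counts = np.array([3, 4, 2, 5, 7, 8, 6, 9, 11, 10])
print(poisson_gamma_filter(counts).round(2))
```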
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Queen, Catriona M. "Bayesian graphical forecasting models for business time series." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/4321/.

Full text of the source
Abstract:
This thesis develops three new classes of Bayesian graphical models to forecast multivariate time series. Although these models were originally motivated by the need for flexible and tractable forecasting models appropriate for modelling competitive business markets, they are of theoretical interest in their own right. Multiregression dynamic models are defined to preserve certain conditional independence structures over time. Although these models are typically very non-Gaussian, it is proved that they are simple to update, amenable to practical implementation, and promise more efficient identification of causal structures in a time series than has been possible in the past. Dynamic graphical models are defined for multivariate time series for which there is believed to be symmetry between certain subsets of variables and a causal driving mechanism between these subsets. They are a specific type of graphical chain model (Wermuth & Lauritzen, 1990) which are once again typically non-Gaussian. Dynamic graphical models are a combination of multiregression dynamic models and multivariate regression models (Quintana, 1985, 1987; Quintana & West, 1987, 1988) and, as such, they inherit the simplicity of both these models. Partial segmentation models extend the work of Dickey et al. (1987) to the study of models with latent conditional independence structures. Conjugate Bayesian analyses are developed for processes whose probability parameters are hypothesised to be dependent, using the fact that a certain likelihood separates given a matrix of likelihood ratios. It is shown how these processes can be represented by undirected graphs and how these help in their reparameterisation into conjugate form.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Pope, Kenneth James. "Time series analysis." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318445.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Surapaitoolkorn, Wantanee. "Bayesian inference for volatility models in financial time series." Thesis, Imperial College London, 2006. http://hdl.handle.net/10044/1/1249.

Full text of the source
Abstract:
The aim of the thesis is to study the two principal volatility models used in financial time series, and to perform inference using a Bayesian approach. The first model is the deterministic time-varying volatility model, represented by autoregressive conditional heteroscedastic (ARCH) models. The second is the stochastic time-varying volatility, or stochastic volatility (SV), model. The thesis concentrates on financial foreign exchange (FX) data, including time series for four Asian countries, Thailand, Singapore, Japan and Hong Kong, and FX data sets from other countries. The time period covered by this FX data set includes the biggest recent crisis in Asian financial markets, in 1997. The analysis involves exploring high-frequency financial FX data, where the data sets used are the daily and hourly opening FX rates. The key development of the thesis is the implementation of changepoint models to allow for non-stationarity in the volatility process. The changepoint approach has only rarely been implemented for volatility data. In this thesis, the changepoint model for SV-type volatility structures is formulated. The variable-dimensional nature of the inference problem, namely that the number as well as the locations of the volatility changepoints are unknown, is acknowledged and incorporated, as is the potentially leptokurtic nature of financial returns. The Bayesian computational approach used for making inference about the model parameters is Markov chain Monte Carlo (MCMC). Another contribution of this thesis is the study of reparameterizations of parameters in both ARCH and SV models. The objective is to improve the performance of the MCMC method.
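A hedged sketch of Bayesian computation for the simplest member of the first model class, an ARCH(1) with Gaussian errors, using random-walk Metropolis with flat priors on the admissible region. The proposal scale, priors and simulated data are assumptions made for the example, not the thesis's samplers or its FX data.

```python
import numpy as np

rng = np.random.default_rng(5)

def arch1_loglik(theta, y):
    """Log-likelihood of ARCH(1): sigma2_t = w + alpha * y_{t-1}^2."""
    w, alpha = theta
    if w <= 0 or not (0 <= alpha < 1):
        return -np.inf
    s2 = w + alpha * y[:-1] ** 2
    return -0.5 * np.sum(np.log(2 * np.pi * s2) + y[1:] ** 2 / s2)

# Simulate returns from an assumed ARCH(1)
T, w_true, a_true = 2000, 0.1, 0.4
y = np.zeros(T)
for t in range(1, T):
    y[t] = np.sqrt(w_true + a_true * y[t - 1] ** 2) * rng.standard_normal()

# Random-walk Metropolis over (w, alpha)
theta = np.array([0.05, 0.2])
lp = arch1_loglik(theta, y)
draws = []
for it in range(5000):
    prop = theta + 0.02 * rng.standard_normal(2)
    lp_prop = arch1_loglik(prop, y)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    draws.append(theta)
print("posterior means:", np.mean(draws[2500:], axis=0).round(3))
```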
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Hossain, Shahadat. "Complete Bayesian analysis of some mixture time series models." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/complete-bayesian-analysis-of-some-mixture-time-series-models(6746d653-e08f-4866-ace9-29586f8160f6).html.

Full text of the source
Abstract:
In this thesis we consider some finite mixture time series models in which each component is following a well-known process, e.g. AR, ARMA or ARMA-GARCH process, with either normal-type errors or Student-t type errors. We develop MCMC methods and use them in the Bayesian analysis of these mixture models. We introduce some new models such as mixture of Student-t ARMA components and mixture of Student-t ARMA-GARCH components with complete Bayesian treatments. Moreover, we use component precision (instead of variance) with an additional hierarchical level which makes our model more consistent with the MCMC moves. We have implemented the proposed methods in R and give examples with real and simulated data.
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Kwan, Tan Hwee. "Robust estimation for structural time series models." Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/2809/.

Full text of the source
Abstract:
This thesis aims at developing robust methods of estimation in order to draw valid inference from contaminated time series. We concentrate on additive and innovation outliers in structural time series models, using a state space representation. The parameters of interest are the state, the hyperparameters and the coefficients of explanatory variables. Three main contributions evolve from the research. Firstly, a filter named the approximate Gaussian sum filter is proposed to cope with noisy disturbances in both the transition and measurement equations. Secondly, the Kalman filter is robustified by carrying over the M-estimation of scale for i.i.d. observations to time-dependent data. Thirdly, robust regression techniques are implemented to modify the generalised least squares transformation procedure to deal with explanatory variables in time series models. All the above procedures are tested against standard non-robust estimation methods for time series by means of simulations. Two real examples are also included.
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Rivera, Pablo Marshall. "Analysis of a cross-section of time series using structural time series models." Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/13/.

Full text of the source
Abstract:
This study deals with multivariate structural time series models, and in particular with the analysis and modelling of cross-sections of time series. In this context, no cause-and-effect relationships are assumed between the time series, although they are subject to the same overall environment. The main motivations in the analysis of cross-sections of time series are (i) the gains in efficiency in the estimation of the irregular, trend and seasonal components, and (ii) the analysis of models with common effects. The study contains essentially two parts. The first one considers models with a general specification for the correlation of the irregular, trend and seasonal components across the time series. Four structural time series models are presented, and the estimation of the components of the time series, as well as the estimation of the parameters which define these components, is discussed. The second part of the study deals with dynamic error components models where the irregular, trend and seasonal components are generated by common, as well as individual, effects. The extension to models for multivariate observations of cross-sections is also considered. Several applications of the methods studied are presented. Particularly relevant is an econometric study of the demand for energy in the U.K.
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Marriott, John M. "Bayesian numerical and approximation techniques for ARMA time series." Thesis, University of Nottingham, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329935.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Ehlers, Ricardo Sandes. "Bayesian model discrimination for time series and state space models." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/843599/.

Full text of the source
Abstract:
In this thesis, a Bayesian approach is adopted to handle parameter estimation and model uncertainty in autoregressive moving average (ARMA) time series models and dynamic linear models (DLM). Bayesian model uncertainty is handled in a parametric fashion through the use of posterior model probabilities computed via Markov chain Monte Carlo (MCMC) simulation techniques. Attention is focused on reversible jump Markov chain Monte Carlo (RJMCMC) samplers, which can move between models of different dimensions, to address the problem of model order uncertainty, and strategies for proposing efficient sampling schemes in autoregressive moving average time series models and dynamic linear models are developed. The general problem of assessing convergence of the sampler in a dimension-changing context is addressed by computing estimates of the probabilities of moving to higher and lower dimensional spaces. Graphical and numerical techniques are used to compare different updating schemes. The methodology is illustrated by applying it to both simulated and real data sets, and the results of the Bayesian model selection and parameter estimation procedures are compared with the classical model selection criteria and maximum likelihood estimation.
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Barbosa, Emanuel Pimentel. "Dynamic Bayesian models for vector time series analysis & forecasting." Thesis, University of Warwick, 1989. http://wrap.warwick.ac.uk/34817/.

Full text of the source
Abstract:
This thesis considers the Bayesian analysis of general multivariate DLMs (dynamic linear models) for vector time series forecasting where the observational variance matrices are unknown. This considerably extends some previous work based on conjugate analysis for a special sub-class of vector DLMs where all marginal univariate models follow the same structure. The new methods developed in this thesis are shown to perform better than other competing approaches to vector DLM analysis, for instance the one based on the Student-t filter. Practical aspects of the implementation of the new methods, as well as some of their theoretical properties, are discussed; further model extensions are considered, including non-linear models, and applications with real and simulated data are provided.
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Lazarova, Stepana. "Long memory and structural breaks in time series models." Thesis, London School of Economics and Political Science (University of London), 2006. http://etheses.lse.ac.uk/1927/.

Full text of the source
Abstract:
This thesis examines structural breaks in time series regressions where both regressors and errors may exhibit long-range dependence. Statistical properties of methods for detecting and estimating structural breaks are analysed, and the asymptotic distributions of the estimators and test statistics are obtained. Valid bootstrap methods of approximating the limiting distribution of the relevant statistics are also developed, to improve on the asymptotic approximation in finite samples or to deal with the problem of an unknown asymptotic distribution. The performance of the asymptotic and bootstrap methods is compared through Monte Carlo experiments. A background to the concepts of structural breaks, long memory and the bootstrap is offered in the Introduction, where the main contribution of the thesis is also outlined. Chapter 1 proposes a fluctuation-type test procedure for detecting instability of slope coefficients. A first-order bootstrap approximation of the distribution of the test statistic is proposed. Chapter 2 considers estimation and testing of the time of the structural break. Statistical properties of the estimator are examined under a range of assumptions on the size of the break. Under the assumption of a shrinking break, a bootstrap approximation of the asymptotic test procedure is proposed. Chapter 3 addresses a drawback of the assumption of a fixed size of break. Under this assumption, the asymptotic distribution of the estimator of the breakpoint depends on the unknown underlying distribution of the data, and thus it is not available for inference purposes. The proposed solution is a bootstrap procedure based on a specific type of deconvolution.
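The classical starting point for break-date estimation, which the long-memory theory in this thesis modifies, is the least-squares estimator: choose the break date that minimizes the total sum of squared residuals of a mean-shift regression. A sketch under assumed i.i.d. errors (long-range dependent errors change the limit theory, not the search itself):

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed mean-shift model: break at t0 = 120 out of T = 200
T, t0 = 200, 120
y = np.where(np.arange(T) < t0, 0.0, 1.0) + rng.standard_normal(T)

def estimate_break(y, trim=0.15):
    """Least-squares break-date estimator: argmin over k of SSR(k)."""
    T = len(y)
    lo, hi = int(trim * T), int((1 - trim) * T)   # trimmed search range
    ssr = []
    for k in range(lo, hi):
        r1 = y[:k] - y[:k].mean()                  # residuals before the break
        r2 = y[k:] - y[k:].mean()                  # residuals after the break
        ssr.append(r1 @ r1 + r2 @ r2)
    return lo + int(np.argmin(ssr))

print("estimated break date:", estimate_break(y))   # close to 120
```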
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Macho, Francisco Javier Fernandez. "Estimation and testing of multivariate structural time series models." Thesis, London School of Economics and Political Science (University of London), 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308315.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Salabasis, Mickael. "Bayesian time series and panel models : unit roots, dynamics and random effects." Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics (Ekonomiska forskningsinstitutet vid Handelshögsk.) (EFI), 2004. http://www.hhs.se/efi/summary/632.htm.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Triantafyllopoulos, K. "On observational variance learning for multivariate Bayesian time series and related models." Thesis, University of Warwick, 2002. http://wrap.warwick.ac.uk/50495/.

Full text of the source
Abstract:
This thesis is concerned with variance learning in multivariate dynamic linear models (DLMs). Three new models are developed in this thesis. The first is a dynamic regression model with no distributional assumption on the unknown variance matrix. The second is an extension of a known model that enables comprehensive treatment of any missing observations. For this purpose, new distributions that replace the inverse Wishart and matrix T and that allow conjugacy are introduced. The third model is the general multivariate DLM without any precise assumptions on the error sequences and the unknown variance matrix. We derive analytic updates of the first two moments based on weak assumptions that are satisfied by the usual models. Missing observations and time-varying variances are considered in detail for every model. For the first time, deterministic and stochastic variance laws for the general multivariate DLM are presented. Also, by introducing a new distribution that replaces the matrix-beta of a previous work, we prove results on stochastic changes in variance that are in line with the missing observation analysis and variance intervention.
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Yang, Fuyu. "Bayesian inference in nonlinear univariate time series : investigation of GSTUR and SB models." Thesis, University of Leicester, 2009. http://hdl.handle.net/2381/4375.

Full text of the source
Abstract:
In the literature, many statistical models have been used to investigate the existence of a deterministic time trend, changing persistence and nonlinearity in macroeconomic and financial data. A good understanding of these properties in a univariate time series model is crucial when making forecasts. Forecasts are used in various ways, such as helping to control risks in financial institutions and assisting in setting monetary policies in central banks. Hence, evaluating the forecast capacities of statistical models and quantifying and reducing forecast uncertainties are the main concerns of forecast practitioners. In this thesis, we propose two flexible parametric models that allow the autoregressive parameters to be time-varying. One is a novel Generalised Stochastic Unit Root (GSTUR) model and the other is a Stationary Bilinear (SB) model. Bayesian inference in these two models is developed using methods on the frontier of numerical analysis. Programs, including model estimation with Markov chain Monte Carlo (MCMC), model comparison with Bayes factors, model forecasting and forecast model averaging, are developed and made available to meet the demand of economic modellers. With an application to the S&P 500 series, we find strong evidence of a deterministic trend once the persistence is allowed to change with time. By fitting the GSTUR model to monthly UK/US real exchange rate data, the Purchasing Power Parity (PPP) theory is revisited. Our findings of a changing persistence in the data suggest that the GSTUR model may reconcile the empirical findings of nonstationarity in real exchange rates with the PPP theory. The forecasting capacities of a group of nonlinear and linear models are evaluated with an application to UK inflation rates. We propose a GSTUR model to be applied with data containing as much information as possible for forecasting near-term inflation rates.
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Chen, Wilson Ye. "New Advances in Dynamic Risk Models." Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/16953.

Full text of the source
Abstract:
The central theme of the entire thesis is to explore new ways of modelling the time-varying conditional distributions of financial asset returns. Connected by this central theme, the thesis is separated into three main parts. The first part is on modelling the time-varying variances of financial returns, where the idea of flexibly modelling the news impact curve in a GARCH model is extended to build a more general functional coefficient semiparametric volatility model. It is shown that most existing GARCH models can be written as special cases of the new functional coefficient model. The coefficient function is approximated by a regression spline. An adaptive MCMC algorithm is developed to simulate from the joint posterior of knot configurations and the spline coefficients. The second part is on modelling insurance loss using the flexible Tukey family of quantile distributions, such as the g-and-h and g-and-k. The key contribution here is to propose a new estimator for the parameters of the Tukey family of distributions using the idea of L-moments. The estimator is shown to be more statistically efficient and to require less computing time than previously proposed methods. This second part serves as an important prerequisite for the last part of the thesis; two important concepts introduced in this part are fundamental to the ideas developed in the final part, namely the g-and-h quantile function and the method of L-moments. In the final part of the thesis, a functional time series model is developed and applied to model the quantile functions of high-frequency financial returns; the model can also be viewed as a time-varying model for symbolic data, where each symbolic observation is a quantile function. A key advantage (and contribution) of this symbolic model is that the likelihood function of high-frequency returns can be efficiently constructed via the g-and-h distribution and L-moments. This further allows a forecast model to be built for the entire quantile function, e.g., for the one-minute returns of an asset in one trading day. An efficient adaptive MCMC algorithm is developed for parameter estimation. A novel mixture distribution for modelling positive random variables is also proposed, which is called the Apatosaurus distribution.
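The L-moment idea can be stated concretely: compute the first four sample L-moments from probability-weighted moments of the order statistics, then match them to the theoretical L-moments of the target distribution. The sketch below computes the sample side only and checks it against the known values for a standard normal; the g-and-h matching step is omitted.

```python
import numpy as np

def sample_lmoments(x):
    """First four sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) * x) / (n * (n - 1))
    b2 = np.sum((j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

rng = np.random.default_rng(7)
l1, l2, l3, l4 = sample_lmoments(rng.standard_normal(10000))
# For N(0,1): l1 = 0, l2 = 1/sqrt(pi) ~ 0.5642, l3 = 0, tau4 = l4/l2 ~ 0.1226
print(round(l1, 3), round(l2, 3), round(l3, 3), round(l4 / l2, 3))
```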
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Yildirim, Sinan. "Maximum likelihood parameter estimation in time series models using sequential Monte Carlo." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/244707.

Full text of the source
Abstract:
Time series models are used to characterise uncertainty in many real-world dynamical phenomena. A time series model typically contains a static variable, called parameter, which parametrizes the joint law of the random variables involved in the definition of the model. When a time series model is to be fitted to some sequentially observed data, it is essential to decide on the value of the parameter that describes the data best, a procedure generally called parameter estimation. This thesis comprises novel contributions to the methodology on parameter estimation in time series models. Our primary interest is online estimation, although batch estimation is also considered. The developed methods are based on batch and online versions of expectation-maximisation (EM) and gradient ascent, two widely popular algorithms for maximum likelihood estimation (MLE). In the last two decades, the range of statistical models where parameter estimation can be performed has been significantly extended with the development of Monte Carlo methods. We provide contribution to the field in a similar manner, namely by combining EM and gradient ascent algorithms with sequential Monte Carlo (SMC) techniques. The time series models we investigate are widely used in statistical and engineering applications. The original work of this thesis is organised in Chapters 4 to 7. Chapter 4 contains an online EM algorithm using SMC for MLE in changepoint models, which are widely used to model heterogeneity in sequential data. In Chapter 5, we present batch and online EM algorithms using SMC for MLE in linear Gaussian multiple target tracking models. Chapter 6 contains a novel methodology for implementing MLE in a hidden Markov model having intractable probability densities for its observations. Finally, in Chapter 7 we formulate the nonnegative matrix factorisation problem as MLE in a specific hidden Markov model and propose online EM algorithms using SMC to perform MLE.
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Yfanti, Stavroula. "Non-linear time series models with applications to financial data." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/9247.

Full text of the source
Abstract:
The purpose of this thesis is to investigate financial volatility dynamics through the GARCH modelling framework. We use univariate and multivariate GARCH-type models enriched with long memory, asymmetries and power transformations. We study the volatility and co-volatility of financial time series, taking into account the structural breaks detected and focusing on the effects of the corresponding financial crisis events. We conclude by providing a complete framework for the analysis of volatility, with major policy implications and benefits for current risk management practices. We first investigate the volume-volatility link for different investor categories and orders around the Asian crisis, applying a univariate dual long memory model. Our analysis suggests that the behaviour of volatility depends upon volume, but also that the nature of this dependence varies with time and the source of volume. We further apply the vector AR-DCC-FIAPARCH and the UEDCC-AGARCH models to the daily returns of several stock indices, taking into account the structural breaks of the time series linked to major economic events, including crisis shocks. We find significant cross effects, time-varying shock and volatility spillovers, time-varying persistence in the conditional variances, as well as long-range volatility dependence, asymmetric volatility responses to positive and negative shocks, and the power of returns that best fits the volatility pattern. We observe higher dynamic correlations of the stock markets after a crisis event, which means increased contagion effects between the markets, a continuous herding behaviour among investors, as the in-crisis correlations remain high, and a higher level of correlations during the recent financial crisis than during the Asian one. Finally, we study the High-frEquency-bAsed VolatilitY (HEAVY) models that combine daily returns with realised volatility. We enrich the HEAVY equations through the HYAPARCH formulation to propose the HYDAP-HEAVY (HYperbolic Double Asymmetric Power) model and provide a complete framework to analyse the volatility process.
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Jähnichen, Patrick. "Time Dynamic Topic Models." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-200796.

Full text of the source
Abstract:
Information extraction from large corpora can be a useful tool for many applications in industry and academia. For instance, political communication science has just recently begun to use the opportunities that come with the availability of massive amounts of information through the Internet and the computational tools that natural language processing can provide. We give a linguistically motivated interpretation of topic modeling, a state-of-the-art algorithm for extracting latent semantic sets of words from large text corpora, and extend this interpretation to cover issues and issue-cycles as theoretical constructs coming from political communication science. We build on a dynamic topic model, a model whose semantic sets of words are allowed to evolve over time governed by a Brownian motion stochastic process, and apply a new form of analysis to its results. Generally, this analysis is based on the notion of volatility, as in the rate of change of stocks or derivatives known from econometrics. We claim that the rate of change of sets of semantically related words can be interpreted as issue-cycles, with the word sets describing the underlying issue. Generalizing over the existing work, we introduce dynamic topic models that are driven by general Gaussian processes (Brownian motion is a special case of our model), a family of stochastic processes defined by the function that determines their covariance structure. We use the above assumption and apply a certain class of covariance functions to allow for an appropriate rate of change in word sets while preserving the semantic relatedness among words. Applying our findings to a large newspaper data set, the New York Times Annotated Corpus (all articles between 1987 and 2007), we are able to identify sub-topics in time, time-localized topics, and find patterns in their behavior over time. However, we have to drop the assumption of semantic relatedness over all available time for any one topic. Time-localized topics are consistent in themselves but do not necessarily share semantic meaning with each other. They can, however, be interpreted to capture the notion of issues, and their behavior that of issue-cycles.
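The move from Brownian motion to general Gaussian processes lives entirely in the covariance function. The sketch below draws one path under the Brownian kernel min(s, t) and one under a squared-exponential kernel, whose length-scale controls the rate of change that the issue-cycle analysis relies on; the grid and hyperparameters are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0.01, 10.0, 200)

def brownian_cov(t):
    """Brownian motion kernel: k(s, t) = min(s, t)."""
    return np.minimum.outer(t, t)

def se_cov(t, length=1.0):
    """Squared-exponential kernel: smooth paths, tunable rate of change."""
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

for name, K in [("brownian", brownian_cov(t)), ("SE", se_cov(t))]:
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(t)))   # jitter for stability
    path = L @ rng.standard_normal(len(t))              # one GP sample path
    print(name, "path std over grid:", path.std().round(2))
```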
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Sando, Simon Andrew. "Estimation of a class of nonlinear time series models." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15985/1/Simon_Sando_Thesis.pdf.

Full text of the source
Abstract:
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitude, observed in additive noise, is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and there are a number of estimation schemes available. The fundamental problem when trying to estimate the parameters of these types of signals is the nonlinear characteristics of the signal, which lead to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well-known problem of the unobservability of the true noise phase curve. The methods that are currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal-to-noise ratios, their performance worsens at low signal-to-noise ratios, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients: the highest-order parameter is estimated first, its contribution removed via demodulation, and the same procedure applied to estimation of the next parameter, and so on. This is clearly an issue in that errors in the estimation of the high-order parameters affect the ability to estimate the lower-order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full-parameter iterative refinement techniques: given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques that produce statistically efficient estimators at low signal-to-noise ratios. Updating is done in a multivariable manner to remove inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes presented, which include likelihood, least squares and Bayesian estimation schemes. Other results of importance to the full estimation problem, namely when there is error in the time variable, when the amplitude is not constant, and when the model order is not known, are also considered.
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Chakraborty, Prithwish. "Data-Driven Methods for Modeling and Predicting Multivariate Time Series using Surrogates." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81432.

Full text of the source
Abstract:
Modeling and predicting multivariate time series data has been of prime interest to researchers for many decades. Traditionally, time series prediction models have focused on finding attributes that have consistent correlations with the target variable(s). However, diverse surrogate signals, such as news data and Twitter chatter, are increasingly available and can provide real-time information, albeit with inconsistent correlations. Intelligent use of such sources can lead to early and real-time warning systems such as Google Flu Trends. Furthermore, the target variables of interest, such as public health surveillance, can be noisy. Thus, models built for such data sources should be flexible as well as adaptable to changing correlation patterns. In this thesis we explore various methods of using surrogates to generate more reliable and timely forecasts for noisy target signals. We primarily investigate three key components of the forecasting problem, viz. (i) short-term forecasting, where surrogates can be employed in a now-casting framework, (ii) long-term forecasting, where surrogates act as forcing parameters to model system dynamics, and (iii) robust drift models that detect and exploit 'changepoints' in the surrogate-target relationship to produce robust models. We explore various 'physical' and 'social' surrogate sources to study these sub-problems, primarily to generate real-time forecasts for endemic diseases. On the modeling side, we employed matrix factorization and generalized linear models to detect short-term trends and explored various Bayesian sequential analysis methods to model long-term effects. Our research indicates that, in general, a combination of surrogates can lead to more robust models. Interestingly, our findings indicate that under specific scenarios particular surrogates can decrease overall forecasting accuracy, thus providing an argument for the use of 'good data' over 'big data'.
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Li, Dan. "Efficient Bayesian estimation for GARCH-type models via sequential Monte Carlo." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/180752/1/Dan_Li_Thesis.pdf.

Full text of the source
Abstract:
This thesis develops a new and principled approach for estimation, prediction and model selection for a class of challenging models in econometrics, which are used to predict the dynamics of the volatility of financial asset returns. The results of both the simulation and empirical study in this research showcased the advantages of the proposed approach, offering improved robustness and more appropriate uncertainty quantification. The new methods will enable practitioners to gain more information and evaluate different models' predictive performance in a more efficient and principled manner, for long financial time series data.
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Safari Katesari, Hadi. "Bayesian Dynamic Factor Analysis and Copula-Based Models for Mixed Data." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/dissertations/1948.

Full text of the source
Abstract:
Available statistical methodologies focus mostly on accommodating continuous variables; recently, however, dealing with count data has received considerable interest in the statistical literature. In this dissertation, we propose statistical approaches to investigate linear and nonlinear dependencies between two discrete random variables, or between discrete and continuous random variables. Copula functions are powerful tools for modeling dependencies between random variables. We derive a copula-based population version of Spearman's rho when at least one of the marginal distributions is discrete. In each case, the functional relationship between Kendall's tau and Spearman's rho is obtained. The asymptotic distributions of the proposed estimators of these association measures are derived, their corresponding confidence intervals are constructed, and tests of independence are derived. We then propose a Bayesian copula factor autoregressive model for mixed time series data. This model assumes conditional independence and shares latent factors in both the mixed-type response and the multivariate predictor variables of the time series through a quadratic time series regression model. The model is able to reduce dimensionality by accommodating latent factors in both the response and predictor variables of the high-dimensional time series data. A semiparametric time series extended rank likelihood technique is applied to the marginal distributions to handle the mixed-type predictors of the high-dimensional time series, which decreases the number of estimated parameters and provides an efficient computational algorithm. In order to update and compute the posterior distributions of the latent factors and the other model parameters, we propose a naive Bayesian algorithm with Metropolis-Hastings and Forward Filtering Backward Sampling methods. We evaluate the performance of the proposed models and methods through simulation studies. Finally, each proposed model is applied to a real dataset.
31

Frühwirth-Schnatter, Sylvia. "Applied State Space Modelling of Non-Gaussian Time Series using Integration-based Kalman-filtering." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1993. http://epub.wu.ac.at/1558/1/document.pdf.

Abstract:
The main topic of the paper is on-line filtering for non-Gaussian dynamic (state space) models by approximate computation of the first two posterior moments using efficient numerical integration. Based on approximating the prior of the state vector by a normal density, we prove that the posterior moments of the state vector are related to the posterior moments of the linear predictor in a simple way. For the linear predictor, Gauss-Hermite integration is carried out with automatic reparametrization based on an approximate posterior mode filter. We illustrate how further topics in applied state space modelling, such as estimating hyperparameters, computing model likelihoods and predictive residuals, are managed by integration-based Kalman-filtering. The methodology derived in the paper is applied to on-line monitoring of ecological time series and filtering for small count data. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
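To make the integration step concrete, here is a minimal, hedged sketch (not the author's implementation) of Gauss-Hermite approximation of the first two posterior moments for a scalar Gaussian state observed through a Poisson count, a setting close to the small-count filtering discussed above:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite rule

def poisson_posterior_moments(prior_mean, prior_var, y, n_nodes=20):
    """First two posterior moments of a Gaussian state theta ~ N(prior_mean, prior_var)
    observed through y ~ Poisson(exp(theta)), via Gauss-Hermite quadrature."""
    nodes, weights = hermegauss(n_nodes)
    theta = prior_mean + np.sqrt(prior_var) * nodes   # quadrature nodes in state space
    loglik = y * theta - np.exp(theta)                # Poisson log-likelihood (up to a constant)
    w = weights * np.exp(loglik - loglik.max())       # unnormalised posterior weights
    w /= w.sum()
    mean = np.sum(w * theta)
    var = np.sum(w * (theta - mean) ** 2)
    return mean, var

print(poisson_posterior_moments(prior_mean=1.0, prior_var=0.5, y=3))
```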
32

Kypraios, Theodore. "Efficient Bayesian inference for partially observed stochastic epidemics and a new class of semi-parametric time series models." Thesis, Lancaster University, 2007. http://eprints.lancs.ac.uk/26392/.

Abstract:
This thesis is divided into two distinct parts. In the first part we are concerned with developing new statistical methodology for drawing Bayesian inference for partially observed stochastic epidemic models. In the second part, we develop a novel methodology for constructing a wide class of semi-parametric time series models. First, we introduce a general framework for heterogeneously mixing stochastic epidemic models (HMSE) and we also review some of the existing methods of statistical inference for epidemic models. The performance of a variety of centered Markov Chain Monte Carlo (MCMC) algorithms is studied. It is found that as the number of infected individuals increases, the performance of these algorithms deteriorates. We then develop a variety of centered, non-centered and partially non-centered reparameterisations. We show that partially non-centered reparameterisations often offer more efficient MCMC algorithms than the centered ones. The methodology developed for drawing Bayesian inference efficiently for HMSE is then applied to the 2001 UK Foot-and-Mouth disease outbreak in Cumbria. Unlike other existing modelling approaches, we model the infectious period of each farm stochastically, assuming that the infection date of each farm is typically unknown. Due to the high dimensionality of the problem, standard MCMC algorithms are inefficient. Therefore, a partially non-centered algorithm is applied for the purpose of obtaining reliable estimates for the model's parameters of interest. In addition, we discuss similarities and differences of our findings in comparison to other results in the literature. The main purpose of the second part of this thesis is to develop a novel class of semi-parametric time series models. We are interested in constructing models for which we can specify in advance the marginal distribution of the observations and then build the dependence structure of the observations around them. First, we review current work concerning modelling time series with fixed non-Gaussian margins and various correlation structures. Then, we introduce a stochastic process which we term a latent branching tree (LBT). The LBT enables us to allow for a rich variety of correlation structures. Apart from discussing the tree's properties in detail, we also show how Bayesian inference can be carried out via MCMC methods. Various MCMC strategies are discussed, including non-centered parameterisations. It is found that non-centered algorithms significantly improve the mixing of some of the algorithms based on centered reparameterisations. Finally, we present an application of this class of models to a real dataset on genome scheme data.
33

Otto, Sven [Verfasser], Jörg [Gutachter] Breitung, and Dominik [Gutachter] Wied. "Three Essays on Structural Stability of Time Series Models / Sven Otto ; Gutachter: Jörg Breitung, Dominik Wied." Köln : Universitäts- und Stadtbibliothek Köln, 2019. http://d-nb.info/1197797416/34.

34

Otto, Sven [Verfasser], Jörg [Gutachter] Breitung, and Dominik [Gutachter] Wied. "Three Essays on Structural Stability of Time Series Models / Sven Otto ; Gutachter: Jörg Breitung, Dominik Wied." Köln : Universitäts- und Stadtbibliothek Köln, 2019. http://nbn-resolving.de/urn:nbn:de:hbz:38-100113.

35

Belkhouja, Mustapha. "Modelling nonlinearities in long-memory time series : simulation and empirical studies." Thesis, Aix-Marseille 2, 2010. http://www.theses.fr/2010AIX24010/document.

Abstract:
This dissertation deals with the detection and estimation of structural changes in long-memory economic and financial time series. In the first three chapters we focus on the univariate case, modelling the long-range dependence and the structural changes in the mean and the volatility of the examined series, both jointly and separately. We first take into account only abrupt regime switches, and then use more elaborate nonlinear models in order to capture smooth variations of the dynamics over time. We also analyse the efficiency of various techniques for selecting the number of breaks, and assess via simulation the robustness of the tests used in a long-memory environment. Finally, the thesis is completed by an extension to multivariate models. These models allow us to detect the impact of some series on the others and to identify the relationships among the variables as well as the nature of these links. The interdependencies between the financial variables are studied and analysed both in the short and the long run. While structural change is not considered in the last chapter, our multivariate model takes into account asymmetry effects and long-memory behaviour in the volatility.
36

Nguyen, Trong Nghia. "Deep Learning Based Statistical Models for Business and Financial Data." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/26944.

Abstract:
We investigate a wide range of statistical models commonly used in business and financial econometrics applications and propose flexible ways to combine these highly interpretable models with powerful predictive models from the deep learning literature, so as to leverage the advantages and compensate for the disadvantages of each modelling approach. Our approaches to utilizing deep learning techniques for financial data differ from the recently proposed deep learning-based models in the financial econometrics literature in several respects. First, we do not overlook well-established structures that have been successfully used in statistical modelling. We flexibly incorporate deep learning techniques into the statistical models to capture the data effects that cannot be explained by their simple linear components. Our proposed modelling frameworks therefore normally include two components: a linear part to explain linear dependencies, and a deep learning-based part to capture effects beyond linearity possibly exhibited in the underlying process. Second, we do not use the neural network structures in the same fashion as they are implemented in the deep learning literature, but modify those black-box methods to make them more explainable and hence improve the interpretability of the proposed models. As a result, our hybrid models not only perform better than pure deep learning techniques in terms of interpretation but also often produce more accurate out-of-sample forecasts than the counterpart statistical frameworks. Third, we propose advanced Bayesian inference methodologies to efficiently quantify the uncertainty about model estimation and prediction. For the proposed high-dimensional deep learning-based models, performing efficient Bayesian inference is extremely challenging and is often ignored in engineering-oriented papers, which generally prefer frequentist estimation approaches, mainly for their simplicity.
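A hedged, minimal sketch of the two-component idea described above (a linear part plus a small neural part); the function and parameter names are illustrative assumptions, not the author's implementation:

```python
import numpy as np

def hybrid_predict(X, beta, W, v):
    """Hybrid predictor: an interpretable linear component explains the linear
    dependencies, while a one-hidden-layer neural component captures effects
    beyond linearity. In practice (beta, W, v) would be estimated jointly."""
    linear = X @ beta                 # interpretable linear part
    hidden = np.tanh(X @ W)           # deep-learning part (one layer for the sketch)
    return linear + hidden @ v

# Example with random parameters, purely illustrative:
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
print(hybrid_predict(X, rng.standard_normal(3),
                     rng.standard_normal((3, 4)), rng.standard_normal(4)))
```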
37

Hay, John Leslie. "Statistical modelling for non-Gaussian time series data with explanatory variables." Thesis, Queensland University of Technology, 1999.

38

Costa, Maria da Conceição Cristo Santos Lopes. "Optimal alarms systems and its application to financial time series." Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/12872.

Abstract:
Doctorate in Mathematics
This thesis focuses on the application of optimal alarm systems to non-linear time series models. The most common classes of models in the analysis of real-valued and integer-valued time series are described. The construction of optimal alarm systems is covered and its applications explored. Among models with conditional heteroscedasticity, particular attention is given to the Fractionally Integrated Asymmetric Power ARCH, FIAPARCH(p, d, q), model, and an optimal alarm system is implemented following both classical and Bayesian methodologies. Taking into consideration the particular characteristics of the APARCH(p, q) representation for financial time series, a possible counterpart for modelling time series of counts is proposed: the INteger-valued Asymmetric Power ARCH, INAPARCH(p, q). The probabilistic properties of the INAPARCH(1, 1) model are comprehensively studied, the conditional maximum likelihood (ML) estimation method is applied, and the asymptotic properties of the conditional ML estimator are obtained. The final part of the work consists of the implementation of an optimal alarm system for the INAPARCH(1, 1) model. An application to real data series is presented.
39

Gomes, Maria Helena Rodrigues. "Uso da abordagem Bayesiana para a estimativa de parâmetros sazonais dos modelos auto-regressivos periódicos." Universidade de São Paulo, 2003. http://www.teses.usp.br/teses/disponiveis/18/18138/tde-06012016-113635/.

Abstract:
The objective of this work is to use the Bayesian approach to estimate the seasonal parameters of periodic autoregressive (PAR) models. After the Bayesian estimators are obtained, they are compared with the maximum likelihood estimators. Twelve-month-ahead forecasts are produced with both estimators and the results compared through graphs, tables and forecast errors. The hydrological time series chosen to illustrate the problem are those of the Furnas and Emborcação hydroelectric power plants. These series were selected because forecasts with small errors are needed: the operation of hydroelectric plants depends heavily on the quantity of water in their reservoirs and on effective planning and management.
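As a hedged illustration of the model class (a minimal sketch, not the Bayesian estimation procedure of the dissertation), a zero-mean periodic AR(1) with month-specific coefficients can be fitted by conditional least squares as follows:

```python
import numpy as np

def fit_par1(x, s=12):
    """Conditional least-squares estimates for a zero-mean PAR(1) model
    x_t = phi_{m(t)} * x_{t-1} + e_t, where m(t) = t mod s is the season."""
    x = np.asarray(x, dtype=float)
    t = np.arange(1, len(x))
    phi = np.empty(s)
    for m in range(s):
        idx = t[t % s == m]                       # time points falling in season m
        phi[m] = x[idx] @ x[idx - 1] / (x[idx - 1] @ x[idx - 1])
    return phi

# Simulate a PAR(1) series and recover the seasonal coefficients:
rng = np.random.default_rng(0)
true_phi = 0.3 + 0.4 * np.sin(2 * np.pi * np.arange(12) / 12)
x = np.zeros(1200)
for t in range(1, len(x)):
    x[t] = true_phi[t % 12] * x[t - 1] + rng.standard_normal()
print(np.round(fit_par1(x), 2))
```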
40

Monavari, Benyamin. "SHM-based structural deterioration assessment." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/132660/1/Benyamin%20Monavari%20Thesis.pdf.

Abstract:
This research successfully developed an effective methodology to detect and locate deterioration, as well as to estimate its severity, in the presence of environmental and operational (E&O) variations and high levels of measurement noise. It developed a novel data normalization procedure to diminish the E&O variations and the high noise content, and thirteen time-series-based deterioration indicators to detect deterioration. The proposed methods were verified utilising measured data from different numerically simulated case studies and laboratory tests, and their efficiency is demonstrated using data acquired from a real-world instrumented building.
41

Singleton, Michael David. "Nonlinear Hierarchical Models for Longitudinal Experimental Infection Studies." UKnowledge, 2015. http://uknowledge.uky.edu/epb_etds/7.

Abstract:
Experimental infection (EI) studies, involving the intentional inoculation of animal or human subjects with an infectious agent under controlled conditions, have a long history in infectious disease research. Longitudinal infection response data often arise in EI studies designed to demonstrate vaccine efficacy, explore disease etiology, pathogenesis and transmission, or understand the host immune response to infection. Viral loads, antibody titers, symptom scores and body temperature are a few of the outcome variables commonly studied. Longitudinal EI data are inherently nonlinear, often with single-peaked response trajectories with a common pre- and post-infection baseline. Such data are frequently analyzed with statistical methods that are inefficient and arguably inappropriate, such as repeated measures analysis of variance (RM-ANOVA). Newer statistical approaches may offer substantial gains in accuracy and precision of parameter estimation and power. We propose an alternative approach to modeling single-peaked, longitudinal EI data that incorporates recent developments in nonlinear hierarchical models and Bayesian statistics. We begin by introducing a nonlinear mixed model (NLMM) for a symmetric infection response variable. We employ a standard NLMM assuming normally distributed errors and a Gaussian mean response function. The parameters of the model correspond directly to biologically meaningful properties of the infection response, including baseline, peak intensity, time to peak and spread. Through Monte Carlo simulation studies we demonstrate that the model outperforms RM-ANOVA on most measures of parameter estimation and power. Next we generalize the symmetric NLMM to allow modeling of variables with asymmetric time course. We implement the asymmetric model as a Bayesian nonlinear hierarchical model (NLHM) and discuss advantages of the Bayesian approach. Two illustrative applications are provided. Finally we consider modeling of viral load. For several reasons, a normal-errors model is not appropriate for viral load. We propose and illustrate a Bayesian NLHM with the individual responses at each time point modeled as a Poisson random variable with the means across time points related through a Tricube mean response function. We conclude with discussion of limitations and open questions, and a brief survey of broader applications of these models.
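The symmetric NLMM described above uses a Gaussian mean response function whose parameters map directly to the biologically meaningful quantities named in the abstract; a minimal, hedged sketch (parameter names illustrative, not the author's code):

```python
import numpy as np

def gaussian_response(t, baseline, peak, t_peak, spread):
    """Symmetric single-peaked mean response: returns to `baseline` before and
    after infection, rising to `baseline + peak` at time `t_peak`, with width
    controlled by `spread`."""
    t = np.asarray(t, dtype=float)
    return baseline + peak * np.exp(-0.5 * ((t - t_peak) / spread) ** 2)

# Example trajectory: a viral-load-like response peaking on day 5
days = np.arange(0, 15)
print(np.round(gaussian_response(days, baseline=1.0, peak=4.0,
                                 t_peak=5.0, spread=2.0), 2))
```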
42

Silvestrini, Andrea. "Essays on aggregation and cointegration of econometric models." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210304.

Abstract:
This dissertation can be broadly divided into two independent parts. The first three chapters analyse issues related to temporal and contemporaneous aggregation of econometric models. The fourth chapter contains an application of Bayesian techniques to investigate whether the post-transition fiscal policy of Poland is sustainable in the long run and consistent with an intertemporal budget constraint.

Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models.

A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter and the main results are summarized. In each case, assuming that the underlying process at the disaggregate frequency is known, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples presented. Systematic sampling schemes are also reviewed.

Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002, Marcellino, 1999, Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation. On the other hand, as pointed out by Marcellino (1999), other important time series features such as cointegration and the presence of unit roots are invariant to temporal aggregation and are not induced by it.

Some empirical applications based on macroeconomic and financial data illustrate all the techniques surveyed and the main results.

Chapter 2 is an attempt to monitor fiscal variables in the Euro area, building an early warning signal indicator for assessing the development of public finances in the short-run and exploiting the existence of monthly budgetary statistics from France, taken as "example country".

The application is conducted focusing on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, as the policy makers are interested in yearly predictions.

The short-run forecasting exercises carried out for years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modeling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, by the middle of the year, very accurate predictions for the current year are made available.

The proposed method could be extremely useful, providing policy makers with a valuable indicator when assessing the development of public finances in the short-run (one year horizon or even less).

Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared. An aggregate predictor is built by forecasting directly the aggregate process, as it results from contemporaneous aggregation of the data generating vector process. A disaggregate predictor is a predictor obtained from aggregation of univariate forecasts for the individual components of the data generating vector process.

The econometric framework is broadly based on Lütkepohl (1987). The necessary and sufficient condition for the equality of mean squared errors associated with the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition of equality of predictors as stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is sufficient (but not necessary) for the equality of mean squared errors.

Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved using specific assumptions on the parameters of the VMA(1) structure.

Finally, an empirical application that involves the problem of forecasting the Italian monetary aggregate M1 on the basis of annual time series ranging from 1948 until 1998, prior to the creation of the European Economic and Monetary Union (EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations.

Chapter 4 deals with a cointegration analysis applied to the empirical investigation of fiscal sustainability. The focus is on a particular country: Poland. The choice of Poland is not random. First, the motivation stems from the fact that fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, this is one of the first countries to start the transition process to a market economy (since 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001). The emphasis is on the feasibility of a permanent deficit in the long-run, meaning whether a government can continue to operate under its current fiscal policy indefinitely.

The empirical analysis to examine debt stabilization is made up of two steps.

First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and (inclusive of interest) expenditures and to select the cointegrating rank. This task is complicated by the conceptual difficulty linked to the choice of the prior distributions for the parameters relevant to the economic problem under study (Villani, 2005).

Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999).

The priors used in the paper lead to straightforward posterior calculations which can be easily performed.

Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods.


Doctorate in Economics and Management Sciences
43

Vikström, Peter. "The big picture : a historical national accounts approach to growth, structural change and income distribution in Sweden 1870-1990." Doctoral thesis, Umeå universitet, Institutionen för ekonomisk historia, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-59808.

Abstract:
One fundamental point of departure for this thesis is the importance of addressing all three basic economic research questions - what is produced, with what, and for whom - and including them in the discussion of long-term macroeconomic performance. Put differently, a consistent historical national accounts approach in which both production and distribution are included can significantly enhance research on macroeconomic historical issues. Built upon this foundation, the objective of this thesis is twofold. First, it includes broadening the empirical database of the Swedish historical national accounts (SHNA) with accounts for the horizontal distribution of income. Second, it consists of conducting analyses of Swedish macroeconomic development using the extended SHNA database. An important aspect of the analytical objective involves the exploration of methods that had not been widely applied in Swedish economic-historical research; thus, great emphasis is placed on the methodology used in the analyses of macroeconomic development. These two main objectives form the disposition of the thesis. The first empirical part consists of work with the income accounts in the SHNA. This work has resulted in the establishment of a set of income accounts concurring with the procedure recommended in the contemporary national accounting system. In the second part of the thesis, selected macroeconomic issues are examined using the extended SHNA database. The first analysis consists of a closer examination of the presence of periodization patterns in Swedish growth and structural change, applying structural time series models to the SHNA series. The main result of this chapter is that the time series on growth and structural change reveal a pattern that is not unconditionally consistent with the prevailing periodization pattern recognised in Swedish economic-historical research; instead, the development pattern reveals features found in international research. The next analysis is concerned with the role of specific institutions in the growth slow-down that occurred from the late 1960s and throughout the 1970s and 1980s, examining the importance of the corporate tax system, investment funds and the public pension funds for the efficiency of the resource allocation process. The hypothesis examined is that these institutional arrangements altered the distribution of income in such a way that investment allocation was disturbed, leading to inefficiencies that affected long-term growth negatively. This hypothesis is supported by empirical evidence on changes in the income distribution and changes in long-term rates of growth and structural change. Thus, the investigated institutional arrangements to a certain extent had a negative effect on Swedish economic performance during the 1960s to the 1980s. In the final analytical chapter, the objective is mainly methodological. Here, the focus is on the potential application of CGE models as a tool for examining Swedish macroeconomic history. A fairly straightforward CGE model is formulated for the period 1910 to 1930 and estimated using the broadened SHNA. The predictions of the model are evaluated against the actual historical development in order to assess its performance.
As the model formulated in this chapter generates accurate predictions of the main macroeconomic indicators, it is subsequently used in a counterfactual analysis of the impact of total factor productivity growth on overall growth performance. In summary, the thesis demonstrates that much can be achieved in research on Swedish macroeconomic development by utilizing new theoretical approaches and applying state-of-the-art analysis methods as a complement to the structural-analytical research conducted previously. However, much research is still required, especially on the improvement of the macroeconomic database, where one priority is to create detailed and consistent input-output tables and social accounting matrices.
44

Gutierrez, Karen Fiorella Aquino. "Modelagem da volatilidade em séries temporais financeiras via modelos GARCH com abordagem Bayesiana." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/104/104131/tde-13112017-160115/.

Abstract:
In recent decades volatility has become a very important concept in finance, being used to measure the risk of financial instruments. In this work, the focus of study is the modeling of volatility, which refers to the variability of returns, a characteristic present in financial time series. As the fundamental modeling tool we use the GARCH (Generalized Autoregressive Conditional Heteroskedasticity) model, which uses conditional heteroscedasticity as a measure of volatility. Two main characteristics are modeled with the purpose of obtaining a better fit and prediction of volatility: the heavy tails and the asymmetry present in the unconditional distribution of the return series. The parameters of the proposed models are estimated using the Bayesian approach with MCMC (Markov Chain Monte Carlo) methodology, specifically the Metropolis-Hastings algorithm.
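A hedged, minimal sketch of Metropolis-Hastings estimation for a plain Gaussian GARCH(1,1) (the dissertation works with asymmetric, heavy-tailed error distributions; the flat prior and random-walk proposal below are illustrative assumptions):

```python
import numpy as np

def garch11_loglik(params, r):
    """Gaussian GARCH(1,1) log-likelihood; -inf outside the valid region."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -np.inf
    r = np.asarray(r, dtype=float)
    h = np.empty_like(r)
    h[0] = np.var(r)                                  # crude initial variance
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

def metropolis_hastings(r, n_iter=5000, step=0.01, seed=0):
    """Random-walk Metropolis-Hastings under a flat prior on the valid region."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.1 * np.var(r), 0.1, 0.8])     # crude starting values
    ll = garch11_loglik(theta, r)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(3)  # random-walk proposal
        ll_prop = garch11_loglik(prop, r)
        if np.log(rng.uniform()) < ll_prop - ll:      # accept/reject step
            theta, ll = prop, ll_prop
        draws.append(theta.copy())
    return np.array(draws)

# Demo on simulated Gaussian GARCH(1,1) returns:
rng = np.random.default_rng(1)
h, r = 0.2, []
for _ in range(1000):
    h = 0.05 + 0.1 * (r[-1] ** 2 if r else 0.0) + 0.8 * h
    r.append(np.sqrt(h) * rng.standard_normal())
draws = metropolis_hastings(np.array(r))
print(draws[2500:].mean(axis=0))   # posterior means after burn-in
```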
45

Fioruci, José Augusto. "Modelagem de volatilidade via modelos GARCH com erros assimétricos: abordagem Bayesiana." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-05092012-101345/.

Abstract:
Volatility modeling plays a fundamental role in Econometrics. This dissertation studies the generalization of the well-known autoregressive conditionally heteroskedastic (GARCH) models and their main multivariate generalization, the DCC-GARCH (Dynamic Conditional Correlation GARCH) models. Possibly asymmetric and leptokurtic probability distributions are considered for the errors of these models, parameterized as a function of the asymmetry and of the weight in the tails, so that these additional parameters must be estimated along with the models. The estimation of the model parameters is carried out under the Bayesian approach and, due to the complexity of these models, computational methods based on Markov Chain Monte Carlo (MCMC) simulations are used. For greater computational efficiency, the algorithms for simulating from the posterior distribution of the parameters are implemented in a low-level language. Finally, the proposed modeling and estimation approach is illustrated with two real data sets.
46

Jebreen, Kamel. "Modèles graphiques pour la classification et les séries temporelles." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0248/document.

Abstract:
First, in this dissertation, we show that Bayesian network classifiers are very accurate models when compared to other classical machine learning methods. Discretising input variables often increases the performance of Bayesian network classifiers, as does a feature selection procedure. Different types of Bayesian networks may be used for supervised classification. We combine such approaches together with feature selection and discretisation to show that such a combination gives rise to powerful classifiers. A large choice of data sets from the UCI Machine Learning repository is used in our experiments, and an application to epilepsy type prediction based on PET scan data confirms the efficiency of our approach. Second, we also consider modelling the interaction between a set of variables in the context of time series and high dimension. We suggest two approaches: the first is similar to the neighbourhood lasso, with the lasso model replaced by Support Vector Machines (SVMs); the second is a restricted Bayesian network for time series, in which the variables observed at each instant and at the previous instant are used in a network whose structure is restricted. We demonstrate the efficiency of our approaches through simulations using linear and nonlinear data sets and a mixture of both.
47

Campos, Celso Vilela Chaves. "Previsão da arrecadação de receitas federais: aplicações de modelos de séries temporais para o estado de São Paulo." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/96/96131/tde-12052009-150243/.

Abstract:
The main objective of this work is to offer alternative methods for forecasting federal tax revenues, based on time series methodologies, including the use of explanatory variables that reflect the influence of the macroeconomic scenario on tax collection, with the aim of improving forecast accuracy. To this end, univariate and multivariate dynamic model methodologies were applied, namely Transfer Function, Vector Autoregression (VAR), VAR with error correction (VEC) and Simultaneous Equations models, as well as Structural models. The work has a regional scope and is limited to the analysis of three monthly revenue series - the Import Duty, the Corporate Income Tax and the Contribution for the Financing of Social Security (Cofins) - within the jurisdiction of the state of São Paulo, over the period from 2000 to 2007. The forecasting results of the models above were compared with each other, with ARIMA modeling and with the indicators method currently used by the Secretaria da Receita Federal do Brasil (RFB) for annual tax revenue forecasting, by means of the root mean squared forecast error (RMSE). The average reduction in RMSE was 42% relative to the error committed by the indicators method and 35% relative to ARIMA modeling, besides a drastic reduction in the annual forecast error. The use of time series methodologies for forecasting federal revenues proved to be a viable alternative to the indicators method, contributing to more accurate predictions and becoming a reliable support tool for managers' decision making.
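For reference, the comparison metric is straightforward; a minimal sketch (the series values below are hypothetical, purely for illustration):

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared forecast error between an observed series and forecasts."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

# Hypothetical monthly revenue figures and two competing forecasts:
y = [100.0, 103.0, 98.0, 110.0]
f_indicators = [95.0, 108.0, 101.0, 104.0]
f_transfer = [99.0, 104.0, 97.0, 108.0]
reduction = 1 - rmse(y, f_transfer) / rmse(y, f_indicators)
print(f"RMSE reduction: {reduction:.0%}")
```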
48

Aquino, Gutierrez Karen Fiorella. "Modelagem da volatilidade em séries temporais financeiras via modelos GARCH com abordagem bayesiana." Universidade Federal de São Carlos, 2017. https://repositorio.ufscar.br/handle/ufscar/9340.

Abstract:
In recent decades volatility has become a very important concept in finance, being used to measure the risk of financial instruments. In this work, the focus of study is the modeling of volatility, which refers to the variability of returns, a characteristic present in financial time series. As the fundamental modeling tool we use the GARCH (Generalized Autoregressive Conditional Heteroskedasticity) model, which uses conditional heteroscedasticity as a measure of volatility. Two main characteristics are modeled with the purpose of obtaining a better fit and prediction of volatility: the heavy tails and the asymmetry present in the unconditional distribution of the return series. The parameters of the proposed models are estimated using the Bayesian approach with MCMC (Markov Chain Monte Carlo) methodology, specifically the Metropolis-Hastings algorithm.
49

Sarferaz, Samad. "Essays on business cycle analysis and demography." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2010. http://dx.doi.org/10.18452/16151.

Abstract:
The thesis consists of four essays, which make empirical and methodological contributions to the fields of business cycle analysis and demography. The first essay presents insights on U.S. business cycle volatility since 1867, derived from a Bayesian dynamic factor model. It finds that volatility increased in the interwar period, a pattern that is reversed after World War II. While evidence of postwar moderation relative to pre-1914 can be generated, this evidence is not robust to structural change, implemented through time-varying factor loadings. The second essay scrutinizes Bayesian features in dynamic index models. It shows that large-scale datasets can be used in levels throughout the whole analysis, without any prior assumption on persistence, and shows how to determine the number of factors accurately by computing the Bayes factor. The third essay presents a new way to model age-specific mortality rates. Covariates are incorporated and their dynamics are modeled jointly with the latent variables underlying the mortality of all age classes. In contrast to the literature, a similar development of adjacent age groups is assured, allowing for consistent forecasts. The essay demonstrates that time series of covariates contain predictive power for age-specific rates. Furthermore, parameter uncertainty in particular is found to be important for long-run forecasts, implying that ignoring it might yield misleadingly precise predictions. In the fourth essay the model developed in the third essay is used to conduct a structural analysis of macroeconomic fluctuations and age-specific mortality rates. The results reveal that the mortality of young adults, with respect to business cycles, differs noticeably from that of the rest of the population, implying that differentiating closely between particular age classes might be important in order to avoid spurious results.
50

Stuart, Graeme. "Monitoring energy performance in local authority buildings." Thesis, De Montfort University, 2011. http://hdl.handle.net/2086/4964.

Abstract:
Energy management has been an important function of organisations since the oil crisis of the mid-1970s led to hugely increased costs of energy. Although the financial costs of energy are still important, the environmental costs of fossil-fuel energy are gaining recognition and importance. Legislation is also a key driver: the UK has set an ambitious greenhouse gas (GHG) reduction target of 80% of 1990 levels by 2050 in response to a strong international commitment to reduce GHG emissions globally. This work is concerned with the management of energy consumption in buildings through the analysis of energy consumption data. Buildings are a key source of emissions, with a wide range of energy-consuming equipment, such as photocopiers, refrigerators, boilers, air-conditioning plant and lighting, delivering services to the building occupants. Energy wastage can be identified through an understanding of consumption patterns and, in particular, of changes in these patterns over time. Changes in consumption patterns may have any number of causes: a fault in heating controls; a boiler or lighting replacement scheme; or a change in working practice entirely unrelated to energy management. Standard data analysis techniques such as degree-day modelling and CUSUM provide a means to measure and monitor consumption patterns. These techniques were designed for use with monthly billing data. Modern energy metering systems automatically generate data at half-hourly or better resolution, and standard techniques are not designed to capture the detailed information contained in this comparatively high-resolution data. The introduction of automated metering also introduces the need for automated analysis. This work assumes that consumption patterns are generally consistent in the short term but will inevitably change. A novel statistical method is developed which builds automated event detection into a new consumption modelling algorithm. Understanding these changes to consumption patterns is critical to energy management. Leicester City Council has provided half-hourly data from over 300 buildings covering up to seven years of consumption (a total of nearly 50 million meter readings). Automatic event detection pinpoints and quantifies over 5,000 statistically significant events in the Leicester dataset. It is shown that the total impact of these events is a decrease in overall consumption. Viewing consumption patterns in this way allows for a new, event-oriented approach to energy management where large datasets are automatically and rapidly analysed to produce summary meta-data describing their salient features. These event-oriented meta-data can be used to navigate the raw data event by event and are highly complementary to strategic energy management.
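A hedged, minimal sketch of the standard techniques mentioned above - a degree-day baseline fitted by least squares and a CUSUM of its residuals (variable names and data are illustrative, not the thesis implementation):

```python
import numpy as np

def degree_day_cusum(consumption, degree_days):
    """Fit a linear degree-day baseline consumption = a + b * degree_days,
    then return the cumulative sum of residuals; a sustained drift in the
    CUSUM suggests a change in the consumption pattern."""
    consumption = np.asarray(consumption, float)
    degree_days = np.asarray(degree_days, float)
    b, a = np.polyfit(degree_days, consumption, 1)   # least-squares baseline
    residuals = consumption - (a + b * degree_days)
    return np.cumsum(residuals)

# Illustrative monthly data: consumption tracks heating degree days
dd = np.array([300.0, 260.0, 180.0, 90.0, 40.0, 10.0])
use = np.array([95.0, 86.0, 64.0, 40.0, 27.0, 18.0])
print(np.round(degree_day_cusum(use, dd), 1))
```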