Theses on the topic « Economics – Statistical models »
Browse the top 50 theses for your research on the topic « Economics – Statistical models ».
Browse theses from a variety of disciplines and organize your bibliography correctly.
Tabri, Rami. « Empirical likelihood and constrained statistical inference for some moment inequality models ». Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119408.
The main objective of this thesis is to extend empirical likelihood (EL) procedures to statistical models characterized by inequalities on unconditional moments. The thesis develops EL procedures for two types of models. In the first, the infinite-dimensional parameter of interest is the underlying probability distribution, defined by a continuum of inequalities indexed by a general class of estimating functions. The estimation theory rests on a feasible method for computing the objective function, which makes it possible to prove uniform convergence of the estimator over the set of distributions in the model. It is further shown that, for a sufficiently large sample size, its mean squared error is smaller than that of an estimator that ignores the information provided by the inequalities. Numerical algorithms for computing the estimator are developed and used in simulation experiments to study its properties in the context of infinite-order stochastic dominance. The second type of model concerns stochastic dominance (SD) between two income distributions. Asymptotic and bootstrap tests, based on the empirical likelihood ratio, are developed for the null hypothesis that a one-directional strong stochastic dominance ordering holds between the two distributions. The distributions are discrete with finite support, which allows the null hypothesis to be formulated as inequality constraints on the vector of ordinates of the dominance curves. Strong dominance requires that the null admit only one pair of equal ordinates in the interior of the support. The performance of the tests is finally studied by means of Monte Carlo simulations.
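The empirical-likelihood construction behind such inequality tests can be illustrated in miniature. The sketch below is our own toy example, not the procedure developed in the thesis: it tests H0: E[X] ≤ 0 for a single unconditional moment, tilting the EL weights onto the boundary mean of zero via a one-dimensional Lagrange multiplier found by bisection. The function name and tolerance are illustrative choices.

```python
import math

def el_ratio_mean_leq_zero(x, tol=1e-12):
    """Empirical-likelihood ratio statistic for H0: E[X] <= 0 (toy version).

    If the sample mean already satisfies the inequality, the constraint is
    slack and the statistic is 0.  Otherwise the EL weights are tilted so
    the implied mean sits on the boundary mu = 0:
        p_i = 1 / (n * (1 + lam * x_i)),
    with lam solving sum_i x_i / (1 + lam * x_i) = 0."""
    n = len(x)
    if sum(x) / n <= 0:
        return 0.0              # inequality already satisfied: LR = 0
    if min(x) >= 0:
        return float("inf")     # a zero mean is unattainable by reweighting
    # g(lam) = sum_i x_i/(1+lam*x_i) is strictly decreasing; g(0) > 0 and
    # g -> -inf as lam -> -1/min(x), so bisect on that interval.
    lo, hi = 0.0, -1.0 / min(x) - tol
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sum(xi / (1.0 + mid * xi) for xi in x) > 0:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    # -2 * sum_i log(n * p_i) = 2 * sum_i log(1 + lam * x_i)
    return 2.0 * sum(math.log(1.0 + lam * xi) for xi in x)
```

A sample whose mean already satisfies the inequality yields a statistic of exactly zero, mirroring the slack-constraint case that distinguishes inequality from equality restrictions.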
Chow, Fung-kiu, and 鄒鳳嬌. « Modeling the minority-seeking behavior in complex adaptive systems ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29367487.
Cutugno, Carmen. « Statistical models for the corporate financial distress prediction ». Thesis, Università degli Studi di Catania, 2011. http://hdl.handle.net/10761/283.
Grayson, James M. (James Morris). « Economic Statistical Design of Inverse Gaussian Distribution Control Charts ». Thesis, University of North Texas, 1990. https://digital.library.unt.edu/ark:/67531/metadc332397/.
Valero, Rafael. « Essays on Sparse-Grids and Statistical-Learning Methods in Economics ». Doctoral thesis, Universidad de Alicante, 2017. http://hdl.handle.net/10045/71368.
Donnelly, James P. « NFL Betting Market : Using Adjusted Statistics to Test Market Efficiency and Build a Betting Model ». Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/cmc_theses/721.
Putnam, Kyle J. « Two Essays in Financial Economics ». ScholarWorks@UNO, 2015. http://scholarworks.uno.edu/td/2010.
Ekiz, Funda. « Cagan Type Rational Expectations Model on Time Scales with Their Applications to Economics ». TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1126.
Wang, Junyi. « A Normal Truncated Skewed-Laplace Model in Stochastic Frontier Analysis ». TopSCHOLAR®, 2012. http://digitalcommons.wku.edu/theses/1177.
Bury, Thomas. « Collective behaviours in the stock market : a maximum entropy approach ». Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209341.
The study of the structure and collective modes of financial markets is attracting more and more attention. It has been shown that some agent-based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rule design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial systems. We considered the existence of a critical state, which is linked to how the market processes information, how it responds to exogenous inputs and how its structure changes. The considered data sets did not reveal a persistent critical state but rather oscillations between order and disorder.
In this framework, we also showed that the collective modes are mostly dominated by pairwise co-movements and that univariate models are not good candidates for modeling crashes. The analysis also suggests a genuine adaptive process, since both the maximum variance of the log-likelihood and the accuracy of the predictive scheme vary through time. This approach may provide clues about crash precursors and shed light on how a shock spreads through a financial network and whether it will lead to a crash. The natural continuation of the present work would be the study of such a mechanism.
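The inverse, pairwise maximum-entropy approach described above can be sketched for a handful of assets, where the partition function is computable by exact enumeration. Everything below (function name, gradient-ascent parameters, the ±1 encoding of daily co-movements) is an illustrative assumption, not the author's code.

```python
import itertools
import math

def fit_pairwise_maxent(samples, steps=2000, lr=0.1):
    """Fit fields h_i and couplings J_ij of the pairwise maximum entropy
    model P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), s_i = +/-1,
    by gradient ascent on the exact log-likelihood.  Exact enumeration of
    the 2^n states is feasible only for a handful of assets; this is a
    toy stand-in for the inverse approach applied to real market data."""
    n = len(samples[0])
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(samples)
    # empirical first and second moments: the constraints to match
    m_emp = [sum(s[i] for s in samples) / m for i in range(n)]
    c_emp = {p: sum(s[p[0]] * s[p[1]] for s in samples) / m for p in pairs}
    h = [0.0] * n
    J = {p: 0.0 for p in pairs}
    states = list(itertools.product((-1, 1), repeat=n))
    for _ in range(steps):
        # model moments by exact enumeration of the Boltzmann weights
        w = [math.exp(sum(h[i] * s[i] for i in range(n))
                      + sum(J[p] * s[p[0]] * s[p[1]] for p in pairs))
             for s in states]
        z = sum(w)
        m_mod = [sum(wk * s[i] for wk, s in zip(w, states)) / z
                 for i in range(n)]
        c_mod = {p: sum(wk * s[p[0]] * s[p[1]] for wk, s in zip(w, states)) / z
                 for p in pairs}
        # moment-matching gradient step
        for i in range(n):
            h[i] += lr * (m_emp[i] - m_mod[i])
        for p in pairs:
            J[p] += lr * (c_emp[p] - c_mod[p])
    return h, J
```

Fed binarized returns in which two assets always move together, the fit recovers a strong positive coupling between them and leaves the unrelated pair near zero, which is the sense in which the model "describes the structure" without imposing it a priori.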
Doctorate in Economics and Management Sciences
Gilbride, Timothy J. « Models for heterogeneous variable selection ». Columbus, Ohio : Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1083591017.
Title from first page of PDF file. Document formatted into pages; contains xii, 138 p.; also includes graphics. Includes abstract and vita. Advisor: Greg M. Allenby, Dept. of Business Administration. Includes bibliographical references (p. 134-138).
Strid, Ingvar. « Computational methods for Bayesian inference in macroeconomic models ». Doctoral thesis, Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-1118.
Kolb, Jakob J. « Heuristic Decision Making in World Earth Models ». Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/22147.
The trajectory of the Earth system in the Anthropocene is governed by an increasing entanglement of processes on a physical and ecological as well as on a socio-economic level. If models are to be useful as decision support tools in this environment, they ought to acknowledge these complex feedback loops as well as the inherently emergent and heterogeneous qualities of societal dynamics. This thesis improves the capability of social-ecological and socio-economic models to picture emergent social phenomena, and uses and extends techniques from dynamical systems theory and statistical physics for their analysis. It proposes to model humans as boundedly rational decision makers who use (social) learning to acquire decision heuristics that function well in a given environment. This is illustrated in a two-sector economic model in which one sector uses a fossil resource for economic production and households make their investment decisions in the previously described way. In the model economy, individual decision making and social dynamics cannot limit CO2 emissions to a level that prevents global warming above 1.5 °C. However, a combination of collective action and coordinated public policy actually can. A follow-up study analyzes social learning of individual savings rates in a one-sector investment economy. Here, the aggregate savings rate in the economy approaches that of an intertemporally optimizing omniscient social planner if the social interaction rate is sufficiently low. Simultaneously, a decreasing interaction rate leads to emergent inequality in the model in the form of a sudden transition from a unimodal to a strongly bimodal distribution of wealth among households. Finally, this thesis proposes a combination of different moment closure techniques that can be used to derive analytic approximations for such networked heterogeneous agent models, where interactions between agents occur on an individual as well as on an aggregated level.
Zhang, Yanwei. « A hierarchical Bayesian approach to model spatially correlated binary data with applications to dental research ». Diss., Connect to online resource - MSU authorized users, 2008.
Yao, Jiawei. « Factor models : Testing and forecasting ». Thesis, Princeton University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3682786.
This dissertation focuses on two aspects of factor models: testing and forecasting. For testing, we investigate a more general high-dimensional testing problem, with an emphasis on panel data models. Specifically, we propose a novel technique to boost the power of testing a high-dimensional vector against sparse alternatives. Existing tests based on quadratic forms such as the Wald statistic often suffer from low power, whereas more powerful tests such as thresholding and extreme-value tests require either stringent conditions or the bootstrap to derive the null distribution, and often suffer from size distortions. Based on a screening technique, we introduce a "power enhancement component", which is zero under the null hypothesis with high probability but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. As a byproduct, the power enhancement component also consistently identifies the elements that violate the null hypothesis.
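The screening construction can be sketched in a few lines. This is a toy rendering under our own assumptions (standardized estimates that are N(0,1) under the null, and an illustrative threshold rate), not the dissertation's derived statistic.

```python
import math

def power_enhanced_stat(z):
    """Toy power-enhancement construction for testing H0: theta = 0
    against sparse alternatives, given standardized estimates z_j that
    are approximately N(0,1) under H0.

    The screened component j0 is 0 with high probability under H0 but
    diverges under sparse alternatives, so adding it to a pivotal
    quadratic form boosts power without distorting the null
    distribution.  The threshold rate below is an illustrative choice,
    not the one derived in the dissertation."""
    p = len(z)
    delta = math.sqrt(2.0 * math.log(p) * math.log(math.log(p + 2)))
    # indices that survive screening also flag the violating coordinates
    selected = [j for j, v in enumerate(z) if abs(v) > delta]
    j0 = math.sqrt(p) * sum(z[j] ** 2 for j in selected)
    wald = sum(v * v for v in z)  # asymptotically pivotal part
    return j0 + wald, selected
```

Under a null-like vector the screened set is empty and the statistic reduces to the quadratic form; a single large coordinate is both flagged and rewarded with a diverging contribution.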
Next, we consider forecasting a single time series using many predictors when nonlinearity is present. We develop a new methodology called sufficient forecasting, by connecting sliced inverse regression with factor models. The sufficient forecasting correctly estimates projections of the underlying factors and provides multiple predictive indices for further investigation. We derive asymptotic results for the estimate of the central space spanned by these projection directions. Our method allows the number of predictors to be larger than the sample size, and therefore extends the applicability of inverse regression. Numerical experiments demonstrate that the proposed method improves upon a linear forecasting model. Our results are further illustrated in an empirical study of macroeconomic variables, where sufficient forecasting is found to deliver additional predictive power over conventional methods.
Incarbone, Giuseppe. « Statistical algorithms for Cluster Weighted Models ». Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1383.
Kleppertknoop, Lily. « "Here Stands a High Bred Horse" : A Theory of Economics and Horse Breeding in Colonial Virginia, 1750-1780 ; a Statistical Model ». W&M ScholarWorks, 2013. https://scholarworks.wm.edu/etd/1539626711.
Bun, Maurice Josephus Gerardus. « Accurate statistical analysis in dynamic panel data models ». [Amsterdam : Amsterdam : Thela Thesis] ; Universiteit van Amsterdam [Host], 2001. http://dare.uva.nl/document/57690.
Tuzun, Tayfun. « Applying the statistical market value accounting model to time-series data for individual firms ». Connect to resource, 1992. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1261419575.
Cox, Gregory Fletcher. « Advances in Weak Identification and Robust Inference for Generically Identified Models ». Thesis, Yale University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10633240.
This dissertation establishes tools for valid inference in models that are only generically identified, with a special focus on factor models.
Chapter one considers inference for models under a general form of identification failure, by studying microeconometric applications of factor models. Factor models postulate unobserved variables (factors) that explain the covariation between observed variables. For example, school quality can be modeled as a common factor to a variety of school characteristics. Observed variables depend on factors linearly with coefficients that are called factor loadings. Identification in factor models is determined by a rank condition on the factor loadings. The rank condition guarantees that the observed variables are sufficiently related to the factors that the parameters in the distribution of the factors can be identified. When the rank condition fails, for example when the observed school characteristics are weakly related to school quality, the asymptotic distribution of test statistics is nonstandard so that chi-squared critical values no longer control size.
Calculating new critical values that do control size requires characterizing the asymptotic distribution of the test statistic along sequences of parameters that converge to points of rank condition failure. This paper presents new theorems for this characterization which overcome two technical difficulties: (1) non-differentiability of the boundary of the identified set and (2) degeneracy in the limit stochastic process for the objective function. These difficulties arise in factor models, as well as in a wider class of generically identified models, which these theorems cover. Non-differentiability of the boundary of the identified set is solved by squeezing the distribution of the estimator between a nonsmooth, fixed boundary and a smooth, drifting boundary. Degeneracy in the limit stochastic process is solved by restandardizing the objective function to a higher order so that the resulting limit satisfies a unique minimum condition. Robust critical values, calculated by taking the supremum over quantiles of the asymptotic distributions of the test statistic, result in a valid robust inference procedure.
Chapter one demonstrates the robust inference procedure in two examples. In the first example, there is only one factor, for which the factor loadings may be zero or close to zero. This simple example highlights the aforementioned important theoretical difficulties. For the second example, Cunha, Heckman, and Schennach (2010), as well as other papers in the literature, use a factor model to estimate the production of skills in children as a function of parental investments. Their empirical specification includes two types of skills, cognitive and noncognitive, but only one type of parental investment out of a concern for identification failure. We formulate and estimate a factor model with two types of parental investment, which may not be identified because of rank condition failure. We find that for one of the four age categories, 6-9 year olds, the factors are close to being unidentified, and therefore standard inference results are misleading. For all other age categories, the distribution of the factors is identified.
Chapter two provides a higher-order stochastic expansion of M- and Z-estimators. Stochastic expansions are useful for a wide variety of stochastic problems, including bootstrap refinements, Edgeworth expansions, and identification failure. Without identification, the higher-order terms in the expansion may become relevant for the limit theory. Stochastic expansions above fourth order are rarely used because the expressions in the expansion become intractable. For M- and Z-estimators, a wide class of estimators that maximize an objective function or set an objective function to zero, this paper provides smoothness conditions and a closed-form expression for a stochastic expansion up to an arbitrary order.
Chapter three provides sufficient conditions for a random function to have a global unique minimum almost surely. Many important statistical objects can be defined as the global minimizing set of a function, including identified sets, extremum estimators, and the limit of a sequence of random variables (due to the argmax theorem). Whether this minimum is achieved at a unique point or a larger set is often practically and/or theoretically relevant. This paper considers a class of functions indexed by a vector of parameters and provides simple transversality-type conditions which are sufficient for the minimizing set to be a unique point for almost every function.
Tindall, Nathaniel W. « Analyses of sustainability goals : Applying statistical models to socio-economic and environmental data ». Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54259.
Wong, Chun-mei May, and 王春美. « The statistical tests on mean reversion properties in financial markets ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31211975.
Barone, Anthony J. « State Level Earned Income Tax Credit’s Effects on Race and Age : An Effective Poverty Reduction Policy ». Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/cmc_theses/771.
Zhang, Yonghui. « Three essays on large panel data models with cross-sectional dependence ». Thesis, Singapore Management University (Singapore), 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3601351.
My dissertation consists of three essays which contribute new theoretical results to large panel data models with cross-sectional dependence. These essays try to answer or partially answer some prominent questions, such as how to detect the presence of cross-sectional dependence and how to capture the latent structure of cross-sectional dependence and estimate parameters efficiently by removing its effects.
Chapter 2 introduces a nonparametric test for cross-sectional contemporaneous dependence in large dimensional panel data models based on the squared distance between the pair-wise joint density and the product of the marginals. The test can be applied to either raw observable data or residuals from local polynomial time series regressions for each individual to estimate the joint and marginal probability density functions of the error terms. In either case, we establish the asymptotic normality of our test statistic under the null hypothesis by permitting both the cross section dimension n and the time series dimension T to pass to infinity simultaneously and relying upon the Hoeffding decomposition of a two-fold U-statistic. We also establish the consistency of our test. A small set of Monte Carlo simulations is conducted to evaluate the finite sample performance of our test and compare it with that of Pesaran (2004) and Chen, Gao, and Li (2009).
Chapter 3 analyzes nonparametric dynamic panel data models with interactive fixed effects, where the predetermined regressors enter the models nonparametrically and the common factors enter the models linearly but with individual-specific factor loadings. We consider the issues of estimation and specification testing when both the cross-sectional dimension N and the time dimension T are large. We propose sieve estimation for the nonparametric function by extending Bai's (2009) principal component analysis (PCA) to our nonparametric framework. Following Moon and Weidner's (2010, 2012) asymptotic expansion of the Gaussian quasi-log-likelihood function, we derive the convergence rate for the sieve estimator and establish its asymptotic normality. The sources of asymptotic biases are discussed and a consistent bias-corrected estimator is provided. We also propose a consistent specification test for the linearity of the nonparametric functional form by comparing the linear and sieve estimators. We establish the asymptotic distributions of the test statistic under both the null hypothesis and a sequence of Pitman local alternatives.
To improve the finite sample performance of the test, we also propose a bootstrap procedure to obtain the bootstrap p-values and justify its validity. Monte Carlo simulations are conducted to investigate the finite sample performance of our estimator and test. We apply our model to an economic growth data set to study the relationship between capital accumulation and real GDP growth rate.
Chapter 4 proposes a nonparametric test for common trends in semiparametric panel data models with fixed effects based on a measure of nonparametric goodness-of-fit (R2). We first estimate the model under the null hypothesis of common trends by the method of profile least squares, and obtain the augmented residual which consistently estimates the sum of the fixed effect and the disturbance under the null.
Then we run a local linear regression of the augmented residuals on a time trend and calculate the nonparametric R2 for each cross-section unit. The proposed test statistic is obtained by averaging all cross-sectional nonparametric R2's, and is close to 0 under the null while deviating from 0 under the alternative. We show that after appropriate standardization the test statistic is asymptotically normally distributed under both the null hypothesis and a sequence of Pitman local alternatives. We prove test consistency and propose a bootstrap procedure to obtain p-values. Monte Carlo simulations indicate that the test performs well in finite samples. Empirical applications are conducted exploring the commonality of spatial trends in UK climate change data and idiosyncratic trends in OECD real GDP growth data. Both applications reveal the fragility of the widely adopted common trends assumption.
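The building block of this test, a nonparametric R2 from a local-linear regression on a time trend, can be sketched as follows. The Gaussian kernel and fixed bandwidth are our illustrative choices, not the data-driven ones a real implementation would use.

```python
import math

def local_linear_r2(y, bandwidth=0.2):
    """Nonparametric R^2 from a local-linear regression of y on a scaled
    time trend t in [0, 1]: near 0 when y carries no trend component,
    near 1 when a smooth trend dominates.  Gaussian kernel; the fixed
    bandwidth is an ad-hoc illustrative choice."""
    n = len(y)
    t = [i / (n - 1) for i in range(n)]
    fitted = []
    for x0 in t:
        # weighted least squares of y on (1, t - x0); keep the intercept
        s0 = s1 = s2 = m0 = m1 = 0.0
        for ti, yi in zip(t, y):
            k = math.exp(-0.5 * ((ti - x0) / bandwidth) ** 2)
            d = ti - x0
            s0 += k
            s1 += k * d
            s2 += k * d * d
            m0 += k * yi
            m1 += k * d * yi
        fitted.append((s2 * m0 - s1 * m1) / (s0 * s2 - s1 * s1))
    ybar = sum(y) / n
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - sse / sst
```

A pure trend yields an R2 near one (local-linear smoothers reproduce linear functions exactly), while trendless alternating noise is smoothed toward zero and yields a small R2, which is exactly the separation the averaged test statistic exploits.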
Rui, Xiongwen. « Essays on the Solution, Estimation, and Analysis of Dynamic Nonlinear Economic Models / ». The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487928649987711.
Hwang, Jungbin. « Fixed smoothing asymptotic theory in over-identified econometric models in the presence of time-series and clustered dependence ». Thesis, University of California, San Diego, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10128431.
In the widely used over-identified econometric model, the two-step Generalized Method of Moments (GMM) estimator and inference, first suggested by Hansen (1982), require the estimation of an optimal weighting matrix at the initial stage. For time series data and clustered dependent data, which are our focus here, the optimal weighting matrix is usually referred to as the long run variance (LRV) of the (scaled) sample moment conditions. To maintain generality and avoid misspecification, nowadays we do not model serial dependence and within-cluster dependence parametrically but use the heteroscedasticity and autocorrelation robust (HAR) variance estimator in standard practice. These estimators are nonparametric in nature with high variation in finite samples, but the conventional increasing smoothing asymptotics, the so-called small-bandwidth asymptotics, completely ignores the finite sample variation of the estimated GMM weighting matrix. As a consequence, empirical researchers are often in danger of making unreliable inferences and false assessments of the (efficient) two-step GMM methods. Motivated by this issue, my dissertation consists of three papers which explore the efficiency and approximation issues in two-step GMM methods by developing new, more accurate, and easy-to-use approximations to the GMM weighting matrix.
The first chapter, "Simple and Trustworthy Cluster-Robust GMM Inference" explores new asymptotic theory for two-step GMM estimation and inference in the presence of clustered dependence. Clustering is a common phenomenon for many cross-sectional and panel data sets in applied economics, where individuals in the same cluster will be interdependent while those from different clusters are more likely to be independent. The core of new approximation scheme here is that we treat the number of clusters G fixed as the sample size increases. Under the new fixed-G asymptotics, the centered two-step GMM estimator and two continuously-updating estimators have the same asymptotic mixed normal distribution. Also, the t statistic, J statistic, as well as the trinity of two-step GMM statistics (QLR, LM and Wald) are all asymptotically pivotal, and each can be modified to have an asymptotic standard F distribution or t distribution. We also suggest a finite sample variance correction further to improve the accuracy of the F or t approximation. Our proposed asymptotic F and t tests are very appealing to practitioners, as test statistics are simple modifications of the usual test statistics, and the F or t critical values are readily available from standard statistical tables. We also apply our methods to an empirical study on the causal effect of access to domestic and international markets on household consumption in rural China.
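For a single parameter (a mean), the fixed-G logic reduces to a few lines: build a t-statistic from the G cluster means and compare it with t(G-1) critical values rather than standard normal ones. This is a toy distillation of the chapter's message under our own simplifications (equal cluster weights, one parameter), not its GMM-level procedure.

```python
import math

def cluster_robust_tstat(clusters):
    """t-statistic for H0: mean = 0 built from G cluster means.

    Under fixed-G asymptotics (the number of clusters G held constant as
    cluster sizes grow), this statistic is asymptotically t(G-1), so the
    readily tabulated t critical values apply; the conventional normal
    approximation understates the variability of the estimated variance.
    Equal cluster weights keep the sketch minimal."""
    G = len(clusters)
    means = [sum(c) / len(c) for c in clusters]
    grand = sum(means) / G
    # between-cluster variance of the cluster means
    s2 = sum((mu - grand) ** 2 for mu in means) / (G - 1)
    return grand / math.sqrt(s2 / G)
```

With G = 3 clusters the relevant 5% two-sided critical value is t(2) ≈ 4.30, far larger than the normal 1.96, which is precisely the correction that makes small-G inference trustworthy.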
The second paper, "Should we go one step further? An Accurate Comparison of One-step and Two-step procedures in a Generalized Method of Moments Framework" (coauthored with Yixiao Sun), focuses on the GMM procedure in a time-series setting and provides an accurate comparison of one-step and two-step GMM procedures in a fixed-smoothing asymptotics framework. The theory developed in this paper shows that the two-step procedure outperforms the one-step method only when the benefit of using the optimal weighting matrix outweighs the cost of estimating it. We also provide clear guidance on how to choose a more efficient (or powerful) GMM estimator (or test) in practice.
While our fixed smoothing asymptotic theory accurately describes the sampling distribution of the two-step GMM test statistic, the limiting distribution of conventional GMM statistics is non-standard, and its critical values need to be simulated or approximated by standard distributions in practice. In the last chapter, "Asymptotic F and t Tests in an Efficient GMM Setting" (coauthored with Yixiao Sun), we propose a simple and easy-to-implement modification to the trinity (QLR, LM, and Wald) of two-step GMM statistics and show that the modified test statistics are all asymptotically F distributed under the fixed-smoothing asymptotics. The modification is multiplicative and only involves the J statistic for testing over-identifying restrictions. In fact, what we propose can be regarded as a multiplicative variance correction for two-step GMM statistics that takes into account the additional asymptotic variance term under the fixed-smoothing asymptotics. The results in this paper can be immediately generalized to the GMM setting in the presence of clustered dependence.
Facchinetti, Alessandro <1991>. « Likelihood free methods for inference on complex models ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/17017.
Witte, Hugh Douglas. « Markov chain Monte Carlo and data augmentation methods for continuous-time stochastic volatility models ». Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/283976.
Koliadenko, Pavlo <1998>. « Time series forecasting using hybrid ARIMA and ANN models ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/19992.
Rabin, Gregory S. « A reduced-form statistical climate model suitable for coupling with economic emissions projections ». Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41672.
Includes bibliographical references (p. 36-37).
In this work, we use models based on past data and scientific analysis to determine possible future states of the environment. We attempt to improve the equations for temperature and greenhouse gas concentration used in conjunction with the MIT Emissions Prediction and Policy Analysis (EPPA) model or for independent climate analysis based on results from the more complex MIT Integrated Global Systems Model (IGSM). The functions we generate should allow a software system to approximate the environmental variables from the policy inputs in a matter of seconds. At the same time, the estimates should be close enough to the exact values given by the IGSM to be considered meaningful.
by Gregory S. Rabin.
M.Eng.
Cesale, Giancarlo. « A novel approach to forecasting from non scalar DCC models ». Doctoral thesis, Universita degli studi di Salerno, 2016. http://hdl.handle.net/10556/2197.
Estimating and predicting joint second-order moments of asset portfolios is of huge importance in many practical applications and, hence, modeling volatility has become a crucial issue in financial econometrics. In this context multivariate generalized autoregressive conditional heteroscedasticity (M-GARCH) models are widely used, especially in their versions for the modeling of conditional correlation matrices (DCC-GARCH). Nevertheless, these models typically suffer from the so-called curse of dimensionality: the number of needed parameters rapidly increases when the portfolio dimension gets large, making their use practically infeasible. For these reasons, many simplified versions of the original specifications have been developed, often based upon restrictive a priori assumptions, in order to achieve the best tradeoff between flexibility and numerical feasibility. However, these strategies may in general entail a certain loss of information because of the imposed simplifications. After a description of the general framework of M-GARCH models and a discussion of some specific topics relative to second-order multivariate moments of large dimension, the main contribution of this thesis is to propose a new method for forecasting conditional correlation matrices in high-dimensional problems which is able to exploit more information without imposing any a priori structure and without incurring overwhelming calculations. The performance of the proposed method is evaluated and compared to alternative predictors through applications to real data. [edited by author]
XIV n.s.
He, Wei. « Model selection for cointegrated relationships in small samples ». Thesis, Nelson Mandela Metropolitan University, 2008. http://hdl.handle.net/10948/971.
Park, Seoungbyung. « Factor Based Statistical Arbitrage in the U.S. Equity Market with a Model Breakdown Detection Process ». Thesis, Marquette University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10280168.
Many researchers have studied different statistical arbitrage strategies that aim to provide a steady stream of returns unrelated to market conditions. Among these, factor-based mean-reverting strategies have been popular and widely covered. This thesis aims to add value by evaluating the generalized pairs trading strategy and suggesting enhancements to improve its out-of-sample performance. The enhanced strategy generated a daily Sharpe ratio of 6.07% in the out-of-sample period from January 2013 through October 2016, with a correlation of -.03 versus the S&P 500. During the same period, the S&P 500 generated a Sharpe ratio of 6.03%.
This thesis is differentiated from previous relevant studies in three ways. First, the factor selection process in previous statistical arbitrage studies has often been unclear or rather subjective. Second, most of the literature focuses on in-sample results rather than the out-of-sample results of the strategies, which are what practitioners are mainly interested in. Third, by implementing a hidden Markov model, it aims to detect regime changes and so improve the timing of trades.
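A generic mean-reversion rule of the kind evaluated in such pairs trading studies can be sketched as a rolling z-score on a factor-neutral spread. The window and thresholds below are illustrative defaults, not values calibrated in the thesis.

```python
def zscore_signal(spread, window=20, entry=2.0, exit_band=0.5):
    """Position series (-1 short, 0 flat, +1 long) from a rolling z-score
    of a mean-reverting spread: open a position against a large deviation,
    close it once the spread reverts inside the exit band."""
    position, out = 0, []
    for t in range(len(spread)):
        w = spread[max(0, t - window + 1): t + 1]
        mu = sum(w) / len(w)
        sd = (sum((x - mu) ** 2 for x in w) / len(w)) ** 0.5 or 1.0
        z = (spread[t] - mu) / sd
        if position == 0 and z > entry:
            position = -1          # spread rich: short it
        elif position == 0 and z < -entry:
            position = 1           # spread cheap: long it
        elif position != 0 and abs(z) < exit_band:
            position = 0           # reverted: close out
        out.append(position)
    return out
```

A regime-detection layer of the kind the thesis proposes would sit on top of this rule, suspending entries when the estimated mean-reversion model appears to have broken down.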
Lu, Zhen Cang. « Price forecasting models in online flower shop implementation ». Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691395.
Kochi, Ikuho. « Essays on the Value of a Statistical Life ». unrestricted, 2007. http://etd.gsu.edu/theses/available/etd-04302007-172639/.
Title from file title page. Laura O. Taylor, committee chair; H. Spencer Banzhaf, Susan K. Laury, Mary Beth Walker, Kenneth E. McConnell, committee members. Electronic text (177 p. : ill.) : digital, PDF file. Description based on contents viewed Jan. 7, 2008. Includes bibliographical references (p. 172-176).
Doolan, Mark Bernard. « Evaluating multivariate volatility forecasts : how effective are statistical and economic loss functions ? » Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/45750/1/Mark_Doolan_Thesis.pdf.
McCloud, Nadine. « Model misspecification theory and applications / ». Diss., Online access via UMI:, 2008.
Donno, Annalisa <1983>. « Multidimensional Measures of Firm Competitiveness : a Model-Based Approach ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5173/1/donno_annalisa_tesi.pdf.
Mitchell, Zane Windsor Jr. « A Statistical Analysis Of Construction Equipment Repair Costs Using Field Data & The Cumulative Cost Model ». Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30468.
Ph. D.
Pouliot, William. « Two applications of U-Statistic type processes to detecting failures in risk models and structural breaks in linear regression models ». Thesis, City University London, 2010. http://openaccess.city.ac.uk/1166/.
Thompson, Mery Helena. « Optimum experimental designs for models with a skewed error distribution : with an application to stochastic frontier models ». Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/236/.
Kemp, Gordon C. R. « Asymptotic expansion approximations and the distributions of various test statistics in dynamic econometric models ». Thesis, University of Warwick, 1987. http://wrap.warwick.ac.uk/99431/.
Metzig, Cornelia. « A Model for a complex economic system ». Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENS038/document.
The thesis is in the field of complex systems, applied to an economic system. An agent-based model is proposed to model the production cycle. It comprises firms, workers, and a bank, and respects stock-flow consistency. Its central assumption is that firms plan their production based on an expected profit margin. A simple scenario of the model, where the expected profit margin is the same for all firms, has been analyzed in the context of simple stochastic growth models. Results are a firm size distribution close to a power law, a tent-shaped growth-rate distribution, and a growth-rate variance that scales with firm size. These results are close to empirically found stylized facts. In a more comprehensive version, the model contains additional features: heterogeneous profit margins, as well as interest payments and the possibility of bankruptcy. This relates the model to agent-based macroeconomic models. The extensions are described theoretically with replicator dynamics. New results are the age distribution of active firms, their profit rate distribution, debt distribution, bankruptcy statistics, as well as typical life cycles of firms, all of which are qualitatively in agreement with studies of firm databases from various countries. The proposed model yields promising results by respecting the principle that jointly found results may be generated by the same process, or by several compatible ones.
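The stylized facts listed in this abstract (heavy-tailed firm sizes under multiplicative growth) can be illustrated, though not reproduced, by a generic Kesten-type simulation: random proportional growth with a reflecting lower barrier is one classic route to heavy-tailed size distributions. This is a hedged sketch under our own parameter choices, not the thesis model:

```python
# Multiplicative (Gibrat-style) firm growth with a reflecting lower barrier.
# Slightly negative log-drift plus reflection is the Kesten mechanism that
# produces power-law-like upper tails in the stationary size distribution.
import random

random.seed(0)
N, T, BARRIER = 1000, 500, 1.0     # firms, time steps, minimum viable size
sizes = [1.0] * N
for _ in range(T):
    # each firm's size is multiplied by an i.i.d. lognormal growth factor
    sizes = [max(BARRIER, s * random.lognormvariate(-0.01, 0.2))
             for s in sizes]

print(sorted(sizes)[-5:])           # the few largest firms dominate
```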
Pandolfo, Silvia <1993>. « Analysis of the volatility of high-frequency data. The Realized Volatility and the HAR model ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/14840.
Papa, Bruno Del. « A study of social and economic evolution of human societies using methods of Statistical Mechanics and Information Theory ». Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-26092014-081449/.
In this dissertation, we use tools from statistical mechanics and information theory in applications to topics of significance to anthropology, the social sciences, and economics. We seek to develop mathematical and computational models, on empirical and theoretical grounds, to identify key points in the questions concerning the transition between egalitarian and hierarchical societies and the emergence of money in human societies. Anthropological data suggest a correlation between relative neocortex size and the average group size of primates, which are predominantly hierarchical, while recent theories suggest that social and evolutionary pressures altered individuals' cognitive capacity, enabling their social organization in other configurations. Based on these observations, we developed a mathematical model capable of incorporating hypotheses about the cognitive costs of social representations to explain the variation of social structures found in human societies. A Monte Carlo dynamics allows the construction of a phase diagram in which hierarchical, egalitarian, and intermediate regions can be identified. The parameters responsible for the transitions are cognitive capacity, the number of agents in the society, and social and ecological pressure. The model also allowed a modification of the dynamics to include a parameter representing the rate of information exchange between agents, which makes it possible to introduce correlations between the cognitive representations, suggesting the appearance of social asymmetries that ultimately result in hierarchy. The results obtained agree qualitatively with anthropological data when the variables are interpreted according to their social counterparts. The other model developed in this work concerns the emergence of a single exchange commodity, or money.
Predominant economic theories describe the emergence of money as the result of an evolution of barter economies. Critics, however, point to the lack of historical and anthropological evidence supporting this hypothesis, raising doubts about the mechanisms that led to the advent of money and the influence of the social configuration on this process. Recent studies suggest that money may behave as a perceptual drug, which has led to new theories that aim to explain the monetization of societies. Through a computational model based on the earlier dynamics of hierarchy emergence, we seek to simulate this phenomenon through cognitive representations of economic networks, which encode whether or not the possibility of exchange between two commodities is recognized. Similar formalisms have been used before, but without discussing the influence of the social configuration on the results. The model developed in this dissertation employs the concept of cognitive representations and again assigns costs to them. The resulting dynamics makes it possible to analyze how information exchange depends on the agents' social configuration. The results show that hierarchical networks, such as star and scale-free networks, induce a higher probability of the emergence of money than the others. Taken together, the two models suggest that phase transitions in social organization are important for the study of the emergence of money and therefore cannot be ignored in future social and economic modeling.
Di Caro, Paolo. « Recessions, Recoveries and Regional Resilience : an econometric perspective ». Doctoral thesis, Università di Catania, 2014. http://hdl.handle.net/10761/1540.
Busato, Erick Andrade. « Função de acoplamento t-Student assimetrica : modelagem de dependencia assimetrica ». [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/305857.
Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: The Skewed t-Student distribution family, constructed as a mean-variance mixture of the multivariate normal distribution with the Inverse-Gamma distribution, has many desirable flexibility properties for many asymmetry structures. These properties are explored by constructing copula functions with asymmetric dependence. In this work the properties and characteristics of the Skewed t-Student distribution and the construction of the respective copula function are studied, presenting the different dependence structures that the copula function generates, including tail-dependence asymmetry. Parameter estimation methods are presented for the copula, with applications up to the third dimension. This copula function is used to compose an ARMA-GARCH-Copula model with Skewed t-Student marginal distributions that is fitted to log-returns of Petroleum and Gasoline prices and log-returns of the AMEX Oil Index, emphasizing the fit in the tails of the return distributions. The model is compared, by means of Value at Risk (VaR) and Akaike's Information Criterion, along with other goodness-of-fit measures, with models based on the symmetric t-Student copula.
Master's
Master in Statistics
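For reference, the symmetric t-Student copula that serves as the benchmark above has identical upper and lower tail-dependence coefficients, given by a standard closed form (a textbook result, not specific to this thesis):

```latex
% Tail-dependence coefficients of the bivariate symmetric t copula with
% correlation parameter \rho and \nu degrees of freedom:
\lambda_U = \lambda_L
  = 2\, t_{\nu+1}\!\left(-\sqrt{\frac{(\nu+1)(1-\rho)}{1+\rho}}\right),
```

where t_{ν+1} denotes the univariate Student-t cumulative distribution function with ν+1 degrees of freedom. The skewed t copula relaxes the equality λ_U = λ_L, which is precisely the tail asymmetry the thesis exploits for the price series.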
Azari, Soufiani Hossein. « Revisiting Random Utility Models ». Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11605.
Engineering and Applied Sciences
Xue, Jiangbo. « A structural forecasting model for the Chinese macroeconomy / ». View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECON%202009%20XUE.