To see other types of publications on this topic, follow this link: Economics – Statistical models.

Theses on the topic « Economics – Statistical models »

Consult the top 50 theses for your research on the topic « Economics – Statistical models ».

For each source in the reference list you can generate a citation in your preferred style: APA, MLA, Harvard, Vancouver, Chicago, etc. You can also download the full text of the publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses from a wide range of disciplines and organize your bibliography correctly.

1

Tabri, Rami. « Empirical likelihood and constrained statistical inference for some moment inequality models ». Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119408.

Full text
Abstract:
The principal purpose of this thesis is to extend empirical likelihood (EL) based procedures to some statistical models defined by unconditional moment inequalities. We develop EL procedures for two such models in the thesis. In the first type of model, the underlying probability distribution is the (infinite-dimensional) parameter of interest, and is defined by a continuum of moment inequalities indexed by a general class of estimating functions. We develop the EL estimation theory using a feasible-value-function approach, and demonstrate the uniform consistency of the estimator over the set of underlying distributions in the model. Furthermore, for large sample sizes, we prove that it has smaller mean integrated squared error than the estimator that ignores the information in the moment inequality conditions. We also develop computational algorithms for this estimator, and demonstrate its properties in Monte Carlo simulation experiments for the case of infinite-order stochastic dominance. The second type of moment inequality model concerns stochastic dominance (SD) orderings between two income distributions. We develop asymptotic and bootstrap empirical likelihood-ratio tests for the null hypothesis that a given unidirectional strong SD ordering between the income distributions holds. These distributions are discrete with finite support, and, therefore, the SD conditions are framed as sets of linear inequality constraints on the vector of SD curve ordinates. Testing for strong SD requires that we consider as the null model one that allows at most one pair of these ordinates to be equal at an interior point of their support. Finally, we study the performance of these tests in Monte Carlo simulations.
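As a hedged illustration of the constrained problem that empirical likelihood poses under a moment inequality, the sketch below maximizes the empirical log-likelihood subject to a single inequality constraint. The simulated data, the estimating function, and the solver choice are assumptions made for illustration only, not the estimation theory or algorithms developed in the thesis.

```python
# Minimal empirical-likelihood sketch under one moment inequality E[g(X)] >= 0:
# maximise sum_i log p_i over probability weights p_i subject to the constraint.
# Data, g, and solver settings are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=0.2, scale=1.0, size=50)      # hypothetical sample
g = x                                            # estimating function g(x) = x

n = len(x)
constraints = [
    {"type": "eq",   "fun": lambda p: p.sum() - 1.0},   # probabilities sum to one
    {"type": "ineq", "fun": lambda p: p @ g},           # sum_i p_i g(x_i) >= 0
]
res = minimize(lambda p: -np.log(p).sum(),
               x0=np.full(n, 1.0 / n),
               bounds=[(1e-9, 1.0)] * n,
               constraints=constraints,
               method="SLSQP")
p_hat = res.x   # EL weights; they equal 1/n whenever the inequality is slack at 1/n
```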
2

Chow, Fung-kiu, et 鄒鳳嬌. « Modeling the minority-seeking behavior in complex adaptive systems ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29367487.

Full text
3

Cutugno, Carmen. « Statistical models for the corporate financial distress prediction ». Thesis, Università degli Studi di Catania, 2011. http://hdl.handle.net/10761/283.

Full text
4

Grayson, James M. (James Morris). « Economic Statistical Design of Inverse Gaussian Distribution Control Charts ». Thesis, University of North Texas, 1990. https://digital.library.unt.edu/ark:/67531/metadc332397/.

Full text
Abstract:
Statistical quality control (SQC) is one technique companies are using in the development of a Total Quality Management (TQM) culture. Shewhart control charts, a widely used SQC tool, rely on an underlying normal distribution of the data. Often data are skewed. The inverse Gaussian distribution is a probability distribution that is well-suited to handling skewed data. This analysis develops models and a set of tools usable by practitioners for the constrained economic statistical design of control charts for inverse Gaussian distribution process centrality and process dispersion. The use of this methodology is illustrated by the design of an x-bar chart and a V chart for an inverse Gaussian distributed process.
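As a rough sketch of the distributional calculation such a chart design rests on, the snippet below computes probability-based limits for an x-bar chart when the individual observations are inverse Gaussian. The parameter values and the 0.00135/0.99865 limits are illustrative assumptions, not the constrained economic statistical design procedure of the thesis.

```python
# Hedged sketch: probability limits for the subgroup mean of an inverse-Gaussian process.
# Uses the closure property that the mean of n iid IG(mu, lambda) draws is IG(mu, n*lambda).
import numpy as np
from scipy import stats

mu, lam = 2.0, 8.0                                  # hypothetical IG mean and shape
n = 5                                               # subgroup size
# scipy parametrisation: IG(mean m, shape l) == stats.invgauss(mu=m/l, scale=l)
xbar_dist = stats.invgauss(mu=mu / (n * lam), scale=n * lam)
lcl, ucl = xbar_dist.ppf([0.00135, 0.99865])        # 3-sigma-equivalent probability limits
print(f"x-bar chart: LCL={lcl:.3f}, CL={mu:.3f}, UCL={ucl:.3f}")
```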
5

Valero, Rafael. « Essays on Sparse-Grids and Statistical-Learning Methods in Economics ». Doctoral thesis, Universidad de Alicante, 2017. http://hdl.handle.net/10045/71368.

Full text
Abstract:
The thesis consists of three chapters. The first is a study of the implementation of sparse-grid methods for the analysis of high-dimensional economic models, carried out through novel applications of Smolyak's method with the aim of improving tractability and obtaining accurate results; the results show efficiency gains in the implementation of models with multiple agents. The second chapter introduces a new methodology for the evaluation of economic policies, called Synthetic Control with Statistical Learning, applied to two particular policies: a) the reduction of the number of working hours in Portugal in 1996, and b) the reduction of dismissal costs in Spain in 2010. The methodology works and stands as an alternative to previous methods; empirically, it shows that after the implementation of the policy there was an effective reduction of unemployment in Portugal and, in the Spanish case, an increase. The third chapter applies the methodology of the second chapter, among others, to evaluate the implementation of the Third European Road Safety Action Program; the results show that coordinating road safety at the European level has provided a complementary benefit, with an estimated reduction of between 13,900 and 19,400 road deaths across Europe in 2010.
6

Donnelly, James P. « NFL Betting Market : Using Adjusted Statistics to Test Market Efficiency and Build a Betting Model ». Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/cmc_theses/721.

Full text
Abstract:
The use of statistical analysis has been prevalent in the sports gambling industry for years. More recently, we have seen the emergence of "adjusted statistics", a more sophisticated way to examine each play and each result (further explanation below). And while adjusted statistics have become commonplace for professional and recreational bettors alike, little research has been done to justify their use. In this paper the effectiveness of this data is tested on the most heavily wagered sport in the world – the National Football League (NFL). The results are studied with two central questions in mind: Does the market account for the information provided by adjusted statistics? And, can this data be interpreted to create a profitable betting strategy? First, the Efficient Market Hypothesis is introduced and tested using these new variables. Then, a betting model is built and tested.
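For context, a minimal sketch of the kind of efficiency regression commonly used in this literature is shown below: regress the realized margin on the closing spread and jointly test an intercept of zero and a slope of one. The toy data frame and the exact restriction are illustrative assumptions, not the thesis's data set or final specification.

```python
# Hedged sketch of a classic betting-market efficiency test.
import pandas as pd
import statsmodels.formula.api as smf

games = pd.DataFrame({
    "margin": [7, -3, 10, -14, 3, 6, -7, 17],                 # home score minus away score (illustrative)
    "spread": [4.5, -2.5, 7.0, -10.0, 1.0, 3.0, -6.5, 13.5],  # closing line from the home team's perspective
})
fit = smf.ols("margin ~ spread", data=games).fit()
print(fit.params)
# Joint Wald test of efficiency: intercept = 0 and slope on the spread = 1
print(fit.wald_test("Intercept = 0, spread = 1", use_f=True))
```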
7

Putnam, Kyle J. « Two Essays in Financial Economics ». ScholarWorks@UNO, 2015. http://scholarworks.uno.edu/td/2010.

Full text
Abstract:
The following dissertation contains two distinct empirical essays which contribute to the overall field of Financial Economics. Chapter 1, entitled “The Determinants of Dynamic Dependence: An Analysis of Commodity Futures and Equity Markets,” examines the determinants of the dynamic equity-commodity return correlations between five commodity futures sub-sectors (energy, foods and fibers, grains and oilseeds, livestock, and precious metals) and a value-weighted equity market index (S&P 500). The study utilizes the traditional DCC model, as well as three time-varying copulas: (i) the normal copula, (ii) the Student's t copula, and (iii) the rotated Gumbel copula as dependence measures. Subsequently, the determinants of these various dependence measures are explored by analyzing several macroeconomic, financial, and speculation variables over different sample periods. Results indicate that the dynamic equity-commodity correlations for the energy, grains and oilseeds, precious metals, and to a lesser extent the foods and fibers, sub-sectors have become increasingly explainable by broad macroeconomic and financial market indicators, particularly after May 2003. Furthermore, these variables exhibit heterogeneous effects in terms of both magnitude and sign on each sub-sector's equity-commodity correlation structure. Interestingly, the effects of increased financial market speculation are found to be extremely varied among the five sub-sectors. These results have important implications for portfolio selection, price formation, and risk management. Chapter 2, entitled “US Community Bank Failure: An Empirical Investigation,” examines the declining, but still pivotal, role of the US community banking industry. The study utilizes survival analysis to determine which accounting and macroeconomic variables help to predict community bank failure. Federal Deposit Insurance Corporation and Federal Reserve Bank data are utilized to compare 452 community banks which failed between 2000 and 2013, relative to a sample of surviving community banks. Empirical results indicate that smaller banks are less likely to fail than their larger community bank counterparts. Additionally, several unique bank-specific indicators of failure emerge which relate to asset quality and liquidity, as well as earnings ratios. Moreover, results show that the use of the macroeconomic indicator of liquidity, the TED spread, provides a substantial improvement in predicting community bank failure.
8

Ekiz, Funda. « Cagan Type Rational Expectations Model on Time Scales with Their Applications to Economics ». TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1126.

Full text
Abstract:
Under rational expectations, people or economic agents make decisions about the future using the available information and their past experience. The first approach to the idea of rational expectations was given approximately fifty years ago by John F. Muth. Many models in economics have been studied using the rational expectations idea. The most familiar one among them is the rational expectations version of Cagan's hyperinflation model, where the expectation for tomorrow is formed using all the information available today. This model was reinterpreted by Thomas J. Sargent and Neil Wallace in 1973. After that time, many solution techniques were suggested to solve the Cagan type rational expectations (CTRE) model. Some economists such as Muth [13], Taylor [26] and Shiller [27] consider the solutions admitting an infinite moving-average representation. Blanchard and Kahn [28] find solutions by using a recursive procedure. A general characterization of the solution was obtained using the martingale approach by Broze, Gourieroux and Szafarz in [22], [23]. We choose to study the martingale solution of the CTRE model. This thesis comprises five chapters where the main aim is to study the CTRE model on isolated time scales. Most of the models studied in economics are continuous or discrete. Discrete models are often preferred by economists since they give more meaningful and accurate results. Discrete models only contain uniform time domains. Time scales calculus enables us to work on m-periodic time domains as well as non-periodic time domains. In the first chapter, we give basics of time scales calculus and stochastic calculus. The second chapter is a brief introduction to rational expectations and the CTRE model. Moreover, many other solution techniques are examined in this chapter. After we introduce the necessary background, in the third chapter we construct the CTRE model on isolated time scales. Then we give the general solution of this model in terms of martingales. We continue our work with defining the linear system and higher order CTRE on isolated time scales. We use the Putzer algorithm to solve the system of the CTRE model. Then, we examine the existence and uniqueness of the solution of the CTRE model. In the fourth chapter, we apply our solution algorithm developed in the previous chapter to models in finance and stochastic growth models in economics.
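For readers unfamiliar with the model, a standard discrete-time statement of the Cagan-type relation is given below; the notation is ours and is only meant to fix ideas, not to reproduce the thesis's time-scales formulation.

```latex
% Cagan money demand under rational expectations (discrete time, standard form)
m_t - p_t = -\alpha \,\bigl(E_t\, p_{t+1} - p_t\bigr), \qquad \alpha > 0,
\qquad\Longrightarrow\qquad
p_t = \frac{\alpha}{1+\alpha}\, E_t\, p_{t+1} + \frac{1}{1+\alpha}\, m_t .
```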
9

Wang, Junyi. « A Normal Truncated Skewed-Laplace Model in Stochastic Frontier Analysis ». TopSCHOLAR®, 2012. http://digitalcommons.wku.edu/theses/1177.

Full text
Abstract:
Stochastic frontier analysis is an exciting method of economic production modeling that is relevant to hospitals, stock markets, manufacturing factories, and services. In this paper, we create a new model using the normal distribution and the truncated skew-Laplace distribution, namely the normal-truncated skew-Laplace model. This is a generalized model of the normal-exponential case. Furthermore, we compute the true technical efficiency and estimated technical efficiency of the normal-truncated skew-Laplace model. Also, we compare the technical efficiencies of the normal-truncated skew-Laplace and normal-exponential models.
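A minimal simulation sketch of the composed-error idea behind stochastic frontier analysis, using the normal-exponential special case that the proposed model generalizes, is shown below; all numbers are illustrative assumptions rather than anything estimated in the thesis.

```python
# Composed-error frontier: y = frontier + noise v - one-sided inefficiency u.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(1, 10, n)
v = rng.normal(0.0, 0.2, n)          # symmetric noise
u = rng.exponential(0.3, n)          # one-sided inefficiency
y = 1.0 + 0.5 * np.log(x) + v - u    # log-output on and below the stochastic frontier
true_te = np.exp(-u)                  # technical efficiency of each producer
print("mean technical efficiency:", true_te.mean().round(3))
```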
10

Bury, Thomas. « Collective behaviours in the stock market : a maximum entropy approach ». Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209341.

Full text
Abstract:
Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definition of risk, etc.). This lack of any characteristic scale and such elaborated behaviours find their origin in the theory of complex systems. There are several mechanisms which generate scale invariance but maximum entropy models are able to explain both scale invariance and collective behaviours.

The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rules design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial systems. We considered the existence of a critical state which is linked to how the market processes information, how it responds to exogenous inputs and how its structure changes. The considered data sets did not reveal a persistent critical state but rather oscillations between order and disorder.

In this framework, we also showed that the collective modes are mostly dominated by pairwise co-movements and that univariate models are not good candidates to model crashes. The analysis also suggests a genuine adaptive process since both the maximum variance of the log-likelihood and the accuracy of the predictive scheme vary through time. This approach may provide some clues about crash precursors and may shed light on how a shock spreads in a financial network and whether it will lead to a crash. The natural continuation of the present work could be the study of such a mechanism.
Doctorate in Economics and Management Sciences
11

Gilbride, Timothy J. « Models for heterogeneous variable selection ». Columbus, Ohio : Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1083591017.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xii, 138 p.; also includes graphics. Includes abstract and vita. Advisor: Greg M. Allenby, Dept. of Business Administration. Includes bibliographical references (p. 134-138).
12

Strid, Ingvar. « Computational methods for Bayesian inference in macroeconomic models ». Doctoral thesis, Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-1118.

Full text
Abstract:
The New Macroeconometrics may succinctly be described as the application of Bayesian analysis to the class of macroeconomic models called Dynamic Stochastic General Equilibrium (DSGE) models. A prominent local example from this research area is the development and estimation of the RAMSES model, the main macroeconomic model in use at Sveriges Riksbank. Bayesian estimation of DSGE models is often computationally demanding. In this thesis, fast algorithms for Bayesian inference are developed and tested in the context of the state space model framework implied by DSGE models. The algorithms discussed in the thesis deal with evaluation of the DSGE model likelihood function and sampling from the posterior distribution. Block Kalman filter algorithms are suggested for likelihood evaluation in large linearised DSGE models. Parallel particle filter algorithms are presented for likelihood evaluation in nonlinearly approximated DSGE models. Prefetching random walk Metropolis algorithms and adaptive hybrid sampling algorithms are suggested for posterior sampling. The generality of the algorithms, however, suggests that they should also be of interest outside the realm of macroeconometrics.
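As a hedged illustration of the core computation, the sketch below evaluates the log-likelihood of a scalar linear Gaussian state-space model with a textbook Kalman filter; it is not the RAMSES model, nor the block or parallel algorithms developed in the thesis, and the placeholder data are simulated.

```python
# Prediction-error-decomposition log-likelihood for x_t = phi*x_{t-1} + w_t, y_t = x_t + e_t.
import numpy as np

def kalman_loglik(y, phi, sigma_state, sigma_obs):
    a, p = 0.0, sigma_state**2 / (1 - phi**2)   # stationary prior for the state
    ll = 0.0
    for obs in y:
        f = p + sigma_obs**2                     # prediction-error variance
        v = obs - a                              # prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f                                # Kalman gain
        a, p = a + k * v, p * (1 - k)            # update
        a, p = phi * a, phi**2 * p + sigma_state**2   # predict next period
    return ll

y = np.random.default_rng(2).normal(size=100)    # placeholder series
print(kalman_loglik(y, phi=0.9, sigma_state=0.5, sigma_obs=1.0))
```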
13

Kolb, Jakob J. « Heuristic Decision Making in World Earth Models ». Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/22147.

Full text
Abstract:
The trajectory of the Earth system in the Anthropocene is governed by an increasing entanglement of processes on a physical and ecological as well as on a socio-economic level. If models are to be useful as decision support tools in this environment, they ought to acknowledge these complex feedback loops as well as the inherently emergent and heterogeneous qualities of societal dynamics. This thesis improves the capability of social-ecological and socio-economic models to picture emergent social phenomena and uses and extends techniques from dynamical systems theory and statistical physics for their analysis. It proposes to model humans as boundedly rational decision makers that use (social) learning to acquire decision heuristics that function well in a given environment. This is illustrated in a two-sector economic model in which one sector uses a fossil resource for economic production and households make their investment decisions in the previously described way. In the model economy, individual decision making and social dynamics cannot limit CO2 emissions to a level that prevents global warming above 1.5°C. However, a combination of collective action and coordinated public policy actually can. A follow-up study analyzes social learning of individual savings rates in a one-sector investment economy. Here, the aggregate savings rate in the economy approaches that of an intertemporally optimizing omniscient social planner if the social interaction rate is sufficiently low. Simultaneously, a decreasing interaction rate leads to emergent inequality in the model in the form of a sudden transition from a unimodal to a strongly bimodal distribution of wealth among households. Finally, this thesis proposes a combination of different moment closure techniques that can be used to derive analytic approximations for such networked heterogeneous agent models where interactions between agents occur on an individual as well as on an aggregated level.
14

Zhang, Yanwei. « A hierarchical Bayesian approach to model spatially correlated binary data with applications to dental research ». Diss., Connect to online resource - MSU authorized users, 2008.

Find full text
15

Yao, Jiawei. « Factor models : Testing and forecasting ». Thesis, Princeton University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3682786.

Full text
Abstract:

This dissertation focuses on two aspects of factor models, testing and forecasting. For testing, we investigate a more general high-dimensional testing problem, with an emphasis on panel data models. Specifically, we propose a novel technique to boost the power of testing a high-dimensional vector against sparse alternatives. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers, whereas more powerful tests such as thresholding and extreme-value tests require either stringent conditions or bootstrap to derive the null distribution, and often suffer from size distortions. Based on a screening technique, we introduce a ''power enhancement component", which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. As a byproduct, the power enhancement component also consistently identifies the elements that violate the null hypothesis.

Next, we consider forecasting a single time series using many predictors when nonlinearity is present. We develop a new methodology, called sufficient forecasting, by connecting sliced inverse regression with factor models. The sufficient forecasting correctly estimates projections of the underlying factors and provides multiple predictive indices for further investigation. We derive asymptotic results for the estimate of the central space spanned by these projection directions. Our method allows the number of predictors to be larger than the sample size, and therefore extends the applicability of inverse regression. Numerical experiments demonstrate that the proposed method improves upon a linear forecasting model. Our results are further illustrated in an empirical study of macroeconomic variables, where sufficient forecasting is found to deliver additional predictive power over conventional methods.

16

Incarbone, Giuseppe. « Statistical algorithms for Cluster Weighted Models ». Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1383.

Full text
Abstract:
Cluster-weighted modeling (CWM) is a mixture approach to modeling the joint probability of data coming from a heterogeneous population. In this thesis we first investigate statistical properties of CWM, from both a theoretical and a numerical point of view, for both the Gaussian and the Student-t CWM. Then we introduce a novel family of twelve mixture models, all nested in the linear-t cluster-weighted model (CWM). This family of models provides a unified framework that also includes the linear Gaussian CWM as a special case. Parameter estimation is carried out through algorithms based on maximum likelihood estimation, and both the BIC and the ICL are used for model selection. Finally, based on these algorithms, a software package for the R language has been implemented.
17

Kleppertknoop, Lily. « "Here Stands a High Bred Horse" : A Theory of Economics and Horse Breeding in Colonial Virginia, 1750-1780 ; a Statistical Model ». W&M ScholarWorks, 2013. https://scholarworks.wm.edu/etd/1539626711.

Full text
18

Bun, Maurice Josephus Gerardus. « Accurate statistical analysis in dynamic panel data models ». [Amsterdam : Amsterdam : Thela Thesis] ; Universiteit van Amsterdam [Host], 2001. http://dare.uva.nl/document/57690.

Full text
19

Tuzun, Tayfun. « Applying the statistical market value accounting model to time-series data for individual firms / ». Connect to resource, 1992. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1261419575.

Full text
20

Cox, Gregory Fletcher. « Advances in Weak Identification and Robust Inference for Generically Identified Models ». Thesis, Yale University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10633240.

Full text
Abstract:

This dissertation establishes tools for valid inference in models that are only generically identified with a special focus on factor models.

Chapter one considers inference for models under a general form of identification failure, by studying microeconometric applications of factor models. Factor models postulate unobserved variables (factors) that explain the covariation between observed variables. For example, school quality can be modeled as a common factor to a variety of school characteristics. Observed variables depend on factors linearly with coefficients that are called factor loadings. Identification in factor models is determined by a rank condition on the factor loadings. The rank condition guarantees that the observed variables are sufficiently related to the factors that the parameters in the distribution of the factors can be identified. When the rank condition fails, for example when the observed school characteristics are weakly related to school quality, the asymptotic distribution of test statistics is nonstandard so that chi-squared critical values no longer control size.

Calculating new critical values that do control size requires characterizing the asymptotic distribution of the test statistic along sequences of parameters that converge to points of rank condition failure. This paper presents new theorems for this characterization which overcome two technical difficulties: (1) non-differentiability of the boundary of the identified set and (2) degeneracy in the limit stochastic process for the objective function. These difficulties arise in factor models, as well as a wider class of generically identified models, which these theorems cover. Non-differentiability of the boundary of the identified set is solved by squeezing the distribution of the estimator between a nonsmooth, fixed boundary and a smooth, drifting boundary. Degeneracy in the limit stochastic process is solved by restandardizing the objective function to a higher order so that the resulting limit satisfies a unique minimum condition. Robust critical values, calculated by taking the supremum over quantiles of the asymptotic distributions of the test statistic, result in a valid robust inference procedure.

Chapter one demonstrates the robust inference procedure in two examples. In the first example, there is only one factor, for which the factor loadings may be zero or close to zero. This simple example highlights the aforementioned important theoretical difficulties. For the second example, Cunha, Heckman, and Schennach (2010), as well as other papers in the literature, use a factor model to estimate the production of skills in children as a function of parental investments. Their empirical specification includes two types of skills, cognitive and noncognitive, but only one type of parental investment out of a concern for identification failure. We formulate and estimate a factor model with two types of parental investment, which may not be identified because of rank condition failure. We find that for one of the four age categories, 6-9 year olds, the factors are close to being unidentified, and therefore standard inference results are misleading. For all other age categories, the distribution of the factors is identified.

Chapter two provides a higher-order stochastic expansion of M- and Z- estimators. Stochastic expansions are useful for a wide variety of stochastic problems, including bootstrap refinements, Edgeworth expansions, and identification failure. Without identification, the higher-order terms in the expansion may become relevant for the limit theory. Stochastic expansions above fourth order are rarely used because the expressions in the expansion become intractable. For M- and Z- estimators, a wide class of estimators that maximize an objective function or set an objective function to zero, this paper provides smoothness conditions and a closed-form expression for a stochastic expansion up to an arbitrary order.

Chapter three provides sufficient conditions for a random function to have a global unique minimum almost surely. Many important statistical objects can be defined as the global minimizing set of a function, including identified sets, extremum estimators, and the limit of a sequence of random variables (due to the argmax theorem). Whether this minimum is achieved at a unique point or a larger set is often practically and/or theoretically relevant. This paper considers a class of functions indexed by a vector of parameters and provides simple transversality-type conditions which are sufficient for the minimizing set to be a unique point for almost every function.

21

Tindall, Nathaniel W. « Analyses of sustainability goals : Applying statistical models to socio-economic and environmental data ». Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54259.

Full text
Abstract:
This research investigates the environment and development issues of three stakeholders at multiple scales—global, national, regional, and local. Through the analysis of financial, social, and environmental metrics, the potential benefits and risks of each case study are estimated, and their implications are considered. In the first case study, the relationship of manufacturing and environmental performance is investigated. Over 700 facilities of a global manufacturer that produce 11 products on six continents were investigated to understand global variations and determinants of environmental performance. Water, energy, carbon dioxide emissions, and production data from these facilities were analyzed to assess environmental performance; the relationship between production composition at the individual firm and environmental performance was investigated. Location-independent environmental performance metrics were combined to provide both global and local measures of environmental performance. These models were extended to estimate future water use, energy use, and greenhouse gas emissions considering potential demand shifts. Natural resource depletion risks were investigated, and mitigation strategies related to vulnerabilities and exposure were discussed. The case study demonstrated how data from multiple facilities can be used to characterize the variability amongst facilities and to preview how changes in production may affect overall corporate environmental metrics. The developed framework adds a new approach to account for environmental performance and degradation as well as assess potential risk in locations where climate change may affect the availability of production resources (i.e., water and energy) and thus is a tool for understanding risk and maintaining competitive advantage. The second case study was designed to address the issue of delivering affordable and sustainable energy. Energy pricing was evaluated by modeling individual energy consumption behaviors. This analysis simulated a heterogeneous set of residential households in both the urban and rural environments in order to understand demand shifts in the residential energy end-use sector due to the effects of electricity pricing. An agent-based model (ABM) was created to investigate the interactions of energy policy and individual household behaviors; the model incorporated empirical data on beliefs and perceptions of energy. The environmental beliefs, energy pricing grievances, and social networking dynamics were integrated into the ABM model structure. This model projected the aggregate residential sector electricity demand throughout the 30-year time period as well as distinguished the respective number of households that use only electricity, that rely solely on indigenous fuels, and that use both indigenous fuels and electricity. The model is one of the first characterizations of household electricity demand response and fuel transitions related to energy pricing at the individual household level, and is one of the first approaches to evaluating consumer grievance and rioting response to energy service delivery. The model framework is suggested as an innovative tool for energy policy analysis and can easily be revised to assist policy makers in other developing countries. In the final case study, a framework was developed for a broad cost-benefit and greenhouse gas evaluation of transit systems and their associated developments. A case study of the Atlanta BeltLine was developed.
The net greenhouse gas emissions from the BeltLine light rail system will depend on the energy efficiency of the streetcars themselves, the greenhouse gas emissions from the electricity used to power the streetcars, the extent to which people use the BeltLine instead of driving personal vehicles, and the efficiency of their vehicles. The effects of ridership, residential densities, and housing mix on environmental performance were investigated and were used to estimate the overall system efficacy. The range of the net present value of this system was estimated considering health, congestion, per capita greenhouse gas emissions, and societal costs and benefits on a time-varying scale as well as considering the construction and operational costs. The 95% confidence interval was found with a range bounded by a potential loss of $860 million and a benefit of $2.3 billion; the mean net present value was $610 million. It is estimated that the system will generate a savings of $220 per ton of emitted CO2, with a 95% confidence interval bounded by a potential social cost of $86 per ton CO2 and a savings of $595 per ton CO2.
22

Wong, Chun-mei May, et 王春美. « The statistical tests on mean reversion properties in financial markets ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31211975.

Full text
23

Barone, Anthony J. « State Level Earned Income Tax Credit’s Effects on Race and Age : An Effective Poverty Reduction Policy ». Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/cmc_theses/771.

Full text
Abstract:
In this paper, I analyze the effectiveness of state-level Earned Income Tax Credit programs at reducing poverty levels. I conducted this analysis for the years 1991 through 2011 using a panel data model with fixed effects. The main independent variables of interest were the state and federal EITC rates, minimum wage, gross state product, population, and unemployment, all by state. I determined that increases to the state EITC rates provided only a slight decrease to both the overall white below-poverty population and the corresponding white childhood population under 18, while both the overall and the under-18 black population for this category realized moderate decreases in their poverty rates for the same time period. I also provide a comparison of the effectiveness of the state-level EITCs and the minimum wage over the same time period on these select demographic groups.
24

Zhang, Yonghui. « Three essays on large panel data models with cross-sectional dependence ». Thesis, Singapore Management University (Singapore), 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3601351.

Full text
Abstract:

My dissertation consists of three essays which contribute new theoretical results to large panel data models with cross-sectional dependence. These essays try to answer or partially answer some prominent questions such as how to detect the presence of cross-sectional dependence and how to capture the latent structure of cross-sectional dependence and estimate parameters efficiently by removing its effects.

Chapter 2 introduces a nonparametric test for cross-sectional contemporaneous dependence in large dimensional panel data models based on the squared distance between the pair-wise joint density and the product of the marginals. The test can be applied to either raw observable data or residuals from local polynomial time series regressions for each individual to estimate the joint and marginal probability density functions of the error terms. In either case, we establish the asymptotic normality of our test statistic under the null hypothesis by permitting both the cross section dimension n and the time series dimension T to pass to infinity simultaneously and relying upon the Hoeffding decomposition of a two-fold U-statistic. We also establish the consistency of our test. A small set of Monte Carlo simulations is conducted to evaluate the finite sample performance of our test and compare it with that of Pesaran (2004) and Chen, Gao, and Li (2009).

Chapter 3 analyzes nonparametric dynamic panel data models with interactive fixed effects, where the predetermined regressors enter the models nonparametrically and the common factors enter the models linearly but with individual specific factor loadings. We consider the issues of estimation and specification testing when both the cross-sectional dimension N and the time dimension T are large. We propose sieve estimation for the nonparametric function by extending Bai's (2009) principal component analysis (PCA) to our nonparametric framework. Following Moon and Weidner's (2010, 2012) asymptotic expansion of the Gaussian quasilog-likelihood function, we derive the convergence rate for the sieve estimator and establish its asymptotic normality. The sources of asymptotic biases are discussed and a consistent bias-corrected estimator is provided. We also propose a consistent specification test for the linearity of the nonparametric functional form by comparing the linear and sieve estimators. We establish the asymptotic distributions of the test statistic under both the null hypothesis and a sequence of Pitman local alternatives.

To improve the finite sample performance of the test, we also propose a bootstrap procedure to obtain the bootstrap p-values and justify its validity. Monte Carlo simulations are conducted to investigate the finite sample performance of our estimator and test. We apply our model to an economic growth data set to study the relationship between capital accumulation and real GDP growth rate.

Chapter 4 proposes a nonparametric test for common trends in semiparametric panel data models with fixed effects based on a measure of nonparametric goodness-of-fit (R2). We first estimate the model under the null hypothesis of common trends by the method of profile least squares, and obtain the augmented residual which consistently estimates the sum of the fixed effect and the disturbance under the null.

Then we run a local linear regression of the augmented residuals on a time trend and calculate the nonparametric R2 for each cross section unit. The proposed test statistic is obtained by averaging all cross sectional nonparametric R2's, which is close to 0 under the null and deviates from 0 under the alternative. We show that after appropriate standardization the test statistic is asymptotically normally distributed under both the null hypothesis and a sequence of Pitman local alternatives. We prove test consistency and propose a bootstrap procedure to obtain p-values. Monte Carlo simulations indicate that the test performs well in finite samples. Empirical applications are conducted exploring the commonality of spatial trends in UK climate change data and idiosyncratic trends in OECD real GDP growth data. Both applications reveal the fragility of the widely adopted common trends assumption.

25

Rui, Xiongwen. « Essays on the Solution, Estimation, and Analysis of Dynamic Nonlinear Economic Models / ». The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487928649987711.

Full text
26

Hwang, Jungbin. « Fixed smoothing asymptotic theory in over-identified econometric models in the presence of time-series and clustered dependence ». Thesis, University of California, San Diego, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10128431.

Full text
Abstract:

In the widely used over-identified econometric model, the two-step Generalized Methods of Moments (GMM) estimator and inference, first suggested by Hansen (1982), require the estimation of optimal weighting matrix at the initial stages. For time series data and clustered dependent data, which is our focus here, the optimal weighting matrix is usually referred to as the long run variance (LRV) of the (scaled) sample moment conditions. To maintain generality and avoid misspecification, nowadays we do not model serial dependence and within-cluster dependence parametrically but use the heteroscedasticity and autocorrelation robust (HAR) variance estimator in standard practice. These estimators are nonparametric in nature with high variation in finite samples, but the conventional increasing smoothing asymptotics, so called small-bandwidth asymptotics, completely ignores the finite sample variation of the estimated GMM weighting matrix. As a consequence, empirical researchers are often in danger of making unreliable inferences and false assessments of the (efficient) two-step GMM methods. Motivated by this issue, my dissertation consists of three papers which explore the efficiency and approximation issues in the two-step GMM methods by developing new, more accurate, and easy-to-use approximations to the GMM weighting matrix.

The first chapter, "Simple and Trustworthy Cluster-Robust GMM Inference" explores new asymptotic theory for two-step GMM estimation and inference in the presence of clustered dependence. Clustering is a common phenomenon for many cross-sectional and panel data sets in applied economics, where individuals in the same cluster will be interdependent while those from different clusters are more likely to be independent. The core of new approximation scheme here is that we treat the number of clusters G fixed as the sample size increases. Under the new fixed-G asymptotics, the centered two-step GMM estimator and two continuously-updating estimators have the same asymptotic mixed normal distribution. Also, the t statistic, J statistic, as well as the trinity of two-step GMM statistics (QLR, LM and Wald) are all asymptotically pivotal, and each can be modified to have an asymptotic standard F distribution or t distribution. We also suggest a finite sample variance correction further to improve the accuracy of the F or t approximation. Our proposed asymptotic F and t tests are very appealing to practitioners, as test statistics are simple modifications of the usual test statistics, and the F or t critical values are readily available from standard statistical tables. We also apply our methods to an empirical study on the causal effect of access to domestic and international markets on household consumption in rural China.

The second paper "Should we go one step further? An Accurate Comparison of One-step and Two-step procedures in a Generalized Method of Moments Framework” (coauthored with Yixiao Sun) focuses on GMM procedure in time-series setting and provides an accurate comparison of one-step and two-step GMM procedures in a fixed-smoothing asymptotics framework. The theory developed in this paper shows that the two-step procedure outperforms the one-step method only when the benefit of using the optimal weighting matrix outweighs the cost of estimating it. We also provide clear guidance on how to choose a more efficient (or powerful) GMM estimator (or test) in practice.

While our fixed smoothing asymptotic theory accurately describes sampling distribution of two-step GMM test statistic, the limiting distribution of conventional GMM statistics is non-standard, and its critical values need to be simulated or approximated by standard distributions in practice. In the last chapter, "Asymptotic F and t Tests in an Efficient GMM Setting" (coauthored with Yixiao Sun), we propose a simple and easy-to-implement modification to the trinity (QLM, LM, and Wald) of two-step GMM statistics and show that the modified test statistics are all asymptotically F distributed under the fixed-smoothing asymptotics. The modification is multiplicative and only involves the J statistic for testing over-identifying restrictions. In fact, what we propose can be regarded as the multiplicative variance correction for two-step GMM statistics that takes into account the additional asymptotic variance term under the fixed-smoothing asymptotics. The results in this paper can be immediately generalized to the GMM setting in the presence of clustered dependence.
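To fix ideas, here is a hedged numerical sketch of the standard two-step GMM estimator in a linear instrumental-variables setting; the simulated data and the heteroscedasticity-robust weighting are illustrative assumptions, not the fixed-smoothing or cluster-robust procedures developed in the dissertation.

```python
# Two-step GMM for y = X*beta + u with instruments Z: first step uses (Z'Z)^-1,
# second step re-weights with the inverse covariance of the sample moments.
import numpy as np

rng = np.random.default_rng(3)
n = 500
z = rng.normal(size=(n, 3))                               # instruments
x = z @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)    # regressor correlated with instruments
y = 2.0 * x + rng.normal(size=n)
X, Z = x[:, None], z

def gmm(W):
    A = X.T @ Z @ W @ Z.T @ X
    b = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, b)

beta1 = gmm(np.linalg.inv(Z.T @ Z))                       # step 1
u = y - X @ beta1
S = (Z * u[:, None] ** 2).T @ Z / n                       # HC estimate of Var(z_i u_i)
beta2 = gmm(np.linalg.inv(S))                             # step 2 (efficient weighting)
print(beta1, beta2)
```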

27

Facchinetti, Alessandro <1991>. « Likelihood free methods for inference on complex models ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/17017.

Full text
Abstract:
Complex models often have intractable likelihoods, so methods that involve evaluation of the likelihood function are infeasible. The aims of the research are:
• to provide a review of the likelihood-free methods (e.g., ABC or synthetic likelihood) used in fitting complex models to large datasets;
• to use likelihood-free methods to make inference on complex models such as random network models;
• to develop the code for the analysis;
• to apply the model and methods to network data from economics and finance, such as trade networks, financial flow networks, and financial contagion networks;
• to write a final report where methods and results are presented and discussed.
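As a hedged illustration of the likelihood-free idea mentioned above, the sketch below implements the simplest rejection-ABC loop: keep prior draws whose simulated summary statistics land close to the observed ones. The model, prior, summaries and tolerance are illustrative assumptions, not the thesis's network models.

```python
# Rejection ABC: approximate the posterior without evaluating the likelihood.
import numpy as np

rng = np.random.default_rng(4)
obs = rng.normal(loc=3.0, scale=1.0, size=100)    # "observed" data
s_obs = np.array([obs.mean(), obs.std()])         # summary statistics

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 10.0)                # draw from the prior
    sim = rng.normal(loc=theta, scale=1.0, size=100)
    s_sim = np.array([sim.mean(), sim.std()])
    if np.linalg.norm(s_sim - s_obs) < 0.2:       # accept if summaries are close
        accepted.append(theta)

print("ABC posterior mean:", np.mean(accepted))
```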
28

Witte, Hugh Douglas. « Markov chain Monte Carlo and data augmentation methods for continuous-time stochastic volatility models ». Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/283976.

Full text
Abstract:
In this paper we exploit some recent computational advances in Bayesian inference, coupled with data augmentation methods, to estimate and test continuous-time stochastic volatility models. We augment the observable data with a latent volatility process which governs the evolution of the data's volatility. The level of the latent process is estimated at finer increments than the data are observed in order to derive a consistent estimator of the variance over each time period the data are measured. The latent process follows a law of motion which has either a known transition density or an approximation to the transition density that is an explicit function of the parameters characterizing the stochastic differential equation. We analyze several models which differ with respect to both their drift and diffusion components. Our results suggest that for two size-based portfolios of U.S. common stocks, a model in which the volatility process is characterized by nonstationarity and constant elasticity of instantaneous variance (with respect to the level of the process) greater than 1 best describes the data. We show how to estimate the various models, undertake the model selection exercise, update posterior distributions of parameters and functions of interest in real time, and calculate smoothed estimates of within sample volatility and prediction of out-of-sample returns and volatility. One nice aspect of our approach is that no transformations of the data or the latent processes, such as subtracting out the mean return prior to estimation, or formulating the model in terms of the natural logarithm of volatility, are required.
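To make the data-generating side of such a model concrete, here is a minimal Euler-discretisation sketch of a continuous-time stochastic volatility process with elasticity of instantaneous variance above one; the drift and diffusion choices and parameter values are illustrative assumptions and the MCMC/data-augmentation estimation itself is not shown.

```python
# Euler scheme for a CEV-type latent variance process and the returns it drives.
import numpy as np

rng = np.random.default_rng(5)
dt, T = 1 / 252, 252 * 4
kappa, theta, xi, gamma = 3.0, 0.04, 0.4, 1.5     # mean reversion, level, vol-of-vol, elasticity > 1
v = np.empty(T); v[0] = theta                      # latent variance path
r = np.zeros(T)                                    # observed returns
for t in range(1, T):
    dv = kappa * (theta - v[t-1]) * dt + xi * v[t-1] ** gamma * np.sqrt(dt) * rng.normal()
    v[t] = max(v[t-1] + dv, 1e-8)                  # keep variance positive
    r[t] = np.sqrt(v[t] * dt) * rng.normal()       # daily return with latent variance v_t
```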
29

Koliadenko, Pavlo <1998>. « Time series forecasting using hybrid ARIMA and ANN models ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/19992.

Full text
30

Rabin, Gregory S. « A reduced-form statistical climate model suitable for coupling with economic emissions projections ». Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41672.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 36-37).
In this work, we use models based on past data and scientific analysis to determine possible future states of the environment. We attempt to improve the equations for temperature and greenhouse gas concentration used in conjunction with the MIT Emissions Prediction and Policy Analysis (EPPA) model or for independent climate analysis based on results from the more complex MIT Integrated Global Systems Model (IGSM). The functions we generate should allow a software system to approximate the environmental variables from the policy inputs in a matter of seconds. At the same time, the estimates should be close enough to the exact values given by the IGSM to be considered meaningful.
by Gregory S. Rabin.
M.Eng.
31

Cesale, Giancarlo. « A novel approach to forecasting from non scalar DCC models ». Doctoral thesis, Universita degli studi di Salerno, 2016. http://hdl.handle.net/10556/2197.

Full text
Abstract:
2014 - 2015
Estimating and predicting joint second-order moments of asset portfolios is of huge importance in many practical applications and, hence, modeling volatility has become a crucial issue in financial econometrics. In this context multivariate generalized autoregressive conditional heteroscedasticity (M-GARCH) models are widely used, especially in their versions for the modeling of conditional correlation matrices (DCC-GARCH). Nevertheless, these models typically suffer from the so-called curse of dimensionality: the number of needed parameters rapidly increases when the portfolio dimension gets large, thus making their use practically infeasible. Due to these reasons, many simplified versions of the original specifications have been developed, often based upon restrictive a priori assumptions, in order to achieve the best tradeoff between flexibility and numerical feasibility. However, these strategies may in general entail a certain loss of information because of the imposed simplifications. After a description of the general framework of M-GARCH models and a discussion on some specific topics relative to second-order multivariate moments of large dimension, the main contribution of this thesis is to propose a new method for forecasting conditional correlation matrices in high-dimensional problems which is able to exploit more information without imposing any a priori structure and without incurring overwhelming calculations. The performance of the proposed method is evaluated and compared to alternative predictors through applications to real data. [edited by author]
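For background, the snippet below sketches the scalar DCC correlation recursion that such forecasting methods build on; the parameter values are illustrative and `std_resid` is assumed to hold GARCH-standardised returns. It is a baseline reference point, not the new non-scalar forecasting approach proposed in the thesis.

```python
# Scalar DCC recursion: Q_t = (1-a-b)*Qbar + a*e_{t-1}e_{t-1}' + b*Q_{t-1}, R_t from Q_t.
import numpy as np

def dcc_corr_path(std_resid, a=0.05, b=0.93):
    """Return the sequence of conditional correlation matrices R_t."""
    T, N = std_resid.shape
    Qbar = np.cov(std_resid, rowvar=False)          # unconditional correlation target
    Q = Qbar.copy()
    Rs = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        Rs[t] = Q * np.outer(d, d)                  # rescale Q_t into a correlation matrix
        e = std_resid[t][:, None]
        Q = (1 - a - b) * Qbar + a * (e @ e.T) + b * Q
    return Rs

eps = np.random.default_rng(6).normal(size=(500, 3))   # placeholder standardised residuals
print(dcc_corr_path(eps)[-1].round(2))
```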
32

He, Wei. « Model selection for cointegrated relationships in small samples ». Thesis, Nelson Mandela Metropolitan University, 2008. http://hdl.handle.net/10948/971.

Full text
Abstract:
Vector autoregression models have become widely used research tools in the analysis of macroeconomic time series. Cointegration techniques are an essential part of empirical macroeconomic research. They infer causal long-run relationships between nonstationary variables. In this study, six information criteria were reviewed and compared. The study focused on determining the optimal information criterion for detecting the correct lag structure of a two-variable cointegrated process.
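A minimal sketch of lag selection by information criteria for a two-variable system is shown below; the simulated VAR(1) data are placeholders, and the cointegration-specific machinery (e.g., Johansen-type estimation) compared in the thesis is not reproduced here.

```python
# Compare AIC, BIC, HQIC and FPE across candidate lag orders of a two-variable VAR.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)
T = 400
e = rng.normal(size=(T, 2))
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + e[t]          # simple VAR(1) data-generating process

sel = VAR(y).select_order(maxlags=8)
print(sel.summary())                       # information criteria for each lag order
```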
33

Park, Seoungbyung. « Factor Based Statistical Arbitrage in the U.S. Equity Market with a Model Breakdown Detection Process ». Thesis, Marquette University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10280168.

Full text
Abstract:

Many researchers have studied different strategies of statistical arbitrage to provide a steady stream of returns that are unrelated to the market condition. Among these, factor-based mean-reverting strategies have been popular and widely covered. This thesis aims to add value by evaluating the generalized pairs trading strategy and suggesting enhancements to improve out-of-sample performance. The enhanced strategy generated a daily Sharpe ratio of 6.07% in the out-of-sample period from January 2013 through October 2016, with a correlation of −0.03 with the S&P 500. During the same period, the S&P 500 generated a Sharpe ratio of 6.03%.

This thesis is differentiated from the previous relevant studies in the following three ways. First, the factor selection process in previous statistical arbitrage studies has often been unclear or rather subjective. Second, most of the literature focuses on in-sample rather than out-of-sample results of the strategies, which are what practitioners are mainly interested in. Third, by implementing a hidden Markov model, it aims to detect regime changes and improve the timing of trades.
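As a hedged sketch of the regime-detection step described above, the snippet fits a two-state Gaussian hidden Markov model to a spread series and only trades mean reversion in the low-variance regime. The spread is simulated, the threshold is an arbitrary illustrative choice, and the use of the third-party `hmmlearn` package is an assumption, not the thesis's implementation.

```python
# Two-state HMM on a spread series: identify the calm regime, trade only there.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(8)
spread = np.concatenate([rng.normal(0, 1, 400),     # mean-reverting regime
                         rng.normal(0, 4, 100)])     # model-breakdown regime
X = spread.reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
hmm.fit(X)
state = hmm.predict(X)
calm = np.argmin(hmm.covars_.ravel())                # label of the low-variance regime
signal = np.where((state == calm) & (np.abs(spread) > 1.5), -np.sign(spread), 0.0)
```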

Styles APA, Harvard, Vancouver, ISO, etc.
34

Lu, Zhen Cang. « Price forecasting models in online flower shop implementation ». Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691395.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
35

Kochi, Ikuho. « Essays on the Value of a Statistical Life ». unrestricted, 2007. http://etd.gsu.edu/theses/available/etd-04302007-172639/.

Texte intégral
Résumé :
Thesis (Ph. D.)--Georgia State University, 2007.
Title from file title page. Laura O. Taylor, committee chair; H. Spencer Banzhaf, Susan K. Laury, Mary Beth Walker, Kenneth E. McConnell, committee members. Electronic text (177 p. : ill.) : digital, PDF file. Description based on contents viewed Jan. 7, 2008. Includes bibliographical references (p. 172-176).
Styles APA, Harvard, Vancouver, ISO, etc.
36

Doolan, Mark Bernard. « Evaluating multivariate volatility forecasts : how effective are statistical and economic loss functions ? » Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/45750/1/Mark_Doolan_Thesis.pdf.

Texte intégral
Résumé :
Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, it is obvious that selecting the optimal forecasting model is challenging. The aim of this thesis is to thoroughly investigate how effective many commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. That is, QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
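For reference, the sketch below writes out the two statistical loss functions named in the abstract in one common multivariate form (MSE as the squared Frobenius distance between forecast and proxy, QLIKE as the Gaussian quasi-likelihood loss); the exact definitions used in the thesis may differ in scaling or normalization, so this is only an assumed, standard formulation.

```python
import numpy as np

def mse_loss(H_forecast: np.ndarray, S_proxy: np.ndarray) -> float:
    """Squared Frobenius distance between a covariance forecast and a volatility proxy."""
    d = H_forecast - S_proxy
    return float(np.sum(d * d))

def qlike_loss(H_forecast: np.ndarray, S_proxy: np.ndarray) -> float:
    """Gaussian quasi-likelihood loss: log|H| + tr(H^{-1} S), up to constants."""
    sign, logdet = np.linalg.slogdet(H_forecast)
    if sign <= 0:
        raise ValueError("forecast must be positive definite")
    return float(logdet + np.trace(np.linalg.solve(H_forecast, S_proxy)))

# Tiny illustration with a 2-asset forecast and a (noisy) realised-covariance proxy.
H = np.array([[1.0, 0.3], [0.3, 0.8]])
S = np.array([[1.1, 0.25], [0.25, 0.9]])
print(mse_loss(H, S), qlike_loss(H, S))
```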
Styles APA, Harvard, Vancouver, ISO, etc.
37

McCloud, Nadine. « Model misspecification theory and applications / ». Diss., Online access via UMI:, 2008.

Trouver le texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

Donno, Annalisa <1983&gt. « Multidimensional Measures of Firm Competitiveness : a Model-Based Approach ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5173/1/donno_annalisa_tesi.pdf.

Texte intégral
Résumé :
The concept of competitiveness, for a long time considered strictly connected to economic and financial performance, has evolved, above all in recent years, toward new, wider interpretations disclosing its multidimensional nature. The shift to a multidimensional view of the phenomenon has excited an intense debate involving theoretical reflections on the features characterizing it, as well as methodological considerations on its assessment and measurement. The present research has a twofold objective: studying in depth the tangible and intangible aspects characterizing multidimensional competitive phenomena from a micro-level point of view, and measuring competitiveness through a model-based approach. Specifically, we propose a non-parametric approach to Structural Equation Model techniques for the computation of multidimensional composite measures. Structural Equation Model tools are used to develop the empirical application to the Italian case: a model-based micro-level competitiveness indicator for the measurement of the phenomenon is constructed on a large sample of Italian small and medium enterprises.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Donno, Annalisa <1983&gt. « Multidimensional Measures of Firm Competitiveness : a Model-Based Approach ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5173/.

Texte intégral
Résumé :
The concept of competitiveness, for a long time considered strictly connected to economic and financial performance, has evolved, above all in recent years, toward new, wider interpretations disclosing its multidimensional nature. The shift to a multidimensional view of the phenomenon has excited an intense debate involving theoretical reflections on the features characterizing it, as well as methodological considerations on its assessment and measurement. The present research has a twofold objective: studying in depth the tangible and intangible aspects characterizing multidimensional competitive phenomena from a micro-level point of view, and measuring competitiveness through a model-based approach. Specifically, we propose a non-parametric approach to Structural Equation Model techniques for the computation of multidimensional composite measures. Structural Equation Model tools are used to develop the empirical application to the Italian case: a model-based micro-level competitiveness indicator for the measurement of the phenomenon is constructed on a large sample of Italian small and medium enterprises.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Mitchell, Zane Windsor Jr. « A Statistical Analysis Of Construction Equipment Repair Costs Using Field Data & ; The Cumulative Cost Model ». Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30468.

Texte intégral
Résumé :
The management of heavy construction equipment is a difficult task. Equipment managers are often called upon to make complex economic decisions involving the machines in their charge. These decisions include those concerning acquisitions, maintenance, repairs, rebuilds, replacements, and retirements. Equipment managers must also be able to forecast internal rental rates for their machinery. Repair and maintenance expenditures can have significant impacts on these economic decisions and forecasts. The purpose of this research was to identify a regression model that can adequately represent repair costs in terms of machine age in cumulative hours of use. The study was conducted using field data on 270 heavy construction machines from four different companies. Nineteen different linear and transformed non-linear models were evaluated. A second-order polynomial expression was selected as the best. It was demonstrated how this expression could be incorporated into the Cumulative Cost Model developed by Vorster, where it can be used to identify optimum economic decisions. It was also demonstrated how equipment managers could form their own regression equations using standard spreadsheet and database software.
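The selected model, a second-order polynomial in cumulative hours of use, can be sketched as follows; the data arrays are invented for illustration and the fitting is ordinary least squares via numpy, which is only one possible stand-in for the spreadsheet/database workflow the abstract alludes to.

```python
import numpy as np

# Hypothetical field data: cumulative hours of use and cumulative repair cost ($).
hours = np.array([500, 1500, 3000, 5000, 7500, 10000, 12500], dtype=float)
cost = np.array([800, 2900, 7200, 14500, 26000, 41000, 60500], dtype=float)

# Second-order polynomial: cost = b0 + b1*hours + b2*hours^2 (least-squares fit).
b2, b1, b0 = np.polyfit(hours, cost, deg=2)       # polyfit returns highest power first
print(f"cost ~= {b0:.1f} + {b1:.4f}*h + {b2:.6f}*h^2")

# The fitted curve can then be evaluated at any machine age, e.g. 9,000 hours.
print("predicted cumulative repair cost at 9000 h:",
      round(float(np.polyval([b2, b1, b0], 9000)), 0))
```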
Ph. D.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Pouliot, William. « Two applications of U-Statistic type processes to detecting failures in risk models and structural breaks in linear regression models ». Thesis, City University London, 2010. http://openaccess.city.ac.uk/1166/.

Texte intégral
Résumé :
This dissertation is concerned with detecting failures in risk models and structural breaks in linear regression models. By applying Theorem 2.1 of Szyszkowicz on U-statistic type processes, a number of weak convergence results regarding three weighted partial sum processes are established. It is shown that these partial sum processes share certain invariance properties: estimation risk does not affect their weak convergence results, and they are also robust to asymmetries in the error process in linear regression models. The methods developed in Chapter 3 are also applied to a four-factor Capital Asset Pricing Model, where it is shown that manager stock selection abilities vary over time.
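As an illustration of the kind of partial sum process involved (a plain CUSUM-of-OLS-residuals statistic, not the specific weighted U-statistic-type processes of the dissertation), the following sketch tests for a single structural break in a linear regression; the data, break date and scale are invented for the example.

```python
import numpy as np

def cusum_of_residuals(y: np.ndarray, X: np.ndarray) -> float:
    """Sup of the standardized partial sums of OLS residuals (a CUSUM-type statistic)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma = resid.std(ddof=X.shape[1])
    partial_sums = np.cumsum(resid) / (sigma * np.sqrt(len(y)))
    return float(np.max(np.abs(partial_sums)))

# Simulated regression with a mean shift halfway through the sample.
rng = np.random.default_rng(2)
n = 400
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + rng.normal(size=n)
y[n // 2:] += 0.8                                 # structural break in the intercept
print("CUSUM statistic:", round(cusum_of_residuals(y, X), 3))
```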
Styles APA, Harvard, Vancouver, ISO, etc.
42

Thompson, Mery Helena. « Optimum experimental designs for models with a skewed error distribution : with an application to stochastic frontier models ». Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/236/.

Texte intégral
Résumé :
In this thesis, optimum experimental designs for a statistical model possessing a skewed error distribution are considered, with particular interest in investigating possible parameter dependence of the optimum designs. The skewness in the distribution of the error arises from its assumed structure. The error consists of two components (i) random error, say V, which is symmetrically distributed with zero expectation, and (ii) some type of systematic error, say U, which is asymmetrically distributed with nonzero expectation. Error of this type is sometimes called 'composed' error. A stochastic frontier model is an example of a model that possesses such an error structure. The systematic error, U, in a stochastic frontier model represents the economic efficiency of an organisation. Three methods for approximating information matrices are presented. An approximation is required since the information matrix contains complicated expressions, which are difficult to evaluate. However, only one method, 'Method 1', is recommended because it guarantees nonnegative definiteness of the information matrix. It is suggested that the optimum design is likely to be sensitive to the approximation. For models that are linearly dependent on the model parameters, the information matrix is independent of the model parameters but depends on the variance parameters of the random and systematic error components. Consequently, the optimum design is independent of the model parameters but may depend on the variance parameters. Thus, designs for linear models with skewed error may be parameter dependent. For nonlinear models, the optimum design may be parameter dependent in respect of both the variance and model parameters. The information matrix is rank deficient. As a result, only subsets or linear combinations of the parameters are estimable. The rank of the partitioned information matrix is such that designs are only admissible for optimal estimation of the model parameters, excluding any intercept term, plus one linear combination of the variance parameters and the intercept. The linear model is shown to be equivalent to the usual linear regression model, but with a shifted intercept. This suggests that the admissible designs should be optimal for estimation of the slope parameters plus the shifted intercept. The shifted intercept can be viewed as a transformation of the intercept in the usual linear regression model. Since D_A-optimum designs are invariant to linear transformations of the parameters, the D_A-optimum design for the asymmetrically distributed linear model is just the linear, parameter independent, D_A-optimum design for the usual linear regression model with nonzero intercept. C-optimum designs are not invariant to linear transformations. However, if interest is in optimally estimating the slope parameters, the linear transformation of the intercept to the shifted intercept is no longer a consideration and the C-optimum design is just the linear, parameter independent, C-optimum design for the usual linear regression model with nonzero intercept. If interest is in estimating the slope parameters, and the shifted intercept, the C-optimum design will depend on (i) the design region; (ii) the distributional assumption on U; (iii) the matrix used to define admissible linear combinations of parameters; (iv) the variance parameters of U and V; (v) the method used to approximate the information matrix. 
Some numerical examples of designs for a cross-sectional log-linear Cobb-Douglas stochastic production frontier model are presented to demonstrate the nonlinearity of designs for models with a skewed error distribution. Torsney's (1977) multiplicative algorithm was used to find the optimum designs.
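For readers unfamiliar with the composed-error structure described above, a standard cross-sectional log-linear Cobb-Douglas stochastic production frontier can be written as follows (a generic textbook formulation; the symbols are chosen here for illustration and may differ from the thesis):

```latex
\ln y_i = \beta_0 + \sum_{k=1}^{K} \beta_k \ln x_{ik} + \varepsilon_i,
\qquad \varepsilon_i = V_i - U_i,
\qquad V_i \sim \mathcal{N}(0,\sigma_V^2), \quad U_i \ge 0,
```

where V_i is the symmetric random error and the nonnegative term U_i (for example half-normal or exponential) captures technical inefficiency, producing the skewed composed error for which the designs are constructed.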
Styles APA, Harvard, Vancouver, ISO, etc.
43

Kemp, Gordon C. R. « Asymptotic expansion approximations and the distributions of various test statistics in dynamic econometric models ». Thesis, University of Warwick, 1987. http://wrap.warwick.ac.uk/99431/.

Texte intégral
Résumé :
In this thesis we examine the derivation of asymptotic expansion approximations to the cumulative distribution functions of asymptotically chi-square test statistics under the null hypothesis being tested, and the use of such approximations in the investigation of the properties of testing procedures. We are particularly concerned with how the structure of various test statistics may simplify the derivation of asymptotic expansion approximations to their cumulative distribution functions, and also how these approximations can be used in conjunction with other small sample techniques to investigate the properties of testing procedures. In Chapter 1 we briefly review the construction of test statistics based on the Wald testing principle, and in Chapter 2 we review the various approaches to finite sample theory which have been adopted in econometrics, including asymptotic expansion methods. In Chapter 3 we derive asymptotic expansion approximations to the joint cumulative distribution functions of asymptotically chi-square test statistics, making explicit use of certain aspects of the structure of such test statistics. In Chapters 4, 5 and 6 we apply these asymptotic expansion approximations under the null hypothesis, in conjunction with other small sample techniques, to a number of specific testing problems. The test statistics considered in Chapters 4 and 6 are Wald test statistics and those considered in Chapter 5 are predictive failure test statistics. The asymptotic expansion approximations to the cumulative distribution functions of the test statistics under the null hypothesis are evaluated numerically; the implementation of the algorithm for obtaining asymptotic expansion approximations to the cumulative distribution functions of test statistics is discussed in an Appendix on Computing. Finally, in Chapter 7 we draw overall conclusions from the earlier chapters of the thesis and briefly discuss directions for possible future research.
Styles APA, Harvard, Vancouver, ISO, etc.
44

Metzig, Cornelia. « A Model for a complex economic system ». Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENS038/document.

Texte intégral
Résumé :
Cette thèse s'inscrit dans le cadre des systèmes complexes appliqués aux systèmes économiques. Dans cette thèse, un modèle multi-agents a été proposé, qui modélise le cycle de production. Il est constitué d'entreprises, d'ouvriers/foyers et d'une banque, et respecte la conservation de la monnaie. Son hypothèse centrale est que les entreprises se basent sur une marge espérée pour déterminer leur production. Un scénario simple de ce modèle, où les marges espérées sont homogènes, a été analysé dans le cadre de modèles de croissance stochastique. Les résultats sont une distribution de tailles d'entreprises proche des lois de puissance, une distribution des taux de croissance en forme de « tente », ainsi qu'une dépendance de la variance de la croissance à la taille. Ces résultats sont proches des faits stylisés issus d'études empiriques. Dans un scénario plus complet, le modèle contient des caractéristiques supplémentaires : des marges espérées hétérogènes, ainsi que des paiements d'intérêts et la possibilité de faire faillite. Cela rapproche le modèle des modèles macro-économiques multi-agents. Les extensions sont décrites de façon théorique par des équations de réplicateur. Les résultats nouveaux sont la distribution d'âge des entreprises actives, la distribution de leur taux de profit, la distribution de la dette, des statistiques sur les faillites et des cycles de vie caractéristiques. Tous ces résultats sont qualitativement en accord avec des résultats d'études empiriques de plusieurs pays. Le modèle proposé génère des résultats prometteurs, en respectant le principe que des résultats qui apparaissent simultanément peuvent être générés soit par un même processus, soit par plusieurs processus compatibles.
The thesis is in the field of complex systems, applied to an economic system. An agent-based model is proposed to model the production cycle. It comprises firms, workers, and a bank, and respects stock-flow consistency. Its central assumption is that firms plan their production based on an expected profit margin. A simple scenario of the model, where the expected profit margin is the same for all firms, is analyzed in the context of simple stochastic growth models. The results are a firm size distribution close to a power law, a tent-shaped growth rate distribution, and a growth rate variance that scales with firm size. These results are close to empirically found stylized facts. In a more comprehensive version, the model contains additional features: heterogeneous profit margins, as well as interest payments and the possibility of bankruptcy. This relates the model to agent-based macroeconomic models. The extensions are described theoretically with replicator dynamics. New results are the age distribution of active firms, their profit rate distribution, debt distribution, bankruptcy statistics, as well as typical firm life cycles, all of which are qualitatively in agreement with studies of firm databases from various countries. The proposed model yields promising results while respecting the principle that jointly observed empirical facts may be generated by the same process, or by several processes that are compatible with one another.
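As a toy illustration of the "simple stochastic growth models" the abstract refers to (and emphatically not of the thesis model itself), the sketch below runs a Gibrat-style multiplicative growth process with a reflecting lower barrier, one well-known mechanism by which such dynamics produce heavy-tailed firm size distributions; all parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_firms, n_steps, s_min = 5000, 2000, 1.0

sizes = np.ones(n_firms)
for _ in range(n_steps):
    # Multiplicative i.i.d. shocks (Gibrat's law) plus a reflecting lower barrier,
    # a textbook mechanism yielding an approximately power-law upper tail.
    sizes *= np.exp(rng.normal(loc=0.0, scale=0.05, size=n_firms))
    sizes = np.maximum(sizes, s_min)

# Crude tail check: log-log slope of the empirical counter-cumulative distribution.
tail = np.sort(sizes)[-500:]
ranks = np.arange(len(tail), 0, -1)
slope = np.polyfit(np.log(tail), np.log(ranks), 1)[0]
print("approximate tail exponent:", round(-slope, 2))
```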
Styles APA, Harvard, Vancouver, ISO, etc.
45

Pandolfo, Silvia <1993&gt. « Analysis of the volatility of high-frequency data. The Realized Volatility and the HAR model ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/14840.

Texte intégral
Résumé :
Over the last decades, advances in data acquisition technology have made it easier to collect, store and manage high-frequency data. However, the analysis of observations collected at an extremely fine time scale is still a challenge: these data are characterized by specific features, related to the trading process and the microstructure of the market, which standard time series and econometric techniques are not able to reproduce. In particular, the behavior of high-frequency volatility cannot be captured by a GARCH model; hence, there is a need for more accurate ways to model it. Recently, the Heterogeneous Autoregressive model of Realized Volatility (HAR-RV) has been introduced: it allows for easy estimation and economic interpretation of the dynamics of the Realized Volatility, a consistent estimator of daily volatility based on intraday returns. The purpose of this thesis is to model and forecast high-frequency volatility, comparing the HAR model's performance to that of more classical time series models. In doing so, jump components and the leverage effect are also considered.
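A minimal sketch of the HAR-RV regression described above follows, using the standard Corsi-type specification in which tomorrow's realized volatility is regressed on daily, weekly (5-day average) and monthly (22-day average) realized volatility; the data are simulated placeholders and the estimation is plain OLS, so this only illustrates the model's structure.

```python
import numpy as np

def har_design(rv: np.ndarray):
    """Build HAR regressors: RV_d(t), RV_w(t) = 5-day mean, RV_m(t) = 22-day mean."""
    T = len(rv)
    rows, target = [], []
    for t in range(21, T - 1):
        rv_d = rv[t]
        rv_w = rv[t - 4: t + 1].mean()
        rv_m = rv[t - 21: t + 1].mean()
        rows.append([1.0, rv_d, rv_w, rv_m])
        target.append(rv[t + 1])                  # one-step-ahead realized volatility
    return np.array(rows), np.array(target)

# Placeholder realized-volatility series (persistent and positive).
rng = np.random.default_rng(4)
rv = np.abs(np.cumsum(rng.normal(scale=0.02, size=1500)) + 1.0)

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("const, beta_d, beta_w, beta_m =", np.round(beta, 4))
```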
Styles APA, Harvard, Vancouver, ISO, etc.
46

Papa, Bruno Del. « A study of social and economic evolution of human societies using methods of Statistical Mechanics and Information Theory ». Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-26092014-081449/.

Texte intégral
Résumé :
This dissertation explores some applications of statistical mechanics and information theory tools to topics of interest in anthropology, social sciences, and economics. We intended to develop mathematical and computational models with empirical and theoretical bases, aiming to identify important features of two problems: the transitions between egalitarian and hierarchical societies and the emergence of money in human societies. Anthropological data suggest the existence of a correlation between the relative neocortex size and the average size of primates' groups, most of which are hierarchical. Recent theories also suggest that social and evolutionary pressures are responsible for modifications in the cognitive capacity of individuals, which might have made possible the emergence of different types of social organization. Based on those observations, we studied a mathematical model that incorporates the hypothesis of cognitive costs, attributed to each cognitive social representation, to explain the variety of social structures in which humans may organize themselves. A Monte Carlo dynamics allows for the plotting of a phase diagram containing hierarchical, egalitarian, and intermediary regions. Roughly three parameters are responsible for that behavior: the cognitive capacity, the number of agents in the society, and the social and environmental pressure. The model also introduces a modification in the dynamics to account for a parameter representing the information exchange rate, which induces the correlations amongst the cognitive representations. Those correlations ultimately lead to the phase transition to a hierarchical society. Our results qualitatively agree with anthropological data if the variables are interpreted as their social equivalents. The other model developed during this work tries to give insights into the problem of the emergence of a unique medium of exchange, also called money. Predominant economic theories describe the emergence of money as the result of the evolution of barter economies. However, criticism has recently shed light on the lack of historical and anthropological evidence to corroborate the barter hypothesis, thus raising doubts about the mechanisms leading to money emergence and questions regarding the influence of the social configuration. Recent studies also suggest that money may be perceived by individuals as a perceptual drug, and new money theories have been developed aiming to explain the monetization of societies. By developing a computational model based on the previous dynamics for hierarchy emergence, we sought to simulate those phenomena using cognitive representations of economic networks containing information about the exchangeability of any two commodities. Similar mathematical frameworks have been used before, but no discussion about the effects of the social network configuration was presented. The model developed in this dissertation is capable of employing the concept of cognitive representations and of assigning them costs as part of the dynamics. The new dynamics makes it possible to analyze how information exchange depends on the social structure. Our results show that centralized networks, such as star or scale-free structures, yield a higher probability of money emergence. The two models, when considered together, suggest that phase transitions in social organization might be essential factors in the emergence of money, and thus cannot be ignored in future social and economic modeling.
Nesta dissertação, utilizamos ferramentas de mecânica estatística e de teoria de informação para aplicações em tópicos significativos ás areas de antropologia, ciências sociais e economia. Buscamos desenvolver modelos matemáticos e computacionais com bases empíricas e teóricas para identificar pontos importantes nas questões referentes à transição entre sociedades igualitárias e hierárquicas e à emergência de dinheiro em sociedades humanas. Dados antropológicos sugerem que há correlação entre o tamanho relativo do neocórtex e o tamanho médio de grupos de primatas, predominantemente hierárquicos, enquanto teorias recentes sugerem que pressões sociais e evolutivas alteraram a capacidade cognitiva dos indivíduos, possibilitando sua organização social em outras configurações. Com base nestas observações, desenvolvemos um modelo matemático capaz de incorporar hipóteses de custos cognitivos de representações sociais para explicar a variação de estruturas sociais encontradas em sociedades humanas. Uma dinâmica de Monte Carlo permite a construção de um diagrama de fase, no qual é possivel identificar regiões hierárquicas, igualitárias e intermediárias. Os parâmetros responsáveis pelas transições são a capacidade cognitiva, o número de agentes na sociedade e a pressão social e ecológica. O modelo também permitiu uma modificação da dinâmica, de modo a incluir um parâmetro representando a taxa de troca de informação entre os agentes, o que possibilita a introdução de correlações entre as representações cognitivas, sugerindo assim o aparecimento de assimetrias sociais, que, por fim, resultam em hierarquia. Os resultados obtidos concordam qualitativamente com dados antropológicos, quando as variáveis são interpretadas de acordo com seus equivalentes sociais. O outro modelo desenvolvido neste trabalho diz respeito ao aparecimento de uma mercadoria única de troca, ou dinheiro. Teorias econômicas predominantes descrevem o aparecimento do dinheiro como resultado de uma evolução de economias de escambo (barter). Críticas, entretanto, alertam para a falta de evidências históricas e antropológicas que corroborem esta hipótese, gerando dúvidas sobre os mecanismos que levaram ao advento do dinheiro e a influência da configuração social neste processo. Estudos recentes sugerem que o dinheiro pode se comportar como uma droga perceptual, o que tem levado a novas teorias que objetivam explicar a monetarização de sociedades. Através de um modelo computacional baseado na dinâmica anterior de emergência de hierarquia, buscamos simular este fenômeno através de representações cognitivas de redes econômicas, que representam o reconhecimento ou não da possibilidade de troca entre duas commodities. Formalismos semelhantes já foram utilizados anteriormente, porém sem discutir a influência da configuração social nos resultados. O modelo desenvolvido nesta dissertação foi capaz de empregar o conceito de representações cognitivas e novamente atribuir custos a elas. A nova dinâmica resultante é capaz de analisar como a troca de informações depende da configuração social dos agentes. Os resultados mostram que redes hierárquicas, como estrela e redes livres de escala, induzem uma maior probabilidade de emergência de dinheiro dos que as demais. Os dois modelos sugerem, quando considerados em conjunto, que transições de fase na organização social são importantes para o estudo de emergência de dinheiro, e portanto não podem ser ignoradas em futuras modelagens sociais e econômicas.
Styles APA, Harvard, Vancouver, ISO, etc.
47

Di, Caro Paolo. « Recessions, Recoveries and Regional Resilience : an econometric perspective ». Doctoral thesis, Università di Catania, 2014. http://hdl.handle.net/10761/1540.

Texte intégral
Résumé :
Three chapters constitute the main structure of this contribution. Chapter I reviews selected theoretical and empirical approaches dealing with regional evolution in order to identify recent developments and extensions incorporating spatial econometric techniques. Chapter II investigates transient and permanent asymmetric effects of nation-wide recessions across Italian regions during the last thirty years, proposing the recent resilience framework as a helpful synthesis. Chapter III studies the determinants of the uneven cross-regional behaviour during crises and recoveries, presenting two complementary econometric models, namely a linear vector error correction model (VECM) and a non-linear smooth-transition autoregressive (STAR) specification. Some of the main results obtained here are: regions within the same country differ in terms of both shock absorption and post-recession pattern; the broad impact of a common shock must take into account both temporary and persistent effects; and differences in recessions and recoveries among areas can be explained by elements such as industrial structure, export propensity, human and civic capital, and financial constraints. Moreover, the presence of spatial interdependencies and neighbouring interactions can play a relevant role. Building on some of the results presented here, the desirable next steps are a deeper analysis of the determinants of regional heterogeneity during recessions and recoveries, cross-country comparisons, the development of a more structured theoretical and empirical background, and an assessment of the place-specific impact of countercyclical policies. These and other questions are left for future research.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Busato, Erick Andrade. « Função de acoplamento t-Student assimetrica : modelagem de dependencia assimetrica ». [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/305857.

Texte intégral
Résumé :
Orientador: Luiz Koodi Hotta
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Computação Cientifica
Resumo: A família de distribuições t-Student Assimétrica, construída a partir da mistura em média e variância da distribuição normal multivariada com a distribuição Inversa Gama possui propriedades desejáveis de flexibilidade para as mais diversas formas de assimetria. Essas propriedades são exploradas na construção de funções de acoplamento que possuem dependência assimétrica. Neste trabalho são estudadas as características e propriedades da distribuição t-Student Assimétrica e a construção da respectiva função de acoplamento, fazendo-se uma apresentação de diferentes estruturas de dependência que pode originar, incluindo assimetrias da dependência nas caudas. São apresentados métodos de estimação de parâmetros das funções de acoplamento, com aplicações até a terceira dimensão da cópula. Essa função de acoplamento é utilizada para compor um modelo ARMA-GARCHCópula com marginais de distribuição t-Student Assimétrica, que será ajustado para os logretornos de preços do Petróleo e da Gasolina, e log-retornos do Índice de Óleo AMEX, buscando o melhor ajuste, principalmente, para a dependência nas caudas das distribuições de preços. Esse modelo será comparado, através de medidas de Valor em Risco e AIC, além de outras medidas de bondade de ajuste, com o modelo de Função de Acoplamento t-Student Simétrico.
Abstract: The Skewed t-Student distribution family, constructed from the multivariate normal mean-variance mixture composed with the Inverse-Gamma distribution, has desirable flexibility properties for many forms of distributional asymmetry. These properties are explored by constructing copula functions with asymmetric dependence. In this work the properties and characteristics of the Skewed t-Student distribution and the construction of the corresponding copula function are studied, presenting different dependence structures that the copula function can generate, including tail dependence asymmetry. Parameter estimation methods are presented for the copula, with applications up to the third dimension. This copula function is used to compose an ARMA-GARCH-Copula model with Skewed t-Student marginal distributions, which is fitted to the log-returns of Petroleum and Gasoline prices and the log-returns of the AMEX Oil Index, with emphasis on the tails of the return distributions. The model is compared, by means of VaR (Value at Risk) and Akaike's Information Criterion, along with other goodness-of-fit measures, with models based on the symmetric t-Student copula.
Mestrado
Mestre em Estatística
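For context, one common mean-variance mixture construction of the skewed t-Student distribution mentioned in the abstract above can be written as follows (a standard formulation from the generalized hyperbolic family; the notation is chosen here for illustration and may differ from the thesis):

```latex
X = \mu + \gamma W + \sqrt{W}\, Z,
\qquad Z \sim \mathcal{N}_d(0, \Sigma), \qquad
W \sim \text{Inverse-Gamma}\!\left(\tfrac{\nu}{2}, \tfrac{\nu}{2}\right),
```

with W independent of Z; a nonzero skewness vector gamma produces asymmetric tail dependence when the distribution is used to build a copula, while gamma = 0 recovers the usual symmetric multivariate t.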
Styles APA, Harvard, Vancouver, ISO, etc.
49

Azari, Soufiani Hossein. « Revisiting Random Utility Models ». Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11605.

Texte intégral
Résumé :
This thesis explores extensions of Random Utility Models (RUMs), providing more flexible models and adopting a computational perspective. This includes building new models and understanding their properties such as identifiability and the log concavity of their likelihood functions as well as the development of estimation algorithms.
Engineering and Applied Sciences
Styles APA, Harvard, Vancouver, ISO, etc.
50

Xue, Jiangbo. « A structural forecasting model for the Chinese macroeconomy / ». View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECON%202009%20XUE.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
