Dissertations / Theses on the topic 'Macroeconomics – Forecasting'

Consult the top 50 dissertations / theses for your research on the topic 'Macroeconomics – Forecasting.'

1

Heidari, Hassan. "Essays on macroeconomics and macroeconomic forecasting." University of New South Wales, School of Economics, 2006. http://handle.unsw.edu.au/1959.4/22800.

Abstract:
This dissertation collects three independent essays in the area of macroeconomics and macroeconomic forecasting. The first chapter introduces and motivates the three essays. Chapter 2 highlights a serious problem: Bayesian vector autoregressive (BVAR) models with Litterman's prior cannot be used to obtain accurate forecasts of the driftless variables in a mixed-drift model. Because of the diffuse prior on the constant, BVAR models with Litterman's prior also do not perform well in long-run forecasting of I(1) variables that have no drift. This matters because, in practice, most macro models include both drifting and driftless variables. One solution to this problem is to use the Bewley (1979) transformation to impose zero drift on the driftless variables in a mixed-drift VAR model. A novel feature of this chapter is the use of a g-prior in BVAR models to alleviate the poor estimation of the drift parameters in the traditional BVAR model. Chapter 3 deals with another possible explanation for the poor performance of traditional BVAR models in inflation forecasting. BVARs with Litterman's prior lack robustness to deterministic shifts, a problem exacerbated by the ill-determination of the intercept. Several structural break tests show that Australian inflation has breaks in the mean. Chapter 3 uses the Kalman filter to allow parameters to vary over time. The novelty of this chapter is a modification of the standard BVAR model in which the deterministic components evolve over time. Moreover, this chapter sets aside the assumption of diagonality in the prior variance-covariance matrix. Hence, another novelty is the use of a BVAR model with a modified non-diagonal variance-covariance matrix, similar to the g-prior, in which the deterministic components are the only source of variation, to forecast Australian inflation. Chapter 4 moves on to DSGE models and estimates a partially microfounded small open economy (SOE) New Keynesian model of the Australian economy. In this chapter, the structural parameters of the rest of the world (ROW), the SOE, and the closed economy are estimated by full-information maximum likelihood, using Australian data for the small economy and US data for the ROW.
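As a rough illustration of the mechanism Chapter 2 criticizes, the sketch below estimates a bivariate VAR(1) under a simplified Minnesota-style (Litterman) prior: lag coefficients are shrunk toward a random walk while the intercept gets a diffuse prior, so the drift of a driftless series is estimated essentially without regularization. The hyperparameters, the unit error variance, and the lack of an own-versus-cross-lag distinction are illustrative assumptions, not the thesis's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 200, 2
y = np.zeros((T, n))
for t in range(1, T):
    y[t, 0] = 0.3 + y[t - 1, 0] + rng.normal()   # random walk with drift
    y[t, 1] = y[t - 1, 1] + rng.normal()         # driftless random walk

Y = y[1:]                                        # left-hand side, (T-1) x n
X = np.hstack([np.ones((T - 1, 1)), y[:-1]])     # intercept + first lag

lam = 0.2                                        # overall tightness (assumed)
prior_mean = np.vstack([np.zeros((1, n)), np.eye(n)])  # random-walk centering
prior_var = np.diag([1e6] + [lam ** 2] * n)      # diffuse prior on the constant

# Conjugate posterior mean, equation by equation, with unit error variance.
Om_inv = np.linalg.inv(prior_var)
B_post = np.linalg.solve(X.T @ X + Om_inv, X.T @ Y + Om_inv @ prior_mean)
print("posterior drift estimates:", B_post[0])   # drift of series 2 not shrunk to 0
```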
2

Liu, Dandan. "Essays on macroeconomics and forecasting." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4271.

Abstract:
This dissertation consists of three essays. Chapter II uses the method of structural factor analysis to study the effects of monetary policy on key macroeconomic variables in a data-rich environment. I propose two structural factor models: the structural factor augmented vector autoregressive (SFAVAR) model and the structural factor vector autoregressive (SFVAR) model. Compared to the traditional vector autoregression (VAR) model, both models incorporate far more information from hundreds of data series, series that can be and are monitored by the central bank. Moreover, the factors used are structurally meaningful, a feature that adds to the understanding of the "black box" of the monetary transmission mechanism. Both models generate qualitatively reasonable impulse response functions. Using the SFVAR model, both the "price puzzle" and the "liquidity puzzle" are eliminated. Chapter III employs the method of structural factor analysis to conduct a forecasting exercise in a data-rich environment. I simulate out-of-sample real-time forecasting using a structural dynamic factor forecasting model and its variations. I use several structural factors to summarize the information from a large set of candidate explanatory variables. Compared to the models of Stock and Watson (2002), those proposed in this chapter further allow me to select the factors structurally for each variable to be forecast. I find advantages to using the structural dynamic factor forecasting models over alternatives that include the univariate autoregression (AR) model, the VAR model and Stock and Watson's (2002) models, especially when forecasting real variables. In Chapter IV, we measure U.S. technology shocks by implementing a dual approach, which is based on more reliable price data rather than aggregate quantity data. By doing so, we find the relative volatility of technology shocks and the correlation between output fluctuations and technology shocks to be much smaller than those revealed in most real-business-cycle (RBC) studies. Our results support the findings of Burnside, Eichenbaum and Rebelo (1996), who showed that the correlation between technology shocks and output is exaggerated in the RBC literature. This suggests that one should examine other sources of fluctuations for a better understanding of business cycle phenomena.
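The factor-augmented VAR idea underlying both proposed models can be sketched with plain principal components; the structural rotation of the factors, which is the chapter's contribution, is not reproduced here. The simulated panel, the toy "policy rate" and the two-lag VAR are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T, N, r = 200, 120, 3
F = rng.normal(size=(T, r)).cumsum(axis=0) * 0.1       # latent factors
Lam = rng.normal(size=(N, r))
panel = F @ Lam.T + rng.normal(size=(T, N))             # large macro panel
rate = 0.8 * np.roll(F[:, 0], 1) + 0.2 * rng.normal(size=T)  # "policy rate"

Z = (panel - panel.mean(0)) / panel.std(0)              # standardize the panel
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :r] * s[:r]                              # principal-component factors

data = np.column_stack([factors, rate])                 # factors + observed rate
res = VAR(data).fit(maxlags=2)
irf = res.irf(12)                                       # impulse responses
print(irf.irfs.shape)                                   # (13, 4, 4)
```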
3

De Antonio Liedo, David. "Structural models for macroeconomics and forecasting." Doctoral thesis, Université Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210142.

Abstract:
This thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.

Chapter 1, entitled "A Model for Real-Time Data Assessment with an Application to GDP Growth Rates", provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. The model thus encompasses the two polar views of the publication process of statistical agencies: noise versus news. Most previous studies of data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the available statistical tests do not formulate the two extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008): it is possible to reject or accept both of them simultaneously. The model for the data publication process (DPP) presented here, in contrast, allows for the simultaneous presence of both noise and news. While the "regression approach" followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it cannot quantify it, as our model does.

The second and third chapters acknowledge the possibility that macroeconomic data are measured with error, but the approach followed to model the mismeasurement is highly stylized and does not capture the complexity of the revision process described in the first chapter.

Chapter 2, entitled "Revisiting the Success of the RBC model", proposes the use of dynamic factor models as an alternative to VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture the weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced-form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful for forecasting than the strong behavioral assumptions imposed by the microfoundations of the model economy.

The last chapter, "What are Shocks Capturing in DSGE modeling", contributes to current debates on the use and interpretation of large DSGE models. The recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003, and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005; Uhlig, 2004). In this chapter, we propose to reconcile the two views by considering an alternative DSGE estimation approach which explicitly models the statistical agency, along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification.
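Chapter 1's point of departure, the classical noise-versus-news regressions, can be stated in a few lines: under the news hypothesis the revision is orthogonal to the preliminary figure, while under the noise hypothesis it is orthogonal to the final figure. A minimal simulated illustration (variable names and calibration are mine, not the thesis's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 300
# Pure news: the preliminary figure is a rational forecast of the final one.
prelim_news = rng.normal(0.5, 1.0, T)
final_news = prelim_news + rng.normal(0, 0.5, T)
# Pure noise: the preliminary figure is the final one plus measurement error.
final_noise = rng.normal(0.5, 1.0, T)
prelim_noise = final_noise + rng.normal(0, 0.5, T)

for name, p, f in [("news", prelim_news, final_news),
                   ("noise", prelim_noise, final_noise)]:
    r = f - p                                            # the revision
    news_test = sm.OLS(r, sm.add_constant(p)).fit()      # slope ~ 0 under news
    noise_test = sm.OLS(r, sm.add_constant(f)).fit()     # slope ~ 0 under noise
    print(name, round(news_test.params[1], 2), round(noise_test.params[1], 2))
```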
4

Schwarzmüller, Tim. "Essays in Macroeconomics and Forecasting." Kiel: Universitätsbibliothek Kiel, 2016. http://d-nb.info/1102204021/34.

5

Brinca, Pedro Soares. "Essays in Quantitative Macroeconomics." Doctoral thesis, Stockholms universitet, Nationalekonomiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-92861.

Abstract:
In the first essay, Distortions in the Neoclassical Growth Model: A Cross Country Analysis, I show that shocks that express themselves as total factor productivity and labor income taxes are comparably more synchronized than shocks that resemble distortions to the ability to allocate resources across time and states of the world. These two shocks are also the most important to model. Lastly, I document the importance of international channels of transmission for the shocks, given that they are spatially correlated and that international trade variables, such as trade openness, correlate particularly well with them. The second essay is called Monetary Business Cycle Accounting for Sweden. Since the analysis focuses on one country, I can extend the prototype economy to include a nominal interest rate setting rule and government bonds. As in the previous essay, distortions to the labor-leisure condition and total factor productivity are the most relevant margins to be modeled, now joined by deviations from the nominal interest rate setting rule. Also, the distortions do not exhibit a structural break during the Great Recession, but they do during the 1990s. Researchers aiming to model Swedish business cycles must take into account the structural changes the Swedish economy went through in the 1990s, though not during the last recession. In the third essay, Consumer Confidence and Consumption Spending: Evidence for the United States and the Euro Area, we show that the consumer confidence index can, in certain circumstances, be a good predictor of consumption. In particular, out-of-sample evidence shows that the contribution of confidence in explaining consumption expenditures increases when household survey indicators feature large changes, so that confidence indicators can have increasing predictive power during such episodes. Moreover, there is some evidence of a confidence channel in the international transmission of shocks, as U.S. confidence indices help predict consumer sentiment in the euro area.
6

Galimberti, Jaqueson Kingeski. "Adaptive learning for applied macroeconomics." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/adaptive-learning-for-applied-macroeconomics(cde517d7-d552-4a53-a442-c584262c3a8f).html.

Abstract:
The literature on bounded rationality and learning in macroeconomics has often used recursive algorithms to depict the evolution of agents' beliefs over time. In this thesis we assess this practice from an applied perspective, focusing on the use of such algorithms for the computation of forecasts of macroeconomic variables. Our analysis develops around three issues we find to have been previously neglected in the literature: (i) the initialization of the learning algorithms; (ii) the determination and calibration of the learning gains, which are key parameters of the algorithms' specifications; and (iii) the choice of a representative learning mechanism. In order to approach these issues we establish an estimation framework under which we unify the two main algorithms considered in this literature, namely the least squares and the stochastic gradient algorithms. We then propose an evaluation framework that mimics the real-time process of expectation formation through learning-to-forecast exercises. To analyze the quality of the forecasts associated with the learning approach, we evaluate their forecasting accuracy and their resemblance to surveys, the latter taken as a proxy for agents' expectations. Although we take these two criteria as mutually desirable, it is not clear whether they are compatible with each other: whilst forecasting accuracy represents the goal of optimizing agents, resemblance to surveys is indicative of actual agents' behavior. We carry out these exercises using real-time quarterly data on US inflation and output growth covering a broad post-WWII period. Our main contribution is to show that a proper assessment of the adaptive learning approach requires going beyond the previous views in the literature on these issues. For the initialization of the learning algorithms, we argue that the initial estimates need to be coherent with the ongoing learning process that was already in place at the beginning of our sample of data. We find that the previous initialization methods in the literature fail this requirement, and we propose a new smoothing-based method that is not prone to this critique. Regarding the learning gains, we distinguish between two possible rationales for their determination: as a choice of agents, or as a primitive parameter of agents' learning-to-forecast behavior. Our results provide strong evidence in favor of the gain-as-a-primitive approach, hence favoring the use of survey data for calibration. On the third issue, the choice of a representative algorithm, we challenge the view that learning should be represented by only one of the above algorithms; on the basis of our two evaluation criteria, our results suggest that using a single algorithm represents a misspecification. This motivates us to propose hybrid forms of the least squares and stochastic gradient algorithms, for which we find favorable evidence as representations of how agents learn. Finally, our analysis concludes with an optimistic assessment of the plausibility of adaptive learning, though conditional on an appropriate treatment of the above issues. We hope our results provide some guidance in that respect.
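The two recursive algorithms unified in the thesis are short enough to state in code. A minimal sketch with a scalar forecasting rule: constant-gain least squares (LS) normalizes the update by a running second-moment estimate, while stochastic gradient (SG) does not. The gain value, the AR(1) rule and the simulated break are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T, gain = 300, 0.05
y = np.zeros(T)
for t in range(1, T):                       # data: AR(1) with a mid-sample break
    rho = 0.5 if t < 150 else 0.8
    y[t] = rho * y[t - 1] + rng.normal()

theta_ls, theta_sg, R = 0.0, 0.0, 1.0       # beliefs and LS second moment
for t in range(1, T):
    x = y[t - 1]
    err_ls = y[t] - theta_ls * x
    err_sg = y[t] - theta_sg * x
    R += gain * (x * x - R)                 # R_t = R_{t-1} + g (x_t^2 - R_{t-1})
    theta_ls += gain * x * err_ls / R       # constant-gain LS update
    theta_sg += gain * x * err_sg           # SG update (no normalization)

print(theta_ls, theta_sg)                   # both track the post-break rho
```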
7

Arora, Siddharth. "Time series forecasting with applications in macroeconomics and energy." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:c763b735-e4fa-4466-9c1f-c3f6daf04a67.

Abstract:
The aim of this study is to develop novel forecasting methodologies. The applications of our proposed models lie in two different areas: macroeconomics and energy. Though we consider two very different applications, the common underlying theme of this thesis is to develop novel methodologies that are not only accurate but also parsimonious. For macroeconomic time series, we focus on generating forecasts for the US Gross National Product (GNP). The contribution of our study on macroeconomic forecasting lies in proposing a novel nonlinear and nonparametric method, called the weighted random analogue prediction (WRAP) method. The out-of-sample forecasting ability of WRAP is evaluated by employing a range of different performance scores, which measure its accuracy in generating both point and density forecasts. We show that WRAP outperforms some of the most commonly used models for forecasting the GNP time series. For energy, we focus on two different applications: (1) generating accurate short-term forecasts of the total electricity demand (load) for Great Britain; and (2) modelling Irish electricity smart meter data (consumption) for both residential consumers and small and medium-sized enterprises (SMEs), using methods based on kernel density (KD) and conditional kernel density (CKD) estimation. To model load, we propose methods based on a commonly used statistical dimension reduction technique called singular value decomposition (SVD). Specifically, we propose two novel methods, namely discount weighted (DW) intraday and DW intraweek SVD-based exponential smoothing. We show that the proposed methods are competitive with some of the most commonly used models for load forecasting, and also lead to a substantial reduction in the dimension of the model. The load time series exhibits a prominent intraday, intraweek and intrayear seasonality. However, most existing studies accommodate only a 'double seasonality' when modelling short-term load, focusing on the intraday and intraweek seasonal effects. The methods considered in this study accommodate the 'triple seasonality' in load, capturing not only intraday and intraweek seasonal cycles but also intrayear seasonality. For modelling load, we also propose a novel rule-based approach, with emphasis on special days. The load observed on special days, e.g. public holidays, is substantially lower than on normal working days. Special day effects have often been ignored during the modelling process, which leads to large forecast errors on special days and on normal working days that lie in the vicinity of special days. The contribution of this study lies in adapting some of the most commonly used seasonal methods to model load for both normal and special days in a coherent and unified framework, using a rule-based approach. We show that the post-sample errors across special days for the rule-based methods are less than half those of their original counterparts that ignore special day effects. For modelling electricity smart meter data, we investigate a range of different methods based on KD and CKD estimation. Over the coming decade, electricity smart meters are scheduled to replace conventional electronic meters in both the US and Europe. Future estimates of consumption can help the consumer identify and reduce excess consumption, while such estimates can help the supplier devise innovative tariff strategies.
To the best of our knowledge, no existing studies focus on generating density forecasts of electricity consumption from smart meter data. In this study, we evaluate the density, quantile and point forecast accuracy of different methods across one thousand consumption time series, recorded from both residential consumers and SMEs. We show that the KD and CKD methods accommodate the seasonality in consumption and correctly distinguish weekdays from weekends. For each application, a comprehensive empirical comparison of existing and proposed methods was undertaken using multiple performance scores. The results show strong potential for the models proposed in this thesis.
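The SVD-based dimension reduction at the core of the proposed load methods can be sketched as follows: stack the daily load curves into a matrix, keep a few singular components, and exponentially smooth the daily scores. The simulated load, the number of components and the smoothing parameter are assumptions; the thesis's discount-weighted intraday/intraweek schemes are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
days, half_hours = 200, 48
base = 30 + 10 * np.sin(np.linspace(0, 2 * np.pi, half_hours))  # intraday shape
load = base + rng.normal(0, 1.5, (days, half_hours))            # day x period

mean_curve = load.mean(0)
U, s, Vt = np.linalg.svd(load - mean_curve, full_matrices=False)
k = 3                                          # keep a few components
scores = U[:, :k] * s[:k]                      # daily scores, days x k

alpha = 0.2                                    # smoothing parameter (assumed)
level = scores[0].copy()
for t in range(1, days):                       # simple exponential smoothing
    level = alpha * scores[t] + (1 - alpha) * level

forecast_curve = mean_curve + level @ Vt[:k]   # next day's load profile
print(forecast_curve.shape)                    # (48,)
```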
8

Ward, Felix. "Essays in International Macroeconomics and Financial Crisis Forecasting." Bonn: Universitäts- und Landesbibliothek Bonn, 2018. http://d-nb.info/1167856899/34.

9

Xue, Jiangbo. "A structural forecasting model for the Chinese macroeconomy." Hong Kong University of Science and Technology, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECON%202009%20XUE.

10

Ricci, Lorenzo. "Essays on tail risk in macroeconomics and finance: measurement and forecasting." Doctoral thesis, Université Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/242122.

Abstract:
This thesis is composed of three chapters that propose novel approaches to tail risk in financial markets and to forecasting in finance and macroeconomics. The first part focuses on financial market correlations and introduces a simple measure of tail correlation, TailCoR. The second contribution addresses the identification of non-normal structural shocks in vector autoregressions, a problem common in finance. The third part belongs to the vast literature on predicting economic growth; the problem is tackled using a Bayesian dynamic factor model to predict Norwegian GDP.

Chapter I: TailCoR. The first chapter introduces a simple measure of tail correlation, TailCoR, which disentangles linear and non-linear correlation. The aim is to capture all features of financial market co-movement when extreme events (i.e. financial crises) occur. Indeed, tail correlations may arise because asset prices are either linearly correlated (i.e. the Pearson correlations are different from zero) or non-linearly correlated, meaning that asset prices are dependent at the tails of the distribution. Since it is based on quantiles, TailCoR has three main advantages: i) it is not based on asymptotic arguments; ii) it is very general, as it applies with no specific distributional assumption; and iii) it is simple to use. We show that TailCoR also disentangles easily between linear and non-linear correlations. The measure has been successfully tested on simulated data, and several extensions useful for practitioners, such as downside and upside tail correlations, are presented. In our empirical analysis, we apply this measure to eight major US banks for the period 2003-2012. For comparison purposes, we compute the upper and lower exceedance correlations and the parametric and non-parametric tail dependence coefficients. On the overall sample, results show that both the linear and non-linear contributions are relevant. The results suggest that co-movement increases during the financial crisis because of both the linear and non-linear correlations. Furthermore, the increase of TailCoR at the end of 2012 is mostly driven by the non-linearity, reflecting the risks of tail events and their spillovers associated with the European sovereign debt crisis.

Chapter II: On the identification of non-normal shocks in structural VARs. The second chapter deals with the structural interpretation of the VAR using the statistical properties of the innovation terms. In general, financial markets are characterized by non-normal shocks. Under non-Gaussianity, we introduce a methodology based on the reduction of tail dependency to identify the non-normal structural shocks. Borrowing from statistics, the methodology can be summarized in two main steps: i) decorrelate the estimated residuals; and ii) rotate the uncorrelated residuals to obtain a vector of independent shocks, using a tail dependency matrix. We do not label the shocks a priori, but post-estimation, on the basis of economic judgement. Furthermore, we show through a Monte Carlo study how our approach allows all the shocks to be identified. In some cases, the method turns out to be more effective when tail events are plentiful.
Therefore, the frequency of the series and the degree of non-normality are relevant for achieving accurate identification. Finally, we apply our method to two different VARs, both estimated on US data: i) a monthly trivariate model which studies the effects of oil market shocks, and ii) a VAR that focuses on the interaction between monetary policy and the stock market. In the first case, we validate the results obtained in the economic literature. In the second case, we cannot confirm the validity of an identification scheme, based on a combination of short- and long-run restrictions, that is used in part of the empirical literature.

Chapter III: Nowcasting Norway. The third chapter consists of predictions of Norwegian Mainland GDP. Policy institutions have to set their policies without knowledge of current economic conditions. We estimate a Bayesian dynamic factor model (BDFM) on a panel of macroeconomic variables (all followed by market operators) from 1990 until 2011. First, the BDFM is an extension of the dynamic factor model (DFM) to the Bayesian framework. The difference is that, compared with a DFM, the BDFM has richer dynamics, introduced in order to accommodate the dynamic heterogeneity of different variables. However, the richer dynamics require estimating a large number of parameters, which can easily lead to volatile predictions due to estimation uncertainty. This is why the model is estimated with Bayesian methods, which, by shrinking the factor model toward a simple naive prior model, are able to limit estimation uncertainty. The second aspect is the use of a small dataset. A common feature of the DFM literature is the use of large datasets; however, a strand of this literature has shown that, for the purpose of forecasting, DFMs can be estimated on a small number of appropriately selected variables. Finally, through a pseudo real-time exercise, we show that the BDFM performs well in terms of both point and density forecasts. Results indicate that our model outperforms standard univariate benchmark models, that it performs as well as the Bloomberg Survey, and that it outperforms the predictions published by the Norges Bank in its monetary policy report.
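A toy quantile-based tail co-movement measure in the spirit of TailCoR is sketched below. This is not the thesis's exact definition, only an illustration of the idea that quantiles of a 45-degree projection capture tail dependence without distributional assumptions; the quantile levels and simulated data are assumptions.

```python
import numpy as np

def tail_comovement(x, y, tau=0.95):
    # Standardize each series robustly by median and interquartile range.
    def std(u):
        return (u - np.median(u)) / (np.quantile(u, 0.75) - np.quantile(u, 0.25))
    z = (std(x) + std(y)) / np.sqrt(2)          # 45-degree projection
    # Far-tail interquantile range of the projection, relative to its IQR.
    return (np.quantile(z, tau) - np.quantile(z, 1 - tau)) / \
           (np.quantile(z, 0.75) - np.quantile(z, 0.25))

rng = np.random.default_rng(5)
cov = [[1, 0.5], [0.5, 1]]
gauss = rng.multivariate_normal([0, 0], cov, 2000)
t_fat = rng.standard_t(3, (2000, 2)) @ np.linalg.cholesky(cov).T
print(tail_comovement(*gauss.T), tail_comovement(*t_fat.T))  # fat tails: larger
```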
11

Steinbach, Max Rudibert. "Essays on dynamic macroeconomics." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86196.

Abstract:
In the first essay of this thesis, a medium-scale DSGE model is developed and estimated for the South African economy. When used for forecasting, the model is found to outperform private sector economists in forecasting CPI inflation, GDP growth and the policy rate over certain horizons. In the second essay, the benchmark DSGE model is extended to include the yield on South African 10-year government bonds. The model is then used to decompose the 10-year yield spread into (1) the structural shocks that contributed to its evolution during the inflation-targeting regime of the South African Reserve Bank, and (2) an expected yield and a term premium. In addition, it is found that changes in the South African term premium may predict future real economic activity. Finally, the need for DSGE models to take account of financial frictions became apparent during the recent global financial crisis. The final essay therefore incorporates a stylised banking sector into the benchmark DSGE model described above, and the optimal response of the South African Reserve Bank to financial shocks is analysed within the context of this structural model.
12

Feng, Ning. "Essays on business cycles and macroeconomic forecasting." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/279.

Abstract:
This dissertation consists of two essays. The first essay develops a quantitative theory for a small open economy dynamic stochastic general equilibrium (DSGE) model with a housing sector, allowing for both contemporaneous and news shocks. The second essay is an empirical study of macroeconomic forecasting using both structural and non-structural models. In the first essay, we develop a DSGE model with a housing sector, incorporating both contemporaneous and news shocks to domestic and external fundamentals, to explore which shocks to economic fundamentals drive housing market dynamics in a small open economy, and to what extent. The model is estimated by Bayesian methods, using data from Hong Kong. The quantitative results show that external shocks and news shocks play a significant role in this market. The contemporaneous shock to foreign housing preference, the contemporaneous shock to the terms of trade, and news shocks to technology in the consumption goods sector each explain one-third of the variance of housing prices. The terms-of-trade contemporaneous shock and the consumption-technology news shocks also contribute 36% and 59%, respectively, to the variance of housing investment. The simulation results enable policy makers to identify the key driving forces behind housing market dynamics and the interaction between the housing market and the macroeconomy in Hong Kong. In the second essay, we compare the forecasting performance of structural and non-structural models for a small open economy. The structural model refers to the small open economy DSGE model with a housing sector from the first essay. In addition, we examine various non-structural models, including both Bayesian and classical time-series methods, in our forecasting exercises. In some models we also include information from a large quarterly dataset, using two approaches to capture the influence of fundamentals: extracting common factors by principal component analysis, as in the dynamic factor model (DFM), factor-augmented vector autoregression (FAVAR) and Bayesian FAVAR (BFAVAR); or Bayesian shrinkage, as in a large-scale vector autoregression (BVAR). In this study, we forecast five key macroeconomic variables, namely output, consumption, employment, housing price inflation and CPI-based inflation, using quarterly data. The results, based on the mean absolute error (MAE) and root mean squared error (RMSE) of one- to eight-quarter-ahead out-of-sample forecasts, indicate that the non-structural models outperform the structural model for all variables of interest across all horizons. Among the non-structural models, the small-scale BVAR performs better at short forecasting horizons, although the DFM shows similar predictive ability. As the forecasting horizon grows, the DFM tends to improve over the other models and is better suited to forecasting key macroeconomic variables at longer horizons.
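The essay's evaluation criteria are easy to state in code. A minimal sketch (the forecast series below are simulated placeholders, not the essay's output):

```python
import numpy as np

def mae_rmse(actual, forecast):
    e = np.asarray(actual) - np.asarray(forecast)
    return np.abs(e).mean(), np.sqrt((e ** 2).mean())

rng = np.random.default_rng(6)
actual = rng.normal(2.0, 1.0, 40)                   # e.g. quarterly growth
forecasts = {"DSGE": actual + rng.normal(0, 1.2, 40),
             "BVAR": actual + rng.normal(0, 0.8, 40)}
for model, f in forecasts.items():
    mae, rmse = mae_rmse(actual, f)
    print(f"{model}: MAE={mae:.2f} RMSE={rmse:.2f}")
```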
13

Stavrakeva, Vania Atanassova. "Three Essays in Macroeconomics and International Finance." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10881.

Abstract:
This dissertation includes three chapters. The first chapter studies whether countries with different fiscal capacity should optimally have different ex-ante minimum bank capital requirements. In an environment with endogenously incomplete markets and overinvestment because of moral hazard and pecuniary externalities, I show that countries with larger fiscal capacity should have lower minimum ex-ante bank capital requirements. I also show that, in addition to the minimum capital requirement, regulators in countries with a concentrated financial sector and large fiscal capacity (which are also countries with strong moral hazard) should impose a limit on the amount of liquidity pledged by financial institutions in a crisis state (for example, restrict the amount of put options/CDS contracts sold by financial institutions). The second chapter studies the welfare implications of a concentrated, imperfectly competitive banking sector facing a bank net worth constraint in a small open economy (SOE) environment. There are two standard sources of inefficiency: pecuniary externalities, which lead to overinvestment, and a standard monopolistic underinvestment force. I show that the optimal policy instruments include subsidies on firm borrowing costs in certain periods and capital account controls in others, which is a good proxy for the behavior of emerging markets. For every country, there exists a financial sector with a particular banking sector concentration for which the inefficiencies offset each other and no government intervention is required in some periods. Furthermore, this chapter documents a novel theoretical result: the interaction between future binding bank net worth constraints and dynamic (future) underinvestment can lead to ex-ante overinvestment even in economies with a single monopolistic bank, where there are no pecuniary externalities. The third and last chapter, coauthored with Kenneth Rogoff, evaluates a new class of exchange rate forecasting studies which claim that structural models are getting closer to being able to forecast exchange rates at short horizons. We argue that misinterpretation of some new out-of-sample tests for nested models, over-reliance on asymptotic test statistics, and failure to sufficiently check robustness to alternative time windows have led many studies to overstate even the relatively thin positive results that have been found.
14

Caruso, Alberto. "Essays on Empirical Macroeconomics." Doctoral thesis, Université Libre de Bruxelles, 2020. https://dipot.ulb.ac.be/dspace/bitstream/2013/308164/4/TOC.pdf.

Abstract:
The thesis contains four essays, covering topics in the fields of real-time macroeconometrics, forecasting and applied macroeconomics. In the first two chapters, I use recent techniques developed in the "nowcasting" literature to analyse and interpret the macroeconomic news flow: either to assess current macroeconomic conditions, showing the importance of foreign indicators when dealing with small open economies, or to link macroeconomic news to asset prices, through a model that helps interpret macroeconomic data and explains the linkages between macro variables and financial indicators. In the third chapter, I analyse the link between macroeconomic data in real time and the yield curve of interest rates, constructing a forecasting model that takes into account the peculiar characteristics of the macroeconomic data flow. In the last chapter, I present a Bayesian vector autoregression model built to analyse the last two crises in the Eurozone (2008-09 and 2011-12), identifying their unique characteristics with respect to historical regularities, an issue of great importance from a policy perspective.
15

Monti, Francesca. "Combining structural and reduced-form models for macroeconomic forecasting and policy analysis." Doctoral thesis, Université Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209970.

Abstract:
Can we fruitfully use the same macroeconomic model to forecast and to perform policy analysis? There is a tension between a model's ability to forecast accurately and its ability to tell a theoretically consistent story. The aim of this dissertation is to propose ways to ease this tension by combining structural and reduced-form models, so as to obtain models that can effectively do both.
16

Evholt, David, and Oscar Larsson. "Generative Adversarial Networks and Natural Language Processing for Macroeconomic Forecasting." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273422.

Abstract:
Macroeconomic forecasting is a classic problem, today most often tackled with time series analysis. Few attempts have been made using machine learning methods, and even fewer incorporating unconventional data such as that from social media. In this thesis, a Generative Adversarial Network (GAN) is used to predict U.S. unemployment, beating the ARIMA benchmark on all horizons. Furthermore, attempts at using Twitter data and the Natural Language Processing (NLP) model DistilBERT are made. While these attempts do not beat the benchmark, they show promising predictive power. The models are also tested at predicting the U.S. stock index S&P 500. For these models, the Twitter data does improve the accuracy, showing the potential of social media data when predicting a more erratic index with less seasonality, one that is more responsive to current trends in public discourse. The results also show that Twitter data can be used to predict trends in both unemployment and the S&P 500 index. This sets the stage for further research into NLP-GAN models for macroeconomic predictions using social media data.
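A sketch of the kind of ARIMA benchmark the GAN is judged against follows; the (1,1,1) order and the toy unemployment series are assumptions, not the thesis's exact benchmark specification.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
unemployment = 5 + np.cumsum(rng.normal(0, 0.1, 240))   # monthly rate (toy)

train, test = unemployment[:-12], unemployment[-12:]
fit = ARIMA(train, order=(1, 1, 1)).fit()
pred = fit.forecast(steps=12)                            # 12-month-ahead path
rmse = np.sqrt(np.mean((pred - test) ** 2))
print(f"12-month-ahead RMSE: {rmse:.3f}")
```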
17

Liebermann, Joëlle. "Essays in real-time forecasting." Doctoral thesis, Université Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209644.

Abstract:
This thesis contains three essays in the field of real-time econometrics, and more particularly forecasting.

The issue of using data as available in real time to forecasters, policymakers or financial markets is an important one which has only recently been taken on board in the empirical literature. Data available and used in real time are preliminary and differ from ex-post revised data, and given that data revisions may be quite substantial, using the latest available data instead of real-time data can substantially affect empirical findings (see, among others, Croushore's (2011) survey). Furthermore, as variables are released on different dates and with varying degrees of publication lag, datasets are characterized by the so-called "ragged-edge" structure problem; in order not to disregard timely information, special econometric frameworks, such as the one developed by Giannone, Reichlin and Small (2008), must be used.

The first chapter, "The impact of macroeconomic news on bond yields: (in)stabilities over time and relative importance", studies the reaction of U.S. Treasury bond yields to real-time market-based news in the daily flow of macroeconomic releases, which provide most of the relevant information on their fundamentals, i.e. the state of the economy and inflation. We find that yields react systematically to a set of news consisting of the soft data, which have very short publication lags, and the most timely hard data, with the employment report being the most important release. However, sub-sample evidence reveals parameter instability in the absolute and relative size of the yields' response to news, as well as in its significance. In particular, the often-cited dominance of the employment report has been evolving over time, as the size of the yields' reaction to it was steadily increasing. Moreover, over the recent crisis period there was an overall switch in the relative importance of soft and hard data compared to the pre-crisis period, with the latter becoming more important even if less timely, and the scope of hard data to which markets react has increased and become more balanced, being less concentrated on the employment report. Markets have become more reactive to news over the recent crisis period, particularly to hard data. This is a consequence of the fact that in periods of high uncertainty (a bad state), markets are starved for information and attach a higher value to the marginal information content of these news releases.

The second and third chapters focus on the real-time ability of models to nowcast and forecast in a data-rich environment. They use an econometric framework that can deal with large panels that have a "ragged-edge" structure, and to evaluate the models in real time we constructed a database of vintages for US variables reproducing the exact information that was available to a real-time forecaster.

The second chapter, "Real-time nowcasting of GDP: a factor model versus professional forecasters", performs a fully real-time nowcasting (forecasting) exercise of US real GDP growth using the dynamic factor model (DFM) framework of Giannone, Reichlin and Small (2008), henceforth GRS, which can handle the large unbalanced datasets available in real time. We track the daily evolution of the model's nowcasting performance throughout the current and next quarter. Similarly to GRS's pseudo real-time results, we find that the precision of the nowcasts increases with information releases. Moreover, the Survey of Professional Forecasters does not carry additional information with respect to the model, suggesting that the often-cited superiority of the former, attributable to judgment, is weak over our sample. As one moves forward along the real-time data flow, the continuous updating of the model provides a more precise estimate of current-quarter GDP growth, and the Survey of Professional Forecasters becomes stale. These results are robust to the recent recession period.

The last chapter, "Real-time forecasting in a data-rich environment", evaluates the ability of different models to forecast key real and nominal U.S. monthly macroeconomic variables in a data-rich environment and from the perspective of a real-time forecaster. Among the approaches used to forecast in a data-rich environment, we use pooling of bi-variate forecasts, which is an indirect way to exploit a large cross-section, and direct pooling of information using high-dimensional models (a DFM and a Bayesian VAR). Furthermore, forecast combination schemes are used to overcome the choice of model specification faced by the practitioner (e.g. which criteria to use to select the parametrization of the model), as we seek evidence regarding the performance of a model that is robust across specifications and combination schemes. Our findings show that predictability of the real variables is confined to the recent recession/crisis period. This is in line with the findings of D'Agostino and Giannone (2012) over an earlier period: gains in relative performance of models using large datasets over univariate models are driven by downturn periods, which are characterized by higher comovements. These results are robust to the combination schemes or models used. A point worth mentioning is that for nowcasting GDP, exploiting cross-sectional information along the real-time data flow also helps over the end of the great moderation period. Since GDP is a quarterly aggregate proxying the state of the economy, monthly variables carry information content for it. But similarly to the findings for the monthly variables, predictability, as measured by the gains relative to the naive random walk model, is higher during the crisis/recession period than during tranquil times. Regarding inflation, results are stable across time, but predictability is mainly found at nowcasting and forecasting one month ahead, with the BVAR standing out at nowcasting. The results show that the forecasting gains at these short horizons stem mainly from exploiting timely information. They also show that direct pooling of information using a high-dimensional model (DFM or BVAR), which takes into account the cross-correlation between the variables and efficiently deals with the "ragged-edge" structure of the dataset, yields more accurate forecasts than the indirect pooling of bi-variate forecasts/models.
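The "indirect pooling" idea of the last chapter, one small bivariate model per predictor followed by a combination of the resulting forecasts, can be sketched as follows. The equal-weight combination, the lag-1 specification and the simulated panel are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
T, N = 200, 10
X = rng.normal(size=(T, N))                          # panel of monthly predictors
y = 0.4 * np.roll(X[:, 0], 1) + rng.normal(size=T)   # target variable

forecasts = []
for i in range(N):                        # one bivariate model per predictor
    Z = sm.add_constant(np.column_stack([y[:-1], X[:-1, i]]))
    ols = sm.OLS(y[1:], Z).fit()
    forecasts.append(ols.params @ [1.0, y[-1], X[-1, i]])  # one-step forecast

pooled = np.mean(forecasts)               # equal-weight combination
print(f"pooled one-step-ahead forecast: {pooled:.3f}")
```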
18

Marsilli, Clément. "Mixed-Frequency Modeling and Economic Forecasting." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2023/document.

Abstract:
The economic downturns and recessions that many countries experienced in the wake of the global financial crisis demonstrate how important but difficult it is to forecast macroeconomic fluctuations, especially within a short time horizon. This doctoral dissertation studies, analyses and develops models for economic growth forecasting. The set of information coming from economic activity is vast and disparate. In fact, time series coming from the real and financial economy do not have the same characteristics, in terms of both sampling frequency and predictive power. Therefore, short-term forecasting models should allow the use of mixed-frequency data while remaining parsimonious. The first chapter is dedicated to time series econometrics in a mixed-frequency framework. The second chapter contains two empirical works that shed light on macro-financial linkages by assessing the leading role of daily financial volatility in macroeconomic prediction during the Great Recession. The third chapter extends the mixed-frequency model to a Bayesian framework and presents an empirical study using a stochastic-volatility-augmented mixed-data-sampling model. The fourth chapter focuses on variable selection techniques in mixed-frequency models for short-term forecasting. We address the selection issue by developing mixed-frequency-based dimension reduction techniques in a cross-validation procedure that allows automatic in-sample selection based on recent forecasting performance. Our model succeeds in constructing an objective variable selection with broad applicability.
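A minimal mixed-data-sampling (MIDAS) regression in the spirit of this work: daily observations enter a quarterly regression through a parsimonious exponential Almon weight function, estimated by nonlinear least squares. The series, the lag length and the weighting scheme are illustrative assumptions, not the thesis's specification.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(9)
n_q, n_days = 80, 60                       # quarters; daily lags per quarter
daily = rng.normal(size=(n_q, n_days))     # daily indicator, newest lag first

def almon(theta1, theta2, K=n_days):
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()                     # weights sum to one

true_w = almon(0.05, -0.005)               # hump-shaped "true" weights
gdp = 0.5 + 2.0 * daily @ true_w + rng.normal(0, 0.3, n_q)

def resid(p):                              # p = (intercept, slope, theta1, theta2)
    return gdp - (p[0] + p[1] * daily @ almon(p[2], p[3]))

fit = least_squares(resid, x0=[0.0, 1.0, 0.0, 0.0])
print(fit.x[:2])                           # intercept and slope estimates
```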
19

Garda, Paula. "Essays on the macroeconomics of labor markets." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/119820.

Abstract:
This thesis sheds light on several macroeconomic aspects of labor markets. The first chapter focuses on the impact of dual labor markets on human capital investment. Using a large dataset from the Spanish Social Security, the wage losses of permanent and fixed-term workers after displacement are analyzed. Results indicate that workers under permanent contracts accumulate a higher share of firm-specific human capital than workers under fixed-term contracts. The impact on aggregate productivity is analyzed using a calibrated model à la Mortensen and Pissarides (1994) with endogenous investment in human capital and dual labor markets. The second chapter develops a model to explain cross-country differences in the cyclical fluctuations of informal employment in developing countries. The explanation can be found in institutional differences between the formal and informal sectors. The third chapter proposes a model that uses the flows into and out of unemployment to forecast the unemployment rate. It shows why this model should outperform standard time series models, and quantifies this contribution empirically for several OECD countries.
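The flow approach of the third chapter rests on the stock-flow identity u(t+1) = u(t) + s(t)(1 - u(t)) - f(t)u(t), where s and f are the job separation and job finding rates. A minimal sketch that iterates this identity with the rates held at their current values (the flow values below are illustrative assumptions):

```python
import numpy as np

def flow_forecast(u0, s, f, horizon=8):
    """Iterate the stock-flow identity with rates held at current values."""
    path, u = [], u0
    for _ in range(horizon):
        u = u + s * (1 - u) - f * u       # inflows minus outflows
        path.append(u)
    return np.array(path)

# Current unemployment 8%, separation rate 1.5%/quarter, finding rate 15%.
print(flow_forecast(0.08, 0.015, 0.15))
# The path converges toward the flow steady state s / (s + f):
print(0.015 / (0.015 + 0.15))
```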
20

Bauch, Jacob H. "The Impact of Oil Prices on the U.S. Economy." Scholarship @ Claremont, 2011. http://scholarship.claremont.edu/cmc_theses/146.

Abstract:
Nine of the ten recessions since WWII have been preceded by relatively large and sudden increases in the price of oil. In this paper, I use time series analysis to forecast GDP growth using oil prices. I use the methodology of Hamilton (2009) and extend the dataset through 2010. Impulse response functions are used to analyze the historical performance of the model's one-year-ahead forecasts. In April 2011, the International Monetary Fund lowered its forecast of 2011 U.S. GDP growth from 3.0% to 2.8%, largely due to persistently high oil prices. My model suggests that the price increase in 2011Q1 will lead to growth of 2% in 2011. Furthermore, my model predicts that a 54% increase in crude oil prices during the second quarter of 2011 would lead the U.S. into a double-dip recession.
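A sketch of a Hamilton-style forecasting regression: GDP growth on a lagged "net oil price increase", the log price's increase over its previous peak, zero otherwise. The twelve-quarter window echoes Hamilton's three-year net increase measure, but the simulated data and single-lag regression are assumptions, not the paper's exact specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
T = 160                                                 # quarters
oil = 30 * np.exp(np.cumsum(rng.normal(0, 0.08, T)))    # oil price level

# Net oil price increase: log price relative to the past 12-quarter maximum.
nopi = np.zeros(T)
for t in range(12, T):
    nopi[t] = max(0.0, np.log(oil[t]) - np.log(oil[t - 12:t].max()))

growth = 0.7 - 2.0 * np.roll(nopi, 1) + rng.normal(0, 0.5, T)

X = sm.add_constant(nopi[:-1].reshape(-1, 1))           # one-quarter lag
res = sm.OLS(growth[1:], X).fit()
print(res.params)     # negative slope: oil spikes precede slower growth
```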
21

Cicconi, Claudia. "Essays on macroeconometrics and short-term forecasting." Doctoral thesis, Université Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209660.

Abstract:
The thesis, entitled "Essays on macroeconometrics and short-term forecasting",

is composed of three chapters. The first two chapters are on nowcasting, a topic that has received increasing attention among both practitioners and academics, especially during and in the aftermath of the 2008-2009 economic crisis. At the heart of the two chapters is the idea of exploiting the information in data published at a higher frequency to obtain early estimates of the macroeconomic variable of interest. The models used to compute the nowcasts are dynamic models designed to handle efficiently the characteristics of the data in a real-time context, such as the fact that, owing to differing frequencies and non-synchronous releases, the time series generally have missing data at the end of the sample. While the first chapter uses a small model, a VAR, for nowcasting Italian GDP, the second makes use of a dynamic factor model, better suited to medium and large data sets, to provide early estimates of employment in the euro area. The third chapter develops a topic only marginally touched on by the second, namely the estimation of dynamic factor models on data characterized by block-structures.

The first chapter assesses the accuracy of Italian GDP nowcasts based on a small information set consisting of GDP itself, the industrial production index and the Economic Sentiment Indicator. The task is carried out using real-time vintages of data in an out-of-sample exercise over rolling windows. Besides the use of real-time data, the real-time setting of the exercise is also guaranteed by updating the nowcasts according to the historical release calendar. The model used to compute the nowcasts is a mixed-frequency Vector Autoregressive (VAR) model, cast in state-space form and estimated by maximum likelihood. The results show that the model can provide quite accurate early estimates of Italian GDP growth rates, not only with respect to a naive benchmark but also with respect to a bridge model based on the same information set and a mixed-frequency VAR with only GDP and the industrial production index.

The chapter also analyzes in some detail the role of the Economic Sentiment Indicator, and of soft information in general. The comparison of our mixed-frequency VAR with one including only GDP and the industrial production index clearly shows that using soft information helps obtain more accurate early estimates. Evidence is also found that the advantage of using soft information goes beyond its timeliness.

In the second chapter we focus on nowcasting the quarterly national account employment of the euro area, making use of both country-specific and area-wide information. The relevance of anticipating Eurostat estimates of employment rests on the fact that, although it is an important macroeconomic variable, euro area employment is measured at a relatively low frequency (quarterly) and published with a considerable delay (approximately two and a half months). Obtaining an early estimate of this variable is possible because several Member States publish employment data and employment-related statistics ahead of the Eurostat release of euro area employment. Data availability nevertheless represents a major limitation, as country-level time series are in general heterogeneous, have different starting periods and, in some cases, are very short. We construct a data set of monthly and quarterly time series consisting of both aggregate and country-level data on Quarterly National Account employment, employment expectations from business surveys, and Labour Force Survey employment and unemployment. In order to perform an out-of-sample exercise simulating the (pseudo) real-time availability of the data, we construct an artificial calendar of data releases based on the effective calendar observed during the first quarter of 2012. The model used to compute the nowcasts is a dynamic factor model allowing for mixed-frequency data, missing data at the beginning of the sample, and the ragged edges typical of non-synchronous data releases. Our results show that using country-specific information as soon as it becomes available makes it possible to obtain reasonably accurate estimates of euro area employment about fifteen days before the end of the quarter.

We also look at the nowcasts of employment for the four largest Member States. We find that (with the exception of France) augmenting the dynamic factor model with country-specific factors provides better results than the model without country-specific factors.

The third chapter of the thesis deals with dynamic factor models on data characterized by local cross-correlation due to the presence of block-structures. The latter is modeled by introducing block-specific factors, i.e. factors that are specific to blocks of time series. We propose an algorithm to estimate the model by (quasi) maximum likelihood and use it to run Monte Carlo simulations evaluating the effect of modeling, or not, the block-structure on the estimates of the common factors. We find two main results: first, in finite samples, modeling the block-structure, besides being interesting per se, can help reduce model misspecification and yield more accurate estimates of the common factors; second, imposing an incorrect block-structure, or imposing a block-structure when none is present, does not harm the estimates of the common factors. These two results lead us to conclude that it is always advisable to model the block-structure, especially if the characteristics of the data suggest that one is present.
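To make the mixed-frequency, ragged-edge setup concrete, here is a minimal sketch (simulated data and assumed variable names, not the thesis's exact specification) of a state-space model whose Kalman filter simply skips missing observations: quarterly GDP is stored at monthly frequency with NaNs outside quarter-end months.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor import DynamicFactor

rng = np.random.default_rng(0)
n = 120                                             # ten years of monthly data
factor = 0.1 * np.cumsum(rng.normal(size=n))        # latent state of the economy
ip = factor + rng.normal(scale=0.5, size=n)         # industrial production (monthly)
esi = factor + rng.normal(scale=0.7, size=n)        # sentiment indicator (monthly)
gdp = factor + rng.normal(scale=0.3, size=n)
gdp[np.arange(n) % 3 != 2] = np.nan                 # GDP observed only in quarter-end months

data = pd.DataFrame({"gdp": gdp, "ip": ip, "esi": esi},
                    index=pd.date_range("2010-01-31", periods=n, freq="M"))
data.iloc[-1, data.columns.get_loc("ip")] = np.nan  # ragged edge: latest IP not yet released

model = DynamicFactor(data, k_factors=1, factor_order=2)
res = model.fit(disp=False)
print("nowcast of latest GDP:", res.predict().iloc[-1]["gdp"])
```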

APA, Harvard, Vancouver, ISO, and other styles
22

Coroneo, Laura. "Essays on modelling and forecasting financial time series." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210284.

Full text
Abstract:
This thesis is composed of three chapters which propose novel approaches to modelling and forecasting financial time series. The first chapter focuses on high-frequency financial returns and proposes a quantile regression approach to model their intraday seasonality and dynamics. The second chapter deals with the problem of forecasting the yield curve using large datasets of macroeconomic information, while the last chapter addresses the issue of modelling the term structure of interest rates.

The first chapter investigates the distribution of high-frequency financial returns, with special emphasis on intraday seasonality. Using quantile regression, I show how the probability law expands and contracts through the day for three years of stock returns sampled at 15-minute intervals. Returns are more dispersed and less concentrated around the median in the hours near the opening and the close. I provide intraday value-at-risk assessments and show how they adapt to changes in dispersion over the day. Tests performed on the out-of-sample forecasts of the value at risk show that the model is able to provide good risk assessments and to outperform standard Gaussian and Student's t GARCH models.
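A minimal sketch of the intraday quantile-regression idea (simulated 15-minute returns with an assumed U-shaped dispersion profile, not the author's code): regressing returns on time-of-day dummies at a low quantile traces out an intraday value-at-risk profile.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
days, bins_per_day = 250, 26                             # 26 fifteen-minute bins per session
bins = np.tile(np.arange(bins_per_day), days)
scale = 1.5 - np.sin(np.pi * bins / (bins_per_day - 1))  # dispersion highest at open and close
df = pd.DataFrame({"ret": rng.normal(scale=scale), "bin": bins})

# the fitted 5% conditional quantile per intraday bin gives an intraday VaR profile
var5 = smf.quantreg("ret ~ C(bin)", df).fit(q=0.05)
print(var5.params.head())
```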

The second chapter shows that macroeconomic indicators are helpful in forecasting the yield curve. I incorporate a large number of macroeconomic predictors within the Nelson and Siegel (1987) model for the yield curve, which can be cast in a common factor model representation. Rather than including macroeconomic variables as additional factors, I use them to extract the Nelson and Siegel factors. Estimation is performed by the EM algorithm and the Kalman filter on a data set composed of 17 yields and 118 macro variables. Results show that incorporating large macroeconomic information improves the accuracy of out-of-sample yield forecasts at medium and long horizons.

The third chapter statistically tests whether the Nelson and Siegel (1987) yield curve model is arbitrage-free. Theoretically, the Nelson-Siegel model does not ensure the absence of arbitrage opportunities. Still, central banks and public wealth managers rely heavily on it. Using a non-parametric resampling technique and zero-coupon yield curve data from the US market, I find that the no-arbitrage parameters are not statistically different from those obtained from the Nelson and Siegel model, at a 95 percent confidence level. I therefore conclude that the Nelson and Siegel yield curve model is compatible with arbitrage-freeness.



APA, Harvard, Vancouver, ISO, and other styles
23

Breuss, Fritz. "Would DSGE Models have Predicted the Great Recession in Austria?" Springer International Publishing, 2018. http://epub.wu.ac.at/6086/1/10.1007_s41549%2D018%2D0025%2D1.pdf.

Full text
Abstract:
Dynamic stochastic general equilibrium (DSGE) models are the common workhorse of modern macroeconomic theory. Whereas storytelling and policy analysis have been at the forefront of applications since their inception, the forecasting potential of DSGE models has become topical only recently. In this study, we perform a post-mortem analysis of the predictive power of DSGE models in the case of Austria's Great Recession in 2009. For this purpose, eight DSGE models with different characteristics (small and large models; closed and open economy models; one- and two-country models) were used. The initial hypothesis was that DSGE models are inferior at forecasting a crisis ex ante. Surprisingly, however, it turned out that those models which implemented features related to the causes of the global financial crisis (such as financial frictions or interbank credit flows) could not only detect the turning point of the Austrian business cycle early in 2008 but also succeeded in forecasting the severe recession that followed in 2009. In comparison, non-DSGE methods such as the ex-ante forecast with the Global Economic (Macro) Model of Oxford Economics and WIFO's expert forecasts performed comparably to or better than most DSGE models in the crisis.
APA, Harvard, Vancouver, ISO, and other styles
24

Conflitti, Cristina. "Essays on the econometrics of macroeconomic survey data." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209635.

Full text
Abstract:
This thesis contains three essays covering different topics in the field of statistics and econometrics of survey data. Chapters one and two analyse two aspects of the Survey of Professional Forecasters (SPF hereafter) dataset. This survey provides a large amount of information on the macroeconomic expectations of professional forecasters and offers an opportunity to exploit a rich information set, but it poses the challenge of how to extract the relevant information in a proper way. The last chapter addresses the issue of analyzing the opinions on the euro reported in the Flash Eurobarometer dataset.

The first chapter, Measuring Uncertainty and Disagreement in the European Survey of Professional Forecasters, proposes a density forecast methodology based on the piecewise linear approximation of the individual forecasting histograms, to measure the uncertainty and disagreement of professional forecasters. Since 1960, with the introduction of the SPF in the US, it has been clear that such surveys are a useful source of information for measuring disagreement and uncertainty without relying on macroeconomic or time series models. Direct measures of uncertainty are seldom available, whereas many surveys report point forecasts from a number of individual respondents, and there has been a long tradition of using measures of the dispersion of individual respondents' point forecasts (disagreement or consensus) as proxies for uncertainty. The SPF, unlike other surveys, is an exception: it asks directly for the point forecast and for the probability distribution, in the form of a histogram, associated with the macro variables of interest. An important issue concerns how to approximate individual probability densities and obtain accurate individual results for disagreement and uncertainty before computing the aggregate measures. In contrast to Zarnowitz and Lambros (1987) and Giordani and Soderlind (2003), we overcome the problem associated with distributional assumptions on probability density forecasts by using a non-parametric approach that, instead of assuming a functional form for the individual probability law, approximates the histogram by a piecewise linear function. In addition, and unlike earlier works that focus on US data, we employ European data, considering gross domestic product (GDP), inflation and unemployment.

The second chapter, Optimal Combination of Survey Forecasts, is based on joint work with Christine De Mol and Domenico Giannone. It proposes an approach to optimally combine survey forecasts, exploiting the whole covariance structure among forecasters. There is a vast literature on forecast combination methods, advocating their usefulness from both the theoretical and empirical points of view (see e.g. the recent review by Timmermann (2006)). Surprisingly, simple methods tend to outperform more sophisticated ones, as shown for example by Genre et al. (2010) on the combination of forecasts in the SPF conducted by the European Central Bank (ECB). The main conclusion of several studies is that the simple equal-weighted average constitutes a benchmark that is hard to improve upon. In contrast to a great part of the literature, which does not exploit the correlation among forecasters, we take into account the full covariance structure and determine the optimal weights for the combination of point forecasts as the minimizers of the mean squared forecast error (MSFE), under the constraint that these weights are nonnegative and sum to one. We compare our combination scheme with other methodologies in terms of forecasting performance. Results show that the proposed optimal combination scheme is an appropriate methodology for combining survey forecasts.

The literature on point forecast combination is widely developed, but there are fewer studies analyzing the combination of density forecasts. We extend our work by considering density forecast combination. Building on the main results presented in Hall and Mitchell (2007), we propose an iterative algorithm for computing the density weights which maximize the average logarithmic score over the sample period. The empirical application covers European GDP and inflation forecasts. Results suggest that the optimal weights obtained via the iterative algorithm outperform the equal weights used by the ECB density combinations.

The third chapter, entitled Opinion surveys on the euro: a multilevel multinomial logistic analysis, outlines the multilevel aspects of public attitudes toward the euro. This work was motivated by the ongoing debate over whether the perception of the euro among European citizens, ten years after its introduction, was positive or negative. The aim of this work is therefore to disentangle the issue of public attitudes by considering both individual socio-demographic characteristics and the macroeconomic features of each country, treating them as two separate levels in a single analysis. A hierarchical structure is advantageous because it models within-country as well as between-country relations in a single analysis. The multilevel analysis allows for dependence between individuals within countries induced by unobserved heterogeneity between countries, i.e. we include in the estimation country-specific characteristics that are not directly observable. In this chapter we empirically investigate which individual characteristics and country specificities matter most for the perception of the euro. Attitudes toward the euro vary across individuals and countries, and are driven by personal considerations based on the benefits and costs of using the single currency. Individual features, such as a high level of education or living in a metropolitan area, have a positive impact on the perception of the euro. Moreover, country-specific economic conditions can influence individuals' attitudes.
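As an illustration of the second chapter's combination scheme, the following sketch (simulated forecast errors and a generic constrained optimizer, not the authors' code) computes weights minimizing the empirical MSFE subject to nonnegativity and a sum-to-one constraint.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
errors = rng.normal(size=(80, 5)) @ np.diag([0.5, 0.8, 1.0, 1.2, 1.5])
errors += rng.normal(size=(80, 1))              # common component -> correlated forecasters
sigma = errors.T @ errors / len(errors)         # empirical second-moment matrix of errors

def msfe(w):
    return w @ sigma @ w

n = sigma.shape[0]
res = minimize(msfe, np.full(n, 1 / n),
               bounds=[(0, 1)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("optimal weights:", np.round(res.x, 3))
print("equal-weight MSFE:", msfe(np.full(n, 1 / n)), "optimal MSFE:", res.fun)
```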

APA, Harvard, Vancouver, ISO, and other styles
25

Hörnell, Fredrik, and Melina Hafelt. "Responsiveness of Swedish housing prices to the 2018 amortization requirement : An investigation using a structural Vector autoregressive model to estimate the impact of macro prudential regulation on the Swedish housing market." Thesis, Södertörns högskola, Nationalekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-35533.

Full text
Abstract:
This thesis analyzed and estimated the impact of the March 1, 2018 loan-to-income amortization requirement on residential real estate prices in Sweden. A four-variable vector autoregressive (VAR) model was used to study the relationships between residential real estate prices, GDP, the real mortgage rate and the consumer price index over the period 2005 to 2017. First, a structural vector autoregressive (SVAR) model was used to test how a structural innovation in the error term for the real mortgage rate affected residential real estate prices. Secondly, an unconditional forecast from our reduced-form VAR was produced to estimate post-2017 price growth in the Swedish housing market. The impulse response function results stand in contradiction to economic intuition, i.e. the price puzzle problem. The unconditional forecast indicates that the housing market will enter a period of slower price growth after 2017, which is in line with previous research. The thesis's vector autoregressive model can give meaningful results with regard to trend forecasts, but it falls short of precise statements such as anticipating drastic price depreciation. We recommend the use of reduced-form VAR forecasting with regard to the Swedish housing market.
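A minimal sketch of the reduced-form VAR, impulse-response and forecast workflow described above, run on simulated stand-ins for the four variables (data and lag length are illustrative assumptions):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
levels = pd.DataFrame(rng.normal(size=(52, 4)).cumsum(axis=0),
                      columns=["house_prices", "gdp", "real_mortgage_rate", "cpi"])
growth = levels.diff().dropna()                     # quarterly differences

res = VAR(growth).fit(2)                            # reduced-form VAR with 2 lags
irf = res.irf(8)                                    # impulse responses over 8 quarters
# orthogonalized response of house prices (col 0) to a mortgage-rate shock (col 2)
print(irf.orth_irfs[:, 0, 2])
print(res.forecast(growth.values[-res.k_ar:], steps=4))  # unconditional 4-step forecast
```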
APA, Harvard, Vancouver, ISO, and other styles
26

Mayr, Johannes. "Forecasting Macroeconomic Aggregates." Diss., lmu, 2010. http://nbn-resolving.de/urn:nbn:de:bvb:19-111404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Tagliabracci, Alex. "Essays on macroeconomic forecasting." Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/665202.

Full text
Abstract:
This thesis is a collection of three empirical essays with a focus on forecasting. The first chapter focuses on an important policy task, forecasting inflation; the work investigates how the dynamics of the business cycle may affect the distribution of inflation forecasts. The second chapter considers two econometric models used in the nowcasting literature and proposes a comparison with an application to Italian GDP. The last chapter centres on forecasting the effects of macroeconomic data releases on exchange rates. The first chapter studies how the business cycle affects the conditional distribution of euro area inflation forecasts. Using a quantile regression approach, I estimate the conditional distribution of inflation to show its evolution over time, allowing for asymmetries across quantiles. I document evidence of downside risks to inflation which vary with developments in the state of the economy, while the upside risk remains relatively stable over time. I also find that this evidence partially characterizes the corresponding distribution derived from the ECB Survey of Professional Forecasters. The second chapter proposes two multivariate econometric models that exploit two features emphasized in the nowcasting literature, namely timely and high-frequency data, to predict Italian GDP: a dynamic factor model and a mixed-frequency Bayesian VAR. A pseudo out-of-sample exercise shows three main results: (i) both models considerably outperform a standard univariate benchmark; (ii) the dynamic factor model turns out to be more reliable at the end of the forecasting period, while the mixed-frequency BVAR appears superior with an incomplete information set; (iii) the overall forecasting superiority of the dynamic factor model is mainly driven by its ability to capture the severity of recession episodes. Finally, the third chapter, written jointly with Luca Brugnolini and Antonello D'Agostino, investigates the possible predictability of macroeconomic surprises and their effects on exchange rates. In particular, we analyze two of the most important data releases affecting the US financial market, namely the change in the level of non-farm payroll employment (NFP) and the manufacturing index published by the Institute for Supply Management (ISM). We examine the unexpected component of these two, measured as the deviation of the actual release from the Bloomberg consensus. We label it the market surprise, and we investigate whether, and in which cases, its structure is partially predictable. Secondly, we use high-frequency data on the euro-dollar exchange rate as a laboratory to study the effect of these surprises. We show in a regression framework that although the in-sample fit is sufficiently good, performance deteriorates in an out-of-sample setting because a naive model can hardly be beaten in a sixty-minute window after the release. Finally, we demonstrate that under certain circumstances there is some structure that can be exploited, and we provide a framework to take advantage of it.
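The third chapter's event-study regressions can be sketched as follows (simulated surprises and exchange-rate moves; the coefficient and window are illustrative assumptions, not the authors' estimates):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
surprise = rng.normal(size=150)                # (actual - consensus) / std. dev., e.g. for NFP
fx_move = -0.15 * surprise + rng.normal(scale=0.30, size=150)  # 60-min EUR/USD move
ols = sm.OLS(fx_move, sm.add_constant(surprise)).fit()
print(ols.params, ols.rsquared)                # in-sample fit can look good...
# ...but, as the chapter argues, beating a naive model out of sample is hard
```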
APA, Harvard, Vancouver, ISO, and other styles
28

Yetman, James Arthur. "Essays in macroeconomic forecasting." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ35986.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Hutson, Mark Duffield. "Three Essays on Macroeconomic Forecasting." Thesis, The George Washington University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3719383.

Full text
Abstract:

My dissertation consists of three essays on econometric forecasting and forecast evaluation. Each essay contributes to the literature in its own way. In the first essay, I employ a common qualitative analysis framework on a widely-available consensus forecast. I evaluate the consensus forecast's performance using the Predictive Failure statistic. I find that the survey respondents provide statistically significant directional forecasts or signals. The second essay evaluates forecasts of a prominent energy policy model. This includes analysis of the forecasts, developing a rival forecasting model, and attempting to identify information to improve the policy model's forecasts. This analysis finds that the policy model generally performs well against a sophisticated time-series model. Specifically, the model appears to incorporate data dynamics appropriately, and this technique can serve as a new benchmark to evaluate the forecasts. The third essay examines the role of data revisions on models employing error correction relationships. These error correction relationships are rooted in economic theory and thus should be robust to changing economic conditions and data revisions. This essay finds that an older, popular model breaks down using newer data vintages, suggesting that data revisions and data issues can influence the stability of vector error correction models.

APA, Harvard, Vancouver, ISO, and other styles
30

Verra, Christina. "Macroeconomic forecasting using model averaging." Thesis, Queen Mary, University of London, 2009. http://qmro.qmul.ac.uk/xmlui/handle/123456789/383.

Full text
Abstract:
Recently, there has been broadening interest in forecasting techniques applied to large data sets, since economists in business and management have to deal with great volumes of information. This thesis addresses the issue of forecasting with a large data set using different model averaging approaches. In particular, Bayesian and frequentist model averaging methods are considered, including Bayesian model averaging (BMA), information theoretic model averaging (ITMA) and predictive likelihood model averaging (PLMA). The predictive performance of each scheme is compared with the most promising existing alternatives, namely a benchmark AR model and the equal-weighted model averaging (AV) scheme. An empirical application to inflation forecasting for five countries using large data sets within the model averaging framework is presented. The average ARX model, with weights constructed differently according to each model averaging scheme, is compared with both the benchmark AR and the AV model. To compare forecast accuracy, several performance indicators are provided, such as the Root Mean Square Error (RMSE), the Mean Absolute Error (MAE), Theil's inequality coefficient (U), the Mean Square Forecast Error (MSFE) and the Relative Mean Square Forecast Error (RMSFE). Next, within the Granger causality framework, the Diebold & Mariano (DM) test and the Clark & McCracken (CM) test are used to assess whether the data-rich models represented by the three model averaging schemes deliver a statistically significant improvement over the benchmark forecasts. Critical values at 5% and 10% are calculated based on a bootstrap approximation of the finite-sample distribution of the DM and CM test statistics. The main outcome is that although the information theoretic model averaging scheme is the most powerful approach, the other two model averaging techniques can be regarded as useful alternatives.
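A minimal sketch of the loss comparison and Diebold-Mariano test mentioned above (simulated forecast errors; a textbook DM statistic with squared-error loss and no HAC correction):

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2):
    """DM statistic on squared-error loss differentials (no HAC correction)."""
    d = e1 ** 2 - e2 ** 2
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    return dm, 2 * stats.norm.sf(abs(dm))            # two-sided asymptotic p-value

rng = np.random.default_rng(4)
actual = rng.normal(size=100)
e_bench = actual - np.zeros(100)                     # naive benchmark forecasts zero
e_avg = rng.normal(scale=0.8, size=100)              # errors of a model-averaging forecast
print("RMSEs:", np.sqrt(np.mean(e_bench ** 2)), np.sqrt(np.mean(e_avg ** 2)))
print("DM stat, p-value:", diebold_mariano(e_bench, e_avg))
```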
APA, Harvard, Vancouver, ISO, and other styles
31

Sundberg, David. "Yield curve forecasting using macroeconomic proxy variables." Thesis, Umeå universitet, Institutionen för fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-158326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Zeng, Jing [Verfasser]. "Forecasting Euro Area Macroeconomic Aggregate Variables / Jing Zeng." Konstanz : Bibliothek der Universität Konstanz, 2015. http://d-nb.info/1112336702/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Costantini, Mauro, Jesus Crespo Cuaresma, and Jaroslava Hlouskova. "Can Macroeconomists Get Rich Forecasting Exchange Rates?" WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4181/1/wp176.pdf.

Full text
Abstract:
We provide a systematic comparison of the out-of-sample forecasts based on multivariate macroeconomic models and forecast combinations for the euro against the US dollar, the British pound, the Swiss franc and the Japanese yen. We use profit maximization measures based on directional accuracy and trading strategies in addition to standard loss minimization measures. When comparing predictive accuracy and profit measures, data snooping bias free tests are used. The results indicate that forecast combinations help to improve over benchmark trading strategies for the exchange rate against the US dollar and the British pound, although the excess return per unit of deviation is limited. For the euro against the Swiss franc or the Japanese yen, no evidence of generalized improvement in profit measures over the benchmark is found. (authors' abstract)
Series: Department of Economics Working Paper Series
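The directional-accuracy and trading-strategy measures can be sketched in a few lines (simulated returns and forecasts, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)
actual = rng.normal(size=200)                          # realized exchange-rate returns
forecast = 0.3 * actual + rng.normal(scale=1.0, size=200)
hits = np.mean(np.sign(forecast) == np.sign(actual))   # directional accuracy
strategy_return = np.mean(np.sign(forecast) * actual)  # go long/short on the signal
print(f"directional accuracy: {hits:.2%}, mean strategy return: {strategy_return:.4f}")
```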
APA, Harvard, Vancouver, ISO, and other styles
34

Korompilis-Magkas, Dimitris. "Three essays in macroeconomic forecasting using Bayesian model selection." Thesis, University of Strathclyde, 2010. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=18236.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Mayr, Johannes. "Forecasting macroeconomic aggregates : pooling of forecasts and pooling of information." kostenfrei, 2009. http://d-nb.info/100055046X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Reade, J. James. "Macroeconomic modelling and forecasting in the face of non-stationarity." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.495733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Qiang, Fu. "Bayesian multivariate time series models for forecasting European macroeconomic series." Thesis, University of Hull, 2000. http://hydra.hull.ac.uk/resources/hull:8068.

Full text
Abstract:
Research on and debate about 'wise use' of explicitly Bayesian forecasting procedures has been widespread and often heated. This situation has come about partly in response to the dissatisfaction with the poor forecasting performance of conventional methods and partly in view of the development of computational capacity and macro-data availability. Experience with Bayesian econometric forecasting schemes is still rather limited, but it seems to be an attractive alternative to subjectively adjusted statistical models [see, for example, Phillips (1995a), Todd (1984) and West & Harrison (1989)]. It provides effective standards of forecasting performance and has demonstrated success in forecasting macroeconomic variables. Therefore, there would seem to be a case for seeking some additional insights into the important role of such methods in achieving objectives within the macroeconomics profession. The primary concerns of this study, motivated by the apparent deterioration of mainstream macroeconometric forecasts of the world economy in recent years [Wallis (1989), pp.34-43], are threefold. The first is to formalize a thorough, yet simple, methodological framework for empirical macroeconometric modelling in a Bayesian spirit. The second is to investigate whether improved forecasting accuracy is feasible within a European-based multicountry context. This is conducted with particular emphasis on the construction and implementation of Bayesian vector autoregressive (BVAR) models that incorporate both a priori and cointegration restrictions. The third is to extend the approach and apply it to the joint modelling of system-wide interactions amongst national economies. The intention is to generate more accurate answers to a variety of practical questions about the future path towards a united Europe. The use of BVARs has advanced considerably. In particular, the value of joint modelling with time-varying parameters and much more sophisticated prior distributions has been stressed in the econometric methodology literature; see e.g. Doan et al. (1984), Kadiyala and Karlsson (1993, 1997), Litterman (1986a), and Phillips (1995a, 1995b). Although trade-linked multicountry macroeconomic models may not be able to clarify all the structural and finer economic characteristics of each economy, they do provide a flexible and adaptable framework for the analysis of global economic issues. In this thesis, the forecasting record for the main European countries is examined using the 'post mortem' of IMF, OECD and EEC sources. The formulation, estimation and selection of BVAR forecasting models, carried out using the Microfit, MicroTSP, PcGive and RATS packages, are reported. Practical applications of BVAR models especially address the issues of whether combinations of forecasts explicitly outperform the forecasts of a single model, and whether the recent failures of multicountry forecasts can be attributed to an increase in the 'internal volatility' of the world economic environment; see Artis and Holly (1992), and Barrell and Pain (1992, p.3). The research undertaken consolidates existing empirical and theoretical knowledge of BVAR modelling. It provides a unified coverage of economic forecasting applications and develops a common, effective and progressive methodology for the European economies. The empirical results show that, in simulated out-of-sample forecasting, the gains in forecast accuracy from imposing prior and long-run constraints are statistically significant, especially for small estimation samples and long forecast horizons.
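A minimal sketch of the Litterman-style shrinkage at the heart of such BVAR models, for a single equation with known error variance (the function name, data construction and parameter values are illustrative assumptions):

```python
import numpy as np

def bvar_posterior_mean(y, X, own_idx, lags, n_vars, lam=0.2, sigma2=1.0):
    """Posterior mean for one VAR equation under a Litterman-style prior.
    X columns are ordered [lag-1 block, ..., lag-p block, constant]."""
    k = X.shape[1]
    b0 = np.zeros(k)
    b0[own_idx] = 1.0                          # prior mean: own first lag = 1 (random walk)
    prior_var = np.full(k, 1e6)                # near-diffuse prior on the constant
    for l in range(lags):
        prior_var[l * n_vars:(l + 1) * n_vars] = (lam / (l + 1)) ** 2  # shrink longer lags harder
    omega_inv = np.diag(1.0 / prior_var)
    return np.linalg.solve(X.T @ X / sigma2 + omega_inv,
                           X.T @ y / sigma2 + omega_inv @ b0)

rng = np.random.default_rng(5)
T, n_vars, lags = 80, 3, 2
data = rng.normal(size=(T, n_vars)).cumsum(axis=0)
Y = data[lags:, 0]                             # first variable's equation
X = np.hstack([data[lags - l - 1:T - l - 1] for l in range(lags)] + [np.ones((T - lags, 1))])
print(bvar_posterior_mean(Y, X, own_idx=0, lags=lags, n_vars=n_vars))
```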
APA, Harvard, Vancouver, ISO, and other styles
38

Rangel, Jose Gonzalo. "Stock market volatility and price discovery: three essays on the effect of macroeconomic information." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3220417.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed September 7, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 125-130).
APA, Harvard, Vancouver, ISO, and other styles
39

Berardi, Andrea. "Term structure of interest rates, non-neutral inflation and economic growth." Thesis, London Business School (University of London), 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Nguyen, Dat-Dao. "Forecasting macroeconomic models with artificial neural networks : an empirical investigation into the foundation for an intelligent forecasting system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0021/NQ47705.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Stankiewicz, Sandra [Verfasser]. "Forecasting and econometric modelling of macroeconomic and financial time series / Sandra Stankiewicz." Konstanz : Bibliothek der Universität Konstanz, 2015. http://d-nb.info/1079666028/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Pirschel, Inske [Verfasser]. "Essays on Preferences and Nominal Rigidities and on Macroeconomic Forecasting / Inske Pirschel." Kiel : Universitätsbibliothek Kiel, 2016. http://d-nb.info/1122110944/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Bañbura, Marta. "Essays in dynamic macroeconometrics." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210294.

Full text
Abstract:
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of the U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis, entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP”, is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how this is related to their timeliness.

The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators on real activity directly measure certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures in order to assess the role of individual series, or groups of series, in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge”, but can include e.g. mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from the flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied for small factor models by e.g. Geweke (1977), Sargent and Sims (1977) or Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of large cross-sections, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of the euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and short-history monthly series, such as the Purchasing Managers' surveys, on the forecast.

The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can also be relevant for structural analysis; see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation by De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing size of the model one should shrink more to avoid overfitting, but that when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales”, proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate over whether money provides a reliable signal of future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong; see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. This chapter investigates empirically which frequency bands, and of which variables, are the most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract the different frequency components of a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application to multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
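The wavelet band-pass idea of the fourth chapter can be sketched with a standard discrete wavelet decomposition (a stand-in series; the PyWavelets package and the choice of wavelet and level are assumptions):

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(9)
t = np.linspace(0, 8 * np.pi, 256)
money_growth = np.sin(t) + rng.normal(scale=0.3, size=256)   # stand-in series

coeffs = pywt.wavedec(money_growth, "db4", level=4)
smooth = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]  # keep only the coarsest scale
low_freq_component = pywt.waverec(smooth, "db4")
# this low-frequency band, rather than the raw series, would enter the inflation forecast
```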



APA, Harvard, Vancouver, ISO, and other styles
44

Bradshaw, Girard W. "Detecting macroeconomic impacts on agricultural prices and export sales : a time series forecasting approach /." Thesis, This resource online, 1988. http://scholar.lib.vt.edu/theses/available/etd-04122010-083628/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Sadik, Zryan. "Asset price and volatility forecasting using news sentiment." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17079.

Full text
Abstract:
The aim of this thesis is to show that news analytics data can be utilised to improve the predictive ability of existing models that have useful roles in a variety of financial applications. The modified models are computationally efficient and perform far better than the existing ones, offering a reasonable compromise between increased model complexity and prediction accuracy. I have investigated the impact of news sentiment on the volatility of stock returns. The GARCH model is one of the most common models used for predicting asset price volatility from the return time series. In this research, I consider quantified news sentiment as a second source of information about the movement of asset prices, which is used together with the asset time series data to predict the volatility of asset price returns. Comprehensive numerical experiments demonstrate that the newly proposed volatility models provide superior predictions to the "plain vanilla" GARCH, TGARCH and EGARCH models. This research presents evidence that including a news sentiment term as an exogenous variable in the GARCH framework improves the predictive power of the model. The analysis suggests that an exponential decay function works well when the news flow is frequent, whereas the Hill decay function works well only when there are scheduled announcements. The numerical results vindicate some recent findings regarding the utility of news sentiment as a predictor of volatility, as well as the utility of the new models combining proxies for past news sentiment with past asset price returns. The empirical analysis suggests that news-augmented GARCH models can be very useful in estimating VaR and implementing risk management strategies. Another direction of this research introduces a new approach to constructing a commodity futures pricing model: a new method of incorporating macroeconomic news into a predictive model for forecasting the prices of crude oil futures contracts. Since these futures contracts are more liquid than the underlying commodity itself, accurate forecasting of their prices is of great value to multiple categories of market participants. The Kalman filtering framework for forecasting arbitrage-free (futures) prices is utilized, and it is assumed that the volatility of the oil (futures) price is influenced by macroeconomic news. The impact of quantified news sentiment on price volatility is modelled through a parametrized, nonlinear functional map. This approach is motivated by the successful use of a similar model structure in my earlier work on predicting individual stock volatility using stock-specific news. Numerical experiments with real data illustrate that this new model performs better than the one-factor model in terms of predictive accuracy as well as goodness of fit to the data. The proposed model structure for incorporating macroeconomic news together with historical (market) data is novel and improves the accuracy of price prediction quite significantly.
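A minimal sketch of the news-augmented GARCH idea: a GARCH(1,1) variance recursion with an additive, exponentially decayed sentiment term (parameter values and simulated inputs are illustrative, not the thesis's estimates):

```python
import numpy as np

def news_garch_variance(returns, news, omega=0.05, alpha=0.08,
                        beta=0.90, gamma=0.30, decay=0.70):
    """GARCH(1,1) variance recursion with an exponentially decayed news term."""
    h = np.empty(len(returns))
    h[0] = returns.var()
    impact = 0.0
    for t in range(1, len(returns)):
        impact = decay * impact + news[t - 1]       # decayed cumulative sentiment
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1] + gamma * abs(impact)
    return h

rng = np.random.default_rng(8)
r = rng.normal(scale=1.0, size=500)                 # daily returns in percent (stand-in)
sentiment = rng.normal(size=500)                    # quantified news sentiment (stand-in)
h = news_garch_variance(r, sentiment)               # conditional variance path
```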
APA, Harvard, Vancouver, ISO, and other styles
46

Fodor, Bryan D. "The effect of macroeconomic variables on the pricing of common stock under trending market conditions." Thesis, Department of Business Administration, University of New Brunswick, 2003. http://hdl.handle.net/1882/49.

Full text
Abstract:
Thesis (MBA) -- University of New Brunswick, Faculty of Administration, 2003.
Typescript. Bibliography: leaves 83-84. Also available online through University of New Brunswick, UNB Electronic Theses & Dissertations.
APA, Harvard, Vancouver, ISO, and other styles
47

Heinrich, Markus [Verfasser], Kai [Akademischer Betreuer] Carstensen, and Matei [Gutachter] Demetrescu. "Macroeconomic Forecasting and Business Cycle Analysis with Nonlinear Models / Markus Heinrich ; Gutachter: Matei Demetrescu ; Betreuer: Kai Carstensen." Kiel : Universitätsbibliothek Kiel, 2020. http://d-nb.info/122345293X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Perendija, Djordje V. "Business cycles, interest rates and market volatility : estimation and forecasting using DSGE macroeconomic models under partial information." Thesis, London Metropolitan University, 2018. http://repository.londonmet.ac.uk/1536/.

Full text
Abstract:
Even long before the financial and economic crisis of 2007/2008, economists were more than aware of the insufficiencies and lack of realism in macroeconomic modelling and model calibration methods, including DSGE methods and models, and spelled out the need for further enhancements. One issue this research started addressing even before the 2008 crisis imposed its own demand for improvements is the use of single, fully informed rational agents in those models. Consequently, the first part of this research project aimed to improve DSGE econometric methods by introducing a novel solution for DSGE models with imperfect, partial information about the current values of deep variables and shocks, and by applying this solution to imperfectly informed multiple agents with their different, inner-rationality models. Along these lines, this research also shows that DSGE models can be extended and suited to both the fitting and estimation of the long-term yield curve and to estimation with rich data sets, by further extending their inner mechanism. In the aftermath of the 2008 crisis, which struck at the beginning of this research project, and the subsequent, extensive criticism of DSGE models, this research analyses alternative causes of the crisis. It focuses on identifying possible causes, such as a yet-unknown debt accelerator mechanism and the related, probable model misspecifications, rational inattention, and the role of institutional policies in both the development of the crisis and its resolution. Finally, in response to many of the critiques of the usually monetary-policy-oriented DSGE models, this research project provides another set of novel extensions to such models, aiming to bring in more Keynesian characteristics suited to the more active, endogenous fiscal policy deemed necessary in the aftermath of the crisis. The project hence extends New Keynesian-Neo-Classical synthesis monetary DSGE models with a novel, endogenous, counter-cyclical fiscal policy rule driven by news and unemployment changes. It then shows the overall benefits of the resulting, mutually active monetary-fiscal policy for both capital utilisation and overall economic stability.
APA, Harvard, Vancouver, ISO, and other styles
49

Dechaux, Pierrick. "L'économie face aux enquêtes psychologiques 1944 -1960 : unité de la science économique, diversité des pratiques." Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E025.

Full text
Abstract:
This dissertation looks at the historical development of George Katona's psychological surveys at the Survey Research Center at the University of Michigan. The main legacy of this work has been the widespread adoption of confidence indicators. They are used each month by more than fifty countries and widely implemented by business managers and forecasters. How do we explain the widespread usage of these indicators despite a prevalent consensus in macroeconomics and microeconomics around models that do not consider them important tools? In order to answer this question, we study several controversies that occurred around the Michigan surveys between 1944 and 1960. It is shown that this era was characterized by many interdisciplinary exchanges guided by the practical needs of decision-makers in governments and private companies. I show that if economists know little about these debates, it is because the debates were carried on in disciplinary fields on the periphery of economics, fields centered on practical problems that theoretical economists progressively abandoned. This thesis offers a new way of understanding the history of recent macroeconomics and behavioral economics by proposing an analysis of the links between economic theory and its application in practice. For instance, the history of post-war intellectual dynamics cannot be reduced to theoretical innovations or to a new relationship between theory and empiricism. Indeed, these dynamics also rely on the redrawing of the boundaries between the science and its art; between economics on the one hand and marketing and forecasting on the other.
APA, Harvard, Vancouver, ISO, and other styles
50

CUNHA, JOAO MARCO BRAGA DA. "EXPERIMENTS ON FORECASTING THE AMERICAN TERM STRUCTURE OF INTEREST RATES: MEAN REVERSION, INERTIA AND INFLUENCE OF MACROECONOMIC VARIABLES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14308@1.

Full text
Abstract:
This work proposes a model with mean reversion and inertia for the yields and the loadings of the Nelson and Siegel (1987) factors, and includes selected macroeconomic variables. The generated forecasts are compared with the Random Walk and with the Diebold and Li (2006) methodology.
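For reference, the Nelson and Siegel (1987) loadings on which this model builds can be written in a few lines (the decay parameter is the monthly value popularized by Diebold and Li; the maturity grid is illustrative):

```python
import numpy as np

def nelson_siegel_loadings(tau, lam=0.0609):
    """Level, slope and curvature loadings at maturities tau (in months)."""
    decay = (1 - np.exp(-lam * tau)) / (lam * tau)
    return np.column_stack([np.ones_like(tau), decay, decay - np.exp(-lam * tau)])

maturities = np.array([3.0, 6, 12, 24, 36, 60, 120])   # months
L = nelson_siegel_loadings(maturities)
# a fitted yield curve at each date is then L @ [level, slope, curvature]
```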
APA, Harvard, Vancouver, ISO, and other styles