Dissertations / Theses on the topic 'Macroeconomics – Forecasting'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Macroeconomics – Forecasting.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Heidari, Hassan, Economics, Australian School of Business, UNSW. "Essays on macroeconomics and macroeconomic forecasting." Awarded by: University of New South Wales. School of Economics, 2006. http://handle.unsw.edu.au/1959.4/22800.
Full text
Liu, Dandan. "Essays on macroeconomics and forecasting." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4271.
Full text
De Antonio Liedo, David. "Structural models for macroeconomics and forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210142.
Full text
This thesis contributes to central debates in empirical macroeconomic modeling.
Chapter 1, entitled “A Model for Real-Time Data Assessment with an Application to GDP Growth Rates”, provides a model for the data
revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. Thus, the model encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008). That is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the data publication process (DPP) presented here allows for the simultaneous presence of both noise and news. While the "regression approach" followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it cannot quantify it, as our model does.
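The noise-versus-news distinction at the heart of this chapter can be illustrated with a toy simulation. The sketch below (synthetic data, not the thesis's model) runs the classical revision-on-announcement regression under each polar hypothesis: under news the revision is unpredictable from the preliminary figure, while under noise it is negatively correlated with it.

```python
# Illustration of the classical "noise vs news" regressions of Mankiw,
# Runkle and Shapiro (1984) on synthetic data; a sketch of the polar
# hypotheses, not the chapter's encompassing model.
import random

random.seed(0)

def ols_slope(x, y):
    """Slope of the OLS regression of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sum((a - mx) ** 2 for a in x)

T = 500
# "News": the preliminary figure is an efficient forecast, so the final
# value equals the announcement plus unpredictable news.
prelim_news = [random.gauss(2.0, 1.0) for _ in range(T)]
final_news = [p + random.gauss(0.0, 0.5) for p in prelim_news]
rev_news = [f - p for f, p in zip(final_news, prelim_news)]

# "Noise": the preliminary figure is the final value plus measurement
# error, so revisions are negatively correlated with the announcement.
final_noise = [random.gauss(2.0, 1.0) for _ in range(T)]
prelim_noise = [f + random.gauss(0.0, 0.5) for f in final_noise]
rev_noise = [f - p for f, p in zip(final_noise, prelim_noise)]

slope_news = ols_slope(prelim_news, rev_news)    # ~ 0 under news
slope_noise = ols_slope(prelim_noise, rev_noise) # < 0 under noise
```

Aruoba's (2008) point is visible here: the two regressions test polar hypotheses that need not be exhaustive, which motivates a model allowing both components simultaneously.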
The second and third chapters acknowledge the possibility that macroeconomic data are measured with error, but the approach followed to model the mismeasurement is highly stylized and does not capture the complexity of the revision process described in the first chapter.
Chapter 2, entitled “Revisiting the Success of the RBC model”, proposes the use of dynamic factor models as an alternative to the VAR based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable
to that of the reduced form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful at forecasting than the strong behavioral assumptions imposed by the microfoundations in the model economy.
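The "formal test of predictive accuracy" mentioned above is in the Diebold–Mariano tradition; the exact variant used in the chapter is not stated here, so the following is only an illustrative squared-error version on synthetic forecast errors.

```python
# Hedged sketch of a Diebold–Mariano-type test of equal predictive
# accuracy; squared-error loss, one-step-ahead forecasts, no
# autocorrelation correction. All forecast errors are synthetic.
import math
import random

def diebold_mariano(e1, e2):
    """DM statistic: t-ratio of the mean loss differential."""
    d = [a * a - b * b for a, b in zip(e1, e2)]
    T = len(d)
    dbar = sum(d) / T
    var = sum((x - dbar) ** 2 for x in d) / (T - 1)
    return dbar / math.sqrt(var / T)

random.seed(3)
errors_a = [random.gauss(0, 1.2) for _ in range(200)]  # model A, larger errors
errors_b = [random.gauss(0, 0.8) for _ in range(200)]  # model B, smaller errors
stat = diebold_mariano(errors_a, errors_b)
# |stat| > 1.96 rejects equal accuracy at the 5% level
```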
The last chapter, “What are Shocks Capturing in DSGE modeling”, contributes to current debates on the use and interpretation of larger DSGE
models. A recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003, and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005; Uhlig, 2004). In this chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which explicitly models the statistical agency, along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that
resulting from the original specification.
Doctorate in Economics and Management Sciences
Schwarzmüller, Tim [Author]. "Essays in Macroeconomics and Forecasting / Tim Schwarzmüller." Kiel: Universitätsbibliothek Kiel, 2016. http://d-nb.info/1102204021/34.
Full text
Brinca, Pedro Soares. "Essays in Quantitative Macroeconomics." Doctoral thesis, Stockholms universitet, Nationalekonomiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-92861.
Full text
Galimberti, Jaqueson Kingeski. "Adaptive learning for applied macroeconomics." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/adaptive-learning-for-applied-macroeconomics(cde517d7-d552-4a53-a442-c584262c3a8f).html.
Full text
Arora, Siddharth. "Time series forecasting with applications in macroeconomics and energy." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:c763b735-e4fa-4466-9c1f-c3f6daf04a67.
Full text
Ward, Felix [Author]. "Essays in International Macroeconomics and Financial Crisis Forecasting / Felix Ward." Bonn: Universitäts- und Landesbibliothek Bonn, 2018. http://d-nb.info/1167856899/34.
Full text
Xue, Jiangbo. "A structural forecasting model for the Chinese macroeconomy /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECON%202009%20XUE.
Full text
Ricci, Lorenzo. "Essays on tail risk in macroeconomics and finance: measurement and forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/242122.
Full text
Doctorate in Economics and Management Sciences
Steinbach, Max Rudibert. "Essays on dynamic macroeconomics." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86196.
Full text
ENGLISH ABSTRACT: In the first essay of this thesis, a medium-scale DSGE model is developed and estimated for the South African economy. When used for forecasting, the model outperforms private sector economists in forecasting CPI inflation, GDP growth and the policy rate over certain horizons. In the second essay, the benchmark DSGE model is extended to include the yield on South African 10-year government bonds. The model is then used to decompose the 10-year yield spread into (1) the structural shocks that contributed to its evolution during the inflation targeting regime of the South African Reserve Bank, as well as (2) an expected yield and a term premium. In addition, it is found that changes in the South African term premium may predict future real economic activity. Finally, the need for DSGE models to take account of financial frictions became apparent during the recent global financial crisis. As a result, the final essay incorporates a stylised banking sector into the benchmark DSGE model described above. The optimal response of the South African Reserve Bank to financial shocks is then analysed within the context of this structural model.
Feng, Ning. "Essays on business cycles and macroeconomic forecasting." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/279.
Full text
Stavrakeva, Vania Atanassova. "Three Essays in Macroeconomics and International Finance." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10881.
Full text
Economics
Caruso, Alberto. "Essays on Empirical Macroeconomics." Doctoral thesis, Universite Libre de Bruxelles, 2020. https://dipot.ulb.ac.be/dspace/bitstream/2013/308164/4/TOC.pdf.
Full text
Doctorate in Economics and Management Sciences
Monti, Francesca. "Combining structural and reduced-form models for macroeconomic forecasting and policy analysis." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209970.
Full text
Doctorate in Economics and Management Sciences
Evholt, David, and Oscar Larsson. "Generative Adversarial Networks and Natural Language Processing for Macroeconomic Forecasting." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273422.
Full text
Macroeconomic forecasting has long been a difficult challenge. Today it is mostly tackled with time series analysis, and few attempts have been made using machine learning. In this thesis, a generative adversarial network (GAN) is used to forecast US unemployment, with results that beat all benchmarks set by an ARIMA model. An attempt is also made to use data from Twitter and the natural language processing (NLP) model DistilBERT. These models do not beat the benchmarks but show promising results. The models are further tested on the US stock index S&P 500. For these models, Twitter data improved the results, which demonstrates the potential of social media data when applied to more irregular indices that lack clear seasonal dependence and are more sensitive to trends in public discourse. The results show that Twitter data can be used to find trends in both US unemployment and the S&P 500 index. This lays the groundwork for further research on NLP-GAN models for macroeconomic forecasting based on social media data.
Liebermann, Joëlle. "Essays in real-time forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209644.
Full text
This thesis consists of three essays in real-time forecasting.
The issue of using data as available to forecasters, policymakers or financial markets in real time is an important one that has only recently been taken on board in the empirical literature. Data available and used in real time are preliminary and differ from ex-post revised data, and given that data revisions may be quite substantial, using the latest available data instead of real-time data can substantially affect empirical findings (see, among others, Croushore's (2011) survey). Furthermore, as variables are released on different dates and with varying publication lags, datasets are characterized by the so-called "ragged-edge" structure; in order not to disregard timely information, special econometric frameworks, such as the one developed by Giannone, Reichlin and Small (2008), must be used.
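The "ragged-edge" structure mentioned above is easy to visualize: each series' publication lag determines how many observations are missing at the end of the panel. A minimal sketch with invented series names and lags:

```python
# A stylized "ragged-edge" data set: series arrive with different
# publication lags, so the panel's final rows contain missing values.
# Series names and lags are invented for illustration.
publication_lag = {"survey": 0, "industrial_production": 2, "employment": 3}
T = 12       # months in the current vintage
today = 11   # index of the latest calendar month

panel = {}
for series, lag in publication_lag.items():
    obs = [float(t) for t in range(T)]      # fake observations
    for t in range(today - lag + 1, T):     # not yet released
        obs[t] = None
    panel[series] = obs

missing = {s: panel[s].count(None) for s in panel}
# Timely soft data are complete; hard data end earlier: the ragged edge.
```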
The first Chapter, “The impact of macroeconomic news on bond yields: (in)stabilities over
time and relative importance”, studies the reaction of U.S. Treasury bond yields to real-time
market-based news in the daily flow of macroeconomic releases which provide most of the
relevant information on their fundamentals, i.e. the state of the economy and inflation. We
find that yields react systematically to a set of news consisting of the soft data, which have
very short publication lags, and the most timely hard data, with the employment report
being the most important release. However, sub-sample evidence reveals parameter instability in the absolute and relative size of the yields' response to news, as well as in its significance. In particular, the often-cited dominance of the employment report over markets has evolved over time, with the size of the yields' reaction to it steadily increasing. Moreover, over the recent crisis period there has been an overall switch in the relative importance of soft and hard data compared to the pre-crisis period, with the latter becoming more important even if less timely; the range of hard data to which markets react has broadened and become more balanced, being less concentrated on the employment report. Markets have become more reactive to news over the recent crisis period, particularly to hard data. This is a consequence of the fact that in periods of high uncertainty (a bad state), markets are starved for information and attach a higher value to the marginal information content of these news releases.
The second and third Chapters focus on the real-time ability of models to nowcast and forecast in a data-rich environment. They use an econometric framework that can deal with large panels that have a "ragged-edge" structure; to evaluate the models in real time, we constructed a database of vintages for US variables reproducing the exact information that was available to a real-time forecaster.
The second Chapter, “Real-time nowcasting of GDP: a factor model versus professional forecasters”, performs a fully real-time nowcasting (forecasting) exercise of US real GDP growth using the dynamic factor model (DFM) framework of Giannone, Reichlin and Small (2008), henceforth GRS, which can handle large unbalanced datasets as available in real time. We track the daily evolution of the model's nowcasting performance throughout the current and next quarter. Similarly to GRS's pseudo real-time results, we find that the precision of the nowcasts increases with information releases. Moreover, the Survey of Professional Forecasters does not carry additional information with respect to the model, suggesting that the often-cited superiority of the former, attributable to judgment, is weak
over our sample. As one moves forward along the real-time data flow, the continuous
updating of the model provides a more precise estimate of current quarter GDP growth and
the Survey of Professional Forecasters becomes stale. These results are robust to the recent
recession period.
The last Chapter, “Real-time forecasting in a data-rich environment”, evaluates the ability
of different models, to forecast key real and nominal U.S. monthly macroeconomic variables
in a data-rich environment and from the perspective of a real-time forecaster. Among the approaches used to forecast in a data-rich environment, we use the pooling of bivariate forecasts, an indirect way to exploit a large cross-section, and the direct pooling of information using a high-dimensional model (a DFM and a Bayesian VAR). Furthermore, forecast combination schemes are used to overcome the choice of model specification faced by the practitioner (e.g. which criteria to use to select the model's parametrization), as we seek evidence regarding the performance of a model that is robust across specifications/combination schemes. Our findings show that the predictability of the real variables is confined to the recent recession/crisis period. This is in line with the findings of D'Agostino and Giannone (2012) over an earlier period: gains in the relative performance of models using large datasets over univariate models are driven by downturn periods, which are characterized by higher comovements. These results are robust to the combination schemes
or models used. A point worth mentioning is that for nowcasting GDP, exploiting cross-sectional information along the real-time data flow also helps over the end of the great moderation period. Since GDP is a quarterly aggregate proxying the state of the economy, monthly variables carry information content for it. But similarly to the findings for the monthly variables, predictability, as measured by the gains relative to the naive random walk model, is higher during crisis/recession periods than during tranquil times. Regarding inflation, results are stable across time, but predictability is mainly found at nowcasting and forecasting one month ahead, with the BVAR standing out at nowcasting. The results show that the forecasting gains at these short horizons stem mainly from exploiting timely information. The results also show that the direct pooling of information using a high-dimensional model (DFM or BVAR), which takes into account the cross-correlation between the variables and efficiently deals with the "ragged-edge" structure of the dataset, yields more accurate forecasts than the indirect pooling of bivariate forecasts/models.
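Predictability relative to the naive random walk, as measured in this chapter, is commonly summarized by a relative RMSE. A minimal sketch with hypothetical numbers (values below one mean the model beats the benchmark):

```python
# Relative predictive gain against the naive random walk benchmark.
# The series and the model forecasts below are made up for illustration.
def rmse(actual, pred):
    return (sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)) ** 0.5

series = [1.0, 1.2, 0.9, 1.1, 1.4, 1.3, 1.5]
actual = series[1:]
naive = series[:-1]   # random walk: forecast = last observed value
model = [1.1, 1.0, 1.0, 1.3, 1.35, 1.45]   # hypothetical model forecasts

relative_rmse = rmse(actual, model) / rmse(actual, naive)
# relative_rmse < 1: the model improves on the naive benchmark
```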
Doctorate in Economics and Management Sciences
Marsilli, Clément. "Mixed-Frequency Modeling and Economic Forecasting." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2023/document.
Full text
The economic downturns and recessions that many countries experienced in the wake of the global financial crisis demonstrate how important, but difficult, it is to forecast macroeconomic fluctuations, especially within a short time horizon. This doctoral dissertation studies, analyses and develops models for economic growth forecasting. The set of information coming from economic activity is vast and disparate. In fact, time series coming from the real and financial economy do not have the same characteristics, in terms of both sampling frequency and predictive power. Therefore, short-term forecasting models should allow the use of mixed-frequency data while remaining parsimonious. The first chapter is dedicated to time series econometrics within a mixed-frequency framework. The second chapter contains two empirical works that shed light on macro-financial linkages by assessing the leading role of daily financial volatility in macroeconomic prediction during the Great Recession. The third chapter extends the mixed-frequency model into a Bayesian framework and presents an empirical study using a stochastic-volatility-augmented mixed data sampling model. The fourth chapter focuses on variable selection techniques in mixed-frequency models for short-term forecasting. We address the selection issue by developing mixed-frequency-based dimension reduction techniques in a cross-validation procedure that allows automatic in-sample selection based on recent forecasting performances. Our model succeeds in constructing an objective variable selection with broad applicability.
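Mixed data sampling (MIDAS) regressions of the kind this dissertation builds on aggregate high-frequency regressors, such as daily financial volatility, with a parsimonious lag polynomial; a common choice is the exponential Almon weighting sketched below (the parameter values are arbitrary, chosen only for illustration).

```python
# Exponential Almon lag polynomial commonly used in MIDAS regressions:
# weights over k high-frequency lags governed by just two parameters.
import math

def exp_almon_weights(k, theta1, theta2):
    """w_j proportional to exp(theta1*j + theta2*j^2), normalized to sum to one."""
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(1, k + 1)]
    s = sum(raw)
    return [r / s for r in raw]

# e.g. 20 daily lags of volatility aggregated into one low-frequency regressor
w = exp_almon_weights(20, 0.05, -0.01)
# with theta2 < 0 the weights eventually decay, downweighting distant days
```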
Garda, Paula. "Essays on the macroeconomics of labor markets." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/119820.
Full text
This thesis sheds light on several macroeconomic aspects of labor markets. The first chapter focuses on the impact of dual labor markets on human capital investment. Using a Spanish Social Security dataset, the wage losses of permanent and fixed-term workers after job changes are analyzed. The results indicate that workers with permanent contracts accumulate a larger share of firm-specific human capital than workers with fixed-term contracts. The impact on productivity is analyzed by calibrating a model à la Mortensen and Pissarides (1994) with endogenous human capital investment and a dual labor market. The second chapter develops a model to explain differences in the cyclical fluctuations of informal employment across developing countries. The explanation is based on institutional differences between the formal and informal sectors. The third chapter proposes a model that uses unemployment inflows and outflows to forecast the unemployment rate. It analyzes the conditions under which this model outperforms standard time series models and empirically quantifies this contribution for several OECD countries.
Bauch, Jacob H. "The Impact of Oil Prices on the U.S. Economy." Scholarship @ Claremont, 2011. http://scholarship.claremont.edu/cmc_theses/146.
Full textCicconi, Claudia. "Essays on macroeconometrics and short-term forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209660.
Full text
This thesis is composed of three chapters. The first two chapters are on nowcasting, a topic that has received increasing attention among both practitioners and academics, especially during and in the aftermath of the 2008-2009 economic crisis. At the heart of the two chapters is the idea of exploiting the information from data published at a higher frequency to obtain early estimates of the macroeconomic variable of interest. The models used to compute the nowcasts are dynamic models designed to handle efficiently the characteristics of data used in a real-time context, such as the fact that, owing to the different frequencies and the non-synchronicity of releases, the time series generally have missing data at the end of the sample. While the first chapter uses a small model, a VAR, for nowcasting Italian GDP, the second makes use of a dynamic factor model, more suitable for handling medium-to-large data sets, to provide early estimates of employment in the euro area. The third chapter develops a topic only marginally touched on by the second chapter, i.e. the estimation of dynamic factor models on data characterized by block-structures.
The first chapter assesses the accuracy of Italian GDP nowcasts based on a small information set consisting of GDP itself, the industrial production index and the Economic Sentiment Indicator. The task is carried out using real-time vintages of data in an out-of-sample exercise over rolling windows of data. Besides using real-time data, the real-time setting of the exercise is also guaranteed by updating the nowcasts according to the historical release calendar. The model used to compute the nowcasts is a mixed-frequency Vector Autoregressive (VAR) model, cast in state-space form and estimated by maximum likelihood. The results show that the model can provide quite accurate early estimates of Italian GDP growth rates, not only with respect to a naive benchmark but also with respect to a bridge model based on the same information set and a mixed-frequency VAR with only GDP and the industrial production index.
The chapter also analyzes with some attention the role of the Economic Sentiment Indicator, and of soft information in general. The comparison of our mixed-frequency VAR with one including only GDP and the industrial production index clearly shows that using soft information helps obtain more accurate early estimates. Evidence is also found that the advantage from using soft information goes beyond its timeliness.
In the second chapter we focus on nowcasting the quarterly national account employment of the euro area, making use of both country-specific and area-wide information. The relevance of anticipating Eurostat estimates of employment rests on the fact that, although it represents an important macroeconomic variable, euro area employment is measured at a relatively low frequency (quarterly) and published with a considerable delay (approximately two and a half months). Obtaining an early estimate of this variable is possible thanks to the fact that several Member States publish employment data and employment-related statistics in advance of the Eurostat release of euro area employment. Data availability nevertheless represents a major limitation, as country-level time series are in general non-homogeneous, have different starting periods and, in some cases, are very short. We construct a data set of monthly and quarterly time series consisting of both aggregate and country-level data on Quarterly National Account employment, employment expectations from business surveys, and Labour Force Survey employment and unemployment. In order to perform a real-time out-of-sample exercise simulating the (pseudo) real-time availability of the data, we construct an artificial calendar of data releases based on the effective calendar observed during the first quarter of 2012. The model used to compute the nowcasts is a dynamic factor model allowing for mixed-frequency data, missing data at the beginning of the sample, and the ragged edges typical of non-synchronous data releases. Our results show that using country-specific information as soon as it is available allows us to obtain reasonably accurate estimates of euro area employment about fifteen days before the end of the quarter.
We also look at the nowcasts of employment for the four largest Member States. We find that (with the exception of France) augmenting the dynamic factor model with country-specific factors provides better results than those obtained with the model without country-specific factors.
The third chapter of the thesis deals with dynamic factor models on data characterized by local cross-correlation due to the presence of block-structures. The latter is modeled by introducing block-specific factors, i.e. factors that are specific to blocks of time series. We propose an algorithm to estimate the model by (quasi) maximum likelihood and use it to run Monte Carlo simulations to evaluate the effects of modeling, or not, the block-structure on the estimates of the common factors. We find two main results: first, in finite samples, modeling the block-structure, besides being interesting per se, can help reduce model misspecification and yield more accurate estimates of the common factors; second, imposing a wrong block-structure, or imposing a block-structure when none is present, does not have negative effects on the estimates of the common factors. These two results allow us to conclude that it is always advisable to model the block-structure, especially if the characteristics of the data suggest that there is one.
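The kind of local cross-correlation that a block-specific factor is meant to capture can be reproduced in a small simulation. The sketch below (invented loadings and block names, unrelated to the chapter's Monte Carlo design) shows that series sharing a block factor are more correlated than series in different blocks.

```python
# Synthetic data with a block-structure: each series loads on a common
# factor and on a factor specific to its block, so correlations within
# a block exceed those between blocks.
import random

random.seed(1)
T, n_per_block = 200, 5
blocks = ["real", "nominal"]

common = [random.gauss(0, 1) for _ in range(T)]
block_factor = {b: [random.gauss(0, 1) for _ in range(T)] for b in blocks}

series = {}
for b in blocks:
    for i in range(n_per_block):
        lam = random.uniform(0.5, 1.5)   # loading on the common factor
        gam = random.uniform(0.5, 1.5)   # loading on the block factor
        series[(b, i)] = [lam * common[t] + gam * block_factor[b][t]
                          + random.gauss(0, 0.5) for t in range(T)]

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((a - my) ** 2 for a in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

within = [corr(series[("real", i)], series[("real", j)])
          for i in range(n_per_block) for j in range(i + 1, n_per_block)]
between = [corr(series[("real", i)], series[("nominal", j)])
           for i in range(n_per_block) for j in range(n_per_block)]
avg_within = sum(within) / len(within)
avg_between = sum(between) / len(between)
# avg_within > avg_between: the local cross-correlation a plain one-factor
# model would miss and a block-specific factor is designed to absorb
```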
Doctorate in Economics and Management Sciences
Coroneo, Laura. "Essays on modelling and forecasting financial time series." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210284.
Full text
The first chapter investigates the distribution of high-frequency financial returns, with special emphasis on intraday seasonality. Using quantile regression, I show the expansions and contractions of the probability law through the day for three years of 15-minute sampled stock returns. Returns are more dispersed and less concentrated around the median at the hours near the opening and closing. I provide intraday value-at-risk assessments and show how they adapt to changes in dispersion over the day. The tests performed on the out-of-sample forecasts of the value at risk show that the model is able to provide good risk assessments and to outperform standard Gaussian and Student's t GARCH models.
The second chapter shows that macroeconomic indicators are helpful in forecasting the yield curve. I incorporate a large number of macroeconomic predictors within the Nelson and Siegel (1987) model for the yield curve, which can be cast in a common factor model representation. Rather than including macroeconomic variables as additional factors, I use them to extract the Nelson and Siegel factors. Estimation is performed by EM algorithm and Kalman filter using a data set composed by 17 yields and 118 macro variables. Results show that incorporating large macroeconomic information improves the accuracy of out-of-sample yield forecasts at medium and long horizons.
The third chapter statistically tests whether the Nelson and Siegel (1987) yield curve model is arbitrage-free. Theoretically, the Nelson-Siegel model does not ensure the absence of arbitrage opportunities. Still, central banks and public wealth managers rely heavily on it. Using a non-parametric resampling technique and zero-coupon yield curve data from the US market, I find that the no-arbitrage parameters are not statistically different from those obtained from the Nelson and Siegel model, at a 95 percent confidence level. I therefore conclude that the Nelson and Siegel yield curve model is compatible with arbitrage-freeness.
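For reference, the Nelson and Siegel (1987) model used in the last two chapters writes the yield at each maturity as a combination of level, slope and curvature factors with a common decay parameter. A minimal sketch with arbitrary parameter values (not estimates from the thesis):

```python
# Nelson-Siegel (1987) yield curve: level (beta0), slope (beta1) and
# curvature (beta2) factors; lam governs how fast the loadings decay.
import math

def nelson_siegel_yield(tau, beta0, beta1, beta2, lam):
    """Yield at maturity tau (in years)."""
    x = lam * tau
    slope_load = (1 - math.exp(-x)) / x
    curv_load = slope_load - math.exp(-x)
    return beta0 + beta1 * slope_load + beta2 * curv_load

maturities = [0.25, 1, 5, 10]
curve = [nelson_siegel_yield(t, 0.05, -0.03, 0.01, 0.7) for t in maturities]
# short end near beta0 + beta1 = 2%; long end approaching beta0 = 5%
```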
Doctorate in Economics and Management Sciences
Breuss, Fritz. "Would DSGE Models have Predicted the Great Recession in Austria?" Springer International Publishing, 2018. http://epub.wu.ac.at/6086/1/10.1007_s41549%2D018%2D0025%2D1.pdf.
Full textConflitti, Cristina. "Essays on the econometrics of macroeconomic survey data." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209635.
Full textand econometrics of survey data. Chapters one and two analyse two aspects
of the Survey of Professional Forecasters (SPF hereafter) dataset. This survey
provides a large information on macroeconomic expectations done by the professional
forecasters and offers an opportunity to exploit a rich information set.
But it poses a challenge on how to extract the relevant information in a proper
way. The last chapter addresses the issue of analyzing the opinions on the euro
reported in the Flash Eurobaromenter dataset.
The first chapter, "Measuring Uncertainty and Disagreement in the European Survey of Professional Forecasters", proposes a density forecast methodology based on the piecewise linear approximation of individual forecasting histograms to measure the uncertainty and disagreement of professional forecasters. Since the introduction of the SPF in the US in 1960, it has been clear that such surveys are a useful source of information for measuring disagreement and uncertainty without relying on macroeconomic or time series models. Direct measures of uncertainty are seldom available, whereas many surveys report point forecasts from a number of individual respondents, and there has been a long tradition of using measures of the dispersion of individual respondents' point forecasts (disagreement or consensus) as proxies for uncertainty. The SPF is an exception: it directly asks for the point forecast and for the probability distribution, in the form of a histogram, associated with the macro variables of interest. An important issue concerns how to approximate the individual probability densities and obtain accurate individual results for disagreement and uncertainty before computing the aggregate measures. In contrast to Zarnowitz and Lambros (1987) and Giordani and Soderlind (2003), we overcome the problem associated with distributional assumptions on probability density forecasts by using a non-parametric approach that, instead of assuming a functional form for the individual probability law, approximates the histogram by a piecewise linear function. In addition, and unlike earlier works that focus on US data, we employ European data, considering gross domestic product (GDP), inflation and unemployment.
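The aggregation of individual histograms into uncertainty and disagreement measures can be sketched as follows. One hedge: the chapter fits piecewise linear densities to each histogram, whereas this toy example uses the cruder bin-midpoint approximation, and all histograms are made up.

```python
# Uncertainty = average individual forecast variance;
# disagreement = dispersion of individual mean forecasts.
# Midpoint approximation, simpler than the chapter's piecewise linear fit.
bins = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]  # outcome ranges
histograms = [               # one probability vector per forecaster (made up)
    [0.1, 0.6, 0.2, 0.1],
    [0.0, 0.3, 0.5, 0.2],
    [0.2, 0.5, 0.3, 0.0],
]

mids = [(lo + hi) / 2 for lo, hi in bins]

def moments(p):
    mean = sum(w * m for w, m in zip(p, mids))
    var = sum(w * (m - mean) ** 2 for w, m in zip(p, mids))
    return mean, var

stats = [moments(p) for p in histograms]
means = [m for m, _ in stats]
avg_mean = sum(means) / len(means)

uncertainty = sum(v for _, v in stats) / len(stats)
disagreement = sum((m - avg_mean) ** 2 for m in means) / len(means)
```

With these inputs, uncertainty is 0.53 and disagreement about 0.116, illustrating why disagreement alone can understate overall uncertainty.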
The second chapter Optimal Combination of Survey Forecasts is based on
a joint work with Christine De Mol and Domenico Giannone. It proposes an
approach to optimally combine survey forecasts, exploiting the whole covariance
structure among forecasters. There is a vast literature on forecast combination
methods, advocating their usefulness both from the theoretical and empirical
points of view (see e.g. the recent review by Timmermann (2006)). Surprisingly,
it appears that simple methods tend to outperform more sophisticated ones, as
shown for example by Genre et al. (2010) on the combination of the forecasts in
the SPF conducted by the European Central Bank (ECB). The main conclusion of
several studies is that the simple equal-weighted average constitutes a benchmark
that is hard to improve upon. In contrast to much of the literature, which
does not exploit the correlation among forecasters, we take into account the full
covariance structure and we determine the optimal weights for the combination
of point forecasts as the minimizers of the mean squared forecast error (MSFE),
under the constraint that these weights are nonnegative and sum to one. We
compare our combination scheme with other methodologies in terms of forecasting
performance. Results show that the proposed optimal combination scheme is an
appropriate methodology to combine survey forecasts.
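The constrained minimisation described above can be sketched as a small quadratic programme (a generic illustration under assumed names, not the authors' estimator): the MSFE of the combination is a quadratic form in the weights, minimised subject to nonnegativity and a sum-to-one constraint.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(errors):
    """Combination weights minimising the mean squared forecast error of
    the weighted average, subject to w >= 0 and sum(w) == 1.
    `errors` is a T x N matrix of past forecast errors (illustrative setup)."""
    sigma = errors.T @ errors / errors.shape[0]        # error second-moment matrix
    n = sigma.shape[0]
    res = minimize(
        lambda w: w @ sigma @ w,                       # MSFE of the combination
        x0=np.full(n, 1.0 / n),                        # start at the equal-weight benchmark
        jac=lambda w: 2.0 * sigma @ w,
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x
```

The equal-weight benchmark is the natural starting point; the optimiser moves away from it only insofar as the estimated error covariance justifies doing so.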
The literature on point forecast combination is well developed; fewer studies,
however, analyze the combination of density forecasts. We extend our work to
density forecast combination. Building on the main results of Hall and Mitchell
(2007), we propose an iterative algorithm for computing the density weights
that maximize the average logarithmic score over the sample period. The
empirical application is to European GDP and inflation forecasts. The results
suggest that the optimal weights obtained via the iterative algorithm outperform
the equal-weighted density combinations used by the ECB.
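The kind of iterative update involved can be sketched as follows (an EM-style scheme in the spirit of optimal density pooling; variable names and stopping rule are assumptions, not the chapter's code).

```python
import numpy as np

def log_score_weights(dens, iters=1000, tol=1e-12):
    """Iteratively compute pool weights maximising the average log score
    of the combined density. `dens` is a T x N matrix holding each
    forecaster's predictive density evaluated at the realised outcome."""
    T, N = dens.shape
    w = np.full(N, 1.0 / N)                       # start from equal weights
    for _ in range(iters):
        pool = dens @ w                           # combined density at each outcome
        w_new = (dens * w).T @ (1.0 / pool) / T   # reweight by each forecaster's share
        if np.max(np.abs(w_new - w)) < tol:
            break
        w = w_new
    return w
```

Each iteration raises (or leaves unchanged) the average log score, and the weights stay nonnegative and sum to one by construction.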
The third chapter, entitled “Opinion surveys on the euro: a multilevel multinomial
logistic analysis”, outlines the multilevel aspects of public attitudes
toward the euro. This work was motivated by the ongoing debate on whether the
perception of the euro among European citizens, ten years after its introduction,
was positive or negative. The aim of this work is, therefore, to disentangle
public attitudes by considering both individual socio-demographic characteristics
and macroeconomic features of each country, treating them
as two separate levels in a single analysis. This hierarchical structure
is advantageous because it models within-country as well as between-country
relations in a single analysis. The multilevel approach accounts for the
dependence between individuals within countries induced by unobserved
heterogeneity between countries, i.e. we include in the estimation
country-specific characteristics that are not directly observable. In this chapter we empirically
investigate which individual characteristics and country specificities are
most important and affect the perception of the euro. The attitudes toward the
euro vary across individuals and countries, and are driven by personal considerations
based on the benefits and costs of using the single currency. Individual
features, such as a high level of education or living in a metropolitan area, have
a positive impact on the perception of the euro. Moreover, country-specific
economic conditions can influence individuals' attitudes.
Doctorate in Economics and Management Sciences
Hörnell, Fredrik, and Melina Hafelt. "Responsiveness of Swedish housing prices to the 2018 amortization requirement : An investigation using a structural Vector autoregressive model to estimate the impact of macro prudential regulation on the Swedish housing market." Thesis, Södertörns högskola, Nationalekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-35533.
Full text
Mayr, Johannes. "Forecasting Macroeconomic Aggregates." Diss., lmu, 2010. http://nbn-resolving.de/urn:nbn:de:bvb:19-111404.
Full text
Tagliabracci, Alex. "Essays on macroeconomic forecasting." Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/665202.
Full text
This thesis is a collection of three empirical essays with a focus on forecasting. The first chapter addresses an important policy task, forecasting inflation, and investigates how the dynamics of the business cycle affect the distribution of inflation forecasts. The second chapter considers two econometric models used in the nowcasting literature and proposes a comparison with an application to Italian GDP. The last chapter is centered on forecasting the effects of macroeconomic data releases on exchange rates.
The first chapter studies how the business cycle affects the conditional distribution of euro area inflation forecasts. Using a quantile regression approach, I estimate the conditional distribution of inflation to show its evolution over time, allowing for asymmetries across quantiles. I document evidence of downside risks to inflation that vary with the state of the economy, while the upside risk remains relatively stable over time. I also find that this evidence partially characterizes the corresponding distribution derived from the ECB Survey of Professional Forecasters.
The second chapter proposes two multivariate econometric models that exploit two features central to the nowcasting literature, timely and high-frequency data, to predict Italian GDP: a dynamic factor model and a mixed-frequency Bayesian VAR. A pseudo out-of-sample exercise shows three main results: (i) both models considerably outperform a standard univariate benchmark; (ii) the dynamic factor model turns out to be more reliable at the end of the forecasting period, while the mixed-frequency BVAR appears superior with an incomplete information set; (iii) the overall forecasting superiority of the dynamic factor model is mainly driven by its ability to capture the severity of recession episodes.
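The quantile-regression idea behind the first chapter can be illustrated with a minimal sketch on simulated data (both the data-generating process and the pinball-loss estimator below are illustrative assumptions, not the thesis's implementation): fitting one regression per quantile traces out a conditional distribution whose tails move with the cycle.

```python
import numpy as np
from scipy.optimize import minimize

def quantile_reg(y, X, q):
    """Linear quantile regression by direct minimisation of the pinball
    (check) loss; a minimal stand-in for standard QR estimators."""
    def loss(beta):
        u = y - X @ beta
        return np.mean(np.where(u >= 0, q * u, (q - 1.0) * u))
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting values
    return minimize(loss, beta0, method="Nelder-Mead").x

rng = np.random.default_rng(0)
cycle = rng.normal(size=400)                          # hypothetical business-cycle indicator
# dispersion of inflation widens with the cycle (heteroskedastic noise)
infl = 2.0 + 0.3 * cycle + (1.0 + 0.5 * np.abs(cycle)) * rng.normal(size=400)
X = np.column_stack([np.ones_like(cycle), cycle])
betas = {q: quantile_reg(infl, X, q) for q in (0.05, 0.5, 0.95)}
```

Stacking the fitted quantiles over a grid of q sketches the whole conditional distribution of inflation at each value of the indicator.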
Finally, the third chapter, jointly written with Luca Brugnolini and Antonello D'Agostino, investigates the possible predictability of macroeconomic surprises and their effects on exchange rates. In particular, we analyze two of the most important data releases affecting the US financial market: the change in non-farm payroll employment (NFP) and the manufacturing index published by the Institute for Supply Management (ISM). We examine the unexpected component of these releases, measured as the deviation of the actual release from the Bloomberg consensus. We label it the market surprise and investigate whether, and in which cases, its structure is partially predictable. Second, we use high-frequency data on the euro-dollar exchange rate as a laboratory to study the effect of these surprises. We show in a regression framework that, although the in-sample fit is good, performance deteriorates out of sample: a naive model can hardly be beaten in the sixty-minute window after the release. Finally, we demonstrate that under certain circumstances there is some structure that can be exploited, and we provide a framework to take advantage of it.
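The out-of-sample horse race against a naive benchmark can be sketched generically (illustrative code under assumed names; the naive model simply predicts no reaction after the release).

```python
import numpy as np

def oos_relative_mse(y, x, window=100):
    """Rolling out-of-sample MSE of a surprise regression, relative to a
    naive benchmark predicting a zero post-release return.
    A ratio below 1 means the regression beats the naive model."""
    sq_model, sq_naive = [], []
    for t in range(window, len(y)):
        # re-estimate on the most recent `window` observations only
        X = np.column_stack([np.ones(window), x[t - window:t]])
        beta, *_ = np.linalg.lstsq(X, y[t - window:t], rcond=None)
        sq_model.append((y[t] - (beta[0] + beta[1] * x[t])) ** 2)
        sq_naive.append(y[t] ** 2)                 # naive: no reaction
    return np.mean(sq_model) / np.mean(sq_naive)
```

Here `y` would be the exchange-rate move in the post-release window and `x` the surprise; a ratio near or above one reproduces the chapter's finding that the naive model is hard to beat.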
Yetman, James Arthur. "Essays in macroeconomic forecasting." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ35986.pdf.
Full text
Hutson, Mark Duffield. "Three Essays on Macroeconomic Forecasting." Thesis, The George Washington University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3719383.
Full text
My dissertation consists of three essays on econometric forecasting and forecast evaluation. In the first essay, I apply a common qualitative-analysis framework to a widely available consensus forecast and evaluate its performance using the Predictive Failure statistic. I find that the survey respondents provide statistically significant directional forecasts, or signals. The second essay evaluates the forecasts of a prominent energy policy model: it analyzes the forecasts, develops a rival forecasting model, and attempts to identify information that would improve the policy model's forecasts. The analysis finds that the policy model generally performs well against a sophisticated time-series model; in particular, it appears to incorporate data dynamics appropriately, and the rival model can serve as a new benchmark for evaluating the forecasts. The third essay examines the role of data revisions in models employing error-correction relationships. These relationships are rooted in economic theory and thus should be robust to changing economic conditions and data revisions. The essay finds that an older, popular model breaks down on newer data vintages, suggesting that data revisions and data issues can affect the stability of vector error-correction models.
Verra, Christina. "Macroeconomic forecasting using model averaging." Thesis, Queen Mary, University of London, 2009. http://qmro.qmul.ac.uk/xmlui/handle/123456789/383.
Full text
Sundberg, David. "Yield curve forecasting using macroeconomic proxy variables." Thesis, Umeå universitet, Institutionen för fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-158326.
Full text
Zeng, Jing [Verfasser]. "Forecasting Euro Area Macroeconomic Aggregate Variables / Jing Zeng." Konstanz : Bibliothek der Universität Konstanz, 2015. http://d-nb.info/1112336702/34.
Full text
Costantini, Mauro, Jesus Crespo Cuaresma, and Jaroslava Hlouskova. "Can Macroeconomists Get Rich Forecasting Exchange Rates?" WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4181/1/wp176.pdf.
Full textSeries: Department of Economics Working Paper Series
Korompilis-Magkas, Dimitris. "Three essays in macroeconomic forecasting using Bayesian model selection." Thesis, University of Strathclyde, 2010. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=18236.
Full text
Mayr, Johannes. "Forecasting macroeconomic aggregates : pooling of forecasts and pooling of information." kostenfrei, 2009. http://d-nb.info/100055046X/34.
Full text
Reade, J. James. "Macroeconomic modelling and forecasting in the face of non-stationarity." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.495733.
Full text
Qiang, Fu. "Bayesian multivariate time series models for forecasting European macroeconomic series." Thesis, University of Hull, 2000. http://hydra.hull.ac.uk/resources/hull:8068.
Full text
Rangel, Jose Gonzalo. "Stock market volatility and price discovery : three essays on the effect of macroeconomic information." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3220417.
Full textTitle from first page of PDF file (viewed September 7, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 125-130).
Berardi, Andrea. "Term structure of interest rates, non-neutral inflation and economic growth." Thesis, London Business School (University of London), 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266078.
Full text
Nguyen, Dat-Dao. "Forecasting macroeconomic models with artificial neural networks : an empirical investigation into the foundation for an intelligent forecasting system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0021/NQ47705.pdf.
Full text
Stankiewicz, Sandra [Verfasser]. "Forecasting and econometric modelling of macroeconomic and financial time series / Sandra Stankiewicz." Konstanz : Bibliothek der Universität Konstanz, 2015. http://d-nb.info/1079666028/34.
Full text
Pirschel, Inske [Verfasser]. "Essays on Preferences and Nominal Rigidities and on Macroeconomic Forecasting / Inske Pirschel." Kiel : Universitätsbibliothek Kiel, 2016. http://d-nb.info/1122110944/34.
Full text
Bañbura, Marta. "Essays in dynamic macroeconometrics." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210294.
Full text
The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as a “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.
The first chapter of the thesis, entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP”, is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how it is related to their timeliness.
The soft data include surveys and financial indicators and reflect market expectations. They are usually promptly available. In contrast, the hard indicators of real activity directly measure certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series, or groups of series, in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information for the GDP forecasts beyond the monthly real activity measures, once their timeliness is properly accounted for.
The second chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge”, but can include e.g. mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) or Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the
latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of the euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and short history monthly series like the Purchasing Managers' surveys on the forecast.
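The alternating logic of EM estimation with an arbitrary pattern of missing data can be sketched in a deliberately simplified form (principal components stand in for the full maximum-likelihood machinery of the chapter; the function and its assumption of standardized data are illustrative).

```python
import numpy as np

def em_factor_impute(X, r, iters=500):
    """EM-style estimation of an r-factor model on a panel with an
    arbitrary pattern of missing data: alternate between imputing the
    missing entries with the current common-component fit and
    re-estimating the factors by principal components (SVD)."""
    mask = np.isnan(X)
    Z = np.where(mask, 0.0, X)                 # initialise missing entries at the mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        common = (U[:, :r] * s[:r]) @ Vt[:r]   # rank-r common component
        Z = np.where(mask, common, X)          # E-step: fill gaps, keep observed data
    return Z
```

Mixed-frequency or short-history series simply contribute more missing entries; the same two alternating steps apply regardless of where the gaps fall.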
The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can also be relevant for structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an
important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation by De Mol, Giannone and Reichlin (2008) who study the case of Bayesian regressions. They show that with increasing size of the model one should shrink more to avoid overfitting, but when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.
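The principle of tying shrinkage to model size can be sketched with a stylised, zero-centred ridge form of a Minnesota-type prior (a simplification; the actual Litterman prior has own-lag means and lag-decaying variances, and the function name is an assumption).

```python
import numpy as np

def bvar_coef_posterior(Y, X, lam):
    """Posterior mean of the VAR coefficients under a stylised
    Minnesota-type prior centred at zero: prior precision grows as the
    overall tightness lam shrinks, so a larger system should use a
    smaller lam to avoid overfitting."""
    k = X.shape[1]
    # ridge form: (X'X + I/lam^2)^(-1) X'Y
    return np.linalg.solve(X.T @ X + np.eye(k) / lam ** 2, X.T @ Y)
```

Following the logic of De Mol, Giannone and Reichlin (2008), lam would be set smaller as the number of variables grows, shrinking the 100-variable system far more heavily than a small one while the collinearity of the data preserves the relevant sample information.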
The fourth chapter, entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales”, proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. This chapter investigates empirically which frequency bands, and of which variables, are most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. A wavelet transform is applied to extract the different frequency components of a series; it provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application in multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
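The band-by-band logic can be illustrated with a simple multiresolution decomposition (a moving-average à trous scheme as a stand-in for the wavelet filters actually used; by construction the components sum back to the series exactly).

```python
import numpy as np

def multiscale_components(x, levels=3):
    """Split a series into detail components at successively lower
    frequency bands plus a residual trend. detail_j captures
    fluctuations around the scale-2^j moving average, and
    sum(details) + trend reproduces the series exactly."""
    smooth = np.asarray(x, dtype=float)
    details = []
    for j in range(1, levels + 1):
        kernel = np.ones(2 ** j) / 2 ** j
        new_smooth = np.convolve(smooth, kernel, mode="same")
        details.append(smooth - new_smooth)    # band-specific component
        smooth = new_smooth
    return details, smooth                     # per-band details + trend
```

Each band component of money, real activity and prices could then enter the forecasting equation separately, letting the data select the relevant scales.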
Doctorate in Economics and Management Sciences
Bradshaw, Girard W. "Detecting macroeconomic impacts on agricultural prices and export sales : a time series forecasting approach /." Thesis, This resource online, 1988. http://scholar.lib.vt.edu/theses/available/etd-04122010-083628/.
Full text
Sadik, Zryan. "Asset price and volatility forecasting using news sentiment." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17079.
Full text
Fodor, Bryan D. "The effect of macroeconomic variables on the pricing of common stock under trending market conditions." Thesis, Department of Business Administration, University of New Brunswick, 2003. http://hdl.handle.net/1882/49.
Full textTypescript. Bibliography: leaves 83-84. Also available online through University of New Brunswick, UNB Electronic Theses & Dissertations.
Heinrich, Markus [Verfasser], Kai [Akademischer Betreuer] Carstensen, and Matei [Gutachter] Demetrescu. "Macroeconomic Forecasting and Business Cycle Analysis with Nonlinear Models / Markus Heinrich ; Gutachter: Matei Demetrescu ; Betreuer: Kai Carstensen." Kiel : Universitätsbibliothek Kiel, 2020. http://d-nb.info/122345293X/34.
Full text
Perendija, Djordje V. "Business cycles, interest rates and market volatility : estimation and forecasting using DSGE macroeconomic models under partial information." Thesis, London Metropolitan University, 2018. http://repository.londonmet.ac.uk/1536/.
Full text
Dechaux, Pierrick. "L'économie face aux enquêtes psychologiques 1944 -1960 : unité de la science économique, diversité des pratiques." Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E025.
Full text
This dissertation looks at the historical development of George Katona's psychological surveys at the Survey Research Center of the University of Michigan. The main legacy of this work has been the widespread adoption of confidence indicators: they are compiled each month in more than fifty countries and widely used by business managers and forecasters. How do we explain this widespread usage despite a prevailing consensus in macroeconomics and microeconomics that does not consider them important tools? To answer this question, I study several controversies that arose around the Michigan surveys between 1944 and 1960. This era was characterized by many interdisciplinary exchanges guided by the practical needs of decision-makers in governments and private companies. I show that if economists know little about these debates, it is because they were conducted in disciplinary fields on the periphery of economics, fields centered on practical problems that theoretical economists progressively abandoned. This thesis offers a new way of understanding the history of recent macroeconomics and behavioral economics by analysing the links between economic theory and its application in practice. For instance, the intellectual dynamics of the post-war period cannot be reduced to theoretical innovations or to a new relationship between theory and empiricism; they also rest on the transformation of the boundaries between the science and its art, between economics on the one hand and marketing and forecasting on the other.
CUNHA, JOAO MARCO BRAGA DA. "EXPERIMENTS ON FORECASTING THE AMERICAN TERM STRUCTURE OF INTEREST RATES: MEAN REVERSION, INERTIA AND INFLUENCE OF MACROECONOMIC VARIABLES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14308@1.
Full text
This work proposes a model with mean reversion and inertia for the yields and the loadings of the Nelson and Siegel (1987) factors, and includes selected macroeconomic variables. The generated forecasts are compared with the Random Walk and the Diebold and Li (2006) methodology.