Dissertations / Theses on the topic 'Macroeconomics – Econometric models'
Consult the top 50 dissertations / theses for your research on the topic 'Macroeconomics – Econometric models.'
Steinbach, Max Rudibert. "Essays on dynamic macroeconomics." Thesis, Stellenbosch: Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86196.
Full text
ENGLISH ABSTRACT: In the first essay of this thesis, a medium-scale DSGE model is developed and estimated for the South African economy. When used for forecasting, the model is found to outperform private sector economists when forecasting CPI inflation, GDP growth and the policy rate over certain horizons. In the second essay, the benchmark DSGE model is extended to include the yield on South African 10-year government bonds. The model is then used to decompose the 10-year yield spread into (1) the structural shocks that contributed to its evolution during the inflation targeting regime of the South African Reserve Bank, as well as (2) an expected yield and a term premium. In addition, it is found that changes in the South African term premium may predict future real economic activity. Finally, the need for DSGE models to take account of financial frictions became apparent during the recent global financial crisis. As a result, the final essay incorporates a stylised banking sector into the benchmark DSGE model described above. The optimal response of the South African Reserve Bank to financial shocks is then analysed within the context of this structural model.
Emiris, Marina. "Essays on macroeconomics and finance." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210764.
Full text
Walker, Sébastien. "Essays in development macroeconomics." Thesis, University of Oxford, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.712398.
Full textSantos, Monteiro Paulo. "Essays on uninsurable individual risk and heterogeneity in macroeconomics." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210528.
Full text
Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished
Delle Monache, Davide. "Essays on state space models and macroeconomic modelling." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609745.
Full text
De Antonio Liedo, David. "Structural models for macroeconomics and forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210142.
Full text
This thesis contributes to central debates in empirical macroeconomic modeling. Chapter 1, entitled “A Model for Real-Time Data Assessment with an Application to GDP Growth Rates”, provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. The model thus encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the available statistical tests do not formulate the two extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008): it is possible to reject or accept both of them simultaneously. In turn, the model for the data publication process presented here allows for the simultaneous presence of both noise and news. While the “regression approach” followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it cannot quantify it, as our model does.
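For readers who want to see the two polar tests that the chapter generalizes, here is a minimal sketch of the classical Mankiw-Runkle-Shapiro style news and noise regressions on simulated data (the pure-noise data generating process and all variable names are illustrative assumptions, not the thesis's model):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated pure-noise publication process: the preliminary release equals
# the final figure plus measurement error (illustrative, not the thesis's DGP)
final = rng.normal(2.0, 1.0, 200)            # final announcements
prelim = final + rng.normal(0.0, 0.5, 200)   # preliminary announcements
revision = final - prelim                    # total revision

# News test: if preliminary figures are efficient forecasts (news view),
# the revision should be unpredictable from the preliminary figure
news = sm.OLS(revision, sm.add_constant(prelim)).fit()

# Noise test: if preliminary figures are final figures plus noise (noise view),
# the revision should be uncorrelated with the final figure
noise = sm.OLS(revision, sm.add_constant(final)).fit()

print(news.params.round(2), news.pvalues.round(3))    # slope != 0: noise DGP
print(noise.params.round(2), noise.pvalues.round(3))  # slope ~ 0 here
```

Note that neither regression quantifies how much noise and news coexist, which is precisely the gap the chapter's model is designed to fill.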
The second and third chapters acknowledge the possibility that macroeconomic data are measured with error, but the approach followed to model the mismeasurement is deliberately stylized and does not capture the complexity of the revision process described in the first chapter.
Chapter 2, entitled “Revisiting the Success of the RBC model”, proposes the use of dynamic factor models as an alternative to the VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced-form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful for forecasting than the strong behavioral assumptions imposed by the microfoundations of the model economy.
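The “formal test of predictive accuracy” can be illustrated with a standard Diebold-Mariano statistic under squared-error loss (a textbook sketch on simulated errors; the thesis does not necessarily use this exact implementation):

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """Test of equal predictive accuracy under squared-error loss.
    e1, e2: out-of-sample forecast errors of the two competing models;
    h: forecast horizon (h-1 autocovariances enter the long-run variance)."""
    d = e1 ** 2 - e2 ** 2                 # loss differential
    n = len(d)
    v = d.var(ddof=0)                     # long-run variance, rectangular kernel
    for k in range(1, h):
        v += 2.0 * np.cov(d[k:], d[:-k], ddof=0)[0, 1]
    dm = d.mean() / np.sqrt(v / n)
    return dm, 2 * (1 - stats.norm.cdf(abs(dm)))

# Example: model 2 has genuinely smaller errors than model 1
rng = np.random.default_rng(1)
e1 = rng.normal(0.0, 1.0, 150)
e2 = rng.normal(0.0, 0.8, 150)
print(diebold_mariano(e1, e2))  # positive statistic: model 2's losses are smaller
```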
The last chapter, “What are Shocks Capturing in DSGE modeling”, contributes to current debates on the use and interpretation of larger DSGE models. A recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005, Uhlig, 2004). In this chapter, we propose to reconcile the two views by considering an alternative DSGE estimation approach which explicitly models the statistical agency along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than the one resulting from the original specification.
Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished
Calver, Robin Barnaby. "Macroeconomic and Political Determinants of Foreign Direct Investment in the Middle East." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1074.
Full text
Jindal, Bhavin. "The Chinese Dragon Lands in Africa: Chinese Contracts and Economic Growth in Africa." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1564.
Full textJi, Inyeob Economics Australian School of Business UNSW. "Essays on testing some predictions of RBC models and the stationarity of real interest rates." Publisher:University of New South Wales. Economics, 2008. http://handle.unsw.edu.au/1959.4/41441.
Full textConflitti, Cristina. "Essays on the econometrics of macroeconomic survey data." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209635.
Full text
This thesis consists of three chapters on the statistics and econometrics of survey data. Chapters one and two analyse two aspects of the Survey of Professional Forecasters (SPF hereafter) dataset. This survey provides a large amount of information on the macroeconomic expectations of professional forecasters and thus offers an opportunity to exploit a rich information set, but it poses the challenge of how to extract the relevant information in a proper way. The last chapter addresses the issue of analyzing the opinions on the euro reported in the Flash Eurobarometer dataset.
The first chapter, Measuring Uncertainty and Disagreement in the European Survey of Professional Forecasters, proposes a density forecast methodology based on the piecewise linear approximation of the individual forecasting histograms to measure the uncertainty and disagreement of professional forecasters. Since the introduction of the SPF in the US in 1960, it has been clear that such surveys are a useful source of information for measuring disagreement and uncertainty without relying on macroeconomic or time series models. Direct measures of uncertainty are seldom available, whereas many surveys report point forecasts from a number of individual respondents, and there is a long tradition of using measures of the dispersion of individual respondents' point forecasts (disagreement or consensus) as proxies for uncertainty. The SPF is an exception: it directly asks for the point forecast and for the probability distribution, in the form of a histogram, associated with the macro variables of interest. An important issue is how to approximate the individual probability densities and obtain accurate individual results for disagreement and uncertainty before computing the aggregate measures. In contrast to Zarnowitz and Lambros (1987) and Giordani and Soderlind (2003), we overcome the problem associated with distributional assumptions on the probability density forecasts by using a nonparametric approach that, instead of assuming a functional form for the individual probability law, approximates the histogram by a piecewise linear function. In addition, and unlike earlier works that focus on US data, we employ European data, considering gross domestic product (GDP), inflation and unemployment.
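As an illustration of the measurement problem, the sketch below computes individual uncertainty and aggregate disagreement from SPF-style histograms under a piecewise-uniform (histogram) density, a simplification of the chapter's piecewise linear scheme; the bin edges and probabilities are made up:

```python
import numpy as np

# Illustrative SPF-style histograms: bin edges (inflation, %) and, per
# forecaster, the probabilities attached to each bin (rows sum to one)
edges = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
probs = np.array([
    [0.00, 0.05, 0.20, 0.50, 0.20, 0.05],
    [0.05, 0.15, 0.40, 0.30, 0.10, 0.00],
    [0.00, 0.10, 0.30, 0.40, 0.15, 0.05],
])

mid = 0.5 * (edges[:-1] + edges[1:])           # bin midpoints
width = np.diff(edges)

# Individual means and variances under a uniform density within each bin:
# E[X^2] adds the within-bin variance width^2 / 12 to the midpoint term
means = probs @ mid
variances = probs @ (mid ** 2 + width ** 2 / 12.0) - means ** 2

uncertainty = variances.mean()   # average individual uncertainty
disagreement = means.var()       # dispersion of individual point predictions

print(f"uncertainty={uncertainty:.3f}, disagreement={disagreement:.3f}")
```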
The second chapter, Optimal Combination of Survey Forecasts, is based on joint work with Christine De Mol and Domenico Giannone. It proposes an approach to optimally combine survey forecasts, exploiting the whole covariance structure among forecasters. There is a vast literature on forecast combination methods advocating their usefulness from both the theoretical and the empirical point of view (see e.g. the recent review by Timmermann (2006)). Surprisingly, simple methods tend to outperform more sophisticated ones, as shown for example by Genre et al. (2010) on the combination of the forecasts in the SPF conducted by the European Central Bank (ECB). The main conclusion of several studies is that the simple equal-weighted average constitutes a benchmark that is hard to improve upon. In contrast to a large part of the literature, which does not exploit the correlation among forecasters, we take into account the full covariance structure and determine the optimal weights for the combination of point forecasts as the minimizers of the mean squared forecast error (MSFE), under the constraint that these weights are nonnegative and sum to one. We compare our combination scheme with other methodologies in terms of forecasting performance, and the results show that the proposed optimal combination scheme is an appropriate methodology for combining survey forecasts.
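A minimal sketch of the underlying optimization, minimizing the combination's MSFE over the simplex of weights with a generic quadratic-programming routine (the chapter's estimator handles a large cross-section of forecasters; the data and names here are simulated and illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Illustrative data: T past periods, n forecasters, plus the realizations
T, n = 120, 5
target = rng.normal(size=T)
forecasts = target[:, None] + rng.normal(scale=[0.4, 0.6, 0.8, 1.0, 1.2],
                                         size=(T, n))

errors = forecasts - target[:, None]
sigma = errors.T @ errors / T        # full error covariance, incl. correlations

# min_w  w' Sigma w   s.t.  w >= 0,  sum(w) = 1
res = minimize(
    lambda w: w @ sigma @ w,
    x0=np.full(n, 1.0 / n),
    jac=lambda w: 2.0 * sigma @ w,
    bounds=[(0.0, None)] * n,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
w_opt = res.x
print("optimal weights:", w_opt.round(3))
print("MSFE optimal vs equal:",
      w_opt @ sigma @ w_opt, np.full(n, 1 / n) @ sigma @ np.full(n, 1 / n))
```

With correlated forecast errors the optimal weights can differ sharply from 1/n, which is exactly the margin the chapter exploits over the equal-weighted benchmark.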
The literature on point forecast combination is well developed, but there are fewer studies analyzing the combination of density forecasts. We therefore extend our work to density forecast combination. Starting from the main results in Hall and Mitchell (2007), we propose an iterative algorithm for computing the density weights which maximize the average logarithmic score over the sample period. The empirical application uses European GDP and inflation forecasts. Results suggest that the optimal weights obtained via the iterative algorithm outperform the equal-weighted density combinations used by the ECB.
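The iterative algorithm can be sketched as an EM-style multiplicative update of the pool weights, which keeps them nonnegative and summing to one at every step (a sketch consistent with the log-score objective described above, not necessarily the exact recursion of the chapter):

```python
import numpy as np
from scipy.stats import norm

def logscore_weights(p, iters=500, tol=1e-10):
    """p[t, i]: forecaster i's predictive density evaluated at the realized
    outcome of period t. Returns simplex weights maximizing the average
    logarithmic score of the linear opinion pool."""
    w = np.full(p.shape[1], 1.0 / p.shape[1])
    for _ in range(iters):
        pool = p @ w                                   # pooled density at outcomes
        w_new = w * (p / pool[:, None]).mean(axis=0)   # multiplicative update
        if np.abs(w_new - w).max() < tol:
            break
        w = w_new
    return w

# Toy example: three Gaussian density forecasts evaluated at 200 outcomes
rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.0, 200)
p = np.column_stack([norm.pdf(y, 0.0, 1.0),   # well calibrated
                     norm.pdf(y, 0.5, 1.0),   # biased
                     norm.pdf(y, 0.0, 2.0)])  # too diffuse
print(logscore_weights(p).round(3))           # mass concentrates on forecast 1
```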
The third chapter, entitled Opinion surveys on the euro: a multilevel multinomial logistic analysis, outlines the multilevel aspects of public attitudes toward the euro. This work was motivated by the ongoing debate on whether the perception of the euro among European citizens, ten years after its introduction, was positive or negative. The aim is therefore to disentangle public attitudes by considering both individual socio-demographic characteristics and the macroeconomic features of each country, treating them as two separate levels in a single analysis. A hierarchical structure has the advantage of modelling within-country as well as between-country relations in a single analysis. The multilevel analysis allows for the dependence between individuals within a country induced by unobserved heterogeneity between countries, i.e. we include in the estimation country-specific characteristics that are not directly observable. In this chapter we empirically investigate which individual characteristics and country specificities matter most for the perception of the euro. Attitudes toward the euro vary across individuals and countries, and are driven by personal considerations based on the benefits and costs of using the single currency. Individual features, such as a high level of education or living in a metropolitan area, have a positive impact on the perception of the euro. Moreover, country-specific economic conditions can influence individuals' attitudes.
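One standard way to write such a random-intercept multinomial logit is shown below (the notation is ours, for illustration; x_ij collects individual socio-demographics and z_j country-level macro features):

```latex
% Individual i in country j reports opinion category c about the euro
\[
  \Pr\!\left(y_{ij}=c \mid x_{ij}, z_j, u_j^{(c)}\right)
  = \frac{\exp\!\left(\alpha^{(c)} + \beta^{(c)\prime} x_{ij}
      + \gamma^{(c)\prime} z_j + u_j^{(c)}\right)}
    {\sum_{k}\exp\!\left(\alpha^{(k)} + \beta^{(k)\prime} x_{ij}
      + \gamma^{(k)\prime} z_j + u_j^{(k)}\right)},
  \qquad u_j^{(c)} \sim N\!\left(0,\sigma_c^2\right).
\]
% The country random effects u_j capture the unobserved between-country
% heterogeneity that induces dependence among individuals within a country.
```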
Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished
Malherbe, Frédéric. "Essays on the macroeconomic implications of information asymmetries." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210085.
Full text
This thesis studies the macroeconomic implications of information asymmetries, with a special focus on financial issues. The exercise is mainly theoretical: I develop stylized models that aim at capturing macroeconomic phenomena such as self-fulfilling liquidity dry-ups, the rise and fall of securitization markets, and the creation of systemic risk. The dissertation consists of three chapters. The first proposes an explanation of self-fulfilling liquidity dry-ups. The second proposes a formalization of the concept of market discipline and an application to securitization markets as risk-sharing mechanisms. The third offers a complementary analysis to the second, as the rise of securitization is presented as the bankers' optimal response to strict capital constraints.
Two concepts that lack a unique acceptation in economics play a central role in these models: liquidity and market discipline. The liquidity of an asset refers to the ability of its owner to transform it into current consumption goods, so secondary markets for long-term assets play an important role in this respect. However, such markets might be illiquid due to adverse selection.
In the first chapter, I show that: (1) when agents expect a liquidity dry-up on such markets, they optimally choose to self-insure through the hoarding of non-productive but liquid assets; (2) this hoarding behavior worsens adverse selection and dries up market liquidity; (3) such liquidity dry-ups are Pareto-inefficient equilibria; and (4) the government can rule them out. Additionally, I show that idiosyncratic liquidity shocks à la Diamond and Dybvig have stabilizing effects, which is at odds with the banking literature. The main contribution of the chapter is to show that market breakdowns due to adverse selection are highly endogenous to past balance-sheet decisions.
I consider that agents are under market discipline when their current behavior is influenced by future market outcomes. A key ingredient for market discipline to be at play is that the market outcome depends on information that is observable but not verifiable (that is, information that cannot be proved in court and upon which, consequently, enforceable contracts cannot be based).
In the second chapter, after introducing this novel formalization of market discipline, I ask whether securitization really contributes to better risk-sharing, comparing it with other mechanisms that differ in the timing of the risk transfer. I find that for securitization to be an efficient risk-sharing mechanism, market discipline must be strong and adverse selection must not be severe. This seems to seriously restrict the set of assets that should be securitized for risk-sharing motives. Additionally, I show how ex-ante leverage may mitigate interim adverse selection in securitization markets and therefore enhance ex-post risk-sharing. This is interesting because high leverage is usually associated with "excessive" risk-taking.
In the third chapter, I consider risk-neutral bankers facing strict capital constraints: their capital is required to cover worst-case-scenario losses. In such a set-up, I find that: (1) the bankers' optimal autarky response is to diversify lower-tail risk and maximize leverage; (2) securitization helps to free up capital and to increase leverage, but distorts the incentives to screen loan applicants properly; (3) market discipline mitigates this problem, but if it is overestimated by the supervisor, it leads to excess leverage, which creates systemic risk. Finally, I consider opaque securitization and show that the supervisor (4) faces uncertainty about the trade-off between the size of the economy and the probability and severity of a systemic crisis, and (5) can generally not set capital constraints at the socially efficient level.
Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished
Humpe, Andreas. "Macroeconomic variables and the stock market : an empirical comparison of the US and Japan." Thesis, St Andrews, 2008. http://hdl.handle.net/10023/464.
Full text
Feng, Ning. "Essays on business cycles and macroeconomic forecasting." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/279.
Full text
D'Agostino, Antonello. "Understanding co-movements in macro and financial variables." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210597.
Full text
In the first chapter of this thesis, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known. In the fundamental valuation of equity, the stock price is equal to the discounted future streams of expected dividends. Since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path; other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, could be misleading if used as a proxy for the informative content of the stock market as a whole. Despite the widespread wisdom of considering such an index as a leading variable, only part of the assets included in the composition of the index has a leading behaviour with respect to the variables of interest. Its forecasting performance might therefore be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure: they are the sum of two parts, a common part driven by a few shocks common to all the assets, and an idiosyncratic part, which is rather asset specific. The correlation function, computed on the common part of the series, is not affected by the assets' specific dynamics and should provide information only on the series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content that such aggregates have in forecasting IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise is addressed in the following way: first, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) in 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, both for the IP growth rate and for CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification. I repeat the simulation exercise and find only very small improvements in the MSFE statistics. Third, averages of the leading stock return series, within their respective sectors, are added as additional explanatory variables in the benchmark regression. Remarkable improvements are achieved with respect to the benchmark specification, especially for the one-year forecast horizon. Significant improvements are also achieved for the shorter forecast horizons when the leading series of the technology and energy sectors are used.
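A stripped-down version of the lead/lag classification step, on simulated series and using raw correlations in place of the common components of the dynamic factor model (function and variable names are ours, for illustration):

```python
import numpy as np

def classify_lead_lag(x, target, max_lag=12):
    """Classify x as leading / coincident / lagging w.r.t. target by the lag
    that maximizes the absolute cross-correlation. A positive best lag means
    x at time t is most correlated with target at t + lag, i.e. x leads."""
    T = len(target)
    best_lag, best_corr = 0, 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:T - lag], target[lag:]
        else:
            a, b = x[-lag:], target[:T + lag]
        c = np.corrcoef(a, b)[0, 1]
        if abs(c) > abs(best_corr):
            best_lag, best_corr = lag, c
    label = "leading" if best_lag > 0 else "lagging" if best_lag < 0 else "coincident"
    return label, best_lag, best_corr

# Simulated common factor, a coincident target and a series leading by 3 periods
rng = np.random.default_rng(4)
f = np.cumsum(rng.normal(size=300)) * 0.1
target = f + rng.normal(scale=0.1, size=300)
leader = np.roll(f, -3) + rng.normal(scale=0.1, size=300)
print(classify_lead_lag(leader, target))   # -> ('leading', 3, ...)
```

In the chapter the correlations are computed on the common components, so the idiosyncratic noise that contaminates this raw-data sketch is filtered out first.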
The second chapter of this thesis disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration with the advent of the EMU has been a central issue for investors and policy makers. During these years, the number of studies on the integration and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for establishing developments in the economic and financial markets. Therefore, measuring the extent of co-movements between European stock markets has become, especially over the last years, one of the main concerns both for policy makers, who want to best shape their policy responses, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns. So far, a literature dating back to Solnik (1974) identifies national factors as the main contributors to the co-variation among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration over the past years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased the importance of global determinants, such as industry determinants. However, somewhat puzzlingly, recent studies have demonstrated that country sources are still very important and generally more important than industry ones. This paper tries to cast some light on these conflicting results. The chapter proposes an econometric estimation strategy that is more flexible and better suited to disentangle and measure the impact of global and country factors. Results point to a declining influence of national determinants and to an increasing influence of industry ones. International influences remain the most important driving force of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies: they need to be revisited and adapted to the changed financial and economic scenario.
The third chapter presents a new stylized fact which can be helpful for discriminating among alternative explanations of the U.S. macroeconomic stability. The main finding is that the fall in time series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but also extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter. Furthermore, this empirical regularity is not specific to a single method; rather, it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation in the Fed's Green Book and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random walk forecasts and the predictions of those institutions is not rejected for all horizons, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The results imply that the informational advantage of the Fed and those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model is better than "tossing a coin" beyond the first-quarter horizon, thereby implying that on average uninformed economic agents can effectively anticipate future macroeconomic developments. On the other hand, econometric models and economists' judgement are quite helpful for forecasts over the very short horizon that is relevant for conjunctural analysis. Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output. The majority of studies, however, are based on full-sample periods. The main findings of the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985. Long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by a larger volatility of inflation and output. Results also suggest that some caution should be used in evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.
The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component. In particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors based on Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors. Specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain, which allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements are helpful for forecasting in a finite sample is, however, an empirical question on which the literature has not yet reached a consensus. On the one hand, Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component and not on the dynamic restrictions), while Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The dataset, borrowed from Stock and Watson (2002), consists of 146 monthly series for the US economy spanning 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast. Other non-core aspects of the models are also investigated: robustness with respect to the choice of the number of factors and to variable transformations. Finally, the chapter examines the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to forecasting performance and to discuss auxiliary issues; hopefully this may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), results show that the efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts but, in contrast to Boivin and Ng (2005), it is shown that the dynamic restrictions imposed by the procedure of Forni et al. (2005) are not harmful for predictability. The main conclusion is that the two methods have a similar performance and produce highly collinear forecasts.
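For concreteness, here is a compact sketch of the SW-style diffusion-index forecast described above (static principal components plus an OLS projection; the data and names are invented, and the GPC refinements of FHLR are not attempted):

```python
import numpy as np

def sw_factor_forecast(X, y, r=3, h=1):
    """Stock-Watson-style diffusion index forecast (a sketch): extract r static
    principal-component factors from the standardized panel X and project
    y_{t+h} on the current factors by OLS."""
    Z = (X - X.mean(0)) / X.std(0)
    # Factors = r leading principal components of the sample covariance matrix
    eigval, eigvec = np.linalg.eigh(Z.T @ Z / len(Z))
    F = Z @ eigvec[:, -r:]                       # T x r factor estimates
    # Regress y shifted h steps ahead on the factors (plus a constant)
    A = np.column_stack([np.ones(len(F) - h), F[:-h]])
    beta, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
    return np.r_[1.0, F[-1]] @ beta              # forecast of y_{T+h}

# Simulated panel driven by one slow-moving factor
rng = np.random.default_rng(5)
T, n = 200, 40
factor = np.cumsum(rng.normal(size=T)) * 0.1
X = np.outer(factor, rng.uniform(0.5, 1.5, n)) + rng.normal(scale=0.5, size=(T, n))
y = factor + rng.normal(scale=0.2, size=T)
print(sw_factor_forecast(X, y, r=2, h=1).round(3))
```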
Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished
Fantinatti, Marcos da Costa. "Modelo de equilíbrio geral estocástico e o mercado de trabalho brasileiro." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/12/12138/tde-25022016-112933/.
Full text
The three articles of this thesis focus on the labor market. The first article calculates the probability of a worker leaving his job and the probability of an unemployed person finding a job in Brazil, using the methodology developed by Shimer (2012). The aim was to determine which of these factors is the more important in explaining unemployment rate fluctuations. The results show that the probability of an unemployed worker finding a job is more important in explaining the dynamics of the unemployment rate; the literature has commonly found the opposite result for Brazil. In the second article, we log-linearized and estimated the model built by Christiano, Eichenbaum and Evans (2013) for Brazil. This model differs from traditional New Keynesian models in that it has a search structure in the labor market. The idea was to compare this model with the traditional one with sticky wages and sticky prices, and to analyze whether the model with a search structure in the labor market is able to substitute for some traditional rigidities when the concern is the propagation of shocks. The impulse response functions to a contractionary monetary policy shock show that this model explains the dynamics normally found in GDP, inflation and the unemployment rate. Furthermore, the estimation shows that, in general, prices are readjusted less frequently than the frequency estimated by New Keynesian models with sticky wages and sticky prices. Moreover, when the rigidities (capital utilization and the working capital channel) are eliminated, the model does not properly explain the inertia and persistence of macroeconomic variables such as GDP and inflation. Finally, in the last article, we estimated the Christiano, Eichenbaum and Trabandt (2013) model for the United States, but adopted a different estimation strategy: we log-linearized the model and estimated it with Bayesian methods, for two different periods. The aim was to compare our results with the original model. When the model is estimated with data up to 2008, the estimates are in line with the values found in the literature and, in general, not too far from the values estimated in the original article. However, the estimated parameters point to a model in which prices are more rigid, the consumption habit is stronger and the monetary rule is less inertial than in the original model, although the monetary authority reacts much more to inflation than to GDP, as in the original article. When we consider the data up to 2014, the estimated model retains the stickier prices and a more inertial monetary rule; moreover, the more recent data affect the estimated values relating to the labor market more expressively. The analysis of the impulse response functions confirms the less inertial dynamic of the monetary rule and, overall, the responses follow the expected dynamics.
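The Shimer-style transition probabilities used in the first article can be sketched as follows (a discrete-time version in the spirit of Shimer (2012); the series names are illustrative and the exact time-aggregation correction used in the article may differ):

```python
import numpy as np

def transition_probs(u, u_short, e):
    """Job-finding (F) and separation (S) probabilities from unemployment (u),
    short-term unemployment (u_short, unemployed less than one period) and
    employment (e), all length-T arrays."""
    # Unemployed who failed to find a job: next period's unemployment net of
    # the newly unemployed (the short-term unemployed)
    F = 1.0 - (u[1:] - u_short[1:]) / u[:-1]
    # Workers separated during the period who did not find a new job;
    # the factor (1 - F/2) is a simple within-period aggregation correction
    S = u_short[1:] / (e[:-1] * (1.0 - 0.5 * F))
    return F, S

# Illustrative monthly series (levels, e.g. thousands of workers)
u = np.array([1000.0, 1020.0, 990.0, 1010.0])
u_short = np.array([150.0, 160.0, 140.0, 155.0])
e = np.array([9000.0, 8980.0, 9010.0, 8990.0])
F, S = transition_probs(u, u_short, e)
print(F.round(3), S.round(4))
```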
Cimadomo, Jacopo. "Essays on systematic and unsystematic monetary and fiscal policies." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210474.
Full text
The effects of macroeconomic policies and, as a consequence, the stance that policymakers should adopt over the business cycle remain controversial issues in the economic literature.
In the light of the dramatic experience of the Great Depression of the early 1930s, Keynes (1936) argued that the market mechanism could not be relied upon to recover spontaneously from a slump, and advocated counter-cyclical public spending and monetary policy to stimulate demand. Although the Keynesian doctrine largely influenced policymaking during the two decades following World War II, from the start of the 1970s it began to be seriously challenged in several directions. The introduction of rational expectations within macroeconomic models implied that aggregate demand management could not stabilize the economy's responses to shocks (see in particular Sargent and Wallace (1975)). According to this view, rational agents foresee the effects of the implemented policies, and wage and price expectations are revised upwards accordingly; real wages and money balances therefore remain constant, and so does output. Within such a conceptual framework, only unexpected policy interventions would have some short-run effects upon the economy. The "real business cycle (RBC) theory", pioneered by Kydland and Prescott (1982), offered an alternative explanation of the nature of fluctuations in economic activity, viewed as reflecting the efficient responses of optimizing agents to exogenous sources of fluctuations outside the direct control of policymakers. The normative implication was that there should be no role for economic policy activism: fiscal and monetary policy should be acyclical. The latest generation of New Keynesian dynamic stochastic general equilibrium (DSGE) models builds on rigorous foundations in the intertemporal optimizing behavior of consumers and firms inherited from the RBC literature, but incorporates frictions in the adjustment of nominal and real quantities in response to macroeconomic shocks (see Woodford (2003)). In such a framework, not only may policy "surprises" have an impact on economic activity, but the way policymakers "systematically" respond to exogenous sources of fluctuation also plays a fundamental role, thereby rekindling interest in the use of counter-cyclical stabilization policies to fine-tune the business cycle.
Yet, despite impressive advances in economic theory and econometric techniques, there are no definitive answers on the systematic stance policymakers should follow or on the effects of macroeconomic policies upon the economy. Against this background, the present thesis attempts to inspect the interrelations between macroeconomic policies and economic activity from novel angles. Three contributions are proposed.
In the first Chapter, I show that relying on the information actually available to policymakers when budgetary decisions are taken is of fundamental importance for the assessment of the cyclical stance of governments. In the second, I explore whether the effectiveness of fiscal shocks in spurring economic activity has declined since the beginning of the 1970s. In the third, the impact of systematic monetary policies on U.S. industrial sectors is investigated.
In the existing literature, empirical assessments of the historical stance of policymakers over the economic cycle have mainly been drawn from the estimation of "reduced-form" policy reaction functions (see in particular Taylor (1993) and Galì and Perotti (2003)). Such rules typically relate a policy instrument (a reference short-term interest rate, or an indicator of discretionary fiscal policy) to a set of explanatory variables (notably inflation, the output gap and, as far as fiscal policy is concerned, the debt-GDP ratio). Although these policy rules can be seen as simple approximations of what would be derived from an explicit optimization problem solved by a social planner (see Kollmann (2007)), they have received considerable attention since they proved to track the behavior of central banks and fiscal policymakers relatively well. Typically, revised data, i.e. the observations available to the econometrician when the study is carried out, are used in the estimation of such policy reaction functions. However, the data available in "real time" to policymakers may turn out to be remarkably different from what is observed ex post. Orphanides (2001), in an innovative and thought-provoking paper on U.S. monetary policy, challenged the way policy evaluation had been conducted until then by showing that unrealistic assumptions about the timeliness of data availability may yield misleading descriptions of historical policy.
In the spirit of Orphanides (2001), in the first Chapter of this thesis I reconsider how the intentional cyclical stance of fiscal authorities should be assessed. Importantly, in the framework of fiscal policy rules, not only are variables such as potential output and the output gap subject to measurement errors, but so is the main discretionary "operating instrument" in the hands of governments: the structural budget balance, i.e. the headline government balance net of the effects due to automatic stabilizers. In fact, the actual realization of planned fiscal measures may depend on several factors (such as the growth rate of GDP, the implementation lags that often follow the adoption of many policy measures, and others) outside the direct and full control of fiscal authorities. Hence, there might be sizeable differences between discretionary fiscal measures as planned in the past and what is observed ex post. Notably, this does not apply to monetary policy, since central bankers can control their operating interest rates with great accuracy.
When the historical behavior of fiscal authorities is analyzed from a real-time perspective, it emerges that the intentional stance has been counter-cyclical in the main OECD countries throughout the last thirteen years, especially during expansions. This is at odds with findings based on revised data, which generally point to pro-cyclicality (see for example Gavin and Perotti (1997)). It is shown that empirical correlations among revision errors and other second-order moments make it possible to predict the size and the sign of the bias incurred when the intentional stance of policy is (mistakenly) estimated on revised data. In addition, formal tests, based on a refinement of Hansen (1999), do not reject the hypothesis that the intentional reaction of fiscal policy to the cycle is characterized by two regimes: one counter-cyclical, when output is above its potential level, and the other acyclical, in the opposite case. On the contrary, the use of revised data does not allow any threshold effect to be identified.
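A bare-bones sketch of such a two-regime fiscal reaction function on simulated data (the regimes are split at a zero output gap for simplicity, whereas the chapter estimates the threshold following Hansen (1999); all names and the data generating process are illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 100
gap = rng.normal(size=T)     # real-time output gap
debt = rng.normal(size=T)    # lagged debt-GDP ratio (demeaned)
# Simulated change in the structural balance: counter-cyclical in good times only
d_bal = 0.4 * gap * (gap > 0) + 0.1 * debt + rng.normal(scale=0.5, size=T)

X = sm.add_constant(np.column_stack([
    gap * (gap > 0),    # reaction to the cycle when output is above potential
    gap * (gap <= 0),   # reaction when output is below potential
    debt,
]))
fit = sm.OLS(d_bal, X).fit()
print(fit.params.round(2))   # positive first slope = counter-cyclical regime
```

Running the same regression on real-time versus revised measures of the gap and the balance is the chapter's central comparison.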
The second and third Chapters of this thesis are devoted to exploring the impact of fiscal and monetary policies upon the economy.
Over the last years, two approaches have mainly been followed by practitioners for the estimation of the effects of macroeconomic policies on real activity. On the one hand, calibrated and estimated DSGE models allow the economy's responses to policy disturbances to be traced out within an analytical framework derived from solid microeconomic foundations. On the other, vector autoregressive (VAR) models continue to be widely used since they have proved to fit macro data particularly well, albeit they cannot fully serve to inspect structural interrelations among economic variables. Yet the typical DSGE and VAR models are designed to handle a limited number of variables and are not suitable for addressing economic questions potentially involving a large amount of information. In a DSGE framework, identifying aggregate shocks and their propagation mechanism under a plausible set of theoretical restrictions becomes a thorny issue when many variables are considered. As for VARs, estimation problems may arise when models are specified with a large number of indicators (although recent contributions suggest that large-scale Bayesian VARs perform surprisingly well in forecasting; see in particular Banbura, Giannone and Reichlin (2007)). As a consequence, the growing popularity of factor models as effective econometric tools for summarizing large amounts of information in a parsimonious and flexible manner may be explained not only by their usefulness in deriving business cycle indicators and in forecasting (see for example Reichlin (2002) and D'Agostino and Giannone (2006)), but also, thanks to recent developments, by their ability to evaluate the response of economic systems to identified structural shocks (see Giannone, Reichlin and Sala (2002) and Forni, Giannone, Lippi and Reichlin (2007)). In parallel, some attempts have been made to combine the rigor of DSGE models and the tractability of VARs with the advantages of factor analysis (see Boivin and Giannoni (2006) and Bernanke, Boivin and Eliasz (2005)).
The second Chapter of this thesis, based on joint work with Agnès Bénassy-Quéré, presents an original study combining factor and VAR analysis in an encompassing framework, to investigate how "unexpected" and "unsystematic" variations in taxes and government spending feed through the economy in the home country and abroad. The domestic impact of fiscal shocks in Germany, the U.K. and the U.S. and cross-border fiscal spillovers from Germany to seven European economies are analyzed. In addition, the time evolution of domestic and cross-border tax and spending multipliers is explored. Indeed, the way fiscal policy impacts on domestic and foreign economies depends on several factors, possibly changing over time. In particular, the presence of excess capacity, accommodating monetary policy, distortionary taxation and liquidity-constrained consumers plays a prominent role in determining how fiscal policies stimulate economic activity in the home country. The impact on foreign output crucially depends on the importance of trade links, on real exchange rates and, in a monetary union, on the sensitiveness of foreign economies to the common interest rate. It is well documented that the last thirty years have witnessed frequent changes in the economic environment. For instance, in most OECD countries the monetary policy stance became less accommodating in the 1980s compared to the 1970s, and more accommodating again in the late 1990s and early 2000s. Moreover, financial markets have been heavily deregulated. Hence, fiscal policy might have lost (or gained) power as a stimulating tool in the hands of policymakers. Importantly, the issue of the cross-border transmission of fiscal policy decisions is of the utmost relevance in the framework of the European Monetary Union, and this explains why the debate on fiscal policy coordination has received so much attention since the adoption of the single currency (see Ahearne, Sapir and Véron (2006) and European Commission (2006)).
It is found that over the period 1971 to 2004 tax shocks have generally been more effective in spurring domestic output than government spending shocks. Interestingly, the inclusion of common factors representing global economic phenomena yields smaller multipliers, reconciling, at least for the U.K., the evidence from large-scale macroeconomic models, which generally find feeble multipliers (see e.g. the European Commission's QUEST model), with that from prototypical structural VARs pointing to stronger effects of fiscal policy. When the estimation is performed recursively over samples of seventeen years of data, it emerges that GDP multipliers have dropped drastically from the early 1990s on, especially in Germany (tax shocks) and in the U.S. (both tax and government spending shocks). Moreover, the conduct of fiscal policy seems to have become less erratic, as documented by a lower variance of fiscal shocks over time, and this might contribute to explaining why business cycles have shown less volatility in the countries under examination. Expansionary fiscal policies in Germany do not generally have beggar-thy-neighbor effects on other European countries. In particular, our results suggest that tax multipliers have been positive but vanishing for neighboring countries (France, Italy, the Netherlands, Belgium and Austria), and weak and mostly not significant for more remote ones (the U.K. and Spain). Cross-border government spending multipliers are found to be uniformly weak across all the subsamples considered. Overall, these findings suggest that fiscal "surprises", in the form of unexpected reductions in taxation and expansions in government consumption and investment, have become progressively less successful in stimulating economic activity at the domestic level, indicating that, in the framework of the European Monetary Union, policymakers can only marginally rely on this discretionary instrument as a substitute for national monetary policies.
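The flavor of the factor-plus-VAR framework can be conveyed with a small factor-augmented VAR on simulated data (the chapter's identification scheme and dataset are far richer; everything named below is an illustrative assumption):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)
T, n = 160, 30
# Illustrative panel of macro indicators driven by two common factors
F = np.cumsum(rng.normal(size=(T, 2)) * 0.1, axis=0)
panel = F @ rng.normal(size=(2, n)) + rng.normal(scale=0.5, size=(T, n))
gov_spend = 0.5 * F[:, 0] + rng.normal(scale=0.3, size=T)  # fiscal variable
gdp = F[:, 1] + 0.3 * gov_spend + rng.normal(scale=0.3, size=T)

# The common factors summarize the large information set inside the VAR
Z = (panel - panel.mean(0)) / panel.std(0)
eigval, eigvec = np.linalg.eigh(Z.T @ Z)
fac = Z @ eigvec[:, -2:]

data = pd.DataFrame({"f1": fac[:, 0], "f2": fac[:, 1],
                     "g": gov_spend, "y": gdp})
fit = VAR(data).fit(maxlags=2)
irf = fit.irf(12)   # orthogonalized IRFs; spending ordered before output
resp = irf.orth_irfs[:, data.columns.get_loc("y"), data.columns.get_loc("g")]
print(resp.round(3))   # response of output to a spending shock
```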
The objective of the third Chapter is to inspect the role of monetary policy in the U.S. business cycle. In particular, the effects of "systematic" monetary policies upon several industrial sectors are investigated. The focus is on the systematic, or endogenous, component of monetary policy (i.e. the one related to economic activity in a stable and predictable way) for three main reasons. First, endogenous monetary policies are likely to have sizeable real effects if agents' expectations are not perfectly rational and if there are nominal and real frictions in the market. Second, as widely documented, the variability of the monetary instrument and of the main macro variables is only marginally explained by monetary "shocks", defined as unexpected and exogenous variations in monetary conditions. Third, monetary shocks can simply be interpreted as measurement errors (see Christiano, Eichenbaum and Evans (1998)). Hence, the systematic component of monetary policy is likely to have played a fundamental role in shaping business cycle fluctuations. The strategy for isolating the impact of systematic policies relies on a counterfactual experiment within a (calibrated or estimated) macroeconomic model. As a first step, a macroeconomic shock to which monetary policy is likely to respond is selected, and its effects upon the economy are simulated. Then, the impact of the shock is evaluated under a "policy-inactive" scenario, in which the central bank is assumed not to respond to it. Finally, by comparing the responses of the variables of interest under the two scenarios, some evidence on the sensitivity of the economic system to the endogenous component of the policy can be drawn (see Bernanke, Gertler and Watson (1997)).
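The counterfactual logic can be sketched in a deliberately stylized backward-looking model (all equations and parameters below are illustrative assumptions, not the chapter's model):

```python
import numpy as np

def irf(policy_active, horizon=20):
    """IRF of output to a unit demand shock in a toy backward-looking economy,
    with and without the systematic (Taylor-type) policy response."""
    a, b, k = 0.7, 0.3, 0.2          # illustrative structural parameters
    phi_pi, phi_y = 1.5, 0.5         # Taylor-type rule coefficients
    y, pi, i = np.zeros(horizon), np.zeros(horizon), np.zeros(horizon)
    shock = np.zeros(horizon); shock[0] = 1.0
    for t in range(horizon):
        y_lag = y[t - 1] if t > 0 else 0.0
        pi_lag = pi[t - 1] if t > 0 else 0.0
        i_lag = i[t - 1] if t > 0 else 0.0
        y[t] = a * y_lag - b * (i_lag - pi_lag) + shock[t]   # demand
        pi[t] = pi_lag + k * y_lag                           # price setting
        i[t] = phi_pi * pi[t] + phi_y * y[t] if policy_active else 0.0
    return y

y_active = irf(True)
y_inactive = irf(False)
# The gap between the two paths measures the stabilizing contribution of the
# endogenous policy component; in this toy economy the policy-inactive path
# is explosive, illustrating why the systematic response matters.
print((y_inactive - y_active).round(3))
```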
This kind of exercise is first proposed within a stylized DSGE model for which the analytical solution can be derived. However, as argued in the chapter, large-scale multi-sector DSGE models can be solved only numerically, implying that the proposed experiment cannot be carried out in that setting. Moreover, the estimation of DSGE models becomes a thorny issue when many variables are incorporated (see Canova and Sala (2007)). For these reasons, a less "structural" but more tractable approach is followed, in which a minimal amount of identifying restrictions is imposed. In particular, a factor model econometric approach is adopted (see in particular Giannone, Reichlin and Sala (2002) and Forni, Giannone, Lippi and Reichlin (2007)), and within this framework I develop a technique to perform the counterfactual experiment needed to assess the impact of systematic monetary policies.
It is found that 2- and 3-digit SIC U.S. industries are characterized by very heterogeneous degrees of sensitivity to the endogenous component of the policy. Notably, the industries showing the strongest sensitivity are those producing durable goods and metallic materials, while non-durable goods producers and the food, textile and lumber industries are the least affected. In addition, it is highlighted that industrial sectors adjusting prices relatively infrequently are the most "vulnerable" ones; firms in this group are likely to increase quantities, rather than prices, following a shock that hits the economy positively. Finally, it emerges that sectors characterized by greater recourse to external sources for financing investment, and sectors investing relatively more in new plant and machinery, are the most affected by endogenous monetary actions.
Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished
Bauknecht, Klaus Dieter. "A macroeconometric policy model of the South African economy based on weak rational expectations with an application to monetary policy." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51575.
Full text
ENGLISH ABSTRACT: The Lucas critique states that if expectations are not explicitly dealt with, conventional econometric models are inappropriate for policy analyses, as their coefficients are not policy invariant. The inclusion of rational expectations in conventional model building has been the most common response to this critique. The concept of rational expectations has received several interpretations. In numerous studies, these expectations are associated with model-consistent expectations in the sense that expectations and model solutions are identical. To derive a solution, these models require unique algorithms and assumptions regarding their terminal state, in particular when forward-looking expectations are present. An alternative that avoids these issues is the concept of weak rational expectations, which emphasises that expectation errors should not be systematic. Expectations are therefore formed on the basis of an underlying structure, but full knowledge of the model is not essential. This type of rational expectations is accommodated by means of an explicit specification of an expectations equation consistent with the macroeconometric model's broad structure. The estimation of coefficients relating to expectations is achieved through an Instrumental Variable approach. In South Africa, monetary policy has been consistent and transparent, in line with the recommendations of the De Kock Commission. This allows the policy instrument of the South African Reserve Bank, i.e. the Bank rate, to be modelled by means of a policy reaction function. Given this transparency in monetary policy, accommodating expectations of the Bank rate is essential for modelling the full impact of monetary policy and for avoiding the Lucas critique. This is accomplished through weak rational expectations based on the reaction function of the Reserve Bank. Accommodating expectations of a policy instrument also allows the modelling of anticipated and unanticipated policies, as alternative assumptions regarding the expectations process can be made during simulations. Conventional econometric models emphasise the demand side of the economy, with equations focusing on private consumption, investment, exports and imports and possibly changes in inventories. In this study, particular emphasis in the model specification is also placed on the impact of monetary policy on government debt and debt servicing costs. Other dimensions of the model include the modelling of the money supply and the balance of payments, short- and long-term interest rates, domestic prices, the exchange rate, the wage rate and employment, as well as weakly rational expectations of inflation and the Bank rate. The model has been specified and estimated using concepts such as cointegration and error correction modelling. Numerous tests, including the assessment of the Root Mean Square Percentage Error, have been employed to test the adequacy of the model. Similarly, tests are carried out to ensure weak rationality of expectations. Numerous simulations are carried out with the model and the results are compared to relevant alternative studies. The simulation results show that reducing inflation by means of monetary policy alone could impose severe costs on the economy in terms of real sector volatility.
AFRIKAANSE OPSOMMING (translated from Afrikaans): The Lucas critique holds that conventional econometric models cannot be used for policy analysis, since they do not provide for the change in expectations when policy adjustments are made. The inclusion of rational expectations in conventional econometric models is the most common reaction to the Lucas critique. In order to facilitate the practical inclusion of rational expectations in econometric model building, this study makes use of so-called "weak rational expectations", which only require that expectation errors should not be systematic. The coefficients of the expectations variables are estimated by means of the Instrumental Variables approach. Monetary policy in South Africa has historically been consistent and transparent, in line with the recommendations of the De Kock Commission. The policy instrument of the South African Reserve Bank, namely the Bank rate, can consequently be modelled by means of a policy reaction function. In order to accommodate the Lucas critique, however, expectations of the Bank rate must be included when the full impact of monetary policy is modelled. This is achieved by including weak rational expectations based on the reaction function of the Reserve Bank. In this way, the impact of expected and unexpected policy adjustments can be simulated. Conventional econometric models emphasise the demand side of the economy, with equations for consumption, investment, imports, exports and possibly the change in inventories. In this study, emphasis is also placed on the impact of monetary policy on government debt and the cost of government debt. Other aspects that are modelled are the money supply and the balance of payments, short-term and long-term interest rates, domestic prices, the exchange rate, wage rates and employment, as well as weak rational expectations of inflation and the Bank rate. The model was specified and estimated using cointegration and long-run and short-run equations. The usual tests were carried out to test the adequacy of the model. Various simulations were carried out with the model, and the results were compared with relevant alternative studies. The conclusion is that reducing inflation by means of monetary policy alone could place a heavy burden on the economy in terms of volatility in the real sector.
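A toy sketch of the instrumental-variables step for a weakly rational expectations equation (textbook two-stage least squares on simulated data; the dissertation's full system and variable set are far richer, and all names here are invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
T = 200
# Lagged information set available to agents (the instruments)
infl_lag = rng.normal(size=T)
gap_lag = rng.normal(size=T)
# Bank rate set by a reaction function plus a policy surprise
bank_rate = 1.5 * infl_lag + 0.5 * gap_lag + rng.normal(scale=0.3, size=T)
# Behavioural equation driven by the *expected* rate; the realized rate
# proxies for the unobserved expectation (weak rationality: errors unsystematic)
y = -0.8 * bank_rate + rng.normal(scale=0.5, size=T)

# Stage 1: project the rate on the information set, mimicking the agents'
# forecast from the reaction function
Z = sm.add_constant(np.column_stack([infl_lag, gap_lag]))
rate_hat = sm.OLS(bank_rate, Z).fit().fittedvalues

# Stage 2: use the fitted (expected) rate in the behavioural equation
# (coefficients are consistent; reported standard errors would need correcting)
print(sm.OLS(y, sm.add_constant(rate_hat)).fit().params.round(2))  # ~ [0, -0.8]
```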
Bańbura, Marta. "Essays in dynamic macroeconometrics." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210294.
Full textThe first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.
The first chapter of the thesis entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP” is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how this role is related to their timeliness.
The soft data include surveys and financial indicators and reflect market expectations. They are usually promptly available. In contrast, the hard indicators on real activity directly measure certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information for the GDP forecasts beyond the monthly real activity measures, once their timeliness is properly accounted for.
The second chapter entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data” is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge”, but can also include e.g. mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area and other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation for large cross-sections, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to adapt the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and short-history monthly series, like the Purchasing Managers' surveys, on the forecast.
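A minimal sketch of the missing-data idea, assuming a simple static factor structure and an EM-style alternation between imputing missing cells and re-extracting principal-component factors; the chapter's actual estimator is a full maximum likelihood EM for a dynamic factor model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical panel: T periods, N series, r common factors, with an
# arbitrary pattern of missing observations (including a "ragged edge").
T, N, r = 120, 30, 2
F = rng.normal(size=(T, r))                  # latent factors
L = rng.normal(size=(N, r))                  # loadings
X = F @ L.T + 0.5 * rng.normal(size=(T, N))
mask = rng.random(size=(T, N)) < 0.15        # True where data are missing
X_obs = np.where(mask, np.nan, X)

# EM-style iteration: fill missing cells, extract principal-component
# factors, rebuild the common component, re-fill, repeat until stable.
X_fill = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)
for _ in range(100):
    mu = X_fill.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_fill - mu, full_matrices=False)
    common = (U[:, :r] * s[:r]) @ Vt[:r, :] + mu
    X_new = np.where(mask, common, X_obs)    # impute only the missing cells
    if np.max(np.abs(X_new - X_fill)) < 1e-8:
        break
    X_fill = X_new

print("RMSE on the missing cells:",
      round(float(np.sqrt(np.mean((X_fill[mask] - X[mask]) ** 2))), 3))
```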
The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can also be relevant for structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but that when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.
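A stylized sketch of the shrinkage principle with simulated data: a ridge-type prior centred on univariate random walks stands in for the full Litterman specification, and an illustrative (not the chapter's) tightness rule shrinks more as the system grows:

```python
import numpy as np

rng = np.random.default_rng(2)

def bvar_posterior_mean(Y, lam):
    """Posterior mean of a VAR(1) coefficient matrix under a ridge-type
    prior centred on univariate random walks (prior mean = identity).
    Smaller lam means a tighter prior, i.e. more shrinkage."""
    X, Z = Y[:-1], Y[1:]
    n = Y.shape[1]
    A = X.T @ X + np.eye(n) / lam**2
    return np.linalg.solve(A, X.T @ Z + np.eye(n) / lam**2)

# Hypothetical persistent data: T observations on n random-walk series.
n, T = 20, 100
Y = np.cumsum(rng.normal(size=(T, n)), axis=0)

# Illustrative rule: tighten the prior as the number of variables grows.
for n_sub in (3, 10, 20):
    lam = 0.2 / np.sqrt(n_sub)
    B = bvar_posterior_mean(Y[:, :n_sub], lam)
    print(f"n={n_sub:2d}  lam={lam:.3f}  "
          f"mean |B - I| = {np.mean(np.abs(B - np.eye(n_sub))):.4f}")
```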
The fourth chapter entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales” proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal of future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) or Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. This chapter investigates empirically which frequency bands, and of which variables, are most relevant for the out-of-sample forecasting of inflation when the information from prices, money and real activity is considered. To extract the different frequency components of a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application to multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
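A minimal sketch of the decomposition step, assuming the PyWavelets package: a discrete wavelet transform splits a series into additive frequency-band components, each of which could then enter a forecasting regression separately:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(3)

# Hypothetical monthly series: slow trend + four-year cycle + noise.
t = np.arange(256)
x = 0.02 * t + np.sin(2 * np.pi * t / 48) + 0.3 * rng.normal(size=t.size)

# Multilevel discrete wavelet transform: one coefficient array per band.
coeffs = pywt.wavedec(x, "db4", level=4)

# Rebuild one time series per frequency band by zeroing all other bands;
# these components are the candidate scale-specific regressors.
bands = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    bands.append(pywt.waverec(kept, "db4")[: x.size])

# By linearity the band components sum back to the original series.
print("max reconstruction error:", float(np.abs(sum(bands) - x).max()))
```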
Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished
Lenza, Michèle. "Essays on monetary policy, saving and investment." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210659.
Full textThis thesis addresses three questions: (i) why Central Banks behave so cautiously compared to optimal theoretical benchmarks, (ii) whether monetary variables add information about future Euro Area inflation to a large amount of non monetary variables, and (iii) why national saving and investment are so correlated in OECD countries in spite of the high degree of integration of international financial markets.
The process of innovation in the elaboration of economic theory and the statistical analysis of data witnessed over the last thirty years has greatly enriched the toolbox available to macroeconomists. Two aspects of this process are particularly noteworthy for addressing the issues in this thesis: the development of macroeconomic dynamic stochastic general equilibrium models (see Woodford, 1999b for an historical perspective) and of techniques for handling large data sets in a parsimonious and flexible manner (see Reichlin, 2002 for an historical perspective).
Dynamic stochastic general equilibrium (DSGE) models provide the appropriate tools to evaluate the macroeconomic consequences of policy changes. By exploiting modern intertemporal general equilibrium theory, these models aggregate the optimal responses of individuals as consumers and firms in order to identify aggregate shocks and their propagation mechanisms through the restrictions imposed by optimizing individual behavior. Such a modelling strategy, by uncovering economic relationships that are invariant to a change in policy regimes, provides a framework for analyzing the effects of economic policy that is robust to the Lucas critique (see Lucas, 1976). The early attempts to explain business cycles starting from microeconomic behavior suggested that economic policy should play no role, since business cycles reflected the efficient response of economic agents to exogenous sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982 and, more recently, King and Rebelo, 1999). This view was challenged by several empirical studies showing that the adjustment mechanisms of variables at the heart of macroeconomic propagation mechanisms, like prices and wages, are not well represented by efficient responses of individual agents in frictionless economies (see, for example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004 and Dhyne et al. 2004). Hence, macroeconomic models currently incorporate sources of nominal and real rigidities in the DSGE framework and allow the study of optimal policy reactions to inefficient fluctuations stemming from frictions in macroeconomic propagation mechanisms.
Against this background, the first chapter of this thesis sets up a DSGE model in order to analyze optimal monetary policy in an economy with sectorial heterogeneity in the frequency of price adjustments. Price setters are divided into two groups: those subject to Calvo-type nominal rigidities and those able to change their prices in each period. Sectorial heterogeneity in price setting behavior is a relevant feature of real economies (see, for example, Bils and Klenow, 2004 for the US and Dhyne, 2004 for the Euro Area). Hence, neglecting it would lead to an understatement of the heterogeneity in the transmission mechanisms of economy-wide shocks. In this framework, Aoki (2001) shows that a Central Bank maximizing social welfare should stabilize only inflation in the sector where prices are sticky (hereafter, core inflation). Since complete stabilization is the only true objective of the policymaker in Aoki (2001) and, hence, is not only desirable but also implementable, the equilibrium real interest rate in the economy is equal to the natural interest rate irrespective of the degree of heterogeneity that is assumed. This would lead one to conclude that stabilizing core inflation rather than overall inflation does not imply any observable difference in the aggressiveness of policy behavior. While maintaining the assumption of sectorial heterogeneity in the frequency of price adjustments, this chapter adds non-negligible transaction frictions to the model economy in Aoki (2001). As a consequence, the social welfare maximizing monetary policymaker faces a trade-off among the stabilization of core inflation, the economy-wide output gap and the nominal interest rate. This feature reflects the trade-offs between conflicting objectives faced by actual policymakers. The chapter shows that the existence of this trade-off makes the aggressiveness of the monetary policy reaction dependent on the degree of sectorial heterogeneity in the economy. In particular, in the presence of sectorial heterogeneity in price adjustments, Central Banks are much more likely to behave less aggressively than in an economy where all firms face nominal rigidities. Hence, the chapter concludes that the excessive caution in the conduct of monetary policy shown by actual Central Banks (see, for example, Rudebusch and Svensson, 1999 and Sack, 2000) might not represent sub-optimal behavior but, on the contrary, might be the optimal monetary policy response in the presence of relevant sectorial dispersion in the frequency of price adjustments.
DSGE models are also proving useful in empirical applications, and recently efforts have been made to incorporate large amounts of information in their framework (see Boivin and Giannoni, 2006). However, the typical DSGE model still relies on a handful of variables. Partly, this reflects the fact that, as the number of variables increases, the specification of a plausible set of theoretical restrictions identifying aggregate shocks and their propagation mechanisms becomes cumbersome. On the other hand, several questions in macroeconomics require the study of a large number of variables. Among others, two examples related to the second and third chapters of this thesis can help to understand why. First, policymakers analyze a large quantity of information to assess the current and future stance of their economies and, because of model uncertainty, do not rely on a single modelling framework. Consequently, macroeconomic policy can be better understood if the econometrician relies on a large set of variables without imposing too much a priori structure on the relationships governing their evolution (see, for example, Giannone et al. 2004 and Bernanke et al. 2005). Moreover, the process of integration of goods and financial markets implies that the sources of aggregate shocks are increasingly global, requiring, in turn, the study of their propagation through cross-country links (see, among others, Forni and Reichlin, 2001 and Kose et al. 2003). A priori, country-specific behavior cannot be ruled out, and many of the homogeneity assumptions that are typically embodied in open-economy macroeconomic models to keep them tractable are rejected by the data. Summing up, in order to deal with such issues we need modelling frameworks able to treat a large number of variables in a flexible manner, i.e. without pre-committing to too many a priori restrictions that are likely to be rejected by the data. The large extent of comovement among wide cross-sections of economic variables suggests the existence of a few common sources of fluctuations (Forni et al. 2000 and Stock and Watson, 2002) around which individual variables may display specific features: a shock to the world price of oil, for example, hits oil exporters and importers with different sign and intensity, and global technological advances can affect some countries before others (Giannone and Reichlin, 2004). Factor models rely mainly on the identification assumption that the dynamics of each variable can be decomposed into two orthogonal components - common and idiosyncratic - and provide a parsimonious tool for analyzing aggregate shocks and their propagation mechanisms in a large cross-section of variables. In fact, while the idiosyncratic components are poorly cross-sectionally correlated, being driven by shocks specific to a variable or a group of variables or by measurement error, the common components capture the bulk of the cross-sectional correlation and are driven by a few shocks that affect, through variable-specific factor loadings, all items in a panel of economic time series. Focusing on the latter components yields useful insights into the identity and propagation mechanisms of the aggregate shocks underlying a large number of variables. The second and third chapters of this thesis exploit this idea.
The second chapter deals with the issue of whether monetary variables help to forecast inflation in the Euro Area harmonized index of consumer prices (HICP). Policymakers form their views on the economic outlook by drawing on large amounts of potentially relevant information. Indeed, the monetary policy strategy of the European Central Bank acknowledges that many variables and models can be informative about future Euro Area inflation. A peculiarity of this strategy is that it assigns to monetary information the role of providing insights into the medium- to long-term evolution of prices, while a wide range of alternative non-monetary variables and models are employed to form a view on the short term and to cross-check the inference based on monetary information. However, both the academic literature and the practice of the leading Central Banks other than the ECB do not assign such a special role to monetary variables (see Gali et al. 2004 and references therein). Hence, the debate on whether money really provides relevant information for the inflation outlook in the Euro Area is still open. Specifically, this chapter addresses the issue of whether money provides useful information about future inflation beyond what is contained in a large set of non-monetary variables. It shows that a few aggregates of the data explain a large amount of the fluctuations in a large cross-section of Euro Area variables. This makes it possible to postulate a factor structure for the large panel of variables at hand and to aggregate it into a few synthetic indexes that still retain the salient features of the large cross-section. The database is split into two big blocks of variables: non-monetary (baseline) and monetary variables. Results show that the baseline variables provide a satisfactory predictive performance, improving on the best univariate benchmarks in the period 1997-2005 at all horizons between 6 and 36 months. Remarkably, the monetary variables provide a sensible improvement on the performance of the baseline variables at horizons above two years. However, the analysis of the evolution of the forecast errors reveals that most of the gains relative to the univariate non-forecastability benchmarks obtained with the baseline and monetary variables are realized in the first part of the prediction sample, up to the end of 2002, which casts doubts on the current forecastability of inflation in the Euro Area.
The third chapter is based on joint work with Domenico Giannone and gives empirical foundation to the general equilibrium explanation of the Feldstein-Horioka puzzle. Feldstein and Horioka (1980) found that domestic saving and investment in OECD countries strongly comove, contrary to the idea that high capital mobility should allow countries to seek the highest returns in global financial markets and, hence, imply a correlation between national saving and investment closer to zero than to one. Moreover, capital mobility has strongly increased since the publication of Feldstein and Horioka's seminal paper, while the association between saving and investment does not seem to have decreased comparably. Through general equilibrium mechanisms, the presence of global shocks might rationalize the correlation between saving and investment. In fact, global shocks, affecting all countries, tend to create imbalances in global capital markets, causing offsetting movements in the global interest rate, and can generate the observed correlation across national saving and investment rates. However, previous empirical studies (see Ventura, 2003) that have controlled for the effects of global shocks in the context of saving-investment regressions failed to give empirical foundation to this explanation. We show that previous studies have neglected the fact that global shocks may propagate heterogeneously across countries, and have therefore failed to properly isolate the components of saving and investment that are affected by non-pervasive shocks. We propose a novel factor-augmented panel regression methodology that allows us to isolate idiosyncratic sources of fluctuations under the assumption of heterogeneous transmission mechanisms of global shocks. Remarkably, when our methodology is applied, the association between domestic saving and investment decreases considerably over time, consistently with the observed increase in international capital mobility. In particular, in the last 25 years the correlation between saving and investment disappears.
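A stylized sketch of the mechanism with simulated data: a global shock with heterogeneous country loadings inflates the pooled saving-investment slope, and projecting out an estimated common factor with country-specific loadings recovers a slope near zero; the chapter's actual estimator differs in its details:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical panel: T years of saving and investment rates for N
# countries, both driven by one global shock with heterogeneous loadings.
T, N = 40, 24
g = rng.normal(size=T)
saving = np.outer(g, rng.uniform(0.5, 1.5, N)) + rng.normal(scale=0.5, size=(T, N))
invest = np.outer(g, rng.uniform(0.5, 1.5, N)) + rng.normal(scale=0.5, size=(T, N))

def fh_slope(y, x):
    """Pooled Feldstein-Horioka slope of investment on saving."""
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / (xd ** 2).sum()

print("pooled slope, raw data:", round(fh_slope(invest, saving), 2))

# Factor-augmented step: estimate the common factor from the whole panel
# and project it out of each series with a country-specific loading.
panel = np.hstack([saving, invest])
U, s, Vt = np.linalg.svd(panel - panel.mean(axis=0), full_matrices=False)
f = U[:, 0]

def defactor(M):
    betas = (M - M.mean(axis=0)).T @ f / (f @ f)   # heterogeneous loadings
    return M - np.outer(f, betas)

print("pooled slope, idiosyncratic parts:",
      round(fh_slope(defactor(invest), defactor(saving)), 2))
```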
Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished
Malek, Mansour Jeoffrey H. G. "Three essays in international economics." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210878.
Full textRegarding the approach pursued to tackle these problems, we have chosen to remain strictly within the boundaries of empirical (macro)economics - that is, applied econometrics. Though we systematically provide theoretical models to back up our empirical approach, our main concern is to look at the stories the data can (or cannot) tell us. As to the econometric methodology, we restrict ourselves to panel data analysis. The large spectrum of techniques available within the panel framework allows us to utilize, for each of the problems at hand, the most suitable approach (or what we believe to be the most suitable).
Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished
Matlanyane, Retselisitsoe Adelaide. "A macroeconometric model for the economy of Lesotho : policy analysis and implications." Pretoria : [s.n.], 2005. http://upetd.up.ac.za/thesis/available/etd-04182005-091509.
Full textFlury, Thomas. "Econometrics of dynamic non-linear models in macroeconomics and finance." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523095.
Full textAboagye, Anthony Q. Q. "Financial flows, macroeconomic policy and the agricultural sector in Sub-Saharan Africa." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=35672.
Full textThe production function is of the Cobb-Douglas type. Static export and domestic share equations are derived from a specification of the agricultural gross domestic product function. Transformed autoregressive distributed-lag versions of the static share models are used to investigate long-run dynamics, persistence and implementation lags in the share response model.
Agricultural output is affected as follows. ODA, PFX and SAV have a small positive or negative impact depending on the agricultural region or economic policy environment. The impact of openness of the economy is negative in all agricultural regions; however, there is evidence of a positive effect of openness within an improved policy environment. None of these effects is statistically significant.
Export share is affected as follows. ODA, PFX and SAV have a small positive impact in some agricultural regions and policy environments, both in the short run and in the long run. PFX is not significant anywhere. ODA is significant in the short run only when countries are grouped by policy environment. SAV is significant in the short run only in some regions, and significant in the long run only in others. Openness has a positive impact in the short run, which is significant in many regions; its long-run impact is mostly positive but not significant anywhere. The impact of producer price is mostly positive but not significant.
Efforts to encourage economic activity in rural communities, such as improvements in the domestic terms of trade in favor of agriculture together with the provision of infrastructure, are likely to stimulate output. Strategies to diversify and process agricultural exports in the face of falling agricultural commodity prices should be pursued.
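A minimal sketch of the production-function step with hypothetical data and inputs: a Cobb-Douglas function is log-linear, so its output elasticities can be estimated by OLS on logs:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical country-year data: output Y produced from land A, labour L
# and financial inflows F under Y = c * A^a * L^b * F^g, so that
# log Y = log c + a log A + b log L + g log F + error.
n = 150
logA = rng.normal(5.0, 0.4, size=n)
logL = rng.normal(4.0, 0.3, size=n)
logF = rng.normal(2.0, 0.5, size=n)
logY = 0.8 + 0.45 * logA + 0.40 * logL + 0.10 * logF \
       + rng.normal(scale=0.1, size=n)

# OLS on the log-linear form recovers the constant and the elasticities.
X = np.column_stack([np.ones(n), logA, logL, logF])
beta, *_ = np.linalg.lstsq(X, logY, rcond=None)
print("constant and elasticities (A, L, F):", np.round(beta, 3))
```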
Salman, Abdul Khalik Abbas. "An econometric study of export instability and stabilisation policies in the Iraqi economy." Thesis, Cardiff University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329663.
Full textNowman, Khalid. "Gaussian estimation of open higher order continuous time dynamic models with mixed stock and flow and with an application to a United Kingdom macroeconomic model." Thesis, University of Essex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305955.
Full textKremers, Jeroen Joseph Marie. "On the determination and macroeconomic consequences of public financial policy." Thesis, University of Oxford, 1986. http://ora.ox.ac.uk/objects/uuid:a8c0cb20-b178-4e80-9a46-fcb1079a4a9f.
Full textWan, Lai Shan. "Macroeconomic modelling and policy simulation for the Chinese economy." HKBU Institutional Repository, 2002. http://repository.hkbu.edu.hk/etd_ra/335.
Full textLuintel, Kul Bahadur. "Macroeconomics and money in developing countries : an econometric model for an Asian region." Thesis, University of Glasgow, 1993. http://theses.gla.ac.uk/829/.
Full textRomain, Astrid. "Essays in the empirical analysis of venture capital and entrepreneurship." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210729.
Full textThis thesis aims at analysing some aspects of Venture Capital (VC) and high-tech entrepreneurship. The focus is both on the macroeconomic level, comparing venture capital from an international point of view, and on Technology-Based Small Firms (TBSF), studied at the company and founder level in Belgium. The approach is mainly empirical.
This work is divided into two parts. The first part focuses on venture capital. First of all, we test the impact of VC on productivity. We then identify the determinants of VC and we test their impact on the relative level of VC for a panel of countries.
The second part concerns the technology-based small firms in Belgium. The objective is twofold. It first aims at creating a database on Belgian TBSF to better understand the importance of entrepreneurship. In order to do this, a national survey was developed and the statistical results were analysed. Secondly, it provides an analysis of the role of universities in the employment performance of TBSF.
A broad summary of each chapter is presented below.
PART 1: VENTURE CAPITAL
The Economic Impact of Venture Capital
The objective of this chapter is to evaluate the macroeconomic impact of venture capital. The main assumption is that VC can be considered as being similar in several respects to business R&D performed by large firms. We test whether VC contributes to economic growth through two main channels. The first is innovation, characterized by the introduction of new products, processes or services on the market. The second is the development of an absorptive capacity. These hypotheses are tested quantitatively with a production function model for a panel data set of 16 OECD countries from 1990 to 2001. The results show that the accumulation of VC is a significant factor contributing directly to Multi-Factor Productivity (MFP) growth. The social rate of return to VC is significantly higher than the social rate of return to business or public R&D. VC also has an indirect impact on MFP, in the sense that it improves the output elasticity of R&D. An increased VC intensity makes it easier to absorb the knowledge generated by universities and firms, and therefore improves aggregate economic performance.
Technological Opportunity, Entrepreneurial Environment and Venture Capital Development
The objective of this chapter is to identify the main determinants of venture capital. We develop a theoretical model where three main types of factors affect the demand and supply of VC: macroeconomic conditions, technological opportunity, and the entrepreneurial environment. The model is evaluated with a panel dataset of 16 OECD countries over the period 1990-2000. The estimates show that VC intensity is pro-cyclical: it reacts positively and significantly to GDP growth. Interest rates affect VC intensity mainly because entrepreneurs create a demand for this type of funding. Indicators of technological opportunity such as the stock of knowledge and the number of triadic patents positively and significantly affect the relative level of VC. Labour market rigidities reduce the impact of the GDP growth rate and of the stock of knowledge, whereas a minimum level of entrepreneurship is required for the available stock of knowledge to have a positive effect on VC intensity.
PART 2: TECHNOLOGY-BASED SMALL FIRMS
Survey in Belgium
The first purpose of this chapter is to present the existing literature on the performance of companies. In order to get a quantitative insight into the entrepreneurial growth process, an original survey of TBSF in Belgium was launched in 2002. The second purpose is to describe the methodology of our national TBSF survey. This survey has two main merits. The first lies in the quality of the information: most national and international surveys have been developed at the firm level, and only a few exist at the founder level, whereas the TBSF database contains information at both the firm and the entrepreneur level.
The second merit concerns the subjects covered. The TBSF survey tackles the financing of firms (availability of public funds, role of venture capitalists, availability of business angels, …), the framework conditions (e.g. the quality and availability of infrastructure and communication channels, the level of academic and public research, the patenting process, …) and, finally, the socio-cultural factors associated with the entrepreneurs and their environment (e.g. level of education, their parents' education, gender, …).
Statistical Evidence
The main characteristics of the companies in our sample are that employment and profits net of taxation do not follow the same trend. Indeed, employment may decrease while results after taxes stay constant. Only a few companies enjoyed growth in both employment and results after taxes between 1998 and 2003.
On the financing front, our findings suggest that internal finance in the form of personal funds, as well as the funds of family and friends are the primary source of capital to start-up a high-tech company in Belgium. Entrepreneurs rely on their own personal savings in 84 percent of the cases. Commercial bank loans are the secondary source of finance. This part of external financing (debt-finance) exceeds the combined angel funds and venture capital funds (equity-finance).
On the entrepreneur front, the preliminary results show that 80 percent of the entrepreneurs in this study have a university degree, while 42 percent hold postgraduate degrees (i.e. master's and doctorate). In terms of research activities, 88 percent of the entrepreneurs holding a Ph.D. or a post-doctorate collaborate with Belgian higher education institutes. Moreover, more than 90 percent of these entrepreneurs work in a university spin-off.
The Contribution of Universities to Employment Growth
The objective of this chapter is to test whether universities play a role among the determinants of employment growth in Belgian TBSF. The empirical model is based on our original survey of 87 Belgian TBSF. The results suggest that both academic spin-offs and TBSF created on the basis of an idea originating from business R&D activities are associated with above-average growth in employees. As most 'high-tech' entrepreneurs are at least university graduates, there is no significant impact of the level of education. Nevertheless, these results must be taken with caution, as they are highly sensitive to the presence of outliers. Young high-tech firms are by definition highly volatile and might therefore be difficult to understand.
CONCLUSION
In this last chapter, recommendations for policy-makers are drawn from the results of the thesis. The possible interventions of governments are classified according to whether they influence the demand or the supply of entrepreneurship and/or VC. We present possible actions such as direct intervention in VC funds, public-sector intervention in labour market rigidities, the pension system, patent and research policy, the level of entrepreneurial activity, bankruptcy legislation, entrepreneurial education, the development of university spin-offs, and the creation of a national database of TBSF.
Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished
Kavari, Gift Vijandjua. "Modelling macroeconomic performance of African economies : an application of a macro econometric model." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/844583/.
Full textAsare, Nyamekye. "Essays on Time-Varying Volatility and Structural Breaks in Macroeconomics and Econometrics." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37179.
Full textBarbosa, Fernando Honorato. "Uma análise das elasticidades de bens e serviços não fatores, sua estabilidade e o ajuste externo brasileiro pós-99." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/12/12140/tde-03122006-120212/.
Full textThe recent Brazilian trade surpluses have changed the perception of fragility that marked the external accounts of the Brazilian economy over the previous two and a half decades. In the face of this new reality, it seems reasonable to evaluate the determinants of these trade surpluses, taking into account the traditional variables of the external accounts literature, such as the exchange rate, external prices, and domestic and foreign output. With this purpose we estimated long-run equations for exports and imports in order to evaluate the external trade elasticities of goods and non-factor services. The methodology applied is that proposed by Johansen (1988) and Johansen and Juselius (1990), which relies on cointegration methods. The estimation was divided into two periods: 1980-1998 and 1980-2005. With this division, we intend to capture the effects on the external accounts of the change in the Brazilian foreign exchange regime introduced in 1999. Further, we tested these models recursively to check for stability, breaks and cointegration power. The results were satisfactory in terms of the elasticities, in line with previous work in this field, but we add information on the elasticities of aggregated goods and services, usually estimated only for the goods markets. Furthermore, we identified many breaks over the estimation sample, generally associated with macroeconomic policy changes. Finally, it was possible to identify that after the floating of the Brazilian currency the external income elasticity of exports jumped to a higher level, while the exchange rate elasticities of both exports and imports showed some reduction. We conclude that the huge trade surpluses recently observed are the result of a particular combination of favorable external prices, a depreciated real exchange rate and a high level of world income growth, as well as some structural change associated with the greater responsiveness of exports to world growth, probably due to the resurgence of Brazilian external comparative advantages in the wake of the currency flotation of 1999.
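A minimal sketch of the Johansen step using statsmodels, with simulated series standing in for the Brazilian trade data: the trace test indicates the cointegration rank, after which a VECM delivers the long-run vector and the adjustment coefficients:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

rng = np.random.default_rng(6)

# Hypothetical system sharing one stochastic trend, so the two (log)
# series - think log exports and log imports - are cointegrated.
T = 300
trend = np.cumsum(rng.normal(size=T))
x = trend + rng.normal(scale=0.5, size=T)
m = 0.8 * trend + rng.normal(scale=0.5, size=T)
data = np.column_stack([x, m])

# Johansen trace test with a constant (det_order=0) and one lagged diff.
res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:   ", np.round(res.lr1, 2))
print("5% critical values: ", res.cvt[:, 1])   # columns: 90%, 95%, 99%

# With cointegration rank 1, a VECM separates the long-run relation
# from the short-run adjustment dynamics.
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print("cointegrating vector (beta):", np.round(vecm.beta.ravel(), 2))
print("adjustment speeds (alpha):  ", np.round(vecm.alpha.ravel(), 2))
```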
Eliasson, Ann-Charlotte. "Smooth transitions in macroeconomic relationships." Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics (Ekonomiska forskningsinstitutet vid Handelshögsk.) (EFI), 1999. http://www.hhs.se/efi/summary/516.htm.
Full textSantos, Douglas Gomes dos. "Ensaios em econometria aplicada a finanças e macroeconomia utilizando a abordagem de regressão MIDAS." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/103954.
Full textThe Mixed Data Sampling (MIDAS) regression approach, proposed by Ghysels et al. (2004), allows us to directly relate variables sampled at different frequencies. This characteristic is particularly attractive when one wishes to use the data at their original sampling frequencies, as well as when the objective is to calculate multi-period-ahead forecasts. In this thesis, we use the MIDAS regression approach in three papers with empirical applications in finance and macroeconomics. All papers are comparative studies; with applications in different forecasting contexts, we aim to contribute comparative empirical evidence. In the first paper, we explore comparative results in the context of multi-period volatility forecasting. We compare the MIDAS approach with two widely used methods of producing multi-period forecasts: the direct and the iterated approaches. Their relative performance is investigated in a Monte Carlo study and in an empirical study in which we forecast volatility at horizons of up to 60 days ahead. The results of the Monte Carlo study indicate that the MIDAS forecasts are the best at horizons of 15 days ahead and longer, whereas the iterated forecasts are superior at the shorter horizons of 5 and 10 days ahead. In the empirical study, using daily returns on the S&P 500 and NASDAQ indexes, the results are less conclusive, but suggest a better performance for the iterated forecasts. All analyses are out-of-sample. In the second paper, we compare several multi-period volatility forecasting models, specifically from the MIDAS and HAR families, in terms of out-of-sample forecasting accuracy, and we also consider combinations of the models' forecasts. Using intra-daily returns on the IBOVESPA, we calculate volatility measures such as realized variance, realized power variation and realized bipower variation to be used as regressors in both model families. Further, we use a nonparametric procedure to separately measure the continuous sample-path variation and the discontinuous jump part of the quadratic variation process, and we estimate MIDAS and HAR specifications with the continuous and jump variability measures as separate regressors. Our results in terms of mean squared error suggest that regressors involving volatility measures that are robust to jumps (i.e., realized bipower variation and realized power variation) are better at forecasting future volatility; however, the forecasts based on these regressors are in general not statistically different from those based on realized variance (the benchmark regressor). Moreover, we find that, in general, the relative forecasting performances of the three approaches (MIDAS, HAR and forecast combinations) are statistically equivalent. In the third paper, we compare the Markov-Switching MIDAS (MS-MIDAS) and Smooth Transition MIDAS (STMIDAS) models in terms of forecast accuracy. We perform a real-time forecasting exercise in which out-of-sample forecasts of quarterly U.S. output growth are generated using monthly financial indicators. In this exercise we also consider, for comparison, linear MIDAS models and other forecasting models (linear and nonlinear) that include the information in the indicators via temporal aggregation of the monthly observations. The results of the empirical study show that, in general, the MS-MIDAS models provide more accurate forecasts than the STMIDAS models.
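A minimal sketch of a MIDAS regression on simulated data: a quarterly variable is regressed on a weighted sum of monthly lags, the lag weights follow a two-parameter exponential Almon polynomial, and all parameters are estimated jointly by nonlinear least squares; the papers' specifications are considerably richer:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)

def almon_weights(theta1, theta2, K):
    """Exponential Almon lag weights, normalised to sum to one."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# Simulated data: Tq quarters of y explained by the K most recent monthly
# observations of an indicator x (3 new monthly values per quarter).
K, Tq = 12, 200
x = rng.normal(size=3 * Tq + K)
X = np.array([x[3 * t + K : 3 * t : -1] for t in range(Tq)])  # lag matrix
y = 0.5 + 2.0 * X @ almon_weights(0.1, -0.05, K) \
    + rng.normal(scale=0.3, size=Tq)

def residuals(p):
    b0, b1, t1, t2 = p
    return y - b0 - b1 * X @ almon_weights(t1, t2, K)

# Joint nonlinear least squares over intercept, slope and weight shape.
fit = least_squares(residuals, x0=np.array([0.0, 1.0, 0.0, 0.0]))
print("intercept, slope, theta1, theta2:", np.round(fit.x, 3))
```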
Zeng, Ning. "The usefulness of econometric models with stochastic volatility and long memory : applications for macroeconomic and financial time series." Thesis, Brunel University, 2009. http://bura.brunel.ac.uk/handle/2438/3903.
Full textKAVTARADZE, LASHA. "DINAMICS AND LATENT VARIABLES IN APPLIED MACROECONOMICS." Doctoral thesis, Università Cattolica del Sacro Cuore, 2016. http://hdl.handle.net/10280/16793.
Full textThis Ph.D. thesis consists of three chapters on evaluating inflation dynamics in Georgia and on modeling and forecasting nominal exchange rates for the European Eastern Partnership (EaP) countries using modern applied econometric techniques. In the first chapter, we survey models that produce high predictive power for forecasting exchange rates and inflation. This survey reveals that factor-based and time-varying parameter (TVP) models generate superior forecasts relative to all other models. In the second chapter, we study inflation dynamics in Georgia using a hybrid New Keynesian Phillips Curve (NKPC) nested within a time-varying parameter (TVP) framework. Estimation of a TVP model with stochastic volatility shows low inflation persistence over the entire time span (1996-2012), while revealing increasing volatility of inflation shocks since 2003. Moreover, the parameter estimates point to the forward-looking component of the model gaining importance following the National Bank of Georgia (NBG) adoption of inflation targeting in 2009. In the third chapter, we construct Factor Vector Autoregressive (FVAR) models to forecast nominal exchange rates for the EaP countries. This approach provides better forecasts of nominal exchange rates than those produced by a random walk.
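A stylized sketch of the TVP mechanics on simulated data: Phillips-curve-style coefficients follow random walks and are tracked with a Kalman filter; the chapter's model additionally features stochastic volatility, which this sketch holds fixed:

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated data: the dependent variable loads on a constant and on an
# output-gap proxy, with coefficients drifting as random walks.
T = 300
x = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.array([0.5, 0.3]) + np.cumsum(0.02 * rng.normal(size=(T, 2)), axis=0)
y = np.sum(x * beta, axis=1) + 0.2 * rng.normal(size=T)

# Kalman filter for y_t = x_t' b_t + e_t,  b_t = b_{t-1} + u_t.
R = 0.2 ** 2                      # measurement noise variance (held fixed)
Q = (0.02 ** 2) * np.eye(2)       # state innovation covariance
b = np.array([0.5, 0.3])          # state mean
P = np.eye(2)                     # state covariance
b_filtered = np.zeros((T, 2))
for t in range(T):
    P = P + Q                     # predict: random-walk state
    H = x[t]
    S = H @ P @ H + R             # one-step forecast error variance
    K = P @ H / S                 # Kalman gain
    b = b + K * (y[t] - H @ b)    # update with the forecast error
    P = P - np.outer(K, H @ P)
    b_filtered[t] = b

print("final filtered coefficients:", np.round(b_filtered[-1], 2))
print("true final coefficients:    ", np.round(beta[-1], 2))
```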
Nunes, Maurício Simiano. "A relação entre o mercado de ações brasileiro e as variáveis macroeconômicas no período pós-plano real." Florianópolis, SC, 2003. http://repositorio.ufsc.br/xmlui/handle/123456789/85284.
Full textMade available in DSpace on 2012-10-20T17:08:40Z (GMT). No. of bitstreams: 1 190307.pdf: 423262 bytes, checksum: 6b43f7cda503be2ee6155bac1b07a5cd (MD5)
Skalin, Joakim. "Modelling macroeconomic time series with smooth transition autoregressions." Doctoral thesis, Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-650.
Full textDiss. Stockholm : Handelshögskolan, 1999
Galgau, Olivia. "Essays in international economics and industrial organization." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210773.
Full textThe first chapter aims to bring together the literature on economic integration, firm mobility and investment. It contains two sections: one dedicated to the literature on FDI and the second covering the literature on firm entry and exit, economic performance and economic and business regulation.
In the second chapter I examine the relationship between the Single Market and FDI both in an intra-EU context and from outside the EU. The empirical results show that the impact of the Single Market on FDI differs substantially from one country to another. This finding may be due to the functioning of institutions.
The third chapter studies the relationship between the level of external trade protection put into place by a Regional Integration Agreement (RIA) and the option of a firm from outside the RIA block to serve the RIA market through FDI rather than exports. I find that the level of external trade protection put in place by the RIA depends on the RIA country's capacity to benefit from FDI spillovers, on the magnitude of the set-up costs of building a plant in the RIA, and on the amount of external trade protection erected with respect to the RIA by the country outside the regional block.
The fourth chapter studies how the firm entry and exit process is affected by product market reforms and regulations, and how this in turn impacts macroeconomic performance. The results show that an increase in deregulation leads to a rise in firm entry and exit, which especially affects macroeconomic performance as measured by output growth and labor productivity growth. The analysis at the sector level shows that results can differ substantially across industries, which implies that deregulation policies should be conducted at the sector level rather than at the global macroeconomic level.
Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished
Song, Keran. "Business Cycle Effects on US Sectoral Stock Returns." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2008.
Full textFodor, Bryan D. "The effect of macroeconomic variables on the pricing of common stock under trending market conditions." Thesis, Department of Business Administration, University of New Brunswick, 2003. http://hdl.handle.net/1882/49.
Full textTypescript. Bibliography: leaves 83-84. Also available online through University of New Brunswick, UNB Electronic Theses & Dissertations.
FAUSER, SIMON GEORG. "Un modello reggionale del mercato di lavoro per la Germania - an analisi degli shock macroeconomici e variabili della politica economica." Doctoral thesis, Università Cattolica del Sacro Cuore, 2009. http://hdl.handle.net/10280/622.
Full textThe rise in European unemployment during the last decades stems from high unemployment in the large European nations. At the sub-national level, unemployment rates within these large nations differ extensively and mirror distinct economic structures. Together with the increased economic and political complexity, regionalisation and integration within Europe, this calls for tools assisting the policy decision maker in analysing the impact of policies at the regional level. We construct such a tool and apply it to data on Western German states from 1975-2005. The model builds on previous approaches in macroeconomic labour market modelling in the Italian context and extends such approaches to incorporate the institutional setting and aspects of innovation. We utilize the model to examine the reactions of regional labour markets to macroeconomic shocks. Specifically, the following question arises: do labour markets of regions that have a high share of innovative industries and knowledge-intensive services respond differently to exogenous shocks than regions with less innovative industries and services? The model shows good performance and reveals manifold insights that are useful for regional, national as well as supranational policy makers.
Zhao, Zilong. "Extracting knowledge from macroeconomic data, images and unreliable data." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT074.
Full textSystem identification and machine learning are two similar concepts independently used in the automatic control and computer science communities. System identification uses statistical methods to build mathematical models of dynamical systems from measured data. Machine learning algorithms build a mathematical model from sample data, known as "training data" (clean or not), in order to make predictions or decisions without being explicitly programmed to do so. Besides prediction accuracy, convergence speed and stability are two other key factors in evaluating the training process, especially in the online learning scenario, and these properties have already been well studied in control theory. Therefore, this thesis carries out interdisciplinary research on the following topics: 1) System identification and optimal control on macroeconomic data: We first model China's macroeconomic data with a Vector Auto-Regression (VAR) model, then identify the cointegration relations between variables and use a Vector Error Correction Model (VECM) to study the short-term fluctuations around the long-term equilibrium; Granger causality is also studied within the VECM. This work reveals the trend of China's economic growth transition from export-oriented to consumption-oriented. Due to limitations of the Chinese economic data, we turn to French macroeconomic data in the second study. We represent the model in state-space form and put it into a feedback control framework, with the controller designed as a Linear-Quadratic Regulator (LQR). The system can apply the control law to bring the economy to a desired state. We can also impose perturbations on outputs and constraints on inputs, which emulates the real-world situation of an economic crisis. Economists can then observe the recovery trajectory of the economy, which yields meaningful implications for policy-making. 2) Using control theory to improve the online learning of deep neural networks: We propose a performance-based learning rate algorithm, E (Exponential)/PD (Proportional Derivative) feedback control, which considers the Convolutional Neural Network (CNN) as the plant, the learning rate as the control signal and the loss value as the error signal. Results show that E/PD outperforms the state of the art in final accuracy, final loss and convergence speed, and the results are also more stable. However, one observation from the E/PD experiments is that the learning rate decreases while the loss continuously decreases; since a decreasing loss means the model is approaching an optimum, we should not decrease the learning rate. To prevent this, we propose an event-based E/PD. Results show that it improves on E/PD in final accuracy, final loss and convergence speed. Another observation from the E/PD experiments is that online learning fixes a constant number of training epochs for each batch. Since E/PD converges fast, the significant improvement comes only from the first epochs. Therefore, we propose a second event-based E/PD, which inspects the historical loss; when the progress of training falls below a certain threshold, we move to the next batch. Results show that it can save up to 67% of epochs on the CIFAR-10 dataset without degrading performance much. 3) Machine learning from unreliable data: We propose a generic framework, the Robust Anomaly Detector (RAD). The data selection part of RAD is a two-layer framework, where the first layer is used to filter out suspicious data and the second layer detects anomaly patterns from the remaining data. We also derive three variations of RAD, namely voting, active learning and slim, which use additional information, e.g., the opinions of conflicting classifiers and queries to oracles. We iteratively update the historical selected data to improve the accumulated data quality. Results show that RAD can continuously improve the model's performance in the presence of label noise. The three variations of RAD all improve on the original setting, and RAD Active Learning performs almost as well as the case where there is no noise on the labels.
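A minimal sketch of the control step, assuming a stylized two-variable state-space model rather than the estimated French one: scipy's discrete algebraic Riccati solver yields the LQR feedback gain, and the closed loop steers the state back to target from a 'crisis' initial condition:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Stylized model x_{t+1} = A x_t + B u_t, where x holds deviations of two
# macro variables from their targets and u is a scalar policy input.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0],
              [0.5]])
Q = np.eye(2)             # penalty on deviations from the desired state
R = np.array([[1.0]])     # penalty on use of the policy instrument

# LQR gain K = (R + B'PB)^(-1) B'PA, with P solving the discrete ARE.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed loop x_{t+1} = (A - BK) x_t from a "crisis" state: the control
# law brings the system back towards the target (the origin).
x = np.array([2.0, -1.0])
for _ in range(25):
    x = (A - B @ K) @ x
print("state after 25 periods:", np.round(x, 4))
```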
Liebermann, Joëlle. "Essays in real-time forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209644.
Full textforecasting.
The issue of using data as available in real time to forecasters, policymakers or financial markets is an important one which has only recently been taken on board in the empirical literature. Data available and used in real time are preliminary and differ from ex-post revised data, and given that data revisions may be quite substantial, the use of latest-available instead of real-time data can substantially affect empirical findings (see, among others, Croushore's (2011) survey). Furthermore, as variables are released on different dates and with varying degrees of publication lag, datasets are characterized by the so-called “ragged-edge” structure problem; in order not to disregard timely information, special econometric frameworks, such as that developed by Giannone, Reichlin and Small (2008), must be used.
The first Chapter, “The impact of macroeconomic news on bond yields: (in)stabilities over time and relative importance”, studies the reaction of U.S. Treasury bond yields to real-time market-based news in the daily flow of macroeconomic releases, which provide most of the relevant information on their fundamentals, i.e. the state of the economy and inflation. We find that yields react systematically to a set of news consisting of the soft data, which have very short publication lags, and the most timely hard data, with the employment report being the most important release. However, sub-sample evidence reveals parameter instability in terms of the absolute and relative size of the yields' response to news, as well as its significance. In particular, the often-cited dominance of the employment report for markets has been evolving over time, as the size of the yields' reaction to it was steadily increasing. Moreover, over the recent crisis period there has been an overall switch in the relative importance of soft and hard data compared to the pre-crisis period, with the latter becoming more important even if less timely, and the scope of hard data to which markets react has increased and is more balanced, being less concentrated on the employment report. Markets have become more reactive to news over the recent crisis period, particularly to hard data. This is a consequence of the fact that in periods of high uncertainty (a bad state), markets starve for information and attach a higher value to the marginal information content of these news releases.
The second and third Chapters focus on the real-time ability of models to nowcast and forecast in a data-rich environment. They use an econometric framework that can deal with large panels that have a “ragged-edge” structure and, to evaluate the models in real time, we constructed a database of vintages for US variables reproducing the exact information that was available to a real-time forecaster.
The second Chapter, “Real-time nowcasting of GDP: a factor model versus professional
forecasters”, performs a fully real-time nowcasting (forecasting) exercise of US real GDP
growth using Giannone, Reichlin and Small’s (2008), henceforth GRS, dynamic factor
model (DFM) framework, which can handle the large unbalanced datasets available
in real time. We track the daily evolution of the model’s nowcasting performance
throughout the current and next quarter. Similarly to GRS’s pseudo real-time results, we find that
the precision of the nowcasts increases with information releases. Moreover, the Survey of
Professional Forecasters does not carry additional information with respect to the model,
suggesting that the often-cited superiority of the former, attributable to judgment, is weak
over our sample. As one moves forward along the real-time data flow, the continuous
updating of the model provides a more precise estimate of current-quarter GDP growth and
the Survey of Professional Forecasters becomes stale. These results are robust to the recent
recession period.
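The claim that the survey carries no additional information can be checked with a standard forecast-encompassing regression: regress the realized outcome on both nowcasts and test whether the survey's weight is significant once the model nowcast is included. The sketch below illustrates the mechanics on simulated stand-in series; it is not the thesis's actual dataset or test.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
truth = rng.normal(2.5, 1.0, size=80)                          # realized GDP growth
model_nowcast = truth + rng.normal(scale=0.5, size=80)         # informative nowcast
spf_nowcast = model_nowcast + rng.normal(scale=0.7, size=80)   # survey = model + noise

# Encompassing regression: truth on a constant and both nowcasts. Here the
# survey adds nothing beyond the model by construction, so its coefficient
# should be statistically indistinguishable from zero.
X = sm.add_constant(np.column_stack([model_nowcast, spf_nowcast]))
result = sm.OLS(truth, X).fit()
print(result.summary(xname=["const", "model", "spf"]))
```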
The last Chapter, “Real-time forecasting in a data-rich environment”, evaluates the ability
of different models to forecast key real and nominal U.S. monthly macroeconomic variables
in a data-rich environment and from the perspective of a real-time forecaster. Among
the approaches used to forecast in a data-rich environment, we use the pooling of bivariate
forecasts, which is an indirect way to exploit a large cross-section, and the direct pooling of
information using a high-dimensional model (DFM and Bayesian VAR). Furthermore, forecast
combination schemes are used to overcome the choice of model specification faced by
the practitioner (e.g. which criteria to use to select the parametrization of the model), as
we seek evidence regarding the performance of a model that is robust across specifications/
combination schemes. Our findings show that predictability of the real variables is
confined to the recent recession/crisis period. This is in line with the findings of D’Agostino
and Giannone (2012) over an earlier period, namely that the gains in relative performance of models
using large datasets over univariate models are driven by downturn periods, which are characterized
by higher comovements. These results are robust to the combination schemes
or models used. A point worth mentioning is that for nowcasting GDP, exploiting cross-sectional
information along the real-time data flow also helps over the end of the great moderation period. Since GDP is a quarterly aggregate proxying the state of the economy,
monthly variables carry information content for it. But similarly to the findings for the
monthly variables, predictability, as measured by the gains relative to the naive random
walk model, is higher during the crisis/recession period than during tranquil times. Regarding
inflation, results are stable across time, but predictability is mainly found at nowcasting
and forecasting one month ahead, with the BVAR standing out at nowcasting. The results
show that the forecasting gains at these short horizons stem mainly from exploiting timely
information. The results also show that the direct pooling of information using a high-dimensional
model (DFM or BVAR), which takes into account the cross-correlation between the
variables and efficiently deals with the “ragged-edge” structure of the dataset, yields more
accurate forecasts than the indirect pooling of bivariate forecasts/models.
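The contrast between the two pooling strategies can be sketched in a few lines: average many bivariate VAR forecasts of the target, fit one multivariate VAR on the full panel, and compare both with the naive random walk via relative RMSE. Everything below is simulated and the VAR(1) specifications are illustrative assumptions, not the chapter's exact models.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T, n, split = 180, 5, 150
common = np.cumsum(rng.normal(size=T))            # common component creates comovement
data = 0.3 * common[:, None] + rng.normal(size=(T, n))

def one_step(history: np.ndarray, cols: list) -> float:
    """One-step-ahead VAR(1) forecast of variable 0 using the columns in `cols`."""
    fit = VAR(history[:, cols]).fit(1)
    return fit.forecast(history[-fit.k_ar:, cols], steps=1)[0, 0]

errors = {"random walk": [], "pooled bivariate": [], "multivariate": []}
for t in range(split, T):
    hist, actual = data[:t], data[t, 0]
    errors["random walk"].append(hist[-1, 0] - actual)
    bivariate = [one_step(hist, [0, j]) for j in range(1, n)]       # indirect pooling
    errors["pooled bivariate"].append(np.mean(bivariate) - actual)
    errors["multivariate"].append(one_step(hist, list(range(n))) - actual)  # direct pooling

rmse = {k: np.sqrt(np.mean(np.square(v))) for k, v in errors.items()}
for name, value in rmse.items():
    print(f"{name}: RMSE relative to random walk = {value / rmse['random walk']:.2f}")
```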
Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished
Kamps, Christophe. "The dynamic macroeconomic effects of public capital : theory and evidence for OECD countries /." Berlin [u.a.] : Springer, 2004. http://www.loc.gov/catdir/toc/fy054/2004114864.html.
Full text
Hörnell, Fredrik, and Melina Hafelt. "Responsiveness of Swedish housing prices to the 2018 amortization requirement : An investigation using a structural Vector autoregressive model to estimate the impact of macro prudential regulation on the Swedish housing market." Thesis, Södertörns högskola, Nationalekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-35533.
Full text
Fonseca, Marcelo Gonçalves da Silva. "Essays on the credit channel of monetary policy: a case study for Brazil." reponame:Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/11748.
Full text
The outbreak of the subprime crisis in the US in 2008 and of the European sovereign debt crisis in 2010 renewed academic interest in the role played by credit activity in business cycles. The purpose of this work is to present empirical evidence on the credit channel of monetary policy for the Brazilian case, using distinct econometric techniques. The work comprises three articles. The first presents a review of the financial frictions literature, with particular emphasis on its implications for the conduct of monetary policy. It highlights the broad set of unconventional measures used by central banks in emerging and developed countries in response to the disruption of financial intermediation. One chapter in particular is devoted to the challenges faced by emerging-market central banks in conducting monetary policy in an environment of highly integrated capital markets. The second article presents an empirical investigation of the implications of the credit channel through the lens of a structural FAVAR (SFAVAR) model. The term "structural" stems from the estimation strategy adopted, which makes it possible to attach a clear economic interpretation to the estimated factors. The results show that shocks to the proxies for the external finance premium and the volume of credit produce large and persistent fluctuations in inflation and economic activity, accounting for more than 30% of the latter's variance decomposition at a three-year horizon. Counterfactual simulations show that the credit channel amplified the economic contraction in Brazil during the acute phase of the global financial crisis in the last quarter of 2008, and subsequently provided a relevant impulse to the recovery that followed. The third article presents a Bayesian estimation of a New Keynesian DSGE model incorporating the financial accelerator mechanism developed by Bernanke, Gertler and Gilchrist (1999). The results provide evidence in line with that obtained in the previous article: innovations in the external finance premium, represented by credit spreads, have relevant effects on the dynamics of aggregate demand and inflation. In addition, monetary policy shocks are found to be amplified by the financial accelerator. Keywords: Macroeconomics, Monetary Policy, Credit Channel, Financial Accelerator, FAVAR, DSGE, Bayesian Econometrics
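The variance-decomposition exercise described above can be illustrated with a small VAR: after estimation, the forecast-error variance decomposition attributes each variable's uncertainty at a given horizon to the model's orthogonalized shocks. The sketch below uses simulated stand-ins for the external finance premium, activity and inflation, with an illustrative Cholesky ordering; it is not the SFAVAR of the thesis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
T = 300
shocks = rng.normal(size=(T, 3))
premium, activity, inflation = np.zeros(T), np.zeros(T), np.zeros(T)
for t in range(1, T):
    # Simple recursive data-generating process: the premium dampens activity,
    # activity feeds into inflation (all coefficients are invented).
    premium[t] = 0.7 * premium[t - 1] + shocks[t, 0]
    activity[t] = 0.6 * activity[t - 1] - 0.3 * premium[t - 1] + shocks[t, 1]
    inflation[t] = 0.5 * inflation[t - 1] + 0.2 * activity[t - 1] + shocks[t, 2]

data = pd.DataFrame({"premium": premium, "activity": activity, "inflation": inflation})
res = VAR(data).fit(2)
fevd = res.fevd(12)  # forecast-error variance decomposition at horizons 1..12
fevd.summary()       # prints each shock's share of each variable's variance
```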
Curto, Millet Fabien. "Inflation expectations, labour markets and EMU." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:9187d2eb-2f93-4a5a-a7d6-0fb6556079bb.
Full text