Dissertations / Theses on the topic 'Macroeconomics – Econometric models'


Consult the top 50 dissertations / theses for your research on the topic 'Macroeconomics – Econometric models.'


1

Steinbach, Max Rudibert. "Essays on dynamic macroeconomics." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86196.

Abstract:
Thesis (PhD)--Stellenbosch University, 2014.
In the first essay of this thesis, a medium-scale DSGE model is developed and estimated for the South African economy. When used for forecasting, the model outperforms private-sector economists in forecasting CPI inflation, GDP growth and the policy rate over certain horizons. In the second essay, the benchmark DSGE model is extended to include the yield on South African 10-year government bonds. The model is then used to decompose the 10-year yield spread into (1) the structural shocks that contributed to its evolution during the inflation-targeting regime of the South African Reserve Bank, and (2) an expected yield and a term premium. In addition, it is found that changes in the South African term premium may predict future real economic activity. Finally, the need for DSGE models to take account of financial frictions became apparent during the recent global financial crisis. As a result, the final essay incorporates a stylised banking sector into the benchmark DSGE model described above. The optimal response of the South African Reserve Bank to financial shocks is then analysed within the context of this structural model.
2

Emiris, Marina. "Essays on macroeconomics and finance." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210764.

3

Walker, Sébastien. "Essays in development macroeconomics." Thesis, University of Oxford, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.712398.

4

Santos, Monteiro Paulo. "Essays on uninsurable individual risk and heterogeneity in macroeconomics." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210528.

Abstract:
This thesis examines empirical and theoretical issues related to the role of uninsurable individual risk and heterogeneity in macroeconomics. It includes four chapters. The first chapter uses data from the Panel Study of Income Dynamics (PSID) to test full risk-sharing among North American households. The second chapter is a short essay in which I use simulated data to show how the method applied in the previous chapter can distinguish between partial risk-sharing and imperfect credit markets. The third chapter develops a heterogeneous-agent dynamic general equilibrium model which jointly models aggregate saving and employment. Finally, the fourth chapter empirically investigates the ability of financial market incompleteness to help explain the equity premium puzzle. The central motivation throughout the dissertation is the recognition that the interaction between cross-sectional volatility and aggregate volatility is of fundamental importance for understanding how we should model macroeconomic aggregates such as aggregate consumption, asset prices and business cycle fluctuations.


Doctorate in Economics and Management Sciences

5

Delle, Monache Davide. "Essays on state space models and macroeconomic modelling." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609745.

6

De, Antonio Liedo David. "Structural models for macroeconomics and forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210142.

Abstract:
This thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.

Chapter 1, entitled "A Model for Real-Time Data Assessment with an Application to GDP Growth Rates", provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. The model thus encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise-and-news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate the two extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008): it would be possible to reject or accept both of them simultaneously. In turn, the model for the DPP presented here allows for the simultaneous presence of both noise and news. While the "regression approach" followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it cannot quantify it, as our model does.
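To make the classical benchmark concrete, here is a minimal sketch of the noise-and-news regressions on simulated data vintages; the function and variable names are illustrative assumptions, not taken from the thesis:

```python
import numpy as np
import statsmodels.api as sm

def noise_news_regressions(prelim, final):
    """Classical tests on the revision (final - prelim).
    News view: the preliminary figure is an efficient forecast of the final
    one, so the revision should be unpredictable from it (zero slope on prelim).
    Noise view: the preliminary figure equals the final value plus measurement
    error, so the revision should be uncorrelated with the final value
    (zero slope on final)."""
    revision = final - prelim
    news_test = sm.OLS(revision, sm.add_constant(prelim)).fit()
    noise_test = sm.OLS(revision, sm.add_constant(final)).fit()
    return news_test, noise_test

# Simulated GDP growth vintages in which revisions add genuine news:
# the final figure is the preliminary one plus an orthogonal news shock.
rng = np.random.default_rng(0)
prelim = rng.normal(2.5, 1.0, size=150)
final = prelim + rng.normal(0.0, 0.6, size=150)
news_test, noise_test = noise_news_regressions(prelim, final)
print(news_test.pvalues, noise_test.pvalues)
```

As the chapter notes, these two hypotheses are not formulated as collectively exhaustive in this framework, so both can in principle be rejected (or accepted) at once; the model proposed in the chapter instead allows noise and news to coexist and be quantified.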

The second and third chapters acknowledge the possibility that macroeconomic data are measured with error, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process described in the first chapter.

Chapter 2, entitled "Revisiting the Success of the RBC model", proposes the use of dynamic factor models as an alternative to VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture the weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model, augmented with a serially correlated noise component, against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced-form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful for forecasting than the strong behavioral assumptions imposed by the microfoundations of the model economy.

The last chapter, "What are Shocks Capturing in DSGE modeling", contributes to current debates on the use and interpretation of larger DSGE models. A recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005; Uhlig, 2004). In this chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which explicitly models the statistical agency along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification.
Doctorate in Economics and Management Sciences

7

Calver, Robin Barnaby. "Macroeconomic and Political Determinants of Foreign Direct Investment in the Middle East." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1074.

Abstract:
This study argues that governments with sustained GDP growth, open markets, low country risk, high levels and low standard deviation of government performance, and few or no occurrences of war will see larger inflows of foreign direct investment (FDI) over time. Scholarship on the determinants of FDI variously emphasizes the influence of GDP growth, the openness of a country's economy, a government's level of political capacity, the level of country risk, and the negative effects of inter-, intra- and extrastate conflict. These studies, while providing insightful and substantial statistical results, fail to capture the simultaneous effects of macroeconomic, government-performance, country-risk and war variables. The present study attempts to resolve this gap in the literature on FDI by proposing a multi-dimensional model of the combined effects of un-weighted macroeconomic, political, country-risk and war variables on FDI flows over time. The empirical results confirm the expected multi-dimensional nature of FDI flows over time and provide insight into the macroeconomic and political effects on regional and country-level yearly flows of FDI, as well as yielding some unexpected and counter-intuitive results about the role war plays in FDI flows over time.
8

Jindal, Bhavin. "The Chinese Dragon Lands in Africa: Chinese Contracts and Economic Growth in Africa." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1564.

Abstract:
China has been awarding an increasing number of contracts for projects in Africa. This study tests the effect of Chinese contracts on economic growth in 50 African countries, as well as the correlation between Chinese contracts and other economic indicators, using data from the World Bank and the National Bureau of Statistics of China covering 2000 to 2015. The study finds that, over this period, Chinese contracts have not been a significant factor in the economic growth of African countries as a whole. The analysis does find that Chinese contracts are significant for economic growth when considering only the five countries that have received the most contracts on average.
9

Ji, Inyeob (Economics, Australian School of Business, UNSW). "Essays on testing some predictions of RBC models and the stationarity of real interest rates." University of New South Wales, Economics, 2008. http://handle.unsw.edu.au/1959.4/41441.

Abstract:
This dissertation contains a series of essays that provide empirical evidence for Australia on some fundamental predictions of real business cycle models and on the convergence and persistence of real interest rates. Chapter 1 provides a brief introduction to the issues examined in each chapter and an overview of the methodologies used. Tests of various basic predictions of standard real business cycle models for Australia are presented in Chapters 2, 3 and 4. Chapter 2 considers the question of great ratios for Australia: ratios of macroeconomic variables that standard models predict to be stationary in the steady state. Using time series econometric techniques (unit root and cointegration tests), Australian great ratios are examined. In Chapter 3 a more restrictive implication of real business cycle models than the existence of great ratios is considered: following the methodology proposed by Canova, Finn and Pagan (1994), the equilibrium decision rules of some standard real business cycle models are tested on Australian data. The final essay on this topic is presented in Chapter 4, in which a large-country, small-country model is used to try to understand the reason for the sharp rise in Australia's share of world output that began around 1990. Chapter 5 discusses real interest rate linkages in the Pacific Basin region. Vector autoregressive models and bootstrap methods are adopted to study financial linkages between East Asian markets, Japan and the US. Given the apparent non-stationarity of real interest rates, a related issue is examined in Chapter 6, viz. the persistence of international real interest rates and the estimation of their half-life. The half-life is selected as a means of measuring the persistence of real rates; bootstrap methods are employed to overcome small-sample issues in the estimation, and a non-standard statistical inference methodology (Highest Density Regions) is adopted. Chapter 7 reapplies the Highest Density Regions methodology and bootstrap half-life estimation to the data used in Chapters 2 and 5, providing a robustness check on the results of the standard unit root tests applied to the data in those chapters. The main findings of the thesis are as follows. The long-run implications of real business cycle models are largely rejected by the Australian data, both for the existence of great ratios and when the explicit decision rules are employed. When the small open economy features of the Australian economy are incorporated in a two-country RBC model, a country-specific productivity boom seems to provide a possible explanation for the rise in Australia's share of world output. The essays that examine real interest rates suggest the following results. Following the East Asian financial crisis of 1997-98 there appears to have been a decline in the importance of Japan in influencing developments in the Pacific Basin region. In addition, there is evidence that following the crisis Korea's financial market became less insular and more integrated with the US. Finally, results obtained from the half-life estimators suggest that, despite the usual findings from unit root tests, real interest rates may in fact exhibit mean-reversion.
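To illustrate the half-life measure of persistence, here is a minimal sketch assuming an AR(1) process for the real rate, with a residual bootstrap for small-sample inference; it reports simple percentile intervals rather than the Highest Density Regions used in the thesis, and all names and data are illustrative:

```python
import numpy as np

def ar1_halflife(x):
    """Estimate AR(1) persistence rho by OLS and the implied half-life
    ln(0.5)/ln(rho): the horizon at which half of a shock has decayed."""
    y, ylag = x[1:], x[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rho = beta[1]
    return np.log(0.5) / np.log(rho) if 0 < rho < 1 else np.inf

def bootstrap_halflife(x, n_boot=999, seed=0):
    """Residual bootstrap: rebuild the series from resampled AR(1) residuals
    and re-estimate the half-life to gauge small-sample uncertainty."""
    rng = np.random.default_rng(seed)
    y, ylag = x[1:], x[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    draws = []
    for _ in range(n_boot):
        e = rng.choice(resid, size=len(resid), replace=True)
        xb = np.empty(len(x)); xb[0] = x[0]
        for t in range(1, len(x)):
            xb[t] = beta[0] + beta[1] * xb[t - 1] + e[t - 1]
        draws.append(ar1_halflife(xb))
    return np.percentile(draws, [5, 50, 95])

# Simulated real rate with rho = 0.9 (true half-life around 6.6 periods)
rng = np.random.default_rng(1)
r = np.empty(200); r[0] = 0.0
for t in range(1, 200):
    r[t] = 0.9 * r[t - 1] + rng.normal(scale=0.5)
print(ar1_halflife(r), bootstrap_halflife(r))
```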
10

Conflitti, Cristina. "Essays on the econometrics of macroeconomic survey data." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209635.

Abstract:
This thesis contains three essays covering different topics in the field of statistics and econometrics of survey data. Chapters one and two analyse two aspects of the Survey of Professional Forecasters (SPF) dataset. This survey provides a wealth of information on the macroeconomic expectations of professional forecasters and thus offers the opportunity to exploit a rich information set, but it poses the challenge of how to extract the relevant information properly. The last chapter addresses the issue of analyzing the opinions on the euro reported in the Flash Eurobarometer dataset.

The first chapter, "Measuring Uncertainty and Disagreement in the European Survey of Professional Forecasters", proposes a density forecast methodology, based on the piecewise linear approximation of individual forecasting histograms, to measure the uncertainty and disagreement of professional forecasters. Since the introduction of the SPF in the US in 1960, it has been clear that such surveys are a useful source of information for measuring disagreement and uncertainty without relying on macroeconomic or time series models. Direct measures of uncertainty are seldom available, whereas many surveys report point forecasts from a number of individual respondents, and there has been a long tradition of using measures of the dispersion of individual respondents' point forecasts (disagreement or consensus) as proxies for uncertainty. The SPF is an exception: it directly asks for the point forecast and for the probability distribution, in the form of a histogram, associated with the macro variables of interest. An important issue concerns how to approximate individual probability densities and obtain accurate individual measures of disagreement and uncertainty before computing the aggregate measures. In contrast to Zarnowitz and Lambros (1987) and Giordani and Soderlind (2003), we overcome the problems associated with distributional assumptions on probability density forecasts by using a non-parametric approach that, instead of assuming a functional form for the individual probability law, approximates the histogram by a piecewise linear function. In addition, and unlike earlier works that focus on US data, we employ European data, considering gross domestic product (GDP), inflation and unemployment.
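As an illustration of how such histogram-based measures can be computed, the sketch below assumes probability mass is uniform within each bin (one way to operationalize a piecewise linear CDF), derives each forecaster's mean and variance, and aggregates uncertainty (average individual variance) and disagreement (cross-sectional variance of individual means); the chapter's exact construction may differ, and all names and numbers are illustrative:

```python
import numpy as np

def moments_from_histogram(edges, probs):
    """Mean and variance of a forecast density built by spreading each
    bin's probability mass uniformly over the bin (piecewise linear CDF)."""
    probs = np.asarray(probs, float) / np.sum(probs)
    mids = 0.5 * (edges[:-1] + edges[1:])
    widths = np.diff(edges)
    mean = np.sum(probs * mids)
    # within-bin uniform mass adds widths**2 / 12 to the between-bin variance
    var = np.sum(probs * ((mids - mean) ** 2 + widths ** 2 / 12.0))
    return mean, var

def uncertainty_and_disagreement(edges, prob_matrix):
    """Aggregate measures: average individual variance (uncertainty) and
    cross-sectional variance of individual means (disagreement)."""
    stats = np.array([moments_from_histogram(edges, p) for p in prob_matrix])
    means, variances = stats[:, 0], stats[:, 1]
    return variances.mean(), means.var()

# Two hypothetical forecasters reporting probabilities over the same bins
edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
probs = np.array([[0.1, 0.4, 0.4, 0.1],
                  [0.0, 0.2, 0.5, 0.3]])
print(uncertainty_and_disagreement(edges, probs))
```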

The second chapter, "Optimal Combination of Survey Forecasts", is based on joint work with Christine De Mol and Domenico Giannone. It proposes an approach to optimally combine survey forecasts, exploiting the whole covariance structure among forecasters. There is a vast literature on forecast combination methods advocating their usefulness from both the theoretical and empirical points of view (see e.g. the recent review by Timmermann (2006)). Surprisingly, simple methods tend to outperform more sophisticated ones, as shown for example by Genre et al. (2010) on the combination of forecasts in the SPF conducted by the European Central Bank (ECB). The main conclusion of several studies is that the simple equal-weighted average constitutes a benchmark that is hard to improve upon. In contrast to a large part of the literature, which does not exploit the correlation among forecasters, we take into account the full covariance structure and determine the optimal weights for the combination of point forecasts as the minimizers of the mean squared forecast error (MSFE), under the constraint that these weights are nonnegative and sum to one. We compare our combination scheme with other methodologies in terms of forecasting performance, and the results show that the proposed optimal combination scheme is an appropriate methodology for combining survey forecasts. The literature on point forecast combination is well developed, but fewer studies analyze the combination of density forecasts, so we extend our work to density forecast combination. Building on the main results in Hall and Mitchell (2007), we propose an iterative algorithm for computing the density weights which maximize the average logarithmic score over the sample period. The empirical application covers European GDP and inflation forecasts. Results suggest that the optimal weights obtained via the iterative algorithm outperform the equal-weighted density combinations used by the ECB.
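A minimal sketch of this constrained MSFE minimization, with the second-moment matrix of forecast errors estimated from a hypothetical training sample; the solver choice and all names are assumptions of this illustration:

```python
import numpy as np
from scipy.optimize import minimize

def optimal_combination_weights(errors):
    """Weights minimizing the MSFE of the combined forecast, w' S w with
    S the second-moment matrix of individual forecast errors, subject to
    w >= 0 and sum(w) = 1, as in the constrained formulation above."""
    n = errors.shape[1]
    S = errors.T @ errors / errors.shape[0]
    objective = lambda w: w @ S @ w
    constraints = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    bounds = [(0.0, 1.0)] * n
    w0 = np.full(n, 1.0 / n)  # equal weights as the starting point
    res = minimize(objective, w0, bounds=bounds, constraints=constraints)
    return res.x

# Hypothetical errors of three forecasters over 100 periods
rng = np.random.default_rng(0)
e = rng.normal(size=(100, 3)) * np.array([1.0, 1.5, 2.0])
print(optimal_combination_weights(e).round(3))
```

The equal-weighted starting point is exactly the hard-to-beat benchmark mentioned above, so comparing the optimized weights against it mirrors the chapter's forecasting exercise.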

The third chapter, entitled "Opinion surveys on the euro: a multilevel multinomial logistic analysis", outlines the multilevel aspects of public attitudes toward the euro. This work was motivated by the ongoing debate on whether the perception of the euro among European citizens, ten years after its introduction, was positive or negative. The aim of the work is therefore to disentangle public attitudes by considering both individual socio-demographic characteristics and macroeconomic features of each country, treating them as two separate levels in a single analysis. A hierarchical structure has the advantage of modelling within-country as well as between-country relations in a single analysis. The multilevel analysis allows for dependence between individuals within countries induced by unobserved heterogeneity between countries, i.e. we include in the estimation country-specific characteristics that are not directly observable. In this chapter we empirically investigate which individual characteristics and country specificities are most important in affecting the perception of the euro. Attitudes toward the euro vary across individuals and countries, and are driven by personal considerations based on the benefits and costs of using the single currency. Individual features, such as a high level of education or living in a metropolitan area, have a positive impact on the perception of the euro. Moreover, country-specific economic conditions can influence individuals' attitudes.
Doctorate in Economics and Management Sciences

11

Malherbe, Frédéric. "Essays on the macroeconomic implications of information asymmetries." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210085.

Abstract:
Throughout this dissertation I propose to walk the reader through several macroeconomic implications of information asymmetries, with a special focus on financial issues. The exercise is mainly theoretical: I develop stylized models that aim at capturing macroeconomic phenomena such as self-fulfilling liquidity dry-ups, the rise and fall of securitization markets, and the creation of systemic risk.

The dissertation consists of three chapters. The first proposes an explanation for self-fulfilling liquidity dry-ups. The second proposes a formalization of the concept of market discipline and an application to securitization markets as risk-sharing mechanisms. The third offers an analysis complementary to the second, as the rise of securitization is presented as the banker's optimal response to strict capital constraints.

Two concepts that do not have a unique meaning in economics play a central role in these models: liquidity and market discipline.

The liquidity of an asset refers to the ability of its owner to transform it into current consumption goods. Secondary markets for long-term assets thus play an important role in that respect. However, such markets might be illiquid due to adverse selection.

In the first chapter, I show that: (1) when agents expect a liquidity dry-up on such markets, they optimally choose to self-insure through the hoarding of non-productive but liquid assets; (2) this hoarding behavior worsens adverse selection and dries up market liquidity; (3) such liquidity dry-ups are Pareto-inefficient equilibria; (4) the government can rule them out. Additionally, I show that idiosyncratic liquidity shocks à la Diamond and Dybvig have stabilizing effects, which is at odds with the banking literature. The main contribution of the chapter is to show that market breakdowns due to adverse selection are highly endogenous to past balance-sheet decisions.

I consider that agents are under market discipline when their current behavior is influenced by future market outcomes. A key ingredient for market discipline to be at play is that the market outcome depends on information that is observable but not verifiable (that is, information that cannot be proved in court and upon which, consequently, enforceable contracts cannot be based).

In the second chapter, after introducing this novel formalization of market discipline, I ask whether securitization really contributes to better risk-sharing: I compare it with other mechanisms that differ in the timing of risk transfer. I find that for securitization to be an efficient risk-sharing mechanism, it requires market discipline to be strong and adverse selection not to be severe. This seems to seriously restrict the set of assets that should be securitized for risk-sharing motives.

Additionally, I show how ex-ante leverage may mitigate interim adverse selection in securitization markets and therefore enhance ex-post risk-sharing. This is interesting because high leverage is usually associated with "excessive" risk-taking.

In the third chapter, I consider risk-neutral bankers facing strict capital constraints; their capital is indeed required to cover worst-case-scenario losses. In such a set-up, I find that: (1) the banker's optimal autarky response is to diversify lower-tail risk and maximize leverage; (2) securitization helps to free up capital and to increase leverage, but distorts incentives to screen loan applicants properly; (3) market discipline mitigates this problem, but if it is overestimated by the supervisor, it leads to excess leverage, which creates systemic risk. Finally, I consider opaque securitization and show that the supervisor (4) faces uncertainty about the trade-off between the size of the economy and the probability and severity of a systemic crisis, and (5) can generally not set capital constraints at the socially efficient level.
Doctorate in Economics and Management Sciences

12

Humpe, Andreas. "Macroeconomic variables and the stock market : an empirical comparison of the US and Japan." Thesis, St Andrews, 2008. http://hdl.handle.net/10023/464.

13

Feng, Ning. "Essays on business cycles and macroeconomic forecasting." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/279.

Abstract:
This dissertation consists of two essays. The first develops a quantitative small open economy dynamic stochastic general equilibrium (DSGE) model with a housing sector, allowing for both contemporaneous and news shocks. The second is an empirical study of macroeconomic forecasting using both structural and non-structural models. In the first essay, we develop a DSGE model with a housing sector, incorporating both contemporaneous and news shocks to domestic and external fundamentals, to explore which shocks to economic fundamentals matter for driving housing market dynamics in a small open economy, and to what extent. The model is estimated by Bayesian methods using data from Hong Kong. The quantitative results show that external shocks and news shocks play a significant role in this market: the contemporaneous shock to foreign housing preference, the contemporaneous shock to the terms of trade, and news shocks to technology in the consumption goods sector each explain about one-third of the variance of housing prices. The contemporaneous terms-of-trade shock and consumption-technology news shocks also contribute 36% and 59%, respectively, to the variance of housing investment. The simulation results enable policy makers to identify the key driving forces behind housing market dynamics and the interaction between the housing market and the macroeconomy in Hong Kong. In the second essay, we compare the forecasting performance of structural and non-structural models for a small open economy. The structural model refers to the small open economy DSGE model with a housing sector from the first essay. In addition, we examine various non-structural models, including both Bayesian and classical time-series methods, in our forecasting exercises. In some models we also include the information from a large-scale quarterly data series, using two approaches to capture the influence of fundamentals: extracting common factors by principal component analysis, as in the dynamic factor model (DFM), the factor-augmented vector autoregression (FAVAR) and the Bayesian FAVAR (BFAVAR); or applying Bayesian shrinkage in a large-scale vector autoregression (BVAR). We forecast five key macroeconomic variables, namely output, consumption, employment, housing price inflation and CPI-based inflation, using quarterly data. The results, based on the mean absolute error (MAE) and root mean squared error (RMSE) of one- to eight-quarter-ahead out-of-sample forecasts, indicate that the non-structural models outperform the structural model for all variables of interest across all horizons. Among the non-structural models, the small-scale BVAR performs better at short forecasting horizons, although the DFM shows a similar predictive ability. As the forecasting horizon grows, the DFM tends to improve over the other models and is better suited to forecasting key macroeconomic variables at longer horizons.
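For concreteness, the two evaluation criteria can be computed as below; the arrays are simulated placeholders, not the Hong Kong series used in the thesis:

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error of a forecast series."""
    return np.mean(np.abs(actual - forecast))

def rmse(actual, forecast):
    """Root mean squared error of a forecast series."""
    return np.sqrt(np.mean((actual - forecast) ** 2))

# Compare two hypothetical models' h-step-ahead forecasts, h = 1..8
rng = np.random.default_rng(0)
actual = rng.normal(size=(40, 8))        # 40 evaluation points, 8 horizons
model_a = actual + rng.normal(scale=0.5, size=actual.shape)
model_b = actual + rng.normal(scale=0.8, size=actual.shape)
for h in range(8):
    print(f"h={h+1}: "
          f"MAE A={mae(actual[:, h], model_a[:, h]):.3f} "
          f"B={mae(actual[:, h], model_b[:, h]):.3f}  "
          f"RMSE A={rmse(actual[:, h], model_a[:, h]):.3f} "
          f"B={rmse(actual[:, h], model_b[:, h]):.3f}")
```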
14

D'Agostino, Antonello. "Understanding co-movements in macro and financial variables." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210597.

Abstract:
Over the last years, the growing availability of large datasets and improvements in computational speed have further fostered research in the fields of both macroeconomic modeling and forecasting. A primary focus of these research areas is to improve model performance by exploiting the informational content of several time series. Increasing the dimension of macro models is indeed crucial for a detailed structural understanding of the economic environment, as well as for accurate forecasting. As a consequence, a new generation of large-scale macro models, based on the micro-foundations of a fully specified dynamic stochastic general equilibrium set-up, has become one of the most flourishing research areas both in central banks and in academia. At the same time, there has been a revival of forecasting methods dealing with many predictors, such as factor models. The central idea of factor models is to exploit co-movements among variables through a parsimonious econometric structure: a few underlying common shocks or factors explain most of the co-variation among variables, while the unexplained component of series movements is due to purely idiosyncratic dynamics. The generality of their framework makes factor models suitable for describing a broad variety of settings in macroeconomics and finance. The revival of factor models over recent years stems from important developments achieved by Stock and Watson (2002) and Forni, Hallin, Lippi and Reichlin (2000). These authors find the conditions under which some data averages become collinear to the space spanned by the factors when the cross-section dimension becomes large. Moreover, their factor specifications allow the idiosyncratic dynamics to be mildly cross-correlated (an effect referred to as the 'approximate factor structure' by Chamberlain and Rothschild, 1983), a situation empirically verified in many applications. These findings have relevant implications, the most important being that the use of a large number of series no longer represents a dimensional constraint; on the contrary, it helps to identify the factor space. This new generation of factor models has been applied in several areas of macroeconomics and finance, as well as for policy evaluation, and is consequently very likely to become a milestone in the literature on forecasting methods using many predictors. This thesis contributes to the empirical literature on factor models by proposing four original applications.
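To fix ideas, here is a minimal sketch of static factor extraction by principal components in the spirit of Stock and Watson (2002), applied to a simulated panel; the two-factor design and all names are illustrative assumptions:

```python
import numpy as np

def extract_static_factors(X, r):
    """Static factors as principal components of the standardized panel
    X (T x N); returns factors (T x r), loadings (N x r) and the share
    of panel variance explained by the r factors."""
    Z = (X - X.mean(0)) / X.std(0)
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(vals)[::-1][:r]      # largest eigenvalues first
    loadings = vecs[:, order]
    factors = Z @ loadings
    share = vals[order].sum() / vals.sum()
    return factors, loadings, share

# Simulated panel: 120 monthly observations on 50 series driven by 2 factors
rng = np.random.default_rng(0)
F = rng.normal(size=(120, 2))
L = rng.normal(size=(50, 2))
X = F @ L.T + rng.normal(scale=0.5, size=(120, 50))  # plus idiosyncratic noise
f, lam, share = extract_static_factors(X, 2)
print(f"variance share of 2 factors: {share:.2f}")
```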

In the first chapter of this thesis, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known: in the fundamental valuation of equity, the stock price equals the discounted future stream of expected dividends, and since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index such as the S&P500 can be misleading if used as a proxy for the informative content of the stock market as a whole. Despite the widespread wisdom of considering this index a leading variable, only part of the assets included in its composition lead the variables of interest; its forecasting performance might therefore be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure: they are the sum of a common part, driven by a few shocks common to all assets, and an idiosyncratic part, which is asset specific. The correlation function, computed on the common part of the series, is not affected by asset-specific dynamics and should provide information only on the series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content that such aggregates have in forecasting IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise is conducted as follows: first, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) in 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, for both the IP growth rate and CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification; repeating the simulation exercise yields only very small improvements in the MSFE statistics. Third, averages of the leading stock return series, in their respective sectors, are added as additional explanatory variables in the benchmark regression. Remarkable improvements are achieved with respect to the benchmark specification, especially for the one-year forecast horizon. Significant improvements are also achieved for the shorter forecast horizons when the leading series of the technology and energy sectors are used.

The second chapter of this thesis disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration with the advent of the EMU has been a central issue for investors and policy makers, and the number of studies on the integration and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for establishing developments in economic and financial markets. Measuring the extent of co-movements between European stock markets has therefore become, especially over the last years, a main concern both for policy makers, who want to best shape their policy responses, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns. So far, a literature dating back to Solnik (1974) identifies national factors as the main contributors to the co-variation among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration over the past years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased the importance of global determinants, such as industry determinants. However, somewhat puzzlingly, recent studies demonstrate that country sources are still very important and generally more important than industry ones. This chapter tries to cast some light on these conflicting results by proposing an econometric estimation strategy that is more flexible and better suited to disentangling and measuring the impact of global and country factors. Results point to a declining influence of national determinants and an increasing influence of industry ones; international influences remain the most important driving force of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario.

The third chapter presents a new stylized fact which can be helpful for discriminating among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but also extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter, and it is not specific to a single method but is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation in the Fed's Greenbook and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985; after this date, the hypothesis of equal predictive ability between naive random-walk forecasts and the predictions of those institutions is not rejected for all horizons, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The results here imply that the informational advantage of the Fed and those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model does better than "tossing a coin" beyond the first-quarter horizon, implying that on average uninformed economic agents can effectively anticipate future macroeconomic developments. On the other hand, econometric models and economists' judgement remain quite helpful for forecasts over the very short horizon relevant for conjunctural analysis. Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output, but the majority of studies are based on full-sample periods. The main findings of the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985: long time series attach a far larger weight to the earlier sub-sample, which is characterized by larger volatility of inflation and output. The results also suggest that some caution should be used in evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.

The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component; in particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions: first, they estimate the common factors by Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio; second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors. Specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain, which allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements help forecasting in a finite sample is, however, an empirical question on which the literature has not yet reached a consensus: Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component and not on the dynamic restrictions), while Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The dataset, borrowed from Stock and Watson (2002), consists of 146 monthly observations for the US economy spanning 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast, and other non-core aspects of the models are investigated: robustness with respect to the choice of the number of factors and to variable transformations. Finally, the chapter examines the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to forecasting performance and to discuss auxiliary issues; hopefully this may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), results show that the efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts; but, in contrast to Boivin and Ng (2005), it is shown that the dynamic restrictions imposed by the procedure of Forni et al. (2005) are not harmful for predictability. The main conclusion is that the two methods have a similar performance and produce highly collinear forecasts.


Doctorate in Economic Sciences, Economics orientation

15

Fantinatti, Marcos da Costa. "Modelo de equilíbrio geral estocástico e o mercado de trabalho brasileiro." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/12/12138/tde-25022016-112933/.

Abstract:
The three articles of this thesis focus on the labor market. The first article calculates the probability that a worker leaves his job and the probability that an unemployed person finds a job in Brazil, using the methodology developed by Shimer (2012). The aim is to determine which of these factors is the most important for explaining unemployment rate fluctuations. The results show that the probability of an unemployed worker finding a job is more important for explaining the dynamics of the unemployment rate; the literature has commonly found the opposite result for Brazil. In the second article, we log-linearize and estimate the model of Christiano, Eichenbaum and Trabandt (2013) for Brazil. This model differs from traditional New Keynesian models in that it has a search structure in the labor market. The idea is to compare this model with the traditional one with sticky wages and sticky prices, and to analyze whether the search structure in the labor market can substitute for some traditional rigidities in propagating shocks. The impulse response functions to a contractionary monetary policy shock show that the model explains the dynamics normally found in GDP, inflation and the unemployment rate. Furthermore, the estimation shows that, in general, prices are readjusted less frequently than the frequency estimated by New Keynesian models with sticky wages and sticky prices. Moreover, when the rigidities (capital utilization and the working capital channel) are eliminated, this model does not properly explain the inertial and persistent dynamics of macroeconomic variables such as GDP and inflation. Finally, in the last article, we estimate the Christiano, Eichenbaum and Trabandt (2013) model for the United States, but adopt a different estimation strategy: we log-linearize the model and estimate it with Bayesian methods, for two different periods, with data up to 2008, as in the original article, and up to 2014. The aim is to compare our results with those of the original model. When the model is estimated with data up to 2008, the results show that the estimates are in line with the values found in the literature and, in general, not too far from the values estimated in the original article. However, the estimated parameters point to a model in which prices are somewhat more rigid, consumption habits are stronger and the monetary policy rule is somewhat less inertial than in the original article, although the monetary authority reacts much more to inflation than to output, as in Christiano, Eichenbaum and Trabandt (2013). When we consider the full sample, up to the end of 2014, the estimated model retains stickier prices and a less inertial monetary policy rule relative to the original model, and the more recent data affect the estimated values for the labor market variables more markedly. The impulse response functions reflect this lower inertia of monetary policy and, overall, follow the expected trajectories.
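As an illustration of the Shimer-style calculation used in the first article, the sketch below recovers job-finding and separation probabilities from stocks of unemployed, short-term unemployed and employed workers; the exact discrete-time corrections in the article may differ, and all numbers are hypothetical:

```python
import numpy as np

def shimer_probabilities(u, u_short, e):
    """Discrete-time Shimer-style transition probabilities.
    u[t]: unemployed at t; u_short[t]: unemployed for less than one period
    at t (new entrants into unemployment); e[t]: employed at t.
    Job-finding probability: F_t = 1 - (u[t+1] - u_short[t+1]) / u[t].
    Separation probability, with the usual mid-period correction for workers
    who both lose and find a job within the period:
    s_t = u_short[t+1] / (e[t] * (1 - F_t / 2))."""
    F = 1.0 - (u[1:] - u_short[1:]) / u[:-1]
    s = u_short[1:] / (e[:-1] * (1.0 - F / 2.0))
    return F, s

# Hypothetical monthly stocks (thousands of workers)
u = np.array([800.0, 820.0, 790.0, 810.0])
u_short = np.array([250.0, 260.0, 240.0, 255.0])
e = np.array([9200.0, 9180.0, 9210.0, 9190.0])
F, s = shimer_probabilities(u, u_short, e)
print(F.round(3), s.round(4))
```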
16

Cimadomo, Jacopo. "Essays on systematic and unsystematic monetary and fiscal policies." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210474.

Abstract:
The active use of macroeconomic policies to smooth economic fluctuations and, as a consequence, the stance that policymakers should adopt over the business cycle, remain controversial issues in the economic literature.

In the light of the dramatic experience of the early 1930s' Great Depression, Keynes (1936) argued that the market mechanism could not be relied upon to recover spontaneously from a slump, and advocated counter-cyclical public spending and monetary policy to stimulate demand. Although the Keynesian doctrine largely influenced policymaking during the two decades following World War II, it began to be seriously challenged in several directions from the start of the 1970s. The introduction of rational expectations within macroeconomic models implied that aggregate demand management could not stabilize the economy's responses to shocks (see in particular Sargent and Wallace (1975)). According to this view, rational agents foresee the effects of the implemented policies, and wage and price expectations are revised upwards accordingly; therefore, real wages and money balances remain constant, and so does output. Within such a conceptual framework, only unexpected policy interventions would have some short-run effects upon the economy. The real business cycle (RBC) theory, pioneered by Kydland and Prescott (1982), offered an alternative explanation of the nature of fluctuations in economic activity, viewed as reflecting the efficient responses of optimizing agents to exogenous sources of fluctuations outside the direct control of policymakers. The normative implication was that there should be no role for economic policy activism: fiscal and monetary policy should be acyclical. The latest generation of New Keynesian dynamic stochastic general equilibrium (DSGE) models builds on rigorous foundations in intertemporal optimizing behavior by consumers and firms inherited from the RBC literature, but incorporates some frictions in the adjustment of nominal and real quantities in response to macroeconomic shocks (see Woodford (2003)). In such a framework, not only may policy "surprises" have an impact on economic activity; the way policymakers "systematically" respond to exogenous sources of fluctuation also plays a fundamental role in affecting economic activity, thereby rekindling interest in the use of counter-cyclical stabilization policies to fine-tune the business cycle.

Yet, despite impressive advances in economic theory and econometric techniques, there are no definitive answers on the systematic stance policymakers should follow, or on the effects of macroeconomic policies upon the economy. Against this background, the present thesis attempts to inspect the interrelations between macroeconomic policies and economic activity from novel angles. Three contributions are proposed.

In the first Chapter, I show that relying on the information actually available to policymakers when budgetary decisions are taken is of fundamental importance for the assessment of the cyclical stance of governments. In the second, I explore whether the effectiveness of fiscal shocks in spurring economic activity has declined since the beginning of the 1970s. In the third, the impact of systematic monetary policies on U.S. industrial sectors is investigated.

In the existing literature, empirical assessments of the historical stance of policymakers over the economic cycle have mainly been drawn from the estimation of "reduced-form" policy reaction functions (see in particular Taylor (1993) and Galí and Perotti (2003)). Such rules typically relate a policy instrument (a reference short-term interest rate or an indicator of discretionary fiscal policy) to a set of explanatory variables (notably inflation, the output gap and, as far as fiscal policy is concerned, the debt-GDP ratio). Although these policy rules can be seen as simple approximations of what would be derived from an explicit optimization problem solved by social planners (see Kollmann (2007)), they have received considerable attention since they have proved to track the behavior of central banks and fiscal policymakers relatively well. Typically, revised data, i.e. observations available to the econometrician when the study is carried out, are used in the estimation of such policy reaction functions. However, data available in "real time" to policymakers may end up being remarkably different from what is observed ex post. Orphanides (2001), in an innovative and thought-provoking paper on U.S. monetary policy, challenged the way policy evaluation had been conducted until then by showing that unrealistic assumptions about the timeliness of data availability may yield misleading descriptions of historical policy.
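As a concrete example of such a reduced-form reaction function, here is a minimal OLS sketch of a Taylor-type rule; in the real-time critique discussed above, the same regression would be re-run on the data vintages actually available to policymakers. All data and names here are simulated assumptions:

```python
import numpy as np

def estimate_policy_rule(rate, inflation, output_gap):
    """OLS estimate of a Taylor-type reaction function
    i_t = a + b * pi_t + c * gap_t, the kind of reduced-form rule
    described above."""
    X = np.column_stack([np.ones_like(rate), inflation, output_gap])
    coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return dict(zip(["const", "inflation", "output_gap"], coef))

# Simulated quarterly data roughly consistent with a Taylor (1993) rule
rng = np.random.default_rng(0)
pi = 2.0 + rng.normal(scale=1.0, size=120)
gap = rng.normal(scale=2.0, size=120)
i = 1.0 + 1.5 * pi + 0.5 * gap + rng.normal(scale=0.25, size=120)
print(estimate_policy_rule(i, pi, gap))
```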

In the spirit of Orphanides (2001), in the first Chapter of this thesis I reconsider how the intentional cyclical stance of fiscal authorities should be assessed. Importantly, in the framework of fiscal policy rules, not only are variables such as potential output and the output gap subject to measurement errors, but so is the main discretionary "operating instrument" in the hands of governments: the structural budget balance, i.e. the headline government balance net of the effects due to automatic stabilizers. In fact, the actual realization of planned fiscal measures may depend on several factors (such as the growth rate of GDP, the implementation lags that often follow the adoption of many policy measures, and others) outside the direct and full control of fiscal authorities. Hence, there might be sizeable differences between discretionary fiscal measures as planned in the past and what is observed ex post. Notably, this does not apply to monetary policy, since central bankers can control their operating interest rates with great accuracy.

When the historical behavior of fiscal authorities is analyzed from a real-time perspective, it emerges that the intentional stance has been counter-cyclical, especially during expansions, in the main OECD countries throughout the last thirteen years. This is at odds with findings based on revised data, which generally point to pro-cyclicality (see for example Gavin and Perotti (1997)). It is shown that empirical correlations among revision errors and other second-order moments make it possible to predict the size and the sign of the bias incurred in estimating the intentional stance of the policy when revised data are (mistakenly) used. In addition, formal tests, based on a refinement of Hansen (1999), do not reject the hypothesis that the intentional reaction of fiscal policy to the cycle is characterized by two regimes: one counter-cyclical, when output is above its potential level, and the other acyclical, in the opposite case. On the contrary, the use of revised data does not allow the identification of any threshold effect.
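A two-regime specification in the spirit of the threshold tests mentioned above could take the following form (illustrative notation, not the thesis's exact equation):

```latex
b_t = \alpha + \beta_1 x_t \,\mathbf{1}\{x_t > \tau\}
            + \beta_2 x_t \,\mathbf{1}\{x_t \le \tau\}
            + \gamma d_{t-1} + \eta_t
```

where τ is the estimated threshold; the reported real-time results correspond to β₁ > 0 (counter-cyclical above potential) and β₂ ≈ 0 (acyclical below potential).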

The second and third Chapters of this thesis are devoted to the exploration of the impact of fiscal and monetary policies upon the economy.

Over recent years, two approaches have mainly been followed by practitioners for the estimation of the effects of macroeconomic policies on real activity. On the one hand, calibrated and estimated DSGE models make it possible to trace out the economy's responses to policy disturbances within an analytical framework derived from solid microeconomic foundations. On the other, vector autoregressive (VAR) models continue to be widely used since they have proved to fit macro data particularly well, although they cannot fully serve to inspect structural interrelations among economic variables.

Yet, typical DSGE and VAR models are designed to handle a limited number of variables and are not suitable for addressing economic questions potentially involving a large amount of information. In a DSGE framework, in fact, identifying aggregate shocks and their propagation mechanisms under a plausible set of theoretical restrictions becomes a thorny issue when many variables are considered. As for VARs, estimation problems may arise when models are specified with a large number of indicators (although recent contributions suggest that large-scale Bayesian VARs perform surprisingly well in forecasting; see in particular Banbura, Giannone and Reichlin (2007)). As a consequence, the growing popularity of factor models as effective econometric tools that summarize large amounts of information in a parsimonious and flexible manner may be explained not only by their usefulness in deriving business cycle indicators and in forecasting (see for example Reichlin (2002) and D'Agostino and Giannone (2006)), but also, thanks to recent developments, by their ability to evaluate the response of economic systems to identified structural shocks (see Giannone, Reichlin and Sala (2002) and Forni, Giannone, Lippi and Reichlin (2007)). In parallel, some attempts have been made to combine the rigor of DSGE models and the tractability of VARs with the advantages of factor analysis (see Boivin and Giannoni (2006) and Bernanke, Boivin and Eliasz (2005)).
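To fix ideas, the basic step shared by the factor-model literature cited above is the extraction of a few common factors from a large standardized panel, typically via principal components. A minimal sketch, with simulated data standing in for a real macro panel (all names and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 120, 3                      # time periods, variables, factors

# Simulated panel: r common factors loading on N series, plus idiosyncratic noise
F = rng.standard_normal((T, r))            # latent common factors
L = rng.standard_normal((N, r))            # factor loadings
X = F @ L.T + rng.standard_normal((T, N))

# Standardize each series, then take principal components as factor estimates
Z = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(Z.T @ Z / T)
V = eigvec[:, ::-1][:, :r]                 # eigenvectors of the r largest eigenvalues
F_hat = Z @ V                              # estimated factors (up to rotation)

# Common component: fitted values from regressing each series on the factors
B = np.linalg.lstsq(F_hat, Z, rcond=None)[0]
common = F_hat @ B
```

Structural applications of the FAVAR type then place a small VAR on the estimated factors together with the policy instrument and identify shocks at that level.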

The second Chapter of this thesis, based on joint work with Agnès Bénassy-Quéré, presents an original study combining factor and VAR analysis in an encompassing framework to investigate how "unexpected" and "unsystematic" variations in taxes and government spending feed through the economy in the home country and abroad. The domestic impact of fiscal shocks in Germany, the U.K. and the U.S. and cross-border fiscal spillovers from Germany to seven European economies are analyzed. In addition, the time evolution of domestic and cross-border tax and spending multipliers is explored. In fact, the way fiscal policy impacts on domestic and foreign economies depends on several factors, possibly changing over time. In particular, the presence of excess capacity, accommodating monetary policy, distortionary taxation and liquidity-constrained consumers plays a prominent role in affecting how fiscal policies stimulate economic activity in the home country. The impact on foreign output crucially depends on the importance of trade links, on real exchange rates and, in a monetary union, on the sensitivity of foreign economies to the common interest rate. It is well documented that the last thirty years have witnessed frequent changes in the economic environment. For instance, in most OECD countries, the monetary policy stance became less accommodating in the 1980s compared to the 1970s, and more accommodating again in the late 1990s and early 2000s. Moreover, financial markets have been heavily deregulated. Hence, fiscal policy might have lost (or gained) power as a stimulating tool in the hands of policymakers. Importantly, the issue of the cross-border transmission of fiscal policy decisions is of the utmost relevance in the framework of the European Monetary Union, which explains why the debate on fiscal policy coordination has received so much attention since the adoption of the single currency (see Ahearne, Sapir and Véron (2006) and European Commission (2006)).

It is found that over the period 1971 to 2004 tax shocks have generally been more effective in spurring domestic output than government spending shocks. Interestingly, the inclusion of common factors representing global economic phenomena yields smaller multipliers, reconciling, at least for the U.K., the evidence from large-scale macroeconomic models, which generally find feeble multipliers (see e.g. the European Commission's QUEST model), with that from a prototypical structural VAR, which points to stronger effects of fiscal policy. When the estimation is performed recursively over samples of seventeen years of data, it emerges that GDP multipliers have dropped drastically from the early 1990s on, especially in Germany (tax shocks) and in the U.S. (both tax and government spending shocks). Moreover, the conduct of fiscal policy seems to have become less erratic, as documented by a lower variance of fiscal shocks over time, and this might contribute to explaining why business cycles have shown less volatility in the countries under examination.

Expansionary fiscal policies in Germany do not generally have beggar-thy-neighbor effects on other European countries. In particular, our results suggest that tax multipliers have been positive but vanishing for neighboring countries (France, Italy, the Netherlands, Belgium and Austria), and weak and mostly not significant for more remote ones (the U.K. and Spain). Cross-border government spending multipliers are found to be uniformly weak across all the subsamples considered.

Overall, these findings suggest that fiscal "surprises", in the form of unexpected reductions in taxation and expansions in government consumption and investment, have become progressively less successful in stimulating economic activity at the domestic level, indicating that, in the framework of the European Monetary Union, policymakers can only marginally rely on this discretionary instrument as a substitute for national monetary policies.

The objective of the third Chapter is to inspect the role of monetary policy in the U.S. business cycle. In particular, the effects of "systematic" monetary policies on several industrial sectors are investigated. The focus is on the systematic, or endogenous, component of monetary policy (i.e. the one related to economic activity in a stable and predictable way) for three main reasons. First, endogenous monetary policies are likely to have sizeable real effects if agents' expectations are not perfectly rational and if there are some nominal and real frictions in the market. Second, as widely documented, the variability of the monetary instrument and of the main macro variables is only marginally explained by monetary "shocks", defined as unexpected and exogenous variations in monetary conditions. Third, monetary shocks can simply be interpreted as measurement errors (see Christiano, Eichenbaum and Evans (1998)). Hence, the systematic component of monetary policy is likely to have played a fundamental role in affecting business cycle fluctuations.

The strategy to isolate the impact of systematic policies relies on a counterfactual experiment within a (calibrated or estimated) macroeconomic model. As a first step, a macroeconomic shock to which monetary policy is likely to respond is selected, and its effects upon the economy simulated. Then, the impact of such a shock is evaluated under a "policy-inactive" scenario, assuming that the central bank does not respond to it. Finally, by comparing the responses of the variables of interest under these two scenarios, some evidence on the sensitivity of the economic system to the endogenous component of the policy can be drawn (see Bernanke, Gertler and Watson (1997)).
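A minimal sketch of such a counterfactual, in a toy two-equation model with an interest-rate rule (the structure and all parameter values are illustrative assumptions, not estimates from the chapter):

```python
import numpy as np

def irf(policy_on: bool, horizon: int = 12, shock: float = 1.0):
    """Response of output to a non-policy shock, with the rule active or shut off."""
    rho_y, b, phi = 0.8, 0.4, 1.5          # output persistence, rate sensitivity, rule coefficient
    y, i = np.zeros(horizon), np.zeros(horizon)
    y[0] = shock                            # shock hits output on impact
    for t in range(1, horizon):
        i[t] = phi * y[t - 1] if policy_on else 0.0   # systematic policy response
        y[t] = rho_y * y[t - 1] - b * i[t]
    return y

baseline = irf(policy_on=True)              # central bank responds
no_policy = irf(policy_on=False)            # "policy-inactive" scenario
systematic_effect = no_policy - baseline    # contribution of the endogenous component
```

The difference between the two impulse responses isolates how much of the propagation of the shock is due to the systematic policy reaction rather than to the shock itself.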

This kind of exercise is first proposed within a stylized DSGE model for which the analytical solution can be derived. However, as argued, large-scale multi-sector DSGE models can be solved only numerically, implying that the proposed experiment cannot be carried out in that setting. Moreover, the estimation of DSGE models becomes a thorny issue when many variables are incorporated (see Canova and Sala (2007)). For these reasons, a less "structural", but more tractable, approach is followed, in which a minimal amount of identifying restrictions is imposed. In particular, a factor model econometric approach is adopted (see in particular Giannone, Reichlin and Sala (2002) and Forni, Giannone, Lippi and Reichlin (2007)). In this framework, I develop a technique to perform the counterfactual experiment needed to assess the impact of systematic monetary policies.

It is found that 2- and 3-digit SIC U.S. industries are characterized by very heterogeneous degrees of sensitivity to the endogenous component of the policy. Notably, the industries showing the strongest sensitivities are those producing durable goods and metallic materials. Non-durable goods producers and the food, textile and lumber industries are the least affected. In addition, it is highlighted that industrial sectors adjusting prices relatively infrequently are the most "vulnerable" ones. In fact, firms in this group are likely to adjust quantities, rather than prices, following a shock that hits the economy positively. Finally, it emerges that sectors characterized by greater recourse to external sources to finance investment, and sectors investing relatively more in new plant and machinery, are the most affected by endogenous monetary actions.
Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
17

Bauknecht, Klaus Dieter. "A macroeconometric policy model of the South African economy based on weak rational expectations with an application to monetary policy." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51575.

Full text
Abstract:
Dissertation (PhD) -- University of Stellenbosch, 2000.
ENGLISH ABSTRACT: The Lucas critique states that if expectations are not explicitly dealt with, conventional econometric models are inappropriate for policy analyses, as their coefficients are not policy invariant. The inclusion of rational expectations in conventional model building has been the most common response to this critique. The concept of rational expectations has received several interpretations. In numerous studies, these expectations are associated with model consistent expectations in the sense that expectations and model solutions are identical. To derive a solution, these models require unique algorithms and assumptions regarding their terminal state, in particular when forward-looking expectations are present. An alternative that avoids these issues is the concept of weak rational expectations, which emphasises that expectation errors should not be systematic. Expectations are therefore formed on the basis of an underlying structure, but full knowledge of the model is not essential. The accommodation of this type of rational expectations is accomplished by means of an explicit specification of an expectations equation consistent with the macroeconometric model's broad structure. The estimation of coefficients relating to expectations is achieved through an Instrumental Variable approach. In South Africa, monetary policy has been consistent and transparent in line with the recommendations of the De Kock Commission. This allows the modelling of the policy instrument of the South African Reserve Bank, i.e. the Bank rate, by means of a policy reaction function. Given this transparency in monetary policy, the accommodation of expectations of the Bank rate is essential in modelling the full impact of monetary policy and in avoiding the Lucas critique. This is accomplished through weak rational expectations, based on the reaction function of the Reserve Bank. The accommodation of expectations of a policy instrument also allows the modelling of anticipated and unanticipated policies, as alternative assumptions regarding the expectations process can be made during simulations. Conventional econometric models emphasise the demand side of the economy, with equations focusing on private consumption, investment, exports and imports and possibly changes in inventories. In this study, particular emphasis in the model specification is also placed on the impact of monetary policy on government debt and debt servicing costs. Other dimensions of the model include the modelling of the money supply and balance of payments, short- and long-term interest rates, domestic prices, the exchange rate, the wage rate and employment, as well as weakly rational expectations of inflation and the Bank rate. The model has been specified and estimated by using concepts such as cointegration and Error Correction modelling. Numerous tests, including the assessment of the Root Mean Square Percentage Error, have been employed to test the adequacy of the model. Similarly, tests are carried out to ensure weak rational expectations. Numerous simulations are carried out with the model and the results are compared to relevant alternative studies. The simulation results show that the reduction of inflation by means of monetary policy alone could impose severe costs on the economy in terms of real sector volatility.
AFRIKAANSE OPSOMMING (translated): The Lucas critique asserts that conventional econometric models cannot be used for policy analysis, since they make no provision for the change in expectations when policy adjustments are made. The inclusion of rational expectations in conventional econometric models is the most common response to the Lucas critique. To facilitate the practical inclusion of rational expectations in econometric model building, this study makes use of so-called "weak rational expectations", which only require that expectation errors not be systematic. The coefficients of the expectations variables are estimated by means of the Instrumental Variables approach. Monetary policy in South Africa has historically been consistent and transparent, in line with the recommendations of the De Kock Commission. The policy instrument of the South African Reserve Bank, namely the Bank rate, can consequently be modelled by means of a policy reaction function. To accommodate the Lucas critique, however, expectations of the Bank rate must be included when the full impact of monetary policy is modelled. This is achieved by including weak rational expectations based on the reaction function of the Reserve Bank. In this way the impact of anticipated and unanticipated policy adjustments can be simulated. Conventional econometric models emphasise the demand side of the economy, with equations for consumption, investment, imports, exports and possibly the change in inventories. This study also places emphasis on the impact of monetary policy on government debt and the cost of government debt. Other aspects modelled are the money supply and balance of payments, short- and long-term interest rates, domestic prices, the exchange rate, wage rates and employment, as well as weakly rational expectations of inflation and the Bank rate. The model is specified and estimated using cointegration and long- and short-run equations. The usual tests were carried out to assess the adequacy of the model. Several simulations were performed with the model and the results were compared with other relevant studies. The conclusion is that reducing inflation by means of monetary policy alone could impose a heavy burden on the economy in terms of real-sector volatility.
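
Schematically, the weak-rational-expectations requirement and its estimation can be summarized as follows (the notation is illustrative, not the thesis's own):

```latex
% Expected Bank rate formed from an auxiliary equation consistent with the
% Reserve Bank's reaction function; weak rationality only requires the
% expectation error to be orthogonal to the information set z_t:
R^{e}_{t+1} = z_t'\delta, \qquad
\mathbb{E}\!\left[(R_{t+1} - R^{e}_{t+1})\, z_t\right] = 0
```

This orthogonality condition is exactly the moment condition an Instrumental Variables estimator exploits, which is why full knowledge of the model is not needed to form expectations.
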
APA, Harvard, Vancouver, ISO, and other styles
18

Bañbura, Marta. "Essays in dynamic macroeconometrics." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210294.

Full text
Abstract:
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses a challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP” is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how this is related to their timeliness.

The soft data include surveys and financial indicators and reflect market expectations. They are usually promptly available. In contrast, the hard indicators on real activity directly measure certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures in order to assess the role of individual or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data” is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge”, but can include e.g. mixed frequency or short history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only since recently. We adopt the maximum likelihood approach which, apart from the flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied for small factor models by e.g. Geweke (1977), Sargent and Sims (1977) or Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model, in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the

latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of the euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and short history monthly series like the Purchasing Managers' surveys on the forecast.
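The model underlying this approach can be cast in the standard state-space form (sketched here in its simplest version, without the serial-correlation augmentation of the idiosyncratic component):

```latex
x_t = \Lambda f_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma_\varepsilon)\\
f_t = A f_{t-1} + u_t, \qquad u_t \sim N(0, \Sigma_u)
```

Missing observations in x_t (ragged edges, mixed frequencies, short histories) are handled naturally by the Kalman filter in the E-step, while the M-step updates Λ, A and the variance parameters from the smoothed moments.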

The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can also be relevant for structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but that when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.
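For reference, the Litterman (Minnesota) prior they build on centres each equation on a random walk and tightens the prior variances with the lag length; in a stylized version (one common simplification of the full prior),

```latex
\mathbb{E}\big[(A_\ell)_{ij}\big] =
\begin{cases} 1 & i = j,\ \ell = 1\\ 0 & \text{otherwise} \end{cases}
\qquad
\operatorname{Var}\big[(A_\ell)_{ij}\big] = \frac{\lambda^2}{\ell^2}\,\frac{\sigma_i^2}{\sigma_j^2}
```

with the overall tightness λ chosen as a decreasing function of the number of variables, so that larger systems are shrunk more.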

The fourth chapter entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales” proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) or Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. This chapter investigates empirically which frequency bands, and for which variables, are most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract the different frequency components of a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application in the multivariate out-of-sample forecast context is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
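A minimal sketch of the kind of frequency-band decomposition involved, using the PyWavelets package on a simulated series (the variable names, wavelet choice and level are illustrative):

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
inflation = np.cumsum(rng.standard_normal(512)) * 0.1   # stand-in for an inflation series

# Multilevel discrete wavelet decomposition: coeffs[0] holds the coarsest
# approximation (low frequencies); the rest are detail coefficients by scale
coeffs = pywt.wavedec(inflation, wavelet="db4", level=5)

def band_component(coeffs, keep):
    """Reconstruct the component in one frequency band by zeroing the others."""
    parts = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(parts, wavelet="db4")

low_freq = band_component(coeffs, keep=0)   # candidate regressor at long horizons
```

Each reconstructed band can then enter a forecasting regression separately, which is how scale-specific information from money, prices and GDP is exploited.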


Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
19

Lenza, Michèle. "Essays on monetary policy, saving and investment." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210659.

Full text
Abstract:
This thesis addresses three relevant macroeconomic issues: (i) why Central Banks behave so cautiously compared to optimal theoretical benchmarks, (ii) whether monetary variables add information about future Euro Area inflation to a large amount of non-monetary variables, and (iii) why national saving and investment are so correlated in OECD countries in spite of the high degree of integration of international financial markets.

The process of innovation in economic theory and in the statistical analysis of data witnessed over the last thirty years has greatly enriched the toolbox available to macroeconomists. Two aspects of this process are particularly noteworthy for addressing the issues in this thesis: the development of macroeconomic dynamic stochastic general equilibrium models (see Woodford, 1999b for a historical perspective) and of techniques that make it possible to handle large data sets in a parsimonious and flexible manner (see Reichlin, 2002 for a historical perspective).

Dynamic stochastic general equilibrium (DSGE) models provide the appropriate tools to evaluate the macroeconomic consequences of policy changes. These models, by exploiting modern intertemporal general equilibrium theory, aggregate the optimal responses of individuals as consumers and firms, identifying the aggregate shocks and their propagation mechanisms through the restrictions imposed by optimizing individual behavior. Such a modelling strategy, uncovering economic relationships invariant to a change in policy regimes, provides a framework to analyze the effects of economic policy that is robust to the Lucas critique (see Lucas, 1976). The early attempts to explain business cycles starting from microeconomic behavior suggested that economic policy should play no role, since business cycles reflected the efficient response of economic agents to exogenous sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982 and, more recently, King and Rebelo, 1999). This view was challenged by several empirical studies showing that the adjustment mechanisms of variables at the heart of macroeconomic propagation mechanisms, like prices and wages, are not well represented by efficient responses of individual agents in frictionless economies (see, for example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004 and Dhyne et al. 2004). Hence, macroeconomic models currently incorporate some sources of nominal and real rigidities in the DSGE framework and allow the study of optimal policy reactions to inefficient fluctuations stemming from frictions in macroeconomic propagation mechanisms.

Against this background, the first chapter of this thesis sets up a DSGE model in order to analyze optimal monetary policy in an economy with sectorial heterogeneity in the frequency of price adjustments. Price setters are divided in two groups: those subject to Calvo-type nominal rigidities and those able to change their prices at each period. Sectorial heterogeneity in price setting behavior is a relevant feature of real economies (see, for example, Bils and Klenow, 2004 for the US and Dhyne, 2004 for the Euro Area); neglecting it would lead to an understatement of the heterogeneity in the transmission mechanisms of economy-wide shocks. In this framework, Aoki (2001) shows that a Central Bank maximizing social welfare should stabilize only inflation in the sector where prices are sticky (hereafter, core inflation). Since complete stabilization is the only true objective of the policymaker in Aoki (2001) and, hence, is not only desirable but also implementable, the equilibrium real interest rate in the economy is equal to the natural interest rate irrespective of the degree of heterogeneity that is assumed. This would lead one to conclude that stabilizing core inflation rather than overall inflation does not imply any observable difference in the aggressiveness of policy behavior. While maintaining the assumption of sectorial heterogeneity in the frequency of price adjustments, this chapter adds non-negligible transaction frictions to the model economy in Aoki (2001). As a consequence, the social-welfare-maximizing monetary policymaker faces a trade-off among the stabilization of core inflation, the economy-wide output gap and the nominal interest rate. This feature reflects the trade-offs between conflicting objectives faced by actual policymakers. The chapter shows that the existence of this trade-off makes the aggressiveness of the monetary policy reaction dependent on the degree of sectorial heterogeneity in the economy. In particular, in the presence of sectorial heterogeneity in price adjustments, Central Banks are much more likely to behave less aggressively than in an economy where all firms face nominal rigidities. Hence, the chapter concludes that the excessive caution in the conduct of monetary policy shown by actual Central Banks (see, for example, Rudebusch and Svensson, 1999 and Sack, 2000) might not represent sub-optimal behavior but, on the contrary, might be the optimal monetary policy response in the presence of relevant sectorial dispersion in the frequency of price adjustments.
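In reduced form, the policy problem described here can be summarized by a loss function of the following type (an illustrative stylization, not the chapter's derived welfare criterion):

```latex
\mathcal{L} = \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t
\left( \pi_{c,t}^2 + \lambda_x x_t^2 + \lambda_i i_t^2 \right)
```

where π_{c,t} is core (sticky-sector) inflation and the weights λ_x and λ_i capture the output-gap and interest-rate stabilization motives introduced by the transaction frictions; in Aoki's frictionless benchmark the interest-rate term is absent and the trade-off disappears.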

DSGE models are proving useful also in empirical applications, and recently efforts have been made to incorporate large amounts of information in their framework (see Boivin and Giannoni, 2006). However, the typical DSGE model still relies on a handful of variables. Partly, this reflects the fact that, as the number of variables increases, the specification of a plausible set of theoretical restrictions identifying aggregate shocks and their propagation mechanisms becomes cumbersome. On the other hand, several questions in macroeconomics require the study of a large amount of variables. Among others, two examples related to the second and third chapters of this thesis can help to understand why. First, policymakers analyze a large quantity of information to assess the current and future stance of their economies and, because of model uncertainty, do not rely on a single modelling framework. Consequently, macroeconomic policy can be better understood if the econometrician relies on a large set of variables without imposing too much a priori structure on the relationships governing their evolution (see, for example, Giannone et al. 2004 and Bernanke et al. 2005). Moreover, the process of integration of goods and financial markets implies that the source of aggregate shocks is increasingly global, requiring, in turn, the study of their propagation through cross-country links (see, among others, Forni and Reichlin, 2001 and Kose et al. 2003). A priori, country-specific behavior cannot be ruled out, and many of the homogeneity assumptions that are typically embodied in open macroeconomic models to keep them tractable are rejected by the data. Summing up, in order to deal with such issues, we need modelling frameworks able to treat a large amount of variables in a flexible manner, i.e. without pre-committing to too many a priori restrictions likely to be rejected by the data. The large extent of comovement among wide cross-sections of economic variables suggests the existence of few common sources of fluctuations (Forni et al. 2000 and Stock and Watson, 2002) around which individual variables may display specific features: a shock to the world price of oil, for example, hits oil exporters and importers with different sign and intensity, while global technological advances can affect some countries before others (Giannone and Reichlin, 2004). Factor models mainly rely on the identification assumption that the dynamics of each variable can be decomposed into two orthogonal components - common and idiosyncratic - and provide a parsimonious tool for the analysis of the aggregate shocks and their propagation mechanisms in a large cross-section of variables. In fact, while the idiosyncratic components are poorly cross-sectionally correlated, driven by shocks specific to a variable or a group of variables or by measurement error, the common components capture the bulk of the cross-sectional correlation and are driven by few shocks that affect, through variable-specific factor loadings, all items in a panel of economic time series. Focusing on the latter components yields useful insights on the identity and propagation mechanisms of the aggregate shocks underlying a large amount of variables. The second and third chapters of this thesis exploit this idea.
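Formally, the identification assumption just described is that each series in the panel splits as

```latex
x_{it} = \underbrace{\lambda_i' f_t}_{\text{common}} \;+\; \underbrace{\xi_{it}}_{\text{idiosyncratic}}
```

with a few factors f_t driving the bulk of the cross-sectional comovement through the variable-specific loadings λ_i, and the ξ_{it} only weakly correlated across series.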

The second chapter deals with the issue of whether monetary variables help to forecast inflation in the Euro Area harmonized index of consumer prices (HICP). Policymakers form their views on the economic outlook by drawing on large amounts of potentially relevant information. Indeed, the monetary policy strategy of the European Central Bank acknowledges that many variables and models can be informative about future Euro Area inflation. A peculiarity of this strategy is that it assigns to monetary information the role of providing insights into the medium- to long-term evolution of prices, while a wide range of alternative non-monetary variables and models are employed in order to form a view on the short term and to cross-check the inference based on monetary information. However, both the academic literature and the practice of the leading Central Banks other than the ECB do not assign such a special role to monetary variables (see Gali et al. 2004 and references therein). Hence, the debate on whether money really provides relevant information for the inflation outlook in the Euro Area is still open. Specifically, this chapter addresses the issue of whether money provides useful information about future inflation beyond what is contained in a large amount of non-monetary variables. It shows that a few aggregates of the data explain a large amount of the fluctuations in a large cross-section of Euro Area variables. This makes it possible to postulate a factor structure for the large panel of variables at hand and to aggregate it into a few synthetic indexes that still retain the salient features of the large cross-section. The database is split into two large blocks of variables: non-monetary (baseline) and monetary variables. Results show that baseline variables provide a satisfactory predictive performance, improving on the best univariate benchmarks in the period 1997-2005 at all horizons between 6 and 36 months. Remarkably, monetary variables provide an appreciable improvement on the performance of baseline variables at horizons above two years. However, the analysis of the evolution of the forecast errors reveals that most of the gains obtained with baseline and monetary variables relative to the univariate non-forecastability benchmarks are realized in the first part of the prediction sample, up to the end of 2002, which casts doubts on the current forecastability of inflation in the Euro Area.

The third chapter is based on joint work with Domenico Giannone and gives empirical foundation to the general equilibrium explanation of the Feldstein-Horioka puzzle. Feldstein and Horioka (1980) found that domestic saving and investment in OECD countries strongly comove, contrary to the idea that high capital mobility should allow countries to seek the highest returns in global financial markets and, hence, imply a correlation between national saving and investment closer to zero than to one. Moreover, capital mobility has strongly increased since the publication of Feldstein and Horioka's seminal paper, while the association between saving and investment does not seem to have decreased comparably. Through general equilibrium mechanisms, the presence of global shocks might rationalize the correlation between saving and investment. In fact, global shocks, affecting all countries, tend to create imbalances on global capital markets, causing offsetting movements in the global interest rate, and can generate the observed correlation across national saving and investment rates. However, previous empirical studies (see Ventura, 2003) that have controlled for the effects of global shocks in the context of saving-investment regressions failed to give empirical foundation to this explanation. We show that previous studies have neglected the fact that global shocks may propagate heterogeneously across countries, failing to properly isolate components of saving and investment that are affected by non-pervasive shocks. We propose a novel factor-augmented panel regression methodology that makes it possible to isolate idiosyncratic sources of fluctuations under the assumption of heterogeneous transmission mechanisms of global shocks. Remarkably, when our methodology is applied, the association between domestic saving and investment decreases considerably over time, consistent with the observed increase in international capital mobility. In particular, in the last 25 years the correlation between saving and investment disappears.
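Schematically, the factor-augmented saving-investment regression proposed here can be written as (illustrative notation):

```latex
\left(\frac{I}{Y}\right)_{it} = \alpha_i + \beta \left(\frac{S}{Y}\right)_{it} + \gamma_i' f_t + \epsilon_{it}
```

where the country-specific loadings γ_i absorb the heterogeneous transmission of the global shocks f_t, so that β is identified from the idiosyncratic variation in saving; the finding is that β estimated this way falls markedly over time.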


Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
20

Malek, Mansour Jeoffrey H. G. "Three essays in international economics." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210878.

Full text
Abstract:
This thesis consists of a collection of research works dealing with various aspects of International Economics. More precisely, we focus on three main themes: (i) the existence of a world business cycle and the implications thereof, (ii) the likelihood of asymmetric shocks in the Euro Zone resulting from fluctuations in the euro exchange rate because of differences in sector specialization patterns, and some consequences of such shocks, and (iii) the relationship between trade openness and growth, and the influence of the sector specialization structure on that relationship.

Regarding the approach pursued to tackle these problems, we have chosen to strictly remain within the boundaries of empirical (macro)economics - that is, applied econometrics. Though we systematically provide theoretical models to back up our empirical approach, our only real concern is to look at the stories the data can (or cannot) tell us. As to the econometric methodology, we will restrict ourselves to the use of panel data analysis. The large spectrum of techniques available within the panel framework allows us to utilize, for each of the problems at hand, the most suitable approach (or what we think it is).
Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
21

Matlanyane, Retselisitsoe Adelaide. "A macroeconometric model for the economy of Lesotho: policy analysis and implications." Pretoria : [s.n.], 2005. http://upetd.up.ac.za/thesis/available/etd-04182005-091509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Flury, Thomas. "Econometrics of dynamic non-linear models in macroeconomics and finance." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Aboagye, Anthony Q. Q. "Financial flows, macroeconomic policy and the agricultural sector in Sub-Saharan Africa." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=35672.

Full text
Abstract:
This thesis focuses on the effects of development assistance (ODA), private foreign commercial capital (PFX), domestic savings (SAV), the openness of the economy and producer prices on agricultural output, and on export and domestic shares of agricultural output in sub-Saharan Africa (SSA). This study uses panel data spanning 27 countries and the period 1970 to 1993.
The production function is of the Cobb-Douglas type. Static export and domestic share equations are derived from a specification of the agricultural gross domestic product function. Transformed auto-regressive distributed-lag versions of the static share models are used to investigate long-run dynamics, persistence and implementation lags in the share response model.
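
A Cobb-Douglas specification of the type described would read, in logs, roughly as follows (a generic illustration rather than the thesis's exact estimating equation):

```latex
\ln Y_{it} = \alpha_i + \beta_1 \ln ODA_{it} + \beta_2 \ln PFX_{it} + \beta_3 \ln SAV_{it}
           + \beta_4\, OPEN_{it} + \beta_5 \ln P_{it} + \varepsilon_{it}
```

with agricultural output Y, the financial-flow variables entering as inputs, and openness OPEN and producer prices P as shift factors.
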
Agricultural output is affected as follows. ODA, PFX and SAV have small positive or negative impacts depending on the agricultural region or economic policy environment. The impact of openness of the economy is negative in all agricultural regions; however, there is evidence of a positive effect of openness within an improved policy environment. None of these effects are statistically significant.
Export share is affected as follows. ODA, PFX and SAV have small positive impacts in some agricultural regions and policy environments, both in the short run and in the long run. PFX is not significant anywhere. ODA is significant only when countries are grouped by policy environment in the short run. SAV is significant in the short run only in some regions, and significant in the long run only in others. Openness has a positive impact in the short run, significant in many regions; its long-run impact is mostly positive but not significant anywhere. The impact of producer prices is mostly positive but not significant.
Efforts to encourage economic activities in rural communities, such as improvements in the domestic terms of trade in favor of agriculture, together with the provision of infrastructure, are likely to stimulate output. Strategies to diversify and process agricultural exports in the face of falling agricultural commodity prices should be pursued.
APA, Harvard, Vancouver, ISO, and other styles
24

Salman, Abdul Khalik Abbas. "An econometric study of export instability and stabilisation policies in the Iraqi economy." Thesis, Cardiff University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Nowman, Khalid. "Gaussian estimation of open higher order continuous time dynamic models with mixed stock and flow and with an application to a United Kingdom macroeconomic model." Thesis, University of Essex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Kremers, Jeroen Joseph Marie. "On the determination and macroeconomic consequences of public financial policy." Thesis, University of Oxford, 1986. http://ora.ox.ac.uk/objects/uuid:a8c0cb20-b178-4e80-9a46-fcb1079a4a9f.

Full text
Abstract:
This study develops a theoretical framework for the analysis of regular patterns in public financial behaviour, and applies that framework in an empirical assessment of budgetary policies in the United States and in the Netherlands. Its purpose and scope are threefold. First, it sheds theoretical light on economic considerations guiding public financial behaviour in a dynamic model of optimal taxation. The resulting idea, that it may be sensible to smooth taxation over time, is subsequently extended to a more general model of the public finances, which involves spending, taxation, debt and money creation in an effort to control the government budget. Second, using modern econometric methods, the practical relevance of this model is illustrated with estimations for the United States and the Netherlands. Third, the model is sufficiently flexible to allow for a number of more institutional insights. In this respect the emphasis is placed on the Dutch economy and public finances. The thesis thus engages economic theory, econometric technique and institutional and macroeconomic background in a combined effort to understand and evaluate regular patterns in public financial behaviour. Its findings have implications for each of these three areas of economic interest.
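
The tax-smoothing idea at the core of this framework can be stated compactly: with convex costs of raising revenue, the optimal tax rate follows, in its simplest certainty-equivalence form (as in this strand of the literature),

```latex
\tau_t = \mathbb{E}_t\,\tau_{t+1}
```

so that taxes behave as a martingale and transitory fluctuations in spending are absorbed by debt and money creation rather than by tax-rate changes.
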
APA, Harvard, Vancouver, ISO, and other styles
27

Wan, Lai Shan. "Macroeconomic modelling and policy simulation for the Chinese economy." HKBU Institutional Repository, 2002. http://repository.hkbu.edu.hk/etd_ra/335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Luintel, Kul Bahadur. "Macroeconomics and money in developing countries : an econometric model for an Asian region." Thesis, University of Glasgow, 1993. http://theses.gla.ac.uk/829/.

Full text
Abstract:
This thesis is a contribution towards the macroeconomic and monetary analysis of developing countries. A fully-fledged macroeconometric model is theoretically specified, econometrically estimated and dynamically simulated for policy analysis. The model contains a demand side, a supply side, balance of payments accounts, government accounts and a financial sector. The model is tested using regional data consisting of seven Asian developing countries, namely Fiji, India, Malaysia, Pakistan, the Philippines, Sri Lanka and Thailand. A regional econometric model for Asian LDCs was lacking in the realm of global econometric models, and this study attempts to bridge this gap by building the first model for this region. On the demand side of the model, volume equations for consumption, investment, exports and imports and an equation for export prices are estimated. The supply side is derived from wage and price equations following a production function approach that is neo-classical in spirit. Inflation is modelled as a function of the divergence between demand and supply. Government accounts and the balance of payments accounts are fully specified. Most of the existing macroeconomic models in the LDC context abstract from modelling a financial sector, the implicit reason being that the financial sector in these economies is underdeveloped and that little scope therefore exists for monetary policy instruments. We have developed a detailed bank-based financial sector model in which all the balance sheet flows of the Central Bank and commercial banks are at centre stage. We show that monetary policy instruments are effective in affecting macro activity. The interlinkage between the financial and the real sector comes not through the cost of capital; rather, it arises from income-expenditure flows and real financial asset stocks. Such linkages operate even if the financial sector is undeveloped.
APA, Harvard, Vancouver, ISO, and other styles
29

Romain, Astrid. "Essays in the empirical analysis of venture capital and entrepreneurship." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210729.

Full text
Abstract:
EXECUTIVE SUMMARY

This thesis aims at analysing some aspects of Venture Capital (VC) and high-tech entrepreneurship. The focus is both at the macroeconomic level, comparing venture capital from an international point of view, and at the level of Technology-Based Small Firms (TBSF) in Belgium, analysed at company and founder level. The approach is mainly empirical.

This work is divided into two parts. The first part focuses on venture capital. First of all, we test the impact of VC on productivity. We then identify the determinants of VC and we test their impact on the relative level of VC for a panel of countries.

The second part concerns the technology-based small firms in Belgium. The objective is twofold. It first aims at creating a database on Belgian TBSF to better understand the importance of entrepreneurship. In order to do this, a national survey was developed and the statistical results were analysed. Secondly, it provides an analysis of the role of universities in the employment performance of TBSF.

A broad summary of each chapter is presented below.

PART 1: VENTURE CAPITAL

The Economic Impact of Venture Capital

The objective of this chapter is to perform an evaluation of the macroeconomic impact of venture capital. The main assumption is that VC can be considered as being similar in several respects to business R&D performed by large firms. We test whether VC contributes to economic growth through two main channels. The first one is innovation, characterized by the introduction of new products, processes or services on the market. The second one is the development of an absorptive capacity. These hypotheses are tested quantitatively with a production function model for a panel data set of 16 OECD countries from 1990 to 2001. The results show that the accumulation of VC is a significant factor contributing directly to Multi-Factor Productivity (MFP) growth. The social rate of return to VC is significantly higher than the social rate of return to business or public R&D. VC also has an indirect impact on MFP in the sense that it improves the output elasticity of R&D. An increased VC intensity makes it easier to absorb the knowledge generated by universities and firms, and therefore improves aggregate economic performance.
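In schematic form, the estimating framework amounts to a productivity regression of the following type (the notation and the exact functional form are illustrative, not the chapter's own):

```latex
\Delta \ln MFP_{it} = \alpha + \rho_R\, \Delta \ln K^{R\&D}_{it} + \rho_V\, \Delta \ln K^{VC}_{it}
                    + \theta \left( VC_{it} \times \Delta \ln K^{R\&D}_{it} \right) + \varepsilon_{it}
```

where the direct effect of the accumulated VC stock is captured by ρ_V and the absorptive-capacity channel by the interaction coefficient θ.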

Technological Opportunity, Entrepreneurial Environment and Venture Capital Development

The objective of this chapter is to identify the main determinants of venture capital. We develop a theoretical model where three main types of factors affect the demand and supply of VC: macroeconomic conditions, technological opportunity, and the entrepreneurial environment. The model is evaluated with a panel dataset of 16 OECD countries over the period 1990-2000. The estimates show that VC intensity is pro-cyclical - it reacts positively and significantly to GDP growth. Interest rates affect the VC intensity mainly because the entrepreneurs create a demand for this type of funding. Indicators of technological opportunity such as the stock of knowledge and the number of triadic patents affect positively and significantly the relative level of VC. Labour market rigidities reduce the impact of the GDP growth rate and of the stock of knowledge, whereas a minimum level of entrepreneurship is required in order to have a positive effect of the available stock of knowledge on VC intensity.

PART 2: TECHNOLOGY-BASED SMALL FIRMS

Survey in Belgium

The first purpose of this chapter is to present the existing literature on the performance of companies. In order to get a quantitative insight into the entrepreneurial growth process, an original survey of TBSF in Belgium was launched in 2002. The second purpose is to describe the methodology of our national TBSF survey. This survey has two main merits. The first one lies in the quality of the information. Indeed, most national and international surveys have been developed at firm level; only a few surveys exist at founder level. The TBSF database contains information at both firm and entrepreneur level.

The second merit concerns the subjects covered. The TBSF survey tackles the financing of firms (availability of public funds, role of venture capitalists, availability of business angels, …), the framework conditions (e.g. the quality and availability of infrastructure and communication channels, the level of academic and public research, the patenting process, …) and, finally, the socio-cultural factors associated with the entrepreneurs and their environment (e.g. their level of education, their parents' education, gender, …).

Statistical Evidence

The main characteristics of companies in our sample are that employment and profits net of taxation do not follow the same trend. Indeed, employment may decrease while results after taxes may stay constant. Only a few companies enjoy a growth in both employment and results after taxes between 1998 and 2003.

On the financing front, our findings suggest that internal finance in the form of personal funds, as well as the funds of family and friends are the primary source of capital to start-up a high-tech company in Belgium. Entrepreneurs rely on their own personal savings in 84 percent of the cases. Commercial bank loans are the secondary source of finance. This part of external financing (debt-finance) exceeds the combined angel funds and venture capital funds (equity-finance).

On the entrepreneur front, the preliminary results show that 80 percent of entrepreneurs in this study have a university degree, while 42 percent hold postgraduate degrees (i.e. master's and doctoral degrees). In terms of research activities, 88 percent of the entrepreneurs holding a Ph.D. or a post-doctorate collaborate with Belgian higher education institutes. Moreover, more than 90 percent of these entrepreneurs are working in a university spin-off.

The Contribution of Universities to Employment Growth

The objective of this chapter is to test whether universities play a role amongst the determinants of employment growth in Belgian TBSF. The empirical model is based on our original survey of 87 Belgian TBSF. The results suggest that both academic spin-offs and TBSF created on the basis of an idea originating from business R&D activities are associated with above-average growth in employment. As most 'high-tech' entrepreneurs hold at least a university degree, there is no significant impact of the level of education. Nevertheless, these results must be taken with caution, as they are highly sensitive to the presence of outliers. Young high-tech firms are by definition highly volatile and might therefore be difficult to understand.

CONCLUSION

In this last chapter, recommendations for policy-makers are drawn from the results of the thesis. The possible interventions of governments are classified according to whether they influence the demand or the supply of entrepreneurship and/or VC. We present some possible actions, such as direct intervention in VC funds, public-sector intervention through labour market rigidities, the pension system, patent and research policy, the level of entrepreneurial activities, bankruptcy legislation, entrepreneurial education, the development of university spin-offs, and the creation of a national database of TBSF.


Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
30

Kavari, Gift Vijandjua. "Modelling macroeconomic performance of African economies : an application of a macro econometric model." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/844583/.

Full text
Abstract:
The main objective of this study has been to model the macroeconomic performance of African economies, and a macroeconometric model was therefore constructed to facilitate this exercise. The study investigated thirteen African countries during the period 1980-97. Based on the growth rates of real GDP and per capita income, and other macroeconomic indicators, Botswana emerges as the African "Tiger Economy", which has pursued sound economic policies. Other good macroeconomic performers are Mauritius and Namibia. The macroeconometric model was constructed for four African economies in Southern Africa: Namibia, Botswana, Mauritius, and South Africa. An instrumental variable technique was applied to estimate the model, and the WinSolve simulation program was used to perform policy simulations. Based on the estimated model (1970-96), consumption is not influenced by real interest rates; real interest rates are a determinant of investment only in South Africa. In all the countries, the determinant of consumption is real disposable income and the determinant of investment is real domestic income. Exchange rate effects boosted exports in the economy following a pegged exchange rate system (Namibia), and constrained imports in the economy that has experienced massive exchange rate depreciation and a weak currency (Mauritius). The existence of speculative money demand is well confirmed in Botswana and South Africa, but not in Namibia and Mauritius. In all countries, real wage rates and the level of income significantly determine employment. In the simulation model, a tax stabilisation rule was enforced, whereby a quarter of last year's cumulated debt was raised in taxes. When the tax rule is in place, the effects of government spending in stimulating the level of income are less potent than when the tax rule is relaxed. The simulation model was used to perform historical simulations, and the ability of the model to replicate the actual data demonstrates the "goodness of fit" of the model. The model was then subjected to shocks, and the potency of economic policies was assessed. These policies are interest rate policy, exchange rate devaluation, fiscal policy (government spending and tax cuts), and incomes policy (a rise in wages). Based on the simulation evidence, interest rate policy was more potent in stimulating economic activity in South Africa than in the remaining economies. Interest rate control in Mauritius and the lack of an independent interest rate policy in Namibia explain why interest rate policy in these economies is less potent. Exchange rate devaluation improves the trade balance in Namibia and Botswana, whilst the trade balance in Mauritius and South Africa deteriorates. The conduct of fiscal policy (a rise in government spending or tax cuts) to raise the level of income is more effective in South Africa. While a rise in government spending is less effective in Mauritius, tax cuts are more potent in this economy. Tax cuts are relatively less effective than a rise in government spending in Namibia and Botswana. Policy prescriptions are country-specific, and the study recommends the implementation of the proposed growth policy targets.
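
The tax stabilisation rule described amounts to, in schematic terms (illustrative notation),

```latex
T_t = \bar{T}_t + 0.25\, D_{t-1}
```

where D_{t-1} is last year's cumulated government debt, so that a quarter of the outstanding debt is recovered through taxes each year; relaxing the rule sets this coefficient to zero, which is why government spending becomes more potent in that scenario.
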
APA, Harvard, Vancouver, ISO, and other styles
31

Asare, Nyamekye. "Essays on Time-Varying Volatility and Structural Breaks in Macroeconomics and Econometrics." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37179.

Full text
Abstract:
This thesis comprises three independent essays, one in macroeconomics and two in time-series econometrics. The first essay, "Productivity and Business Investment over the Business Cycle", is co-authored with my co-supervisor Hashmat Khan. It documents a new stylized fact: the correlation between labour productivity and real business investment in U.S. data switched from 0.54 to -0.1 in 1990. Using a bivariate VAR, we find that the response of investment to identified technology shocks changed sign from positive to negative across two sub-periods, the post-WWII era to the end of the 1980s and 1990 onwards, whereas the response to non-technology shocks remained relatively unchanged. In addition, the volatility of technology shocks declined less than that of non-technology shocks. This raises the question of whether relatively more volatile technology shocks and the negative response of investment can together account for the decreased correlation. To answer this question, we consider a canonical DSGE model and simulate data under a variety of assumptions about the parameters representing structural features and the volatility of shocks. The second and third essays are in time-series econometrics and solely authored by myself. The second essay focuses on the impact of ignoring structural breaks on the estimated parameters of time-varying volatility models; the third essay focuses on the empirical relevance of structural breaks in time-varying volatility models and on the forecasting gains from accommodating structural breaks in the unconditional variance. There are several ways of modeling time-varying volatility. One is the autoregressive conditional heteroskedasticity (ARCH)/generalized ARCH (GARCH) class introduced by Engle (1982) and Bollerslev (1986); in Bollerslev's (1986) GARCH model, the conditional volatility is updated by its own squared residuals and its lags. This class of models is popular among practitioners in finance because it captures stylized facts about asset returns such as fat tails and volatility clustering (Engle and Patton, 2001; Zivot, 2009) and can be estimated by maximum likelihood. GARCH models also perform well in forecasting volatility; for example, Hansen and Lunde (2005) find that it is difficult to beat a simple GARCH(1,1) model in forecasting exchange rate volatility. Another way of modeling time-varying volatility is the class of stochastic volatility (SV) models, including Taylor's (1986) autoregressive stochastic volatility (ARSV) model, in which the conditional volatility is updated only by its own lags; SV models are increasingly used in macroeconomic modeling (e.g. Justiniano and Primiceri, 2010). Fernandez-Villaverde and Rubio-Ramirez (2010) argue that the stochastic volatility model fits the data better than the GARCH model and is easier to incorporate into DSGE models. More recently, Creal et al. (2013) introduced the class of generalized autoregressive score (GAS) models, in which the conditional variance is updated by the scaled score of the model's density function instead of the squared residuals. According to Creal et al. (2013), updating the conditional variance using the score of the log-density rather than second moments can improve a model's fit to the data.
GAS models are also found to be less sensitive to other forms of misspecification, such as outliers. As noted by Maddala and Kim (1998), structural breaks can be considered one form of outlier, which raises the question of whether GAS volatility models are less sensitive to parameter non-constancy. Ignoring structural breaks in the volatility parameters matters because neglected breaks can cause the conditional variance to exhibit unit-root behaviour, in which the unconditional variance is undefined and any shock to the variance does not gradually die out (Lamoureux and Lastrapes, 1990). The impact of ignoring parameter non-constancy has been documented in the GARCH literature (see Lamoureux and Lastrapes, 1990; Hillebrand, 2005) and in the SV literature (Psaradakis and Tzavalis, 1999; Kramer and Messow, 2012), where the estimated persistence parameter overestimates its true value and approaches one; it had not, however, been addressed in the GAS literature until now. The second essay uses a simple Monte Carlo simulation study to examine the impact of neglecting parameter non-constancy on the estimated persistence parameter of several GAS and non-GAS volatility models. Five models are examined: three (the GARCH(1,1), t-GAS(1,1), and Beta-t-EGARCH(1,1) models) are GAS models, while the other two (the t-GARCH(1,1) and EGARCH(1,1) models) are not. Following Hillebrand (2005), who studied only the GARCH model, this essay examines how biased the estimated persistence parameters are by assessing the impact of ignoring breaks on their mean values. The impact of neglecting parameter non-constancy on the empirical sampling distributions and coverage probabilities of the estimated persistence parameters is also studied; the latter matters because a decrease in coverage probabilities is associated with an increase in Type I error. This study has implications for forecasting: if an ignored break in the parameters is small, there may be no gain from using forecast methods that accommodate breaks. Empirical evidence suggests that structural breaks are present in data on macro-financial variables such as oil prices and exchange rates. The potentially serious consequences of ignoring a break in GARCH parameters motivated Rapach and Strauss (2008) and Arouri et al. (2012) to study the empirical relevance of structural breaks in the context of GARCH models, but the literature has not addressed their relevance in the context of GAS models. The third and final essay contributes to this literature by extending Rapach and Strauss (2008) to include the t-GAS model and by comparing its performance to that of two non-GAS models, the t-GARCH and SV models. The empirical relevance of structural breaks is assessed using a formal test by Dufour and Torres (1998) to determine how much the estimated parameters change over sub-periods. The in-sample performance of all the models is analyzed using both the weekly USD trade-weighted index between January 1973 and October 2016 and spot oil prices based on West Texas Intermediate between January 1986 and October 2016. The full sample is split into smaller subsamples at break dates chosen on the basis of historical events and policy changes rather than formal tests,
because commonly used tests such as CUSUM suffer from low power (Smith, 2008; Xu, 2013). For each sub-period, all models are estimated on either oil or USD returns. Confidence intervals are constructed for the constant of the conditional variance equation and for the score parameter (the ARCH parameter in the GARCH and t-GARCH models), and Dufour and Torres's union-intersection test is applied to these intervals to determine how much the estimated parameters change across sub-periods: if a set of values intersects the confidence intervals of all sub-periods, one can conclude that the parameters do not change much. The out-of-sample performance of all the time-varying volatility models is also assessed in terms of their ability to forecast the mean and variance of oil and USD returns. Through this analysis, the essay also addresses whether accommodating structural breaks in the unconditional variance of both GAS and non-GAS models improves forecasts.
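The contrast between the two updating rules discussed above can be made concrete. A minimal Python sketch follows, assuming illustrative parameter values (not the essay's estimates); the t-GAS recursion uses a score-based weight w that downweights extreme residuals:

    import numpy as np

    # GARCH(1,1): conditional variance updated by the squared residual.
    # t-GAS: variance updated by the scaled score of a Student-t density,
    # so large residuals receive a weight w < 1. All parameter values are
    # illustrative assumptions, not estimates from the essay.
    omega, alpha, beta, nu = 0.1, 0.05, 0.90, 5.0
    rng = np.random.default_rng(0)
    eps = rng.standard_t(nu, size=500)
    s2_garch = np.full(eps.size, eps.var())
    s2_gas = s2_garch.copy()
    for t in range(1, eps.size):
        e2 = eps[t - 1] ** 2
        s2_garch[t] = omega + alpha * e2 + beta * s2_garch[t - 1]
        w = (nu + 1) / (nu + e2 / s2_gas[t - 1])           # score weight
        s2_gas[t] = omega + alpha * (w * e2 - s2_gas[t - 1]) + beta * s2_gas[t - 1]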
APA, Harvard, Vancouver, ISO, and other styles
32

Barbosa, Fernando Honorato. "Uma análise das elasticidades de bens e serviços não fatores, sua estabilidade e o ajuste externo brasileiro pós-99." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/12/12140/tde-03122006-120212/.

Full text
Abstract:
The recent Brazilian trade surpluses have changed the perception of external fragility that marked the country's accounts over the previous two and a half decades. In the face of this new reality, it seems pertinent to evaluate the determinants of the trade surplus using the traditional variables of the external accounts literature: the exchange rate, external prices, and domestic and world income. To this end, long-run equations for Brazilian exports and imports were estimated in order to evaluate the trade elasticities of goods and non-factor services, using the cointegration methodology proposed by Johansen (1988) and Johansen and Juselius (1990). The elasticities were estimated over two periods, 1980-1998 and 1980-2005, a division intended to capture the effects of the 1999 change in the exchange rate regime on the external accounts. In addition, a series of recursive tests was performed to check the stability, breaks and robustness of the cointegration relations. The estimated elasticities are satisfactory and in line with previous studies, but add information on the elasticities for goods and non-factor services as a whole rather than for goods alone. A number of breaks were identified over the estimation sample, generally associated with critical changes in economic policy. Finally, it was possible to identify that, after the floating of the currency, the external income elasticity of exports rose while the exchange rate elasticities of both exports and imports declined. The study concludes that the large trade surpluses result from a particular combination of favourable external prices, a depreciated real exchange rate and high world income growth, together with some structural change associated with the greater responsiveness of exports to world income, probably reflecting the recovery of Brazil's comparative advantages in the wake of the 1999 change in the exchange rate regime.
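As a minimal sketch of the Johansen procedure cited above, the statsmodels implementation can be run on simulated series standing in for the thesis's trade data; the lag order, deterministic term and cointegration rank below are illustrative choices, not the thesis's specification:

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

    # Simulated stand-ins sharing one stochastic trend, mimicking the kind
    # of cointegrated system estimated in the thesis (not the actual data).
    rng = np.random.default_rng(1)
    trend = np.cumsum(rng.normal(size=200))
    data = np.column_stack([trend + rng.normal(size=200) for _ in range(3)])

    jres = coint_johansen(data, det_order=0, k_ar_diff=2)
    print(jres.lr1)    # trace statistics for rank 0, 1, 2
    print(jres.cvt)    # 90/95/99% critical values

    # Rank set to 1 purely for illustration; the long-run (cointegrating)
    # vector in beta plays the role of the estimated elasticities.
    vecm = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="co").fit()
    print(vecm.beta)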
APA, Harvard, Vancouver, ISO, and other styles
33

Eliasson, Ann-Charlotte. "Smooth transitions in macroeconomic relationships." Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics (Ekonomiska forskningsinstitutet vid Handelshögsk.) (EFI), 1999. http://www.hhs.se/efi/summary/516.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Santos, Douglas Gomes dos. "Ensaios em econometria aplicada a finanças e macroeconomia utilizando a abordagem de regressão MIDAS." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/103954.

Full text
Abstract:
The Mixed Data Sampling (MIDAS) regression approach proposed by Ghysels et al. (2004) allows variables sampled at different frequencies to be related directly. This feature is particularly attractive when one wishes to use the data at their original sampling frequencies, and when the objective is to compute multi-period-ahead forecasts. This thesis uses the MIDAS regression approach in three comparative essays with empirical applications in finance and macroeconomics, aiming to contribute comparative empirical evidence across different forecasting contexts. The first essay explores comparative results in multi-period volatility forecasting. The MIDAS approach is compared with two widely used methods of producing multi-period forecasts, the direct and the iterated approaches. Their relative performance is investigated in a Monte Carlo study and in an empirical study forecasting volatility at horizons of up to 60 days ahead. The Monte Carlo results indicate that the MIDAS forecasts are best at horizons of 15 days and longer, whereas the iterated forecasts are superior at the shorter horizons of 5 and 10 days. In the empirical study, using daily returns on the S&P 500 and NASDAQ indexes, the results are less conclusive but suggest a better performance for the iterated approach. All analyses are out-of-sample. The second essay compares several multi-period volatility forecasting models, specifically from the MIDAS and HAR families, in terms of out-of-sample forecasting accuracy; combinations of the models' forecasts are also considered. Intra-daily returns on the IBOVESPA are used to calculate volatility measures such as realized variance, realized power variation and realized bipower variation, which serve as regressors in both model families. In addition, a nonparametric procedure is used to separately measure the continuous sample path component and the jump component of the quadratic variation process, and these measures are used as separate regressors in MIDAS and HAR specifications. In terms of mean squared error, the results suggest that regressors based on jump-robust volatility measures (realized bipower variation and realized power variation) are better at forecasting future volatility. However, the forecasts based on these regressors are generally not statistically different from those based on realized variance (the benchmark regressor), and the relative performance of the three approaches (MIDAS, HAR and forecast combinations) is in general statistically equivalent. The third essay compares the Markov-Switching MIDAS (MS-MIDAS) and Smooth Transition MIDAS (STMIDAS) models in terms of forecast accuracy, through a real-time exercise generating out-of-sample forecasts of quarterly U.S. output growth from monthly financial indicators. Linear MIDAS models and other (linear and nonlinear) forecasting models that include the indicators via temporal aggregation of the monthly observations are also considered for comparison. The results of the empirical study show that, in general, the MS-MIDAS models provide more accurate forecasts than the STMIDAS models.
APA, Harvard, Vancouver, ISO, and other styles
35

Zeng, Ning. "The usefulness of econometric models with stochastic volatility and long memory : applications for macroeconomic and financial time series." Thesis, Brunel University, 2009. http://bura.brunel.ac.uk/handle/2438/3903.

Full text
Abstract:
This study examines the usefulness of econometric models with stochastic volatility and long memory in applications to macroeconomic and financial time series. An ARFIMA-FIAPARCH process is used to estimate the two main parameters driving the degree of persistence in the US real interest rate and its uncertainty. It provides evidence that US real interest rates exhibit dual long memory, and suggests that much more attention needs to be paid to the degree of persistence and its consequences for economic theories that remain inconsistent with findings of either near-unit-root or long-memory mean-reverting behaviour. A bivariate GARCH-type model, with and without long memory, is constructed to address the temporal ordering of inflation, output growth and their respective uncertainties, as well as all the possible causal relationships among the four variables in the US and UK, allowing several lags of the conditional variances/levels to enter as regressors in the mean/variance equations. Notably, the findings are quite robust to changes in the specification of the model. Finally, the applicability and out-of-sample forecasting ability of a multivariate constant-conditional-correlation FIAPARCH model are analysed in a multi-country study of national stock market returns. This multivariate specification is generally applicable once power, leverage and long-memory effects are taken into consideration; in addition, both the optimal fractional differencing parameter and the power transformation are remarkably similar across countries.
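The long-memory building block shared by ARFIMA and FIAPARCH models is the fractional difference operator (1-L)^d. A minimal sketch of its truncated binomial expansion follows; the truncation length and test values are illustrative:

    import numpy as np

    # Fractional differencing via the binomial expansion of (1-L)^d:
    # pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k. For 0 < d < 0.5 the
    # weights decay hyperbolically, the signature of long memory.
    def frac_diff(x, d, trunc=100):
        w = np.ones(trunc)
        for k in range(1, trunc):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return np.convolve(x, w)[: len(x)]         # truncated filter

    rng = np.random.default_rng(3)
    noise = rng.normal(size=500)
    x = np.cumsum(noise)                           # a d = 1 (unit root) process
    print(np.allclose(frac_diff(x, 1.0), noise))   # (1-L)^1 recovers the noise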
APA, Harvard, Vancouver, ISO, and other styles
36

KAVTARADZE, LASHA. "DINAMICS AND LATENT VARIABLES IN APPLIED MACROECONOMICS." Doctoral thesis, Università Cattolica del Sacro Cuore, 2016. http://hdl.handle.net/10280/16793.

Full text
Abstract:
This Ph.D. thesis consists of three chapters on evaluating inflation dynamics in Georgia and on modelling and forecasting nominal exchange rates for the European Eastern Partnership (EaP) countries, using modern applied econometric techniques. The first chapter surveys models with high predictive power for forecasting exchange rates and inflation; the survey reveals that factor-based and time-varying parameter (TVP) models generate superior forecasts relative to all other models. The second chapter studies inflation dynamics in Georgia using a hybrid New Keynesian Phillips Curve (NKPC) nested within a time-varying parameter framework. Estimation of a TVP model with stochastic volatility shows low inflation persistence over the entire time span (1996-2012), while revealing increasing volatility of inflation shocks since 2003. Moreover, the parameter estimates point to the forward-looking component of the model gaining importance following the National Bank of Georgia's (NBG) adoption of inflation targeting in 2009. The third chapter constructs Factor Vector Autoregressive (FVAR) models to forecast nominal exchange rates for the EaP countries; these models provide better forecasts of nominal exchange rates than those produced by the random walk process.
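The time-varying parameter mechanism in the second chapter can be sketched with the scalar Kalman filter applied to a random-walk regression coefficient; the variances and simulated data below are illustrative assumptions, not the thesis's NKPC estimates:

    import numpy as np

    # TVP regression y_t = beta_t * x_t + e_t, beta_t = beta_{t-1} + v_t,
    # recovered with the standard Kalman recursions.
    rng = np.random.default_rng(4)
    n, q, r = 200, 0.01, 0.25          # sample size, state and obs. variances
    beta = np.cumsum(np.sqrt(q) * rng.normal(size=n))    # drifting coefficient
    x = rng.normal(size=n)
    y = beta * x + np.sqrt(r) * rng.normal(size=n)

    b, P, b_filt = 0.0, 1.0, np.zeros(n)
    for t in range(n):
        P += q                                     # predict the random-walk state
        K = P * x[t] / (x[t] ** 2 * P + r)         # Kalman gain
        b += K * (y[t] - x[t] * b)                 # update with the forecast error
        P *= 1 - K * x[t]
        b_filt[t] = b                              # filtered path of beta_t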
APA, Harvard, Vancouver, ISO, and other styles
38

Nunes, Maurício Simiano. "A relação entre o mercado de ações brasileiro e as variáveis macroeconômicas no período pós-plano real." Florianópolis, SC, 2003. http://repositorio.ufsc.br/xmlui/handle/123456789/85284.

Full text
Abstract:
Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro Sócio-Econômico, Graduate Programme in Economics.
APA, Harvard, Vancouver, ISO, and other styles
39

Skalin, Joakim. "Modelling macroeconomic time series with smooth transition autoregressions." Doctoral thesis, Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-650.

Full text
Abstract:
Among the parametric nonlinear time series model families, the smooth transition regression (STR) model has recently received attention in the literature. This dissertation focuses on the univariate special case of this model, the smooth transition autoregression (STAR) model, although large parts of the discussion generalise easily to the STR case. Many nonlinear univariate time series models can be described as consisting of a number of regimes, each corresponding to a linear autoregressive parametrisation, between which the process switches. In STAR models, as opposed to certain other popular multiple-regime models, the transition between the extreme regimes is smooth and is characterised by a bounded continuous function of a transition variable, which may be a lagged value of the variable in the model or another stochastic or deterministic observable variable. A number of other commonly discussed nonlinear autoregressive models can be viewed as special or limiting cases of the STAR model. The applications presented in the first two chapters of this dissertation (Chapter I, "Another look at Swedish business cycles, 1861-1988", and Chapter II, "Modelling asymmetries and moving equilibria in unemployment rates") make use of STAR models. In these two studies, STAR models provide insight into dynamic properties of the time series that cannot be properly characterised by linear time series models, and which may therefore be obscured by estimating only a linear model in cases where linearity would be rejected if tested. Besides the applications being of interest in their own right, an important common objective of these two chapters is to develop, suggest, and exemplify methods that may be useful in discussing the dynamic properties of estimated STAR models in general. Chapter III, "Testing linearity against smooth transition autoregression using a parametric bootstrap", reports the results of a small simulation study of a new test of linearity against STAR based on bootstrap methodology.
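A minimal sketch of the logistic STAR(1) mechanism described above, with illustrative coefficients: the process mixes two AR(1) regimes through a bounded logistic function of the lagged level.

    import numpy as np

    # LSTAR(1): y_t = [phi1*(1 - G) + phi2*G] * y_{t-1} + e_t, where
    # G = 1 / (1 + exp(-gamma*(y_{t-1} - c))) is the smooth transition.
    def transition(z, gamma=2.5, c=0.0):
        return 1.0 / (1.0 + np.exp(-gamma * (z - c)))

    rng = np.random.default_rng(5)
    phi1, phi2 = 0.8, -0.3             # the two extreme regimes (assumed values)
    y = np.zeros(300)
    for t in range(1, y.size):
        G = transition(y[t - 1])
        y[t] = (phi1 * (1 - G) + phi2 * G) * y[t - 1] + 0.1 * rng.normal()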

Diss. Stockholm : Handelshögskolan, 1999

APA, Harvard, Vancouver, ISO, and other styles
40

Galgau, Olivia. "Essays in international economics and industrial organization." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210773.

Full text
Abstract:
The aim of the thesis is to further explore the relationship between economic integration and firm mobility and investment, both empirically and theoretically, with the objective of drawing conclusions on how government policy can strengthen the positive impact of integration on investment, which is crucial to moving and keeping countries at the technology frontier and to accelerating economic growth in a world of rapid technical change and high mobility of ideas, goods, services, capital and labor.

The first chapter aims to bring together the literature on economic integration, firm mobility and investment. It contains two sections: one dedicated to the literature on FDI and the second covering the literature on firm entry and exit, economic performance and economic and business regulation.

In the second chapter I examine the relationship between the Single Market and FDI both in an intra-EU context and from outside the EU. The empirical results show that the impact of the Single Market on FDI differs substantially from one country to another. This finding may be due to the functioning of institutions.

The third chapter studies the relationship between the level of external trade protection put into place by a Regional Integration Agreement (RIA) and the option of a firm from outside the RIA block to serve the RIA market through FDI rather than exports. I find that the level of external trade protection put in place by the RIA depends on the RIA country's capacity to benefit from FDI spillovers, on the magnitude of the set-up costs of building a plant in the RIA, and on the amount of external trade protection erected with respect to the RIA by the country outside the regional block.

The fourth chapter studies how the firm entry and exit process is affected by product market reforms and regulations, and how it in turn affects macroeconomic performance. The results show that increased deregulation leads to a rise in firm entry and exit, which particularly affects macroeconomic performance as measured by output growth and labor productivity growth. The sector-level analysis shows that results can differ substantially across industries, implying that deregulation policies should be conducted at the sector level rather than at the global macroeconomic level.
Doctorate in Economics (economics orientation)

APA, Harvard, Vancouver, ISO, and other styles
41

Song, Keran. "Business Cycle Effects on US Sectoral Stock Returns." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2008.

Full text
Abstract:
My dissertation investigates business cycle effects on US sectoral stock returns. The first chapter examines the relationship between the business cycle and sectoral stock returns. First, I calculate constant correlation coefficients between the business cycle and sectoral stock returns. Then, I employ the DCC GARCH model to estimate time-varying correlation coefficients for each pair of the business cycle and sectoral stock returns. Finally, I regress sectoral returns on dummy variables designed to capture the four stages of the business cycle. I find that though sectoral stock returns are closely related to the business cycle, they do not share some of its main characteristics. The second chapter develops two models to examine possible asymmetric business cycle effects on US sectoral stock returns: a GARCH model with asymmetric explanatory variables, and an ARCH-M model with asymmetric external regressors. In the second model, the square root of the conditional variance of the business cycle proxy is characterized as positive or negative risk, depending on the algebraic sign of the past innovations driving the proxy. I find that some sectors change their cyclicality between expansions and recessions. Negative shocks to the business cycle have the most power to influence sectoral volatilities. The positive and negative parts of business cycle risk have the same effects on some sectors but opposite effects on others. A general conclusion of both models is that business cycle effects are stronger than sectors' own effects in driving sectoral returns. The third chapter examines Chinese business cycle effects on US sectoral stock returns at two horizons. At a monthly horizon, the third lag of the Chinese IP growth rate has positive effects on most sectors, and the second lag of the US IP growth rate has positive effects on almost all sectors. At a quarterly horizon, besides the extensive positive effects of the first lag of the Chinese IP growth rate, the third and fourth lags also affect some sectors. The US IP growth rate shows the same pattern, namely positive first- and fourth-lag effects and negative third-lag effects. Using a 5-year rolling fixed window, I find that these business cycle effects are time-varying; the major changes in parameters result from the elimination of quotas on textiles by the WTO, the terrorist attacks on the US, and the 2007 financial crisis.
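The stage-dummy regression of the first chapter reduces to ordinary least squares on four indicators. A minimal sketch on simulated placeholders follows (the stage labels, sample size and returns are assumptions, not the dissertation's data):

    import numpy as np
    import statsmodels.api as sm

    # Regress sectoral returns on indicators for four business cycle stages;
    # with no constant, each coefficient is the mean return in that stage.
    rng = np.random.default_rng(6)
    n = 240
    stage = rng.integers(0, 4, size=n)              # 0..3: four cycle stages
    D = np.column_stack([(stage == s).astype(float) for s in range(4)])
    returns = D @ np.array([0.8, 0.2, -0.5, 0.1]) + rng.normal(size=n)
    print(sm.OLS(returns, D).fit().params)          # stage-by-stage mean returns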
APA, Harvard, Vancouver, ISO, and other styles
42

Fodor, Bryan D. "The effect of macroeconomic variables on the pricing of common stock under trending market conditions." Thesis, Department of Business Administration, University of New Brunswick, 2003. http://hdl.handle.net/1882/49.

Full text
Abstract:
Thesis (MBA) -- University of New Brunswick, Faculty of Administration, 2003.
Typescript. Bibliography: leaves 83-84. Also available online through University of New Brunswick, UNB Electronic Theses & Dissertations.
APA, Harvard, Vancouver, ISO, and other styles
43

FAUSER, SIMON GEORG. "Un modello reggionale del mercato di lavoro per la Germania - an analisi degli shock macroeconomici e variabili della politica economica." Doctoral thesis, Università Cattolica del Sacro Cuore, 2009. http://hdl.handle.net/10280/622.

Full text
Abstract:
The rise in European unemployment over recent decades stems from high unemployment in the large European nations. At the sub-national level, unemployment rates within these nations differ considerably and mirror distinct regional economic structures. Together with the increased economic and political complexity, regionalisation and integration within Europe, this calls for tools that help policy makers analyse the impact of policies at the regional level. The author constructs such a tool and applies it to data for the Western German states from 1975 to 2005. The model builds on previous approaches to macroeconomic labour market modelling in the Italian context and extends them to incorporate the institutional setting and aspects of innovation. The model is used to examine the reactions of regional labour markets to macroeconomic shocks, asking in particular: do the labour markets of regions with a high share of innovative industries and knowledge-intensive services respond differently to exogenous shocks than regions with less innovative industries and services? The model performs well and yields manifold insights useful to regional, national and supranational policy makers.
APA, Harvard, Vancouver, ISO, and other styles
45

Zhao, Zilong. "Extracting knowledge from macroeconomic data, images and unreliable data." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT074.

Full text
Abstract:
System identification and machine learning are two similar concepts used independently in the control and computer science communities. System identification uses statistical methods to build mathematical models of dynamical systems from measured data. Machine learning algorithms build a model from sample data, known as training data (clean or not), in order to make predictions or decisions without being explicitly programmed to do so. Beyond prediction accuracy, convergence speed and stability are two other key factors in evaluating the training process, especially in the online learning scenario, and these properties are well studied in control theory. This thesis therefore carries out interdisciplinary research on the following topics. 1) System identification and optimal control on macroeconomic data: Chinese macroeconomic data are first modelled with a Vector Auto-Regression (VAR) model; the cointegration relations between variables are then identified, and a Vector Error Correction Model (VECM) is used to study short-run fluctuations around the long-run equilibrium, together with Granger causality. This work reveals the trend of China's transition from export-oriented to consumption-oriented growth. Owing to limitations of the Chinese data, the second study uses French macroeconomic data: the model is cast in state space and placed in a feedback control framework, with the controller designed as a Linear-Quadratic Regulator (LQR). The system can apply the control law to bring the economy to a desired state; perturbations can be imposed on outputs and constraints on inputs, emulating the real-world situation of an economic crisis, so that economists can observe the recovery trajectory of the economy, with meaningful implications for policy making. 2) Using control theory to improve the online learning of deep neural networks: a performance-based learning rate algorithm is proposed, E (Exponential)/PD (Proportional-Derivative) feedback control, which treats the Convolutional Neural Network (CNN) as the plant, the learning rate as the control signal and the loss value as the error signal. Results show that E/PD outperforms the state of the art in final accuracy, final loss and convergence speed, and its results are also more stable. However, one observation from the E/PD experiments is that the learning rate decreases while the loss continuously decreases; since a decreasing loss means the model is approaching an optimum, the learning rate should not be decreased in that case. To prevent this, an event-based E/PD is proposed, which improves E/PD in final accuracy, final loss and convergence speed. A further observation is that online learning fixes a constant number of training epochs for each batch; since E/PD converges fast, the significant improvement comes only from the initial epochs. A second event-based E/PD is therefore proposed, which inspects the historical loss and moves on to the next batch when the progress of training falls below a certain threshold. Results show that it can save up to 67% of epochs on the CIFAR-10 dataset without much performance degradation. 3) Machine learning from unreliable data: a generic framework, the Robust Anomaly Detector (RAD), is proposed. The data selection part of RAD is a two-layer framework in which the first layer filters out suspicious data and the second layer detects anomaly patterns from the remaining data. Three variations of RAD are also derived, namely voting, active learning and slim, which use additional information such as the opinions of conflicting classifiers and queries of oracles, and the historically selected data are iteratively updated to improve accumulated data quality. Results show that RAD can continuously improve a model's performance in the presence of label noise; all three variations improve on the original setting, and RAD Active Learning performs almost as well as in the noise-free case.
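A minimal, framework-agnostic sketch of the loss-driven learning rate idea described above follows; the gains, decay factor and switching condition are illustrative assumptions and not the thesis's exact E/PD specification:

    # The loss plays the role of the error signal; the learning rate is the
    # control signal. Exponential cut when the loss rises, PD adjustment
    # otherwise. All constants below are illustrative assumptions.
    def epd_learning_rate(prev_losses, lr, lr_min=1e-6, decay=0.9, kp=0.05, kd=0.02):
        if len(prev_losses) < 3:
            return lr
        l0, l1, l2 = prev_losses[-3], prev_losses[-2], prev_losses[-1]
        if l2 > l1:
            return max(lr_min, lr * decay)         # E phase: exponential cut
        p = l1 - l2                                # proportional: recent improvement
        d = l2 - 2 * l1 + l0                       # derivative: change of improvement
        return max(lr_min, lr + kp * p - kd * d)   # PD phase

    # Usage in a training loop (placeholder losses): update lr per batch.
    losses, lr = [], 0.01
    for loss in [1.0, 0.8, 0.7, 0.75, 0.6]:
        losses.append(loss)
        lr = epd_learning_rate(losses, lr)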
APA, Harvard, Vancouver, ISO, and other styles
46

Liebermann, Joëlle. "Essays in real-time forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209644.

Full text
Abstract:
This thesis contains three essays in the field of real-time econometrics, and more particularly

forecasting.

The issue of using data as available in real-time to forecasters, policymakers or financial

markets is an important one which has only recently been taken on board in the empirical

literature. Data available and used in real-time are preliminary and differ from ex-post

revised data, and given that data revisions may be quite substantial, the use of latest

available instead of real-time can substantially affect empirical findings (see, among others,

Croushore’s (2011) survey). Furthermore, as variables are released on different dates

and with varying degrees of publication lags, in order not to disregard timely information,

datasets are characterized by the so-called “ragged-edge”structure problem. Hence, special

econometric frameworks, such as developed by Giannone, Reichlin and Small (2008) must

be used.

The first Chapter, “The impact of macroeconomic news on bond yields: (in)stabilities over

time and relative importance”, studies the reaction of U.S. Treasury bond yields to real-time

market-based news in the daily flow of macroeconomic releases which provide most of the

relevant information on their fundamentals, i.e. the state of the economy and inflation. We

find that yields react systematically to a set of news consisting of the soft data, which have

very short publication lags, and the most timely hard data, with the employment report

being the most important release. However, sub-samples evidence reveals that parameter

instability in terms of absolute and relative size of yields response to news, as well as

significance, is present. Especially, the often cited dominance to markets of the employment

report has been evolving over time, as the size of the yields reaction to it was steadily

increasing. Moreover, over the recent crisis period there has been an overall switch in the

relative importance of soft and hard data compared to the pre-crisis period, with the latter

becoming more important even if less timely, and the scope of hard data to which markets

react has increased and is more balanced as less concentrated on the employment report.

Markets have become more reactive to news over the recent crisis period, particularly to

hard data. This is a consequence of the fact that in periods of high uncertainty (bad state),

markets starve for information and attach a higher value to the marginal information content

of these news releases.

The second and third Chapters focus on the real-time ability of models to now-and-forecast

in a data-rich environment. It uses an econometric framework, that can deal with large

panels that have a “ragged-edge”structure, and to evaluate the models in real-time, we

constructed a database of vintages for US variables reproducing the exact information that

was available to a real-time forecaster.

The second Chapter, “Real-time nowcasting of GDP: a factor model versus professional

forecasters”, performs a fully real-time nowcasting (forecasting) exercise of US real GDP

growth using Giannone, Reichlin and Smalls (2008), henceforth (GRS), dynamic factor

model (DFM) framework which enables to handle large unbalanced datasets as available

in real-time. We track the daily evolution throughout the current and next quarter of the

model nowcasting performance. Similarly to GRS’s pseudo real-time results, we find that

the precision of the nowcasts increases with information releases. Moreover, the Survey of

Professional Forecasters does not carry additional information with respect to the model,

suggesting that the often cited superiority of the former, attributable to judgment, is weak

over our sample. As one moves forward along the real-time data flow, the continuous

updating of the model provides a more precise estimate of current quarter GDP growth and

the Survey of Professional Forecasters becomes stale. These results are robust to the recent

recession period.

The last Chapter, “Real-time forecasting in a data-rich environment”, evaluates the ability of different models to forecast key real and nominal US monthly macroeconomic variables in a data-rich environment and from the perspective of a real-time forecaster. Among the approaches used to forecast in a data-rich environment, we use the pooling of bivariate forecasts, an indirect way of exploiting a large cross-section, and the direct pooling of information using high-dimensional models (a DFM and a Bayesian VAR). Furthermore, forecast combination schemes are used to overcome the model-specification choices faced by the practitioner (e.g. which criteria to use to select the parametrization of the model), as we seek evidence on model performance that is robust across specifications and combination schemes. Our findings show that the predictability of the real variables is confined to the recent recession/crisis period. This is in line with the findings of D'Agostino and Giannone (2012) over an earlier period: the gains in relative performance of models using large datasets over univariate models are driven by downturn periods, which are characterized by higher comovements. These results are robust to the combination schemes or models used. A point worth mentioning is that, for nowcasting GDP, exploiting cross-sectional information along the real-time data flow also helps over the end of the great moderation period. Since GDP is a quarterly aggregate proxying the state of the economy, monthly variables carry information content for it. But, similarly to the findings for the monthly variables, predictability, as measured by the gains relative to the naive random walk model, is higher during the crisis/recession period than during tranquil times. Regarding inflation, results are stable across time, but predictability is mainly found in nowcasting and forecasting one month ahead, with the BVAR standing out at nowcasting. The results show that the forecasting gains at these short horizons stem mainly from exploiting timely information. They also show that the direct pooling of information using a high-dimensional model (DFM or BVAR), which takes into account the cross-correlation between the variables and efficiently deals with the “ragged-edge” structure of the dataset, yields more accurate forecasts than the indirect pooling of bivariate forecasts/models.
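As a toy illustration of how such gains are measured, the following sketch (simulated data; equal weights, one of many possible combination schemes) pools forecasts from several bivariate regressions and reports their RMSE relative to the naive random walk, where a ratio below one signals predictability:

```python
# Toy comparison of pooled bivariate forecasts against a naive random walk,
# measured by relative RMSE (< 1 means the pooled forecasts add value).
# Data, sample split and equal weights are all illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
T, N = 120, 8
F = np.cumsum(rng.normal(size=(T, 1)), axis=0) * 0.1       # common driver
X = F @ rng.normal(size=(1, N)) + rng.normal(scale=0.5, size=(T, N))
y = 0.5 * F[:, 0] + rng.normal(scale=0.3, size=T)          # target variable

split = 90
errs_pool, errs_rw = [], []
for t in range(split, T - 1):
    preds = []
    for j in range(N):
        # Bivariate model j: regress y(s+1) on x_j(s), s = 0..t-1.
        b = np.polyfit(X[:t, j], y[1:t + 1], 1)
        preds.append(np.polyval(b, X[t, j]))
    errs_pool.append(y[t + 1] - np.mean(preds))             # equal-weight pool
    errs_rw.append(y[t + 1] - y[t])                         # random-walk forecast

rel_rmse = np.sqrt(np.mean(np.square(errs_pool)) / np.mean(np.square(errs_rw)))
print("RMSE relative to random walk:", rel_rmse)
```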
Doctorate in Economics and Management

APA, Harvard, Vancouver, ISO, and other styles
47

Kamps, Christophe. "The dynamic macroeconomic effects of public capital: theory and evidence for OECD countries." Berlin: Springer, 2004. http://www.loc.gov/catdir/toc/fy054/2004114864.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Hörnell, Fredrik, and Melina Hafelt. "Responsiveness of Swedish housing prices to the 2018 amortization requirement : An investigation using a structural Vector autoregressive model to estimate the impact of macro prudential regulation on the Swedish housing market." Thesis, Södertörns högskola, Nationalekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-35533.

Full text
Abstract:
This thesis analyzed and estimated the impact of the March 1, 2018 loan-to-income amortization requirement on residential real estate prices in Sweden. A four-variable vector autoregressive (VAR) model was used to study the relationships between residential real estate prices, GDP, the real mortgage rate and the consumer price index over the period 2005 to 2017. First, a structural vector autoregressive (SVAR) model was used to test how a structural innovation in the error term for the real mortgage rate affected residential real estate prices. Secondly, an unconditional forecast from our reduced-form VAR was produced to estimate post-2017 price growth in the Swedish housing market. The impulse response function results contradict economic intuition, i.e. the price puzzle problem. The unconditional forecast indicates that the housing market will enter a period of slower price growth after 2017, which is in line with previous research. The thesis's vector autoregressive model can give meaningful results for trend forecasts, but it falls short of precise statements such as anticipating drastic price depreciation. We recommend the use of reduced-form VAR forecasting with regard to the Swedish housing market.
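A minimal sketch of this VAR workflow with statsmodels follows; the data are simulated stand-ins for the four series (not the thesis's Swedish dataset), and the lag length is chosen arbitrarily:

```python
# Four-variable VAR: fit, Cholesky impulse responses, unconditional forecast.
# Simulated placeholder data; variable names mirror the thesis's four series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
T = 52                                         # ~quarterly, 2005-2017
levels = pd.DataFrame(np.cumsum(rng.normal(size=(T, 4)), axis=0),
                      columns=["house_prices", "gdp",
                               "real_mortgage_rate", "cpi"])

y = levels.diff().dropna()                     # estimate in first differences
res = VAR(y).fit(2)                            # 2 lags, chosen arbitrarily here

irf = res.irf(12)
# Response of house prices (column 0) to a one-s.d. orthogonalised innovation
# in the real mortgage rate (column 2), Cholesky ordering as listed above.
print(irf.orth_irfs[:, 0, 2])

# Unconditional forecast of the next 8 quarters from the reduced-form VAR.
print(res.forecast(y.values[-res.k_ar:], steps=8))
```

The Cholesky ordering embodies the identifying assumption that the mortgage rate does not react contemporaneously to house prices; a counterintuitive sign in the printed responses is the kind of price puzzle the abstract refers to.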
APA, Harvard, Vancouver, ISO, and other styles
49

Fonseca, Marcelo Gonçalves da Silva. "Essays on the credit channel of monetary policy: a case study for Brazil." Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/11748.

Full text
Abstract:
The outbreak of the subprime crisis in the US in 2008 and of the European sovereign debt crisis in 2010 renewed academic interest in the role played by credit activity in business cycles. The purpose of this work is to present empirical evidence on the credit channel of monetary policy for the Brazilian case, using distinct econometric techniques. The work comprises three articles. The first presents a review of the financial frictions literature, with special emphasis on its implications for the conduct of monetary policy. It highlights the broad set of unconventional measures used by central banks in emerging and developed countries in response to the disruption of financial intermediation. A chapter is devoted in particular to the challenges faced by emerging-market central banks in conducting monetary policy in an environment of highly integrated capital markets. The second article presents an empirical investigation of the implications of the credit channel through the lens of a structural FAVAR (SFAVAR) model. The term "structural" derives from the estimation strategy adopted, which allows a clear economic interpretation to be attached to the estimated factors. The results show that shocks to the proxies for the external finance premium and the credit volume produce large and persistent fluctuations in inflation and economic activity, accounting for more than 30% of the variance decomposition of the latter at a three-year horizon. Counterfactual simulations show that the credit channel amplified the economic contraction in Brazil during the acute phase of the global financial crisis in the last quarter of 2008 and subsequently provided a relevant impulse to the recovery that followed. The third article presents a Bayesian estimation of a New Keynesian DSGE model incorporating the financial accelerator mechanism developed by Bernanke, Gertler and Gilchrist (1999). The results provide evidence in line with those obtained in the previous article: innovations in the external finance premium, represented by credit spreads, have relevant effects on the dynamics of aggregate demand and inflation. Additionally, monetary policy shocks are found to be amplified by the financial accelerator. Keywords: Macroeconomics, Monetary Policy, Credit Channel, Financial Accelerator, FAVAR, DSGE, Bayesian Econometrics
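As a schematic illustration of the FAVAR idea (not the article's structural SFAVAR identification scheme), the sketch below extracts principal-component factors from a simulated macro panel and stacks them with an observed policy rate in a small VAR, ordering the rate last for a Cholesky-identified monetary policy shock:

```python
# Schematic FAVAR: PCA factors from a large panel + observed policy rate.
# All series are simulated; this is not the article's SFAVAR estimation.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
T, N, K = 160, 40, 2                           # 160 months, 40 series, 2 factors
common = np.cumsum(rng.normal(size=(T, K)), axis=0) * 0.1
panel = common @ rng.normal(size=(K, N)) + rng.normal(scale=0.5, size=(T, N))

Z = (panel - panel.mean(0)) / panel.std(0)     # standardise the panel
U, s, _ = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :K] * s[:K]                     # principal-component factors

policy_rate = 0.9 * factors[:, 0] + rng.normal(scale=0.1, size=T)
favar = pd.DataFrame(np.column_stack([factors, policy_rate]),
                     columns=["f1", "f2", "policy_rate"])

res = VAR(favar).fit(2)                        # small VAR in factors + rate
irf = res.irf(24)
# Response of the first factor to a policy-rate innovation (rate ordered last).
print(irf.orth_irfs[:, 0, 2])
```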
APA, Harvard, Vancouver, ISO, and other styles
50

Curto, Millet Fabien. "Inflation expectations, labour markets and EMU." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:9187d2eb-2f93-4a5a-a7d6-0fb6556079bb.

Full text
Abstract:
This thesis examines the measurement, applications and properties of consumer inflation expectations in the context of eight European Union countries: France, Germany, the UK, Spain, Italy, Belgium, the Netherlands and Sweden. The data come mainly from the European Commission's Consumer Survey and are qualitative in nature, therefore requiring quantification prior to use. This study first seeks to determine the optimal quantification methodology among a set of approaches spanning three traditions, associated with Carlson-Parkin (1975), Pesaran (1984) and Seitz (1988). The success of a quantification methodology is assessed on the basis of its ability to match quantitative expectations data and on its behaviour in an important economic application, namely the modelling of wages for our sample countries. The wage equation developed here draws on the theoretical background of the staggered-contracts and wage-bargaining literatures, and controls carefully for inflation expectations and institutional variables. The Carlson-Parkin variation proposed in Curto Millet (2004) was found to be the most satisfactory. This being established, the wage equations are used to test the hypothesis that the advent of EMU generated an increase in labour market flexibility, which would be reflected in structural breaks. The hypothesis is essentially rejected. Finally, the properties of inflation expectations and perceptions themselves are examined, especially in the context of EMU. Both the rational expectations and rational perceptions hypotheses are rejected. Popular expectations mechanisms, such as the "rule-of-thumb" model or Akerlof et al.'s (2000) "near-rationality hypothesis", are similarly unsupported. On the other hand, evidence is found for the transmission of expert forecasts to consumer expectations in the case of the UK, as in Carroll's (2003) model. The distribution of consumer expectations and perceptions is also considered, showing a tendency for gradual (as in Mankiw and Reis, 2002) but non-rational adjustment. Expectations formation is further shown to have important qualitative features.
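For concreteness, here is a minimal sketch of the Carlson-Parkin quantification idea: assuming respondents answer "up" ("down") when their expected inflation exceeds (falls below) an indifference threshold of plus or minus delta, and that expectations are normally distributed, the survey response shares pin down the implied mean and dispersion. The shares and the value of delta below are invented; in practice delta is calibrated, for example so that expectations are unbiased over the sample.

```python
# Carlson-Parkin (1975) style quantification of qualitative survey answers.
# Shares and delta are illustrative, not taken from the thesis's data.
from scipy.stats import norm

def carlson_parkin(up_share, down_share, delta=0.5):
    """Mean and dispersion of expected inflation implied by response shares,
    under normality and an indifference band (-delta, +delta)."""
    z_up = norm.ppf(1.0 - up_share)    # equals (delta - mu) / sigma
    z_down = norm.ppf(down_share)      # equals (-delta - mu) / sigma
    sigma = 2.0 * delta / (z_up - z_down)
    mu = -delta * (z_up + z_down) / (z_up - z_down)
    return mu, sigma

# Example: 55% expect prices to rise, 10% expect them to fall.
mu, sigma = carlson_parkin(0.55, 0.10)
print(f"implied mean expectation: {mu:.2f}%, dispersion: {sigma:.2f}")
```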
APA, Harvard, Vancouver, ISO, and other styles