Dissertations / Theses on the topic 'Economic forecasting Australia Econometric models'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 36 dissertations / theses for your research on the topic 'Economic forecasting Australia Econometric models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Billah, Baki 1965. "Model selection for time series forecasting models." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/8840.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jeon, Yongil. "Four essays on forecasting evaluation and econometric estimation /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9949690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Azam, Mohammad Nurul 1957. "Modelling and forecasting in the presence of structural change in the linear regression model." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lazim, Mohamad Alias. "Econometric forecasting models and model evaluation : a case study of air passenger traffic flow." Thesis, Lancaster University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Enzinger, Sharn Emma 1973. "The economic impact of greenhouse policy upon the Australian electricity industry : an applied general equilibrium analysis." Monash University, Centre of Policy Studies, 2001. http://arrow.monash.edu.au/hdl/1959.1/8383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kummerow, Max F. "A paradigm of inquiry for applied real estate research : integrating econometric and simulation methods in time and space specific forecasting models : Australian office market case study." Thesis, Curtin University, 1997. http://hdl.handle.net/20.500.11937/1574.

Full text
Abstract:
Office space oversupply cost Australia billions of dollars during the 1990-92 recession. Australia, the United States, Japan, the U.K., South Africa, China, Thailand, and many other countries have suffered office oversupply cycles. Illiquid untenanted office buildings impair investors' capital and cash flows, with adverse effects on macroeconomies, financial institutions, and individuals. This study aims to develop improved methods for medium-term forecasting of office market adjustments to inform individual project development decisions and thereby to mitigate office oversupply cycles. Methods combine qualitative research, econometric estimation, system dynamics simulation, and institutional economics. This research operationalises a problem-solving research paradigm concept advocated by Ken Lusht. The research is also indebted to the late James Graaskamp, who was successful in linking industry and academic research through time and space specific feasibility studies to inform individual property development decisions. Qualitative research and literature provided a list of contributing causes of office oversupply, including random shocks, faulty forecasting methods, fee-driven deals, the prisoners' dilemma game, system dynamics (lags and adjustment times), land use regulation, and capital market issues. Rather than choosing among these, they are all considered to be causal to varying degrees. Moreover, there is synergy between combinations of these market imperfections. Office markets are complex evolving human-designed systems (not time invariant), so each cycle has unique historical features. Data on Australian office markets were used to estimate office rent adjustment equations. Simulation models in spreadsheet and system dynamics software then integrate additional information with the statistical results to produce demand, supply, and rent forecasts. Results include models for rent forecasting and models for analysis related to policy and system redesign. The dissertation ends with two chapters on institutional reforms whereby better information might find application to improve market efficiency. Keywords: office rents, rent adjustment, office market modelling, forecasting, system dynamics.
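The abstract above mentions estimating office rent adjustment equations. As a rough illustration of what such an equation can look like (my own simplified specification, not the one estimated in the thesis), the sketch below regresses real rent growth on the lagged vacancy gap; the column names and the natural vacancy rate are hypothetical.

```python
# Minimal sketch of a rent adjustment regression (illustrative, not the thesis's model):
# d ln(rent_t) = a + b * (vacancy_{t-1} - natural_vacancy) + e_t
import numpy as np
import pandas as pd
import statsmodels.api as sm

def estimate_rent_adjustment(df: pd.DataFrame, natural_vacancy: float = 0.08):
    """df needs columns 'rent' (real rent index) and 'vacancy' (vacancy rate)."""
    rent_growth = np.log(df["rent"]).diff()                    # d ln(rent_t)
    vacancy_gap = df["vacancy"].shift(1) - natural_vacancy     # V_{t-1} - V*
    data = pd.DataFrame({"rent_growth": rent_growth,
                         "vacancy_gap": vacancy_gap}).dropna()
    model = sm.OLS(data["rent_growth"], sm.add_constant(data["vacancy_gap"])).fit()
    return model  # expect a negative slope: excess vacancy depresses rent growth

# Usage (hypothetical data): model = estimate_rent_adjustment(office_data); print(model.summary())
```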
APA, Harvard, Vancouver, ISO, and other styles
7

Marshall, Peter John 1960. "Rational versus anchored traders : exchange rate behaviour in macro models." Monash University, Dept. of Economics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Steinbach, Max Rudibert. "Essays on dynamic macroeconomics." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86196.

Full text
Abstract:
Thesis (PhD)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: In the first essay of this thesis, a medium scale DSGE model is developed and estimated for the South African economy. When used for forecasting, the model is found to outperform private sector economists in predicting CPI inflation, GDP growth and the policy rate over certain horizons. In the second essay, the benchmark DSGE model is extended to include the yield on South African 10-year government bonds. The model is then used to decompose the 10-year yield spread into (1) the structural shocks that contributed to its evolution during the inflation targeting regime of the South African Reserve Bank, as well as (2) an expected yield and a term premium. In addition, it is found that changes in the South African term premium may predict future real economic activity. Finally, the need for DSGE models to take account of financial frictions became apparent during the recent global financial crisis. As a result, the final essay incorporates a stylised banking sector into the benchmark DSGE model described above. The optimal response of the South African Reserve Bank to financial shocks is then analysed within the context of this structural model.
APA, Harvard, Vancouver, ISO, and other styles
9

Silvestrini, Andrea. "Essays on aggregation and cointegration of econometric models." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210304.

Full text
Abstract:
This dissertation can be broadly divided into two independent parts. The first three chapters analyse issues related to temporal and contemporaneous aggregation of econometric models. The fourth chapter contains an application of Bayesian techniques to investigate whether the post-transition fiscal policy of Poland is sustainable in the long run and consistent with an intertemporal budget constraint.

Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models.

A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter and the main results are summarized. In each case, assuming the underlying process at the disaggregate frequency is known, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples are presented. Systematic sampling schemes are also reviewed.

Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002, Marcellino, 1999, Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation. On the other hand, as pointed out by Marcellino (1999), other important time series features, such as cointegration and the presence of unit roots, are invariant to temporal aggregation and are not induced by it.

Some empirical applications based on macroeconomic and financial data illustrate all the techniques surveyed and the main results.

Chapter 2 is an attempt to monitor fiscal variables in the Euro area, building an early warning signal indicator for assessing the development of public finances in the short run and exploiting the existence of monthly budgetary statistics from France, taken as an "example country".

The application is conducted focusing on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, as the policy makers are interested in yearly predictions.

The short-run forecasting exercises carried out for years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modeling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, by the middle of the year, very accurate predictions for the current year are made available.

The proposed method could be extremely useful, providing policy makers with a valuable indicator when assessing the development of public finances in the short run (a one-year horizon or even less).
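As an illustration of the temporal-aggregation idea described above (a minimal sketch with a hypothetical series and ARIMA order, not the chapter's actual specifications), one can fit a monthly ARIMA model to a budget flow component and obtain a year-total prediction by adding the forecasts for the remaining months to the months already observed:

```python
# Minimal sketch: forecast the remaining months of the current year with a
# monthly ARIMA and aggregate to an annual total (flow variable).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def year_total_forecast(monthly: pd.Series, months_observed: int, order=(1, 1, 1)) -> float:
    """monthly: flow series ending at the last observed month of the current year;
    months_observed: months of the current year already available (assumed < 12)."""
    fit = ARIMA(monthly, order=order).fit()
    h = 12 - months_observed                       # months still to forecast
    forecast = fit.forecast(steps=h)
    observed_part = monthly.iloc[-months_observed:].sum()
    return observed_part + forecast.sum()          # annual total = observed + predicted months
```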

Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared. An aggregate predictor is built by forecasting directly the aggregate process, as it results from contemporaneous aggregation of the data generating vector process. A disaggregate predictor is a predictor obtained from aggregation of univariate forecasts for the individual components of the data generating vector process.

The econometric framework is broadly based on Lütkepohl (1987). The necessary and sufficient condition for the equality of mean squared errors associated with the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition of equality of predictors as stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is sufficient (but not necessary) for the equality of mean squared errors.

Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved using specific assumptions on the parameters of the VMA(1) structure.

Finally, an empirical application that involves the problem of forecasting the Italian monetary aggregate M1 on the basis of annual time series ranging from 1948 until 1998, prior to the creation of the European Economic and Monetary Union (EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations.
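A minimal sketch of the two competing predictors discussed in this chapter, using simple univariate ARIMA models as stand-ins (the chapter works with VARMA processes and analytical mean squared error comparisons; this is only illustrative):

```python
# "Aggregate" predictor: model the contemporaneous aggregate directly.
# "Disaggregate" predictor: sum univariate forecasts of the components.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def aggregate_predictor(components: pd.DataFrame, order=(1, 0, 1)) -> float:
    y = components.sum(axis=1)                      # contemporaneous aggregate
    return ARIMA(y, order=order).fit().forecast(1).iloc[0]

def disaggregate_predictor(components: pd.DataFrame, order=(1, 0, 1)) -> float:
    forecasts = [ARIMA(components[c], order=order).fit().forecast(1).iloc[0]
                 for c in components.columns]
    return sum(forecasts)                           # aggregate the component forecasts
```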

Chapter 4 deals with a cointegration analysis applied to the empirical investigation of fiscal sustainability. The focus is on a particular country: Poland. The choice of Poland is not random. First, the motivation stems from the fact that fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, this is one of the first countries to start the transition process to a market economy (since 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001). The emphasis is on the feasibility of a permanent deficit in the long-run, meaning whether a government can continue to operate under its current fiscal policy indefinitely.

The empirical analysis to examine debt stabilization is made up of two steps.

First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and (inclusive of interest) expenditures and to select the cointegrating rank. This task is complicated by the conceptual difficulty linked to the choice of the prior distributions for the parameters relevant to the economic problem under study (Villani, 2005).

Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999).

The priors used in the paper lead to straightforward posterior calculations which can be easily performed.

Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods.
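The chapter's inference is Bayesian; as a rough classical counterpart (my own illustration, not the method used in the thesis), a sustainability check of this kind can be sketched with an Engle-Granger cointegration test between revenues and interest-inclusive expenditures:

```python
# Classical (non-Bayesian) sketch: fiscal sustainability is usually read off
# cointegration between revenues and expenditures and a slope close to one.
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

def sustainability_check(revenues, expenditures):
    """revenues, expenditures: pandas Series of budget revenues and
    interest-inclusive expenditures (hypothetical data)."""
    t_stat, p_value, _ = coint(revenues, expenditures)          # H0: no cointegration
    slope = sm.OLS(revenues, sm.add_constant(expenditures)).fit().params.iloc[1]
    return {"coint_pvalue": p_value, "slope": slope}
```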


Doctorat en Sciences économiques et de gestion

APA, Harvard, Vancouver, ISO, and other styles
10

Ben-Belhassen, Boubaker. "Econometric models of the Argentine cereal economy : a focus on policy simulation analysis /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

De, Antonio Liedo David. "Structural models for macroeconomics and forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210142.

Full text
Abstract:
This thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.

Chapter 1, entitled "A Model for Real-Time Data Assessment with an Application to GDP Growth Rates", provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. Thus, the model encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008). That is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the DPP presented here allows for the simultaneous presence of both noise and news. While the "regression approach" followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it is not possible for them to quantify it, as done by our model.

The second and third chapters acknowledge the possibility that macroeconomic data is measured with errors, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process that we describe in the first chapter.

Chapter 2, entitled "Revisiting the Success of the RBC model", proposes the use of dynamic factor models as an alternative to the VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful at forecasting than the strong behavioral assumptions imposed by the microfoundations in the model economy.

The last chapter, "What are Shocks Capturing in DSGE modeling", contributes to current debates on the use and interpretation of larger DSGE models. The recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005, Uhlig, 2004). In this chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which models explicitly the statistical agency along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification.
Doctorat en Sciences économiques et de gestion

APA, Harvard, Vancouver, ISO, and other styles
12

Kummerow, Max F. "A paradigm of inquiry for applied real estate research : integrating econometric and simulation methods in time and space specific forecasting models : Australian office market case study." Curtin University of Technology, School of Economics and Finance, 1997. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=11274.

Full text
Abstract:
Office space oversupply cost Australia billions of dollars during the 1990-92 recession. Australia, the United States, Japan, the U.K., South Africa, China, Thailand, and many other countries have suffered office oversupply cycles. Illiquid untenanted office buildings impair investors' capital and cash flows, with adverse effects on macroeconomies, financial institutions, and individuals. This study aims to develop improved methods for medium-term forecasting of office market adjustments to inform individual project development decisions and thereby to mitigate office oversupply cycles. Methods combine qualitative research, econometric estimation, system dynamics simulation, and institutional economics. This research operationalises a problem-solving research paradigm concept advocated by Ken Lusht. The research is also indebted to the late James Graaskamp, who was successful in linking industry and academic research through time and space specific feasibility studies to inform individual property development decisions. Qualitative research and literature provided a list of contributing causes of office oversupply, including random shocks, faulty forecasting methods, fee-driven deals, the prisoners' dilemma game, system dynamics (lags and adjustment times), land use regulation, and capital market issues. Rather than choosing among these, they are all considered to be causal to varying degrees. Moreover, there is synergy between combinations of these market imperfections. Office markets are complex evolving human-designed systems (not time invariant), so each cycle has unique historical features. Data on Australian office markets were used to estimate office rent adjustment equations. Simulation models in spreadsheet and system dynamics software then integrate additional information with the statistical results to produce demand, supply, and rent forecasts. Results include models for rent forecasting and models for analysis related to policy and system redesign. The dissertation ends with two chapters on institutional reforms whereby better information might find application to improve market efficiency. Keywords: office rents, rent adjustment, office market modelling, forecasting, system dynamics.
APA, Harvard, Vancouver, ISO, and other styles
13

Feng, Ning. "Essays on business cycles and macroeconomic forecasting." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/279.

Full text
Abstract:
This dissertation consists of two essays. The first essay focuses on developing a quantitative theory for a small open economy dynamic stochastic general equilibrium (DSGE) model with a housing sector, allowing for both contemporaneous and news shocks. The second essay is an empirical study of macroeconomic forecasting using both structural and non-structural models. In the first essay, we develop a DSGE model with a housing sector, which incorporates both contemporaneous and news shocks to domestic and external fundamentals, to explore which shocks to economic fundamentals matter for driving housing market dynamics in a small open economy, and to what extent. The model is estimated by the Bayesian method, using data from Hong Kong. The quantitative results show that external shocks and news shocks play a significant role in this market. The contemporaneous shock to foreign housing preference, the contemporaneous shock to terms of trade, and news shocks to technology in the consumption goods sector each explain one-third of the variance of housing prices. The contemporaneous terms-of-trade shock and consumption-technology news shocks also contribute 36% and 59%, respectively, to the variance of housing investment. The simulation results enable policy makers to identify the key driving forces behind housing market dynamics and the interaction between the housing market and the macroeconomy in Hong Kong. In the second essay, we compare the forecasting performance of structural and non-structural models for a small open economy. The structural model refers to the small open economy DSGE model with a housing sector from the first essay. In addition, we examine various non-structural models, including both Bayesian and classical time-series methods, in our forecasting exercises. We also include information from a large-scale quarterly data series in some models, using two approaches to capture the influence of fundamentals: extracting common factors by principal component analysis in a dynamic factor model (DFM), a factor-augmented vector autoregression (FAVAR), and a Bayesian FAVAR (BFAVAR); or Bayesian shrinkage in a large-scale vector autoregression (BVAR). In this study, we forecast five key macroeconomic variables, namely output, consumption, employment, housing price inflation, and CPI-based inflation, using quarterly data. The results, based on the mean absolute error (MAE) and root mean squared error (RMSE) of one- to eight-quarter-ahead out-of-sample forecasts, indicate that the non-structural models outperform the structural model for all variables of interest across all horizons. Among the non-structural models, the small-scale BVAR performs better at short forecasting horizons, although the DFM shows a similar predictive ability. As the forecasting horizon grows, the DFM tends to improve over other models and is better suited to forecasting key macroeconomic variables at longer horizons.
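For readers unfamiliar with the evaluation criteria mentioned above, a minimal sketch of the MAE and RMSE calculations for h-step-ahead out-of-sample forecasts (hypothetical inputs, not the thesis's data) looks like this:

```python
# Mean absolute error and root mean squared error of out-of-sample forecasts.
import numpy as np

def mae_rmse(actual: np.ndarray, forecast: np.ndarray) -> tuple[float, float]:
    errors = np.asarray(actual, float) - np.asarray(forecast, float)
    return float(np.mean(np.abs(errors))), float(np.sqrt(np.mean(errors ** 2)))

# Usage (hypothetical): one score pair per horizon h = 1..8 for a given variable
# scores = {h: mae_rmse(actuals_by_horizon[h], forecasts_by_horizon[h]) for h in range(1, 9)}
```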
APA, Harvard, Vancouver, ISO, and other styles
14

Yang, Wenling. "M-GARCH Hedge Ratios And Hedging Effectiveness In Australian Futures Markets." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2000. https://ro.ecu.edu.au/theses/1530.

Full text
Abstract:
This study deals with the estimation of optimal hedge ratios using various econometric models. Most recent papers have demonstrated that the conventional ordinary least squares (OLS) method of estimating constant hedge ratios is inappropriate; other, more complicated models, however, seem to produce no more efficient hedge ratios. Using daily AOIs and SPI futures on the Australian market, optimal hedge ratios are calculated from four different models: the OLS regression model, the bivariate vector autoregressive model (BVAR), the error-correction model (ECM) and the multivariate diagonal Vech GARCH model. The performance of each hedge ratio is then compared. The hedging effectiveness is measured in terms of the ex-post and ex-ante risk-return trade-off at various forecasting horizons. It is generally found that the GARCH time-varying hedge ratios provide the greatest portfolio risk reduction, particularly for longer hedging horizons, but they do not generate the highest portfolio return.
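A minimal sketch of the conventional OLS hedge ratio the abstract refers to (an illustration, not the thesis's code): the slope of spot returns on futures returns is the constant minimum-variance hedge ratio, while the GARCH alternative replaces the unconditional moments with conditional ones.

```python
# Constant minimum-variance hedge ratio via OLS: h* = Cov(spot, futures) / Var(futures).
import numpy as np
import statsmodels.api as sm

def ols_hedge_ratio(spot_prices, futures_prices):
    spot_ret = np.diff(np.log(spot_prices))
    fut_ret = np.diff(np.log(futures_prices))
    res = sm.OLS(spot_ret, sm.add_constant(fut_ret)).fit()
    return res.params[1]   # slope = constant hedge ratio

# A GARCH-based alternative would instead set h_t = cov_t(spot, futures) / var_t(futures)
# from an estimated bivariate GARCH model, giving a different (time-varying) ratio each day.
```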
APA, Harvard, Vancouver, ISO, and other styles
15

Berger, Loïc. "Essays on the economics of risk and uncertainty." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209676.

Full text
Abstract:
In the first chapter of this thesis, I use the smooth ambiguity model developed by Klibanoff, Marinacci, and Mukerji (2005) to define the concepts of ambiguity and uncertainty premia in a way analogous to what Pratt (1964) did in the risk theory literature. I show that these concepts may be useful to quantify the effect ambiguity has on the welfare of economic agents. I also define several other concepts such as the unambiguous probability equivalent or the ambiguous utility premium, provide local approximations of these different premia and show the link that exists between them when comparing different degrees of ambiguity aversion not only in the small, but also in the large.

In the second chapter, I analyze the effect of ambiguity on self-insurance and self-protection, which are tools used to deal with the uncertainty of facing a monetary loss when market insurance is not available (in the self-insurance model, the decision maker has the opportunity to exert an effort to reduce the size of the loss occurring in the bad state of the world, while in the self-protection – or prevention – model, the effort reduces the probability of being in the bad state).

In a short note, in the context of a two-period model I first examine the links between risk-aversion, prudence and self-insurance/self-protection activities under risk. Contrary to the results obtained in the static one-period model, I show that the impacts of prudence and of risk-aversion go in the same direction and generate a higher level of prevention in the more usual situations. I also show that the results concerning self-insurance in a single period framework may be easily extended to a two-period context.

I then consider two-period self-insurance and self-protection models in the presence of ambiguity and analyze the effect of ambiguity aversion. I show that in most common situations, ambiguity prudence is a sufficient condition to observe an increase in the level of effort. I propose an interpretation of the model in the context of climate change, so that self-insurance and self-protection are respectively seen as adaptation and mitigation efforts a policy-maker should provide to deal with an uncertain catastrophic event, and interpret the results obtained as an expression of the Precautionary Principle.

In the third chapter, I introduce the economic theory developed to deal with ambiguity in the context of medical decision-making. I show that, under diagnostic uncertainty, an increase in ambiguity aversion always leads a physician whose goal is to act in the best interest of his patient, to choose a higher level of treatment. In the context of a dichotomic choice (treatment versus no treatment), this result implies that taking into account the attitude agents generally manifest towards ambiguity may induce a physician to change his decision by opting for treatment more often. I further show that under therapeutic uncertainty, the opposite happens, i.e. an ambiguity averse physician may eventually choose not to treat a patient who would have been treated under ambiguity neutrality.
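For intuition about the smooth ambiguity framework used in the first chapter, the sketch below computes a certainty equivalent and a total uncertainty premium under Klibanoff-Marinacci-Mukerji-style preferences with log utility and an exponential second-order function. This is my own simplified numerical illustration, not the chapter's formal definitions.

```python
# Illustrative sketch of smooth ambiguity preferences:
#   V = sum_k mu_k * phi(E_k[u(w)]),  u(w) = ln(w),  phi(x) = -exp(-alpha * x) / alpha
# The certainty equivalent solves phi(u(CE)) = V, and the premium is E[w] - CE.
import numpy as np

def uncertainty_premium(wealth, model_probs, mu, alpha=2.0):
    """wealth: possible wealth levels; model_probs: one probability vector per model;
    mu: second-order prior over models; alpha: ambiguity aversion (all hypothetical)."""
    wealth, mu = np.asarray(wealth, float), np.asarray(mu, float)
    eu = np.array([np.asarray(p, float) @ np.log(wealth) for p in model_probs])  # E_k[u(w)]
    value = mu @ (-np.exp(-alpha * eu) / alpha)                 # smooth ambiguity value
    ce = np.exp(-np.log(-alpha * value) / alpha)                # u^{-1}(phi^{-1}(value))
    expected_wealth = mu @ np.array([np.asarray(p, float) @ wealth for p in model_probs])
    return expected_wealth - ce

# Example: a 50/150 gamble whose bad-state probability is ambiguous (0.3 or 0.7)
# print(uncertainty_premium([50, 150], [[0.3, 0.7], [0.7, 0.3]], [0.5, 0.5]))
```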


Doctorat en Sciences économiques et de gestion

APA, Harvard, Vancouver, ISO, and other styles
16

D'Agostino, Antonello. "Understanding co-movements in macro and financial variables." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210597.

Full text
Abstract:
Over the last years, the growing availability of large datasets and the improvements in the computational speed of computers have further fostered research in the fields of both macroeconomic modeling and forecasting analysis. A primary focus of these research areas is to improve model performance by exploiting the informational content of several time series. Increasing the dimension of macro models is indeed crucial for a detailed structural understanding of the economic environment, as well as for an accurate forecasting analysis. As a consequence, a new generation of large-scale macro models, based on the micro-foundations of a fully specified dynamic stochastic general equilibrium set-up, has become one of the most flourishing research areas, both in central banks and academia. At the same time, there has been a revival of forecasting methods dealing with many predictors, such as factor models. The central idea of factor models is to exploit co-movements among variables through a parsimonious econometric structure. Few underlying common shocks or factors explain most of the co-variations among variables. The unexplained component of series movements is, on the other hand, due to pure idiosyncratic dynamics. The generality of their framework allows factor models to be suitable for describing a broad variety of models in a macroeconomic and a financial context. The revival of factor models over recent years comes from important developments achieved by Stock and Watson (2002) and Forni, Hallin, Lippi and Reichlin (2000). These authors find the conditions under which some data averages become collinear to the space spanned by the factors when the cross-section dimension becomes large. Moreover, their factor specifications allow the idiosyncratic dynamics to be mildly cross-correlated (an effect referred to as the 'approximate factor structure' by Chamberlain and Rothschild, 1983), a situation empirically verified in many applications. These findings have relevant implications, the most important being that the use of a large number of series no longer represents a dimensional constraint; on the contrary, it helps to identify the factor space. This new generation of factor models has been applied in several areas of macroeconomics and finance, as well as for policy evaluation. It is consequently very likely to become a milestone in the literature on forecasting methods using many predictors. This thesis contributes to the empirical literature on factor models by proposing four original applications.

In the first chapter of this thesis, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known. In the fundamental valuation of equity, the stock price is equal to the discounted future streams of expected dividends. Since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect, and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, could be misleading if used as a proxy for the informative content of the stock market as a whole. Despite the widespread wisdom of considering such an index as a leading variable, only part of the assets included in the composition of the index has a leading behaviour with respect to the variables of interest. Its forecasting performance might be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure. That is, they are the sum of two parts: a common part driven by few shocks common to all the assets, and an idiosyncratic part, which is rather asset specific. The correlation function, computed on the common part of the series, is not affected by the assets' specific dynamics and should provide information only on the series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content that such aggregates have in forecasting IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise is addressed in the following way: first, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) in 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, both for the IP growth rate and CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification. I repeat the simulation exercise and find that there are very small improvements in the MSFE statistics. Third, averages of stock return leading series, in the respective sector, are added as additional explanatory variables in the benchmark regression. Remarkable improvements are achieved with respect to the benchmark specification, especially for the one-year forecast horizon. Significant improvements are also achieved for the shorter forecast horizons when the leading series of the technology and energy sectors are used.

The second chapter of this thesis disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration with the advent of the EMU has been a central issue for investors and policy makers. During these years, the number of studies on the integration and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for establishing the developments in the economic and financial markets. Therefore, measuring the extent of co-movements between European stock markets has become, especially over the last years, one of the main concerns both for policy makers, who want to best shape their policy responses, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns. So far, literature dating back to Solnik (1974) identifies national factors as the main contributors to the co-variations among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration over the past years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased the importance of global determinants, such as industry determinants. However, somewhat puzzlingly, recent studies have demonstrated that country sources are still very important and generally more important than industry ones. This paper tries to cast some light on these conflicting results. The chapter proposes an econometric estimation strategy that is more flexible and better suited to disentangle and measure the impact of global and country factors. Results point to a declining influence of national determinants and to an increasing influence of industry ones. International influences remain the most important driving forces of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario.

The third chapter presents a new stylized fact which can be helpful for discriminating among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but also extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter. Furthermore, this empirical regularity is not simply specific to a single method; rather, it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation of the Fed's Green Book and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random walk forecasts and the predictions of those institutions is not rejected for all horizons, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output compared to several commercial providers. The results imply that the informational advantage of the Fed and those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model is better than "tossing a coin" beyond the first quarter horizon, thereby implying that on average uninformed economic agents can effectively anticipate future macroeconomic developments. On the other hand, econometric models and economists' judgement are quite helpful for forecasts over the very short horizon that is relevant for conjunctural analysis. Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output. The majority of studies, however, are based on full-sample periods. The main findings in the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985. Long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by a larger volatility of inflation and output. Results also suggest that some caution should be used in evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.

The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performances of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component. In particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors based on Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors. Specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain. This allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements are helpful for forecasting in a finite sample is, however, an empirical question. The literature has not yet reached a consensus. On the one hand, Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component and not on the dynamic restrictions), while Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The data set, borrowed from Stock and Watson (2002), consists of 146 monthly observations for the US economy and spans from 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure where the two non-parametric approaches are nested in a common framework is designed. In addition, for both versions of the factor model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast. Other non-core aspects of the model are also investigated: robustness with respect to the choice of the number of factors and to variable transformations. Finally, the chapter examines the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to the forecasting performance and for discussing auxiliary issues. Hopefully this may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), results show that efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts, but, in contrast to Boivin and Ng (2005), it is shown that the dynamic restrictions imposed by the procedure of Forni et al. (2005) are not harmful for predictability. The main conclusion is that the two methods have a similar performance and produce highly collinear forecasts.
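A minimal sketch of the Stock-Watson-style "diffusion index" forecast that serves as the SW benchmark above (static principal-component factors plus a direct h-step projection). The panel and target are hypothetical, and FHLR's generalized and dynamic refinements are omitted.

```python
# Static PC factors from a standardized panel, then a direct h-step regression
# of the target on the factors and its own lag.
import numpy as np

def diffusion_index_forecast(panel: np.ndarray, y: np.ndarray, h: int = 1, r: int = 3) -> float:
    """panel: T x N array of predictors; y: length-T target array; returns a forecast of y_{T+h}."""
    X = (panel - panel.mean(0)) / panel.std(0)                  # standardize each series
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    factors = X @ vt[:r].T                                      # T x r static PC factors
    Z = np.column_stack([np.ones(len(y) - h), factors[:-h], y[:-h]])   # regressors dated t
    beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)            # direct h-step projection
    z_T = np.concatenate(([1.0], factors[-1], [y[-1]]))
    return float(z_T @ beta)
```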


Doctorat en sciences économiques, Orientation économie

APA, Harvard, Vancouver, ISO, and other styles
17

Berger, Nicholas. "Modelling structural and policy changes in the world wine market into the 21st century." Title page, contents and abstract only, 2000. http://web4.library.adelaide.edu.au/theses/09ECM/09ecmb496.pdf.

Full text
Abstract:
Includes bibliographical references. Addresses the question of what an economic model of the world wine market suggests will happen to wine production, consumption, trade and prices in various regions in the early 21st century. A subsidiary issue is what difference would global or European regional wine liberalisation make to that outlook, according to such a model. Accompanying CD-ROM comprises spreadsheet written by Nick Berger, November 2000, for the Windows and Office97 versions of Excel; a seven region world wine model (WWM7) - base version projecting the world wine market 1996-2005 as a non-linear Armington model. System requirements for accompanying CD-ROM: IBM compatible computer ; Microsoft Excel 97 or later.
APA, Harvard, Vancouver, ISO, and other styles
18

Lenza, Michèle. "Essays on monetary policy, saving and investment." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210659.

Full text
Abstract:
This thesis addresses three relevant macroeconomic issues: (i) why

Central Banks behave so cautiously compared to optimal theoretical

benchmarks, (ii) do monetary variables add information about

future Euro Area inflation to a large amount of non monetary

variables and (iii) why national saving and investment are so

correlated in OECD countries in spite of the high degree of

integration of international financial markets.

The process of innovation in the elaboration of economic theory

and statistical analysis of the data witnessed in the last thirty

years has greatly enriched the toolbox available to

macroeconomists. Two aspects of such a process are particularly

noteworthy for addressing the issues in this thesis: the

development of macroeconomic dynamic stochastic general

equilibrium models (see Woodford, 1999b for an historical

perspective) and of techniques that enable to handle large data

sets in a parsimonious and flexible manner (see Reichlin, 2002 for

an historical perspective).

Dynamic stochastic general equilibrium models (DSGE) provide the

appropriate tools to evaluate the macroeconomic consequences of

policy changes. These models, by exploiting modern intertemporal

general equilibrium theory, aggregate the optimal responses of

individual as consumers and firms in order to identify the

aggregate shocks and their propagation mechanisms by the

restrictions imposed by optimizing individual behavior. Such a

modelling strategy, uncovering economic relationships invariant to

a change in policy regimes, provides a framework to analyze the

effects of economic policy that is robust to the Lucas'critique

(see Lucas, 1976). The early attempts of explaining business

cycles by starting from microeconomic behavior suggested that

economic policy should play no role since business cycles

reflected the efficient response of economic agents to exogenous

sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982}

and, more recently, King and Rebelo, 1999). This view was challenged by

several empirical studies showing that the adjustment mechanisms

of variables at the heart of macroeconomic propagation mechanisms

like prices and wages are not well represented by efficient

responses of individual agents in frictionless economies (see, for

example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004 and Dhyne et al. 2004). Hence, macroeconomic models currently incorporate

some sources of nominal and real rigidities in the DSGE framework

and allow the study of the optimal policy reactions to inefficient

fluctuations stemming from frictions in macroeconomic propagation

mechanisms.

Against this background, the first chapter of this thesis sets up

a DSGE model in order to analyze optimal monetary policy in an

economy with sectorial heterogeneity in the frequency of price

adjustments. Price setters are divided in two groups: those

subject to Calvo type nominal rigidities and those able to change

their prices at each period. Sectorial heterogeneity in price

setting behavior is a relevant feature in real economies (see, for

example, Bils and Klenow, 2004 for the US and Dhyne, 2004 for the Euro

Area). Hence, neglecting it would lead to an understatement of the

heterogeneity in the transmission mechanisms of economy wide

shocks. In this framework, Aoki (2001) shows that a Central

Bank maximizing social welfare should stabilize only inflation in

the sector where prices are sticky (hereafter, core inflation).

Since complete stabilization is the only true objective of the

policymaker in Aoki (2001) and, hence, is not only desirable

but also implementable, the equilibrium real interest rate in the

economy is equal to the natural interest rate irrespective of the

degree of heterogeneity that is assumed. This would lead to

conclude that stabilizing core inflation rather than overall

inflation does not imply any observable difference in the

aggressiveness of the policy behavior. While maintaining the

assumption of sectorial heterogeneity in the frequency of price

adjustments, this chapter adds non negligible transaction

frictions to the model economy in Aoki (2001). As a

consequence, the social welfare maximizing monetary policymaker

faces a trade-off among the stabilization of core inflation,

economy wide output gap and the nominal interest rate. This

feature reflects the trade-offs between conflicting objectives

faced by actual policymakers. The chapter shows that the existence

of this trade-off makes the aggressiveness of the monetary policy

reaction dependent on the degree of sectorial heterogeneity in the

economy. In particular, in presence of sectorial heterogeneity in

price adjustments, Central Banks are much more likely to behave

less aggressively than in an economy where all firms face nominal

rigidities. Hence, the chapter concludes that the excessive

caution in the conduct of monetary policy shown by actual Central

Banks (see, for example, Rudebusch and Svennsson, 1999 and Sack, 2000) might not

represent a sub-optimal behavior but, on the contrary, might be

the optimal monetary policy response in presence of a relevant

sectorial dispersion in the frequency of price adjustments.

DSGE models are proving useful also in empirical applications and

recently efforts have been made to incorporate large amounts of

information in their framework (see Boivin and Giannoni, 2006). However, the

typical DSGE model still relies on a handful of variables. Partly,

this reflects the fact that, increasing the number of variables,

the specification of a plausible set of theoretical restrictions

identifying aggregate shocks and their propagation mechanisms

becomes cumbersome. On the other hand, several questions in

macroeconomics require the study of a large amount of variables.

Among others, two examples related to the second and third chapter

of this thesis can help to understand why. First, policymakers

analyze a large quantity of information to assess the current and

future stance of their economies and, because of model

uncertainty, do not rely on a single modelling framework.

Consequently, macroeconomic policy can be better understood if the

econometrician relies on large set of variables without imposing

too much a priori structure on the relationships governing their

evolution (see, for example, Giannone et al. 2004 and Bernanke et al. 2005).

Moreover, the process of integration of good and financial markets

implies that the source of aggregate shocks is increasingly global

requiring, in turn, the study of their propagation through cross

country links (see, among others, Forni and Reichlin, 2001 and Kose et al. 2003). A

priori, country specific behavior cannot be ruled out and many of

the homogeneity assumptions that are typically embodied in open

macroeconomic models for keeping them tractable are rejected by

the data. Summing up, in order to deal with such issues, we need

modelling frameworks able to treat a large amount of variables in

a flexible manner, i.e. without pre-committing on too many

a-priori restrictions more likely to be rejected by the data. The

large extent of comovement among wide cross sections of economic

variables suggests the existence of few common sources of

fluctuations (Forni et al. 2000 and Stock and Watson, 2002) around which

individual variables may display specific features: a shock to the

world price of oil, for example, hits oil exporters and importers

with different sign and intensity or global technological advances

can affect some countries before others (Giannone and Reichlin, 2004). Factor

models mainly rely on the identification assumption that the

dynamics of each variable can be decomposed into two orthogonal

components - common and idiosyncratic - and provide a parsimonious

tool allowing the analysis of the aggregate shocks and their

propagation mechanisms in a large cross section of variables. In

fact, while the idiosyncratic components are poorly

cross-sectionally correlated, driven by shocks specific of a

variable or a group of variables or measurement error, the common components capture the bulk of cross-sectional correlation and are driven by a few shocks that affect, through variable-specific factor loadings, all items in a panel of economic time series. Focusing on the latter components yields useful insights into the identity and propagation mechanisms of the aggregate shocks underlying a large number of variables. The second and third chapters of this thesis exploit this idea.

The second chapter deals with the question of whether monetary variables help to forecast inflation in the Euro Area harmonized index of consumer prices (HICP). Policymakers form their views on the economic outlook by drawing on large amounts of potentially relevant information. Indeed, the monetary policy strategy of the European Central Bank acknowledges that many variables and models can be informative about future Euro Area inflation. A peculiarity of this strategy is that it assigns to monetary information the role of providing insights into the medium-to-long-term evolution of prices, while a wide range of alternative non-monetary variables and models are employed to form a view on the short term and to cross-check the inference based on monetary information. However, neither the academic literature nor the practice of leading central banks other than the ECB assigns such a special role to monetary variables (see Gali et al. 2004 and references therein). Hence, the debate on whether money really provides relevant information for the inflation outlook in the Euro Area is still open. Specifically, this chapter addresses the question of whether money provides useful information about future inflation beyond what is contained in a large set of non-monetary variables. It shows that a few aggregates of the data explain a large share of the fluctuations in a large cross-section of Euro Area variables. This makes it possible to postulate a factor structure for the large panel of variables at hand and to aggregate it into a few synthetic indexes that still retain the salient features of the large cross-section. The database is split into two large blocks of variables: non-monetary (baseline) and monetary variables. Results show that the baseline variables provide a satisfactory predictive performance, improving on the best univariate benchmarks over the period 1997-2005 at all horizons between 6 and 36 months. Remarkably, monetary variables provide an appreciable improvement on the performance of the baseline variables at horizons above two years. However, the analysis of the evolution of the forecast errors reveals that most of the gains over the univariate benchmarks of non-forecastability, with both baseline and monetary variables, are realized in the first part of the prediction sample, up to the end of 2002, which casts doubt on the current forecastability of inflation in the Euro Area.

The third chapter is based on joint work with Domenico Giannone and gives empirical foundation to the general equilibrium explanation of the Feldstein-Horioka puzzle. Feldstein and Horioka (1980) found that domestic saving and investment in OECD countries strongly comove, contrary to the idea that high capital mobility should allow countries to seek the highest returns in global financial markets and, hence, imply a correlation between national saving and investment closer to zero than to one. Moreover, capital mobility has increased strongly since the publication of Feldstein and Horioka's seminal paper, while the association between saving and investment does not seem to have decreased comparably. Through general equilibrium mechanisms, the presence of global shocks might rationalize the correlation between saving and investment. In fact, global shocks, affecting all countries, tend to create imbalances in global capital markets, causing offsetting movements in the global interest rate, and can generate the observed correlation across national saving and investment rates. However, previous empirical studies (see Ventura, 2003) that have controlled for the effects of global shocks in the context of saving-investment regressions failed to give empirical foundation to this explanation. We show that previous studies have neglected the fact that global shocks may propagate heterogeneously across countries, failing to properly isolate the components of saving and investment that are affected by non-pervasive shocks. We propose a novel factor-augmented panel regression methodology that allows us to isolate idiosyncratic sources of fluctuations under the assumption of heterogeneous transmission mechanisms of global shocks. Remarkably, when our methodology is applied, the association between domestic saving and investment decreases considerably over time, consistent with the observed increase in international capital mobility. In particular, in the last 25 years the correlation between saving and investment disappears.
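
As a rough illustration of the factor ideas used in both chapters, the sketch below extracts principal-component factors from a standardized panel and uses them as synthetic indexes in an h-step-ahead forecasting regression. The data, horizon and number of factors are placeholders; this is not the estimator implemented in the thesis.

```python
import numpy as np

def factor_forecast(X, y, h=12, n_factors=3):
    """Illustrative diffusion-index forecast: y_{t+h} regressed on PCA factors of X_t.
    X: (T, N) panel of predictors, y: (T,) target (e.g. HICP inflation)."""
    T, N = X.shape
    Z = (X - X.mean(0)) / X.std(0)            # standardize the panel
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    F = U[:, :n_factors] * S[:n_factors]      # estimated common factors
    R = np.column_stack([np.ones(T - h), F[:-h]])   # regress y_{t+h} on factors dated t
    beta, *_ = np.linalg.lstsq(R, y[h:], rcond=None)
    return np.r_[1.0, F[-1]] @ beta           # forecast from the last factor observation

# toy usage with simulated data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = 0.5 * X[:, 0] + rng.standard_normal(200)
print(factor_forecast(X, y, h=12))
```
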


Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
19

Bañbura, Marta. "Essays in dynamic macroeconometrics." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210294.

Full text
Abstract:
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses a challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of the U.S GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP” is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how it is related to their timeliness.

The soft data include surveys and financial indicators and reflect market expectations. They are usually promptly available. In contrast, the hard indicators on real activity measure directly certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures in order to assess the role of individual or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data” is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge”, but can include e.g. mixed frequency or short history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only since recently. We adopt the maximum likelihood approach which, apart from the flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied for small factor models by e.g. Geweke (1977), Sargent and Sims (1977) or Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model, in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of the euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and short history monthly series like the Purchasing Managers' surveys on the forecast.
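
To make the idea concrete, the sketch below runs an iterative, EM-style imputation of a panel with missing entries around a principal-component factor fit. It is a simplified illustration of the general principle on simulated data, not the maximum likelihood estimator developed in the chapter.

```python
import numpy as np

def em_pca_impute(X, n_factors=2, n_iter=50):
    """Iteratively impute missing entries of a panel using a low-rank factor fit.
    X: (T, N) array with np.nan marking missing observations."""
    mask = np.isnan(X)
    Z = np.where(mask, np.nanmean(X, axis=0), X)   # initialise gaps with column means
    for _ in range(n_iter):
        mu, sd = Z.mean(0), Z.std(0)
        Zs = (Z - mu) / sd
        U, S, Vt = np.linalg.svd(Zs, full_matrices=False)
        fit = (U[:, :n_factors] * S[:n_factors]) @ Vt[:n_factors]  # common component
        Z[mask] = (fit * sd + mu)[mask]            # update only the missing entries
    return Z

# toy usage: a panel with a "ragged edge" of missing recent observations
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
X[-3:, 5:] = np.nan
print(em_pca_impute(X)[-3:, 5:])
```
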

The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs) which have the advantage over factor models in that they allow structural analysis in a natural way. We consider systems including more than 100 variables. This is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can be also relevant for the structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation by De Mol, Giannone and Reichlin (2008) who study the case of Bayesian regressions. They show that with increasing size of the model one should shrink more to avoid overfitting, but when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.
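
The "shrink more as the system grows" principle can be caricatured with a ridge-penalised VAR(1) whose penalty scales with the number of variables. This is only a stand-in for the actual Litterman-prior implementation, and `lam_per_var` is an invented tuning constant.

```python
import numpy as np

def shrinkage_var1(Y, lam_per_var=0.5):
    """Ridge-penalised VAR(1) in which the penalty grows with the number of variables,
    echoing the 'shrink more in larger systems' principle. Y: (T, N) data matrix."""
    T, N = Y.shape
    X, Z = Y[:-1], Y[1:]                      # regressors and left-hand side
    lam = lam_per_var * N                     # tightness increases with model size
    B = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Z)
    return B                                  # (N, N) matrix of VAR(1) coefficients

rng = np.random.default_rng(2)
Y = rng.standard_normal((300, 20))
print(shrinkage_var1(Y).shape)
```
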

The fourth chapter entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales” proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the on-going debate whether money provides a reliable signal for the future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) or Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies, however their analysis is performed in-sample. In this Chapter, it is investigated empirically which frequency bands and for which variables are the most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract different frequency components from a series a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application in the multivariate out-of-sample forecast is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
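
A minimal sketch of the scale-decomposition step is given below, using the PyWavelets package as an assumed tool (the chapter does not prescribe software). Each reconstructed band could then enter a forecasting regression separately.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

rng = np.random.default_rng(3)
money_growth = rng.standard_normal(256).cumsum() * 0.1   # placeholder series

# multilevel discrete wavelet decomposition: one smooth band plus several detail bands
coeffs = pywt.wavedec(money_growth, wavelet="db4", level=4)

# reconstruct the contribution of each frequency band to the original series
bands = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    bands.append(pywt.waverec(kept, wavelet="db4")[: len(money_growth)])

# each element of `bands` is a scale-specific component usable as a regressor
print(len(bands), bands[0].shape)
```
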


Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
20

Duong, Lien Thi Hong. "Australian takeover waves : a re-examination of patterns, causes and consequences." UWA Business School, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0201.

Full text
Abstract:
This thesis provides more precise characterisation of patterns, causes and consequences of takeover activity in Australia over three decades spanning from 1972 to 2004. The first contribution of the thesis is to characterise the time series behaviour of takeover activity. It is found that linear models do not adequately capture the structure of merger activity; a non-linear two-state Markov switching model works better. A key contribution of the thesis is, therefore, to propose an approach of combining a State-Space model with the Markov switching regime model in describing takeover activity. Experimental results based on our approach show an improvement over other existing approaches. We find four waves, one in the 1980s, two in the 1990s, and one in the 2000s, with an expected duration of each wave state of approximately two years. The second contribution is an investigation of the extent to which financial and macro-economic factors predict takeover activity after controlling for the probability of takeover waves. A main finding is that while stock market boom periods are empirically associated with takeover waves, the underlying driver is interest rate level. A low interest rate environment is associated with higher aggregate takeover activity. This relationship is consistent with Shleifer and Vishny (1992)'s liquidity argument that takeover waves are symptoms of lower cost of capital. Replicating the analysis to the biggest takeover market in the world, the US, reveals a remarkable consistency of results. In short, the Australian findings are not idiosyncratic. Finally, the implications for target and bidder firm shareholders are explored via investigation of takeover bid premiums and long-term abnormal returns separately between the wave and non-wave periods. This represents the third contribution to the literature of takeover waves. Findings reveal that target shareholders earn abnormally positive returns in takeover bids and bid premiums are slightly lower in the wave periods. Analysis of the returns to bidding firm shareholders suggests that the lower premiums earned by target shareholders in the wave periods may simply reflect lower total economic gains, at the margin, to takeovers made in the wave periods. It is found that bidding firms earn normal post-takeover returns (relative to a portfolio of firms matched in size and survival) if their bids are made in the non-wave periods. However, bidders who announce their takeover bids during the wave periods exhibit significant under-performance. For mergers that took place within waves, there is no difference in bid premiums and nor is there a difference in the long-run returns of bidders involved in the first half and second half of the waves. We find that none of theories of merger waves (managerial, mis-valuation and neoclassical) can fully account for the Australian takeover waves and their effects. Instead, our results suggest that a combination of these theories may provide better explanation. Given that normal returns are observed for acquiring firms, taken as a whole, we are more likely to uphold the neoclassical argument for merger activity. However, the evidence is not entirely consistent with neo-classical rational models, the under-performance effect during the wave states is consistent with the herding behaviour by firms.
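
A two-state Markov switching model of the kind used to date wave and non-wave regimes can be sketched with statsmodels on simulated data; the State-Space extension proposed in the thesis is not reproduced here, and the series is an invented stand-in for aggregate takeover activity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
# placeholder series: a low-activity regime with occasional high-activity bursts
activity = np.concatenate([rng.normal(5, 1, 120), rng.normal(15, 2, 30),
                           rng.normal(5, 1, 120), rng.normal(15, 2, 30)])

# two-regime Markov switching model in the mean and variance of activity
mod = sm.tsa.MarkovRegression(activity, k_regimes=2, switching_variance=True)
res = mod.fit()

print(res.summary())
print(res.expected_durations)   # expected duration of each regime, in periods
```
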
APA, Harvard, Vancouver, ISO, and other styles
21

Verikios, George. "Understanding the world wool market : trade, productivity and grower incomes." University of Western Australia. School of Economics and Commerce, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0064.

Full text
Abstract:
[Truncated abstract] The core objective of this thesis is summarised by its title: “Understanding the World Wool Market: Trade, Productivity and Grower Incomes”. Thus, we wish to aid understanding of the economic mechanisms by which the world wool market operates. In doing so, we analyse two issues trade and productivity and their effect on, inter alia, grower incomes. To achieve the objective, we develop a novel analytical framework, or model. The model combines two long and rich modelling traditions: the partial-equilibrium commodity-specific approach and the computable-general-equilibrium approach. The result is a model that represents the world wool market in detail, tracking the production of greasy wool through five off-farm production stages ending in the production of wool garments. Capturing the multistage nature of the wool production system is a key pillar in this part of the model . . . The estimated welfare gain for China is 0.1% of real income; this is a significant welfare gain. For three losing regions Italy, Germany and Japan the results are robust and we can be highly confident that these regions are the largest losers from the complete removal of 2005 wool tariffs. In both wool tariff liberalisation scenarios, regions whose exports are skewed towards wool textiles and garments gain the most as it is these wool products that have the highest initial tariff rates. The overall finding of this work is that a sophisticated analytical framework is necessary for analysing productivity and trade issues in the world wool market. Only a model of this kind can appropriately handle the degree of complexity of interactions between members (domestic and foreign) of the multistage wool production system. Further, including the nonwool economy in the analytical framework allows us to capture the indirect effects of changes in the world wool market and also the effects on the nonwool economy itself.
APA, Harvard, Vancouver, ISO, and other styles
22

Curto, Millet Fabien. "Inflation expectations, labour markets and EMU." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:9187d2eb-2f93-4a5a-a7d6-0fb6556079bb.

Full text
Abstract:
This thesis examines the measurement, applications and properties of consumer inflation expectations in the context of eight European Union countries: France, Germany, the UK, Spain, Italy, Belgium, the Netherlands and Sweden. The data proceed mainly from the European Commission's Consumer Survey and are qualitative in nature, therefore requiring quantification prior to use. This study first seeks to determine the optimal quantification methodology among a set of approaches spanning three traditions, associated with Carlson-Parkin (1975), Pesaran (1984) and Seitz (1988). The success of a quantification methodology is assessed on the basis of its ability to match quantitative expectations data and on its behaviour in an important economic application, namely the modelling of wages for our sample countries. The wage equation developed here draws on the theoretical background of the staggered contracts and the wage bargaining literature, and controls carefully for inflation expectations and institutional variables. The Carlson-Parkin variation proposed in Curto Millet (2004) was found to be the most satisfactory. This being established, the wage equations are used to test the hypothesis that the advent of EMU generated an increase in labour market flexibility, which would be reflected in structural breaks. The hypothesis is essentially rejected. Finally, the properties of inflation expectations and perceptions themselves are examined, especially in the context of EMU. Both the rational expectations and rational perceptions hypotheses are rejected. Popular expectations mechanisms, such as the "rule-of-thumb" model or Akerlof et al.'s (2000) "near-rationality hypothesis" are similarly unsupported. On the other hand, evidence is found for the transmission of expert forecasts to consumer expectations in the case of the UK, as in Carroll's (2003) model. The distribution of consumer expectations and perceptions is also considered, showing a tendency for gradual (as in Mankiw and Reis, 2002) but non-rational adjustment. Expectations formation is further shown to have important qualitative features.
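
The Carlson-Parkin quantification that this comparison starts from can be sketched in a few lines: shares of "up" and "down" survey answers are mapped into mean expected inflation under normality with a symmetric indifference threshold. Calibrating the threshold so that average expectations match average observed inflation is one common convention, not necessarily the exact variant tested in the thesis.

```python
import numpy as np
from scipy.stats import norm

def carlson_parkin(up, down, observed_inflation):
    """Quantify qualitative survey balances (shares expecting prices up/down, in [0,1])
    into expected inflation, assuming normal expectations and threshold delta."""
    up = np.asarray(up, dtype=float)
    down = np.asarray(down, dtype=float)
    f = norm.ppf(1.0 - up)        # (delta - mu_t) / sigma_t
    g = norm.ppf(down)            # (-delta - mu_t) / sigma_t
    ratio = -(f + g) / (f - g)    # mu_t / delta
    # calibrate delta so that mean expectation equals mean observed inflation
    delta = np.mean(observed_inflation) / np.mean(ratio)
    return delta * ratio          # series of expected inflation rates

print(carlson_parkin(up=[0.45, 0.50, 0.40], down=[0.10, 0.08, 0.12],
                     observed_inflation=[2.0, 2.2, 1.9]))
```
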
APA, Harvard, Vancouver, ISO, and other styles
23

Hatangala, Chinthana. "Identifying the Future Directions of Australian Excess Stock Returns and Their Determinants Using Binary Models." Thesis, 2016. https://vuir.vu.edu.au/32888/.

Full text
Abstract:
The predictability of excess stock returns has been debated by researchers over time, with many studies proving that stock returns can be predicted to some extent. To enable an effective investment strategy, it is vital for investors to identify the future directions of stock returns and the factors causing directional changes. This study sought to determine whether Australian monthly excess stock return signs are predictable, and identify the key determinants of Australian monthly excess stock return directions. Three different binary models were considered to predict stock directions: discriminant, logistic and probit models. The predictive powers of benchmark static logistic and probit models were also compared with dynamic, autoregressive and dynamic autoregressive models. In order to identify the key determinants, this study considered various economic, international and financial factors, as well as past volatility measures of explanatory variables. It also tested a United States (US) binary recession indicator and Organisation for Economic Co-operation and Development (OECD) composite leading indicator as explanatory variables in the predictive models.
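
A minimal static logit specification for return signs, of the kind compared in the study, might look as follows; the predictors and data are simulated placeholders, not the thesis's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 240
# placeholder monthly predictors: e.g. term spread, dividend yield, lagged excess return
X = rng.standard_normal((T, 3))
latent = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(T)
sign = (latent > 0).astype(int)               # 1 if excess return positive, else 0

logit = sm.Logit(sign, sm.add_constant(X)).fit(disp=0)
prob_up = logit.predict(sm.add_constant(X))
hit_rate = np.mean((prob_up > 0.5) == sign)   # in-sample directional accuracy
print(logit.params, hit_rate)
```
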
APA, Harvard, Vancouver, ISO, and other styles
24

"Die kombinering van vooruitskattings : 'n toepassing op die vernaamste makro-ekonomiese veranderlikes." Thesis, 2014. http://hdl.handle.net/10210/9439.

Full text
Abstract:
M.Com. (Econometrics)
The main purpose of this study is the combining of forecasts with special reference to major macroeconomic series of South Africa. The study is based on econometric principles and makes use of three macro-economic variables, forecasted with four forecasting techniques. The macroeconomic variables which have been selected are the consumer price index, consumer expenditure on durable and semi-durable products, and real M3 money supply. Forecasts of these variables have been generated by applying the Box-Jenkins ARIMA technique, Holt's two-parameter exponential smoothing, the regression approach and multiplicative decomposition. Subsequently, the results of each individual forecast are combined in order to determine if forecasting errors can be minimized. Traditionally, forecasting involves the identification and application of the best forecasting model. However, in the search for this unique model, it often happens that some important independent information contained in one of the other models is discarded. To prevent this from happening, researchers have investigated the idea of combining forecasts. A number of researchers used the results from different techniques as inputs into the combination of forecasts. In spite of the differences in their conclusions, three basic principles have been identified in the combination of forecasts, namely: (i) the considered forecasts should represent the widest possible range of forecasting techniques; (ii) inferior forecasts should be identified; and (iii) predictable errors should be modelled and incorporated into a new forecast series. Finally, a method of combining the selected forecasts needs to be chosen. The best way of selecting a method is probably by experimenting to find the best fit over the historical data. Having generated individual forecasts, these are combined by considering the specifications of the three combination methods. The first combination method is the combination of forecasts via weighted averages. The use of weighted averages to combine forecasts allows consideration of the relative accuracy of the individual methods and of the covariances of forecast errors among the methods. Secondly, the combination of exponential smoothing and Box-Jenkins is considered. Past errors of each of the original forecasts are used to determine the weights to attach to the two original forecasts in forming the combined forecasts. Finally, the regression approach is used to combine individual forecasts. Granger and Ramanathan (1984) have shown that weights can be obtained by regressing actual values of the variables of interest on the individual forecasts, without including a constant and with the restriction that the weights add up to one. The performance of the combinations relative to the individual forecasts has been tested, given that the efficiency criterion is the minimization of the mean square errors. The results of both the individual and the combined forecasting methods are acceptable. Although some of the methods prove to be more accurate than others, the conclusion can be made that reliable forecasts are generated by individual and combined forecasting methods. It is up to the researcher to decide whether to use an individual or combined method, since the difference, if any, in the root mean square percentage errors (RMSPE) is insignificantly small.
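
The regression-based combination attributed to Granger and Ramanathan (1984) can be sketched by re-parameterising the sum-to-one constraint and running ordinary least squares. This is a generic illustration on simulated forecasts rather than the study's own code.

```python
import numpy as np

def combine_forecasts(y, F):
    """Granger-Ramanathan combination: regress y on individual forecasts F (T x K)
    with no intercept and weights constrained to sum to one."""
    y = np.asarray(y, float)
    F = np.asarray(F, float)
    # impose sum(w) = 1 by writing w_K = 1 - w_1 - ... - w_{K-1}
    ylhs = y - F[:, -1]
    Xrhs = F[:, :-1] - F[:, [-1]]
    w_free, *_ = np.linalg.lstsq(Xrhs, ylhs, rcond=None)
    return np.append(w_free, 1.0 - w_free.sum())

rng = np.random.default_rng(6)
truth = rng.standard_normal(100).cumsum()
F = np.column_stack([truth + rng.normal(0, s, 100) for s in (0.5, 1.0, 2.0)])
w = combine_forecasts(truth, F)
print(w, w.sum())   # weights favour the more accurate forecasts and sum to one
```
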
APA, Harvard, Vancouver, ISO, and other styles
25

"Macroeconomic forecasting: a comparison between artificial neural networks and econometric models." Thesis, 2008. http://hdl.handle.net/10210/633.

Full text
Abstract:
In this study the prediction capabilities of Artificial Neural Networks and typical econometric methods are compared. This is done in the domains of Finance and Economics. Initially, the Neural Networks are shown to outperform traditional econometric models in forecasting nonlinear behaviour. The comparison is then extended to indicate that the accuracy of share price forecasting is not necessarily improved when applying Neural Networks rather than traditional time series analysis. Finally, Neural Networks are used to forecast South African inflation rates, and their performance is compared to that of vector error-correction models, which appear to outperform Artificial Neural Networks.
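
As a generic illustration of this kind of comparison (not the study's models or data), a small feed-forward network and a linear autoregression can be fitted to the same simulated series and judged on out-of-sample error:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
T = 400
y = np.zeros(T)
for t in range(1, T):                          # a simple nonlinear AR(1) process
    y[t] = 0.6 * y[t - 1] + 0.3 * np.tanh(2 * y[t - 1]) + rng.normal(0, 0.5)

X, target = y[:-1].reshape(-1, 1), y[1:]
split = 300
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X[:split], target[:split])
lin = LinearRegression().fit(X[:split], target[:split])

mse = lambda m: np.mean((m.predict(X[split:]) - target[split:]) ** 2)
print("ANN out-of-sample MSE:", mse(ann), "linear AR(1) MSE:", mse(lin))
```
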
Prof. D.J. Marais
APA, Harvard, Vancouver, ISO, and other styles
26

Siriprapanukul, Pawin. "Essays in forecasting macroeconomic indicators." Phd thesis, 2009. http://hdl.handle.net/1885/150183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Morris, Alan Geoffrey. "An economic analysis of industrial disputation in Australia." Thesis, 1996. https://vuir.vu.edu.au/15259/.

Full text
Abstract:
Australia may present a special case in the analysis of strikes because, for most of the Twentieth Century, the Australian Industrial Relations Commission has acted as an industrial "umpire" charged with keeping the industrial peace. We begin with a review of major contributions to the theory of strikes, and reestimations and evaluations of the time-series models of previous Australian researchers. We then develop theoretical models of strikes and non-strike industrial action, stemming from Marshall's (1920) contribution to the theory of wages. If higher real wages lead to lower levels of employment, union demands are likely to be greater, and industrial action more frequent, when the duration of unemployment of retrenched workers is shorter. Important determinants of the opportunity costs of wage demands to employees, are wage losses of retrenched employees during unemployment and in subsequent re-employment. Critical in the union's decision to threaten a strike or a non-strike action, is a permanent loss of market share directly associated with strikes. The model of strikes is tested, along with variables suggested by other theories, using time-series data from the period 3:1959 to 4:1992. We show that the model is robust and out-performs modified versions of other Australian models. We find that the Prices and Incomes Accord is associated with a reduction in strike activity, but that other researchers have over-estimated its impact. Australian Workplace Industrial Relations Survey data is used to produce cross-sectional models of strikes and non-strike actions in unionised workplaces. We test the importance of the opportunity costs of wage demands and strikes, using variables describing the firm's competitive environment and local labour market conditions. Because the objectives of workplaces differ, we estimate separate models for privately owned workplaces, government non-commercial establishments and government business enterprises. All empirical models are broadly consistent with the predictions of our theoretical model.
APA, Harvard, Vancouver, ISO, and other styles
28

Akmal, Muhammad. "The structure of energy demand in Australia : an econometric investigation with some economic applications." Phd thesis, 2000. http://hdl.handle.net/1885/144955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

"The combination of high and low frequency data in macroeconometric forecasts: the case of Hong Kong." 1999. http://library.cuhk.edu.hk/record=b5889921.

Full text
Abstract:
by Chan Ka Man.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1999.
Includes bibliographical references (leaves 64-65).
Abstracts in English and Chinese.
ACKNOWLEDGMENTS --- p.iii
LIST OF TABLES --- p.iv
CHAPTER
Chapter I --- INTRODUCTION --- p.1
Chapter II --- THE LITERATURE REVIEW --- p.4
Chapter III --- METHODOLOGY
Forecast Pooling Technique
Modified Technique
Chapter IV --- MODEL SPECIFICATIONS --- p.16
The Monthly Models
The Quarterly Model
Data Description
Chapter V --- THE COMBINED FORECAST --- p.32
Pooling Forecast Technique in Case of Hong Kong
The Forecasts Results
Chapter VI --- CONCLUSION --- p.38
TABLES --- p.40
APPENDIX --- p.53
BIBLIOGRAPHY --- p.64
APA, Harvard, Vancouver, ISO, and other styles
30

"Three essays on financial econometrics." 2013. http://library.cuhk.edu.hk/record=b5549821.

Full text
Abstract:
本文由三篇文章構成。首篇是關於多維變或然分佈預測的檢驗。第三篇是關於非貝斯結構性轉變的VAR 模型。或然分佈預測的檢驗是基於檢驗PIT(probability integral transformation) 序的均勻份佈性質與獨性質。第一篇文章基於Clements and Smith (2002) 的方法提出新的位置正變換。這新的變換改善原有的對稱問題,以及提高檢驗的power。第二篇文章建對於多變或然分佈預測的data-driven smooth 檢驗。通過蒙特卡模擬,本文驗證這種方法在小樣本下的有效性。在此之前,由於高維模型的複雜性,大部分的研究止於二維模型。我們在文中提出有效的方法把多維變換至單變。蒙特卡模擬實驗,以及在組融據的應用中,都證實這種方法的優勢。最後一篇文章提出非貝斯結構性轉變的VAR 模型。在此之前,Chib(1998) 建的貝斯結構性轉變模型須要預先假定構性轉變的目。因此他的方法須要比較同構性轉變目模型的優。而本文提出的stick-breaking 先驗概,可以使構性轉變目在估計中一同估計出。因此我們的方法具有robust 之性質。通過蒙特卡模擬,我們考察存在著四個構性轉變的autoregressive VAR(2) 模型。結果顯示我們的方法能準確地估計出構性轉變的發生位置。而模型中的65 個估計都十分接近真實值。我們把這方法應用在多個對沖基回報序。驗測出的構性轉變位置與市場大跌的時段十分吻合。
This thesis consists of three essays on financial econometrics. The first two essays are about multivariate density forecast evaluations. The third essay is on nonparametric Bayesian change-point VAR model. We develop a method for multivariate density forecast evaluations. The density forecast evaluation is based on checking uniformity and independence conditions of the probability integral transformation of the observed series in question. In the first essay, we propose a new method which is a location-adjusted version of Clements and Smith (2002) that corrects asymmetry problem and increases testing power. In the second essay, we develop a data-driven smooth test for multivariate density forecast evaluation and show some evidences on its finite sample performance using Monte Carlo simulations. Previous to our study, most of the works are up to bivariate model as it is difficult to evaluate with the existing methods. We propose an efficient dimensional reduction approach to reduce the dimension of multivariate density evaluation to a univariate one. We perform various Monte Carlo simulations and two applications on financial asset returns which show that our test performs well. The last essay proposes a nonparametric extension to existing Bayesian change-point model in a multivariate setting. Previous change-point model of Chib (1998) requires specification of the number of change points a priori. Hence a posterior model comparison is needed for di erent change-point models. We introduce the stick-breaking prior to the change-point process that allows us to endogenize the number of change points into the estimation procedure. Hence, the number of change points is simultaneously determined with other unknown parameters. Therefore our model is robust to model specification. We preform a Monte Carlo simulation of bivariate vector autoregressive VAR(2) process which is subject to four structural breaks. Our model estimate the break locations with high accuracy and the posterior estimates of the 65 parameters are closed to the true values. We apply our model to various hedge fund return processes and the detected change points coincide with market crashes.
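
The PIT-based evaluation principle described above can be sketched as follows. The multivariate dimension reduction and data-driven smooth test developed in the thesis are not reproduced; the Gaussian forecast density and the simulated realisations are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
y = rng.standard_normal(500) * 1.2            # realisations
mu_hat, sigma_hat = 0.0, 1.0                  # placeholder forecast density N(0, 1)

# probability integral transforms: iid U(0,1) if the forecast density is correct
pit = stats.norm.cdf(y, loc=mu_hat, scale=sigma_hat)

ks_stat, ks_pvalue = stats.kstest(pit, "uniform")       # uniformity check
autocorr = np.corrcoef(pit[:-1], pit[1:])[0, 1]         # crude independence check
print("KS p-value:", ks_pvalue, "first-order autocorrelation of PITs:", autocorr)
```
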
Detailed summary in vernacular field only.
Ko, Iat Meng.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 176-194).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts also in Chinese.
Abstract --- p.i
Acknowledgement --- p.v
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Multivariate Density Forecast Evaluation: A Modified Approach --- p.7
Chapter 2.1 --- Introduction --- p.7
Chapter 2.2 --- Evaluating Density Forecasts --- p.13
Chapter 2.3 --- Monte Carlo Simulations --- p.18
Chapter 2.3.1 --- Bivariate normal distribution --- p.19
Chapter 2.3.2 --- The Ramberg distribution --- p.21
Chapter 2.3.3 --- Student’s t and uniform distributions --- p.24
Chapter 2.4 --- Empirical Applications --- p.24
Chapter 2.4.1 --- AR model --- p.25
Chapter 2.4.2 --- GARCH model --- p.27
Chapter 2.5 --- Conclusion --- p.29
Chapter 3 --- Multivariate Density Forecast Evaluation: Smooth Test Approach --- p.39
Chapter 3.1 --- Introduction --- p.39
Chapter 3.2 --- Exponential Transformation for Multi-dimension Reduction --- p.47
Chapter 3.3 --- The Smooth Test --- p.56
Chapter 3.4 --- The Data-Driven Smooth Test Statistic --- p.66
Chapter 3.4.1 --- Selection of K --- p.66
Chapter 3.4.2 --- Choosing p of the Portmanteau based test --- p.69
Chapter 3.5 --- Monte Carlo Simulations --- p.70
Chapter 3.5.1 --- Multivariate normal and Student’s t distributions --- p.71
Chapter 3.5.2 --- VAR(1) model --- p.74
Chapter 3.5.3 --- Multivariate GARCH(1,1) Model --- p.78
Chapter 3.6 --- Density Forecast Evaluation of the DCC-GARCH Model in Density Forecast of Spot-Future returns and International Equity Markets --- p.80
Chapter 3.7 --- Conclusion --- p.87
Chapter 4 --- Stick-Breaking Bayesian Change-Point VAR Model with Stochastic Search Variable Selection --- p.111
Chapter 4.1 --- Introduction --- p.111
Chapter 4.2 --- The Bayesian Change-Point VAR Model --- p.116
Chapter 4.3 --- The Stick-breaking Process Prior --- p.120
Chapter 4.4 --- Stochastic Search Variable Selection (SSVS) --- p.121
Chapter 4.4.1 --- Priors on φ_j = vec(Φ_j) --- p.122
Chapter 4.4.2 --- Prior on Σ_j --- p.123
Chapter 4.5 --- The Gibbs Sampler and a Monte Carlo Simulation --- p.123
Chapter 4.5.1 --- The posteriors of Φ_j and Σ_j --- p.123
Chapter 4.5.2 --- MCMC Inference for SB Change-Point Model: A Gibbs Sampler --- p.126
Chapter 4.5.3 --- A Monte Carlo Experiment --- p.128
Chapter 4.6 --- Application to Daily Hedge Fund Return --- p.130
Chapter 4.6.1 --- Hedge Funds Composite Indices --- p.132
Chapter 4.6.2 --- Single Strategy Hedge Funds Indices --- p.135
Chapter 4.7 --- Conclusion --- p.138
Chapter A --- Derivation and Proof --- p.166
Chapter A.1 --- Derivation of the distribution of (Z₁ - EZ₁) x (Z₂ - EZ₂) --- p.166
Chapter A.2 --- Derivation of limiting distribution of the smooth test statistic without parameter estimation uncertainty ( θ = θ₀) --- p.168
Chapter A.3 --- Proof of Theorem 2 --- p.170
Chapter A.4 --- Proof of Theorem 3 --- p.172
Chapter A.5 --- Proof of Theorem 4 --- p.174
Chapter A.6 --- Proof of Theorem 5 --- p.175
Bibliography --- p.176
APA, Harvard, Vancouver, ISO, and other styles
31

Smith, Jeremy Paul Duncan. "Aspects of macroeconometric time series modelling." Phd thesis, 1991. http://hdl.handle.net/1885/121824.

Full text
Abstract:
This thesis contains six chapters which investigate different areas in applied econometrics. The major focus of the study has been the application of techniques from the applied econometrics literature to a study of the Australian macroeconomy. Chapter Two uses a Vector AutoRegressive (VAR) model and a structural model of the Australian economy to discover those variables responsible for the fluctuations which have buffeted the Australian economy over the last fifteen years. Despite marked differences in the appearance of the two models, the results are similar in predicting how the economy responds to certain shocks. Chapter Three examines the behaviour of the Australian dollar over the period since the float in December 1983. The analysis shows that the dollar is over-valued, compared with a level that can maintain a sustainable debt-GDP ratio. The over-valuation has meant that the Australian dollar is discounted on the forward market and high domestic interest rates are necessary to offset the depreciation expected by foreign investors. Chapter Four conducts a Monte Carlo analysis to investigate the performance of alternative estimation methods in equations which include a generated regressor as an explanatory variable. The results show that while FIML tends to dominate with an increasing sample size, in small samples FIML standard errors are downward biased, leaving Correct OLS as the best estimation method. Chapter Five further examines the generated regressor problem using Barro's (1977) New Classical unemployment model and shows that the results are robust to the estimation method; however, the results from the larger model suggested by Pesaran (1982) are sensitive to the estimation procedure. Chapter Six evaluates alternative procedures for converting qualitative expectation responses to quantitative expectations for the Australian manufacturing sector and finds that a dynamic nonlinear model, a generalisation of the model suggested by Pesaran (1987), is superior both in picking up turning points in the data and in minimising the forecast error. Chapter Seven further examines the behaviour of the Australian manufacturing sector using multivariate cointegration and the derived quantitative expectations of Chapter Six. The analysis shows that the role of price variables is much more significant than that of output in determining employment movements.
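
A generic version of the VAR exercise described for Chapter Two, on simulated rather than Australian data, might look like this:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(9)
data = pd.DataFrame(rng.standard_normal((200, 3)).cumsum(axis=0),
                    columns=["output", "prices", "exchange_rate"])

model = VAR(data.diff().dropna())          # estimate the VAR on first differences
results = model.fit(maxlags=4, ic="aic")   # lag length chosen by AIC
irf = results.irf(12)                      # impulse responses up to 12 periods
print(results.summary())
print(irf.irfs.shape)                      # (periods + 1, neqs, neqs) array of responses
```
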
APA, Harvard, Vancouver, ISO, and other styles
32

Quang, Doan Hong. "Essays on factor-market distortions and economic growth." Phd thesis, 2000. http://hdl.handle.net/1885/147706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Giesecke, James. "FEDERAL-F : a multi-regional multi-sectoral dynamic model of the Australian economy / by James A.D. Giesecke." 2000. http://hdl.handle.net/2440/19810.

Full text
Abstract:
Bibliography: p. 648-661.
2 v. (xviii, 661 p. : ill.([1] col.) ; 30 cm.
Title page, contents and abstract only. The complete thesis in print form is available from the University Library.
Thesis (Ph.D.)--University of Adelaide, School of Economics, 2001
APA, Harvard, Vancouver, ISO, and other styles
34

Mynbaev, Kairat T. "Two essays in microeconomic theory and econometrics." Thesis, 1995. http://hdl.handle.net/1957/35191.

Full text
Abstract:
The thesis contains two chapters which address questions important for both economic theory and applications. In Chapter I we show that inequalities are an important tool in the theory of production functions. Various notions of internal economies of scale can be equivalently expressed in terms of upper or lower bounds on production functions. In the problem of aggregation of efficiently allocated goods, if one is concerned with two-sided bounds as opposed to exact expressions, the aggregate production function can be derived from some general assumptions about the production units subject to aggregation. The approach used does not require smoothness or convexity properties. In Chapter II we introduce a new forecasting technique, the essential parts of which include using average high-order polynomial estimators for the in-sample fit and a low-order polynomial extension for the out-of-sample fit. We provide some statements following the Gauss-Markov theorem format. The empirical part shows that algebraic polynomials, treated in a proper way, can perform very well in one-step-ahead prediction, especially in predicting the direction of exchange rate movements.
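
A rough sketch of the polynomial idea follows, with an invented window length and polynomial orders, and without the averaging over estimators described in the thesis.

```python
import numpy as np

def polynomial_one_step(y, fit_order=6, extend_order=1, window=40):
    """Fit a high-order polynomial over a rolling window for the in-sample trend,
    then extend one step ahead with a low-order polynomial through the last points."""
    t = np.arange(window)
    seg = y[-window:]
    smooth = np.polyval(np.polyfit(t, seg, fit_order), t)   # in-sample fit
    tail = np.arange(window - extend_order - 1, window)
    line = np.polyfit(tail, smooth[tail], extend_order)     # low-order extension
    return np.polyval(line, window)                         # one-step-ahead prediction

rng = np.random.default_rng(10)
rate = 1.5 + 0.01 * np.arange(120) + rng.normal(0, 0.02, 120)  # placeholder exchange rate
forecast = polynomial_one_step(rate)
print(forecast, "direction up?", forecast > rate[-1])
```
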
Graduation date: 1995
APA, Harvard, Vancouver, ISO, and other styles
35

Nyasha, Sheilla. "Financial development and economic growth : new evidence from six countries." Thesis, 2014. http://hdl.handle.net/10500/18576.

Full text
Abstract:
Using 1980 - 2012 annual data, the study empirically investigates the dynamic relationship between financial development and economic growth in three developing countries (South Africa, Brazil and Kenya) and three developed countries (United States of America, United Kingdom and Australia). The study was motivated by the current debate regarding the role of financial development in the economic growth process, and their causal relationship. The debate centres on whether financial development impacts positively or negatively on economic growth and whether it Granger-causes economic growth or vice versa. To this end, two models have been used. In Model 1 the impact of bank- and market-based financial development on economic growth is examined, while in Model 2 it is the causality between the two that is explored. Using the autoregressive distributed lag (ARDL) bounds testing approach to cointegration and error-correction based causality test, the results were found to differ from country to country and over time. These results were also found to be sensitive to the financial development proxy used. Based on Model 1, the study found that the impact of bank-based financial development on economic growth is positive in South Africa and the USA, but negative in the U.K – and neither positive nor negative in Kenya. Elsewhere the results were inconclusive. Market-based financial development was found to impact positively in Kenya, USA and the UK but not in the remaining countries. Based on Model 2, the study found that bank-based financial development Granger-causes economic growth in the UK, while in Brazil they Granger-cause each other. However, in South Africa, Kenya and USA no causal relationship was found. In Australia the results were inconclusive. The study also found that in the short run, market-based financial development Granger-causes economic growth in the USA but that in South Africa and Brazil, the reverse applies. On the other hand bidirectional causality was found to prevail in Kenya in the same period.
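
The Granger-causality leg of the analysis can be illustrated generically; the ARDL bounds-testing step is omitted and the series below are simulated placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(11)
T = 120
findev = rng.standard_normal(T).cumsum()                           # financial development proxy
growth = np.r_[0.0, 0.3 * np.diff(findev)] + rng.normal(0, 1, T)   # growth partly driven by findev

df = pd.DataFrame({"growth": growth, "findev": findev}).diff().dropna()
# tests whether lagged 'findev' helps predict 'growth' (column order: caused, causing)
res = grangercausalitytests(df[["growth", "findev"]], maxlag=2)
```
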
Economics
DCOM (Economics)
APA, Harvard, Vancouver, ISO, and other styles
36

Jiang, Qiang. "Three essays on water modelling and management in the Murray-Darling Basin, Australia." Phd thesis, 2011. http://hdl.handle.net/1885/151262.

Full text
Abstract:
The primary contributions of this thesis are the economic studies of proposed water use reductions and climate change, and the development of an integrated hydro-economic model for the Murray-Darling Basin, Australia. This water model not only simulates the land and water use in the Basin, but also optimises these uses for certain targets such as environmental flows. More importantly, this model can be applied to evaluate policy options for the Basin, such as water buybacks, and provide estimates of the possible impacts of climate change. The thesis consists of three main essays focusing on issues in water modelling and management in the Basin. The first essay describes the development of a water model. This model is applied to estimate the impacts of water use reductions in the second essay, and of climate change in the third essay. Other issues related to the Basin's water management, such as a review of existing water modelling, the background of the Basin, water trading, possible policy implementations and future research, are also discussed. The first essay (Chapter 4) describes the construction of the Integrated Irrigated Water Model (IIAWM), including the structure of IIAWM and the data sources. Using the latest hydrological data and revised catchment boundaries, IIAWM can simulate and optimise land and water use in the Basin. To address the criticism that existing models have failed to consider water trading barriers, the physical constraints on water trading have been incorporated in IIAWM. The model can also evaluate various water policies and estimate the impacts of physical condition changes. The second essay (Chapter 5) evaluates the impacts of proposed water use reductions by the Australian government. To balance the use of water between irrigated industries and environmental purposes, the Australian government's draft plan released in October 2010 proposed to reduce the volume of water used in the Basin by 3,000 to 4,000 GL/year. Simulations from IIAWM indicate that the impacts of the proposed water use reductions will be modest, although there may be substantial impacts in particular locations. The third essay (Chapter 6) investigates the impacts of climate change in the Basin. A full range of climate change scenarios, from modest to severe, has been applied using IIAWM. This thesis finds that with water trading, profit reductions are substantially smaller than the water use reductions.
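
The optimisation core of a model such as IIAWM can be caricatured as a small linear program that allocates a limited water volume across activities subject to an environmental-flow reservation; all numbers and activity names below are invented for illustration and bear no relation to the thesis's calibration.

```python
import numpy as np
from scipy.optimize import linprog

# illustrative profit per ML of water used by each irrigated activity ($/ML)
profit = np.array([300.0, 220.0, 150.0])             # e.g. horticulture, dairy, rice
water_available = 10_000.0                            # total allocable water (ML)
environmental_flow = 3_000.0                          # water reserved for the environment (ML)
activity_limits = np.array([4_000.0, 5_000.0, 8_000.0])  # per-activity caps (ML)

# maximise profit  <=>  minimise -profit, subject to total use <= available - reserved
res = linprog(c=-profit,
              A_ub=np.ones((1, 3)), b_ub=[water_available - environmental_flow],
              bounds=list(zip(np.zeros(3), activity_limits)),
              method="highs")
print(res.x, -res.fun)   # optimal water allocation and total profit
```
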
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography