Dissertations / Theses on the topic 'Economic forecasting Australia Econometric models'
Consult the top 36 dissertations / theses for your research on the topic 'Economic forecasting Australia Econometric models.'
Billah, Baki 1965. "Model selection for time series forecasting models." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/8840.
Jeon, Yongil. "Four essays on forecasting evaluation and econometric estimation." Diss., University of California, San Diego, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9949690.
Azam, Mohammad Nurul 1957. "Modelling and forecasting in the presence of structural change in the linear regression model." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9152.
Lazim, Mohamad Alias. "Econometric forecasting models and model evaluation : a case study of air passenger traffic flow." Thesis, Lancaster University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296880.
Enzinger, Sharn Emma 1973. "The economic impact of greenhouse policy upon the Australian electricity industry : an applied general equilibrium analysis." Monash University, Centre of Policy Studies, 2001. http://arrow.monash.edu.au/hdl/1959.1/8383.
Kummerow, Max F. "A paradigm of inquiry for applied real estate research : integrating econometric and simulation methods in time and space specific forecasting models : Australian office market case study." Thesis, Curtin University, 1997. http://hdl.handle.net/20.500.11937/1574.
Marshall, Peter John 1960. "Rational versus anchored traders : exchange rate behaviour in macro models." Monash University, Dept. of Economics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9048.
Steinbach, Max Rudibert. "Essays on dynamic macroeconomics." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86196.
ENGLISH ABSTRACT: In the first essay of this thesis, a medium scale DSGE model is developed and estimated for the South African economy. When used for forecasting, the model is found to outperform private sector economists when forecasting CPI inflation, GDP growth and the policy rate over certain horizons. In the second essay, the benchmark DSGE model is extended to include the yield on South African 10-year government bonds. The model is then used to decompose the 10-year yield spread into (1) the structural shocks that contributed to its evolution during the inflation targeting regime of the South African Reserve Bank, as well as (2) an expected yield and a term premium. In addition, it is found that changes in the South African term premium may predict future real economic activity. Finally, the need for DSGE models to take account of financial frictions became apparent during the recent global financial crisis. As a result, the final essay incorporates a stylised banking sector into the benchmark DSGE model described above. The optimal response of the South African Reserve Bank to financial shocks is then analysed within the context of this structural model.
Silvestrini, Andrea. "Essays on aggregation and cointegration of econometric models." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210304.
Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models.
A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter, and the main results are summarized. In each case, assuming the underlying process at the disaggregate frequency is known, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples are presented. Systematic sampling schemes are also reviewed.
Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002, Marcellino, 1999, Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation. On the other hand, as pointed out by Marcellino (1999), other important time series features, such as cointegration and the presence of unit roots, are invariant to temporal aggregation and are not induced by it.
Some empirical applications based on macroeconomic and financial data illustrate all the techniques surveyed and the main results.
Chapter 2 is an attempt to monitor fiscal variables in the Euro area, building an early warning signal indicator for assessing the development of public finances in the short run and exploiting the existence of monthly budgetary statistics for France, taken as an example country.
The application is conducted focusing on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, as the policy makers are interested in yearly predictions.
The short-run forecasting exercises carried out for years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modeling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, by the middle of the year, very accurate predictions for the current year are made available.
The proposed method could be extremely useful, providing policy makers with a valuable indicator when assessing the development of public finances in the short-run (one year horizon or even less).
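The forecasting scheme described above lends itself to a compact sketch. The following Python fragment is an illustration only, with synthetic data and an arbitrary ARIMA order (the chapter estimates a separate monthly model per budget component): it fits a monthly ARIMA model and aggregates its twelve monthly forecasts into an annual prediction.

    # Illustrative sketch (not the thesis code): forecast a monthly flow series
    # with an ARIMA model, then sum the monthly forecasts into an annual figure.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    # Synthetic monthly budget component (a stand-in for the French cash data)
    idx = pd.date_range("1995-01", periods=84, freq="MS")
    y = pd.Series(100 + np.cumsum(rng.normal(0.2, 1.0, 84)), index=idx)

    train = y[:-12]                              # estimate on all but the last year
    model = ARIMA(train, order=(1, 1, 1)).fit()  # order chosen for illustration only
    monthly_fc = model.forecast(steps=12)        # twelve monthly predictions
    annual_fc = monthly_fc.sum()                 # temporal aggregation to the annual total

    print(f"predicted annual total: {annual_fc:.1f}")
    print(f"actual annual total:    {y[-12:].sum():.1f}")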
Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared. An aggregate predictor is built by forecasting directly the aggregate process, as it results from contemporaneous aggregation of the data generating vector process. A disaggregate predictor is a predictor obtained from aggregation of univariate forecasts for the individual components of the data generating vector process.
The econometric framework is broadly based on Lütkepohl (1987). The necessary and sufficient condition for the equality of mean squared errors associated with the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition of equality of predictors as stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is sufficient (but not necessary) for the equality of mean squared errors.
Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved using specific assumptions on the parameters of the VMA(1) structure.
Finally, an empirical application that involves the problem of forecasting the Italian monetary aggregate M1 on the basis of annual time series ranging from 1948 until 1998, prior to the creation of the European Economic and Monetary Union (EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations.
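A minimal simulation can make the aggregate-versus-disaggregate comparison concrete. The sketch below assumes a bivariate VAR(1) data-generating process and uses AR(1) fits as stand-ins for the chapter's VARMA machinery; it contrasts forecasting the aggregate directly with summing univariate component forecasts.

    # Minimal simulation (illustrative only): compare forecasting the aggregate
    # z = x1 + x2 directly against summing univariate forecasts of the components.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    T = 400
    A = np.array([[0.5, 0.2], [0.1, 0.4]])       # assumed VAR(1) coefficients
    x = np.zeros((T, 2))
    for t in range(1, T):
        x[t] = A @ x[t - 1] + rng.normal(size=2)
    z = x.sum(axis=1)                            # contemporaneous aggregate

    err_agg, err_dis = [], []
    for t in range(350, T):
        # "aggregate" predictor: univariate model fitted to the aggregate itself
        f_agg = ARIMA(z[:t], order=(1, 0, 0)).fit().forecast(1)[0]
        # "disaggregate" predictor: sum of univariate component forecasts
        f_dis = sum(ARIMA(x[:t, i], order=(1, 0, 0)).fit().forecast(1)[0]
                    for i in range(2))
        err_agg.append((z[t] - f_agg) ** 2)
        err_dis.append((z[t] - f_dis) ** 2)

    print("MSE, aggregate predictor:   ", round(float(np.mean(err_agg)), 3))
    print("MSE, disaggregate predictor:", round(float(np.mean(err_dis)), 3))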
Chapter 4 deals with a cointegration analysis applied to the empirical investigation of fiscal sustainability. The focus is on a particular country: Poland. The choice of Poland is not random. First, the motivation stems from the fact that fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, this is one of the first countries to start the transition process to a market economy (since 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001). The emphasis is on the feasibility of a permanent deficit in the long-run, meaning whether a government can continue to operate under its current fiscal policy indefinitely.
The empirical analysis of debt stabilization proceeds in two steps.
First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and (inclusive of interest) expenditures and to select the cointegrating rank. This task is complicated by the conceptual difficulty linked to the choice of the prior distributions for the parameters relevant to the economic problem under study (Villani, 2005).
Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999).
The priors used in the paper lead to straightforward posterior calculations which can be easily performed.
Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods.
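The chapter's analysis is Bayesian; as a rough frequentist stand-in, the following sketch uses synthetic revenue and expenditure series sharing an assumed common trend and the Engle-Granger test from statsmodels to illustrate the revenue-expenditure relationship being examined. It is not the chapter's method, only a plain illustration of the cointegration question.

    # Frequentist stand-in for the chapter's Bayesian analysis (illustration only):
    # test whether government revenues and expenditures share a common stochastic trend.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import coint

    rng = np.random.default_rng(2)
    T = 200
    trend = np.cumsum(rng.normal(size=T))                      # common stochastic trend
    rev = 10 + 1.0 * trend + rng.normal(scale=0.5, size=T)     # budget revenues
    spend = 12 + 1.0 * trend + rng.normal(scale=0.5, size=T)   # expenditures incl. interest

    tstat, pvalue, _ = coint(rev, spend)         # Engle-Granger cointegration test
    print(f"EG t-statistic = {tstat:.2f}, p-value = {pvalue:.3f}")

    # Normalized long-run relation rev = a + b*spend (b near 1 suggests sustainability)
    b = sm.OLS(rev, sm.add_constant(spend)).fit().params
    print(f"cointegrating vector estimate: rev = {b[0]:.2f} + {b[1]:.2f}*spend")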
Doctorat en Sciences économiques et de gestion
Ben-Belhassen, Boubaker. "Econometric models of the Argentine cereal economy : a focus on policy simulation analysis." Diss., University of Missouri, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842508.
De Antonio Liedo, David. "Structural models for macroeconomics and forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210142.
…central debates in empirical macroeconomic modeling. Chapter 1, entitled “A Model for Real-Time Data Assessment with an Application to GDP Growth Rates”, provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. Thus, the model encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008); that is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the data publication process (DPP) presented here allows for the simultaneous presence of both noise and news. While the “regression approach” followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it cannot quantify it, as our model does.
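The classical regressions on which this literature builds can be sketched in a few lines. The example below uses synthetic data and separate "pure news" and "pure noise" worlds; the thesis model nests both hypotheses rather than testing them separately.

    # Sketch of the classical noise-vs-news revision regressions (illustration only).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    T = 300

    # News world: the preliminary figure is a rational forecast; the revision is pure news.
    prelim_news = rng.normal(2.0, 0.8, T)
    final_news = prelim_news + rng.normal(0.0, 0.5, T)

    # Noise world: the preliminary figure is the final value plus measurement noise.
    final_noise = rng.normal(2.0, 0.8, T)
    prelim_noise = final_noise + rng.normal(0.0, 0.5, T)

    def revision_regression(final, prelim):
        # Regress the revision on the preliminary release: a nonzero slope
        # rejects the news (rational forecast) hypothesis.
        r = final - prelim
        res = sm.OLS(r, sm.add_constant(prelim)).fit()
        return res.params[1], res.pvalues[1]

    for label, f, p in [("news world", final_news, prelim_news),
                        ("noise world", final_noise, prelim_noise)]:
        slope, pval = revision_regression(f, p)
        print(f"{label}: slope on preliminary = {slope:+.2f} (p = {pval:.3f})")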
The second and third chapters acknowledge the possibility that macroeconomic data are measured with error, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process described in the first chapter.
Chapter 2, entitled “Revisiting the Success of the RBC model”, proposes the use of dynamic factor models as an alternative to the VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture the weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced-form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful for forecasting than the strong behavioral assumptions imposed by the microfoundations in the model economy.
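One textbook candidate for such a formal test of predictive accuracy is the Diebold-Mariano statistic. The sketch below, on simulated forecast-error series under squared-error loss, is an illustration of that kind of test, not the thesis's actual procedure.

    # A textbook Diebold-Mariano test of equal predictive accuracy (illustration).
    import numpy as np
    from scipy import stats

    def diebold_mariano(e1, e2, h=1):
        # DM test on squared-error loss; h = forecast horizon (controls HAC lags).
        d = e1 ** 2 - e2 ** 2                    # loss differential
        T = len(d)
        var = d.var(ddof=0)                      # HAC variance of the mean,
        for k in range(1, h):                    # rectangular kernel up to lag h-1
            var += 2 * np.cov(d[k:], d[:-k], ddof=0)[0, 1]
        dm = d.mean() / np.sqrt(var / T)
        return dm, 2 * stats.norm.sf(abs(dm))    # two-sided normal p-value

    rng = np.random.default_rng(4)
    e_dsge = rng.normal(0, 1.1, 120)             # pseudo out-of-sample errors, model A
    e_factor = rng.normal(0, 1.0, 120)           # pseudo out-of-sample errors, model B
    dm, p = diebold_mariano(e_dsge, e_factor)
    print(f"DM = {dm:.2f}, p = {p:.3f}")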
The last chapter, “What are Shocks Capturing in DSGE modeling”, contributes to current debates on the use and interpretation of larger DSGE models. The recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005, Uhlig, 2004). In this chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which models the statistical agency explicitly, along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification.
Doctorat en Sciences économiques et de gestion
Kummerow, Max F. "A paradigm of inquiry for applied real estate research : integrating econometric and simulation methods in time and space specific forecasting models : Australian office market case study." Curtin University of Technology, School of Economics and Finance, 1997. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=11274.
…models for rent forecasting and models for analysis related to policy and system redesign. The dissertation ends with two chapters on institutional reforms whereby better information might find application to improve market efficiency. Keywords: office rents, rent adjustment, office market modelling, forecasting, system dynamics.
Feng, Ning. "Essays on business cycles and macroeconomic forecasting." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/279.
Yang, Wenling. "M-GARCH Hedge Ratios And Hedging Effectiveness In Australian Futures Markets." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2000. https://ro.ecu.edu.au/theses/1530.
Berger, Loïc. "Essays on the economics of risk and uncertainty." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209676.
…In the second chapter, I analyze the effect of ambiguity on self-insurance and self-protection, which are tools used to deal with the uncertainty of facing a monetary loss when market insurance is not available (in the self-insurance model, the decision maker can exert an effort to reduce the size of the loss occurring in the bad state of the world, while in the self-protection, or prevention, model, the effort reduces the probability of being in the bad state).
In a short note, in the context of a two-period model, I first examine the links between risk aversion, prudence and self-insurance/self-protection activities under risk. Contrary to the results obtained in the static one-period model, I show that the impacts of prudence and of risk aversion go in the same direction and generate a higher level of prevention in the more usual situations. I also show that the results concerning self-insurance in a single-period framework may be easily extended to a two-period context.
I then consider two-period self-insurance and self-protection models in the presence of ambiguity and analyze the effect of ambiguity aversion. I show that in most common situations, ambiguity prudence is a sufficient condition to observe an increase in the level of effort. I propose an interpretation of the model in the context of climate change, so that self-insurance and self-protection are respectively seen as adaptation and mitigation efforts that a policy-maker should undertake to deal with an uncertain catastrophic event, and I interpret the results obtained as an expression of the Precautionary Principle.
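A stylized numerical version of a two-period self-protection problem under smooth ambiguity can be set up as follows. Everything below, from the functional forms and parameters to the exp(-e) effort technology, is an assumption made purely for illustration, not the chapter's model.

    # Stylized two-period self-protection under smooth ambiguity (all settings assumed).
    import numpy as np
    from scipy.optimize import minimize_scalar

    w, L, beta = 10.0, 6.0, 0.95                 # wealth, loss size, discount factor
    p_models = [0.2, 0.5]                        # ambiguous loss probabilities
    q = [0.5, 0.5]                               # second-order beliefs
    u = np.log                                   # risk aversion

    def certainty_equiv(eus, alpha):
        # smooth-ambiguity aggregator with phi(x) = -exp(-alpha*x)/alpha
        m = sum(qi * np.exp(-alpha * x) for qi, x in zip(q, eus))
        return -np.log(m) / alpha

    def welfare(e, alpha):
        # effort e paid today lowers tomorrow's loss probability: p_i(e) = p_i*exp(-e)
        eus = [p * np.exp(-e) * u(w - L) + (1 - p * np.exp(-e)) * u(w)
               for p in p_models]
        return u(w - e) + beta * certainty_equiv(eus, alpha)

    for alpha in (0.1, 2.0, 8.0):                # increasing ambiguity aversion
        e_star = minimize_scalar(lambda e: -welfare(e, alpha),
                                 bounds=(0.0, 3.0), method="bounded").x
        print(f"ambiguity aversion alpha={alpha:>4}: optimal effort = {e_star:.3f}")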
In the third chapter, I introduce the economic theory developed to deal with ambiguity into the context of medical decision-making. I show that, under diagnostic uncertainty, an increase in ambiguity aversion always leads a physician whose goal is to act in the best interest of his patient to choose a higher level of treatment. In the context of a dichotomous choice (treatment versus no treatment), this result implies that taking into account the attitude agents generally manifest towards ambiguity may induce a physician to change his decision by opting for treatment more often. I further show that under therapeutic uncertainty the opposite happens: an ambiguity-averse physician may choose not to treat a patient who would have been treated under ambiguity neutrality.
Doctorat en Sciences économiques et de gestion
D'Agostino, Antonello. "Understanding co-movements in macro and financial variables." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210597.
In the first chapter of this thesis, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known. In the fundamental valuation of equity, the stock price equals the discounted future stream of expected dividends. Since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect, and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, could be misleading if used as a proxy for the informative content of the stock market as a whole. Despite the widespread wisdom of considering such an index as a leading variable, only part of the assets included in the composition of the index has a leading behaviour with respect to the variables of interest. Its forecasting performance might be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure; that is, they are the sum of two parts: a common part driven by a few shocks common to all the assets, and an idiosyncratic part, which is rather asset specific. The correlation function, computed on the common part of the series, is not affected by the assets' specific dynamics and should provide information only on the series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content that such aggregates have in forecasting IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise is addressed in the following way: first, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) in 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, both for the IP growth rate and CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification; repeating the simulation exercise, I find only very small improvements in the MSFE statistics. Third, averages of the leading stock-return series, in their respective sectors, are added as additional explanatory variables in the benchmark regression. Remarkable improvements are achieved with respect to the benchmark specification, especially for the one-year forecast horizon. Significant improvements are also achieved for the shorter forecast horizons when the leading series of the technology and energy sectors are used.
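The lead-lag classification step can be illustrated with a toy cross-correlation exercise. The fragment below uses synthetic series (the chapter works with the common components of a generalized dynamic factor model rather than raw data) and flags a series as leading when its peak cross-correlation with IP occurs at a positive lag.

    # Toy lead/lag classification via the cross-correlation function (illustration).
    import numpy as np

    rng = np.random.default_rng(5)
    T = 300
    ip = rng.normal(size=T)                      # stand-in for IP growth
    # 'asset' anticipates ip by 3 periods (asset[t] ~ ip[t+3]): a leading series
    asset = np.concatenate([ip[3:], rng.normal(size=3)]) + 0.5 * rng.normal(size=T)

    def xcorr(a, b, k):
        # corr(a[t], b[t+k]): a peak at k > 0 means 'a' leads 'b' by k periods
        if k > 0:
            return np.corrcoef(a[:-k], b[k:])[0, 1]
        if k < 0:
            return np.corrcoef(a[-k:], b[:k])[0, 1]
        return np.corrcoef(a, b)[0, 1]

    lags = range(-6, 7)
    rho = [xcorr(asset, ip, k) for k in lags]
    best = max(zip(lags, rho), key=lambda t: abs(t[1]))
    print(f"peak cross-correlation at lag {best[0]} (rho = {best[1]:.2f})")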
The second chapter of this thesis disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration with the advent of the EMU has been a central issue for investors and policy makers. During these years, the number of studies on the integration and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for establishing developments in the economic and financial markets. Therefore, measuring the extent of co-movements between European stock markets has become, especially over the last years, one of the main concerns both for policy makers, who want to best shape their policy responses, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns. The literature dating back to Solnik (1974) identifies national factors as the main contributors to the co-variations among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration over the past years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased the importance of global determinants, such as industry determinants. However, somewhat puzzlingly, recent studies demonstrate that country sources are still very important and generally more important than industry ones. This chapter tries to cast some light on these conflicting results by proposing an econometric estimation strategy that is more flexible and better suited to disentangle and measure the impact of global and country factors. Results point to a declining influence of national determinants and to an increasing influence of industry ones. International influences remain the most important driving force of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario.
The third chapter presents a new stylized fact which can be helpful for discriminating among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but also extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter. Furthermore, this empirical regularity is not specific to a single method; rather, it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation of the Fed's Green Book and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random walk forecasts and the predictions of those institutions is not rejected at any horizon, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The results here imply that the informational advantage of the Fed over those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model is better than "tossing a coin" beyond the first-quarter horizon, implying that on average uninformed economic agents can anticipate future macroeconomic developments just as effectively. On the other hand, econometric models and economists' judgement are quite helpful for forecasts over the very short horizon that is relevant for conjunctural analysis. Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output. The majority of studies, however, are based on full-sample periods. The main findings of the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985. Long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by a larger volatility of inflation and output. Results also suggest that some caution should be used in evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.
The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component; in particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors based on Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors. Specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain. This allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements help forecasting in a finite sample is, however, an empirical question on which the literature has not yet reached a consensus. On the one hand, Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component and not on the dynamic restrictions), while Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The data set, borrowed from Stock and Watson (2002), consists of 146 monthly observations for the US economy and spans from 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast. Other non-core aspects of the model are also investigated: robustness with respect to the choice of the number of factors and to variable transformations. Finally, the chapter examines the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to their forecasting performance and to discuss auxiliary issues; hopefully this may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), results show that efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts, but, in contrast to Boivin and Ng (2005), it is shown that the dynamic restrictions imposed by the procedure of Forni et al. (2005) are not harmful for predictability. The main conclusion is that the two methods have a similar performance and produce highly collinear forecasts.
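The SW side of the comparison reduces to a short recipe: standardize the panel, estimate factors by static principal components, and project the target on them. A minimal sketch with simulated data follows; the factor count, horizon and data-generating process are all assumptions.

    # Minimal Stock-Watson-style static principal-components forecast (illustration).
    import numpy as np

    rng = np.random.default_rng(6)
    T, N, r, h = 200, 50, 2, 1
    F = rng.normal(size=(T, r))                  # latent factors
    Lam = rng.normal(size=(N, r))                # loadings
    X = F @ Lam.T + rng.normal(scale=0.5, size=(T, N))
    y = F[:, 0] + 0.3 * rng.normal(size=T)       # target driven by factor 1

    Z = (X - X.mean(0)) / X.std(0)               # standardize the panel
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    Fhat = Z @ Vt[:r].T                          # estimated factor space (static PCs)

    # direct h-step projection: regress y_{t+h} on current factors
    A = np.column_stack([np.ones(T - h), Fhat[:-h]])
    beta, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
    forecast = np.array([1.0, *Fhat[-1]]) @ beta
    print(f"h={h} forecast of y: {forecast:+.3f}")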
Doctorat en sciences économiques, Orientation économie
Berger, Nicholas. "Modelling structural and policy changes in the world wine market into the 21st century." Title page, contents and abstract only, 2000. http://web4.library.adelaide.edu.au/theses/09ECM/09ecmb496.pdf.
Lenza, Michèle. "Essays on monetary policy, saving and investment." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210659.
…(i) why Central Banks behave so cautiously compared to optimal theoretical benchmarks, (ii) whether monetary variables add information about future Euro Area inflation beyond a large amount of non-monetary variables, and (iii) why national saving and investment are so correlated in OECD countries in spite of the high degree of integration of international financial markets.
The process of innovation in the elaboration of economic theory and statistical analysis of the data witnessed in the last thirty years has greatly enriched the toolbox available to macroeconomists. Two aspects of this process are particularly noteworthy for addressing the issues in this thesis: the development of macroeconomic dynamic stochastic general equilibrium models (see Woodford, 1999b for an historical perspective) and of techniques that make it possible to handle large data sets in a parsimonious and flexible manner (see Reichlin, 2002 for an historical perspective).
Dynamic stochastic general equilibrium (DSGE) models provide the appropriate tools to evaluate the macroeconomic consequences of policy changes. These models, by exploiting modern intertemporal general equilibrium theory, aggregate the optimal responses of individuals such as consumers and firms in order to identify the aggregate shocks and their propagation mechanisms through the restrictions imposed by optimizing individual behavior. Such a modelling strategy, uncovering economic relationships invariant to a change in policy regimes, provides a framework to analyze the effects of economic policy that is robust to the Lucas critique (see Lucas, 1976). The early attempts to explain business cycles by starting from microeconomic behavior suggested that economic policy should play no role, since business cycles reflected the efficient response of economic agents to exogenous sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982 and, more recently, King and Rebelo, 1999). This view was challenged by several empirical studies showing that the adjustment mechanisms of variables at the heart of macroeconomic propagation mechanisms, like prices and wages, are not well represented by efficient responses of individual agents in frictionless economies (see, for example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004 and Dhyne et al. 2004). Hence, macroeconomic models currently incorporate some sources of nominal and real rigidities in the DSGE framework and allow the study of the optimal policy reactions to inefficient fluctuations stemming from frictions in macroeconomic propagation mechanisms.
Against this background, the first chapter of this thesis sets up a DSGE model in order to analyze optimal monetary policy in an economy with sectorial heterogeneity in the frequency of price adjustments. Price setters are divided into two groups: those subject to Calvo-type nominal rigidities and those able to change their prices at each period. Sectorial heterogeneity in price setting behavior is a relevant feature of real economies (see, for example, Bils and Klenow, 2004 for the US and Dhyne, 2004 for the Euro Area); neglecting it would lead to an understatement of the heterogeneity in the transmission mechanisms of economy-wide shocks. In this framework, Aoki (2001) shows that a Central Bank maximizing social welfare should stabilize only inflation in the sector where prices are sticky (hereafter, core inflation). Since complete stabilization is the only true objective of the policymaker in Aoki (2001) and, hence, is not only desirable but also implementable, the equilibrium real interest rate in the economy is equal to the natural interest rate irrespective of the degree of heterogeneity that is assumed. This would lead to the conclusion that stabilizing core inflation rather than overall inflation does not imply any observable difference in the aggressiveness of the policy behavior. While maintaining the assumption of sectorial heterogeneity in the frequency of price adjustments, this chapter adds non-negligible transaction frictions to the model economy in Aoki (2001). As a consequence, the social welfare maximizing monetary policymaker faces a trade-off among the stabilization of core inflation, the economy-wide output gap and the nominal interest rate. This feature reflects the trade-offs between conflicting objectives faced by actual policymakers. The chapter shows that the existence of this trade-off makes the aggressiveness of the monetary policy reaction dependent on the degree of sectorial heterogeneity in the economy. In particular, in the presence of sectorial heterogeneity in price adjustments, Central Banks are much more likely to behave less aggressively than in an economy where all firms face nominal rigidities. Hence, the chapter concludes that the excessive caution in the conduct of monetary policy shown by actual Central Banks (see, for example, Rudebusch and Svensson, 1999 and Sack, 2000) might not represent sub-optimal behavior but, on the contrary, might be the optimal monetary policy response in the presence of a relevant sectorial dispersion in the frequency of price adjustments.
DSGE models are proving useful also in empirical applications, and recently efforts have been made to incorporate large amounts of information in their framework (see Boivin and Giannoni, 2006). However, the typical DSGE model still relies on a handful of variables. Partly, this reflects the fact that, as the number of variables increases, the specification of a plausible set of theoretical restrictions identifying aggregate shocks and their propagation mechanisms becomes cumbersome. On the other hand, several questions in macroeconomics require the study of a large amount of variables. Among others, two examples related to the second and third chapters of this thesis can help to understand why. First, policymakers analyze a large quantity of information to assess the current and future stance of their economies and, because of model uncertainty, do not rely on a single modelling framework. Consequently, macroeconomic policy can be better understood if the econometrician relies on a large set of variables without imposing too much a priori structure on the relationships governing their evolution (see, for example, Giannone et al. 2004 and Bernanke et al. 2005). Moreover, the process of integration of goods and financial markets implies that the source of aggregate shocks is increasingly global, requiring, in turn, the study of their propagation through cross-country links (see, among others, Forni and Reichlin, 2001 and Kose et al. 2003). A priori, country-specific behavior cannot be ruled out, and many of the homogeneity assumptions that are typically embodied in open macroeconomic models for keeping them tractable are rejected by the data. Summing up, in order to deal with such issues, we need modelling frameworks able to treat a large amount of variables in a flexible manner, i.e. without pre-committing to too many a priori restrictions likely to be rejected by the data. The large extent of comovement among wide cross sections of economic variables suggests the existence of few common sources of fluctuations (Forni et al. 2000 and Stock and Watson, 2002) around which individual variables may display specific features: a shock to the world price of oil, for example, hits oil exporters and importers with different sign and intensity, and global technological advances can affect some countries before others (Giannone and Reichlin, 2004). Factor models mainly rely on the identification assumption that the dynamics of each variable can be decomposed into two orthogonal components, common and idiosyncratic, and provide a parsimonious tool allowing the analysis of the aggregate shocks and their propagation mechanisms in a large cross section of variables. In fact, while the idiosyncratic components are poorly cross-sectionally correlated, driven by shocks specific to a variable or a group of variables or by measurement error, the common components capture the bulk of cross-sectional correlation and are driven by few shocks that affect, through variable-specific factor loadings, all items in a panel of economic time series. Focusing on the latter components allows useful insights into the identity and propagation mechanisms of the aggregate shocks underlying a large amount of variables. The second and third chapters of this thesis exploit this idea.
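The variance-share logic behind this identification assumption is easy to illustrate: in a panel driven by a few common shocks plus idiosyncratic noise, the first few principal components absorb most of the variance. A toy check follows (synthetic panel; all sizes are assumptions).

    # Tiny illustration: a few common shocks dominate the covariation of a large panel.
    import numpy as np

    rng = np.random.default_rng(7)
    T, N, r = 200, 100, 3
    common = rng.normal(size=(T, r)) @ rng.normal(size=(r, N))  # few common shocks
    idio = rng.normal(scale=0.7, size=(T, N))                   # variable-specific noise
    X = common + idio

    Z = (X - X.mean(0)) / X.std(0)
    eigvals = np.linalg.svd(Z, compute_uv=False) ** 2
    share = eigvals.cumsum() / eigvals.sum()
    print("cumulative variance share of first 3 PCs:", np.round(share[:3], 2))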
The second chapter deals with the issue of whether monetary variables help to forecast inflation in the Euro Area harmonized index of consumer prices (HICP). Policymakers form their views on the economic outlook by drawing on large amounts of potentially relevant information. Indeed, the monetary policy strategy of the European Central Bank acknowledges that many variables and models can be informative about future Euro Area inflation. A peculiarity of this strategy is that it assigns to monetary information the role of providing insights into the medium to long term evolution of prices, while a wide range of alternative non-monetary variables and models are employed in order to form a view on the short term and to cross-check the inference based on monetary information. However, both the academic literature and the practice of the leading Central Banks other than the ECB do not assign such a special role to monetary variables (see Gali et al. 2004 and references therein). Hence, the debate on whether money really provides relevant information for the inflation outlook in the Euro Area is still open. Specifically, this chapter addresses the issue of whether money provides useful information about future inflation beyond what is contained in a large amount of non-monetary variables. It shows that a few aggregates of the data explain a large amount of the fluctuations in a large cross section of Euro Area variables. This makes it possible to postulate a factor structure for the large panel of variables at hand and to aggregate it into a few synthetic indexes that still retain the salient features of the large cross section. The database is split into two big blocks of variables: non-monetary (baseline) and monetary variables. Results show that baseline variables provide a satisfactory predictive performance, improving on the best univariate benchmarks in the period 1997-2005 at all horizons between 6 and 36 months. Remarkably, monetary variables provide a sensible improvement on the performance of baseline variables at horizons above two years. However, the analysis of the evolution of the forecast errors reveals that most of the gains obtained relative to univariate benchmarks of non-forecastability with baseline and monetary variables are realized in the first part of the prediction sample, up to the end of 2002, which casts doubts on the current forecastability of inflation in the Euro Area.
The third chapter is based on joint work with Domenico Giannone and gives empirical foundation to the general equilibrium explanation of the Feldstein-Horioka puzzle. Feldstein and Horioka (1980) found that domestic saving and investment in OECD countries strongly comove, contrary to the idea that high capital mobility should allow countries to seek the highest returns in global financial markets and, hence, imply a correlation between national saving and investment closer to zero than one. Moreover, capital mobility has strongly increased since the publication of Feldstein and Horioka's seminal paper, while the association between saving and investment does not seem to have decreased comparably. Through general equilibrium mechanisms, the presence of global shocks might rationalize the correlation between saving and investment. In fact, global shocks, affecting all countries, tend to create imbalances on global capital markets, causing offsetting movements in the global interest rate, and can generate the observed correlation across national saving and investment rates. However, previous empirical studies (see Ventura, 2003) that have controlled for the effects of global shocks in the context of saving-investment regressions failed to give empirical foundation to this explanation. We show that previous studies have neglected the fact that global shocks may propagate heterogeneously across countries, failing to properly isolate components of saving and investment that are affected by non-pervasive shocks. We propose a novel factor-augmented panel regression methodology that makes it possible to isolate idiosyncratic sources of fluctuations under the assumption of heterogeneous transmission mechanisms of global shocks. Remarkably, by applying our methodology, the association between domestic saving and investment decreases considerably over time, consistent with the observed increase in international capital mobility. In particular, in the last 25 years the correlation between saving and investment disappears.
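A stylized rendering of the factor-augmented idea shows how controlling for heterogeneously loaded global shocks can shrink the measured saving-investment slope. Everything below, from the single global factor to the country-by-country purging regressions, is an assumption made for illustration, not the chapter's estimator.

    # Stylized factor-augmented Feldstein-Horioka exercise (illustration only).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    T, N = 40, 20                                # years x countries
    g = rng.normal(size=T)                       # global shock
    lam_s = rng.uniform(0.5, 1.5, N)             # heterogeneous country loadings
    lam_i = rng.uniform(0.5, 1.5, N)
    sav = np.outer(g, lam_s) + rng.normal(scale=0.5, size=(T, N))
    inv = np.outer(g, lam_i) + rng.normal(scale=0.5, size=(T, N))

    def purge(panel):
        # remove the first common factor, allowing country-specific loadings
        z = panel - panel.mean(0)
        _, _, vt = np.linalg.svd(z, full_matrices=False)
        f = z @ vt[0]                            # estimated global factor
        resid = np.empty_like(z)
        for j in range(panel.shape[1]):
            resid[:, j] = z[:, j] - np.polyval(np.polyfit(f, z[:, j], 1), f)
        return resid

    naive = sm.OLS(inv.ravel(), sm.add_constant(sav.ravel())).fit().params[1]
    purged = sm.OLS(purge(inv).ravel(),
                    sm.add_constant(purge(sav).ravel())).fit().params[1]
    print(f"FH slope, raw panel:     {naive:.2f}")   # inflated by global shocks
    print(f"FH slope, factor-purged: {purged:.2f}")  # closer to zero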
Doctorat en sciences économiques, Orientation économie
Bañbura, Marta. "Essays in dynamic macroeconometrics." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210294.
The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of US GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and non-synchronous data releases (sometimes referred to as a “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.
The first chapter of the thesis, entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP”, is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how it is related to their timeliness.
The soft data include surveys and financial indicators and reflect market expectations. They are usually promptly available. In contrast, the hard indicators on real activity measure directly certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures in order to assess the role of individual or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.
The second chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge”, but can also include, e.g., mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps for the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and of short-history monthly series like the Purchasing Managers' surveys on the forecast.
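The flavour of the EM iterations can be conveyed with a deliberately simplified rank-r imputation loop; this is a schematic stand-in for the chapter's maximum likelihood procedure, which additionally handles serial correlation and parameter restrictions.

    # Schematic EM-style iteration for factors with missing observations (simplified).
    import numpy as np

    rng = np.random.default_rng(9)
    T, N, r = 150, 30, 2
    X = rng.normal(size=(T, r)) @ rng.normal(size=(r, N)) \
        + 0.5 * rng.normal(size=(T, N))
    mask = rng.uniform(size=X.shape) < 0.15      # arbitrary pattern of missing data
    Xobs = X.copy()
    Xobs[mask] = np.nan

    Z = np.where(mask, 0.0, Xobs)                # initialize missing cells at the mean (0)
    for _ in range(50):
        # E-step-like: fit the rank-r common component to the current completed data
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        fit = (U[:, :r] * s[:r]) @ Vt[:r]
        # M-step-like: re-impute the missing cells with the fit and iterate
        Z = np.where(mask, fit, Xobs)

    rmse = np.sqrt(np.mean((Z[mask] - X[mask]) ** 2))
    print(f"imputation RMSE on missing cells: {rmse:.3f}")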
The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations argued above, the size of the information set can also be relevant for structural analysis; see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but that when data are collinear one is still able to extract the relevant sample information. We apply this principle to the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and in a structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.
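The principle of tightening the prior as the system grows can be mimicked with a ridge-regression VAR, since the posterior mean under a Gaussian prior has a ridge form. In the sketch below, the VAR design, the shrinkage parameter and the rule lam = 0.5 * n are all assumptions for illustration; this is not the chapter's Minnesota-prior implementation.

    # Stylized illustration: shrink more as the VAR grows (ridge stand-in for a BVAR).
    import numpy as np

    rng = np.random.default_rng(10)
    T = 100

    def ridge_var_forecast(Y, lam):
        # One-lag VAR estimated equation by equation with ridge penalty lam.
        X, Ynext = Y[:-1], Y[1:]
        k = X.shape[1]
        B = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Ynext)
        return Y[-1] @ B                         # one-step-ahead forecast

    for n in (5, 20, 80):                        # growing cross-section
        A = 0.5 * np.eye(n) + 0.05 * rng.normal(size=(n, n)) / np.sqrt(n)
        Y = np.zeros((T, n))
        for t in range(1, T):
            Y[t] = Y[t - 1] @ A.T + rng.normal(size=n)
        lam = 0.5 * n                            # overall tightness grows with size
        fc = ridge_var_forecast(Y[:-1], lam)
        mse = np.mean((Y[-1] - fc) ** 2)
        print(f"n={n:>2}, lam={lam:>5.1f}, one-step forecast MSE: {mse:.2f}")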
The fourth chapter, entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales”, proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong; see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. This chapter investigates empirically which frequency bands, and for which variables, are the most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract the different frequency components of a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application in multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
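The decomposition step can be sketched with the PyWavelets package: transform the series, zero out all but one set of coefficients, and reconstruct to obtain the component for that frequency band. The wavelet family, the decomposition level and the synthetic series below are arbitrary choices for illustration.

    # Sketch of a wavelet band decomposition for forecasting (illustration only).
    import numpy as np
    import pywt

    rng = np.random.default_rng(11)
    t = np.arange(512)
    # synthetic "inflation": slow trend + business-cycle swing + high-frequency noise
    x = 0.01 * t + np.sin(2 * np.pi * t / 96) + 0.3 * rng.normal(size=512)

    coeffs = pywt.wavedec(x, "db4", level=4)     # multiresolution decomposition

    def band(coeffs, keep):
        # Reconstruct the component carried by one set of coefficients only.
        parts = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
        return pywt.waverec(parts, "db4")

    print("variance by band:",
          [round(float(np.var(band(coeffs, i))), 3) for i in range(len(coeffs))])
    # each band can then enter a forecasting regression as a separate predictor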
Doctorat en Sciences économiques et de gestion
Duong, Lien Thi Hong. "Australian takeover waves : a re-examination of patterns, causes and consequences." UWA Business School, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0201.
Verikios, George. "Understanding the world wool market : trade, productivity and grower incomes." University of Western Australia. School of Economics and Commerce, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0064.
Curto Millet, Fabien. "Inflation expectations, labour markets and EMU." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:9187d2eb-2f93-4a5a-a7d6-0fb6556079bb.
Hatangala, Chinthana. "Identifying the Future Directions of Australian Excess Stock Returns and Their Determinants Using Binary Models." Thesis, 2016. https://vuir.vu.edu.au/32888/.
Full text"Die kombinering van vooruitskattings : 'n toepassing op die vernaamste makro-ekonomiese veranderlikes." Thesis, 2014. http://hdl.handle.net/10210/9439.
Full textThe main purpose of this study is the combining of forecasts with special reference to major macroeconomic series of South Africa. The study is based on econometric principles and makes use of three macro-economic variables, forecasted with four forecasting techniques. The macroeconomic variables which have been selected are the consumer price index, consumer expenditure on durable and semi-durable products and real M3 money supply. Forecasts of these variables have been generated by applying the Box-Jenkins ARIMA technique, Holt's two parameter exponential smoothing, the regression approach and mUltiplicative decomposition. Subsequently, the results of each individual forecast are combined in order to determine if forecasting errors can be minimized. Traditionally, forecasting involves the identification and application of the best forecasting model. However, in the search for this unique model, it often happens that some important independent information contained in one of the other models, is discarded. To prevent this from happening, researchers have investigated the idea of combining forecasts. A number of researchers used the results from different techniques as inputs into the combination of forecasts. In spite of the differences in their conclusions, three basic principles have been identified in the combination of forecasts, namely: i The considered forecasts should represent the widest range of forecasting techniques possible. Inferior forecasts should be identified. Predictable errors should be modelled and incorporated into a new forecast series. Finally, a method of combining the selected forecasts needs to be chosen. The best way of selecting a m ethod is probably by experimenting to find the best fit over the historical data. Having generated individual forecasts, these are combined by considering the specifications of the three combination methods. The first combination method is the combination of forecasts via weighted averages. The use of weighted averages to combine forecasts allows consideration of the relative accuracy of the individual methods and of the covariances of forecast errors among the methods. Secondly, the combination of exponential smoothing and Box-Jenkins is considered. Past errors of each of the original forecasts are used to determine the weights to attach to the two original forecasts in forming the combined forecasts. Finally, the regression approach is used to combine individual forecasts. Granger en Ramanathan (1984) have shown that weights can be obtained by regressing actual values of the variables of interest on the individual forecasts, without including a constant and with the restriction that weights add up to one. The performance of combination relative to the individual forecasts have been tested, given that the efficiency criterion is the minimization of the mean square errors. The results of both the individual and the combined forecasting methods are acceptable. Although some of the methods prove to be more accurate than others, the conclusion can be made that reliable forecasts are generated by individual and combined forecasting methods. It is up to the researcher to decide whether he wants to use an individual or combined method since the difference, if any, in the root mean square percentage errors (RMSPE) are insignificantly small.
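The Granger and Ramanathan (1984) scheme described above amounts to a restricted least-squares problem. A minimal sketch with synthetic forecasts follows; the sum-to-one restriction is imposed by reparameterization, and the three "forecasts" are simulated stand-ins for the study's ARIMA, smoothing and regression outputs.

    # Granger-Ramanathan forecast combination with weights summing to one (sketch).
    import numpy as np

    rng = np.random.default_rng(12)
    T = 120
    y = rng.normal(size=T).cumsum()              # series to be forecast
    f1 = y + rng.normal(scale=1.0, size=T)       # e.g. ARIMA forecasts
    f2 = y + rng.normal(scale=1.5, size=T)       # e.g. exponential smoothing
    f3 = y + rng.normal(scale=2.0, size=T)       # e.g. regression model

    # Impose sum-to-one by rewriting: y - f3 = w1*(f1 - f3) + w2*(f2 - f3) + e
    X = np.column_stack([f1 - f3, f2 - f3])
    w12, *_ = np.linalg.lstsq(X, y - f3, rcond=None)
    w = np.array([*w12, 1 - w12.sum()])
    print("combination weights:", np.round(w, 3), "sum =", round(float(w.sum()), 3))

    combo = np.column_stack([f1, f2, f3]) @ w
    for name, f in [("f1", f1), ("f2", f2), ("f3", f3), ("combined", combo)]:
        print(f"{name}: RMSE = {np.sqrt(np.mean((y - f) ** 2)):.3f}")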
"Macroeconomic forecasting: a comparison between artificial neural networks and econometric models." Thesis, 2008. http://hdl.handle.net/10210/633.
Full text
Prof. D.J. Marais
Siriprapanukul, Pawin. "Essays in forecasting macroeconomic indicators." Phd thesis, 2009. http://hdl.handle.net/1885/150183.
Full text
Morris, Alan Geoffrey. "An economic analysis of industrial disputation in Australia." Thesis, 1996. https://vuir.vu.edu.au/15259/.
Full textAkmal, Muhammad. "The structure of energy demand in Australia : an econometric investigation with some economic applications." Phd thesis, 2000. http://hdl.handle.net/1885/144955.
Full text"The combination of high and low frequency data in macroeconometric forecasts: the case of Hong Kong." 1999. http://library.cuhk.edu.hk/record=b5889921.
Full text
Thesis (M.Phil.)--Chinese University of Hong Kong, 1999.
Includes bibliographical references (leaves 64-65).
Abstracts in English and Chinese.
Contents: Introduction; The Literature Review; Methodology (Forecast Pooling Technique; Modified Technique); Model Specifications (The Monthly Models; The Quarterly Model; Data Description); The Combined Forecast (Pooling Forecast Technique in the Case of Hong Kong; The Forecast Results); Conclusion; Tables; Appendix; Bibliography.
"Three essays on financial econometrics." 2013. http://library.cuhk.edu.hk/record=b5549821.
Full text
This thesis consists of three essays on financial econometrics. The first two essays concern multivariate density forecast evaluation; the third develops a nonparametric Bayesian change-point VAR model. The density forecast evaluations are based on checking the uniformity and independence conditions of the probability integral transformation (PIT) of the observed series. In the first essay, we propose a location-adjusted version of Clements and Smith (2002) that corrects an asymmetry problem and increases testing power. In the second essay, we develop a data-driven smooth test for multivariate density forecast evaluation and present evidence on its finite-sample performance using Monte Carlo simulations. Prior to our study, most work was limited to bivariate models, as higher dimensions are difficult to evaluate with existing methods; we propose an efficient dimension-reduction approach that reduces a multivariate density evaluation to a univariate one. Various Monte Carlo simulations and two applications to financial asset returns show that our test performs well. The last essay proposes a nonparametric extension to the existing Bayesian change-point model in a multivariate setting. The earlier change-point model of Chib (1998) requires the number of change points to be specified a priori, so a posterior model comparison is needed across different change-point models. We introduce a stick-breaking prior to the change-point process, which endogenises the number of change points into the estimation procedure: the number of change points is determined simultaneously with the other unknown parameters, making the model robust to specification. In a Monte Carlo simulation of a bivariate VAR(2) process subject to four structural breaks, the model estimates the break locations with high accuracy, and the posterior estimates of the 65 parameters are close to the true values. We apply the model to various hedge fund return processes, and the detected change points coincide with market crashes.
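The PIT idea underlying the first two essays can be illustrated in a few lines. The sketch below evaluates a deliberately misspecified standard-normal density forecast for a fat-tailed series by testing the PITs for uniformity and lag-1 independence; it is a univariate illustration on assumed data, not the thesis's multivariate dimension-reduction procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.standard_t(df=5, size=500)   # realised series, deliberately fat-tailed

# Density forecast under evaluation: standard normal.  If it were correct,
# the PITs z_t = F(y_t) would be i.i.d. uniform on (0, 1).
z = stats.norm.cdf(y)

# Uniformity condition: Kolmogorov-Smirnov test against U(0, 1).
ks = stats.kstest(z, "uniform")
print(f"KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.4f}")

# Independence condition: lag-1 autocorrelation of the PITs (should be near 0).
r1 = np.corrcoef(z[:-1], z[1:])[0, 1]
print(f"lag-1 autocorrelation of the PITs: {r1:.3f}")
```

Because the Student's t(5) series has heavier tails and a larger variance than the forecast density, the PITs are visibly non-uniform and the KS test rejects, which is the failure mode these evaluation methods are designed to detect.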
Detailed summary in vernacular field only.
Ko, Iat Meng.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 176-194).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts also in Chinese.
Contents: Introduction; Multivariate Density Forecast Evaluation: A Modified Approach (Evaluating Density Forecasts; Monte Carlo Simulations with bivariate normal, Ramberg, and Student's t and uniform distributions; Empirical Applications to AR and GARCH models); Multivariate Density Forecast Evaluation: Smooth Test Approach (Exponential Transformation for Multi-dimension Reduction; The Smooth Test; The Data-Driven Smooth Test Statistic, including the selection of K and the choice of p for the Portmanteau-based test; Monte Carlo Simulations with multivariate normal and Student's t distributions, a VAR(1) model and a multivariate GARCH(1,1) model; Density Forecast Evaluation of the DCC-GARCH Model for Spot-Future Returns and International Equity Markets); Stick-Breaking Bayesian Change-Point VAR Model with Stochastic Search Variable Selection (The Bayesian Change-Point VAR Model; The Stick-Breaking Process Prior; Stochastic Search Variable Selection with priors on φ_j = vec(Φ_j) and Σ_j; The Gibbs Sampler and a Monte Carlo Experiment; Application to Daily Hedge Fund Returns, covering composite and single-strategy indices); Appendix A: Derivations and Proofs (distribution of (Z₁ − EZ₁)(Z₂ − EZ₂); limiting distribution of the smooth test statistic without parameter estimation uncertainty, θ = θ₀; proofs of Theorems 2-5); Bibliography.
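The stick-breaking construction used in the change-point chapter of the thesis above can be sketched directly. The following draws a truncated set of regime weights w_k = v_k ∏_{j<k}(1 − v_j) with v_k ~ Beta(1, α); the truncation level and the concentration parameter α are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Draw regime weights w_k = v_k * prod_{j<k}(1 - v_j), v_k ~ Beta(1, alpha)."""
    v = rng.beta(1.0, alpha, size=truncation)
    v[-1] = 1.0                      # close the stick so the weights sum to one
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(2)
w = stick_breaking_weights(alpha=2.0, truncation=10, rng=rng)
print(np.round(w, 3), "sum =", w.sum())
```

Because the weights decay quickly, only a handful of regimes receive appreciable mass, which is how the number of change points can be inferred within the sampler rather than fixed a priori.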
Smith, Jeremy Paul Duncan. "Aspects of macroeconometric time series modelling." Phd thesis, 1991. http://hdl.handle.net/1885/121824.
Full text
Quang, Doan Hong. "Essays on factor-market distortions and economic growth." Phd thesis, 2000. http://hdl.handle.net/1885/147706.
Full textGiesecke, James. "FEDERAL-F : a multi-regional multi-sectoral dynamic model of the Australian economy / by James A.D. Giesecke." 2000. http://hdl.handle.net/2440/19810.
Full text
2 v. (xviii, 661 p.) : ill. ([1] col.) ; 30 cm.
Title page, contents and abstract only. The complete thesis in print form is available from the University Library.
Thesis (Ph.D.)--University of Adelaide, School of Economics, 2001
Mynbaev, Kairat T. "Two essays in microeconomic theory and econometrics." Thesis, 1995. http://hdl.handle.net/1957/35191.
Full text
Graduation date: 1995
Nyasha, Sheilla. "Financial development and economic growth : new evidence from six countries." Thesis, 2014. http://hdl.handle.net/10500/18576.
Full text
Economics
DCOM (Economics)
Jiang, Qiang. "Three essays on water modelling and management in the Murray-Darling Basin, Australia." Phd thesis, 2011. http://hdl.handle.net/1885/151262.
Full text